South by Southwest Interactive (SxSWi) is the premier event for web, social and interactive media, yet it is just as dynamic for networking. I have attended since 2008, and have found the off-site activities to be just as valuable as the panels themselves, if at times not more so. This year’s SxSWi stood apart only in its scale – such as the estimated 100% increase in attendance over 2010. I make a point of attending SxSWi to keep abreast of the latest developments as well as to [re]connect with the luminaries of the industry. My impressions from SxSWi were indelible – here are some of the more relevant bits:

This is just the beginning: If you think we are at the height of the maturity curve for digital and social media, guess again. It is clear we are still in its infancy. A quick tour around the showroom floor brought that reality to the fore. One of the more intriguing sessions I attended featured Reid Hoffman, of PayPal fame, now Executive Chairman of LinkedIn, who described the future of the internet as an open platform where data *is* the platform. In Web 1.0 we took the first steps with online “presence” – largely anonymous users interacting with websites, browsing for information. Web 2.0 gave birth to online personas, with our identities and relationships transferred online, along with the concept of targeted search rather than browsing. This is the generation of mass data, with a revolution of empowered online users.

Web 3.0 is the next stage in this evolution, and it delivers innovation around data. We presently experience considerable data exhaust and information overload. Innovation is derived from making sense of this data and expressing its value to the user. Today there are early instances of 3.0 – Mint.com pulls in your financial information and provides a map of expenses. Google has a data mash-up product called Refine, a tool for cleaning up and transforming unorganised data in order to extend it to other web services. LinkedIn is working to use semantics to create a skills topology, mashed up with Wikipedia for definitions, which will assist job seekers in navigating the professional world and identifying the skills needed for certain roles and/or industries. Reid noted that, in constructing the future of the internet, it will be critical to think about what we are doing with data – how do we preserve trust in data, and recognise that not all data is created equal?
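To make “cleaning up and transforming unorganised data” concrete, here is a minimal Python sketch of the kind of normalisation a tool such as Refine automates. The records and helper are hypothetical, and this is not Refine’s own API – only an illustration of the idea.

```python
# Hypothetical illustration: normalise inconsistently entered company names so
# the records can be joined against another web service. Not Google Refine's
# API - just the kind of clean-up such tools automate.
import re

raw_records = [
    {"company": "  Acme Corp."},
    {"company": "ACME Corporation"},
    {"company": "acme corp"},
]

def normalise(name: str) -> str:
    """Lower-case, trim whitespace, and collapse common suffix variants."""
    name = re.sub(r"\s+", " ", name.strip().lower())
    return re.sub(r"\b(corp\.?|corporation)$", "corp", name)

for record in raw_records:
    record["company_clean"] = normalise(record["company"])

print({r["company_clean"] for r in raw_records})  # {'acme corp'}
```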

Internet, television, mobile get cosy: It is likely not news to you that the internet, television and mobility are converging – the only way for TV to grow was to escape the living room. What is the impact of this? I was amazed at the number of attendees in a packed ballroom who indicated that they have cancelled their cable or satellite provider and are acquiring TV directly from the internet. Also, significantly more users learn about new TV programmes from their social networks – principally Facebook and Twitter – than from traditional TV and media advertisements. Intel’s futurist, Brian David Johnson, notes that we are quickly moving towards more devices than people on the planet. As such, TV is no longer the TV of prior generations, but a display of choice to connect people socially, and with film and broadcast media – it is becoming a mainstream cultural connection. Thus, media now transcends single devices. We have just passed the tipping point with technology (tablets, smart phones, IPTV and set-top boxes, notebooks/netbooks) that can be connected together and have content/storytelling weave seamlessly across each. This impacts not only hardware product sets, software, applications and services, but also marketing efforts to connect with customers, along with the ways we stay connected in our personal lives.

Good content creation becomes king: Barry Diller, Chairman of IAC/InterActiveCorp, discussed the internet as a mass of data, and the shift towards journalistic content. Diller notes that the old-media journalistic process no longer has a place in the present age – access to information is real-time. The name of the game is to produce relevant and engaging content in real-time. He argues that great content will produce winners and losers on the web, and that producing content that can be consumed on any device in an open platform is the “future”.

CNN would say the future is already here, showcasing their new digital playground at SxSWi, with on-the-go programming options. As such, the role of the journalist will evolve from print to real-time content and storytelling that is ubiquitous. Diller states, “a great editor is a great editor no matter the medium.” Genevieve Bell, Intel Fellow, noted that whilst the internet is not yet pervasive, content is. For example, in India many phones are not connected to the internet, so Indians bring their phones to a shop by a bus or train stop to download content for their morning commute. Likewise, we will soon see journalists sought after to fill digital marketing, media and interactive content jobs.

Science Fiction is not fictitious: An interesting panel discussion yielded a compelling prediction – social capital will supplant brand equity as the measure of brand success.  There are 3 forces enabling this shift:
(1) Rapid rate of change – By 2025 compute power will be able to simulate human brain power.
(2) Overwhelming complexity of data, number of apps and social networking sites
(3) Technology that will evolve beyond the present, primitive state.
Social media voices are something with which marketers can no longer compete – a startling perspective, given the rapidly disappearing number of inactive adult internet users in the US (19%). Mike Spataro, VP of Enterprise Client Strategy for Visible Technologies, claims that social influence is the killer app of social capital.

What, then, is a person with social influence? Someone who has a greater-than-average impact through word of mouth. We may not be able to compete with social capital, but we can leverage its power. Currently there are no standards defined for measuring the influence of online customers, yet we can manually look at the relationships of others in the community. The behavioural mapping of activity, relevance, reach and pull equates to influence.
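As a rough illustration of how activity, relevance, reach and pull might be mapped to a single influence figure, here is a small Python sketch. The weights and field names are entirely hypothetical – this is not Visible Technologies’ model or any defined standard, which is precisely the gap noted above.

```python
# Hypothetical influence score: a weighted blend of activity, relevance, reach
# and pull, each expected as a normalised value between 0 and 1.
WEIGHTS = {"activity": 0.2, "relevance": 0.3, "reach": 0.3, "pull": 0.2}  # illustrative only

def influence(signals: dict) -> float:
    """Combine normalised behavioural signals into a 0-1 influence score."""
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return sum(weight * clamp(signals.get(name, 0.0)) for name, weight in WEIGHTS.items())

blogger = {"activity": 0.9, "relevance": 0.7, "reach": 0.4, "pull": 0.8}
print(round(influence(blogger), 2))  # 0.67
```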

Three to five years out is potentially the Holy Grail for marketers – a/k/a predictive analytics of social influence, which means behavioural mapping across various communities and channels to predict certain behaviours based upon your online persona. Dell currently measures social capital across domains both on and off their .com site, based on the concept that marketing requires adaptive, customer-centric strategies. They create a social presence map with credibility and reputation overlays, have listening software that feeds product ratings and reviews directly back to the product teams, and look at their content reach – which currently exceeds that of the top 12 publications. They also have a command and control centre that responds to and engages with customers.

What happens in North America doesn’t happen everywhere: I attended a session on “Going Big in Japan”, where I enjoyed a well-defined overview of how starkly different online behaviour is in various countries and markets. One size does *not* fit all. For example, Facebook is not big in Japan. Why? It requires one to fully identify oneself, and Japanese online users are less comfortable associating their content with their full name. Twitter, on the other hand, has taken off astronomically, and will likely be more pervasive in Japan than in the US – it only requires a handle to participate. Notable is how more than 90% of Japanese blogs are anonymous. Other notable facts include how young girls are principally connected via mobile devices, not notebooks or netbooks, and that Japan still has the top circulation of newspapers and magazines. The lesson for all is to understand geo- and country-specific online behaviour, not to assume a translation of English content will work across the globe, and that one’s brand may need a cute mascot character.

My principal takeaways are:

  • The future of the internet is the ability to make sense of and utilise the mass of data
  • Compelling, real-time content creation is key to online success
  • One size of digital experience doesn’t fit all
  • The digital experience is now device- and display-agnostic, and ubiquitous
  • Social capital will supplant brand equity as the measure of success
  • Predictive analytics is the holy grail for marketers


[Charlton]

Jevons’ Paradox is the proposition that technological progress which increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource. When that increase is not accompanied by anything more than organic growth in resource management, it can result in a resource crunch. Simon Wardley of CSC’s Leading Edge Forum has often pointed out that, whilst cheaper operations costs and reduced capital spending should signal a reduction in TCO, the truth is quite the opposite. This argument is rather interesting on its own, but I see the greater issue for Cloud to be resource starvation. There are several parallels which show that, despite observed reductions in TCO, a resource crunch has resulted in both increased costs and reduced service.
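A toy calculation makes the paradox concrete. The figures below are purely illustrative – the point is only that when efficiency doubles and demand for the now-cheaper resource is sufficiently elastic, total consumption rises rather than falls.

```python
# Toy model of Jevons' Paradox with made-up numbers: doubling efficiency halves
# the effective price of useful work; with elastic demand, total resource
# consumption goes up, not down.

def resource_consumed(efficiency: float, base_demand: float, elasticity: float) -> float:
    """Resource units consumed = demand for useful work divided by efficiency."""
    price = 1.0 / efficiency                       # effective price per unit of work
    demand = base_demand * price ** (-elasticity)  # constant-elasticity demand curve
    return demand / efficiency

before = resource_consumed(efficiency=1.0, base_demand=100.0, elasticity=1.5)
after = resource_consumed(efficiency=2.0, base_demand=100.0, elasticity=1.5)
print(round(before, 1), round(after, 1))  # 100.0 141.4 - more efficient, yet more consumed
```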

We have witnessed this effect in the mobile space, where mobile bandwidth has been consumed at an alarming rate with the emergence of data services. AT&T’s iPhone service in the US over the last few years has only highlighted this, with the network brought to its knees in Manhattan, San Francisco, Los Angeles, Chicago and other major metropolitan centres. Advances in the utilisation of mobile network resources saw those resources nearly exhausted in less than a decade. This has been accelerated and exacerbated by the common practice of planning infrastructure after the fact. Other regions, such as South Korea, Japan, and Europe, have fared better, yet at the price of resourcing and implementing broad infrastructure projects.

Will the Public Cloud be the next usage model that only increases resource consumption? The present rate of adoption of Amazon Web Services by small businesses and social network games, for example, suggests so. These are early adopters, yet the benefits of Cloud, together with these trends, are significant enough to strongly indicate nothing but an increase in consumption.

Despite all the buzz to the contrary, compute and storage resources are finite, and network resources continue to be the principal bottleneck. Presently, dominant Public Cloud campaigns gloss over, or entirely ignore, this finite nature. The more compute, storage and network resources transform into utilities, the greater the exponential growth in consumption. Virtualisation only accelerates the consumption rate, given the increase in server density. Without understanding this, increased Cloud adoption will be pursued with poor management of resources and, as a result, costs will increase to a point where they are uneconomical.

Resource management will grow in importance as inter-connectivity and cloud consumption increase. Although billing and metering have developed to a certain degree within Cloud service providers, they have not been approached in a manner that would truly manage the use of resources. Much has been developed to support this at the virtualisation layer, yet not at the service layer. Resource management aims to address this gap by enabling actions to be taken on workloads based upon service level agreements and policies, resource profiles, and how those profiles map to workload execution within resource pools and individual nodes.
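To show what acting on workloads against SLAs, policies and resource profiles could look like in practice, here is a minimal sketch of policy-based placement. The class names, SLA tiers and headroom policy are assumptions made for illustration, not any provider’s actual resource manager.

```python
# Minimal sketch of policy-based workload placement: each workload carries a
# resource profile and an SLA tier; the manager admits it only onto a node in
# the pool with enough headroom for that tier. Names and policy are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu_free: float   # cores available
    mem_free: float   # GiB available

@dataclass
class Workload:
    name: str
    cpu: float        # cores required by its resource profile
    mem: float        # GiB required
    sla_tier: str     # e.g. "gold" workloads get reserved headroom

HEADROOM = {"gold": 0.25, "silver": 0.10, "bronze": 0.0}  # assumed policy

def place(workload: Workload, pool: List[Node]) -> Optional[Node]:
    """Return the first node satisfying the profile plus the SLA headroom policy."""
    factor = 1.0 + HEADROOM[workload.sla_tier]
    for node in pool:
        if node.cpu_free >= workload.cpu * factor and node.mem_free >= workload.mem * factor:
            node.cpu_free -= workload.cpu
            node.mem_free -= workload.mem
            return node
    return None  # no capacity: defer, queue, or burst elsewhere

pool = [Node("node-a", 8, 32), Node("node-b", 16, 64)]
print(place(Workload("billing-batch", cpu=6, mem=24, sla_tier="gold"), pool))
```

Metering tells you what was consumed after the fact; a placement policy along these lines is the piece that constrains consumption against capacity before it is exhausted.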

This impacts both TCO and ROI. TCO can be held at a consistent, effective level by managing resource availability and constraints to optimise the cost-versus-service ratio. ROI can be targeted and achieved through assurance of resource availability. Without resource management, costs can easily increase beyond the point of efficacy, and pathfinding activities can be suffocated as resources are heavily consumed and capacity is exhausted.

Whilst a high-volume, low-margin strategy suits a number of business models, those looking to the Cloud to execute their mission-critical or highly-regulated processing need more than an optimistic RAS model to assure their operations. Tenant visibility into the Cloud (more on this in my next post) combined with fine-grained resource management will make this actionable.


[Charlton]

I was recently interviewed by Robert Duffner of MSDN on Cloud Computing for the Windows Azure team – you can read it in full at http://bit.ly/9jMHtC #cloudcomputing

[Charlton]

As a Technology Strategist, my charter is to drive bold technologies to transform my organisation, deliver world-class technical innovation, know the external research community, know my organisation, and provide technical leadership. I thus spend a significant amount of my time looking at how to achieve that transformation through “innovation”. With a smaller portion of my time, I present at a number of conferences on these strategies, their associated technologies, and how they can be applied to solve real-world problems.

One recent presentation was given at CloudCamp Hamburg. One of the reasons I chose to do so was to reach a new audience. The sessions at the event were quite good, focussing on developments in the business models, usage models and technologies of the Cloud. Yet I read posts such as this one and wonder, based on my experience in Europe and the Americas, whether a) there is a problem with the state of innovation in Europe, or rather b) the taxonomy of innovation exemplified in the post is a problem in and of itself.

Anyone who has spent any time with me will know that I am not one for fussing about with taxonomies. I feel that, essentially, these are a tertiary consideration, to be addressed after what I consider to be the real work of taking a concept from idea to reality. My personal opinion is that taxonomies are pedantry, used to constrain, limit and control the process of innovation, and are too often used as a device to be heard, rather than to solve any problems of significance. Yet when I hear or read the word “innovation” or the phrase “big picture” used to focus only on business models, ignoring usage and technology, I think there’s something amiss. If we limit “innovation” to *exclude* usage and technology advances, we are truly advancing style over substance.

Most of the widely recognised innovations in history which have advanced the standard of living for many in modern civilisation arose from usage or technology innovation addressing very specific, detailed problems. The innovation may have encompassed a grand vision, but that vision arose from connecting the dots, which started with solving a specific problem. To make the blanket claim that innovation is hindered by being too “busy nitpicking on time consuming details” makes as much sense as reality television. An artist worth the enduring attention of the public will need to have mastered the fundamentals of their craft; a reality ‘star’ will exit the spotlight in relatively short order (although not fast enough for some of us), since their recognition was purely situational and they had not addressed any of the ‘time consuming details’ necessary to sustain a career in entertainment.

Based on my assumptions, the iPhone is 10% innovation and 90% evolution. The device has truly been innovative in its enablement of Augmented Reality – Apple improved a number of existing technologies, such as the touch screen, application integration and metadata management, to provide a usage model that was a step beyond expectations for a mobile device. The innovation lay in how the individual technologies were deployed to provide a better (and now widely adopted/mimicked) usage model; the whole is greater than the sum of the parts. Yet even the improvements in the parts, not to mention the new usage model, required a foundation rooted in time consuming details. Ask any Apple engineer working on any of the technologies that were developed or revised, and they can attest to how much effort – deep in the bowels of hardware, software and/or user experience engineering – was required of each and every one of them to deliver.

The innovation of Rich Internet Applications (RIA), and subsequently the Rich Services Cloud – something with which I have been intimately involved – arose from similar efforts. The underlying engines supporting this, such as Flash, had already been widely deployed for nearly a decade, but it was the understanding of the limitations of Web 1.0, at the level of nitpicking on time consuming details, married with the vision for a rich yet agile development and user experience, that drove the development of platforms such as Flex and Apollo/AIR, Silverlight, and now HTML5/Chrome. Each of these implementations of RIA innovation recognised that the real-world, devil-in-the-detail problems of Web 1.0’s fully remoted architecture were non-performant, did not scale, and limited the web experience to very simple, highly constrained models.

Innovation is not in the vision itself, nor in the nano-level details, but in recognising the vision that can be realised by creating or enhancing the bits at the level of time consuming detail, and successfully mapping the two together. Businesses across Europe may not have adopted North American-style Public Cloud, with its Lowest Common Denominator (LCD) user experience and generally coarse Service Level Agreements (SLAs), but that does not reflect a difficulty with innovation. Rather, it reflects that organisations across Europe have chosen to focus their energies on solving some very important problems, after already diving bravely and wholly into related areas such as the Open Business Initiative, and learning important lessons from such moves. Europe has proven itself truly innovative in business models, usage models, and technology, as exemplified in its world-leading or -challenging development in areas of mobile, automation, and web, to name a few. European organisations have not been shy to embark on bold endeavours to transform their business and usage models by developing, adapting or adopting innovative technologies.

Likewise, I’ve encountered plenty of innovation in North America arising from the unabashed confrontation of the time-consuming-detail problems presented by the challenges of Cloud Computing. There have been plenty of clever applications of Cloud, but no matter how cool they may be, the real innovation, by my definition, is in the usage models and technologies that have enabled them, together with a number of not-so-cool apps. Throwing something on the Cloud with a slick interface and a minimal SLA isn’t innovation. Developing the foundations for HTML5 and Native Client libraries, developing MapReduce and Hadoop, developing image recognition algorithms that scale by shifting cycles across compute resources based on capabilities and utilisation – these, mapped to new business and usage models, are innovation.

[Charlton]

This week executives at UK chip designer ARM Holdings came to Silicon Valley to unveil their latest chip design: the ARM Cortex A15 MPCore, code-named “Eagle,” which was received with much excitement by the tech press.

Om Malik of GigaOm wrote, “I distinctly remember the day when Intel Corp. launched the Pentium processor. It was the day desktop computing changed for me and for a lot of others. It was also the day when Intel started to put a gap between itself and all its wannabe processor rivals. I bring up that day because I feel that we are about to see a similar shift in the world of mobile, thanks to ARM Holdings.”

Tom’s Hardware, which called it a “super chip,” said the Cortex A15 “promises to deliver a 5x performance improvement over today’s advanced smartphone processors, within a comparable energy footprint. The Cortex-A15 processor has the projected headroom to run at up to 2.5GHz and is targeted at manufacture in 32nm, 28nm, with a roadmap extending to 20nm.”

The New York Times reported that ARM executives said consumers “could expect to see smartphones in 2012 that have about the same performance as a current business laptop. The fastest phones at that time will have four 2.5-gigahertz processor cores and be able to handle things as complex as running virtualization software.

“People could theoretically use the virtualization software to give their phones different personalities, like a work version with added security and a personal one with entertainment applications.”

Image: ARM’s upcoming Cortex-A15 MPCore core.

While the NYT gave a 2012 estimate for products with the A15, other sites said it would be more like 2013 before they hit the market. Delays wouldn’t be surprising—ARM’s current top-of-the-line Cortex A9 processor still isn’t in any widely available smartphones.

Nevertheless, many writers are already foreseeing a titanic struggle emerging between ARM chipmakers and Intel.

“ARM’s A15 processor could be described as a cloud on Intel’s cloud computing horizon,” wrote Peter Clarke in EETimes.

ARM executives have not been shy about saying the new core could be used in servers, The Wall Street Journal said, “…which would help propel ARM into Intel’s stronghold in corporate data centers.” The WSJ noted that execs from Dell and Hewlett-Packard, “two big Intel customers” attended the ARM event.

The one pervasive theme from this year’s VMworld has been mobility. Although the official positioning has focussed on server-centric models, the media and third party vendors have focussed on how workloads, compute and quality of experience can be moved across devices, using local resources to the fullest. Mobility is the emerging vector in cloud computing and virtualisation, and the reasons for this are clear.

Imagine you’re a new employee and it’s your first day on the job. You expect, from past experience at other workplaces, that you’ll get an outdated PC, a desk phone, and maybe some training. You also expect that you may be staring at your cubicle walls for some hours before IT gets around to you.

Instead, let’s say you get a visit from your department admin. She hands you a voucher for, say, $1,500, and a thumb drive. The admin explains that the voucher is for purchasing the computing device of your choice, and that the thumb drive authenticates both you *and* your device to the network, where your OS and applications are waiting to be deployed to whatever device you may use.

Welcome to the mobile workplace.  If you think this scenario is too far-fetched, it’s time to rethink. Technically, it’s already here.

Traditionally, mobility has meant carrying around a standardized, company-issued PC weighing 8 to 10 pounds, fully loaded with an operating system, a stack of standard applications – many of which you won’t use – and all your data. The more you carry it around, the heavier it seems, and it’s a never-ending chore for you and IT to keep your hardware and software updated, and your data secure.

True mobility, however, provides device independence combined with device targeting. This enables applications and data to be deployed, with isolation, to a variety of devices, and grants you access to them from wherever you are. For example, you could choose any device for your corporate work – your home notebook, smartphone, mobile Internet device (MID), etc. Virtualisation technologies would employ a container, provided by your corporate IT, that can run on any hardware that supports the virtualisation platform. IT manages the container, deploying IT applications and data to it on the device. The environment is protected because it runs within a virtual environment employing memory and I/O isolation. Employees could run the IT environment alongside personal applications and data on the same device hardware, keeping both environments safely protected from each other.
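As a thought experiment on how an IT-managed container might be described and admitted onto an employee-chosen device, here is a small sketch. The manifest fields and capability checks are hypothetical – not the format of any vendor mentioned here – but they capture the isolation requirement described above.

```python
# Hypothetical workspace-container manifest plus admission check: the container
# is defined centrally by IT, and is only deployed to a device whose platform
# reports the virtualisation and isolation features it requires.
corporate_workspace = {
    "os_image": "corp-base-image",
    "apps": ["email", "crm", "vpn"],
    "requires": {"hw_virtualisation": True, "memory_isolation": True, "io_isolation": True},
}

def can_deploy(device_capabilities: dict, container: dict) -> bool:
    """True only if the device offers every isolation feature the container requires."""
    return all(device_capabilities.get(feature, False)
               for feature, required in container["requires"].items() if required)

employee_notebook = {"hw_virtualisation": True, "memory_isolation": True, "io_isolation": True}
print(can_deploy(employee_notebook, corporate_workspace))  # True: IT can push the container
```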

Mobility also requires the ability to move workloads from device to device based on immediate needs; maintaining task continuity is critical to user productivity. Employees performing tasks on one device in transit often need to, or can benefit from, moving to another device at home or the office. The continuity of those tasks requires that the workload be moved, or synchronised, to the other device when that device is accessed. Users can then seamlessly switch from device to device, keeping their tasks intact.

Two major emerging technology trends behind the concept are workspace virtualisation and the consumerisation of IT.

Traditionally, technology became available to enterprises first and spread to consumers later. But a reverse trend – the consumerisation of IT – is emerging rapidly. A variety of technologies are being adopted by consumers first, who then informally introduce them into the enterprise. As the consumer market explodes with new products – lighter and smaller notebooks, netbooks, and feature-rich MIDs – employees want platforms they’re familiar with from personal use. And they’re using personal devices for work. The reality for IT shops everywhere is that consumer devices are crossing over into the workplace in increasing numbers. This unavoidable osmosis poses a real challenge for IT departments to manage and secure the maverick devices.

In the traditional computing platform model without any virtualisation, the IT applications and data become entwined in the registry, file system, and even the hardware itself. This model is complex and expensive to manage. One of IT’s largest costs is device hardware support. For employees, the model is limiting too, with little choice of client devices and an inability to run IT apps and data seamlessly alongside personal applications and data.

With workspace virtualisation, the entire software environment – applications, data, and IT services – is moved into a low-overhead virtual container and “decoupled” from the underlying subsystems. It can be managed and updated centrally and distributed to a variety of hardware devices. IT could configure different containers and distribute them to different hardware platforms: a standard Windows* container, a standard Linux* container, a purely Open Source container, a Software Developer’s container, etc.
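To illustrate the idea of different centrally defined containers for different audiences, here is a hypothetical set of workspace profiles and a helper that stamps out a per-user workspace from one of them; the profile names and contents are invented for illustration only.

```python
# Hypothetical centrally managed workspace profiles, each distributable to any
# supported hardware platform. Names and contents are illustrative only.
PROFILES = {
    "standard-windows": {"os": "Windows", "apps": ["office", "email", "vpn"]},
    "standard-linux":   {"os": "Linux",   "apps": ["office", "email", "vpn"]},
    "developer":        {"os": "Linux",   "apps": ["ide", "compilers", "vpn"]},
}

def build_workspace(profile_name: str, user: str) -> dict:
    """Stamp out a per-user workspace container from a centrally defined profile."""
    profile = PROFILES[profile_name]
    return {"user": user, "managed_by": "corporate-it", **profile}

print(build_workspace("developer", "new.hire@example.com"))
```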

The result of implementing consumerisation and virtualisation is a flexible, platform-independent approach that can increase employees’ choices and productivity and allow IT to focus resources on providing and supporting IT services, not on managing the platform.

Combining multiple emerging technologies, a number of vendors have used standard corporate and consumer notebooks and desktops with Intel® Core™ i5/i7 processors running Windows or Linux. At VMworld, RingCube demonstrated workspace virtualisation technologies and policy management software using Intel® Virtualization Technology, showing that, alongside all traditional device functionality, workspace virtualisation could be used to access IT applications over the corporate network from home and office locations – clearly illustrating the feasibility of mobility. For more information on the joint RingCube/Intel efforts, see the white paper “Hardware Assisted Workspace Virtualization”.

Multiple technology vendors have defined both strategies and products with the potential to transform the way IT delivers services, including Citrix XenClient, Wanova Mirage, Phoenix Technologies’ HyperSpace, and Parallels Workstation. Combined with the continued media attention at VMworld 2010, it is apparent that true mobility is already here.


[Charlton]

In his keynote at VMworld 2010, VMware CEO Paul Maritz spoke of how virtualisation is radically rewriting the computing landscape, in that the existing IT model is being replaced by cloud based on virtualisation, and that IT will increasingly focus on delivering customised applications. “This is the new stack, and we are in a transition from a client/server world to the stack of the cloud era…. One thing history teaches us is that there are winners and losers in moving from stacks. But something like this will happen if we are for it or not,” Maritz said.

This is but the latest step in the trend toward operating system abstraction. Since viable virtualisation technologies arrived on the scene, models and architectures have taken shape to serve varying needs. From type-1 hypervisors, to virtual workspaces, to rich services runtimes, the number of virtualised applications has outstripped the installed base since 2007. Now, service providers have deployed vast virtual data centres, and companies investing in virtualisation are following suit in building their private and hybrid clouds – the latter being those in which tiered virtual machines are created in third-party datacentres so that companies can scale to meet demand.

As virtualisation continues to expand, software practices have had to shift dramatically in kind. As the operating system abstracted development from the underlying hardware, and interpreters abstracted it from the operating system, the latest generation of platforms now abstracts away the gory details that once dogged development and test alike. Maritz stated, “There hasn’t been a lot of development in the operating system environment for 20 years. Now, a developer working with Ruby on Rails doesn’t need to care about what operating system is under the hood.” I would take this a step further – a developer working with HTML5, JavaScript, RoR, Flex/AIR or the rest doesn’t need to care about what application platforms or containers are under the hood. It is difficult not to agree with Maritz that old-style software development has no place in the new, virtualised environment. Rather, new applications are needed so that companies can meld new information flows into existing applications and meet client demand.


[Charlton]

RingCube announced a series of enhancements to their vDesk Desktop Virtualization solution in security, performance, management and usability, which were developed in association with Intel. The capabilities introduced include:

  • Hardened security features, including pre-authentication host checking to protect against keystroke loggers, and granular logging for regulatory compliance;
  • Performance improvements to their MobileSync features, including compression of encrypted workspaces and block-level differencing;
  • Management and usability features that enable faster deployment of vDesk across the network, higher levels of resiliency through an improved architecture, and usability improvements for USB-attached printers.

RingCube is demonstrating the vDesk solution on Intel vPro devices at VMworld 2010 this week.

Learn more about Workspace Virtualization from Intel, CSC, Gartner Research, and RingCube at: http://www.DesktopVirtualizationOptimized.com


[Charlton]

Per Jesus Diaz at gizmodo.com, downloading a simple PDF file can give hackers access to your iPhone, iPod touch, or iPad. The security hole is applicable to all iOS 4 devices.

“The vulnerability is easily exploitable. In fact, the latest one-click, no-computer-required Jailbreak solution for iOS 4 devices uses this same method to break Apple’s own security…It just requires the user to visit a web address using Safari. The web site can automatically load a simple PDF document, which contains a font that hides a special program. When your iOS device tries to display the PDF file, that font causes something called a stack overflow…The result is that, without any user intervention whatsoever, that program can do whatever it wants inside your iPhone, iPod touch, or iPad.”

For the complete story, see Apple Security Breach Gives Complete Access to Your iPhone.

I recently traveled through Europe, and I also regularly spend a fair amount of time talking to folks stateside about all things Cloud Computing. These are not just Silicon Valley technorati I talk to, but folks from all parts of an organization and many industries. Summarizing these conversations, I’m struck by two things:

  1. Most everyone has heard of Cloud Computing and is sure it’s a happening, important trend. A few brave souls admit they don’t really know what it’s about, and many others have latched on to some aspect or other that they regard as the truth about Cloud Computing. But outside Silicon Valley, only a minority truly seems to have grasped the essential concepts of Cloud Computing.
  2. No matter what their level of expertise, many aren’t really clear on why Cloud Computing is a good thing from purely a business point of view. The technically minded love the flexibility of allocating server capacity on demand, for example, and whisper knowingly about utility computing. But when pressed for some thoughts about the associated financial payoffs, relatively mundane things like power savings or lowering capex get mentioned. Nothing that has the potential of moving an organization’s financial needle to the extent the buzz surrounding Cloud Computing should warrant.

I believe there are two reasons for that conceptual void: First, Cloud Computing by and large is still a technology revolution driven by technorati for the technically literate, amplified by the punditry in the media who want to make sure they’re not missing out on reporting the next big thing. Second, the business community at large has not yet wrapped its mind around the product potential of Cloud Computing, and thus far, by and large, sees it as a server cost reduction opportunity.

Almost every Cloud Computing solutions provider we talk to, after they’re done talking about the technical virtues of their offerings, privately admits they’re struggling to engage with the business leadership in a targeted organization. Typically those solution providers end up selling their wares to mid-level IT decision makers, and the TCO benefits that justify a sale indeed center on things like capex or power cost reduction. In the meantime they are trying to figure out how to engage the C-level folks with a larger story – a story that, for the most part, remains elusive.

For all the technical elegance of the Cloud Computing solutions in the marketplace today, the industry as a whole has not yet addressed what I call the “product opportunity”: offering the market compelling solutions that facilitate the launch of new classes of products. The big payoff for a new technology comes when it spawns new types of disruptive products that bring with them new revenue potential, versus simply offering cost reductions over earlier technologies. In the latter case, the total upside of the new technology will never surpass the total revenue of the industry it is replacing.

So, for Cloud Computing to outlast its own buzz and truly have profound financial impact on the entire technology industry, it needs to be understood by the technology-buying business community as being more than a technology for more efficient server utilization. Instead it needs to be perceived as being capable of spawning new products with new revenue potential. What some of these product ideas might be will be the topic of my next blog. Stay tuned.

- A list of cloud related reading materials is here: http://www.btclogic.com/resources.php

- A somewhat more technical definition of what Cloud Computing is can be found here: http://www.btclogic.com/documents/BTCLogic_CloudComputing_Why.pdf
