Three Predictions about Cloud Computing for 2011

With all the talk in 2010 about cloud computing you’d think the entire Internet was running on it.  We’re at the point with cloud computing that networking reached in the late ’80s through the mid-to-late ’90s: everyone can clearly see the benefits, but the market is hyper-fragmented as different pockets of users form communities around one of the available solutions.  To ensure new readers are aware: while I am employed by Rackspace Hosting working on the OpenStack project, the opinions expressed on this blog are mine, and I try to present an unbiased view of the market.

With that opening I’ll dive into the three items of importance to cloud computing for the coming year, with additional minor predictions in italics.

1. Cloud computing needs will change as we move from early adopters to mainstream users

Thus far the primary users of cloud infrastructure-as-a-service (IaaS) offerings have been early-adopter, technology-savvy users.  Those users may be founders of a Web 2.0 startup, consultants working for the R&D department of a system integrator, or forward-thinking IT professionals on enterprise IT strategy teams.  Based on the stats in the chart below, less than 2% of the top 500k websites are hosted on IaaS.  In 2011 this number will grow to 5-10% of the top 500k sites, more than doubling again as it did in 2010.

[Chart: share of the top 500k websites hosted on each IaaS provider. Source: Guy Rosen, http://www.jackofallclouds.com/]

Inside startup communities everyone will be using the cloud to start their business, and in enterprises departmental innovation success stories will start to bubble up to corporate leadership.  This doesn’t mean existing applications will be migrated: people will experiment with migration for disaster recovery, but it won’t be a major driver of cloud growth in 2011.

The major driver of growth will be new applications, and much of this growth won’t be consumer Internet sites that are easy to track, making the “who’s winning in cloud” leaderboard more difficult to compile.  This next wave of adopters will also require additional levels of support, as they won’t have the same “DIY” mentality as the early adopters.  Cloud providers will need to raise their own service levels or spend significant effort building a system integrator and consulting ecosystem that can provide that support for them.

The ecosystem of tools built on the IaaS cloud APIs will be a foundation for enabling those higher service levels.  They will be utilized by the practices of SIs and consultants as well as by the software development teams of many ISVs.  For cloud providers that do not yet have an ecosystem built around their API, this will be the year they adopt one of the open APIs with market traction.  Enough providers will consolidate around a group of 3-5 APIs that ISVs and startups will refuse to develop for the others. API abstractions like jclouds, Libcloud, and Deltacloud will start to add depth rather than additional breadth; a minimal sketch of that abstraction layer follows.
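As a concrete illustration, here is a minimal sketch using Apache Libcloud (Python) to inventory servers across two providers through one interface.  The credentials are placeholders and exact driver arguments vary by Libcloud version; treat it as a sketch of the pattern, not production code.

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# One abstraction, two very different provider APIs underneath.
Rackspace = get_driver(Provider.RACKSPACE)
EC2 = get_driver(Provider.EC2)

drivers = [
    Rackspace('username', 'api_key'),           # placeholder credentials
    EC2('access_key_id', 'secret_access_key'),  # placeholder credentials
]

# The same calls work against every supported provider.
for driver in drivers:
    for node in driver.list_nodes():
        print(driver.name, node.name, node.state, node.public_ips)
```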

2. Technology heavyweights with large developer communities will escalate their efforts to define and control PaaS

2006 brought us Rackspace Cloud Sites (originally branded as Mosso)…

2007 gave us Force.com

2008 launched Microsoft Windows Azure and Google App Engine

2009 delivered Heroku’s commercial release and moved VMware into the platform space with the acquisition of SpringSource

2010 saw veterans like Red Hat and Oracle [PDF] announce platform strategies and make acquisitions, such as Red Hat’s recent purchase of Makara

Over the past few years many platforms and application frameworks have simplified development by providing a foundation and abstracting away lower-level details.  This came with some drawbacks: most frameworks were not aware of their resource utilization, nor could they use the programmatic capabilities of IaaS to change their resource allocation based on load.  In 2011 many platform solutions will become tightly integrated with IaaS APIs to provide dynamic resource management, and the auto-scaling cloud workload across public, community, and private cloud installations will become an early adopter reality.  A naive sketch of such a scaling pass follows.
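To make the dynamic resource management idea concrete, here is a single pass of a scaling loop sketched in Python with Apache Libcloud.  The thresholds, the load-metric hook, and the image/size selection are all assumptions for illustration, not a recommended policy.

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

SCALE_UP = 0.75    # grow the pool above 75% average utilization
SCALE_DOWN = 0.25  # shrink it below 25%

def autoscale_once(driver, get_avg_load, min_nodes=2, max_nodes=20):
    """One pass of a naive control loop: compare a load metric to
    thresholds and add or remove one node through the IaaS API."""
    nodes = driver.list_nodes()
    load = get_avg_load(nodes)  # assumed hook into a monitoring system
    if load > SCALE_UP and len(nodes) < max_nodes:
        image = driver.list_images()[0]  # placeholder image/size selection
        size = driver.list_sizes()[0]
        driver.create_node(name='web-%02d' % (len(nodes) + 1),
                           image=image, size=size)
    elif load < SCALE_DOWN and len(nodes) > min_nodes:
        driver.destroy_node(nodes[-1])  # naive victim selection

# Placeholder credentials and a stub metric (50% load changes nothing).
driver = get_driver(Provider.RACKSPACE)('username', 'api_key')
autoscale_once(driver, lambda nodes: 0.5)
```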

Building a PaaS solution is possible for a startup if it is made compatible with widely accepted development languages and frameworks, as Mosso did by selecting PHP/Python/.NET with support for common applications like WordPress, Drupal, Django, and more, and as Heroku did with Ruby and PostgreSQL.  The heavyweights are the only players with pockets deep enough and the required patience to push new programming dynamics.  Microsoft will look at how things are evolving and, in order to defend .NET/C#, will embrace the Mono Project, creating a real threat to Java in the enterprise.

Java won’t take things lying down.  Despite the fallout in 2010 over Oracle vs. Google, over the Apache Software Foundation first sparring with Oracle and then stepping down from the JCP Executive Committee, and over many other examples, such as James Gosling, the creator of Java, coming out against its new steward, Java still holds the #1 position on the TIOBE index.  Oracle, IBM, and VMware all have deep pockets and big revenue streams tied to the continued success of the language.  IBM, while late to the PaaS party, will tie together Tivoli, WAS, and other components to build a robust platform for its customer base.

The blogosphere will erupt in debate about “what is a true platform as a service,” just as it went on and on in 2010 about infrastructure-as-a-service APIs.  Despite what the pundits believe, the majority of enterprise IT dollars will go toward “false platform” private cloud solutions. Making the leap for major projects from current development methodologies and procedures to one of the new platforms will be too much; IT organizations need evolution, not revolution.

3. Enterprises will begin to evolve their virtualization deployments into private clouds and they’ll expect networking and audit controls beyond the capabilities of many current systems

Cloud deployments in enterprises will go in two different directions depending on the expectations of the executive team sponsoring the project.  (Departmental usage of cloud that flies under the official process radar is not what I’m talking about here.)  Over the past year I’ve had conversations with numerous people on Fortune 500 IT strategy teams, and cloud is being looked at in a couple of different ways. One group looks at it only as a technology solution that will magically make their operations more efficient.  Another group realizes that the automation cloud brings only benefits them if process changes happen in parallel with the systems improvements.  Enterprise cloud projects that are only technology focused will not provide any meaningful savings, and enterprises going this route will become disillusioned.  Virtualization was about consolidation, not automation, and because of that it didn’t require the business process changes that cloud does.  You can’t simply install a “cloud upgrade” on your virtualization system and instantly have a cloud.

[Dilbert comic: cloud encryption]

These cloud projects are going to run into a second set of hurdles.  Up to this point, enterprises have typically deployed departmental, non-audited, non-regulated applications on clouds.  In 2011, projects will need to address corporate risk management and IT audit requirements.  Major public clouds such as Amazon Web Services and Rackspace have received various attestations such as SAS70 Type II (Rackspace example), ISO 27001 (AWS example), and PCI DSS [PDF] (Visa Global List of Validated Service Providers), showing that it is possible to build cloud services that meet compliance requirements.  For enterprise projects to be successful they need to involve risk and audit up front so the proper control mechanisms are in the deployment.  Because cloud is about workflow automation, having to insert a manual audit control late in the project in order to meet the launch plan will eliminate many, if not all, of the projected benefits.  A sketch of one such built-in control follows.
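As a minimal sketch of what “control mechanisms in the deployment” can mean in an automated workflow, the Python below wraps a provisioning call so every action lands in an append-only audit trail.  The log path, the `provision_server` helper, and the Libcloud-style `create_node` call are illustrative assumptions, not a prescribed control.

```python
import getpass
import json
import time

AUDIT_LOG = '/var/log/cloud-audit.jsonl'  # assumed append-only destination

def audited(action):
    """Decorator: record who ran which action, when, and with what
    arguments, so audit reviews a trail instead of gating each change."""
    def wrapper(*args, **kwargs):
        entry = {
            'time': time.time(),
            'user': getpass.getuser(),
            'action': action.__name__,
            'kwargs': {k: str(v) for k, v in kwargs.items()},
        }
        result = action(*args, **kwargs)
        entry['result'] = str(result)
        with open(AUDIT_LOG, 'a') as log:
            log.write(json.dumps(entry) + '\n')
        return result
    return wrapper

@audited
def provision_server(driver, name=None, image=None, size=None):
    # Hypothetical wrapper around a Libcloud-style compute driver.
    return driver.create_node(name=name, image=image, size=size)
```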

Corporate risk and IT audit teams will disqualify a number of cloud software fabrics, and those platforms will quickly try to re-engage on projects by announcing partnerships with security companies. Platforms that come from service provider and government backgrounds, such as OpenStack, have a head start, along with platforms that have evolved from enterprise DNA, such as VMware.  As enterprises start to spend significant dollars on cloud, the R&D investment in cloud platforms will dwarf what has been spent to date, and any head start held today could easily vanish in 2011.

Audit controls for cloud platforms need to include both host and network services.  It is possible to architect a cloud and map the controls into existing systems, though this won’t be instantly turnkey or easy in 2011.  Network virtualization will make cloud systems more flexible at the cost of making compliance controls more complicated.  Process-wise, this also introduces another department into the cloud deployment discussions.  Most clouds deployed in 2011 will focus on server automation; networking will be addressed in subsequent phases of the technology transition in 2012 and beyond.

Conclusion

2011 will be another big year for the adoption of cloud technology, but these fundamental shifts happen slowly, especially when they involve people learning new processes and not just transparent technology replacement.  When done right, cloud will make IT vastly more efficient, and when the cost of services declines, demand for those services often skyrockets.

This post focused on cloud computing.  I’ll be making other posts in January about distributed storage platforms (a.k.a. “cloud storage”) and why it will be important for enterprises to understand them and have them readily available to their users before the middle of the decade.  It is a fundamentally different problem, as many cloud storage systems can be installed transparently without end-user process changes.


  • http://twitter.com/hollanddavids David Holland

    I have some questions:

    Given the revenue difference between EC2 and Rackspace ($1B vs. $0.6B), how do you interpret the graph in prediction #1?
    The 2% to 10% growth prediction argues that infrastructure capacity will increase by a factor between 3x and 5x in 2011. We already know Amazon has begun this investment; is it safe to assume Rackspace has as well?

    If the driver of growth is not new Web applications but includes back-end functions (à la Netflix), how do we anticipate the resolution of known limitations in provider infrastructure (performance, visibility, and isolation) will play out?

    Certainly it is clear to anyone who is watching that Amazon has their own approach to resolving those issues. How do you see the migration to Infrastructure 2.0 playing out? What impact will this migration have on the ecosystems and APIs in 2011 and beyond? In particular, the infrastructure capabilities of the leading providers must be equivalent, or at least competitive, in terms of core functionality. What will happen to the distribution of ecosystems if this balance is not maintained?

    In as much as PaaS is built on IaaS APIs, how will Infrastructure 2.0 impact PaaS development?

    Again, how will Infrastructure 2.0 support the network and audit controls that will be required as the cloud model expands?

  • http://www.bretpiatt.com Bret Piatt

    [Question] Given the revenue difference between EC2 and Rackspace ($1B vs. $0.6B), how do you interpret the graph in prediction #1?

    The graph shows which cloud infrastructures are being utilized to host commercial websites. It doesn’t show cloud usage for data processing, archiving, etc. AWS is excellent at hosting applications designed to run on it; Cloud Servers is easier to get started on and easier to deploy non-cloud-designed applications to. The difference in customer support models may also explain some of this: non-“ubergeek” users with Internet-based business ideas often like the ability to call techs on the phone to get help with things.

    [Question] The 2% to 10% growth prediction argues that infrastructure capacity will increase by a factor between 3x and 5x in 2011. We already know Amazon has begun this investment; is it safe to assume Rackspace has as well?

    I can’t comment on Rackspace-specific investments as we’re a publicly traded company. You’ll have to look at our official disclosures on infrastructure spending.

    [Question] If the driver of growth is not new Web applications but includes back-end functions (à la Netflix), how do we anticipate the resolution of known limitations in provider infrastructure (performance, visibility, and isolation) will play out?

    Performance visibility, and matching allocated resources to current need, is critical as sites scale. When you’re just starting out, the difference between $50/mo and $250/mo isn’t worth spending significant time over. When you’re talking $50k/mo vs. $250k/mo, an application owner will want to know what is driving that cost so they can optimize it. I’ll give a somewhat biased opinion and recommend Cloudkick for providing excellent visibility into utilization and for tracking how changes in an application impact needs (this probably deserves a post of its own; adding it to my list!).

    [Question] Certainly it is clear to anyone who is watching that Amazon has their own approach to resolving those issues. How do you see the migration to Infrastructure 2.0 playing out? What impact will this migration have on the ecosystems and APIs in 2011 and beyond? In particular, the infrastructure capabilities of the leading providers must be equivalent, or at least competitive, in terms of core functionality. What will happen to the distribution of ecosystems if this balance is not maintained?

    It will be important for providers to have the functionality customers need for the use cases they are trying to satisfy. This doesn’t mean every provider needs to have the same or equivalent functionality in total. I commented on how I see IaaS APIs evolving over the next year in the post above; where it goes beyond 2011 will vary dramatically depending on how correct my predictions are. The one item I will add here is that we’ve seen with both AWS and Rackspace that the pairing of Compute + Object Storage is very powerful, so I’d expect providers that currently only offer one or the other to add the complementary product in 2011 (a small sketch of the pairing follows). Beyond that, it isn’t 100% clear yet what “core” is for mainstream users vs. early adopters.
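    To show why the pairing is convenient, here’s a minimal Python sketch using Apache Libcloud’s storage API to push a file from a compute node into object storage. The credentials, container name, and file paths are placeholders.

    ```python
    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    # Placeholder credentials; Cloud Files here, but S3 works the same way.
    CloudFiles = get_driver(Provider.CLOUDFILES)
    storage = CloudFiles('username', 'api_key')

    # e.g. a nightly database dump produced on a compute node...
    container = storage.create_container('backups')  # one-time setup
    storage.upload_object('/tmp/db-dump.sql.gz', container,
                          'db-dump-2011-01-02.sql.gz')
    ```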

    [Question] In as much as PaaS is built on IaaS APIs, how will Infrastructure 2.0 impact PaaS development?

    PaaS built on IaaS will be able to offer much more dynamic environments for applications than classic PaaS services. This dynamic infrastructure will allow for free (or nearly free) developer accounts. This is very important for PaaS offerings that are difficult (or impossible) for developers to replicate on their laptops. Heroku, Azure, and App Engine are all good examples of this (all three are built on IaaS; a provider doesn’t have to expose or offer the IaaS they build their PaaS on for it to be built in a layered manner).

    [Question] Again, how will Infrastructure 2.0 support the network and audit controls that will be required as the cloud model expands?

    A number of efforts are underway from the core SDOs and industry regulatory bodies, and a number of ad-hoc groups have also organized to address this. Some examples: http://cloudaudit.com/ // http://www.cloudsecurityalliance.org/ // http://www.oasis-open.org/committees/id-cloud/charter.php // http://bit.ly/fXgqFv (IEEE Intercloud) // and many more… The feedback provided by enterprise customers in 2011 will dramatically accelerate the efforts here.

  • http://twitter.com/hollanddavids David Holland

    The coupling of Compute and Storage exacerbates the I/O limits of the softswitch model (performance). Further enhancement of the softswitch model to provide improved visibility (read: diagnostic actions) in concert with improved isolation (read: addressing models) is not feasible in a unified high-capacity Compute/Storage environment. At least that was my experience (transcoding media for mobile consumption).

    A few months ago I asked myself, “If I were building a cloud infrastructure today, what would it look like?” My answer to myself is that I would build the infrastructure on an IPv6 address model to provide each customer with their own IPv4 local address space. Using an IPv6 local address restores customer-specific address coherency in the cloud infrastructure while providing customer-specific local address coherency at the customer overlay network level (a rough sketch of the mapping follows). I believe providing the customer with a familiar network environment will be an important feature for IaaS providers in the near future.
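    For illustration only, here is a minimal Python sketch of the kind of mapping being described: each customer gets their own slice of an IPv6 prefix with room to embed a full 32-bit IPv4 local space. The prefix, the per-customer layout, and the helper function are all hypothetical.

    ```python
    import ipaddress

    # Assumed provider-wide prefix (a ULA here); each customer effectively
    # gets a /96, leaving the low 32 bits to carry an IPv4-sized local space.
    PROVIDER_PREFIX = ipaddress.IPv6Network('fd00:cafe::/64')

    def customer_address(customer_id, ipv4_local):
        """Embed a customer's private IPv4 address in that customer's own
        IPv6 range, restoring per-customer address coherency in the fabric."""
        base = int(PROVIDER_PREFIX.network_address) | (customer_id << 32)
        return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(ipv4_local)))

    # Two customers can reuse 10.0.0.5 without colliding in the fabric.
    print(customer_address(7, '10.0.0.5'))  # fd00:cafe::7:a00:5
    print(customer_address(8, '10.0.0.5'))  # fd00:cafe::8:a00:5
    ```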

    I also believe that creating a network layer for the customer between the Compute resource and the IaaS control boundary is the best way to do this because I don’t believe putting that control in the network is the right choice. NIST agrees with me on this.

    There are two issues with this model. The first is how L2 bcast/multicast is handled in the supporting infrastructure (which depends on migration regions and need not reflect customer L2 domains). My assumption is that L2 bcast/multicast should be supported by the infrastructure as long as the bcast can be limited to a specific customer’s compute resources. I am not aware of a reason to support bcast/multicast to storage. Since the density of any customer’s compute in an infrastructure bcast domain is always known (even though it is dynamic), the individual compute bcast rate can be limited to acceptable rates, based on the size of the infrastructure pipes, which should be large enough for networked storage; a back-of-the-envelope version follows. L3 bcast/mcast should be the configuration choice of the customer (IMO) because I see this as in the customer network domain.
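    A back-of-the-envelope version of that rate limit, with all numbers invented for illustration:

    ```python
    def per_node_bcast_cap(pipe_capacity_mbps, bcast_fraction, customer_nodes):
        """Cap each node's broadcast rate so a customer's combined bcast
        traffic stays within a fixed fraction of the infrastructure pipe."""
        budget_mbps = pipe_capacity_mbps * bcast_fraction
        return budget_mbps / max(customer_nodes, 1)

    # 10 Gbps pipes, 1% reserved for bcast, 25 of this customer's nodes
    # currently in the domain -> 4 Mbps cap per node, recomputed as the
    # (dynamic but known) density changes.
    print(per_node_bcast_cap(10_000, 0.01, 25))  # 4.0
    ```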

    The second issue is how to introduce this addressing model into the live network. Ideally the new infrastructure can change underneath the existing customer base with a live migration. This leaves some burden on the softswitch, but the customer infrastructure layer can perform the required translation. As a side effect of this model, live migration regions can be de-coupled from customer addressing schemes. Some of the additional benefits of this model are the ability to provide the customer with a controllable network component capable of supporting port mirroring, and the ability to provide the customer with compute resources that are configurable as interior nodes differentiated from exterior nodes (mapping to the enterprise model).

    I think the existence of CloudSwitch demonstrates that some pretty successful folks believe this model (if not this mechanism) is worth developing. @lmacvittie had a well-written piece on this a couple of weeks ago:
    http://devcentral.f5.com/weblogs/macvittie/archive/2010/12/14/ldquolights-outrdquo-in-the-cloud.aspx

    I have not checked the most recent IPv6 support committed to OpenStack, but I hope it includes this separation between infrastructure and customer in terms of address models (it was not obvious to me that it did).

  • http://twitter.com/anupamsahain Anupam Saha

    If given an option between Rackspace and Amazon, I would go with Amazon. Reason: Rackspace’s CDN doesn’t support HTTPS and CNAME, whereas Amazon’s does.

  • http://www.onlinetech.com/managed-services/it-disaster-recovery Disaster Recovery

    Great predictions. I’ve written a blog post about 2011 cloud computing and disaster recovery statistics; you can find it here: http://resource.onlinetech.com/2011-cloud-it-disaster-recovery-statistics/