If cloud computing is a commodity, so is real estate

Running Apps@Rackspace has me thinking quite a bit these days about cloud infrastructure services and what I want from them as a customer.  As recently as yesterday, articles were being published calling cloud computing a commodity — this isn’t new; it has been in the headlines for years now.  Yesterday’s article in The Hindu by Shubashree Desikan makes a point I haven’t seen articulated as clearly before: “While electricity is a commodity produced at a centre and transported to the user, computing power is stored in the servers and users connect to it through their networks.  Data needs to be transferred from the user to the server and processed there. Electricity is a stateless resource, whereas the data and its stages of processing (states) need to be stored and backed up in the server.”

Computing is not a stateless resource — it requires data, and moving that data around is difficult — which makes it location-based, anchored to where the data lives.  Because cloud computing is location-based, it maps more to real estate than to an electrical power grid.  The needs of a cloud computing buyer vary wildly.  I’ll use the three businesses I run as examples of the differing requirements. Jungle Disk’s most important requirement is efficient, massive-scale object storage.  Rackspace Email requires a wide variety of computing platforms and storage systems.  Location also matters for both of these businesses.  The majority of the customers for SharePoint@Rackspace choose dedicated cloud environments for their large production applications.  All of these businesses can run at Rackspace because of the hybrid cloud infrastructure offered — this is akin to a mixed-use real estate development — some retail, professional office space, and housing all in one.

In a post I made about OpenStack a couple of years ago I mentioned “inter-cloud services” that would allow people to temporarily rent resources on other clouds away from the main cloud where their application resides.  Mapping back to real estate, this is the hotel market of cloud computing, and it has yet to evolve.  A hotel room has the furniture you need when you check in — this maps to a cloud already holding the data you want to compute on.  For example, in the future many clouds may come pre-loaded with public or community data sets, so a company wanting to process one of those data sets could pick from a number of clouds.  Technologies such as ZeroVM integrated with an OpenStack Object Storage cluster could be one “hotel chain” that emerges — another may be pre-loaded Hadoop clusters — Hilton vs. Hyatt.

I look forward to consuming a variety of cloud computing resources to power my SaaS businesses and I don’t think of them as a commodity any more than you’d think that your home is interchangeable with a hotel room — cloud computing is not, and won’t become, a commodity in the same way electricity is.

Pragmatic versus ideological — casting your vote for yourself or your fellow Americans?


Voting Booth

Selfish or altruistic? Why are you in the booth?

I enjoy observing the political theater every four years when we elect a new president.  I’m taking off the white gloves and asking some difficult questions for the rest of this post.  My asking may offend you; politics is one of those “third rails” that gets people riled up. If you are offended that’s fine, it is your personal choice — feel free to share why in the comments or with me directly.

The big question… Who should I vote for?  Five words, such a short question, such a complex answer.  Should you vote for the candidate that best matches your ideologies?  What if you know those ideologies are likely to benefit you and harm a majority of the others in America?  Are you morally obligated to vote for the candidate that will be the most beneficial for the country as a whole?

After thinking through that, should you vote for the candidate that makes the biggest impact in the short term or the long term?  What about policy making that benefits future generations versus policy that takes care of people today?  We have been voting, as a nation, in favor of candidates that take care of us today and focus on the short term, and because that is what the electorate is asking for, that is what ends up running for office.

Carrie Lukas does an excellent job explaining our budget problems by comparing the federal budget to an individual household budget after removing eight zeros.  What it basically shows is that the core federal debt is ~7x annual income and growing rapidly.  This is forcing us into a zero interest rate policy, which is going to inhibit growth over the long term.  If the average interest on our debt went up to a still historically low 5.5% we’d be spending ~25% of our income just on interest.  If we have another high interest rate period comparable to the late ’70s we would be spending nearly all of the federal government’s income on interest — how scary is that?

The problem is that neither party wants to solve it — the giant pile of money is their source of power, and politicians, in general, are excited about the power because they believe they can use it to improve lives.  Power isn’t necessarily bad — used properly to allocate resources, a centrally led organization (and a nation is a very large organization) can achieve amazing results.  We aren’t getting an efficiently run centrally planned organization — one example is the manufacturing of the Joint Strike Fighter as reported by the GAO [PDF], “The new program baseline projects total acquisition costs of $395.7 billion, an increase of $117.2 billion (42 percent) from the prior 2007 baseline. Full rate production is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit costs per aircraft have doubled since start of development in 2001.”

The bad news is that efficiently operating our current services won’t fix the situation — even if half of the spending on goods and services were waste, the budget still would not balance over the long term (excluding wages for active civil service or military employees and direct payments for pensions or welfare).  We do not have the resources to fund the entitlements we promised.

Cities are already going bankrupt and wiping out public pensions (e.g. Detroit, Stockton, Vallejo) as the only way to become solvent again.  This isn’t unique to public pensions or budgets.  Private companies file for bankruptcy all the time and in some cases pensions are wiped out (e.g. American Airlines) or significantly reduced.  It is hitting cities first; next will be the states, and they technically can’t go bankrupt. This will give us an opportunity on a smaller scale to see what happens when the federal obligations that cannot be met come due.  I expect it’ll be quite a bit like Greece if it is a smaller state.  The situation is very similar, as the states aren’t in control of their own currency and have membership in a larger group that will want to influence the outcome.  Without the orderly process of a legal bankruptcy it will be interesting to see who gets paid and who doesn’t. For the sake of the federal courts it would be ideal if a smaller state such as Rhode Island defaulted first (it would be half the size of the Detroit city bankruptcy) to work out the kinks before a larger state such as Illinois or California reaches the finish line.

Personally I cast my vote for what I believe will give America the best opportunity to succeed as a nation.  If the land is full of growth-creating opportunity then individual success is easier to achieve. If we end up in a malaise or depression for decades it’ll be harder to meet obligations and for individuals to achieve success.  On our pension situation, I feel bad about promises that were made that likely have no way of being kept.  It may actually lead to increased happiness as families spend more time together.  Would you rather live in your own home as a retiree and burden your children and grandchildren with debt you know their generation cannot repay, or would it be better to live with them? If Greece is our bellwether then nobody is going to voluntarily give up their pension or benefits for the good of the whole. You may ask, “Why should I give up what is mine?  My benefits alone won’t balance the budget — everyone would have to do it together for it to matter.  Until they join me I’m not going first.” — Are you willing to help organize and make this orderly, with a group going together?

Customer reward programs — don’t ignore or belittle the 99%

Airport Security Line

Welcome to the back of the line.

So you’re running a business and want to come up with a way to reward your most valuable customers — why wouldn’t you want to thank them?  In the abstract, thanking them is great; in reality you have to be very careful about how you do it.  If you aren’t careful you’ll set up a system that makes the 99% of your buyers who aren’t “in the club” hate doing business with you.

A perfect example of what not to do is the airline industry.  For non-frequent fliers, from the moment you step into an airport you’re notified that you’re a second-class citizen.  Priority lines for “first class” and “platinum” members to check luggage — and often the personnel there are standing around helping no one because nobody from the privileged class is in line.  Now that you’re through checking luggage things should be okay — oh wait — that isn’t the case at all.  Non-“members” see where they rank again in the security lines; again at the gate, where the agent calls out over a speaker for all to hear, “peons wait until boarding group 4 while we parade the 1% through in front of you” (they don’t say that, but the people standing there think it); then on the flight, “peons, the restroom at the front of the plane isn’t for you, and your drinks will be $7 each in cash only while we give first class drinks for free”.

A great example of what to do is Apple.  Go into a store and you’ll be helped in the same manner whether you’re buying a $10 accessory or a $2,000 laptop.  Call on the phone..same again..now imagine if Apple changed its model to match the airlines: everyone who wasn’t a frequent repeat customer had to stand in a long line while the Apple Platinum Status members received free upgrades from the iPhone 4 to the iPhone 5. A way to measure whether your programs create the brand value and loyalty you want is the Net Promoter Score.  In the airline industry the top two scores go to JetBlue and Southwest.  What do they have in common?  They’re the only airlines that don’t treat people differently.  (Southwest is actually starting to do this and I wonder if it’ll mark the beginning of a decline; I hope they keep watching NPS closely.)
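
For readers who haven’t worked with it, NPS boils down to one piece of arithmetic: survey customers on a 0-10 “would you recommend us?” scale, then subtract the percentage of detractors from the percentage of promoters.  A minimal sketch with made-up responses:

```python
# A minimal sketch of computing a Net Promoter Score from 0-10 survey answers.
# Scores of 9-10 count as promoters, 0-6 as detractors; NPS is the percentage
# of promoters minus the percentage of detractors. The responses are made up.
def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(net_promoter_score([10, 9, 9, 8, 7, 6, 10, 3, 9, 8]))  # -> 30.0
```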

Airlines are making these decisions trying to maximize profits.  I don’t have data, but my gut instinct is that the system they’ve set up suppresses air travel and makes people very price-conscious because they are never delighted by the purchase decision .. flying from their pocketbook rather than their emotional response.  If all of the airlines could have the courage to reboot and learn from the most profitable company in the history of the world (Apple, and by the way I am not a “fanboy” .. I do own some products but this blog is being posted from a Dell laptop, I’ll tweet about it later on an HTC One X Android phone, and I may reply to a comment from a Microsoft Surface tablet [I do also own an iPad but I generally use it for reading my newspaper or books and some web browsing]) they might be making money rather than struggling to stay out of bankruptcy.

Can you think of industries or companies that “get it”?  I’d love more examples.

Open Compute and the future of infrastructure

In case you haven’t heard about it yet, Facebook is not only helping us all stay in contact with friends and share our life experiences.  They’re also doing something perhaps even more influential.  They started the Open Compute Project in 2011 and in October, at the third event, announced the Open Compute Foundation, which my employer (Rackspace Hosting) is also part of.  The opinions in this article are mine and mine alone.

Okay, okay.. yes I did say more influential and I’m not talking in hyperbole.  Global IT spend is around $2,700,000,000,000 (yes, trillions with a ‘t’) annually and much of that is due to the complexity involved in making hardware and software work together — along with the direct and obvious market fragmentation from the top to the bottom of the supply chain.  How many different server models and configurations are available from your vendor of choice?  How many vendors are in the market?  Where is value really being added and where are manufacturers engineering in lock-in / increasing switching costs on purpose?

Today Wired ran an article about Open Compute expanding to include storage gear and virtual I/O (full transparency: this project is led by Rackspace).  This is very exciting because the interaction between servers and storage has driven much of the complexity, and with all of these pieces being worked on under a single umbrella, the light at the end of the tunnel, a simplified system, is now visible.

All of this can lead us to a future of well-understood building blocks — a sign of maturity for “IT systems engineering”.  When a new bridge is built we no longer need to do years of lab testing, integration testing, and all of the other tasks required around inventing.  A structural engineer uses well-known building blocks combined with data about the requirements for the given bridge (load, soil, climate, etc.) and builds a bridge.  We’re not there yet, but a decade from now an IT engineer will be able to do the same thing, which will create much simpler and more reliable systems.  How many of you want to drive over a bridge to work each day that has 3 9s of reliability?  Virtual I/O allows us to decouple CPUs from RAM and other resources.  This could allow you to upgrade CPUs while leaving the memory in place — no need to throw out that old DRAM just because the chip doesn’t fit in the new motherboard that the new CPU needs.
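
To put a number on the bridge comparison above, here is the back-of-the-envelope arithmetic: three nines of availability still allows almost nine hours of downtime a year, something no structural engineer would accept.

```python
# Allowed downtime per year for a given number of nines of availability.
HOURS_PER_YEAR = 24 * 365

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines                      # e.g. 3 nines -> 99.9%
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print("%d nines: %.2f hours of downtime per year" % (nines, downtime_hours))
# 3 nines: 8.76, 4 nines: 0.88, 5 nines: 0.09 (about five minutes)
```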

By the middle of this century we’ll look back at the systems we build now in IT like a structural engineer looks at the “Galloping Gertie” (image top left) — lessons learned, reliability much lower than what we take for granted.  We’ll also be able to do it at much larger scale.

It would be nice, but cloud doesn’t have to be “interoperable”

Disclosure: If you’re coming straight here you may not know that I work for Rackspace Hosting and have been involved with OpenStack since the inception of the project.  The opinions on this blog are my personal ones, not those of my employer.

This post is an assessment, a thought.  I don’t really explore the meaning or outcomes completely.  I may do that in a future rambling… on to the thought…

For the first decade of networking or more we had many competing technologies that didn’t interoperate: SNA, IPX/SPX, TCP/IP, AppleTalk, DECnet, NetBEUI, and more.  The lack of a consistent and unified standard didn’t stop networking from succeeding any more than it will stop cloud computing.  Cloud is a fundamental shift that dramatically increases productivity just like networking did — businesses love increases in productivity and will adopt anything that yields one, often the first option presented to them, and they’ll run it for a refresh cycle before switching to the “interoperable” platform.  OK, so I’m a networking geek, but this isn’t the only analogy that holds true…

We have many programming languages.. compiled, interpreted, functional, object oriented.. all with major differences.

We have many types of processors from low energy mobile chips to super fast server chips all with different instruction sets.

We have a variety of operating systems all with a loyal following and a vastly different set of capabilities.

I believe cloud could see wider and more rapid adoption if interoperability were figured out, but looking back at history, even history specifically in the technology world, we have had many successful markets without true interoperability as a fundamental capability.

Over time most of these markets have achieved the guise of interoperability through consolidation, and it looks like cloud computing is headed the same way.  Networking is predominantly IP; programming is C/C++/C# for OS/infrastructure, Java for enterprise applications, and PHP for web applications; processors are x86 in desktops and servers, and ARM in mobile devices; operating systems are generally Windows for consumer and SMB/departmental large-business IT, and Linux for web and larger-business core IT.

With the pace of innovation and the foundation laid down by previous generational shifts, the cloud market will grow and reach a critical mass of market share much more rapidly because the technology companies involved know the path to follow.  Microprocessors, operating systems, and networking took many decades.  Java swept through the enterprise software development market in a decade, as did PHP across the web.  The cloud market really started to emerge around the start of the decade, and by the look of things we’ll have a clear picture of cloud interoperability by the middle of it.

DevOps != sneaky, reckless, or process-averse hooligans

Operations has one primary mission above all else… uptime.  This often gets them labeled by the rest of the organization with names such as “the brick wall”, “the organization of ‘no’”, and a host of others I won’t use on a PG blog.  With automation coming to IT, operations teams are being asked to ensure uptime while allowing a more rapid evolution of the environment.  The days of an 18-month release cycle are over.  So how exactly do we accomplish this without wrecking things?  Proper testing, visibility, and a scaled staging environment.

Almost all application deployments are iterations and evolutions of existing applications.  They also aren’t single-function systems, so changes to one application can impact the performance or availability of others — this is why operations has become the organization of “no”.  To prevent bad things from happening you first have to understand what is happening.  You need monitoring that can show network, operating system, platform, and application performance.

Now that we can see what is going on in production we need to create a small-scale version of production to use for staging.  When creating this scaled-down environment you’ll need some understanding of how increasing size and load impact the system.  If all of your workloads scale linearly, O(n), it is easy to scale down and predict what will happen in the larger environment.  This won’t always be the case though — some things will be sublinear, such as O(log n), consuming proportionally more resources at small scale than they will once scaled up, while others will be superlinear, polynomial or worse, consuming resources far faster as you grow.
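
As a concrete illustration, here is a minimal sketch of extrapolating a staging measurement up to production under different growth assumptions; the load numbers and curves are made up, but they show why a single staging figure can translate into wildly different production costs:

```python
import math

def extrapolate(staging_cost, staging_load, prod_load, growth):
    """Estimate production resource cost from a staging measurement."""
    ratio = prod_load / float(staging_load)
    if growth == "logarithmic":     # O(log n): cheaper per unit as you grow
        return staging_cost * math.log(prod_load) / math.log(staging_load)
    if growth == "linear":          # O(n): cost tracks load directly
        return staging_cost * ratio
    if growth == "quadratic":       # O(n^2): cost explodes as you grow
        return staging_cost * ratio ** 2
    raise ValueError("unknown growth curve: %s" % growth)

for curve in ("logarithmic", "linear", "quadratic"):
    estimate = extrapolate(staging_cost=10.0, staging_load=1000, prod_load=10000, growth=curve)
    print("%-12s -> %.1f units in production" % (curve, estimate))
# logarithmic -> 13.3, linear -> 100.0, quadratic -> 1000.0
```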

So then where does DevOps come in?

It isn’t just a crazy idea by the developers to try to find a way around “no”.  If implemented properly it will build a strong cooperative effort between development and operations and get rid of the “us vs. them” mentality.  It will also eliminate the duplication of effort where the development team writes unit, integration, and system tests while operations writes an entirely different set of tests in their monitoring and management systems.

This divide also creates problems where new releases get delayed because operations finds issues in staging, having not been involved with test case development early on.  With continuous integration and automated development testing, many of the same tools also make great operational monitoring systems.  This allows operations to write unified tests with development that are used throughout the process — everyone works toward the same goals, and there are no more meetings of “Why didn’t you catch this in development?” and “It works fine in development, you screwed up the installation.”
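
A minimal sketch of what a shared test might look like; the URL and latency budget are placeholder assumptions, and the point is simply that one definition of “working” can serve both the build pipeline and the production monitors:

```python
import sys
import time
from urllib.request import urlopen

HEALTH_URL = "http://app.example.com/health"   # assumption: the app exposes a health endpoint
MAX_LATENCY = 0.5                              # seconds, an illustrative budget

def test_health(url=HEALTH_URL):
    """Runs under a test runner in CI and doubles as a production probe."""
    start = time.time()
    response = urlopen(url, timeout=5)
    elapsed = time.time() - start
    assert response.status == 200, "unexpected status %s" % response.status
    assert elapsed < MAX_LATENCY, "too slow: %.2fs" % elapsed
    return elapsed

if __name__ == "__main__":
    # Standalone mode for monitoring: a non-zero exit lets cron or a
    # Nagios-style scheduler raise the alarm.
    try:
        print("OK %.3fs" % test_health())
    except Exception as exc:
        print("FAIL %s" % exc)
        sys.exit(2)
```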

The next benefit you’ll get from a DevOps culture is that tasks in other areas will be automated, decreasing error rates and increasing delivery speed.  Today developers use distributed version control repositories to deploy applications onto development and testing systems.  Operations will then package those applications into an installer, introducing another step in the process that requires additional testing and new tooling.  DVCS platforms have authentication and tracking built in, all of the audit controls an operations department wants.
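
As a hypothetical sketch of deploying straight from the version control system, a post-receive hook on the central repository could both roll out the new revision and record the audit trail; the paths, branch name, and user lookup here are assumptions for illustration, not a recommendation of specific tooling:

```python
#!/usr/bin/env python
import os
import subprocess
import sys
import time

DEPLOY_DIR = "/srv/app"              # working copy the application runs from
AUDIT_LOG = "/var/log/deploys.log"   # append-only record of who deployed what

def main():
    pusher = os.environ.get("USER", "unknown")    # ssh account that performed the push
    for line in sys.stdin:                        # git supplies "<old> <new> <ref>" per updated ref
        old, new, ref = line.split()
        if ref != "refs/heads/production":        # only deploy the production branch
            continue
        subprocess.check_call(["git", "--work-tree", DEPLOY_DIR, "checkout", "-f", new])
        with open(AUDIT_LOG, "a") as log:
            log.write("%s %s %s..%s\n" % (time.ctime(), pusher, old[:8], new[:8]))

if __name__ == "__main__":
    main()
```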

I know I’ve glossed over the details in this post.  I plan on putting together some detailed future posts covering examples of how I’ll be implementing each section using open source tools to help with work my team is doing on OpenStack.

Three Predictions about Cloud Computing for 2011

With all the talk in 2010 about cloud computing you’d think the entire Internet was running on it.  We’re at the same point now with cloud computing as we were in the late ’80s through the mid-to-late ’90s with networking.  Everyone can clearly see the benefits of cloud, but the market is hyper-fragmented as different pockets of users form communities around one of the available solutions.  To ensure new readers are aware: while I am employed by Rackspace Hosting working on the OpenStack project, the opinions expressed on this blog are mine and I try to present an unbiased view of the market.

With that opening I’ll dive into the three items of importance to cloud computing for the coming year..

..with additional more minor predictions in italics.

1. Cloud computing needs will change as we move from early adopters to mainstream users

Thus far the primary users of cloud infrastructure as a service (IaaS) offerings have been early-adopter, technology-savvy users.  Those users may be founders of a Web 2.0 startup, consultants working for the R&D department of a system integrator, or forward-thinking IT professionals on enterprise IT strategy teams.  Based on the stats in the chart, we still have less than 2% of the top 500k websites hosted on IaaS.  In 2011 this number will grow to 5-10% of the top 500k sites, more than doubling again like it did in 2010.

Source: http://www.jackofallclouds.com/ - Guy Rosen

Inside startup communities everyone will be using the cloud to start their businesses, and in enterprises departmental innovation success stories will start to bubble up to corporate leadership.  This doesn’t mean existing applications will be migrated — people will experiment with migration as disaster recovery innovation, but it won’t be a major driver of cloud growth in 2011.

The major driver of growth will be new applications, and much of this growth won’t be consumer Internet sites that are easy to track, making the “who’s winning in cloud” leaderboard more difficult to compile.  This next wave of adopters will also require additional levels of support, as they won’t have the same “DIY” mentality as early adopters.  Cloud providers will need to raise their own service levels or spend significant effort building a system integrator and consulting ecosystem that can provide it for them.

The ecosystem of tools built on the IaaS cloud APIs will be a foundation to enable the higher service levels.  They will be utilized by the practices of the SIs and consultants as well as the software development teams of many ISVs.  For cloud providers that do not yet have an ecosystem built around their API, this will be the year they move to adopting one of the open APIs with market traction.  Enough providers will settle on a group of 3-5 APIs that ISVs and startups will refuse to develop for the others. API abstractions like jclouds, Libcloud, and Deltacloud will start to add depth rather than additional breadth.
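
A minimal sketch of what that depth looks like from the developer’s side, using Apache Libcloud; the credentials are placeholders and the exact driver arguments vary by provider and library release, but the same calls work regardless of which cloud is underneath:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials; each provider has its own signup and key format.
rackspace = get_driver(Provider.RACKSPACE)("username", "api-key")
ec2 = get_driver(Provider.EC2)("access-key-id", "secret-key")

for driver in (rackspace, ec2):
    for node in driver.list_nodes():
        # One loop, two clouds: the abstraction hides the provider-specific APIs.
        print(driver.name, node.name, node.state)
```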

2. Technology heavyweights with large developer communities will escalate their efforts to define and control PaaS

2006 brought us Rackspace Cloud Sites (originally branded as Mosso)…

2007 gave us Force.com

2008 launched Microsoft Windows Azure and Google App Engine

2009 delivered Heroku’s commercial release and moved VMware into the platform space with the acquisition of SpringSource

2010 saw veterans like Red Hat and Oracle [PDF] announce platform strategies and make acquisitions, such as Red Hat’s recent purchase of Makara

The Growth of Cloud Computing

Over the past few years many platforms and application frameworks have simplified development by providing a foundation and abstracting away lower-level details.  This came with some drawbacks, as most frameworks were not aware of their resource utilization, nor did they have the ability to use the programmatic capabilities of IaaS to change their resource allocation based on load.  In 2011 many platform solutions will become tightly integrated with IaaS APIs to provide dynamic resource management — the auto-scaling cloud workload across public, community, and private cloud installations will become an early-adopter reality.
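
A minimal sketch of the kind of IaaS-integrated scaling loop being described; the metric source, thresholds, and image/size choices are illustrative assumptions rather than any real platform’s interface:

```python
import time
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

driver = get_driver(Provider.RACKSPACE)("username", "api-key")   # placeholder credentials

def average_load(nodes):
    """Placeholder: a real platform would pull this from its monitoring system."""
    return 0.42

def scale(pool_name="web"):
    nodes = [n for n in driver.list_nodes() if n.name.startswith(pool_name)]
    load = average_load(nodes)
    if load > 0.75:                                   # grow the pool under pressure
        image = driver.list_images()[0]               # illustrative choices
        size = driver.list_sizes()[0]
        driver.create_node(name="%s-%d" % (pool_name, int(time.time())),
                           image=image, size=size)
    elif load < 0.25 and len(nodes) > 1:              # shrink when idle, keep one node
        driver.destroy_node(nodes[-1])

if __name__ == "__main__":
    while True:
        scale()
        time.sleep(60)   # re-evaluate once a minute
```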

Building a PaaS solution is possible for a startup if they make it compatible with widely accepted development languages and frameworks, as Mosso did by selecting PHP/Python/.NET with support for common applications like WordPress, Drupal, Django, and more.  Another example is what Heroku did with Ruby and PostgreSQL.  The heavyweights are the only players with deep enough pockets and the required patience to push new programming dynamics.  Microsoft will look at how things are evolving and, in order to defend .NET/C#, will embrace the Mono Project, creating a real threat to Java in the enterprise.

Java won’t take things lying down.  Despite the fallout in 2010 over Oracle vs. Google, over the Apache Foundation first sparring with Oracle and then stepping down from the JCP Executive Committee, and the many other examples such as James Gosling, the creator of Java, coming out against its new steward, Java still maintains the #1 position on the TIOBE index.  Oracle, IBM, and VMware all have deep pockets and big revenue streams tied to the continued success of the language.  IBM, while late to the PaaS party, will tie together Tivoli, WAS, and other components to build a robust platform for their customer base.

The blogosphere will erupt in debate about “what is a true platform as a service” as much as it went on and on in 2010 about infrastructure as a service APIs.  Despite what the pundits believe, the majority of enterprise IT dollars will go toward “false platform” private cloud solutions. Making the leap for major projects from current development methodologies and procedures to one of the new platforms will be too much — IT organizations need evolution, not revolution.

3. Enterprises will begin to evolve their virtualization deployments into private clouds and they’ll expect networking and audit controls beyond the capabilities of many current systems

Cloud deployments in enterprises will go in two different directions depending on the expectations of the project-sponsoring executive team.  Departmental usage of cloud that flies under the official process radar is not what I’m talking about.  Over the past year I’ve had conversations with numerous people on Fortune 500 IT strategy teams, and cloud is being looked at in a couple of different ways. One group looks at it only as a technology solution that will magically make their operations more efficient.  Another group realizes that the automation cloud brings only benefits them if process changes happen in parallel with the systems improvements.  Enterprise cloud projects that are only technology-focused will not provide any meaningful savings, and enterprises going this route will become disillusioned.  Virtualization was about consolidation, not automation, and because of that it didn’t include the business process changes that cloud requires.  You can’t simply install a “cloud upgrade” to your virtualization system and instantly have a cloud.

Dilbert - Cloud Encryption

The cloud projects are going to run into a second set of hurdles.  Up to this point enterprises have typically deployed departmental, non-audited, non-regulated applications on clouds.  In 2011 projects will need to address corporate risk management and IT audit requirements.  Major public clouds such as Amazon Web Services and Rackspace have pursued and received various attestations such as SAS70 Type II (Rackspace example), ISO27001 (AWS example), and PCI DSS [PDF] (Visa Global List of Validated Service Providers), showing that it is possible to build cloud services that meet compliance requirements.  For enterprise projects to be successful they need to involve risk and audit up front so the proper control mechanisms are built into the deployment.  Because cloud is about workflow automation, having to insert a manual audit control late in the project in order to meet launch plans will eliminate many, if not all, of the projected benefits.

Corporate risk and IT audit teams will invalidate a number of cloud software fabrics, and those platforms will quickly try to re-engage on projects by announcing partnerships with security companies. Platforms that come from service provider and government backgrounds, such as OpenStack, have a head start along with platforms that have evolved from enterprise DNA, such as VMware.  As enterprises start to spend significant dollars on cloud, the R&D investment in cloud platforms will dwarf what has been spent to date, and any head start that exists today can easily vanish in 2011.

Audit controls for cloud platforms need to include both host and network services.  It is possible to architect a cloud and map the controls into existing systems, though this won’t be instantly turnkey or easy in 2011.  Network virtualization will make cloud systems more flexible at the cost of making compliance controls more complicated.  Process-wise this also introduces another department into the cloud deployment discussions.  Most clouds deployed in 2011 will focus on server automation; networking will be addressed in subsequent phases of the technology transition in 2012 and beyond.

Conclusion

2011 will be another big year for the adoption of cloud technology, but these fundamental shifts happen slowly — especially when they involve people learning new processes and not just transparent technology replacement.  When done right, cloud will make IT vastly more efficient, and when the cost of services declines, demand for those services often skyrockets.

This post focused on cloud computing.  I’ll be making other posts in January about distributed storage platforms (a.k.a. “cloud storage”) and why it’ll be important for enterprises to understand them and have them readily available to their users before the middle of the decade.  It is a fundamentally different problem, as many cloud storage systems can be installed transparently without end-user process changes.

Why OpenStack matters to me

I’d like to start off with an apology to everyone out there if, over the past 9 months, I didn’t reply to your email, didn’t answer your phone call, or made your life less interesting by disappearing from Twitter and from sharing my thoughts on this blog.  I’ll be out, alive, and available again now that OpenStack is a reality.

Life is about priorities, and hopefully at some point you have already had, or will have, an opportunity to work on something that has the ability to really make an impact.  At Rackspace we are a Strengths-based organization.  My top 5 are Learner, Achiever, Competition, Analytical, and Focus.  I’ll use my strengths as a way to explain the past ~9 months.

When we started exploring the strategy around this, all of us had lots to learn.  We’d all used open source software.  Some of us on the team had contributed to projects, but we all knew we had a lot to learn if we were going to get this right.  The great thing about open source is that its full history is on the Internet.  You can go back and read mailing list archives; you can find out who contributed to a project, who led it, and who had influence; and you can reach out to those people, who are often happy to talk about it.  This is very different from trying to do research on businesses, where information is hard to find — no corporation will share the full mailing list archive covering the history of its decision making (heck, most don’t even have one).  The openness and the ability to learn about things easily were a huge motivator for me.

So began the Learner->Analytical->Focus->Achiever “death spiral”, well, the “death” of my learning anything not involved in this project, that is.  The good news is those four strengths together make me really enjoy learning about complex new systems and figuring out the best way to navigate them; the bad news is the Focus->Achiever half may let me chase Alice all the way down the rabbit hole to Wonderland.  Sometimes this is counterproductive, where a decision could have been made “good enough” with less analysis, but in this case I’m really happy about it.  When forming an open source community you have a lot of choices to make, all of them have different benefits and drawbacks, and whether something is a benefit or a drawback varies with the perspective of the individual or group.

Forming this community is important enough to go all the way down the rabbit hole because thousands of people will become part of it and each potential member of the community is worth more than an hour of my time.  This gives me a good segue to talk about scale — if you’re only going to use a piece of software once to solve a single need then you should make it just good enough to get the job done — you should optimize for min(time coding + time for the code to run [where you have to pay attention to it]).  The opposite end of the spectrum is a project like Linux (or like OpenStack will be — I dream big!) that runs on millions of machines 24/7 all around the globe.  If you can make an operation one minute faster on something that runs on a million machines you save about 2 years’ worth of system time.  With that same idea we spent all the time we could making sure we got the community started the right way, because every hour we spent will be multiplied by each of you who joins it.
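
A quick back-of-the-envelope check on that claim:

```python
# One minute saved per operation, across a million machines.
minutes_saved = 1 * 1000000
years = minutes_saved / 60.0 / 24 / 365
print("%.1f years of machine time" % years)   # ~1.9 years
```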

So now here is where my Competition kicks in.  I don’t want to make just an average community and then go watch reruns of “Everybody Loves Raymond” (Ray, hopefully you aren’t offended, you shouldn’t be, you were the first show that I know made it to rerun syndication that popped into my head!) on local TV — I want to make the best community ever.  The problem is… the bar is really high.. it isn’t like I said, “I want to make the biggest ball of rainbow yarn a person with a 9 letter long name made on a Tuesday afternoon” — I want to make the best open source community around a distribution of projects out there — and a lot of people have done an excellent job at this.  So to do this we’ve learned as much as we could from past projects to lay the proper foundation.  With that let me lay out the “4 opens” (I’d like to credit Rick Clark on our team for summarizing these thoughts into a concise and clear manner we can all hopefully understand)…

Open Source: We are committed to creating truly open source software that is usable and scalable. Truly open source software is not feature or performance limited and is not crippled. We will utilize the Apache Software License 2.0 making the code freely available to all. [Personal commentary: What this means is "we accept patches", the project won't block a feature contribution because it competes with a commercial feature a community member has.  This doesn't mean all of those commercial entities have to contribute all of their code -- it just means they aren't guaranteed exclusivity.]

Open Design: Every 6 months the development community will hold a design summit to gather requirements and write specifications for the upcoming release.  [Personal commentary: The design summits have been great (so far we've had 2) to get people aligned and to really get the complicated items solved.  An example on this is the large object support for Object Storage, members of the community had a number of different implementation ideas and through discussion we've come up with a great way to do it.]

Open Development: We will maintain a publicly available source code repository through the entire development process.  This will be hosted on Launchpad, the same community used by 100s of projects including the Ubuntu Linux distribution. [Personal commentary: Getting code and designs out in the open as early as possible in the process allows everyone to benefit from the power of a community in the biggest way possible.  This also makes finding and fixing big problems much easier as each patch can be tracked and its individual impact measured.]

Open Community: Our core goal is to produce a healthy, vibrant development and user community.  Most decisions will be made using a lazy consensus model.  All processes will be documented, open and transparent. [Personal commentary: Everyone should have a seat at the table at a level that corresponds to the effort and contributions they're putting into the project.  With all of the decision making done in IRC meetings (with transcripts) and over mailing lists members of the community can see "how the sausage was made" rather than just the end result of the decision -- this is really important to build and maintain trust.]

We’re off to a fun and exciting start.  Looking at the stats from this week I’m amazed at the amount of contribution we’re seeing from such a large group of developers (stats for the week of 12/3 to 12/9):

  • OpenStack Compute (NOVA) Data
    • 17 Active Reviews
    • 97 Active Branches – owned by 34 people & 4 teams
    • 472 commits by 26 people in last month
  • OpenStack Object Storage (SWIFT) Data
    • 5 Active Reviews
    • 41 Active Branches – owned by 19 people & 2 teams
    • 184 commits by 15 people in last month

This shows me what we’re doing is working, and given time to continue to grow and bloom, OpenStack Compute can help IT make the move to automation the same way manufacturing has over the past 50 years.  Yes, I’m saying IT isn’t automated right now. IT automates other tasks inside the enterprise, but it hasn’t really automated many of its own tasks (this probably deserves a full post of its own).

Object Storage is potentially even more important than the automation.  This is a topic I’ve been presenting on frequently because I’m very passionate about it (see the Strengths above): it allows an order-of-magnitude increase in efficiency over the TCO of “the average storage solution”.  It doesn’t serve every storage use case, but the use cases it does serve are growing rapidly, and over the next decade it’ll be clear to everyone that their largest storage platform (in terms of GB stored) will be object-based.
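
For the curious, a minimal sketch of what talking to an OpenStack Object Storage (swift) cluster looks like with python-swiftclient; the endpoint and credentials are placeholders, and auth options differ between deployments:

```python
from swiftclient import client

conn = client.Connection(authurl="https://swift.example.com/auth/v1.0",
                         user="account:user", key="secret")

conn.put_container("backups")                         # containers are flat namespaces
with open("backup.tar.gz", "rb") as f:
    conn.put_object("backups", "2010-12-09.tar.gz", contents=f)

headers, objects = conn.get_container("backups")      # list what's stored
for obj in objects:
    print(obj["name"], obj["bytes"])
```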

I expect we’ll see additional projects become part of OpenStack over the next year, but as a community we should keep the bar high on what constitutes a major project.  Both Compute and Object Storage provide software for ubiquitous problems that are growing in importance to everyone.  Some items that clear the bar for me (these are critical issues for all users and operators of clouds a decade from now):

“Networking as a Service” — This should be abstracted from the end-point computing service, since it can be utilized by all projects and can provide connection points to other inter-cloud and non-cloud services.  Here we can define routing, switching, and filtering network devices and automate their integration with other cloud services.

“Inter-cloud Services” — As different clouds become available with varied services we need an automated way to discover and catalog them the same way routing protocols advertise network availability so we can have a loosely coupled global network (you may be familiar with it.. the Internet).  OpenStack is a great place to define a reference implementation of the directory and advertising capabilities as all interested parties can have a seat at the table to contribute their needs.

Some items I’m on the fence about (the reason I’m on the fence isn’t that they aren’t extremely important to some implementations, it is that they aren’t important to all implementations):

“Host Provisioning Automation” — For service providers that are constantly growing and re-provisioning assets, automating these tasks is critical.  For an SMB that is going to build a 2-6 cabinet cloud solution once, this isn’t nearly as important.

“Security & Compliance Services” — Everyone wants “some level” of security, but what that level is, and how many resources get dedicated to providing it, varies widely.

“Network Block Storage Services” — As the performance and size of local storage continue to increase, the need for network block storage decreases.  I’m still a big believer in the benefits here for many use cases; it just doesn’t apply to every use case.

I really believe that in 2011 our community has a chance to deliver “the promise of cloud” to the masses through the efforts and commercial implementations created by its members.  As exciting as getting things off the ground in 2010 was, I’m even more excited about the future to come.

Advertising isn’t the only business model for websites

People pick advertising because it doesn't require selling and selling is hard.

A post by Ken Fisher at Ars Technica stirred up quite the hornet’s nest.  Brian Carper replied that “Advertising is devastating to my well-being”.  Rob Sayre chimed in on the Mozilla Blog about “Why Ad Blockers Work”.  All three of these were picked up by Hacker News and became some of the most-commented threads of the week.

I’m not going to rehash anything said in those posts — I’m instead going to look at the different business models in the print and broadcast media markets and ask Internet site operators why they aren’t trying to monetize in those ways.

In print media, publications exist that are 100% advertising-supported.  You’ll find them in the magazine racks by the exit of your local supermarket or between the exterior and interior doors of a diner like Denny’s.  These publications have marginal-quality content — not good enough that I’d be willing to pay for it, but good enough that if I want something to read while I eat my Grand Slam I might pick it up and thumb through it.  If you operate a website and try to support it 100% through advertising, you’re telling me, “My content is marginal, so I believe I can only monetize it through advertising because you wouldn’t be willing to pay me for it.”

Moving to broadcast media, the days of 100% advertising support are nearly gone.  As of this study from December 2008, nearly 90% of US households receive their television through a subscription-based service.  We’ve seen a decade or more of whining from the major networks that they can’t continue to provide the quality we’re used to while viewership continues to decline.  None of the networks provides 24×7 original content; after 11:00 PM on most of them you get 6 hours of infomercials until the early-morning news shows.  The whining by website operators that users block their ads sounds a lot like the major networks crying the same thing about DVRs (a DVR is the functional equivalent of an ad blocker in your browser, as long as you skip the commercials with it) and/or about the fact that we have more selection now due to competition from companies with other models.

Most content today is published under a hybrid model of pay-for-content (either through a one-time purchase or a subscription) plus advertising revenue.  This model is used by magazines, newspapers, and cable TV channels.  Because they have a hybrid model they can produce content that doesn’t require as large an audience to generate a profit.  Ars Technica comes close to using this model on the Internet, except when you subscribe there all they do is stop showing ads — they aren’t getting the model right.  I pay a monthly subscriber fee to TNT or ESPN and they still show me advertising.  If you’re going to have a subscription service on a website, give the users access to premium content — don’t just turn off ads.  I’ll pay for premium content, and I won’t pay to have ads turned off when I can turn them off for free with an ad blocker.

The final model is 100% pay-for-content with no advertising.  In the print business this applies to very few publications — mostly academic journals.  In broadcast media many “premium channels” exist, such as HBO, Showtime, Cinemax, and Starz, that generate all of their revenue from pay-for-content.  Ars Technica is jumping from the 100% advertising model to the 100% pay-for-content model, but they’re giving away the exact same content.  Many HBO subscribers would be willing to watch their favorite series with commercials for free each month instead of paying the $10 subscription fee — but HBO doesn’t give you that choice — it is subscribe or don’t get access.  For you to be successful with this model you have to have premium-quality content that will attract enough people willing to pay to cover your cost to produce it.

Most of the Internet today is running on the first business model, and because of that you get “weekly circular” quality content surrounded by tons of flashy advertising.  Very few websites have been able to use a hybrid model successfully; the NY Times and WSJ are a couple of examples.  I’m not certain whether their web divisions are profitable or not — but that has less to do with any inability to run a hybrid-model web property than with the fact that they are still mostly print-based companies carrying costs a pure Internet business would not have.

We’re still very early in the days of media moving to the Internet.  Based on some 2009 estimates, Internet advertising amounted to ~$21B, whereas newspapers still brought in ~$31B, television ~$36B, and magazines ~$16B — these numbers are advertising revenue only; purchase/subscription revenue is not included.  As revenue continues to shift to Internet publishing formats you’ll see all of these models emerge, and as a publisher you’ll need to figure out which category you want to be in.  If you don’t view your content as “local circular” quality then perhaps you should start looking at a new business model today.

How to tell the difference between “cloud” and “virtualization”

Many people seem to think “cloud” is just off-premise “virtualization”.  Cloud comes in a few flavors, and I’ll argue that you can have a “private cloud” hosted either off-premise in a provider’s facility or in your own.  The fundamental difference between cloud and virtualization is that the goal of cloud is to automate provisioning (this applies to IaaS, PaaS, and SaaS) while the goal of virtualization is to optimize resource utilization.  You can (and many providers do) use virtualization as the basis for building a cloud, but it is not required.
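
A minimal sketch of that difference in practice: with a cloud, provisioning is an API call rather than a ticket to the virtualization team.  The provider, credentials, and image/size choices below are placeholder assumptions, shown with Apache Libcloud:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

driver = get_driver(Provider.RACKSPACE)("username", "api-key")   # placeholder credentials

node = driver.create_node(name="build-agent-01",
                          image=driver.list_images()[0],         # illustrative choices
                          size=driver.list_sizes()[0])

# Block until the node is reachable, then hand its addresses back to the caller.
for running_node, ip_addresses in driver.wait_until_running([node]):
    print(running_node.name, ip_addresses)
```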

If we take a look at the Reductive Labs presentation from OpsCamp, slide 3 illustrates the primary benefit of cloud.  Cloud helps companies even when their minimum unit of work is larger than a single host machine, a case where virtualization just adds overhead.  The difference between “cloud” and “grid computing” or HPC is that grid/HPC processes jobs in a batch manner rather than serving interactive applications.  You can build a compute grid on top of a cloud but not vice versa.

Other folks are saying “private clouds can’t exist because you can’t have rapid elasticity and pay for what you use”.  For a small company you may not be able to have a private cloud but for a large enterprise with many business units you certainly can.  An IT infrastructure BU can provide other organizations in the company all of the requirements of a cloud.

For public cloud to succeed they need to provide all three

Depending on the current utilization across an enterprise’s infrastructure, it may be able to defer spending for a number of years by moving to a fully cloud-enabled business.  Right now many departments cling to servers they don’t need because they’re afraid that if they release them they’ll never get them back.  With cloud removing that fear, resource hoarding ends and many enterprises will see a significant increase in available computing power.

Over the long term if the public computing clouds continue to grow, increase their transparency, and optimize their delivery models it will no longer make financial sense for enterprises to build their own infrastructure.  Public cloud providers will need to prove over the next decade they can deliver on all three corners of the “impossible triangle”.
