So you’re running a business and want to come up with a way to reward your most valuable customers — why wouldn’t you want to thank them? In the abstract, thanking them is great, but in reality you have to be very careful about how you do it. If you aren’t careful you’ll set up a system that makes the 99% of the people who buy from you but aren’t “in the club” hate doing business with you.
A perfect example of what not to do is the airline industry. For non-frequent fliers, from the moment you step into an airport you’re notified that you’re a second-class citizen. There are priority lines for “first class” and “platinum” members to check luggage; often personnel stand there helping no one because nobody from the privileged class is in line. Once you’re through checking luggage things should be okay — oh wait, that isn’t the case at all. Non-“members” see where they rank again in the security lines; at the gate the agent calls out over a speaker for all to hear, “peons wait until boarding group 4 while we parade the 1% through in front of you” (they don’t say that, but the people standing there think it); then on the flight, “peons, the restroom at the front of the plane isn’t for you, and your drinks will be $7 each, cash only, while first class drinks for free”.
A great example of what to do is Apple. Go into a store and you’ll be helped in the same manner whether you’re buying a $10 accessory or a $2,000 laptop. Call on the phone: same again. Imagine if Apple changed to model itself after the airlines — everyone who wasn’t a frequent repeat customer would have to stand in a long line while the Apple Platinum Status members received free upgrades from the iPhone 4 to the iPhone 5. A way to measure how your programs create the brand value and loyalty you want is the Net Promoter Score (NPS). In the airline industry the top two scores go to JetBlue and Southwest. What do they have in common? They’re the only airlines that don’t treat people differently. (Southwest is actually starting to do this, and I wonder if it’ll lead to the beginning of the decline; I hope they keep watching NPS closely.)
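As a refresher on the math behind the metric: NPS is the percentage of promoters (scores 9–10 on the standard 0–10 “would you recommend us?” survey) minus the percentage of detractors (scores 0–6). A minimal sketch in Python, with made-up survey scores:

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 survey."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey results: two promoters, two passives, two detractors.
print(net_promoter_score([10, 9, 8, 7, 6, 2]))  # -> 0.0
```

A score near zero like this means your fans are exactly canceled out by the people you’ve annoyed — which is the trap the airline-style programs fall into.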
Airlines are making these decisions trying to maximize profits. I don’t have data, but my gut instinct is that the system they’ve set up suppresses air travel and makes people very price conscious because they are never delighted by the purchase decision — flying from their pocketbook rather than their emotional response. If the airlines had the courage to reboot and learn from the most profitable company in the history of the world (Apple — and by the way, I am not a “fanboy”: I do own some products, but this blog is being posted from a Dell laptop, I’ll tweet about it later on an HTC One X Android phone, and I may reply to a comment from a Microsoft Surface tablet [I do also own an iPad, but I generally use it for reading my newspaper or books and some web browsing]), they might be making money rather than struggling to stay out of bankruptcy.
Can you think of industries or companies that “get it”? I’d love more examples.
If you haven’t heard about it yet, Facebook is not only helping us all stay in contact with friends and share our life experiences. They’re also perhaps doing something even more influential. They started the Open Compute Project in 2011, and in October, at the third event, announced the Open Compute Foundation, which my employer (Rackspace Hosting) is also part of. The opinions in this article are mine and mine alone.
Okay, okay.. yes, I did say more influential, and I’m not speaking in hyperbole. Global IT spend is around $2,700,000,000,000 (yes, trillions with a ‘t’) annually, and much of that is due to the complexity involved in making hardware and software work together — along with the direct and obvious market fragmentation from the top to the bottom of the supply chain. How many different server models and configurations are available from your vendor of choice? How many vendors are in the market? Where is value really being added, and where are manufacturers engineering in lock-in and increasing switching costs on purpose?
Today Wired ran an article about Open Compute expanding to include storage gear and virtual I/O (full transparency: this project is led by Rackspace). This is very exciting, as the interaction between servers and storage has driven much of the complexity, and with all of these being worked on under a single umbrella, the light at the end of the tunnel to a simplified system is now visible.
All of this can lead us to a future of well understood building blocks — a sign of maturity for “IT systems engineering”. When a new bridge is built we no longer need years of lab testing, integration testing, and all of the other tasks required around inventing. A structural engineer takes well known building blocks, combines them with data about the requirements for the given bridge (load, soil, climate, etc.), and builds a bridge. We’re not there yet, but a decade from now an IT engineer will be able to do the same thing, which will create much simpler and more reliable systems. How many of you want to drive over a bridge to work each day that has three nines of reliability? Virtual I/O also allows us to decouple CPUs from RAM and other resources. This could let you upgrade CPUs while leaving the memory in place — no need to throw out that old DRAM just because the chip doesn’t fit in the motherboard the new CPU needs.
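For context on the “three nines” quip: 99.9% availability still allows nearly nine hours of downtime a year, which is acceptable for plenty of IT systems but unthinkable for a bridge. A quick back-of-the-envelope calculation:

```python
def downtime_hours_per_year(availability):
    """Hours of allowed downtime per (non-leap) year at a given availability."""
    return (1.0 - availability) * 365 * 24

print(downtime_hours_per_year(0.999))    # three nines: almost 9 hours/year
print(downtime_hours_per_year(0.99999))  # five nines: a few minutes/year
```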
By the middle of this century we’ll look back at the systems we build now in IT like a structural engineer looks at the “Galloping Gertie” (image top left) — lessons learned, reliability much lower than what we take for granted. We’ll also be able to do it at much larger scale.
Disclosure: If you’re coming straight here you may not know that I work for Rackspace Hosting and have been involved with OpenStack since the inception of the project. The opinions on this blog are my personal ones, not those of my employer.
This post is an assessment, a thought. I don’t really explore the meaning or outcomes completely. I may do that in a future rambling… on to the thought…
For the first decade of networking or more we had many competing technologies that didn’t interoperate: SNA, IPX/SPX, TCP/IP, AppleTalk, DECnet, NetBEUI, and more. The lack of a consistent, unified standard didn’t stop networking from succeeding any more than it will stop cloud computing. Cloud is a fundamental shift that dramatically increases productivity, just like networking did — businesses love increases in productivity and will adopt anything that yields one, often the first one presented to them, and they’ll run it for a refresh cycle before switching to the “interoperable” platform. Okay, so I’m a networking geek, but this isn’t the only analogy that holds true…
We have many programming languages.. compiled, interpreted, functional, object oriented.. all with major differences.
We have many types of processors from low energy mobile chips to super fast server chips all with different instruction sets.
We have a variety of operating systems all with a loyal following and a vastly different set of capabilities.
I believe cloud could see wider and more rapid adoption if interoperability is figured out, but looking back at history, even history specifically in the technology world, we have many successful markets without true interoperability as a fundamental capability.
Over time most of these markets have achieved the guise of interoperability through consolidation, and it looks like cloud computing is headed the same way. Networking is predominantly IP; programming is C/C++/C# for OS/infrastructure, Java for enterprise applications, and PHP for web applications; processors are x86 in desktops and servers and ARM in mobile devices; operating systems are generally Windows for consumer and SMB/departmental large-business IT, and Linux for web and larger-business core IT.
With the pace of innovation and the foundation laid down by previous generational shifts, the cloud market will grow and reach a critical mass market share much more rapidly, as the technology companies involved know the path to follow. Microprocessors, operating systems, and networking took many decades. Java swept through the enterprise software development market in a decade, as did PHP across the web. The cloud market really started to emerge around the start of the decade, and by the current look of things, by mid-decade we’ll have a clear picture of interoperability for clouds.
Operations has one primary mission above all else: uptime. This often gets them labeled by the rest of the organization with names such as “the brick wall”, “the organization of ‘no’”, and a host of others I won’t use on a PG blog. With automation coming to IT, operations teams are being asked to ensure uptime while allowing a more rapid evolution of the environment. The days of an 18-month release cycle are over. So how exactly do we accomplish this without wrecking things? Proper testing, visibility, and a scale staging environment.
Almost all application deployments are iterations and evolutions of existing applications. They also aren’t single-function systems, so changes to one application can impact the performance or availability of others — this is why operations has become the organization of “no”. To prevent bad things from happening, you first have to understand what is happening. You need monitoring that can show network, operating system, platform, and application performance.
Now that we can see what is going on in production, we need to create a small-scale version of production for staging. When creating this scaled-down environment you’ll need some understanding of how size and load impact the system. If all of your resource costs grow linearly, O(n), then it is easy to scale down and predict what the system will do in a larger environment. That won’t always be the case, though: some costs are sublinear, like O(log n), and barely shrink when you scale down, making staging look disproportionately expensive, while others grow as a linear multiple, O(kn), or polynomially, O(n^k), and consume resources far faster than the environment grows as you scale up.
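As a rough illustration (my own sketch, not from the original post — the function name, numbers, and complexity classes are all hypothetical), here’s how you might extrapolate a resource measurement from staging to production under an assumed complexity class:

```python
import math

def project_cost(staging_cost, staging_n, production_n, complexity="linear"):
    """Extrapolate a measured staging cost to production scale,
    assuming the cost follows a known complexity class."""
    ratio = production_n / staging_n
    if complexity == "linear":        # O(n)
        return staging_cost * ratio
    if complexity == "logarithmic":   # O(log n)
        return staging_cost * math.log(production_n) / math.log(staging_n)
    if complexity == "quadratic":     # O(n^2)
        return staging_cost * ratio ** 2
    raise ValueError(f"unknown complexity class: {complexity}")

# 10 units of cost measured at 100 nodes, projected to 10,000 nodes:
print(project_cost(10, 100, 10_000, "linear"))       # -> 1000.0
print(project_cost(10, 100, 10_000, "logarithmic"))  # -> 20.0
print(project_cost(10, 100, 10_000, "quadratic"))    # -> 100000.0
```

The spread between those three projections is exactly why you can’t size a staging environment without knowing (or measuring) how each subsystem scales.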
So then where does DevOps come in?
It isn’t just a crazy idea by the developers to find a way around “no”. Implemented properly, it builds a strong cooperative effort between development and operations and gets rid of the “Us vs. Them” mentality. It also eliminates the duplication of effort where a development team writes unit, integration, and system tests while operations writes an entirely different set of tests into their monitoring and management systems.
This divide also creates problems where new releases get delayed because operations finds issues in staging, having not been involved with test case development early on. Many of the tools used for continuous integration and automated testing also make great operational monitoring systems. This allows operations to write unified tests with development that are used throughout the process — everyone works from the same goals, and there are no more meetings of “Why didn’t you catch this in development?” versus “It works fine in development, you screwed up the installation.”
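As a sketch of the idea (my own illustration with a made-up health payload, not anything from a specific tool): the same check function can run in CI against a staging response and as a monitoring probe against production, so dev and ops literally share the test.

```python
def check_service_health(payload, max_latency_ms=250):
    """Shared dev/ops check: validate a service's health-report payload.
    Returns a list of failure descriptions; an empty list means healthy."""
    failures = []
    if payload.get("status") != "ok":
        failures.append(f"status is {payload.get('status')!r}, expected 'ok'")
    if payload.get("latency_ms", float("inf")) > max_latency_ms:
        failures.append(
            f"latency {payload.get('latency_ms')}ms exceeds {max_latency_ms}ms")
    return failures

# In CI this would run against a staging response; in monitoring, against
# production. Same assertions, same thresholds, same definition of "working".
healthy = {"status": "ok", "latency_ms": 120}
degraded = {"status": "ok", "latency_ms": 900}
print(check_service_health(healthy))   # -> []
print(check_service_health(degraded))  # -> one latency failure
```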
The next benefit you’ll get from a DevOps culture is that tasks in other areas will be automated, decreasing error rates and increasing delivery speed. Today developers use distributed source repositories with version control to deploy applications onto development and testing systems; operations then packages those applications into an installer, which introduces another step in the process that needs additional testing and new tooling. DVCS platforms have authentication and tracking built in — all of the audit controls an operations department wants.
I know I’ve glossed over the details in this post. I plan on putting together some detailed future posts covering examples of how I’ll be implementing each section using open source tools to help with work my team is doing on OpenStack.
With all the talk in 2010 about cloud computing you’d think the entire Internet was running on it. We’re at the same point now with cloud computing that networking was at from the late ’80s through the mid-to-late ’90s. Everyone can clearly see the benefits of cloud, but the market is hyper-fragmented as different pockets of users form communities around the available solutions. To ensure new readers are aware: while I am employed by Rackspace Hosting working on the OpenStack project, the opinions expressed on this blog are mine, and I try to present a non-biased view of the market.
With that opening I’ll dive into the three items of importance to cloud computing for the coming year..
..with additional more minor predictions in italics.
1. Cloud computing needs will change as we move from early adopters to mainstream users
Thus far the primary users of cloud infrastructure as a service (IaaS) offerings have been early-adopter, technology-savvy users. Those users may be founders of a Web 2.0 startup, consultants working for the R&D department of a system integrator, or forward-thinking IT professionals on enterprise IT strategy teams. Based on the stats in the chart, we still have less than 2% of the top 500k websites hosted on IaaS. In 2011 this number will grow to 5-10% of the top 500k sites, more than doubling again as it did in 2010.
Inside startup communities everyone will be using the cloud to start their business, and in enterprises departmental innovation success stories will start to bubble up to corporate leadership. This doesn’t mean existing applications will be migrated — people will experiment with migration as a disaster recovery innovation, but it won’t be a major driver of cloud growth in 2011.
The major driver of growth will be new applications, and much of this growth won’t be consumer Internet sites that are easy to track, making the “who’s winning in cloud” leaderboard more difficult to compile. This next wave of adopters will also require additional levels of support, as they won’t have the same “DIY” mentality as early adopters. Cloud providers will need to raise their own service levels or spend significant effort building a system integrator and consulting ecosystem that can provide it for them.
The ecosystem of tools built on the IaaS cloud APIs will be a foundation for these higher service levels. They will be utilized by the practices of the SIs and consultants as well as the software development teams of many ISVs. For cloud providers that do not yet have an ecosystem built around their API, this will be the year they move to adopting one of the open APIs with market traction. Enough providers will use a group of 3-5 APIs that ISVs and startups will refuse to develop for others. API abstractions like jclouds, Libcloud, and Deltacloud will start to add depth rather than additional breadth.
2. Technology heavyweights with large developer communities will escalate their efforts to define and control PaaS
2007 gave us Force.com…
2009 delivered Heroku’s commercial release and moved VMware into the platform space with the acquisition of SpringSource…
Over the past few years many platforms and application frameworks simplified development by providing a foundation and abstracting away lower-level details. This came with some drawbacks, as most frameworks were not aware of their resource utilization, nor did they have the ability to use the programmatic capabilities of IaaS to change their resource allocation based on load. In 2011 many platform solutions will become tightly integrated with IaaS APIs to provide dynamic resource management — the auto-scaling cloud workload across public, community, and private cloud installations will become an early adopter reality.
Building a PaaS solution is possible for a startup if they make it compatible with widely accepted development languages and frameworks, as Mosso did by selecting PHP/Python/.NET with support for common applications like WordPress, Drupal, Django, and more. Another example is what Heroku did with Ruby and PostgreSQL. The heavyweights are the only players with pockets deep enough and the patience required to push new programming dynamics. Microsoft will look at how things are evolving and, in order to defend .NET/C#, will embrace the Mono Project, creating a real threat to Java in the enterprise.
Java won’t take things lying down. Despite the fallout in 2010 over Oracle vs. Google, over the Apache Foundation first clashing with Oracle and then stepping down from the JCP Executive Committee, and the many other examples — such as James Gosling, the creator of Java, coming out against its new steward — Java still holds the #1 position on the TIOBE index. Oracle, IBM, and VMware all have deep pockets and big revenue streams tied to the continued success of the language. IBM, while late to the PaaS party, will tie together Tivoli, WAS, and other components to build a robust platform for their customer base.
The blogosphere will erupt in debate about “what is a true platform as a service,” much as it went on and on in 2010 about infrastructure as a service APIs. Despite what the pundits believe, the majority of enterprise IT dollars will go toward “false platform” private cloud solutions. Making the leap for major projects from current development methodologies and procedures to one of the new platforms will be too much — IT organizations need evolution, not revolution.
3. Enterprises will begin to evolve their virtualization deployments into private clouds and they’ll expect networking and audit controls beyond the capabilities of many current systems
Cloud deployments in enterprises will go in two different directions depending on the expectations of the sponsoring executive team. (Departmental usage of cloud that flies under the official process radar is not what I’m talking about.) Over the past year I’ve had conversations with numerous people on Fortune 500 IT strategy teams, and cloud is being looked at a couple of different ways. One group sees it only as a technology solution that will magically make their operations more efficient. Another group realizes that the automation cloud brings only benefits them if process changes happen in parallel with the systems improvements. Enterprise cloud projects that are only technology-focused will not provide any meaningful savings, and enterprises going this route will become disenchanted. Virtualization was about consolidation, not automation, and because of that it didn’t include the business process changes that cloud requires. You can’t simply install a “cloud upgrade” on your virtualization system and instantly have a cloud.
The cloud projects are going to run into a second set of hurdles. Up to this point, enterprises have typically deployed only departmental, non-audited, non-regulated applications on clouds. In 2011 projects will need to address corporate risk management and IT audit requirements. Major public clouds such as Amazon Web Services and Rackspace have addressed and received various attestations such as SAS70 Type II (Rackspace example), ISO27001 (AWS example), and PCI DSS [PDF] (Visa Global List of Validated Service Providers), showing that it is possible to build cloud services that meet compliance requirements. For enterprise projects to be successful they need to involve risk and audit up front so the proper control mechanisms are built into the deployment. Because cloud is about workflow automation, having to insert a manual audit control late in the project in order to meet the launch plans will eliminate many, if not all, of the projected benefits.
Corporate risk and IT audit teams will invalidate a number of cloud software fabrics, and those platforms will quickly try to re-engage on projects by announcing partnerships with security companies. Platforms that come from service provider and government backgrounds, such as OpenStack, have a head start, along with platforms that have evolved from enterprise DNA, such as VMware. As enterprises start to spend significant dollars on cloud, the R&D investment in cloud platforms will dwarf what has been spent to date, and any current head start could easily vanish in 2011.
Audit controls for cloud platforms need to include both host and network services. It is possible to architect a cloud and map the controls into existing systems, though this won’t be instantly turnkey or easy in 2011. Network virtualization will make cloud systems more flexible at the cost of making compliance controls more complicated. Process-wise, this also introduces another department into the cloud deployment discussions. Most clouds deployed in 2011 will focus on server automation; networking will be addressed in subsequent phases of the technology transition in 2012+.
2011 will be another big year for the adoption of cloud technology, but these fundamental shifts happen slowly — especially when they involve people learning new processes and not just transparent technology replacement. When done right, cloud will make IT vastly more efficient, and when the cost of a service declines, demand for that service often skyrockets.
This post focused on cloud computing. I’ll be making other posts in January about distributed storage platforms (aka “cloud storage”) and why they’ll be important for enterprises to understand and have readily available to their users before the middle of the decade. It is a fundamentally different problem, as many cloud storage systems can be installed transparently without end-user process changes.
I’d like to start off with an apology to everyone out there: if, over the past 9 months, I didn’t reply to your email, didn’t answer your phone call, or made your life less interesting by disappearing from Twitter and from sharing my thoughts on this blog, I’m sorry. I’ll be out, alive, and available again now that OpenStack is a reality.
Life is about priorities, and hopefully at some point in your life you have already had, or will have, an opportunity to work on something that can really make an impact. At Rackspace we are a Strengths-based organization. My top 5 are Learner, Achiever, Competition, Analytical, and Focus. I’ll use my strengths as a way to explain the past ~9 months.
When we started exploring the strategy around this, all of us had lots to learn. We’d all used open source software, and some of us on the team had contributed to projects, but we knew we had a lot to learn if we were going to get this right. The great thing about open source is that its full history is on the Internet. You can go back and read mailing list archives; you can find out who contributed to a project, who led them, who had influence; and you can reach out to those people, who are often happy to talk about it. This is very different from trying to do research on businesses, where information is hard to find — no corporation will share the full mailing list archive that covers the history of its decision making (heck, most don’t even have one). The openness and the ability to learn about things easily were a huge motivator for me.
So began the Learner->Analytical->Focus->Achiever “death spiral” — well, the “death” of my learning anything not involved in this project, that is. The good news is that those four strengths together make me really enjoy learning about new complex systems and figuring out the best way to navigate them; the bad news is the Focus->Achiever half may let me chase Alice all the way down the rabbit hole to Wonderland. Sometimes this is counterproductive, where a decision could have been made “good enough” with less analysis, but in this case I’m really happy about it. When forming an open source community you have a lot of choices to make, all of them have different benefits and drawbacks, and whether something is perceived as a benefit or a drawback varies with the perspective of the individual or group.
Forming this community is important enough to go all the way down the rabbit hole because thousands of people will become part of it, and each potential member of the community is worth more than an hour of my time. This gives me a good segue to talk about scale. If you’re only going to use a piece of software once to solve a single need, you should make it just good enough to get the job done — optimize for min(time coding + time for code to run [where you have to pay attention to it]). The opposite end of the spectrum is a project like Linux (or like OpenStack will be — I dream big!) that runs on millions of machines 24/7 all around the globe. If you can make an operation one minute faster on something that runs on a million machines, you save about 2 years’ worth of system time. With that same idea, we spent all the time we could making sure we got the community started the right way, because every hour we spend will be multiplied by each of you who joins it.
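The “2 years” figure checks out as a back-of-the-envelope calculation:

```python
def machine_years_saved(minutes_saved, machine_count):
    """Total system time saved, in years, across a fleet of machines."""
    minutes_per_year = 60 * 24 * 365
    return minutes_saved * machine_count / minutes_per_year

# One minute saved across a million machines:
print(machine_years_saved(1, 1_000_000))  # -> roughly 1.9 years
```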
So now here is where my Competition kicks in. I don’t want to make just an average community and then go watch reruns of “Everybody Loves Raymond” (Ray, hopefully you aren’t offended, you shouldn’t be, you were the first show that I know made it to rerun syndication that popped into my head!) on local TV — I want to make the best community ever. The problem is… the bar is really high.. it isn’t like I said, “I want to make the biggest ball of rainbow yarn a person with a 9 letter long name made on a Tuesday afternoon” — I want to make the best open source community around a distribution of projects out there — and a lot of people have done an excellent job at this. So to do this we’ve learned as much as we could from past projects to lay the proper foundation. With that let me lay out the “4 opens” (I’d like to credit Rick Clark on our team for summarizing these thoughts into a concise and clear manner we can all hopefully understand)…
Open Source: We are committed to creating truly open source software that is usable and scalable. Truly open source software is not feature or performance limited and is not crippled. We will utilize the Apache Software License 2.0 making the code freely available to all. [Personal commentary: What this means is "we accept patches", the project won't block a feature contribution because it competes with a commercial feature a community member has. This doesn't mean all of those commercial entities have to contribute all of their code -- it just means they aren't guaranteed exclusivity.]
Open Design: Every 6 months the development community will hold a design summit to gather requirements and write specifications for the upcoming release. [Personal commentary: The design summits have been great (so far we've had 2) to get people aligned and to really get the complicated items solved. An example on this is the large object support for Object Storage, members of the community had a number of different implementation ideas and through discussion we've come up with a great way to do it.]
Open Development: We will maintain a publicly available source code repository through the entire development process. This will be hosted on Launchpad, the same platform used by hundreds of projects including the Ubuntu Linux distribution. [Personal commentary: Getting code and designs out in the open as early as possible in the process allows everyone to benefit from the power of a community in the biggest way possible. This also makes finding and fixing big problems much easier as each patch can be tracked and its individual impact measured.]
Open Community: Our core goal is to produce a healthy, vibrant development and user community. Most decisions will be made using a lazy consensus model. All processes will be documented, open and transparent. [Personal commentary: Everyone should have a seat at the table at a level that corresponds to the effort and contributions they're putting into the project. With all of the decision making done in IRC meetings (with transcripts) and over mailing lists members of the community can see "how the sausage was made" rather than just the end result of the decision -- this is really important to build and maintain trust.]
We’re off to a fun and exciting start. Looking at the stats from this week I’m amazed at the amount of contribution we’re seeing from such a large group of developers (stats for the week of 12/3 to 12/9):
- OpenStack Compute (NOVA) Data
- 17 Active Reviews
- 97 Active Branches – owned by 34 people & 4 teams
- 472 commits by 26 people in last month
- OpenStack Object Storage (SWIFT) Data
- 5 Active Reviews
- 41 Active Branches – owned by 19 people & 2 teams
- 184 commits by 15 people in last month
This shows me that what we’re doing is working, and given time to continue to grow and bloom, OpenStack Compute can help IT make the move to automation the same way manufacturing has over the past 50 years. Yes, I’m saying IT isn’t automated right now. IT automates other tasks inside the enterprise, but they haven’t really automated many of their own tasks (this probably deserves a full post of its own).
Object Storage is potentially even more important than the automation. This is a topic I’ve been presenting on frequently because I’m very passionate about it (see the Strengths above): it allows an order-of-magnitude increase in efficiency over the TCO of “the average storage solution”. It doesn’t serve every storage use case, but the use case it does serve is growing rapidly, and over the next decade it’ll be clear to everyone that their largest storage platform (in terms of GB stored) will be object based.
I expect we’ll see additional projects become part of OpenStack over the next year, but we should keep the bar high as a community on what constitutes a major project. Both Compute and Object Storage provide software for ubiquitous problems that are growing in importance to everyone. Some items that clear the bar for me (these are critical issues for all users and operators of clouds a decade from now):
“Networking as a Service” — This should be abstracted from the end-point computing service so it can be utilized by all projects and provide connection points to other inter-cloud and non-cloud services. Here we can define routing, switching, and filtering network devices, and automate their integration with other cloud services.
“Inter-cloud Services” — As different clouds become available with varied services we need an automated way to discover and catalog them the same way routing protocols advertise network availability so we can have a loosely coupled global network (you may be familiar with it.. the Internet). OpenStack is a great place to define a reference implementation of the directory and advertising capabilities as all interested parties can have a seat at the table to contribute their needs.
Some items I’m on the fence about (the reason I’m on the fence isn’t that they aren’t extremely important to some implementations; it is that they aren’t important to all implementations):
“Host Provisioning Automation” — For service providers that are constantly growing and re-provisioning assets, automating these tasks is critical. For an SMB that is going to build a 2-6 cabinet cloud solution once, this isn’t nearly as important.
“Security & Compliance Services” — Everyone wants “some level” of security, but what that level is, and what amount of resources gets dedicated to providing it, varies widely.
“Network Block Storage Services” — As the performance and size of local storage continues to increase the need for network block storage decreases. I’m still a big believer in the benefits here for many use cases; it just doesn’t apply for every use case.
I really believe that in 2011 our community has a chance to deliver “the promise of cloud” to the masses through the efforts and commercial implementations of its members. As exciting as getting things off the ground in 2010 was, I’m even more excited about the future to come.
A post by Ken Fisher at Ars Technica stirred up quite the hornet’s nest. Brian Carper replied that, “Advertising is devastating to my well-being”. Rob Sayre chimed in on the Mozilla Blog about, “Why Ad Blockers Work”. All three of these were picked up by Hacker News and became some of the most commented threads of the week.
I’m not going to rehash anything said in those posts — I’m instead going to look at the different business models in the print and broadcast media markets and ask Internet site operators why they aren’t trying to monetize in those ways.
In print media, publications exist that are 100% advertising supported. You’ll find them in the magazine racks by the exit of your local supermarket or between the exterior and interior doors of a coffee shop like Denny’s. These publications have marginal quality content — not good enough that I’d be willing to pay for it, but good enough that if I want something to read while I eat my Grand Slam I might pick it up and thumb through it. If you operate a website and you try to support it 100% through advertising you’re telling me, “My content is marginal, so I only believe I can monetize it through advertising because you wouldn’t be willing to pay me for it.”
Moving to broadcast media, the days of 100% advertising-supported content are nearly gone. As of this study from December 2008, nearly 90% of US households receive their television through a subscription-based service. We’ve seen a decade or more of whining from the major networks that they can’t continue to provide the quality we’re used to while viewership continues to decline. None of the networks provide 24×7 original content; after 11:00PM on most of them you get 6 hours of infomercials until the early morning news shows. The whining by website operators that users block their ads sounds a lot like the major networks crying about DVRs (a DVR is the functional equivalent of an ad blocker in your browser, as long as you use it to skip the commercials) and about the greater selection we now have due to competition from companies with other models.
Most content today is published under a hybrid model of pay-for-content (either through one-time purchase or a subscription) plus advertising revenue. This model is used by magazines, newspapers, and cable TV channels. Because they have a hybrid model they can produce content that doesn’t require as large of an audience to generate a profit. Ars Technica comes close to using this model on the Internet, except when you subscribe there all they do is stop showing ads — they aren’t getting the model right. I pay a monthly subscriber fee to TNT or ESPN and they still show me advertising. If you’re going to have a subscription service on a website, give the users access to premium content — don’t just turn off ads. I’ll pay for premium content; I won’t pay to have ads turned off when I can turn them off for free with an ad blocker.
The final model is 100% pay-for-content with no advertising. In the print business this applies to very few publications — mostly academic journals. In broadcast media many “premium channels” exist, such as HBO, Showtime, Cinemax, and Starz, that generate all of their revenue from pay-for-content. Ars Technica is jumping from the 100% advertising model to the 100% pay-for-content model, but they’re giving away the exact same content. Many HBO subscribers would be willing to watch their favorite series with commercials for free instead of paying the $10 monthly subscription fee — but HBO doesn’t give you that choice — it is subscribe or don’t get access. For you to be successful with this model you have to have premium quality content that attracts enough paying customers to cover your cost of production.
Most of the Internet today is running in the first business model and because of that you get “weekly circular” quality content surrounded by tons of flashy advertising. Very few websites have been able to successfully use a hybrid model. The NY Times and WSJ are a couple of examples. I’m not certain if their web divisions are profitable or not — that doesn’t have as much to do with the inability to run a hybrid model web property as it does that they have a mostly print based company still with costs a pure Internet business would not have.
We’re still very early in the days of media moving to the Internet. Based on some 2009 estimates Internet advertising amounted to ~$21B whereas newspapers still brought in ~$31B, television at ~$36B, and magazines at ~$16B — these numbers are just advertising revenue, purchase/subscription numbers not included. As revenue continues to shift to Internet publishing formats you’ll see all models emerge and as a publisher you’ll need to figure out which category you want to be in. If you don’t view your content as “local circular” quality then perhaps you should start looking at a new business model today.
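The scale of that shift is easy to quantify from the 2009 estimates above — a quick back-of-the-envelope calculation of the Internet's share of these four advertising channels:

```python
# 2009 advertising revenue estimates cited above, in billions of USD
# (advertising only; purchase/subscription revenue not included).
revenue = {"internet": 21, "newspapers": 31, "television": 36, "magazines": 16}

total = sum(revenue.values())                       # 104 ($B)
internet_share = revenue["internet"] / total * 100  # ~20%
print(f"Internet share of these four channels: {internet_share:.1f}%")
```

So even by these rough numbers the Internet already carried about a fifth of the ad spend across the four channels in 2009, with plenty of room left to grow.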
Many people seem to think “cloud” is just off-premise “virtualization”. Cloud comes in a few flavors and I’ll argue that you can have “private cloud” either hosted off-premise in a provider’s facility or in your own. The fundamental difference between cloud and virtualization is the goal of cloud is to automate provisioning (this applies to IaaS, PaaS, and SaaS) and the goal of virtualization is resource utilization optimization. You can (and many providers do) use virtualization as the basis for building a cloud but it is not required.
If we take a look at the Reductive Labs presentation from OpsCamp, slide 3 illustrates the primary benefit of cloud. Cloud helps companies even when their minimum unit of work is larger than a single host machine — virtualization just adds overhead in that case. The difference between “cloud” and “grid computing” or HPC is that grid/HPC process jobs in a batch manner rather than serve interactive applications. You can build a compute grid on top of a cloud but not vice versa.
Other folks are saying “private clouds can’t exist because you can’t have rapid elasticity and pay for what you use”. For a small company you may not be able to have a private cloud but for a large enterprise with many business units you certainly can. An IT infrastructure BU can provide other organizations in the company all of the requirements of a cloud.
Depending on the current utilization across an enterprise’s infrastructure, it may be able to defer spending for a number of years by moving to a fully cloud-enabled business. Right now many departments cling to servers they don’t need because they’re afraid that if they release them they’ll never get them back. With cloud removing that fear, resource hoarding ends and many enterprises will see a significant increase in available computing power.
Over the long term if the public computing clouds continue to grow, increase their transparency, and optimize their delivery models it will no longer make financial sense for enterprises to build their own infrastructure. Public cloud providers will need to prove over the next decade they can deliver on all three corners of the “impossible triangle”.
I’ll come right out and give my theory up front and then explain why… We need to stop teaching young children “facts” and we need to start teaching them how to learn. The only reason we teach young children “facts” is to shape their world view into what we want it to be while their minds are easily influenced because they haven’t learned logic, critical/deductive reasoning, and other associated fundamentals required to think independently.
Elementary education in the US is typically half “learning to learn” and half “learning facts”. You can search through many online class schedules across the country and see this. The “learning to learn” subjects — reading, music, math, art — make up part of the day. The rest of the day — spelling, science, social studies, history — is filled with teaching children “facts” and shaping their world view. Even a fundamental like reading is focused on content over the skills that increase speed and comprehension. Almost none of the public schools offer a foreign language even though a number of studies show significant benefits.
Is standardized testing to blame? Perhaps, as it is hard to test for the ability to learn, especially in a multiple choice format. Tests make sure you know “facts”. Because of these tests and constant measuring we’re afraid to spend time building a foundation so children can learn faster as they age. Linear progression is “safe”, and teaching the ability to answer a question (often through memorization) is favored over teaching the understanding of how to figure out the answer. Laws like the No Child Left Behind Act focus dollars on ensuring everyone can reach “average” rather than allowing most of the class to move at an accelerated pace (if the effort is spent on getting students below -1 SD up to average, 80%+ of the students in the class are effectively held back).
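The “80%+” figure in that parenthetical falls out of the normal distribution: if achievement is roughly bell-curved, the share of students at or above one standard deviation below the mean is about 84%, and those are the students effectively held back when instruction targets the bottom of the curve:

```python
from statistics import NormalDist

# Share of a normally distributed class sitting at or above -1 SD.
fraction_below = NormalDist().cdf(-1)        # ~0.159
fraction_at_or_above = 1 - fraction_below    # ~0.841
print(f"Students at or above -1 SD: {fraction_at_or_above:.1%}")
```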
My call to action — get social studies and fact-memorization science out of elementary schools. Use social studies to stimulate debate, allowing children to discuss issues and form their own opinions. Use science as a chance to teach critical thinking and problem solving skills. Resist the temptation to tell children what to believe — make them understand how to formulate an opinion. This applies to math as well as social studies. We typically wait until the second year of high school to teach proofs in geometry. One example: instead of having children memorize their times tables with no understanding of why, have them figure out multiplication as a better way to do some addition problems.
At some point it is important to learn facts — history, geography, etc. — but by waiting to teach these facts they can be learned in a fraction of the time. You could spend an hour a day teaching a middle school child all of the facts they’d learn in 5 years of elementary school. Stop wasting an hour a day on spelling; teach Latin and children will learn to spell naturally.
Imagine the following schedule for your child instead of what they have today…
7:50-8:25: Arrival – Pledge, announcements, and a critical thinking logic problem we’ll discuss as a group.
8:25-9:00: Latin – Replace spelling memorization with fundamentals that enable good spelling and a foreign language
9:00-9:40: Math – Teach problem solving, proofs, word problems
9:40-10:50: Reading – Teach concepts to increase speed and comprehension
12:10-12:40: Music (M, W, F) / Art (Tu, Th) – Encourage creativity and original thinking
12:40-1:20: Science (M, W, F) / Social Studies (Tu, Th) – Lessons focused on problem solving and critical thinking
1:25-1:55: P.E. (M, W, F) / Library (Tu, Th) – Focus on teamwork and leadership skills
Our education system isn’t an abysmal train wreck like some people scream. It does a good job, but it could be better. Like compound interest building wealth over time, a 10% annual increase in the amount of learning a child does more than doubles the amount they learn by the time they graduate from high school. And by continuing to teach kids how to learn you’ll lower dropout rates — at some point, when a child falls too far behind in memorizing facts, they give up or start to cheat to fake their way until they reach 16 (or 18, whatever the minimum age in your state) and can stop going.
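The compound-interest analogy is easy to check. Assuming 13 school years (K through 12), a 10% annual compounding increase in the yearly learning rate crosses 2× around year 8 and roughly triples it by graduation:

```python
# Rough sketch of the compound-interest analogy: each year's learning rate
# is 10% higher than the previous year's, compounded over K through 12.
rate = 0.10
years = 13
multiplier = (1 + rate) ** years
print(f"Learning rate vs. baseline after {years} years: {multiplier:.2f}x")
```

Like any compounding argument, the payoff is back-loaded — the early years look almost identical to the baseline, which is exactly why it's tempting (and a mistake) to skip the investment.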
I’m going to break this post up into two sections, the first will discuss public clouds and their features focused on advanced networking as an example. The second portion will look at the future of cloud computing hardware — both networking and computing.
Public Clouds and Feature Selection
A discussion started on Twitter today after Werner Vogels (@Werner) tweeted about the future of networking through a blog post by James Hamilton entitled, “Networking: The Last Bastion of Mainframe Computing”. Christopher Hoff hasn’t been thrilled (understatement of 2009) with the networking features provided by cloud computing platforms both public and private. Unless I misunderstood his tweet he’d love to hear public cloud providers commit to a flexible API driven networking layer using technology such as OpenFlow.
I tossed back a question asking, “Are customers willing to pay for complex network customization in a cloud? If so, what percentage of them? Thoughts?” and he replied, “In terms of paying for parity in what I can do in even a basic enterprise today? No thanks. That’s on you as a provider in long term”. I threw this question out because herein lies the problem… Public clouds will only end up with the features that a broad market will pay for or a small market will pay a very significant premium for. The reason behind this is that when a cloud adds a core feature, it adds it everywhere. This leads providers to only invest in new features that enough of their customers are interested in to offset the cost of deployment and still yield a satisfactory return on capital.
Today at Rackspace, customers that want advanced networking configurations are directed to our Private Cloud platform (I say our because I’m employed by Rackspace — the opinions expressed here however are mine alone). They can then create security zones, use IPS/IDS, and enable enhanced DDoS defense services, all behind dedicated firewalls and load balancers. The private cloud environment can have bridged network segments that connect to a public Rackspace Cloud Servers(tm) configuration for workloads that do not require advanced networking. The current addressable market interested in both public cloud as a primary platform and advanced networking is small. The early adopter group of start-ups and SMBs typically doesn’t need, or isn’t willing to pay for, advanced networking, and the enterprises that are willing generally aren’t first movers on new technology.
As the public cloud market matures the addressable market will grow and you’ll start to see public cloud providers adding advanced networking capabilities though the cloud definition of “advanced” won’t ever be truly “cutting edge” on a mass market cloud. I expect we’ll see niche clouds emerge that will cater to specific application use cases that will have advanced features for their target customer. Early examples of this are Force.com or the OpSource Cloud.
The Future of Cloud Computing Hardware
I’m now going to loop back to James’s post that kicked this whole thing off, where he compared the current network device situation to mainframes and other vertically scaled, centralized systems. He asserted that we’ll see a commoditization of the networking layer similar to what we’ve seen in the storage layer through technologies like RAID, and in servers with x86. The reason RAID and x86 have been successful is that they are multi-purpose, with the capability to serve a broad range of applications well when properly configured.
Networking gear is very different because the workloads are all uniform, and when you have a uniform workload an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array) that is tailored to a specific type of workload will deliver better performance per dollar. The second core difference between the server/storage markets and networking is that once you step into “carrier/cloud class” networking equipment only a few hundred potential customers exist — markets with fewer, stronger customers tend to be more consolidated. Networking gear has also been “cloud like” for over a decade now. Let’s look at the NIST requirements for a cloud:
On-demand self-service – This requirement describes a cloud-to-user relationship; I’ll translate it to a network-to-network-engineer relationship. There, all carrier-class networking gear supports SNMP, along with other programmable configuration methods through management systems with APIs such as the Cisco Configuration Engine [PDF].
Rapid elasticity – This dates back to frame relay, where the concept of a CIR (Committed Information Rate) was introduced. The space has continually evolved, with QoS introduced on ATM up through the advanced dynamic algorithmic traffic routing over today’s IP/MPLS networks.
Resource pooling – Doing this for computing is new outside of the HPC market — telecommunication networks have been multi-tenant since the third phone was hooked up over 100 years ago.
Measured service – Networking has been doing this for years as well, billing down to the minute or byte of data instead of the hour or GB (the smallest units of measure any public cloud compute or storage platform bills in).
Broad network access – Service provider IP networks are the ultimate in heterogeneous access through standards based communication. They support connectivity over a number of layer 1 physical mediums using quite a few layer 2 communication protocols.
Cloud computing may actually end up bringing the server market closer to the current networking market than vice versa. An IBM Z-series is capable of very efficiently running Linux instances. It also supports I/O virtualization for both networking and storage with granular controls — features we still don’t have at the same quality level from x86 virtualization solutions. The Oracle Exadata V2 is another example: it supports 1 million I/Os per second for non-sequential workloads on databases up to 140TB in size. How many commodity x86 servers does it take to match either of those configurations, and how do they compare in capex and TCO (Total Cost of Ownership) to the IBM or Oracle specialized platforms? We see even specialized x86 platforms being developed and deployed by a number of players. Some examples are the Cisco UCS, SGI Ice Cube, and the Sun Modular Datacenter. These platforms are all designed to optimize spend for virtualization/cloud computing workloads, and while they may be made up of x86 sub-components they are designed to function as a complete “mainframe” functional unit.
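As a back-of-the-envelope sketch of the “how many commodity servers” question — the per-server numbers below are illustrative assumptions about 2010-era spinning-disk x86 boxes, not measured figures, so treat the result as an order-of-magnitude estimate only:

```python
import math

# Quoted Exadata V2 figure for non-sequential workloads (from above).
target_iops = 1_000_000

# Assumed commodity server: 12 spinning disks at ~180 random IOPS each
# (rough figure for a 15K RPM SAS drive) -- illustrative assumptions.
disks_per_server = 12
iops_per_disk = 180

iops_per_server = disks_per_server * iops_per_disk   # 2,160
servers_needed = math.ceil(target_iops / iops_per_server)
print(f"~{servers_needed} commodity servers to match 1M random IOPS")
```

Hundreds of boxes just to match the I/O figure — before counting the networking, rack space, and power to tie them together, which is exactly the capex/TCO comparison the paragraph is asking about.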
We’re still very early in the technology transition to a full utility style computing grid. As the transition progresses we’ll see more use cases served by a broader range of features. For the small verticals with complex configuration needs and a low willingness to pay a premium we’ll see niche providers.
Networking hardware has been cloud like for more than a decade, and a few major players dominate the market because of the small number of strong buyers. Technologies such as OpenFlow, in combination with Moore’s law, have the potential to disrupt the market, but this isn’t a guarantee. The current practice of building clouds from a massive number of commodity x86 systems is also not guaranteed to be the future — specialized computing platforms have the potential to deliver better unit economics, and in a commodity business it will come down to the financials in the end.