Open Compute and the future of infrastructure
If you haven’t heard about it yet, Facebook isn’t just helping us all stay in contact with friends and share our life experiences. They’re also doing something that may be even more influential: they started the Open Compute Project in 2011, and in October, at the project’s third event, announced the Open Compute Foundation, which my employer (Rackspace Hosting) is also part of. The opinions in this article are mine and mine alone.
Okay, okay... yes, I did say more influential, and I’m not speaking in hyperbole. Global IT spend is around $2,700,000,000,000 (yes, trillions with a ‘t’) annually, and much of that is due to the complexity involved in making hardware and software work together, along with the direct and obvious market fragmentation from the top to the bottom of the supply chain. How many different server models and configurations are available from your vendor of choice? How many vendors are in the market? Where is value really being added, and where are manufacturers deliberately engineering in lock-in and increased switching costs?
Today Wired ran an article about Open Compute expanding to include storage gear and virtual I/O (full transparency: this project is led by Rackspace). This is very exciting, because the interaction between servers and storage has driven much of the complexity, and with all of these pieces now being worked on under a single umbrella, we can finally see the light at the end of the tunnel: a simplified system.
All of this can lead us to a future of well-understood building blocks, a sign of maturity for “IT systems engineering”. When a new bridge is built, we no longer need years of lab testing, integration testing, and all the other work that comes with inventing. A structural engineer takes well-known building blocks, combines them with data about the requirements for that particular bridge (load, soil, climate, etc.), and builds it. We’re not there yet, but a decade from now an IT engineer will be able to do the same thing, which will produce much simpler and more reliable systems. How many of you want to drive over a bridge to work each day that has 3 9s of reliability?

Virtual I/O allows us to decouple CPUs from RAM and other resources. This could let you upgrade CPUs while leaving the memory in place: no need to throw out perfectly good DRAM just because it won’t fit the motherboard the new CPU requires.
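For a rough sense of what that “3 9s” figure means in practice, here is my own back-of-the-envelope arithmetic (not from any Open Compute specification): each added nine cuts the permitted yearly downtime by a factor of ten.

```python
# Back-of-the-envelope: yearly downtime allowed at N nines of availability.
HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

def downtime_hours(nines: int) -> float:
    """Hours of downtime per year at e.g. 3 nines = 99.9% availability."""
    availability = 1 - 10 ** -nines
    return HOURS_PER_YEAR * (1 - availability)

for n in (2, 3, 4, 5):
    print(f"{n} nines: {downtime_hours(n):8.3f} hours/year")
```

At 3 nines a bridge (or a server) could be “down” for nearly nine hours a year; civil engineering works to far tighter tolerances than that, and mature IT systems engineering will have to as well.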
By the middle of this century we’ll look back at the systems we build in IT today the way a structural engineer looks at “Galloping Gertie” (image top left): lessons were learned, and its reliability was far below what we now take for granted. We’ll also be able to do it at much larger scale.