Looking Back To The Future of Data Centers

I was talking to some colleagues earlier this month about Intel’s (INTC) plan to have an 80-core processor ready for the market within five years.
I’ve written about commodity computing in this space before, but this latest Intel announcement made me realize that we’re on the verge of a fundamental architectural change in the enterprise data center, one that could soon make our set-ups look eerily reminiscent of those of the 1980s.
Before we go there, let’s recall that for at least a decade the enterprise data center has been a bastion of best-of-breed, function-specific appliances and servers. In a typical environment, the Internet connects to a firewall appliance, which connects to an Intrusion Detection System (IDS) appliance, which connects to a load balancer appliance, which connects to arrays of servers and blades that in turn connect to storage area networks and disk arrays. (Many other appliances could sit in this path as well, including SSL processors, proxy servers, virus detection systems and so forth.)
Many enterprises pick a best-of-breed vendor for each of these appliances and build a system that best serves the needs of their organization. While there are some large vendors that offer multiple components to these solutions (Cisco (CSCO), Nortel (NT), IBM (IBM), etc.), it is rare to find an enterprise with a single vendor providing all of their function-specific appliances.
Each of the connections between these appliances most likely runs through a router or switch over Ethernet at one gigabit per second or faster. In the near future, these connections will run at ten gigabits per second, with one hundred gigabits per second on the horizon, more than likely before 2012.
So, putting the pieces together, it is entirely conceivable that by 2012 we could have Intel-powered servers with 80-core processors interconnected by one-hundred-gigabits-per-second Ethernet. To fully utilize the processing power in these servers, they will probably run virtualization software that isolates processor cores into virtual run-time environments. If each of those virtual environments were dedicated to running software with the same features as the function-specific appliances found in today’s enterprise data centers (firewalls, IDS, load balancers, storage arrays, etc.), you could have all data center functionality in a small number of servers.
From a practical standpoint, one way to deploy these servers would be to have a set of servers dedicated to networking functions, another set of servers dedicated to application processes, and a third set dedicated to storage functions.
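To make the idea concrete, here is a toy sketch of that consolidated layout: a handful of many-core servers, each dedicated to one tier, with cores carved up among virtualized appliance functions. All of the tier names, function names and core counts below are my own illustrative assumptions, not a real product configuration.

```python
# Toy model: each tier is one 80-core server whose cores are
# partitioned among virtual appliance functions.
CORES_PER_SERVER = 80

# Hypothetical carve-up of virtual functions per tier (cores each).
tiers = {
    "network": {"firewall": 16, "ids": 16, "load_balancer": 24, "ssl_offload": 24},
    "application": {"app_servers": 80},
    "storage": {"storage_array": 64, "virus_scan": 16},
}

def carve(functions, total_cores):
    """Assign each virtual function a contiguous, inclusive range of core IDs."""
    assert sum(functions.values()) <= total_cores, "tier over-subscribed"
    layout, cursor = {}, 0
    for name, count in functions.items():
        layout[name] = (cursor, cursor + count - 1)
        cursor += count
    return layout

for tier, functions in tiers.items():
    print(f"{tier} server:")
    for name, (lo, hi) in carve(functions, CORES_PER_SERVER).items():
        print(f"  {name}: cores {lo}-{hi}")
```

The point of the sketch is simply that the entire appliance chain from the previous paragraphs collapses into a core-allocation table on three servers, which is exactly why the resulting picture starts to resemble an older, more centralized architecture.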
What strikes me is that such a set-up looks remarkably similar to the 1980s data center architecture that had a front-end processor connected to a mainframe connected to a large storage array. IBM dominated that market and their enterprise data center architecture was the industry standard for decades. As one of my favorite sayings goes, “They call it a revolution because it goes in a circle.”
So, what was old may be new again as Intel 80-core processors combined with virtualization and one-hundred-gigabits-per-second Ethernet radically change the near-future enterprise data center architecture. Today’s vendors of function-specific data center appliances and servers should not only be paying attention, but also getting prepared.
Allan Leinwand is a venture partner with Panorama Capital and founder of Vyatta. He was also the CTO of Digital Island.