Data center interconnection: An inconvenient truth

When it comes to data centers, it seems the servers and switches get all the glory; that is, they attract the lion's share of the investment that drives technology down the cost curve. But as solid-state drives and multicore processors have become de rigueur and Moore's Law the law of the land, interconnection itself has demanded an increasing share of total expenses.
By interconnection, I do not necessarily mean just the network, but also all client ports and cables that connect them. Given that servers, high-performance computers and switches do the majority of the work inside data centers, one would hope that the plumbing among them would be a negligible part of the overall cost.
If only this were the case.
The inconvenient truth is that interconnection costs bottomed out with Gigabit copper and have been rising since. The development of the RJ45 connector with integrated magnetics, Cat5 cable and the Gigabit Ethernet protocol was a pinnacle achievement, and technologists (yours truly included) have struggled to come up with something better.
Speeds (bit rates) have certainly increased ever since—from 10 Gigabits per second (Gbit/s), to 40/100 Gbit/s, to the upcoming 400 Gbit/s. Yet, it takes only a cursory look at performance metrics to quickly realize that all efforts since have been abject failures when gauged against data center applications.
Whether looking at metrics such as cost per bit/s, cost per bit/s per square millimeter of faceplate area or watts per bit/s, improving upon the ubiquitous “GigE” has proven more difficult than anticipated. As typical when faced with an impending asymptote in technology, several paths forward have been explored, developed and, indeed, taken to market.
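The three metrics above are all simple ratios, and a quick sketch shows how they are computed. The port prices, power figures and faceplate area below are hypothetical placeholders chosen only to illustrate the arithmetic, not real market data:

```python
# Illustrative calculation of the three interconnect metrics named above.
# ALL NUMBERS ARE HYPOTHETICAL, for illustration only -- not real pricing data.

def per_bit_metrics(name, cost_usd, power_w, faceplate_mm2, rate_gbps):
    """Normalize one port type to cost per Gbit/s, cost per Gbit/s per mm^2
    of faceplate area, and watts per Gbit/s."""
    cost_per_gbps = cost_usd / rate_gbps
    return {
        "port": name,
        "USD per Gbit/s": round(cost_per_gbps, 3),
        "USD per Gbit/s per mm^2": round(cost_per_gbps / faceplate_mm2, 5),
        "W per Gbit/s": round(power_w / rate_gbps, 3),
    }

# Hypothetical ports: (cost USD, power W, faceplate mm^2, rate Gbit/s).
# An RJ45 footprint of roughly 16 mm x 13 mm is assumed for both.
ports = [
    ("1000BASE-T (GigE)",  5.0, 0.5, 16 * 13,  1),
    ("10GBASE-T",         40.0, 3.0, 16 * 13, 10),
]

for p in ports:
    print(per_bit_metrics(*p))
```

With these made-up numbers, 10GBASE-T looks only marginally better per bit/s on cost and worse on watts per bit/s, which is the shape of the problem the rest of this post describes.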
One approach has been to push copper to warp speeds, in this case 10 Gbit/s. The fruit of this effort is 10GBASE-T, which does push “10 Gig” down a Cat-something-or-other cable. This is seemingly a success … until you realize that the computing power required to do so would have made a supercomputer blush only a few years ago.
The resulting power dissipation of the supporting electronics is such that port densities are unable to approach those of plain old Gigabit Ethernet. You can't cram as many 10GigE ports into a box, which is bad for scale-out clients. Also, while the power dissipation of 10GBASE-T chips is rapidly plummeting, so is that of their predecessor. It's a moving target, after all, and a newer protocol's chief competition is the one it seeks to replace.
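The density penalty falls out of a simple budget calculation: a line card has a fixed thermal budget and a fixed number of faceplate positions, and whichever limit binds first caps the port count. The power and slot figures below are assumptions for illustration only:

```python
# Toy line-card budget showing why per-port power caps port density.
# ALL FIGURES ARE HYPOTHETICAL, for illustration only.

def max_ports(slot_power_budget_w, port_power_w, faceplate_slots):
    """Ports per line card: limited by the thermal budget or by
    physical faceplate space, whichever is smaller."""
    thermal_limit = int(slot_power_budget_w // port_power_w)
    return min(thermal_limit, faceplate_slots)

BUDGET_W = 48   # assumed per-line-card power budget
SLOTS = 48      # assumed RJ45 positions available on the faceplate

gige = max_ports(BUDGET_W, 0.5, SLOTS)   # assumed 0.5 W per GigE port
teng = max_ports(BUDGET_W, 4.0, SLOTS)   # assumed 4 W per early 10GBASE-T port

print(f"GigE:      {gige} ports")   # faceplate-limited
print(f"10GBASE-T: {teng} ports")   # thermally limited
```

Under these assumed numbers the GigE card fills every faceplate position, while the 10GBASE-T card hits its thermal ceiling at a quarter of the density, which is exactly the "can't cram as many ports into a box" problem.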
Another approach has been a move to fiber. Starting at Gigabit speeds, Ethernet finally made the leap to fiber. Since then, there has been a steady succession of optical protocols and modules spanning 1 to 100 Gbit/s. Unfortunately, optical link costs appear to have bottomed out well above those of ubiquitous copper Gigabit Ethernet.
One attempt to break through this barrier is the QSFP, which stuffs four of everything into a single module in the hope that simple math will cut costs by a factor of four. A second attempt is the "active cable," which tries to move the cost of the link from the end equipment into the cable. Neither attempt has been as successful as hoped. It turns out the 4X reduction of the QSFP is not enough, and moving costs from the host to the cable does not actually change the total link cost; it just moves it around.
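The arithmetic behind both arguments fits in a few lines. A link costs two transceivers plus a cable; the QSFP amortizes one (pricier) module over four lanes, and the active cable just relocates the transceiver cost into the cable. All prices below are hypothetical, for illustration only:

```python
# Toy link-cost arithmetic behind the QSFP and active-cable arguments.
# ALL PRICES ARE HYPOTHETICAL, for illustration only.

def link_cost(transceiver_usd, cable_usd, lanes=1):
    """Total cost of one link (two ends plus cable), amortized per lane."""
    total = 2 * transceiver_usd + cable_usd
    return total / lanes

# Four discrete SFP+ links vs one QSFP link carrying four lanes.
sfp_per_lane  = link_cost(transceiver_usd=100, cable_usd=10)            # 210.0
qsfp_per_lane = link_cost(transceiver_usd=250, cable_usd=20, lanes=4)   # 130.0

# An "active cable": the transceiver cost is folded into the cable,
# so the per-link total is unchanged -- it has merely moved.
active_per_lane = link_cost(transceiver_usd=0, cable_usd=210)           # 210.0

print(sfp_per_lane, qsfp_per_lane, active_per_lane)
```

With these made-up prices the QSFP lands at roughly 0.6x the per-lane cost rather than the 0.25x that "simple math" promises, because the quad module is not four times cheaper than four singles, and the active cable leaves the total exactly where it started.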
Just as the Segway attempted to fill the void in the performance envelope between walking and riding a bicycle or scooter, there remains today a void to be bridged between lower-speed copper interconnect and higher-speed optical interconnect. And, unfortunately, that void is smack dab in the sweet spot of what data centers need. Yes, copper and optical client ports have narrowed the void, but it still exists. Recently, technology visionaries have revisited the issue, and, in my next blog, I will explore some of these new approaches.
Jim Theodoras is director of technical marketing at ADVA Optical Networking, working on Optical+Ethernet transport products.