Amazon shed some light on what goes on with its networking strategy inside its many data centers on Wednesday at AWS re:Invent 2014 in Las Vegas. James Hamilton, vice president and distinguished engineer for Amazon Web Services, laid out the networking details during a conference session that also touched on data centers and databases.
[pullquote person="James Hamilton" attribution="James Hamilton, Amazon Web Services"]"We've got 25 terabits per second of inter-availability zone traffic."[/pullquote]
Like telcos and other infrastructure-heavy businesses, [company]Amazon[/company] concluded that the cost of commercial networking gear is far too high, so it took matters into its own hands: it buys routing equipment from original design manufacturers (Hamilton didn't name any) and runs it with custom network-protocol software that is supposedly more efficient than what ships on commodity gear, Hamilton said.
And to make sure that networking gear can handle a lot of traffic, Amazon does a lot of testing, Hamilton explained. He claimed that Amazon uses about 8,000 servers in its network-testing environment, and because the testing happens in the cloud, the company can decommission those servers when testing is over, so the total cost comes to hundreds of thousands of dollars instead of millions.
Amazon connects its eleven worldwide regions, each containing several data centers, with private fiber interconnects, which helps the company sidestep networking problems on the public internet, whether from peering disputes or some sort of malfunction.
“We’ve got 25 terabits per second of inter-availability zone traffic,” Hamilton said.
Each availability zone in a region contains two or more data centers that are "completely independent buildings," Hamilton said, and each data center holds around 50,000 to 80,000 servers. Amazon's data centers aren't as big as some other organizations', he said, but past a certain number of racks, the efficiency gains from making a single facility larger diminish. Also, if something catastrophic occurs, Amazon doesn't want a ton of machines to go down with it.
Amazon also invested in single-root I/O virtualization (SR-IOV), a technique Hamilton described as a way to virtualize network cards so that "each guest gets their own network card." While the technique is difficult to implement and means Amazon has to pay extra attention to distributed denial-of-service (DDoS) protection and network capacity limits, it has paid off for the company in lower latency.
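For readers unfamiliar with the technique, here is a minimal sketch of how SR-IOV virtual functions are typically enabled on a Linux host with an SR-IOV-capable NIC, using the kernel's standard sysfs interface. This illustrates the general mechanism Hamilton is describing, not Amazon's own implementation; the interface name `eth0` and the VF count are placeholders.

```shell
# Check how many virtual functions (VFs) the physical NIC supports
cat /sys/class/net/eth0/device/sriov_totalvfs

# Carve the physical function into 8 VFs; each VF can then be handed
# to a guest, which sees it as its own dedicated network card
echo 8 > /sys/class/net/eth0/device/sriov_numvfs

# Each VF shows up as a separate PCI device that a hypervisor can
# assign directly to a VM, bypassing the host's software network stack
lspci | grep -i "Virtual Function"
```

Because the guest talks to the VF hardware directly rather than through a host-side virtual switch, packets skip a layer of software processing, which is where the latency win comes from.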
"The costs are gone," said Hamilton, regarding how virtualization and custom gear lowered Amazon's networking, storage and server costs. "It's not magic, they just aren't there anymore."
All images courtesy of Amazon