Combining long-haul fiber optic networks with IP transit routing networks could help telecom operators cut costs by 40 percent, or let them boost their network capacity by the same amount, according to a study by Bell Labs. This matters to anyone who loves the internet, because while it’s cheap to zip bits around the globe, it’s not free, and the capacity to do so isn’t actually infinite.
The pipes that carry our information do fill up, and any technology that helps cram more information into those pipes or enables the data to travel on the cheap is something to cheer over. Hence my excitement over this Bell Labs research. Today fiber optic networks — the long haul networks that carry communications traffic via light waves under the oceans, across continents and even via shorter distances in densely populated areas that need a lot of capacity — are pretty static.
When a company buys capacity, a network engineer provisions it by slotting a physical card into a box and turning on the electronics to give the customer a dedicated connection between two places — for example, 100 Gbps of lit fiber that stretches from New York to London. If that fiber route is disturbed, like when a shark attacks an undersea cable (sharks love undersea cables!), rerouting that capacity might require people slotting in new cards, or at a minimum making changes in a computer elsewhere.
IP networks are different. Thanks to smarter routers and automation, when IP networks encounter congestion they route around it and find alternate routes. That’s the point. The Bell Labs paper proposes we treat fiber networks more like IP networks by giving them more automated routing functions. It does this with control plane management software at the fiber layer allowing the fiber network to route around problems there just like one does at the IP layer.
Then you can manage the optical (fiber) network and the IP network together, so when a shark takes out an oceanic cable or an errant backhoe cuts through a cable in downtown NYC, the network automatically finds a new way to deliver your data, rather than sending a bunch of technicians scrambling to reconnect your dead office LAN.
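To make the idea concrete, here’s a minimal sketch of what that automatic rerouting looks like in principle: treat the combined optical and IP links as one weighted graph, and when a link dies, recompute a path instead of waiting for a human to re-provision capacity. The topology, link costs, and function names below are illustrative assumptions for this post, not anything from the Bell Labs study.

```python
from heapq import heappush, heappop

def shortest_path(links, src, dst, failed=frozenset()):
    """Dijkstra over undirected (a, b, cost) links, skipping any in `failed`.

    Returns (total_cost, path) or None if no surviving route exists.
    """
    graph = {}
    for a, b, cost in links:
        if frozenset((a, b)) in failed:
            continue  # this link is down (shark, backhoe...) — prune it
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heappush(heap, (cost + c, nxt, path + [nxt]))
    return None

# Hypothetical transatlantic topology, costs roughly latency in ms.
links = [("NYC", "London", 70), ("NYC", "Paris", 75), ("Paris", "London", 10)]

print(shortest_path(links, "NYC", "London"))
# → (70, ['NYC', 'London'])            healthy network: direct route
print(shortest_path(links, "NYC", "London",
                    failed={frozenset(("NYC", "London"))}))
# → (85, ['NYC', 'Paris', 'London'])   cable cut: automatic detour via Paris
```

Real multi-layer control planes (GMPLS, SDN controllers) juggle wavelengths, capacity constraints and restoration priorities rather than a single cost metric, but the core move is the same: the failure becomes a graph update and the reroute becomes a computation, not a truck roll.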
This means the network is more agile: the operator can run more traffic through it and keep less capacity in reserve for those “just in case” moments, which is where the cost savings or capacity gains come in. They’ve basically bought themselves a bit more elasticity on their networks and can use it to carry more traffic if they choose.
Arnold Jansen, one of the co-authors of the study, explained that the results could be replicated in any carrier network using equipment from any equipment maker — although since Bell Labs is owned by Alcatel-Lucent, he believes Alcatel-Lucent gear will have an edge in getting the software upgrades that make this possible.
Already, operators including Telefonica and Deutsche Telekom are conducting their own studies on this concept. Jansen says a large European national operator is trying it out on its network, but progress will be slow. Operators are, in general, a cautious group, which makes sense if your job has been to deliver five-nines reliability (99.999 percent uptime). However, as we move more information to the cloud and more of our lives online, the companies that provide that infrastructure will have no choice but to keep up with any and every technology that makes transferring bits more efficient.