For the enterprise and service provider crowd, that means Riverbed is looking to bring the agility of the cloud to existing infrastructure by adding a new abstraction layer that automates and simplifies thorny tasks such as provisioning, scaling, and overall services management.
SDN, NFV, and open source: the operator’s view, by Mark Leary
Software-defined networking (SDN) and network functions virtualization (NFV) represent two of the more dramatic oncoming technology shifts in networking. Both will significantly alter network designs, deployments, operations, and future networking and computing systems. They also will determine supplier and operator success (or failure) over the next five to 10 years.
As has always been the case with successful networking technologies, industry standards and open systems will play a strong role in the timely widespread adoption and ultimate success of both SDN and NFV solutions. Open source is poised to play an even more critical role in delivering on the promise of standardized and open networking.
This great promise and potential impact raise two questions. First, “Where are SDN and NFV today?” And second, “What influence will open systems and open source have on the future of SDN and NFV?”
To find answers to these questions, in December 2013 Gigaom Research ran an extensive survey of 600 operators (300 enterprises and 300 service providers) in North America. Based on findings from that survey, this research report provides key insights into the current activity and future direction of SDN and NFV advancements as well as the development and deployment of open systems and open source within SDN and NFV environments.
To read the full report, click here.
This research report will explain how CSPs establish a framework for their analytics as well as review the business drivers for telcos and the key benefits that big data analytics provide. It will also address the impact of the business drivers and the advantages of streaming analytics, combined with the ability to harness big data to meet several CSP competitive requirements. It will conclude by summarizing this comprehensive big data analytics framework for CSPs.
To read the full report, click here.
Much of the accelerated adoption of SDN and NFV solutions, and many of their new security capabilities, can be attributed to the preponderance of open source.
Amazon shed some light onto what goes on with networking inside its many data centers on Wednesday at AWS re:Invent 2014 in Las Vegas. James Hamilton, the vice president and distinguished engineer of Amazon Web Services, laid out the networking details during his conference session that also touched on data centers and databases.
The San Francisco-based startup took in a seed round of $1.5 million. Its open-source software can virtualize the part of the network that handles intelligent routing with IP addresses.
Vendors often have their own programs for rolling out new products, driven by product development cycles, internal market research, and larger corporate strategy. But sometimes, especially in areas where the path for new technology is not well defined, efforts led by end-user communities can provide crucial input that shapes vendor agendas.
The Open Networking Users Group (ONUG) meeting later this month provides one such opportunity. After a productive meeting in May dedicated to improving networks at the enterprise level, the ONUG user-led community and its board of directors (which includes senior IT executives from large banks, transportation, retail, pharmaceutical and insurance companies) wants to move ahead quickly.
ONUG’s goal is to press for common approaches to facilitate the use of new open networking technologies. At the May meeting in New York, which I attended, participants selected software-defined wide area networks (SD-WAN), virtual networks/overlays, and network services virtualization as the key areas where work efforts were needed. The ONUG board prepared a report on these three areas to ensure that the agenda’s key points were given a larger audience. My sense was that many participants agreed with Prof. Doug Comer, an authority on TCP/IP protocols, that SDN and other new network architectures and services might not be ready for prime time.
What was fascinating about the May meeting was the ability to mix user contributions from larger firms and financial players. Most of the IT perspectives that were shared reflected the issues that everyone — including a number of vendors — want to address. Among them are the perception of higher costs of more traditional, less open, systems, as well as discussion of the open resources alternatives that are already available.
The upcoming October 28-29 meeting will present results from three ONUG working groups led by IT executives and also including vendors, focused on the three key issues defined at the May meeting. There will also be a lively debate between Prof. Comer and Paul Mockapetris of Nominum, the creator of DNS. Mockapetris is far more optimistic about employing DNS with security to strengthen networks of the future. Comer and Mockapetris will debate the winning strategy for embracing SDN.
I think there is a good chance that ONUG members will join with a selected group of vendors to begin to resolve the sticky issues needed to make SD-WAN, virtual networks/overlays, and network services virtualization more robust. This could include a series of proof-of-concept projects or collaborations to refine how to deal with inadequacies or gaps in technologies that need to mature for SDN to work.
This effort could readily complement what has recently been pushed by the European Telecommunications Standards Institute and many service providers in their efforts to promote network function virtualization (NFV) as well as SDN largely from a service provider perspective. The ONUG efforts, in my opinion, could provide an interesting counterbalance to the ETSI efforts and ensure that adequate work is focused on security, interoperability, scalability, and other issues of primary importance to enterprises.
It is also likely that a few enterprises will play the role of “guinea pigs” and deploy the results of several RFQ/RFIs developed to solve problems defined in the ONUG reports. This would provide an opportunity for active engagement of enterprise IT experts and the vendor community. I believe this could make ONUG’s October meeting one of the more valuable infrastructure meetings of 2014.
This effort dovetails well with the coming publication of the Gigaom Research Sector Roadmap on SDN. Demand for and awareness of SDN are high, with recent surveys indicating that 87 percent of customers will have production deployments by next year. The market has progressed from “network controllers using the OpenFlow protocol” to a more comprehensive view that addresses specific use cases and customer needs. The Sector Roadmap will identify key issues that need to be addressed in the use of SDN in the enterprise and networking infrastructure, as well as describe which players are making important contributions to the SDN marketplace.
I’m looking forward to the ONUG meeting in a couple of weeks to see how industry thought leaders will address many of the outstanding issues with SDN. It will be a great opportunity for enterprise IT users to help plot the course of an important technology as well as provide significant input into vendor thinking.
The new tech, which uses a 64-bit ARM-based system-on-chip (SoC), can reportedly perform network functions virtualization (NFV) tasks such as virtualizing a network gateway and a serving gateway.
Companies like Cisco, Juniper Networks and Nokia Networks along with the Linux Foundation are hoping that the Open Platform for NFV Project will develop a standard for NFV, a network architecture concept that calls for all aspects of networking to be virtualized.
NFV is gaining momentum, as evidenced by the production implementation of a virtual policy manager at a Tier 1 North American wireless operator. On Oct. 29, 2013, Openet announced deployment of its virtual software running on standard platforms across multiple data centers at this operator. NFV is a critical part of this service provider’s plans to deliver new network-based applications and reduce operational costs, and it will be implemented in a phased approach to ease migration challenges and maintain service reliability.
What is NFV?
In the fall of 2012, a number of the largest communications service providers (CSPs) initiated an effort (in ETSI) to dramatically increase the use of virtualization and commercial off-the-shelf (COTS) technology in their telecommunications networks. In the fall of 2013, a larger group of 50+ CSPs and industry suppliers introduced a number of specifications to guide NFV adoption. Telecom infrastructure has long been a bastion of proprietary software running on purpose-built hardware. The proponents of NFV hope to leverage IT technologies, including virtualization, standard servers, and open software, to fundamentally change the way networks are built and operated. The key benefits that CSPs will derive from NFV implementation include faster time to market, enablement of new services, ability to rapidly scale resources up and down, and lower costs (both CAPEX and OPEX).
Challenges of virtualization
Software applications must be specifically designed or rewritten to run optimally in virtualized data center environments. In addition, telecom network systems must:
- Be highly reliable (99.999% uptime)
- Offer extremely high performance
- Support low latency
- Scale to support hundreds of millions of users
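To make the reliability requirement concrete: 99.999% ("five nines") availability permits only about five minutes of downtime per year. The back-of-the-envelope arithmetic can be sketched in a few lines of Python (illustrative only, not tied to any particular SLA definition):

```python
# Allowed downtime per year at a given availability level.
# Illustrative arithmetic only; real SLAs define availability more precisely.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability: float) -> float:
    """Maximum yearly downtime (in minutes) for a given availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label} ({availability}): "
          f"{allowed_downtime_minutes(availability):.1f} min/year")
```

Five nines works out to roughly 5.3 minutes of allowable downtime per year, which is why "in service" migrations (discussed below) matter so much to telecom operators.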
The performance testing and tuning of network applications in a virtual data center stack (e.g. OpenStack or VMware) is an important and challenging step in NFV deployments.
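The core of such performance testing is measuring transaction latency distributions, not just averages, since telecom workloads care about tail latency. The following is a minimal sketch of that kind of measurement loop; the `request()` function here is a hypothetical stand-in for a real call into a virtualized network function, not part of any actual NFV test suite:

```python
# Minimal latency-measurement sketch: time repeated calls to a system
# under test and report median and tail latencies. The request() function
# is a hypothetical stand-in for one transaction against a virtualized
# network function.
import statistics
import time

def request() -> None:
    """Stand-in for one transaction against the system under test."""
    time.sleep(0.001)  # simulate ~1 ms of processing

def measure(n: int = 200) -> dict:
    """Run n requests and return p50/p99/max latency in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        request()
        samples.append((time.perf_counter() - start) * 1000)  # ms
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(n * 0.99) - 1],
        "max_ms": samples[-1],
    }

if __name__ == "__main__":
    print(measure())
```

In a real NFV test, the same loop would drive traffic against the virtualized function deployed on the OpenStack or VMware stack, and the p99/max figures would reveal virtualization overhead that averages hide.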
NFV implementation at the Tier 1 wireless operator
Openet specifically built a new software version of its policy manager to support virtualization. One of the key challenges was to migrate customers to the new, virtual system “in service” – meaning no downtime or disruption to its wireless customers. The wireless operator gains the benefit of scalability (ability to add new capacity by adding VMs) and high performance (the implementation supports more than one million transactions per second). Over time, the wireless operator hopes the use of NFV and COTS will:
- Significantly improve its service agility
- Provide flexibility in system design by eliminating physical (server) constraints with regards to software deployment
- Allow for elastic scaling of capacity via cloud data center resources
- Reduce the operational (OPEX) costs of running its network
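The elastic-scaling benefit above comes down to a capacity calculation: if each VM handles a known transaction rate, the operator can size the pool to current load and shrink it when load drops. The per-VM figure below is an assumption for illustration; the report states only that the overall implementation exceeds one million transactions per second:

```python
import math

# Hypothetical per-VM capacity, assumed for illustration. The deployment
# described above is only stated to exceed 1M transactions/sec in total.
TPS_PER_VM = 50_000

def vms_needed(target_tps: int, headroom: float = 0.2) -> int:
    """VM instances needed to serve target_tps with spare headroom."""
    return math.ceil(target_tps * (1 + headroom) / TPS_PER_VM)

print(vms_needed(1_000_000))  # 1M TPS with 20% headroom -> 24 VMs
```

The same arithmetic run in reverse (dropping VMs as load falls) is what makes the OPEX reduction possible on cloud data center resources, versus fixed purpose-built hardware sized for peak load.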
NFV represents an important set of virtualization technologies that will transform telecommunications networks. Implementations will proceed gradually, in a phased approach, as leading CSPs test a variety of use case scenarios. The successful implementation of Openet’s policy manager at a Tier 1 wireless operator is an important step in the validation of NFV.