Report: Evolving SDN: Tackling challenges for web-scale deployments

Our library of 1,700 research reports is available only to our subscribers. We occasionally release reports so our broader audience can benefit from them. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports, and video coverage from our 2016 and 2017 events.
Evolving SDN: Tackling challenges for web-scale deployments by Greg Ferro:
Customers want mobility, rapid change, and larger networks that work with less hassle, delivered through powerful automation, because automation lets businesses innovate faster. Software-defined networking (SDN) embraces these requirements with dynamic networking features that enhance server value and user services while operating alongside an existing network.
Virtualization arrived in networking more than a decade ago in the form of virtual local area networks (VLANs). In the early 2000s, multiprotocol label switching (MPLS) network virtualization enabled vast global networks in the wide area network (WAN). Then, in the mid-2000s, device virtualization arrived and delivered virtual firewalls. Throughout this period of change, the network edge remained located at a fixed physical point with simple, static services.
Server virtualization enables data center mobility, while Wi-Fi and LTE networks enable user mobility. Yet our current network technology remains focused on fixed endpoints. This tension is driving a shift away from the old requirement for static, stable connections and toward dynamic, variable forwarding methods. Today’s network is built from hundreds of individual devices that act like separate elements instead of a single platform.
So is SDN the best of everything? In examining that question, this report will also address questions including:

  • What are the major technology enablers for SDN?
  • How are carriers and enterprises implementing SDN to enable distributed applications and cloud infrastructures?
  • How can customers prepare for SDN given that the technology and marketplace are rapidly evolving?
  • What are the key industry standards efforts (IETF, ONF, NFV, etc.), and how do they differ?

To read the full report, click here.

Report: Docker and the Linux container ecosystem

Docker and the Linux container ecosystem by Janakiram MSV:
Linux container technology is experiencing tremendous momentum in 2014. The ability to create multiple lightweight, self-contained execution environments on the same Linux host simplifies application deployment and management. By improving collaboration between developers and system administrators, container technology encourages a DevOps culture of continuous deployment and hyperscale, which is essential to meet current user demands for mobility, application availability, and performance.
Many developers use the terms “container” and “Docker” interchangeably, sometimes making it difficult to distinguish between the two, but there is a very important distinction. Docker, Inc. is a key contributor to the container ecosystem in the development of orchestration tools and APIs. While container technology has existed for decades, the company’s open-source platform, Docker, makes that technology more accessible by creating simpler and more powerful tools. Using Docker, developers and system administrators can efficiently manage the lifecycle of tens of thousands of containers.
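To make the lifecycle point concrete, here is a minimal sketch of starting, inspecting, and tearing down a container with the Python Docker SDK (the `docker` package). The SDK choice, image, and container name are illustrative assumptions, not details from the report.

```python
# A minimal sketch of container lifecycle management with the Python Docker SDK.
# Assumes the `docker` package is installed and a local Docker daemon is running;
# the image and container name are illustrative, not taken from the report.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start a lightweight, self-contained execution environment (a container).
container = client.containers.run("nginx:alpine", detach=True, name="demo-web")

# Inspect running containers, as an operator or automation script would.
for c in client.containers.list():
    print(c.name, c.status, c.image.tags)

# Tear the environment down when the work is done.
container.stop()
container.remove()
```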
This report provides a detailed overview of the Linux container ecosystem. It explains the various components of container technology and analyzes the ecosystem contributions from companies to accelerate the adoption of Linux-based containers.
To read the full report, click here.

Report: SDN, NFV, and open source: the operator’s view

SDN, NFV, and open source: the operator’s view by Mark Leary:
Software-defined networking (SDN) and network functions virtualization (NFV) represent two of the more dramatic oncoming technology shifts in networking. Both will significantly alter network designs, deployments, operations, and future networking and computing systems. They also will determine supplier and operator success (or failure) over the next five to 10 years.
As has always been the case with successful networking technologies, industry standards and open systems will play a strong role in the timely widespread adoption and ultimate success of both SDN and NFV solutions. Open source is poised to play an even more critical role in delivering on the promise of standardized and open networking.
This great promise and potential impact raises two questions. First, “Where are SDN and NFV today?” And second, “What influence will open systems and open source have on the future of SDN and NFV?”
To find answers to these questions, in December 2013 Gigaom Research ran an extensive survey of 600 operators (300 enterprises and 300 service providers) in North America. Based on findings from that survey, this research report provides key insights into the current activity and future direction of SDN and NFV advancements as well as the development and deployment of open systems and open source within SDN and NFV environments.
To read the full report, click here.

With $15M, Sauce Labs wants to make software testing faster

In an agile world where developers are expected to churn out new application features often on a weekly basis, it can be really hard for engineers to create testing environments on the fly. Sauce Labs, a cloud-based testing startup, thinks it has a solution, and it now has $15 million from a Series D funding round that it plans to use to build out its development team and infrastructure, the company said on Thursday.

Agile development not only impacts the lives of coders who need to be quicker than ever, but it also “dramatically changed the need for tooling over the years,” explained Steve Hazel, Sauce Labs’ chief product officer and co-founder. With the rise of open-source testing tools, including the popular Selenium, software testers now have new options to choose from that help them quickly test their projects when they are first developed.

However, open-source testing tools have a problem when it comes to scale. When a development project becomes more widely used, it has to be tested multiple times across multiple browsers and devices, which results in a lot of testing infrastructure in the form of virtual machines.

Sauce Labs’ offering is essentially a scale-out testing tool for Selenium and for Appium, the company’s own open-source mobile testing tool. It is designed to manage the testing infrastructure for big companies that need multiple testing environments spun up fast.

Hazel said that 80 to 90 percent of Sauce Labs’ customers use the continuous-integration tool Jenkins for the development of their products. When it comes time for the engineering teams to do their testing, Sauce Labs’ tool is integrated into Jenkins and can immediately load up the necessary virtual machines that contain the testing environments.
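For a sense of how a test reaches one of those remotely hosted browsers rather than a local one, here is a minimal sketch using Selenium’s Remote WebDriver from Python (Selenium 2/3-era API). The endpoint URL, credentials, and capabilities are placeholder assumptions rather than details from the article; a Jenkins job would run a step like this after the build, with the service spinning up a matching VM on demand.

```python
# A minimal sketch of running a Selenium test against a remote, hosted browser
# (a Sauce Labs-style grid) instead of a locally installed one. The endpoint,
# credentials, and capabilities below are illustrative assumptions.
from selenium import webdriver

REMOTE_GRID_URL = "https://USERNAME:ACCESS_KEY@example-grid.invalid/wd/hub"  # placeholder

capabilities = {
    "browserName": "firefox",
    "platform": "Windows 7",      # the remote service picks a matching VM
    "name": "login smoke test",   # label the run for the CI dashboard
}

driver = webdriver.Remote(command_executor=REMOTE_GRID_URL,
                          desired_capabilities=capabilities)
try:
    driver.get("https://example.com/login")
    assert "Login" in driver.title
finally:
    driver.quit()  # releases the remote VM back to the pool
```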

The virtual machines are spun up in Sauce Labs’ own data centers, which Hazel said provides for a more cost-effective testing method than if someone were to spin up testing environments on their own in Amazon or another cloud service.

For example, Sauce Labs can spin up 50 virtual machines (it charges clients by the VM) to run 50 tests and then take them down for an hour as the development team works on a new build. The testing team can then ping Sauce Labs to spin up the VMs again for another round of testing and spin them down when they want. The whole thing is “bursty,” and if you were to try to do testing like that in Amazon and wanted fast access to VMs, you’d have to keep a VM running the whole time without shutting it down, because it takes longer for Amazon to spin up a VM with the necessary testing environment, Hazel said.

“We have a pool of VMs that are ready to go, that are ready to go at all times,” he said.

The startup counts customers including Salesforce, Yahoo, Bank of America and Twitter, and now has a total of $36 million in funding, according to Jim Cerna, CEO of Sauce Labs. Toba Capital led the new funding round.

Cloudsoft launches Clocker, an open source project used to spin up Docker containers

Cloudsoft is jumping on the Docker bandwagon. The Edinburgh-based company, whose software other companies use to manage the development and operations of their applications, plans to announce Clocker at Structure on Wednesday, the aptly named open-source project that lets users spin up Docker containers without generating excess containers. Clocker uses Apache Brooklyn, the open-source framework for managing applications through blueprints (sets of policies an organization defines to ensure that an application doesn’t spin up an excessive number of virtual machines), to deploy and manage multiple Docker clusters across clouds and even on-premises. In short, Clocker manages all the Docker containers and ensures that only the correct number are launched for a given application.