Tempered Networks wants to secure critical infrastructure so hacks don’t lead to sewage spills

Although the rise of the internet of things means organizations can gather enormous quantities of data from the billions of connected devices out there today, the elephant in the room is that security is not where it needs to be, which means there are far more access points for attackers to exploit. Tempered Networks, a Seattle-based security startup, aims to solve this problem, and it plans to announce on Tuesday that it has brought in a $15 million Series A investment round, bringing the company's total funding to $22 million.

Tempered Networks focuses on protecting the type of critical infrastructure that people “take for granted” in their daily lives, said Tempered Networks President and CEO Jeff Hussey. That includes facilities like hydroelectric dams, natural gas pipelines, nuclear power plants and wastewater plants.

This type of infrastructure keeps the gears of the modern world turning, and if something were to go awry at one of these facilities, the resulting pandemonium could be several times greater than that of your typical run-of-the-mill data breach. Just imagine a wastewater facility getting hacked and sending raw sewage flowing into the nearest fresh-water system, Hussey explained.

According to Hussey, who co-founded networking company F5 Networks, the thirst for big data has led the government agencies, municipalities and companies that run these facilities to hook the networks supporting critical infrastructure into their corporate data networks, in the hopes of uniting the flow of data between the two.

What makes this worrisome is that the networks supporting critical infrastructure now carry new security vulnerabilities, because the applications and hardware on those networks are united under the Transmission Control Protocol/Internet Protocol (TCP/IP), the standard protocol suite of the internet. Hussey said it wasn’t always this way: these networks used to rely on several different protocols, which created “air gaps” between the different hardware devices hooked onto them.

Now that everything operates under the same protocol, the “air gaps” that once acted as security buffers no longer exist, which means a hacker can do far more damage in these critical networks than was possible in the past.

“Everything speaks the same language,” said Hussey. “It’s a relatively straight hack.”

To secure those now-open networks, Tempered Networks sells small devices called HIP (Host Identity Protocol) switches that users can install in their data centers. These devices link into the critical infrastructure networks and, working in tandem with Tempered Networks’s networking orchestration system, create a “secure encrypted channel” through which all the data flows.

Tempered Networks – overlay network

Instead of relying on those gaps as a security mechanism, Tempered Networks essentially encrypts the back end where networking data has to pass between devices and applications.
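
To get a feel for what identity-based networking means in practice, here is a minimal, purely illustrative sketch of the core idea behind the Host Identity Protocol: devices are admitted to the encrypted overlay based on a cryptographic identity derived from a public key, not on their IP address. The class, key material and device names below are hypothetical and are not Tempered Networks’ implementation.

```python
# Toy sketch of the HIP idea: endpoints are identified by a cryptographic
# "host identity tag" (a hash of a public key) rather than an IP address,
# and only enrolled identities may join the encrypted overlay.
import hashlib


def host_identity_tag(public_key_pem: bytes) -> str:
    """Derive a stable identity tag from a host's public key."""
    return hashlib.sha256(public_key_pem).hexdigest()[:32]


class OverlayPolicy:
    """Toy allowlist: only enrolled identities may talk over the overlay."""

    def __init__(self):
        self.allowed_tags = set()

    def enroll(self, public_key_pem: bytes) -> str:
        tag = host_identity_tag(public_key_pem)
        self.allowed_tags.add(tag)
        return tag

    def may_communicate(self, tag_a: str, tag_b: str) -> bool:
        # Traffic is only tunneled between two enrolled identities; an unknown
        # device sitting on the same TCP/IP network is simply not answered.
        return tag_a in self.allowed_tags and tag_b in self.allowed_tags


policy = OverlayPolicy()
plc = policy.enroll(b"-----BEGIN PUBLIC KEY----- plc-controller ...")
historian = policy.enroll(b"-----BEGIN PUBLIC KEY----- scada-historian ...")
print(policy.may_communicate(plc, historian))         # True
print(policy.may_communicate(plc, "unknown-device"))  # False
```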

Of course, anything involving encryption takes a performance hit because of the compute required, but Hussey said that “most of the devices we are protecting” don’t necessarily need top-of-the-line speed to operate correctly and efficiently.

“There needs to be a solution to securely connect [these devices] to a modern networking infrastructure and that is what we are doing,” said Hussey.

Hussey said Tempered Networks will sell the device “to anybody who will return our phone call,” but right now it’s eyeing public utilities and industries like oil and gas and electricity. The startup counts Boeing, Washington Gas and the University of Washington among its customers.

Ignition Partners drove the funding round along with IDG Ventures. As part of the financing, Ignition Partners managing partner John Connors is taking a seat on the startup’s board.

Can ARM processors move the mobile network into the cloud?

ARM is already powering our smartphones, and it’s seeing its processor architecture migrate into networks that supply those phones their connectivity, but it has even more ambitious ideas for the mobile industry. It’s latching onto a new idea called Cloud-RAN, which turns the mobile network as we know it inside out. At Mobile World Congress in Barcelona next week, ARM and Cavium will be demoing their concept of a mobile network on a chip.

Cellular networks are typically built with their processing power at the edges, right under the cell towers that send out the radio signals. As demand for LTE capacity mounts, carriers are forced to put more and more horsepower into their cell sites. Network vendors like Ericsson, Alcatel-Lucent and Nokia are building more powerful base stations designed to host dozens of cells spanning multiple 4G frequency bands and support tens of thousands of subscribers.

But the mobile industry is starting to look for alternatives to this constant chasing of capacity, and it’s looking squarely at the data center. If all of that processing could move into the cloud, carriers would have a much more flexible network that shifts baseband resources from cell to cell as demand dictates. What’s more, instead of using highly specialized baseband processors in equally specialized base stations, they could use off-the-shelf processors and servers and run all of the functions of the network as software.
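
To make the “baseband resources follow demand” idea concrete, here is a deliberately simplified sketch of a shared processing pool being reassigned across cells as traffic shifts. The cell names, capacity units and proportional-allocation rule are invented for illustration; real Cloud-RAN schedulers are far more sophisticated.

```python
# Toy sketch of pooled baseband capacity: instead of fixed hardware at every
# cell site, a shared pool of processing in a data center is assigned to
# cells as their traffic rises and falls.
POOL_CAPACITY_UNITS = 100  # total virtualized baseband capacity (illustrative)


def allocate(pool: int, demand_by_cell: dict) -> dict:
    """Split a shared processing pool across cells in proportion to demand."""
    total = sum(demand_by_cell.values()) or 1
    return {cell: round(pool * d / total) for cell, d in demand_by_cell.items()}


morning = {"downtown": 70, "stadium": 5, "suburbs": 25}
evening = {"downtown": 20, "stadium": 60, "suburbs": 20}

print(allocate(POOL_CAPACITY_UNITS, morning))  # capacity follows commuters
print(allocate(POOL_CAPACITY_UNITS, evening))  # and then follows the game
```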

That’s where ARM and network semiconductor maker Cavium come in. Cavium is using its ThunderX data center processors, which pack up to 48 ARMv8 cores, as the building blocks for a virtualized base station. At Mobile World Congress, Cavium and ARM will essentially “load” an LTE network onto a system-on-chip (SoC).

The concept isn’t unique. Intel has long been pursuing Cloud-RAN and it has a big head start on ARM. It’s already working with mobile network vendors like Nokia and Alcatel-Lucent and some of the world’s biggest carriers, like China Mobile, SK Telecom and Telefonica, to run elements of their networks on its Xeon processors.

For its part, ARM is thinking bigger than just Cloud-RAN. On Thursday it announced a grand-scale vision it calls Intelligent Flexible Cloud, which puts ARM processors in every nook and cranny of future software-defined and virtualized networks. In addition to Cavium, it revealed partnerships with Altera, Advanced Micro Devices, AppliedMicro, Enea, EZchip, Linaro, Marvell and Xilinx.

Coolan lets companies pool and analyze hardware data

A common dilemma for companies with a ton of gear in their data centers is figuring out which hardware appliance is creating the bottlenecks that lead to downtime and customer outrage. Coolan, a startup formed by former Facebook and Google engineers, aims to solve this problem and is exiting stealth with a new product that gathers infrastructure data from multiple companies and analyzes it to unearth how all their gear is performing.

That strength-in-numbers approach separates Coolan from the other IT monitoring services that companies plug into their data centers to discover how efficient (or not) their infrastructure really is, so they can spot problems before they turn into something bigger.

While those IT monitoring services essentially study the infrastructure of a single company, Coolan’s software platform lets multiple organizations share their infrastructure data with each other in the hopes that, with more data available, they can put an end to unnecessary server failures and the like.

“[Organizations] are all curious about solving this problem, but they have a limited data set,” said Coolan co-founder and CEO Amir Michael. “By bringing the industry together you get a larger data set.”

Screenshot of failure rate

Michael was a hardware engineer at Google and then an engineer at Facebook, where he managed the company’s hardware design. At Facebook, Michael’s contributions led to the creation of the Open Compute Project, where he is still an active participant as vice-chair of the project’s Incubation Committee, responsible for reviewing new specifications.

The Open Compute Project did a good job of getting people to talk about servers and design concepts at the hardware level, Michael said. But when it comes to operations and getting the most out of hardware, there isn’t much information available to the general public on the actual performance of individual pieces of hardware.

“We all want more transparency around our hardware,” Michael said.

The idea is for companies big and small, whether they run 100 servers or 1,000, to benefit from the insights gleaned from the same big data set. Companies will have to install software (three lines of code, apparently) on their fleets of servers, which lets infrastructure data flow to Coolan’s own servers, with the data stored in Amazon S3.
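
As a rough illustration of the kind of data such an agent might collect, here is a hedged sketch of a bare-bones reporter. The fields, collection method and the way results would be shipped off are assumptions for illustration only; Coolan has not published what its three-line install actually does.

```python
# Minimal sketch of a hardware-telemetry reporter. This is NOT Coolan's agent;
# it only shows the shape of the data a service like this might ingest.
import json
import platform
import time


def collect() -> dict:
    """Gather a few basic hardware/system facts about this machine."""
    return {
        "hostname": platform.node(),
        "machine": platform.machine(),
        "kernel": platform.release(),
        "timestamp": int(time.time()),
        # A production agent would also pull SMART attributes, fan speeds,
        # temperatures and memory error counts from the BMC or /sys.
    }


if __name__ == "__main__":
    payload = collect()
    # A real agent would POST this to the service's ingest endpoint (Coolan,
    # per the article, stores the collected data in Amazon S3); here we just
    # print it so the sketch runs anywhere.
    print(json.dumps(payload, indent=2))
```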

Screenshot of notification report

Michael seemed aware of the irony that a startup specializing in hardware-performance metrics operates in the cloud, but he said that “we will eat our own dog food and be running our own servers” once the company reaches a certain size.

Coolan will not be siphoning the type of software-related data that New Relic or AppDynamics need for their analytics, but rather hardware data: the name of a device’s manufacturer, the temperature of the hardware when running, the model number of an appliance, when the device started generating errors, and so on.

From all this data, Coolan’s team can run machine-learning algorithms to learn how the hardware stacks up and which devices have a higher chance of failure. If the companies contributing to Coolan all find that a fan in a particular manufacturer’s device conks out at the two-year mark, then users who recently purchased that device will have some warning that it might not function properly down the road.
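
The pooled comparison Coolan describes can be pictured with a toy computation like the one below, which tallies failure rates by device model across contributions from several entirely fictional companies. Coolan’s actual models are presumably far more involved than a simple rate.

```python
# Toy illustration of pooling device records from multiple companies and
# comparing failure rates by model. All data here is made up.
from collections import defaultdict

records = [
    # (company, device_model, age_months, failed)
    ("co_a", "fan-x200", 25, True),
    ("co_a", "fan-x200", 18, False),
    ("co_b", "fan-x200", 26, True),
    ("co_b", "disk-9k", 30, False),
    ("co_c", "disk-9k", 14, True),
]

by_model = defaultdict(lambda: {"failed": 0, "total": 0})
for _, model, _, failed in records:
    by_model[model]["total"] += 1
    by_model[model]["failed"] += failed

for model, counts in sorted(by_model.items()):
    rate = counts["failed"] / counts["total"]
    print(f"{model}: {counts['failed']}/{counts['total']} failed ({rate:.0%})")
```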

Coolan CEO Amir Michael

Coolan’s not ready to disclose who its pilot customers are, but Michael did say that a number of the company’s clients are organizations that started out in the cloud and are now moving off of it to build their own data centers.

The startup could one day have a tool that also monitors a company’s cloud infrastructure, but Michael said that’s “not the primary focus right now.” Coolan is also still figuring out its pricing model, but its main goal as of now is to simply get more companies on board to “get more data.”

“I think part of it is in my DNA,” said Michael in reference to how his days at Facebook could have made him more open to the idea of sharing and collaborative projects. Facebook recently launched a collaborative threat-detection framework that seems similar to Coolan, except that instead of hardware data, companies are dumping security data into a central hub.

The six-person team at Coolan is not disclosing how much funding it has raised so far, but it closed a seed round in February led by Social + Capital, North Bridge Venture Partners and Keshif Ventures.

Apple unveils $2B plans for Irish and Danish data centers

Apple is set to spend €1.7 billion ($1.93 billion) on two new European data centers, one in Ireland and one in Denmark.

The Galway and Jutland data centers will each measure 166,000 square meters and will, in line with Apple’s other data facilities, be powered entirely by clean, renewable energy. They are expected to go online in 2017, handling data for iTunes, the App Store, iMessage, Maps and Siri.

“We’re excited to spur green industry growth in Ireland and Denmark and develop energy systems that take advantage of their strong wind resources,” Apple Environmental Initiatives vice-president Lisa Jackson said in a statement. Apple CEO Tim Cook described the initiative as “Apple’s biggest project in Europe to date.”

The company said it will embark on a native tree-planting exercise to accompany the construction of its Irish data center, which will occupy land that was previously used for non-native trees. Meanwhile, excess heat from the Danish facility will be siphoned off to warm neighboring homes.

Apart from green credentials and the hundreds of jobs that will accompany the construction and operation of the new data centers, the sites will of course also help Apple keep Europeans’ data in Europe. With widespread concerns over the privacy implications of using U.S. services, particularly in the enterprise sector that Apple is so keenly courting, this is no minor factor.

If Apple ever launches a Spotify competitor, the new facilities will also prove helpful in supporting all that streaming.

The development of Apple’s new European data centers had been rumored for some time, with Eemshaven in the Netherlands (the site of a major new Google facility) also having been touted as a potential location.

Confirmed: Amazon is buying Annapurna Labs

Amazon has indeed agreed to purchase Annapurna Labs, a super-stealthy Israeli company that is reportedly working on new chip technology. Talks were first reported in Israeli financial newspaper Calcalist and picked up by Reuters and others.

An Amazon spokesperson confirmed the acquisition via email Thursday afternoon but provided no detail.

Annapurna Labs was privately owned by Avigdor Willenz, who founded chip maker Galileo Technology in the 1990s and later sold it to Marvell, with additional investment from ARM, the British chip designer, and Walden International, a VC firm, according to the original report. The purchase price was reportedly $350 million.

According to its LinkedIn page, Annapurna Labs:

is a cutting-edge technology startup, established in 2011 by industry veterans. We are well funded, with sites in Israel and Silicon Valley. We are operating in stealth mode and can’t share much about our company, but we’re hiring on an exclusive basis, seeking smart, aggressive, multi-disciplinary engineers and business folks, with focus on teamwork in a group of highly talented team.

It would make sense for Amazon to invest in cutting-edge chip technology, given that its Amazon Web Services arm is always on the hunt for faster, more efficient infrastructure.

Switch to build huge data center near Tesla battery factory

The stretch of land that will house Tesla’s massive new battery factory just outside Reno, Nevada, will become home to another very large tenant: what’s being billed as the world’s largest data center, to be built by Las Vegas-based data center provider Switch.

Nevada Governor Brian Sandoval announced Thursday during his annual State of the State address that Switch, which runs huge SuperNAP data center facilities in Las Vegas, plans to build a 3 million-square-foot, $1 billion data center (its largest project yet) at the Tahoe Reno Industrial Center. Switch plans to have the first 800,000-square-foot portion of the facility built by early 2016, and eBay will be the anchor tenant.

Switch’s SuperNAP data center in Las Vegas

Last year, after months of negotiations, Tesla announced that it had chosen the Tahoe Reno Industrial Center for the location of its battery factory, which will churn out enough lithium ion batteries for 500,000 electric cars by 2020. Tesla’s factory will cost $5 billion and will create 6,500 jobs, and Tesla received a $1.25 billion tax break over 20 years.

Switch is also expected to receive tax incentives for its Reno data center. In addition to the new site, Switch is also expanding its facilities in Vegas.

A recently raised spot of land in the Tahoe-Reno Industrial Center.

Switch’s SuperNAP Reno will connect to its Vegas facilities via fiber — dubbed the SuperLoop — a 500-mile fiber network between the two regions. Data Center Knowledge said the fiber network will “place 50 million people within 14 milliseconds of data hosted at the SUPERNAPs.”
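
As a rough sanity check on that figure, light travels through fiber at roughly two-thirds of its speed in a vacuum, so the loop itself accounts for only a few of those milliseconds; the quoted 14 ms presumably covers the longer paths from the data centers out to users across the region. A back-of-the-envelope calculation, with the propagation speed as an assumption:

```python
# Rough latency check for a ~500-mile fiber loop, assuming light propagates
# through fiber at about 200,000 km/s (roughly two-thirds of c).
FIBER_SPEED_KM_S = 200_000
loop_km = 500 * 1.609  # 500 miles in kilometers
one_way_ms = loop_km / FIBER_SPEED_KM_S * 1000
print(f"{one_way_ms:.1f} ms one way, {2 * one_way_ms:.1f} ms round trip")
# ~4 ms one way over the loop itself, well inside the quoted 14 ms budget.
```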

Switch won’t be the only data center operator in the Reno area. Apple is building out a sizable data center (recently expanded to nine buildings and 345 acres) at the Reno Technology Park, about 20 miles east of Reno. One thing that attracted Apple, and likely Switch, to the area is the region’s capacity to offer clean power to data center operators. Tesla plans, down the road, to fully power its factory with clean energy.

The deal is good news for the city of Reno and the surrounding area, which has been trying to remake itself into a high-tech manufacturing region and move beyond its image as a gambling backwater (see The changing face of Reno: Why the world’s biggest little city is attracting Tesla and Apple). However, data centers don’t provide the kind of full-time jobs that a factory does.

Google to close down Russian engineering operations

Google is closing its Russian engineering office, according to a report in The Information.

Google’s Russian engineers will be offered jobs in other countries or in other departments, the Financial Times noted. The company is not saying why it is shutting its Moscow engineering office, which focuses on Chrome OS and the Chrome Web Store, but it said in a statement: “We are deeply committed to our Russian users and customers and we have a dedicated team in Russia working to support them.”

The move follows a series of new restrictions on internet activity in the country, ranging from requirements for popular bloggers to register themselves and abide by censorship limitations, to requirements for Wi-Fi hotspot users to log on with personal ID.

Perhaps most pertinently — unless the department’s shuttering is purely for business reasons — Google has been ordered to store the data of its Russian users in Russian data centers, and also to comply with the blogger-registration law. Russia’s security services have previously urged the use of locally developed encryption in the country’s data centers, suggesting that the move is tied to a desire to be able to access citizens’ personal information.

Mesosphere’s new data center mother brain will blow your mind

Mesosphere has been making a name for itself in the world of data centers and cloud computing since 2013 with its distributed-systems smarts and a series of open-source technologies, each designed to tackle the challenges of running tons of workloads across multiple machines. On Monday, the startup plans to announce that its much-anticipated data center operating system — the culmination of its many technologies — has been released as a private beta and will be available to the public in early 2015.

As part of the new operating system’s launch, Mesosphere also plans to announce that it has raised a $36 million Series B investment round, which brings its total funding to $50 million. Khosla Ventures, a new investor, drove the financing along with Andreessen Horowitz, Fuel Capital, SV Angel and other unnamed entities.

Mesosphere’s new data center operating system, dubbed DCOS, tackles the complexity of treating all of the machines inside a data center as one giant computer. Similar to how an operating system on a personal computer distributes resources to all the installed applications, DCOS can supposedly do the same thing across the data center.

The idea comes from the fact that today’s powerful data-crunching applications and services — like Kafka, Spark and Cassandra — span multiple servers, unlike more old-school applications like Microsoft Excel. Asking developers and operations staff to configure and maintain each individual machine to accommodate these new distributed applications is a lot to ask, as Apache Mesos co-creator and new Mesosphere hire Benjamin Hindman explained in an essay earlier this week.

Mesosphere CEO Florian Leibert – Source: Mesosphere

Because of this complexity, the machines are nowhere near running full steam, said Mesosphere’s senior vice president of marketing and business development Matt Trifiro.

“85 percent of a data center’s capacity is typically wasted,” said Trifiro. Although developers and operations staff have come a long way to tether pieces of the underlying system together, there hasn’t yet been a nucleus of sorts that successfully links and controls everything.

“We’ve always been talking about it — this vision,” said Mesosphere CEO Florian Leibert. “Slowly but surely the pieces came together; now is the first time we are showing the total picture.”

Building an OS

The new DCOS is essentially a bundle of all of the components Mesosphere has been rolling out — including the Mesos resource management system, the Marathon framework and Chronos job scheduler — as well as third-party applications like the Hadoop file system and YARN.

The DCOS also includes common OS features one would find in Linux or Windows, like a graphical user interface, a command-line interface and a software-development kit.

These types of interfaces and extras are important for DCOS to be a true operating system, explained Leibert. While Mesos can automate the allocation of all the data center’s resources to many applications, the additional features give coders and operations staff a centralized hub from which they can monitor, and even program against, their data center as a whole.

“We took the core [Mesos] kernel and built the consumable systems around it,” said Trifiro. “[We] added Marathon, added Chronos and added the easy install of the entire package.”

To get DCOS up and running in a data center, Mesosphere installs a small agent on all the Linux-based machines, which in turn allows them to be read as one “uber operating system,” explained Leibert. With all of the machines’ operating systems linked up, it’s supposedly easier for distributed applications, like Google’s Kubernetes, to function and receive what they need.

The new graphical interface and command-line interface let an organization see a visual representation of all of its data center machines, all the installed distributed applications, and how system resources like CPU and memory are being shared.

If a developer wants to install an application in the data center, he or she simply enters install commands in the command-line interface and DCOS should automatically load it up. A visual representation of the app should then appear, indicating which machine nodes are allocating the right resources.

DCOS interface

The same process goes for installing a distributed database like Cassandra; you can now “have it running in a minute or so,” said Leibert.

Installing Cassandra on DCOS

A scheduler built into DCOS takes into account variables a developer might want to specify in order to decide which machine should deliver resources to which application; this is helpful because the developer can set up the configuration once and DCOS will automatically follow through with the orders.
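
For a flavor of what that configuration can look like, below is a hedged sketch of an app definition with a placement constraint being prepared for Marathon, one of the schedulers bundled into DCOS. The app name, resource numbers and cluster address are placeholders, and the exact fields should be checked against Marathon’s own documentation rather than taken from this sketch.

```python
# Sketch of declaring placement preferences for a Marathon-scheduled app.
# The endpoint address and app details are illustrative placeholders.
import json
import urllib.request

app = {
    "id": "/risk-analysis",
    "cmd": "./run-worker",
    "cpus": 2.0,
    "mem": 4096,
    "instances": 50,
    # Ask the scheduler to spread instances across distinct hosts
    # rather than stacking them on the same machine.
    "constraints": [["hostname", "UNIQUE"]],
}

request = urllib.request.Request(
    "http://marathon.example:8080/v2/apps",  # placeholder cluster address
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # submit; the scheduler then places tasks
print(json.dumps(app, indent=2))
```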

“We basically turn the software developer into a data center programmer,” said Leibert.

And because DCOS is easier for a coder to program against, it’s possible that new distributed applications could be built faster than before, since a developer can now write software for a fleet of machines rather than just one.

As of today, DCOS can run in on-premises environments like bare metal and OpenStack and on major cloud providers like Amazon, Google and Microsoft, and it supports Linux variants like CoreOS and Red Hat.

Changing the notion of a data center

Leibert wouldn’t name which organizations are currently trying out DCOS in beta, but it’s hard not to think that companies like Twitter, Netflix or Airbnb — all users of Mesos — have considered giving it a test drive. Leibert was a former engineer at Twitter and Airbnb, after all.

Beyond the top webscale companies, Mesosphere wants to court legacy enterprises, like those in the financial-services industry, whose existing data centers aren’t nearly as efficient as those run by Google.

Banks, for example, typically use “tens of thousands of machines” in their data centers to perform risk analysis, Leibert said. With DCOS, Leibert claims that banks can run the type of complex workloads they require in a more streamlined manner if they were to link up all those machines.

And for these companies that are under tight regulation, Leibert said that Mesosphere has taken security into account.

“We built a security product into this operating system that is above and beyond any open-source system, even as a commercial plugin,” said Leibert.

As for what lies ahead for DCOS, Leibert said his team is working on new features like distributed checkpointing, which is essentially the ability to take a snapshot of a running application so that work can be paused; the next time it starts up, the data center remembers where it left off and can deliver the right resources as if there hadn’t been a break. This approach is apparently useful for developers working on tasks like genome sequencing, he said.
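
Distributed checkpointing itself is far more involved, but the basic pause-and-resume idea can be shown in a single-process sketch like the one below, where a long-running job periodically writes its state to disk and picks up from the last safe point on restart. The file name and workload are made up for illustration and have nothing to do with DCOS internals.

```python
# Toy checkpointing sketch: a long job saves progress so it can be paused
# (or preempted) and resumed where it left off.
import os
import pickle

CHECKPOINT = "job.ckpt"


def load_state() -> dict:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_item": 0, "results": []}


def save_state(state: dict) -> None:
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(state, f)


state = load_state()
for i in range(state["next_item"], 1_000):
    state["results"].append(i * i)  # stand-in for real work
    state["next_item"] = i + 1
    if i % 100 == 0:
        save_state(state)  # a safe point at which the job can be paused
save_state(state)
```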

Support for containers is also something Mesosphere will continue to tout, as the startup has been a believer in the technology “even before the hype of Docker,” said Leibert. Containers, with their ability to isolate workloads even on the same machine, are fundamental to DCOS, he said.

Mesosphere believes new container technologies will keep emerging, not just the recently announced CoreOS Rocket, explained Trifiro, but as of now, Docker and native Linux cgroup containers are what customers are calling for. If Rocket gains momentum in the marketplace, Trifiro said, Mesosphere will “absolutely implement it.”

If DCOS ultimately lives up to what it promises, managing data centers could become a far less difficult task. With a giant pool of resources at your disposal and an easier way to write new applications for a tethered-together cluster of computers, it’s possible that next-generation applications could be developed and managed far more easily than they used to be.

Correction: This post was updated at 8:30 a.m. to correctly state Leibert’s previous employers. He worked at Airbnb, not Netflix.

Data center specialist IO splits into two companies

IO, the Phoenix-based company best known for selling modular data centers roughly the size of a shipping container, is splitting into two companies. The one focused on leasing data center space — which was IO’s original business — will retain the IO name, while the modular data center business will be called Baselayer. The move could bring some clarity to both vision and messaging, and should provide some more runway as the company works toward a successful exit. It filed for an IPO in September 2013 but never followed through, citing less-than-ideal conditions for tech IPOs.