Advertising analytics platform Metamarkets raises another $15M

Metamarkets, a San Francisco startup providing streaming analytics to advertisers, has raised a $15 million round of venture capital. Data Collective led the round, which also included John Battelle, City National Bank, and existing investors Khosla Ventures, IA Ventures, True Ventures and Village Ventures. Metamarkets (see disclosure) has now raised more than $43 million since launching in 2010.

While the company’s product, like most software as a service, is tied to a rather specific use case, its technology is not. To many engineers working on big data systems, Metamarkets is also known as the creator of Druid, an open source data store the company created in order to handle the speed and scale its analytics platform requires. Last week, Metamarkets announced that Druid is now available under the permissive Apache 2 open source license.

Metamarkets CEO Mike Driscoll has a clear view of where the software world is headed, and he thinks his company is one of many helping take it there. Essentially, it’s a world where applications are delivered as cloud services, and infrastructure technologies become commodities, often created and then open sourced by the companies building those applications.

If you look at the technology landscape today, it’s hard to argue with that premise. Large companies such as Google, Facebook and Yahoo have created a large number of today’s popular data analysis, data storage, programming and other technologies, and now Metamarkets, Airbnb, Twitter and others are getting in on the act. It’s not wholly inconceivable that future developers, even at the enterprise level, will be able to find everything they need as open source, meaning money only has to change hands for services, support and, of course, applications.

Disclosure: Metamarkets is a portfolio company of True Ventures, which is also an investor in Gigaom.

Ex-Facebookers at Interana raise $20M for mass-scale event data

Interana, a Menlo Park, California, analytics startup that officially launched in October, has closed a $20 million Series B round. Index Ventures led the round, which also included new investors AME Cloud Ventures, Harris Barton and Cloudera’s Mike Olson. Interana’s existing investors — Battery Ventures, Data Collective and Fuel Capital — from which the company had previously raised $8.2 million, also participated.

The idea behind Interana is to make Facebook-style event analytics, as well as the same type of data-centric culture, available to all types of companies with lots of event data to analyze. To do this, Interana built an entire system from the ground up, from columnar storage engine to user interface.

A portion of the Interana dashboard.

It was founded by two former Facebook engineers, Bobby Johnson and Lior Abraham, who built some of Facebook’s most-popular internal analytics systems, and Ann Johnson. Ann, who is CEO, and Bobby, who is CTO, are married.

Since the company launched, Ann Johnson said in a recent interview, Interana has scored several new customers and received interest from very large and well-known companies. She chalks up the interest to a dearth of products addressing event analytics at scale — like years’ worth of it — and to data volumes that “are growing faster than many people’s intuitions even tell them.”

Ann Johnson

The hope for customers is that they query their data regularly and treat Interana’s product like they treat Google’s search engine by asking lots of small questions throughout the day, she added.

A good analytics practice, Bobby Johnson noted, is to make the interface easy enough that people don’t just look for that one big insight, but rather ask as many questions as they need to, about whatever they need to. “Anything that should be answered by data should be,” he said. That way, people can spend their time worrying about other stuff.

Bobby Johnson

To hear more about Interana’s product, as well as the experience of starting an enterprise software company with your spouse, check out our October podcast interview with Ann Johnson.


A startup wants to build a trading platform for sensor data

A startup out of Las Vegas is trying to capitalize on a very difficult, and potentially very lucrative, opportunity within the internet of things. The company, called Terbine, wants to become a data broker for the world of connected devices by building a platform where companies can buy, sell and share the data their sensors are collecting.

Terbine is still very young — the company has just raised seed funding from a firm called Incapture Group — but founder and CEO David Knight has big plans. He’s looking at everything from billboards to drones, from shipping vessels to satellites, as potential sources for a massive database of information about what’s happening in the physical world. He thinks companies will pay big money to be able to monitor pedestrian traffic in key markets thousands of miles away, for example, or to identify the potential closure of shipping lanes because of an oil spill long before it’s being reported.

Terbine would play the middleman in all of these transactions, collecting the data, curating and formatting it, and then managing access to it. Knight envisions a market-like approach to access, where some data might be free, but most would be priced based on how timely it is, how rare it is or how relevant it is at any given time. He’s looking at sectors such as energy, agriculture, and oil and gas — which has become much less centralized thanks to fracking — as early targets.

David Knight

“I realize a lot of people are talking about the internet of things,” Knight said, “but so far most of the conversation reminds me of the early days of CB radio.” Back then lots of people had a radio, like lots of people now have sensors, but there was no place to go to connect with the most interesting people.

It’s not an insane idea — even Cisco has pitched the idea of “data infomediaries,” and IBM has suggested companies could make money by recycling data — but so far no one has really been able to pull it off. There are myriad regulatory hurdles to overcome, not to mention the technological challenges of building such an infrastructure. Terbine has already prototyped a platform for the data exchange on Amazon Web Services and is thinking about its edge-network architecture, but actually building it is another story.

There’s also the not-so-small question of how Terbine, or any company attempting to build such a platform, will get companies on board with the data-sharing plan. Many data marketplaces so far have been populated with data that’s either not too interesting or, in the case of some early government efforts, not available in usable formats.


Knight thinks companies will certainly be willing to pay for quality data, but acknowledges that bartering (give some data to get some data) might be a better method for getting them initially involved and proving there’s value in the exchange. He said Terbine also hopes to deploy its own network of sensors with strategic partners so it can ensure certain data it perceives as valuable will be available.

Being headquartered in Las Vegas might be a strategic advantage, Knight said, because of the highly interconnected SuperNAP data center (where he hopes to eventually host Terbine’s platform) and the Department of Energy’s Remote Sensing Laboratory. He’s hopeful the latter could offer a testbed for some of Terbine’s plans, and possibly some talent.

It’s a longshot to be sure, but Knight, who has previously been involved in the quest to bring the Endeavour space shuttle to the California Science Center and is also working on a high-tech virtual reality tour of the craft, says he’s game for it.

“What I really like,” he said, “is being involved with things people say can’t be done.”

Indonesia is mapping Jakarta floods in real time using Twitter


After initially mapping 8 million flood-related tweets throughout the region over the past couple of years as part of a Twitter Data Grant, the University of Wollongong and a local emergency agency have developed a project called PetaJakarta, which builds a real-time map of flood-affected areas based on geo-tagged tweets directed to the project using a specific hashtag. According to a Twitter blog post announcing the project, the goal is to help emergency workers and citizens in one of the world’s most-populous areas understand how floods are moving and which areas have been hit the hardest.
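The core idea — filter the stream down to geo-tagged tweets carrying a report hashtag, then aggregate them by location — can be sketched in a few lines. This is a minimal illustration only, assuming a simplified tweet shape (a "text" field plus optional (lat, lon) coordinates) and a stand-in "#flood" hashtag; PetaJakarta’s actual hashtag, data model and ingestion pipeline are not described here:

```python
from collections import Counter

def flood_report_counts(tweets, hashtag="#flood", cell=0.05):
    """Bucket geo-tagged tweets carrying the report hashtag into lat/lon grid cells.

    tweets  : iterable of dicts with "text" and an optional "coords" = (lat, lon)
    cell    : grid cell size in degrees (0.05 degrees is roughly 5 km at the equator)
    returns : Counter mapping grid-cell keys to report counts
    """
    counts = Counter()
    for t in tweets:
        coords = t.get("coords")
        if coords is None or hashtag.lower() not in t["text"].lower():
            continue  # ignore tweets without a geotag or without the hashtag
        lat, lon = coords
        counts[(round(lat / cell), round(lon / cell))] += 1
    return counts

reports = [
    {"text": "Water rising fast #flood", "coords": (-6.21, 106.85)},
    {"text": "Street impassable #FLOOD", "coords": (-6.21, 106.85)},
    {"text": "Nice weather today", "coords": (-6.18, 106.80)},
    {"text": "#flood but no location attached"},
]
print(flood_report_counts(reports))  # one hot grid cell with two reports
```

The per-cell counts are what a map layer would shade: the denser the cell, the harder-hit the area appears.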

Twitter now indexes every tweet ever

Twitter has built a new search index that allows users to surface all public tweets since the service launched in 2006. At nearly half a trillion documents and a scale of 100 times Twitter’s standard real-time index, it’s an impressive feat of engineering.

The tricky business of acting on live data before it’s too late

We’re generating more data than ever and analyzing a lot more of it, too. But when it comes to responding quickly to potential public health crises or other situations, we need more data, more analysis and more people paying attention to it all.

DataTorrent’s Hadoop stream-processing engine is now for sale

DataTorrent, a startup building a stream-processing engine for Hadoop that it claims can analyze more than 1 billion data events per second, announced on Tuesday that its flagship product is now generally available. Stream processing is becoming more important as we move into an era of connected devices, ubiquitous sensors and fast-paced web platforms such as Twitter. Data is flowing into systems faster than ever, and many companies would like to get some use out of it in real time; in some cases, even hours-old data could be considered stale. Other products and projects addressing stream processing on Hadoop include Apache Storm, Spark Streaming, Apache Samza and Amazon Kinesis.
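To make the "hours-old data is stale" point concrete, here is a toy sliding-window counter in plain Python. It illustrates the general stream-processing idea — computing an answer continuously as events arrive, rather than batching them up — and is not based on DataTorrent’s actual API:

```python
from collections import deque

def windowed_rate(events, window=5.0):
    """For a time-ordered event stream, yield (timestamp, count of events
    seen in the last `window` seconds) as each event arrives.

    events: iterable of (timestamp, payload) pairs, ordered by timestamp.
    """
    recent = deque()  # timestamps still inside the sliding window
    for ts, _payload in events:
        recent.append(ts)
        # evict timestamps that have slid out of the window
        while recent and recent[0] <= ts - window:
            recent.popleft()
        yield ts, len(recent)

stream = [(0.0, "click"), (1.0, "click"), (2.0, "click"), (7.0, "click")]
print(list(windowed_rate(stream)))  # the event at t=7.0 sees only itself
```

A real engine distributes this kind of windowed computation across a cluster and handles out-of-order arrivals and fault tolerance, but the incremental, per-event update is the essence of stream processing.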

Hortonworks co-founder Baldeschwieler now advising DataTorrent

Eric Baldeschwieler, the founding CEO of Hortonworks and former Yahoo VP who led the company’s Hadoop development efforts, is now a strategic adviser to Hadoop startup DataTorrent. The company, which won the Structure Data Readers’ Choice award for infrastructure startups, sells stream-processing software designed to run in Hadoop environments (on top of YARN). Baldeschwieler also advises the white-hot Apache Spark startup Databricks. He left Hortonworks, where he was most recently CTO, in August 2013.