The promise of big data still looms, but execution lags

When something is hyped as much as the notion of big data, there’s bound to be disappointment when results don’t meet expectations right this second.

That realization — that implementation of big data analytics and related technologies hasn’t matched expectations — is a common thread across a recent spate of research reports. While corporate execs now “get” the possible impact of aggregating and analyzing all the data their companies generate, very few companies have realized that potential.

Most adoption remains at pilot or test stage

A new McKinsey Quarterly report acknowledged that earlier predictions — that retailers could use big data analytics to boost operating margins by more than 60 percent, and that the healthcare sector could likewise use the technology to slice costs by 8 percent — haven’t played out.

While massively scaled companies like [company]Amazon[/company] and [company]Google[/company] use data analytics to wring out significant costs, data analytics success at most legacy companies is limited to a few test projects or narrow pieces of the overall business. Very few of those accounts “have achieved what we would call ‘big impact through big data,’ or impact at scale,” according to McKinsey.

CEOs have an inflated view of their own big data projects

According to a just-released survey of 362 executives worldwide, there is a pronounced disconnect between what the CEO of a company perceives is happening with that company’s big data efforts and what the rest of the organization sees.

From the key findings of the survey, conducted by The Economist Intelligence Unit for [company]Teradata[/company]:

While 47 percent of CEOs believe that all employees have access to the data they need, only 27 percent of all respondents agree that they do. Similarly, 43 percent of CEOs think relevant data are captured and made available in real time, compared to 29 percent of all respondents. CEOs are also more likely to think that employees extract relevant insights from data – 38 percent of them hold this belief, as compared to 24 percent of all respondents and only 19 percent of senior vice presidents, vice presidents and directors.

So, while there is indeed a ton of available data, it’s still hard for most companies to parlay it into useful insights. More than half (57 percent) of the total respondents said their companies do a poor job in this respect, although almost everyone agreed that access to data and the ability to wring actionable insights out of it are critical.

Paranoia isn’t driving enough action

A whopping 60 percent of 226 execs surveyed by [company]Capgemini[/company] Consulting, in yet another report, said that big data will “disrupt” their industry in the next three years, but only 13 percent of them have big data implementations in production. And fewer than a third of those responding (27 percent) described their own big data initiatives as successful; only 8 percent described them as very successful.

So while progress has been made — as Derrick Harris and Fast Forward Labs founder Hilary Mason noted on this week’s Structure Show podcast, executives definitely know what Hadoop is now and have a grasp of what big data can do — much heavy lifting still needs to be done to realize the benefits of analyzing all that data.

The latest and greatest technologies, how they are put to use — and the cognitive dissonance between big data hype and real-world adoption — will be discussed by top names in the field at Gigaom’s Structure Data event in March.

Big data implementation status

To hear Fast Forward Labs’ founder Hilary Mason on how that company combs academia, open-source and “outsider art” sources for big data applications, check out the second half of the podcast below:

[soundcloud url="" params="color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false" width="100%" height="166" iframe="true" /]

Teradata dives further into Hadoop with RainStor acquisition

Data warehouse vendor [company]Teradata[/company] has made its fourth acquisition of the year, announcing on Wednesday it has bought data-archiving specialist RainStor for an undisclosed amount.

RainStor builds an archival system that can sit on top of Hadoop and, it claims, compress data volumes by up to 95 percent. The company has raised roughly $26 million since it was founded in 2004, with the last round — $12 million — coming in October 2012. The deal itself is neither earth-shaking nor bank-breaking (in a press release, Teradata describes the acquisition price as “not material”), but it does further clarify Teradata’s strategy for staying relevant in an increasingly scale-out, open source world.
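A compression figure like “up to 95 percent” sounds extreme, but archival data — timestamped log rows with recurring fields — is exactly the kind of repetitive input where general-purpose compressors approach that mark. The sketch below measures the ratio Python’s standard `zlib` achieves on synthetic log-style rows; this is purely illustrative and says nothing about RainStor’s actual (proprietary) techniques.

```python
import zlib

# Synthetic archival data: log-style rows with heavily recurring fields.
# This is NOT RainStor's algorithm, just an illustration of why
# repetitive archival data compresses so well with any decent codec.
rows = [
    f"2012-10-{day:02d} 12:00:00,sensor-{day % 4},status=OK,value=42\n"
    for day in range(1, 31)
] * 100  # repeat the block to simulate a large archive
raw = "".join(rows).encode("utf-8")

compressed = zlib.compress(raw, level=9)
ratio = 1 - len(compressed) / len(raw)
print(f"raw={len(raw)} bytes, compressed={len(compressed)} bytes, saved={ratio:.0%}")
```

On input this repetitive, the space savings comfortably exceed 90 percent; real-world archives compress less uniformly, which is why vendors hedge with “up to.”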

Taken as a whole with the company’s other acquisitions, including Hadapt and Think Big Analytics, it’s pretty clear that Teradata wants to play a bigger role in companies’ big data environments than just that of a data warehouse and business intelligence provider. If customers are intent on storing and analyzing more data more cheaply in Hadoop or NoSQL data stores, Teradata would rather help them do that and accept a smaller profit margin than lose that data and those workloads altogether.

The big question now is how long the Hadoop market will continue to play nice with existing data-management vendors. With one so-far successful IPO under its belt and others presumably on the way, it’s conceivable that companies such as Cloudera, Hortonworks and MapR will attempt to grab a bigger piece of the pie as their war chests grow.

Teradata embraces the big data ecosystem, buys Think Big Analytics

Data warehouse vendor Teradata continues to step up its game in the broader big data market, this time by acquiring consulting firm Think Big Analytics, which specializes in helping clients deploy open source technologies and build analytics applications.

Teradata says Hadoop is good for business — but for how long?

Teradata announced a new set of features and products on Monday that should improve its position as a go-to analytics vendor even in an age of Hadoop. But as open source technologies evolve, Teradata might face a challenge in attracting new users.

Teradata gets its cloud and NoSQL on as big data pressures mount

Teradata is expanding out of the appliance world by offering a fully managed version of its data warehouse software as a cloud service. The company is also dipping a toe into the NoSQL world — and the internet of things — with support for JSON data.

Teradata plunges 17% on Q3 warning: Is it economics, or Hadoop?

I think this is more about Hadoop and other emerging technologies than the analysts quoted here are willing to admit. Why do you think Teradata is pushing its Hadoop story so much lately? There is, for example, crazy excitement around big data and Hadoop in China. Customers with blank slates center their efforts around Hadoop, while big existing customers are trying to offload more to Hadoop. Teradata sales are fairly flat right now even in the U.S. because big existing customers are getting bigger but fewer are signing up.

Teradata Aster now does graph processing

Teradata has upped the capabilities of its Teradata Aster big data platform by adding in a native graph-processing engine called SQL-GR. Not a bad idea considering the increased attention around graph processing lately, as well as the need for an aging Teradata to keep up with (or ahead of) the Joneses in the big data space. And Teradata’s SNAP Framework — which ingests a query and then decides the right processing engines and data stores to invoke — is pretty sweet in theory.
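Teradata hasn’t published SNAP’s internals, but the core idea — inspect a query, then pick the engine best suited to it — can be sketched as a simple rule-based router. The engine names and keyword rules below are invented for illustration, not a description of SNAP’s real dispatch logic.

```python
# A toy query router in the spirit of SNAP: look at the query text and
# decide which processing engine should handle it. Engine names and the
# matching rules are hypothetical.
def route(query: str) -> str:
    q = query.lower()
    if "graph" in q or "path" in q or "connected" in q:
        return "graph-engine"   # e.g. a graph processor like SQL-GR
    if "avg(" in q or "sum(" in q or "group by" in q:
        return "sql-engine"     # relational/analytic workloads
    return "default-engine"     # everything else

print(route("find shortest path between members"))          # graph-engine
print(route("SELECT avg(spend) FROM orders GROUP BY region"))  # sql-engine
```

A production framework would of course parse the query properly and weigh data locality and engine cost, but the dispatch-on-inspection shape is the same.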

Prepare for change! This is not your father’s database industry

Incumbent database vendors aren’t exactly struggling to make ends meet, but the smart ones know that resting on their laurels might get them there someday. That’s because open source technologies like NoSQL and Hadoop are coming after their business.