Is machine learning the missing link between big data and the consumer?

Increasingly, machine learning is the critical process making sense of big data for consumers. From music to cooking to office apps, machine learning is providing a twist on expert systems for the mass market. In the news just this week:

  • IBM’s Watson was carted out—literally, in a food truck—at SXSW to serve up custom-designed recipes based on a customer’s preferences for ingredients, ethnic cuisine, and food type. The cart drew a long line, although the recipients of Watson’s prescribed concoctions had to go home and cook up the recipes in order to test the fit with their taste buds.
  • Spotify announced its acquisition of The Echo Nest, which provides music discovery and personalization (currently used in Beats Music and Xbox Music), among other music analysis services.
  • TechCrunch reported on eBay’s internal development of machine learning that interprets context in ad content, augmenting its language translation and localization of ads.
  • Microsoft announced Office Graph, an adaptation of its Yammer Enterprise Graph technology for Office 365.

In other applications, Facebook in January added a Trending feature that uses natural language processing to anticipate the personal interests of its members. Last year Gigaom profiled Pondera Solutions’ use of Google’s Prediction API machine learning tools to detect fraud. This year, Pondera is extending its capability to Google Glass to support the field investigation of fraud.

All these applications unlock value by individualizing data—and often big data—within a personal context.
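The common thread can be made concrete with a minimal sketch. The following is an illustrative (not any vendor's actual) content-based personalization routine: each user and each item is represented as a vector of feature affinities (hypothetical genre features here), and items are ranked by similarity to the individual's profile.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, catalog, top_n=2):
    # Rank catalog items by similarity to this user's taste vector.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine(user_profile, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

# Hypothetical feature order: [rock, jazz, electronic]
user = [0.9, 0.1, 0.4]
catalog = {
    "Track A": [1.0, 0.0, 0.2],
    "Track B": [0.0, 1.0, 0.1],
    "Track C": [0.5, 0.0, 0.9],
}
print(recommend(user, catalog))
```

The same shape of computation, applied at scale to listening histories, purchase records, or claims data, is what lets these services individualize a large shared dataset for each person.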

Cause for skepticism

Gigaom has been generous with its skepticism as to IBM’s ability to leverage Watson to the level to which it aspires. When IBM launched its new division for the technology, Gigaom suggested that “If Watson is going to be a $10B biz, IBM had better figure out the cloud,” and this week it pointed out that IBM is still scrambling to find an effective way to monetize the Watson technology. Venture capitalists, too, have registered skepticism about startups creating their own machine learning algorithms without a strong applications base, or in imitation of what has already been developed.

A maturing technology

However, concerns about the business models of suppliers of machine learning technology and applications are not the same as doubts about the technology and its growing adoption. Thirty years after expert systems were originally hyped, knowledge-based systems are finally maturing. Cloud and SaaS delivery are expanding the range of applications for which they are viable. Indeed, relatively simple consumer systems, such as personalized music selection, are being adopted at least as rapidly as more traditional ‘expert’ systems, such as Watson’s first commercial application in healthcare.

What does it mean?

What does this early proliferation of machine learning mean for the enterprise, for the optimization of big data, and for the selection and management of the underlying systems? Among other implications, it means the technology has become central to the big data discussion, and front and center at Gigaom’s premier data conference.

Gigaom’s Structure Data conference in New York on March 19 and 20 will look at machine learning and examine its growing role as the link between big data and the consumer. Among the relevant sessions at Structure Data will be the following:

  • Democratizing artificial intelligence with APIs, including panelist Stephen Gold—VP, Worldwide Marketing and Sales Operations, Watson Solutions, IBM Software Group,
  • When you’re talking or typing, AI is there,
  • Bing, Xbox and the results of machine learning at Microsoft,
  • Why the future of social search is semantic,
  • Data Lab: Echo Nest, and
  • Mapping session: Machine learning – when does the payoff start?