It’s not Skynet yet: In machine learning, there’s still a role for humans

If you’ve ever seen any of The Terminator films, you’re familiar with Skynet, the self-aware computing system at odds with humanity. But even though a perception persists that machines can increasingly solve complex problems and process large amounts of data on their own, machine learning experts say humans still play a very important role.
Human intervention is critical at multiple layers, from choosing which algorithms to apply, to creating features, to crafting the entire structure within which a machine will learn, said Scott Brave, founder and CTO of Baynote, at GigaOM’s Structure: Data conference Wednesday.
Down the road, he said, there will be more opportunities for human-machine collaboration, as data scientists observe what the machines are learning and then feed new inputs and ideas into the system.
“A lot of times we forget that even though it’s big data, the amount of data that the machine has access to pales in comparison to the amount of data we’re absorbing and have access to,” he said. “We’re building intuitions and holistic pictures in our minds and we see these connections that the machine might not even have the possibility of seeing because it doesn’t have the right data.”
Humans have a powerful role to play in figuring out which sources of data to give the machine and in projecting their intuition onto it, he added.
Still, Timothy Estes, founder and CEO of Digital Reasoning, pointed out that there are three key areas in which machine bests man – and, over time, they could give rise to some interesting social and cultural questions.
Humans will never be able to consume the sheer amount of data machines can process (short of some “Ray Kurzweil-style” merging of man and machine); humans weren’t designed to receive thousands of inputs at once; and we’re ill-equipped to build a unified model of knowledge across that scale of information and make judgments from it, Estes said.
Recognizing that, he said, he predicts a social debate between a Google-like model of artificial intelligence, in which the machine simply tells you what to do next, and a software model that assumes more human agency.
“I believe we’re going to see that [debate] play out in the next decade between the software-centric model – a personal empowerment model – and a collective model,” he said. “And that’s the Skynet problem… you get a computer with intentionality that has access to data and the next thing you know you’re looking for a robot coming back from the future.”
Check out <a href="">the rest of our Structure:Data 2013 coverage here</a>, and a video embed of the session follows below:
A transcription of the video follows on the next page.