Geoff Hinton, one of the godfathers of deep learning and neural network research, did a fascinating Ask Me Anything on Reddit late last week. Hinton, who now splits his time between the University of Toronto and Google, touched on just about everything important in the field: which methods work best for what, whether neural networks can achieve general artificial intelligence, debates over deep learning hype, and even the theory he is developing to improve model performance with a new type of neuron he calls “capsules.”
Here are some highlights focusing on questions about Google’s techniques and the experience of working for Google, as well as general questions about the next big things in the deep learning space. (The questions have been trimmed a bit but link to the full threads.) But seriously, anyone interested in artificial intelligence as a researcher, entrepreneur or even investor should go read the whole thing, which contains a lot more information.
On deep learning research at Google
I think that Google, Facebook, Microsoft Research, and a few other labs are the new Bell Labs. I don’t think it was a big problem that a lot of the most important research half a century ago was done at Bell Labs. We got transistors, Unix and a lot of other good stuff.
Hi Professor Hinton. Since you recently joined Google, will your research there be proprietary? I’m just worried that the work of one of the most important researchers in the field is now closed off to a single company.
Actually, Google encourages us to publish. The main thing I have been working on is my capsules theory and I haven’t published because I haven’t got it to work to my satisfaction yet.
It depends how your learning methods scale. For example, if you do phrase-based translation that relies on having seen particular phrases before, you need hugely more data to make a small improvement. If you use recurrent neural nets, however, the marginal effect of extra data is much greater.
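To make Hinton’s point concrete, here is a toy sketch (not from the AMA, with made-up phrase data) of why a phrase-based approach is so data-hungry: a phrase table can only translate phrases it has seen verbatim, so every unseen input fails outright and improvement requires ever more memorized phrases, whereas a learned model can generalize beyond its training examples.

```python
# Toy illustration (hypothetical data): a phrase-table "translator" handles
# only phrases it has literally seen before; there is no generalization.
phrase_table = {
    "good morning": "bonjour",
    "thank you": "merci",
    "good evening": "bonsoir",
}

def translate(phrase: str) -> str:
    # Verbatim lookup: an unseen phrase, however similar to a seen one,
    # produces no translation at all.
    return phrase_table.get(phrase, "<unknown>")

print(translate("thank you"))       # seen before -> "merci"
print(translate("good afternoon"))  # never seen  -> "<unknown>"
```

The lookup fails on “good afternoon” even though the table contains “good morning” and “good evening”; covering that gap means adding yet another entry, which is why extra data buys only small improvements for this kind of system.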
In 2012, Alex Krizhevsky trained the system that blew away the computer vision state-of-the-art on two GPUs in his bedroom. Google (with Alex’s help) has now halved the error rate of that system using more computation. But I believe it’s still possible to achieve spectacular new deep learning results with modest resources if you have a radically new idea.
I find it likely that there are channels for fairly direct transfer of knowledge from companies like Google to U.S. (and possibly some other) spy agencies. Do you share my concerns about this, and is it something that people in the machine learning community around you discuss and try to deal with?
Technology is not itself inherently good or bad—the key is ethical deployment. So far as I can tell, Google really cares about ensuring technology is deployed responsibly. That’s why I am happy to work for them but not happy to take money from the “defense” department.
On how the field will evolve over time
In your opinion, which of the following ideas contain the lowest hanging fruit for improving accuracy on today’s typical classification problems: 1) Better hardware and bigger machine clusters; 2) Better algorithm implementations and optimizations; 3) Entirely new ideas and angles of attack?
I think entirely new ideas and approaches are the most important way to make major progress, but they are not low-hanging. They typically involve a lot of work and many disappointments. Better machines, better implementations and better optimization methods are all important, and I don’t want to choose between them. I think you left out slightly new ideas, which are what drive a lot of the day-to-day progress. A bunch of slightly new ideas that play well together can have a big impact.
I cannot see ten years into the future. For me, the wall of fog starts at about five years. (Progress is exponential, and so is the effect of the fog, so it’s a very good model for the fact that the next few years are pretty clear and a few years after that things become totally opaque.) I think that the most exciting areas over the next five years will be really understanding videos and text. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. I have had a lot of disappointments.
All good researchers will tell you that the most promising direction is the one they are currently pursuing. If they thought something else was more promising, they would be doing that instead.
I think the long-term future is quite likely to be something that most researchers currently regard as utterly ridiculous and would certainly reject as a NIPS paper. But this isn’t much help.
For more on the state of the art and future of deep learning and artificial intelligence, check out the recap and videos of Gigaom’s recent Future of AI event, or this Reddit Ask Me Anything with Facebook AI director and NYU researcher Yann LeCun from May.