Voices in AI – Episode 64: A Conversation with Eli David

[voices_in_ai_byline]

About this Episode

Episode 64 of Voices in AI features host Byron Reese and Dr. Eli David discussing evolutionary computation, deep learning and neural networks, as well as AI's role in improving cyber-security. Dr. David is the CTO and co-founder of Deep Instinct and has published multiple papers on deep learning and genetic algorithms in leading AI journals.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. And today, our guest is Dr. Eli David. He is the CTO and the co-founder of Deep Instinct. He’s an expert in the field of computational intelligence, specializing in deep learning and evolutionary computation. He’s published more than 30 papers in leading AI journals and conferences, mostly focusing on applications of deep learning and genetic algorithms in various real-world domains. Welcome to the show, Eli.
Eli David: Thank you very much. Great to be here.
So bring us up to date, or let everybody know what we mean by evolutionary computation, deep learning and neural networks. Because all three of those are things that, let's just say, aren't necessarily crystal clear in everybody's minds. So let's begin by defining your terms. Explain those three concepts to us.
Sure, definitely. Now, both neural networks and evolutionary computation take inspiration from intelligence in nature. If instead of trying to come up with smart mathematical ways of creating intelligence, we just look at nature to see how intelligence works there, we can reach two very obvious conclusions. First, the only algorithm that is in charge of creating intelligence – we started from single-cell organisms billions of years ago, and now we are intelligent organisms – the main algorithm, or maybe the only algorithm, in charge of that was evolution. So evolutionary computation takes inspiration from the evolutionary process in nature and tries to evolve computer programs so that, from one generation to the next, they become smarter and smarter; and the smarter they are, the more they breed, the more children they have, so hopefully the smart genes improve one generation after the other.
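A minimal, hypothetical sketch of that idea in Python follows: candidate "programs" are just bit strings, a fitness function stands in for how "smart" each one is, the fittest are more likely to breed, and small random mutations keep the population varied from one generation to the next. The target string, population size and mutation rate are illustrative assumptions, not anything from Dr. David's work.

```python
import random

# Minimal genetic algorithm sketch (illustrative only).
# Goal: evolve a bit string that matches a hidden target, a stand-in for
# "the smarter they are, the more they breed."

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # assumed toy problem
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 0.05

def fitness(candidate):
    """Count how many bits match the target (higher means 'smarter')."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    """Single-point crossover: the child inherits genes from both parents."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(candidate):
    """Flip each bit with a small probability to keep diversity in the population."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Fitter individuals are more likely to be chosen as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best individual:", best, "fitness:", fitness(best))
```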
The other thing that we will notice when we observe nature is brains. Nearly all the intelligence in humans, other mammals or the intelligent animals is due to a neural network, a network of neurons, which we refer to as a brain — many small processing units connected to each other via what we call synapses. In our brains, for example, we have many tens of billions of such neurons, each one of them, on average, connected to about ten thousand other neurons, and these small processing units connected to each other create the brain; they create all our intelligence. So the two fields of evolutionary computation and artificial neural networks, nowadays referred to as deep learning, and we will shortly dwell on the difference as well, take direct inspiration from nature.
Now, what is the difference between deep learning, deep neural networks, traditional neural networks, etc.? Neural networks are not a new field; already in the 1980s, we had most of the concepts that we have today. But the main difference is that during the past several years we had several major breakthroughs, whereas until then we could train only shallow artificial neural networks, just a few layers of neurons, just a few thousand synapses, or connectors. A few years ago, we managed to make these neural networks deep, so instead of a few layers, we have many tens of layers; instead of a few thousand connectors, we now have hundreds of millions, or billions, of connectors. So instead of shallow neural networks, nowadays we have deep neural networks, also known as deep learning. So deep learning and deep neural networks are synonyms.
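To put rough numbers on the shallow-versus-deep distinction, here is a hedged sketch in PyTorch (an assumed framework choice): a one-hidden-layer network of the kind that was trainable decades ago next to one with tens of stacked layers and millions of connections. The layer sizes are arbitrary illustrations.

```python
import torch.nn as nn

# Illustrative only: a "shallow" network versus a "deep" one.
shallow_net = nn.Sequential(
    nn.Linear(784, 64), nn.Sigmoid(),   # one hidden layer, a few tens of thousands of weights
    nn.Linear(64, 10),
)

def make_deep_net(depth=20, width=512):
    """Stack many hidden layers; modern training tricks make this feasible."""
    layers = [nn.Linear(784, width), nn.ReLU()]
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

deep_net = make_deep_net()
print(sum(p.numel() for p in shallow_net.parameters()))  # roughly 50 thousand parameters
print(sum(p.numel() for p in deep_net.parameters()))     # millions of parameters
```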
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 63: A Conversation with Hillery Hunter

[voices_in_ai_byline]

About this Episode

Episode 63 of Voices in AI features host Byron Reese and Hillery Hunter discussing AI, deep learning, power efficiency, and understanding the complexity of what AI does with the data it is fed. Hillery Hunter is an IBM Fellow and holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, our guest is Hillery Hunter. She is an IBM Fellow, and she holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign. Welcome to the show, Hillery.
Thank you, it's such a pleasure to be here today; looking forward to this discussion, Byron.
So, I always like to start off with my Rorschach test question, which is: what is artificial intelligence, and why is it artificial?
You know, that's a great question. My background is in hardware and in systems and in the actual compute substrate for AI. So one of the things I like to do is sort of demystify what AI is. There are certainly a lot of definitions out there, but I like to take people to the math that's actually happening in the background. So when we talk about AI today, especially in the popular press, and people talk about the things that AI is doing, be it understanding medical scans or labelling people's pictures on a social media platform, or understanding speech or translating language, all those things that are considered core functions of AI today are actually deep learning, which means using many-layered neural networks to solve a problem.
There are also other parts of AI, though, that are much less discussed in the popular press, which include knowledge and reasoning and creativity and all these other aspects. And the reality is, where we are today with AI, we're seeing a lot of productivity from the deep learning space, and ultimately those are big math equations that are solved with lots of matrix math; we're basically creating a big equation that fits its parameters to the set of data that it was fed.
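Her "big equation" can be made concrete with a small, hypothetical sketch: a many-layered network is just repeated matrix multiplications with simple nonlinearities in between, and training consists of adjusting those matrices (the parameters) until the output fits the data. The shapes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Assumed toy shapes: three layers mapping a 100-dim input to 10 outputs.
weights = [rng.normal(size=(100, 256)),
           rng.normal(size=(256, 256)),
           rng.normal(size=(256, 10))]

def forward(x):
    """One pass through the 'big equation'; the learnable parameters are the matrices."""
    for w in weights[:-1]:
        x = relu(x @ w)          # matrix multiply plus a nonlinearity, layer by layer
    return x @ weights[-1]       # final linear layer produces the prediction

prediction = forward(rng.normal(size=(1, 100)))
print(prediction.shape)          # (1, 10)
```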
So, would you say though that it is actually intelligent, or that it is emulating intelligence, or would you say there's no difference between those two things?
Yeah, so I'm really quite pragmatic, as you just heard from me saying, "Okay, let's go talk about what the math is that's happening," and right now where we're at with AI is relatively narrow capabilities. AI is good at doing things like classification, or answering yes-and-no kinds of questions on data that it was fed, and so in some sense it's mimicking intelligence in that it is taking in, sort of, the human sensory data a computer can take in. What I mean by that is it can take in visual data or auditory data; people are even working on other kinds of sensory data and things like that. But basically a computer can now take in things that we would consider human-processed data, so visual things and auditory things, and make determinations as to what it thinks it is, but it's certainly far from something that's actually thinking and reasoning and showing intelligence.
Well, staying squarely in the practical realm, that approach, which is basically, let’s look at the past and make guesses about the future, what is the limit of what that can do? I mean, for instance, is that approach going to master natural language for instance? Can you just feed a machine enough printed material and have it be able to converse? Like what are some things that model may not actually be able to do?
Yeah, you know, it's interesting because there's a lot of debate. What are we doing today that's different from analytics? We had the big data era, and we talked about doing analytics on the data. What's new, what's different, and why are we calling it AI now? To approach your question from that direction: one of the things that AI models do, be it anything from a deep learning model to something that's more in the knowledge-reasoning area, is that they're much better interpolators; they're much better able to predict on things that they've never seen before.
Classical rigid models that people programmed into computers could answer, "Oh, I've seen that thing before." With deep learning and with more modern AI techniques, we are pushing forward into computers and models being able to guess on things that they haven't exactly seen before. So in that sense there's a good amount of interpolation in flux; whether or not, and how, AI pushes into forecasting on things well outside the bounds of what it has seen before, and moving AI models to be effective on types of data that are very different from what they've seen before, is the type of advancement that people are really pushing for at this point.
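A small, hypothetical sketch illustrates the interpolation-versus-extrapolation point she is making: a model fitted to data from one range predicts well inside that range and poorly far outside it. The sine-wave data and polynomial model are stand-ins, not anything specific to IBM's work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from a limited range.
x_train = np.linspace(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)

coeffs = np.polyfit(x_train, y_train, deg=7)      # fit a 7th-degree polynomial

x_inside = np.linspace(1.0, 5.0, 50)              # within the training range
x_outside = np.linspace(8.0, 10.0, 50)            # far outside it

err_inside = np.abs(np.polyval(coeffs, x_inside) - np.sin(x_inside)).mean()
err_outside = np.abs(np.polyval(coeffs, x_outside) - np.sin(x_outside)).mean()

print(f"interpolation error: {err_inside:.3f}")    # small
print(f"extrapolation error: {err_outside:.3f}")   # typically much larger
```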
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 56: A Conversation with Babak Hodjat

[voices_in_ai_byline]

About this Episode

Episode 56 of Voices in AI features host Byron Reese and Babak Hodjat talking about genetic algorithms, cyber agriculture, and sentience. Babak Hodjat is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Babak Hodjat, he is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence. Welcome to the show, Babak.
Babak Hodjat: Great to be here, thank you.
Let’s start off with my normal intro question, which is, what is artificial intelligence?
Yes, what a question. Well we know what artificial is, I think mainly the crux of this question is, “What is intelligence?”
Well actually no, there are two different senses in which it’s artificial. One is that it’s not really intelligence, it’s like artificial turf isn’t really grass, that it just looks like intelligence, but it’s not really. And the other one is, oh no it’s really intelligent it just happens to be something we made.
Yeah, it's the latter definition, I think, that is the consensus. I'm saying this partly because there was a movement to call it machine intelligence, and there were other names for it as well, but with artificial intelligence, certainly the emphasis is on the fact that, as humans, we've been able to construct something that gives us a sense of intelligence. The main question then is, "What is this thing called intelligence?" And depending on how you answer that question, actual manifestations of AI have differed through the years.
There was a period in which AI was considered: If it tricks you into believing that it is intelligent, then it’s intelligent. So, if that’s the definition, then everything is fair game. You can cram this system with a whole bunch of rules, and back then we called them expert systems, and when you interact with these rule sets that are quite rigid, it might give you a sense of intelligence.
Then there was a movement around actually building intelligent systems through machine learning, and mimicking how nature creates intelligence. Neural networks, genetic algorithms and reinforcement learning in its early form were some of the approaches, amongst many others that were proposed and suggested, but they would not scale. So the problem there was that they did actually show some very interesting properties of intelligence, namely learning, but they didn't quite scale, for a number of different reasons: partly because we didn't quite have the algorithms down yet, also the algorithms could not make use of scalable compute, and compute and memory storage were expensive.
Then we switched to a redefinition in which we said, "Well, intelligence is about these smaller problem areas," and that was the mid-to-late 90s, when there was more interest in agenthood and agent-based, agent-oriented systems, where the agent was tasked with a simplified environment to solve. And intelligence was abstracted into: if we were tasked with a reduced set of tools to interact with the world, and our world was much simpler than it is right now, how would we operate? That was the definition of intelligence, and those are agent-based systems.
We've kind of swung back to machine learning based systems, partly because there have been some breakthroughs in the past, I would say, 10-15 years in neural networks, in learning how to scale this technology, and an awesome rebranding of neural networks—calling them deep learning—and the field has flourished on the back of that. Of course it doesn't hurt that we have cheap compute and storage and lots and lots of data to feed these systems.
You know, one of the earlier things you said is that we try to mimic how nature creates intelligence, and you listed three examples: neural nets, then genetic algorithms—how we evolve things—and reinforcement learning. I would probably agree with evolutionary algorithms, but do you really think… I've always thought neural nets, like you said, don't really act like neurons. It's a convenient metaphor, I guess, but do you really consider neural nets to be derived from biology, or are they just an analogy from biology?
Well, it was very much inspired by biology, very much so. I mean, the models that we had of how we thought neurons, the synapses between neurons and the chemistry of the brain operate fuel this field, absolutely. But these are very simplified versions of what the brain actually does, and every day there's more learning about how brain cells operate. I was just reading an article yesterday about how RNA can capture memory, and how the basal ganglia also have a learning type of function—it's not just the pre-frontal cortex. There's a lot of complexity and depth in how the brain operates that is completely lost when you simplify it. So absolutely we're inspired, definitely, but this is not a model of the brain by any stretch of the imagination.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 9: A Conversation with Soumith Chintala

[voices_in_ai_byline]
In this episode, Byron and Soumith talk about transfer learning, child development, pain, neural networks, and adversarial networks.
[podcast_player name="Episode 9: A Conversation with Soumith Chintala" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-16-(00-46-22)-soumith-chintala.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Soumith Chintala. He is an Artificial Intelligence Research Engineer over at Facebook. He holds a Master of Science in Computer Science from NYU. Welcome to the show, Soumith.
Soumith Chintala: Thanks, Byron. I am glad to be on the show.
So let’s start out with your background. How did you get to where you are today? I have been reading over your LinkedIn, and it’s pretty fascinating.
It’s almost accidental that I got into AI. I wanted to be an artist, more of a digital artist, and I went to intern at a visual effects studio. After the summer, I realized that I had no talent in that direction, so I instead picked something closer to where my core strength lies, which is programming.
I started working in computer vision, but just on my own in undergrad. And slowly and steadily, I got to CMU to do robotics research. But this was back in 2009, and still deep learning wasn’t really a thing, and AI wasn’t like a hot topic. I was doing stuff like teaching robots to play soccer and doing face recognition and stuff like that.
And then I applied for master’s programs at a bunch of places. I got into NYU, and I didn’t actually know what neural networks were or anything. Yann LeCun, in 2010, was more accessible than he is today, so I went, met with him, and I asked him what kind of computer vision work he could give me to do as a grad student. And he asked me if I knew what neural networks were, and I said no.
This was a stalwart in the field who I’m sitting in front of, and I’m like, “I don’t know, explain neural networks to me.” But he was very kind, and he guided me in the right direction. And I went on to work for a couple of years at NYU as a master’s student and simultaneously as a junior research scientist. I spent another year, almost a year there as a research scientist while also separately doing my startup.
I was part of a music and machine learning startup where we were trying to teach machines to understand and play music. That startup went south, and I was looking for new things. And at the same time, I’d started maintaining this tool called Torch, which was the industry-wide standard for deep learning back then. And so Yann asked me if I wanted to come to Facebook, because they were using a lot of Torch, and they wanted some experts in there.
That’s how I came about, and once I was at Facebook, I did a lot of things—research on adversarial networks, engineering, building PyTorch, etc.
Let’s go through some of that stuff. I’m curious about it. With regard to neural nets, in what way do you think they are similar to how the brain operates, and in what way are they completely different?
I’d say they’re completely different, period. We think they’re similar in very high-level and vague terms like, “Oh, they do hierarchical learning, like humans seem to think as well.” That’s pretty much where the similarity ends. We think, and we hypothesize, that in some very, very high-level way, artificial neural networks learn like human brains, but that’s about it.
So, the effort in Europe—the well-funded effort—The Human Brain Project, which is deliberately trying to build an AGI based on the human brain… Do you think that’s a worthwhile approach or not?
I think all scientific approaches, all scientific explorations are worthwhile, because unless we know… And it’s a reasonably motivated effort, right? It’s not like some random people with bad ideas are trying to put this together; it’s a very well-respected effort with a lot of experts.
I personally wouldn’t necessarily take that direction, because there are many approaches to these things. One is to reverse-engineer the brain at a very fundamental level, and try to put it back together exactly as it was. It’s like investigating a car engine… not knowing how it works, but taking X-ray scans of it and all that, and trying to put it back together and hoping it works.
I’m not sure if that would work with as complicated a system as the brain. So, in terms of the approach, I’m not sure I would do it the same way. But I think it’s always healthy to explore various different directions.
Some people speculate that a single neuron is as complicated in its operations as a supercomputer, which either implies we won’t get to an AGI, or we certainly won’t get it by building something like the human brain.  
Let’s talk about vision for just a minute. If I show a person just one sample of some object, a statue of a raven, and then I show them a hundred photos with it partially obscured, on its side, in the dark or half underwater, weirdly lit—a person could just boom, boom, boom, pick it all out.
But you can’t train computers anything like that. They need so many samples, so many examples. What do you think is going on? What are humans doing that we haven’t taught computers how to do?
I think it’s just the diversity of tasks we handle every day. If we had a machine learning model that was also handling so many diverse tasks as humans do, it would be able to just pick out a raven out of a complicated image just fine. It’s just that when machines are being trained to identify ravens, they’re being trained to identify ravens from a database of images that don’t look very much like the complicated image that they’ve been given.
And because they don’t handle a diverse set of tasks, they’re doing very specific things. They kind of overfit to the dataset they have been given, in some way. I think this is just a matter of increasing the number of tasks we can make a single machine learning model do, and over time, they will get as smart. Of course, the hard problem is we haven’t figured out how to make the same model do a wide variety of tasks.
So that’s transfer learning, and it’s something humans seem to do very well.
Yes.
Does it hinder us that we take such an isolated, domain-specific view when we’re building neural AIs? We say, “Well, we can’t teach it everything, so let’s just teach it how to spot ravens,” and we reinvent the wheel each time? Do you have a gut intuition where the core, the secret of transfer learning at scale is hiding?
Yeah. It’s not that we don’t want to build models that can do a wide variety of tasks. It’s just that we haven’t figured it out yet. The most popular research that you see in media, that’s being highlighted, is the research that gets superhuman abilities in some specific niche task.
But there’s a lot of research that we deal with day-to-day, that we read about, that is not highlighted in popular media, which tries to do one-shot learning, smarter transfer learning and so on. And as a field, we’re still trying to figure out how to do this properly. I don’t think, as a community of AI researchers, we’re restricting ourselves to just doing these expert systems. It’s just that we haven’t yet figured out how to do more diverse systems.
Well, you said neural nets aren’t much like the human brain. Would you say just in general, mechanical intelligence is different than human intelligence? Or should one watch how children learn things, or study how people recognize what they do, and cognitive biases and all of that?
I think there is a lot of value in doing cognitive science, like looking at how child development happens, and we do that a lot. A lot of inspiration and ideas, even in machine learning and neural networks, does come from looking at such aspects of human learning and human intelligence. And it’s being done.
We collaborate, for example at FAIR—Facebook AI Research—with a few researchers who do try to understand child development and child learning. We’ve been building projects in that direction. For example, children learn things like object permanence between certain ages. If you hide something from a child and then make it reappear, does the child understand that you just put it behind your back and then just showed it to them again? Or does a child think that that object actually just disappeared and then appeared again?
So, these kinds of things are heavily-studied, and we try to understand how the mechanisms of learning are… And we’ve been trying to replicate these for neural networks as well. Can a neural network understand what object permanence is? Can a neural network understand how physics works? Children learn how physics works by playing a lot, playing with blocks, playing with various things in their environment. And we’re trying to see if neural networks can do the same.
There’s a lot of inspiration that can be taken from how humans learn. But there is a slight separation between whether we should exactly replicate how neurons work in a human brain versus how neurons work in a computer; because human brain neurons, their learning mechanisms and their activation mechanisms, use very different chemicals, different acids and proteins.
And the fundamental building blocks in a computer are very different. You have transistors, and they work bit-wise and so on. At a fundamental block level, we shouldn’t really look for exact inspirations, but at a very high level, we should definitely look for inspiration.
You used the word ‘understand’ several times, in that “Does the computer understand?” Do computers actually understand anything? Is that maybe the problem, that they don’t actually have an experiencing self that understands?
There’s—as they say in the field—‘nobody home’, and therefore there are just going to be these limits of things that come easy to us because we have a self, and we do understand things. But all a computer can do is sense things. Is that a meaningful distinction?
We can sense things, and a computer can sense things in the sense that you have a sensor. You can consume visual inputs, audio inputs, stuff like that. But understanding can be as simple as statistical understanding. You see something very frequently, and you associate that frequency with this particular association of a term or an object. Humans have a statistical understanding of things, and they have a causal understanding of things. We have various different understanding approaches.
And machines can, at this point, with neural networks and stuff… We take a statistical or frequentist approach to things, and we can do them really well. There’s other aspects of machine learning research as well that try to do different kinds of understanding. Causal models try to consume data and see if there’s a causal relationship between two sets of variables and so on.
There’s various levels of understanding, and understanding itself is not a magical word that can be broken down. I think we can break it down into what kinds and what approaches of understanding. Machines can do certain types of understanding, and humans can do certain more types of understanding that machines can’t.
Well, I want to explore that for just a moment. You’re probably familiar with Searle’s Chinese Room thought experiment, but for the benefit of the listeners…
The philosopher [Searle] put out this way to think about that word [‘understanding’]. The setup is that there’s a man who speaks no Chinese, none at all, and he’s in this giant room full of all these very special books. And people slide questions written in Chinese under the door. He picks them up, and he has what I guess you’d call an algorithm.
He looks at the first symbol, he finds the book with that symbol on the spine, he looks up the second symbol that directs him to a third book, a fourth book, a fifth book. He works his way all the way through until he gets to the last character, and he copies down the characters for the answer. Again, he doesn’t know what they are talking about at all. He slides it back under the door. The Chinese speaker [outside] picks it up, reads it, and it’s perfect Chinese. It’s a perfect answer. It rhymes, and it’s insightful and pithy.  
The question that Searle is trying to pose is… Obviously, that’s all a computer does. It’s a deterministic system that runs these canned algorithms, that doesn’t understand whether it’s talking about cholera or coffee beans or what have you. That there really is something to understanding.  
And Weizenbaum, the man who wrote ELIZA, went so far as to say that when a computer says, “I understand,” that it is just a lie. Because not only is there nothing to understand, there’s just not even an ‘I’ there to understand. So, in what sense would you say a computer understands something?
I think the Chinese Room thing is an interesting puzzle. It’s a thought-provoking situation, rather. But I don’t know about the conclusions you can come to. Like, we’ve seen a lot of historical manuscripts and stuff that we’ve excavated from various regions of the world, and we didn’t understand that language at all. But, over time, through certain statistical techniques, or certain associations, we did understand which words—what the fundamental letters in these languages are, or what these words mean, and so on.
And no one told us exactly what these words mean, or what this language exactly implies. We definitely don’t know how those languages are actually pronounced. But we do understand them by making frequentist associations with certain words to other words, or certain words to certain symbols. And we understand what the word for a ‘man’ is in a certain historical language, or what the word for a ‘woman’ is.
With statistical techniques, you can actually understand what a certain word is, even if you don’t understand the underlying language beforehand. There is a lot of information you can gain, and you can actually understand and learn concepts by using statistical techniques.
If you look at one example from recent machine learning: this thing called word2vec. It’s a system, and what it does is you give it a sentence, and it replaces the center word of the sentence with a random other word from the dictionary… And it uses the sentence with this random word in the middle as a negative example, and the sentence as-is—without replacing the word—as a positive example.
Just using this simple technique, you’ll learn embeddings of words; that is, numbers associated with each word that will try to give some statistical structure to the word. With just a simple model which doesn’t understand anything about what these words mean, or in what context these words are used, you can do simple things like [ask], “Can you tell me what ‘king’, minus ‘man’, plus ‘woman’ is?”
So, when you think of ‘king’, you think, “Okay, it’s a man, a head of state.” And then you say “minus man,” so “king minus man” will try to give you a neutral character of a head of state; and then you add ‘woman’ up, and then you expect ‘queen’… And that’s exactly what the system returns, without actually understanding what each of these words specifically mean, or how they’re spelled, or what context they’re in.
So I think there is more to the story than we actually understand. That is, I think there is a certain level of understanding we can get [to] even without the prior context of knowing how things work. In the same way, computers, I think, can learn and associate certain things without knowing about the real world.
One of the common arguments is like, “Well, but computers haven’t been there and seen that, just like humans did, so they can’t actually make full associations.” That’s probably true. They can’t make full associations, but I think with partial information, they can understand certain concepts and infer certain things just with statistical and causal models that they have to learn [from].
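A hedged sketch of the "king minus man plus woman" arithmetic he describes, using made-up four-dimensional vectors (real word2vec embeddings are learned from large text corpora and have hundreds of dimensions):

```python
import numpy as np

# Hypothetical toy embeddings, chosen only to illustrate vector arithmetic.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.7]),
    "queen": np.array([0.9, 0.1, 0.8, 0.7]),
    "man":   np.array([0.1, 0.9, 0.0, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
    "apple": np.array([0.0, 0.2, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vectors["king"] - vectors["man"] + vectors["woman"]

# Rank all words (excluding the inputs) by similarity to the query vector.
candidates = {w: cosine(query, v) for w, v in vectors.items()
              if w not in {"king", "man", "woman"}}
print(max(candidates, key=candidates.get))  # expected: "queen"
```

With learned embeddings, the same nearest-neighbour lookup is what produces the "queen" answer he mentions, without the system ever being told what any of the words mean.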
Let me try my question a little differently, and we will get back to the here and now… But this, to me, is really germane because it speaks to how far we’re going to be able to go—in terms of using our present techniques and our present architectures, to build things that we deem to be intelligent.  
In your mind, could a computer ever feel pain? Surely, you can put a sensor on a computer that can take the temperature, and then you write a program so that when it hits 500 degrees, it should start playing this mp3 of somebody screaming in agony. But could a computer ever feel pain? Could it ever experience anything?
I don’t think so. Pain is something that’s been baked into humans. If you bake pain into computers, then yeah, maybe, but not without it evolving to learn what pain is, or like baking that in ourselves. I don’t think it will—
—But is knowing what pain is really the same thing as experiencing it? You can know everything about it, but the experience of stubbing your toe is something different than the knowledge of what pain is.
Yeah, it probably doesn’t know exactly what pain is. It just knows how to associate with certain things about pain. But, there are certain aspects of humans that a computer probably can’t exactly relate to… But a computer, at this stage of machines, has a visual sensor, has an audio sensor, has a speaker, and has a touch sensor. Now we’re getting to smell sensors.
Yes, the computer probably can experience every single thing that humans experience, in the same way; but I think that’s largely dissociated from what we need for intelligence. I think a computer can have its own specific intelligence, but not necessarily have all [other] aspects of humans covered. We’re not trying to replicate a human; we’re trying to replicate intelligence that the human has.
Do you believe that the techniques that we’re using today, the way we look at machine learning, the algorithms we use, basic architectures… How long is that going to fuel the advance of AI? Do you think the techniques we have now—if just given more data, faster computers, tweaked algorithms—we’ll eventually get to something as versatile as a human?  
Or do you think to get to an AGI or something like it, something that really can effortlessly move between domains, is going to require some completely unknown and undiscovered technology?
I think what you’re implying is: Do we need a breakthrough that we don’t know about yet to get to AGI?
And my honest answer is we probably do. I just don’t know what that thing looks like, because we just don’t know ahead of time, I guess. I think we are going in certain directions that we think can get us to better intelligence. Right now, where we are is that we collect a very, very large dataset, and then we throw it into a neural network model; and then it will learn something of significance.
But we are trying to reduce the amount of data the neural network needs to learn the same thing. We are trying to increase the number of tasks the same neural network can learn, and we don’t know how to do either of [those] things properly yet. Not as properly as [we do] if we want to train some dog detector by throwing large amounts of dog pictures at it.
I think through scientific process, we will get to a place where we understand better what we need. Over this process, we’ll probably have some unknown models that will come up, or some breakthroughs that will happen. And I think that is largely needed for us to get to a general AI. I definitely don’t know what the timelines are like, or what that looks like.
Talk about adversarial AI for a moment. I watched a talk you gave on the topic. Can you give us a broad overview of what the theory is, and where we are at with it?
Sure. Adversarial networks are these very simple ways of [using] neural networks that we built.
We’ve realized that one of the most common ways we have been training neural networks is: You give a neural network some data, and then you give it an expected output; and if the neural network gives an output that is slightly off from your expected output, you train the neural network to get better at this particular task. Over time, as you give it more data, and you tune it to give the correct output, the neural network gets better.
But adversarial networks are these slightly different formulations of machines, where you have two neural networks. And one neural network tries to synthesize some data. It takes in no inputs, or it takes some random noise as input, and then it tries to generate some data. And you have another neural network that takes in some data, whether it’s real data or data that is generated by this generator neural network. And this [second] neural network, its job is to discriminate between the real data and the generated data. This is called a discriminator network.
[So] you have two networks: the generator network that tries to synthesize artificial data; and you have a discriminator network that tries to tell apart the real data and the artificially-generated data. And the way these things are trained, is that the generator network gets rewards if it can fool the discriminator—if it can make the discriminator think that the data it synthesized is real. And the discriminator only gets rewards when it can accurately separate out the fake data from the real data.
There’s just a slightly different formulation in how these neural networks learn; and we call this an unsupervised learning algorithm, because they’re not really hooking onto any aspects of what the task at hand is. They just want to play this game between each other, regardless of what data is being synthesized. So that’s adversarial networks in short.
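To make the generator/discriminator game concrete, here is a minimal, hypothetical sketch in PyTorch on toy one-dimensional data. It follows the formulation he describes (the generator is rewarded for fooling the discriminator, the discriminator for telling real from fake); the network sizes, data distribution and learning rates are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Toy "real data": samples from a 1-D Gaussian centred at 4.0.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
batch = 64

for step in range(2000):
    real = torch.randn(batch, 1) + 4.0                 # real samples
    fake = generator(torch.randn(batch, 8))            # synthesized samples

    # Discriminator: rewarded for separating real (label 1) from fake (label 0).
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: rewarded when the discriminator is fooled into saying "real".
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(1000, 8)).mean().item())   # should drift toward ~4.0
```

The `detach()` call is the small but important detail: when the discriminator is updated, the fake samples are treated as fixed inputs so that only the generator's own update tries to fool it.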
It sounds like a digital Turing test, where one computer is trying to fool the other one into thinking that it’s got the real data.
Yeah, you could see it that way.
Where are we at, practically speaking… because it’s kind of the hot thing right now. Has this established itself? And what kinds of problems is it good at solving? Just general unsupervised learning problems?
Adversarial networks have gotten very popular because they seem to be a promising method to do unsupervised learning. And we think unsupervised learning is one of the biggest things we need to crack before we get to more intelligent machines. That’s basically the primary reason. They are a very promising method to do unsupervised learning.
Even without an AGI, there’s a lot of fear wrapped up in people about the effects of artificial intelligence, specifically automation, on the job market.
People fall into one of three groups: There are people who think that we’re going to enter a kind of permanent Great Depression, where there’s a substantial portion of the population that’s not able to add economic value.
And then another group says, “Well, actually that’s going to happen to all of us. Anything a human can do, we’re going to be able to build a machine to do.”
And then there are people who say, “No, we’ve had disruptive technologies come along, like electricity and machines and steam power, and it’s never bumped unemployment. People have just used these new machines to increase productivity and therefore wages.”
Of those three camps, where do you find yourself? Or is there a fourth one? What are your thoughts on that?
I think it’s a very important policy and social question, how to deal with AI. Yes, we have in the past had technology disruptions and adapted to them, but they didn’t happen just by market forces, right? You had certain policy changes, certain incentives and short-term boosts during the Depression. And you had certain parachutes that you had to give to people during these drastically-changing times.
So it’s a very, very important policy question on how to deal with the progress that AI is making, and what that means for the job market. I follow the camp of… I don’t think it will just solve itself, and there’s a big role that government and companies and experts have to play in understanding what kind of changes are coming, and how to deal with them.
Organizations like the UN could probably help with this transition, but also, there are a lot of non-profit companies and organizations coming up whose mission is doing AI for good, and they also have policy research going on. And I think this will play more and more of a big role, and it is very, very important for dealing with our transition into a technology world where AI becomes the norm.
So, to be clear, it sounds like you’re saying you do think that automation or AI will be substantially disruptive to the job market. Am I understanding you correctly? And that we ought to prepare for it?  
That is correct. I think, even if we have no more breakthroughs in AI as of now, if we have literally no significant progress in AI for the next five or ten years, we will still—just with the current AI technology that we [already] have—be disrupting large domains and fields and markets—
—What do you mean, specifically? Such as?
One of the most obvious is transportation, right? We largely solved the fundamental challenges in building self-driving vehicles—
—Let me interrupt you real quickly. You just said in the next five years. I mean, clearly, you’re not going to have massive displacement in that industry in five years, because even if we get over the technological hurdle, there’s still the regulatory hurdle, there’s still retrofitting machinery. That’s twenty years of transition, isn’t it?  
Umm, what I—
—In which time, everybody will retire who’s driving a truck now, and few people will enter into the field—
—What I specifically said was that even if we have no AI breakthroughs in the next five or ten years. I’m not saying that the markets themselves will change in five years. What I specifically said and meant is that even if you have no AI research breakthroughs in five years, we will still see large markets be disrupted, regardless. We don’t need another AI breakthrough to disrupt certain markets.
I see, but don’t you take any encouragement from the past? You can say transportation, but when you look at something like the replacement of animal power with mechanical power, and if you just think of all of the technology, all of the people that it displaced… Or you think of the assembly line, which is—if you think about it—a kind of AI, right?
If you’re a craftsperson who makes cars or coaches or whatever one at a time, and this new technology comes along—the assembly line—that can do it for a tenth of the price and ten times the quality. That’s incredibly disrupting. And yet, in those two instances, we didn’t have upticks in unemployment.
Yes,—
—So why would AI be different?
I think it’s just the scale of things, and the fact that we don’t understand fully how things are going to change. Yes, we can try to associate something similar in the past with something similar that’s happening right now, but I think the scale and magnitude of things is very different. You’re talking about in the past over… like over [the course of] thirty years, something has changed.
And now you’re talking about in the next ten years something will change, or something even sooner. So, the scale of things and the number of jobs that are affected, all these things are very different. It’s going to be a hard question that we have to thoroughly investigate and take proper policy change. Because of the scale of things, I don’t know if market forces will just fix things.
So, when you weigh all of that, as you said—with the technology we have now—and you look to the future, you see, in one column, a lot of disruption in the job market; and in the other, all the things that artificial intelligence can do for us, in all its various fields.
To most people, is AI therefore a good thing? Are you overall optimistic about the future with regard to this technology?
Absolutely. I think AI provides us benefits that we absolutely need as humans. There’s no doubt that the upsides are enormous. You accelerate drug discovery, you accelerate how healthcare works, you accelerate how humans transport from one place to another. The magnitude of benefits is enormous if the promises are kept, or the expectations are kept.
And dealing with the policy changes is essential. But my definite bullish view is that the upsides are so enormous that it’s totally worth it.
What would you think, in an AI world, is a good technology path to go [on], from an employment standpoint? Because I see two things. I saw pretty compelling things that say ‘data scientist’ is a super in-demand thing right now, but that it’ll be one of the first things we automate, because we can just build tools that do a lot of what that job is.
Right.
And you have people like Mark Cuban, who believes, by the way, [that] the first trillionaires will come from this technology. He said if he had it to do all over again, if he were coming up now, he would study philosophy and liberal arts, because those are the things machines won’t be able to do.
What’s your take on that? If you were getting ready to enter university right now, and you were looking for something to study, that you think would be a field that you can make a career in long-term, what would you pick?
I wouldn’t pick something based on what’s going to be hot. The way I picked my career now, and I think the way people should pick their careers is really what they’re interested in. Now if their only goal is to find a job, then maybe they should pick what Mark Cuban says.
But I also think just being a technologist of some kind, whether they try to become a scientist, or just being an expert in something technology-wise, or being a doctor… I think these things will still be helpful. I don’t know how to associate…
The question is slightly weird to me, because it’s like, “How do I make the most successful career?” And I’ve never thought about it. I’ve just thought about what do I want to do, that’s most interesting. And so I don’t have a good answer, because I’ve never thought about it deeply.
Do you enjoy science fiction? Is there anything in the science fiction world, like movies or books or TV shows, that you think represents how the future is going to turn out? You look at it and think, “Oh, yes, things could happen that way.”
I do enjoy science fiction. I don’t necessarily have specific books or movies that exactly would depict how the future looks. But I think you can take various aspects from various movies and say, “Huh, that does seem like a possibility,” but you don’t necessarily have to buy into the full story.
For example, if you look at the movie Her: You have an OS that talks to you by voice, has a personality, and evolves with its experience and all that. And that seems very reasonable to me. You probably will have voice assistants that will be smarter, and will be programmed to develop a personality and evolve with their experiences.
Now, will they go and make their own OS society? I don’t know, that seems a bit weird. In popular culture, there are various examples like this that seem like they’re definitely plausible.
Do you keep up with the OpenAI initiative, and what are your thoughts on that?
Well, OpenAI seems to be a very good research lab that does fundamental AI research, tries to make progress in the field, just like all of the others are doing. They seem to have a specific mission to be non-profit, and whatever research they do, they want to try to not tie it to a particular company. I think they’re doing good work.
I guess the traditional worry about it is that an AGI, if we built one, is of essentially limitless value, if you can make digital copies of it. If you think about it, all value is created, in essence, by technology—by human thought and human creativity—and if you somehow capture that genie in that bottle, you can use it for great good or great harm.
I think there are people who worry that by kind of giving ninety-nine percent of the formula away to everybody, no matter how bad their intentions are, you increase the likelihood that there’ll be one bad actor who gets that last little bit and has, essentially, control of this incredibly powerful technology.  
It would be akin to the Manhattan Project being open source, except for the very last step of the bomb. I think that’s a worry some people have expressed. What do you think?
I think AI is not going to be able to be developed in isolation. We will have to get to progress in AI collectively. I don’t think it will happen in a way where you just have a bunch of people secretly trying to develop AI, and suddenly they come up with this AGI that’s eternally powerful and something that will take over humanity, or something like that.
I don’t think that fantasy—which is one of the most popular ways you see things in fiction and in movies—will happen. The way I think it will happen is: Researchers will incrementally publish progress, and at some point… It will be gradual. AI will get smarter and smarter and smarter. Not just like some extra magic bit that will make it inhumanly smart. I don’t think that will happen.
Alright. Well, if people want to keep up with you, how do they follow you personally, and that stuff that you’re working on?
I have a Twitter account. That’s how people usually follow what I’ve been up to. It’s twitter.com/soumithchintala.
Alright, I want to thank you so much for taking the time to be on the show.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Four Questions For: Geoff Hinton

You’ve been referred to as the “godfather of neural networks.” Do you believe you’ll see true artificial intelligence in your lifetime?

It depends on what you mean by true artificial intelligence.  If you mean autonomous agents with human level abilities at perception, natural language, reasoning and motor control, probably not.  However, it’s very hard to see more than about 5 years into the future so I would not rule it out. Ten years ago, most people in AI would have been very confident that there was no hope of doing machine translation using neural nets that have to get all their linguistic knowledge from the raw training data. But that is now the approach that works best and it has just halved the gap in quality between machine translations and human translations.
What is there to fear about the existence of true artificial intelligence?
I am not too worried about the popular fantasy that evil robots will take over the world. I am much more worried about what people like Hitler or Mussolini might do if they had armies of intelligent robots at their disposal. I think there is a pressing need for international agreements on militarization of this technology.
How do you foresee AI affecting labor and the economy? Does it help or hurt?
Mechanical diggers and automatic teller machines increased productivity by eliminating a lot of tedious jobs, and very few people think that they should not have been introduced. In a fair political system, technological advances that increase productivity would be welcomed by everyone because they would allow everyone to be better off.  The technology is not the problem. The problem is a political system that doesn’t ensure the benefits accrue to everyone.
What’s the next big step for the deep learning movement?
At present, we are seeing unprecedented progress in solving tough problems that defied our best efforts for half a century.  Speech recognition is now very good and rapidly getting better. The ability to recognize objects in images has taken huge strides forward and I think computers will soon be able to understand what is going on in videos.  Neural networks have recently taken over for machine translation.  Every week, deep neural nets succeed at new and commercially significant tasks.  We have seen an amazing flowering of the basic deep learning techniques introduced 20 or more years ago. This flowering includes better types of neuron, better architectures,  better ways of making the learning work in very deep nets and better ways of getting neural networks to focus on the relevant parts of the input. Deep learning is now attracting large numbers of very smart people and huge resources, and I see no reason why this flowering should not continue for many more years.
I think that a lot of effort will be focussed on getting neural networks to really understand the content of a document. This may well involve developing new types of temporary memory, which is currently a hot topic.
One problem we still haven’t solved is getting neural nets to generalize well from small amounts of data, and I suspect that this may require radical changes in the types of neuron we use.  Eventually, I think the lessons we learn by applying deep learning will give us much better insight into how real neurons learn tasks, and I anticipate that this insight will have a big impact on deep learning.
Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at the University of California San Diego and spent five years as a faculty member in Computer Science at Carnegie-Mellon. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. In 2013, he became a Distinguished Researcher at Google and he now works part-time at the University of Toronto and part-time at Google.
Geoffrey Hinton designs machine learning algorithms. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. In 2005 he published the first paper on deep belief nets which initiated a resurgence of interest in neural networks. His students then made seminal advances in the application of deep neural networks to speech recognition, object classification, and drug design.

A look at Zeroth, Qualcomm’s effort to put AI in your smartphone

What if your smartphone camera were smart enough to identify that the plate of clams and black beans appearing in its lens was actually food? What if it then could automatically make the necessary adjustments to take a decent picture of said dish in the low-light conditions of a restaurant? And what if it then, without prompting, uploaded that photo to Foodspotting along with your location, because your camera phone knows from past experience that you like to keep an endless record of your culinary conquests for the world to see?

These are just a few of the questions that [company]Qualcomm[/company] is asking of its new cognitive computing technology Zeroth, which aims to bring artificial intelligence out of the cloud and move it – or at least a limited version of it – into your phone. At Mobile World Congress in Barcelona, I sat down with Qualcomm SVP of product management Raj Talluri, who explained what Zeroth was all about.

Zeroth phones aren’t going to beat chess grandmasters or create their own unique culinary recipes, but they will perform basic intuitive tasks and anticipate your actions, thus eliminating many of the rudimentary steps required to operate an increasingly complex smartphone, Talluri explained.

“We wanted to see if we could build deep-learning neural networks on devices you carry with you instead of in the cloud,” Talluri said. Using that approach, Qualcomm could solve certain problems surrounding the everyday use of a device.

One such problem Talluri called the camera problem. The typical smartphone can pick up a lot of images throughout the day, from selfies to landscape shots to receipts for your expense reports. You could load every image you have into the cloud and sort them there, or figure out what to do with each photo as you snap it, but cognitive computing capabilities in your phone could do much of that work, and it could do it without you telling it what to do, Talluri said.

Zeroth can train the camera not just to distinguish a landscape shot from a close-up. It can discriminate between whole classes of objects, from fruit to mountains to buildings, and it can distinguish children from adults and cats from dogs, Talluri said. What the camera does with that information depends on the user’s preferences and the application.

The most basic use case would be taking better photos, since the camera can optimize the shot for the types of objects in it. It could also populate photos with lots of useful metadata. Then you could build on that foundation with other applications. Your smartphone might recognize, for instance, that you’re taking a bunch of landscape and architecture shots in a foreign locale and automatically upload them to a vacation album on Flickr. A selfie might automatically produce a Facebook post prompt.

Zeroth devices would be pre-trained to recognize certain classes of objects – right now Qualcomm has used machine learning to create about 30 categories – but the devices could continue to learn after they’re shipped, Talluri said.
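As a rough sketch of what "pre-trained but able to keep learning on the device" could look like in code (an illustrative PyTorch example, not Qualcomm's actual implementation): a shipped feature extractor stays frozen, and only a small classification head is updated when the phone learns a new category. The tiny convnet and the 30-category starting point are assumptions for illustration.

```python
import torch
import torch.nn as nn

features = nn.Sequential(                       # stand-in for a shipped, pretrained backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in features.parameters():
    p.requires_grad = False                     # the backbone stays fixed on the device

head = nn.Linear(32, 31)                        # 30 shipped categories + 1 learned later
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def on_device_update(images, labels):
    """One cheap fine-tuning step, the kind feasible on a phone-class processor."""
    logits = head(features(images))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a small batch of user photos labelled with the new category (index 30).
batch = torch.randn(8, 3, 64, 64)
print(on_device_update(batch, torch.full((8,), 30, dtype=torch.long)))
```

Freezing the backbone keeps both the compute cost and the risk of forgetting the shipped categories low, which is roughly the trade-off an on-device learner would have to make.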

With permission, it could access your contact list and scan your social media accounts, and start recognizing the faces of your friends and family in your contact list, Talluri said. Then if you were taking a picture with a bunch of people in the frame, Zeroth would recognize your friends and focus in on their faces. Zeroth already has the ability to recognize handwriting, but you could train it to recognize the particular characteristics of your script, learning for instance that in my chicken scratch, lower case “A”s often look like “O”s.

Other examples of Zeroth applications include devices that automatically adjust their power and performance to the habits of their owners, or that use their sensors to scan their surroundings and determine what a user’s most likely next smartphone action might be.

Zeroth itself isn’t a separate chip or component. It’s a software architecture designed to run across the different elements of Qualcomm’s Snapdragon processors, so as future Snapdragon products get more powerful, Zeroth becomes more intelligent, Talluri said. We’ll discuss Zeroth’s capabilities, and what it takes to design smarter software based on cognitive computing, with a Qualcomm executive at our Structure Data event in New York later this month.

Qualcomm plans to debut the technology in next year’s premium smartphones and tablets that use the forthcoming Snapdragon 820, which is built on a new 64-bit CPU architecture called Kryo and was announced at MWC. But Qualcomm was already showing off basic computer vision features like handwriting and object recognition on devices using the Snapdragon 810. Many of those devices were launched at MWC and should appear in markets in the coming months.


Breakthrough in FPGAs could make custom chips faster, larger

Today we are worshipping the gods of the algorithm, according to one prominent magazine. It’s not a bad comparison. Everything from search results to our machine learning efforts rests on a series of equations that purport to solve for something that feels almost ineffable, human. Teaching a computer to see. Figuring out how to turn our comings and goings into a schedule. Understanding our thermostat settings and turning them into a heating schedule.

But if our new gods are algorithms, then the chips that perform those complicated equations are their shrines, and the more specific the shrine, the better your prayers work. The Greeks knew that. They built shrines to each of their individual gods, with statues, symbols and other trappings of faith specific to their deity of choice. When it comes to algorithms, computer scientists are less invested in faith, but they are aware that their equations run faster or more efficiently on a specially designed piece of silicon. Yet because algorithms change over time and hardware usually stays the same, the flexibility to reprogram your hardware to match your changing algorithm becomes essential. That’s why big companies like Intel and Microsoft are turning to chips called Field Programmable Gate Arrays, or FPGAs.

Intel marries custom cores to its x86 architecture to help large data center customers (like [company]eBay[/company] or [company]Facebook[/company]) improve their performance. Because when worshipping algorithms, a custom shrine makes those prayers work better, and a shrine that changes with the algorithm is the best of both worlds. But like all religions, using FPGAs exacts a price.

The challenge with custom chips is that they are slower than general purpose processors like x86 or ARM-based cores. By making them software programmable (handy for algorithms you might want to change later) and more flexible, you sacrifice speed in getting information on and off the chip. There is generally a bottleneck when shuttling information to an FPGA, so while it can solve problems very quickly and can adapt to solve different problems with a minor change in programming, sending it the data it needs to solve those problems slows things down.
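A rough back-of-the-envelope example shows why that bottleneck matters. The numbers below are made up purely for illustration (they are not from this article): even if the FPGA fabric itself finishes a job many times faster than a CPU, the time spent shuttling data across a slow off-chip link can eat most of that advantage.

```python
# Illustrative, made-up numbers showing how transfer overhead erodes an
# off-chip accelerator's advantage. Times are per batch of work, in ms.

cpu_compute_ms = 100.0      # a general-purpose CPU finishes the job in 100 ms
fpga_compute_ms = 10.0      # the FPGA fabric alone is 10x faster
transfer_ms = 60.0          # but the data has to cross a slow off-chip link

off_chip_total_ms = transfer_ms + fpga_compute_ms

print(f"CPU:           {cpu_compute_ms:.0f} ms")
print(f"Off-chip FPGA: {off_chip_total_ms:.0f} ms "
      f"(only a {cpu_compute_ms / off_chip_total_ms:.1f}x speedup)")
```

On these invented numbers, a chip that computes ten times faster delivers less than a 1.5x end-to-end win; that is the kind of performance hit the next paragraphs discuss.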

But for certain applications, such as search engine algorithms or even Microsoft’s recent choice to use FPGAs for neural networks, the flexibility of being able to tweak your hardware is more important than the performance hit. But what if, in exchange for a larger piece of silicon, you didn’t have to take the performance hit? That’s the premise behind Flex Logic, a startup that launched this week with less than $10 million in funding and the IP for an FPGA that is both flexible and wired completely differently, so it doesn’t create a bottleneck in getting data onto the core.

Flex Logic CEO Geoff Tate explained that the company has changed the wiring inside the FPGA so that, instead of sitting outside the processor, it can be placed directly on the chip, making it part of an integrated package or an SoC.

This makes the total area of the eventual chip larger, but it boosts performance and lowers the overall cost. The Flex Logic cores can also snap together, meaning the design of these FPGAs is fairly flexible and modular. So far Flex Logic is launching with a product called the ESLX core in a variation that offers 2,500 LUTs, or look-up tables (a rough measure of an FPGA’s capacity). This core can be combined with other ESLX cores to give a company more performance, and each one adds about 15 cents to the overall device cost. That cost is mitigated by putting it on the chip as part of an SoC, however.
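Because the cores snap together, sizing a design becomes simple arithmetic. The sketch below just multiplies out the two figures quoted here, 2,500 LUTs and roughly 15 cents of added cost per core; it is not anything from Flex Logic’s tooling.

```python
# Simple sizing arithmetic from the figures quoted above:
# each core offers 2,500 LUTs and adds roughly $0.15 to the device cost.

LUTS_PER_CORE = 2_500
COST_PER_CORE_USD = 0.15

def embedded_fpga_estimate(num_cores):
    """Return (total LUTs, added cost in dollars) for a tile of cores."""
    return num_cores * LUTS_PER_CORE, num_cores * COST_PER_CORE_USD

for cores in (1, 4, 16):
    luts, cost = embedded_fpga_estimate(cores)
    print(f"{cores:>2} cores -> {luts:>6,} LUTs, about ${cost:.2f} added cost")
```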

The initial sample chip is in the company’s hands and customers are testing it, with the first chip expected to show up in products later this year, said Tate. Because Flex Logic is selling IP, much like [company]ARM[/company] does, rather than the silicon itself, Tate expects it will be able to adapt its designs fairly rapidly to the demands of the market. It plans to make a larger and a smaller design of its ESLX core, as well as a 40 nanometer version to complement its current 28 nanometer version, but Tate is waiting to see what the market demands.

He expects the products to appear first in the networking and communications space. Other possible applications for the cores include encryption in the security field or software defined radios, which could be tuned to different radio protocols as needed. If we can make faster, flexible chips, this is truly a breakthrough worth investigating. I’ll be keeping an eye on Flex Logic to see which customers it signs up and what tradeoffs its technology demands in the field.

Microsoft is building fast, low-power neural networks with FPGAs

Microsoft on Monday released a white paper explaining a current effort to run convolutional neural networks — the deep learning technique responsible for record-setting computer vision algorithms — on FPGAs rather than GPUs.

Microsoft claims that new FPGA designs provide greatly improved processing speed over earlier versions while consuming a fraction of the power of GPUs. This type of work could represent a big shift in deep learning if it catches on, because for the past few years the field has been largely centered around GPUs as the computing architecture of choice.

If there’s a major caveat to Microsoft’s efforts, it might have to do with performance. While Microsoft’s research shows FPGAs consuming about one-tenth the power of high-end GPUs (25W compared with 235W), GPUs still process images at a much higher rate. Nvidia’s Tesla K40 GPU can do between 500 and 824 images per second on one popular benchmark dataset, the white paper claims, while Microsoft predicts its preferred FPGA chip — the Altera Arria 10 — will be able to process about 233 images per second on the same dataset.
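Putting the paper’s numbers side by side makes the efficiency argument clearer: the GPU wins on raw throughput, but dividing throughput by power (images processed per joule of energy) flips the comparison. The short calculation below uses only the figures quoted above.

```python
# Energy efficiency from the figures quoted above:
# Tesla K40 GPU: 500-824 images/s at ~235 W; Arria 10 FPGA: ~233 images/s at ~25 W.

gpu_imgs_low, gpu_imgs_high, gpu_watts = 500, 824, 235
fpga_imgs, fpga_watts = 233, 25

gpu_eff_low = gpu_imgs_low / gpu_watts     # ~2.1 images per joule
gpu_eff_high = gpu_imgs_high / gpu_watts   # ~3.5 images per joule
fpga_eff = fpga_imgs / fpga_watts          # ~9.3 images per joule

print(f"GPU:  {gpu_eff_low:.1f} to {gpu_eff_high:.1f} images per joule")
print(f"FPGA: {fpga_eff:.1f} images per joule")
```

By that measure the FPGA does roughly three to four times as much work per unit of energy.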

However, the paper’s authors note that performance per processor is relative because a multi-FPGA cluster could match a single GPU while still consuming much less power: “In the future, we anticipate further significant gains when mapping our design to newer FPGAs . . . and when combining a large number of FPGAs together to parallelize both evaluation and training.”

In a Microsoft Research blog post, processor architect Doug Burger wrote, “We expect great performance and efficiency gains from scaling our [convolutional neural network] engine to Arria 10, conservatively estimated at a throughput increase of 70% with comparable energy used.”


This is not Microsoft’s first rodeo when it comes to deploying FPGAs within its data centers; in fact, this work is a corollary of an earlier project. Last summer, the company detailed a research project called Catapult in which it was able to improve the speed and performance of Bing’s search-ranking algorithms by adding FPGA co-processors to each server in a rack. The company intends to port production Bing workloads onto the Catapult architecture later this year.

There have also been other attempts to port deep learning algorithms onto FPGAs, including one by professors at the State University of New York at Stony Brook and another by Chinese search giant Baidu. Ironically, Baidu Chief Scientist and deep learning expert Andrew Ng is a big proponent of GPUs, and the company claims both a massive GPU-based deep learning system and a GPU-based supercomputer designed for computer vision. But this needn’t be an either/or situation: companies could still use GPUs to maximize performance while training their models, and then port them to FPGAs for production workloads.

Expect to hear more about the future of deep learning architectures and applications at Gigaom’s Structure Data conference March 18 and 19 in New York, which features experts from Facebook, Microsoft and elsewhere. Our Structure Intelligence conference, September 22-23 in San Francisco, will dive even deeper into deep learning, as well as the broader field of artificial intelligence algorithms and applications.

Microsoft’s machine learning guru on why data matters sooooo much

[soundcloud url=”https://api.soundcloud.com/tracks/191875439″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]

Not surprisingly, Joseph Sirosh has big ambitions for his product portfolio at Microsoft, which includes Azure ML, HDInsight and other tools. Chief among them is making it easy for mere mortals to consume these data services from the applications they’re familiar with. Take Excel, for example.

If a financial analyst can, with a few clicks, send data to a forecasting service in the cloud, then get the numbers back, visualized on the same spreadsheet, that’s a pretty powerful story, said Sirosh, who is corporate VP of machine learning at Microsoft.
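To make that round trip concrete, here is a minimal Python sketch of the client side of such a workflow: a few rows of historical data are posted to a cloud forecasting endpoint and the predicted values come back for display. The endpoint URL, API key and JSON shape are hypothetical placeholders, not the actual Azure ML contract; the point is only how thin the client can be.

```python
# Hypothetical sketch of calling a cloud forecasting service from a client.
# The URL, key and payload shape are placeholders, not a real Azure ML contract.
import requests

ENDPOINT = "https://example-ml-service.cloudapp.net/score"   # placeholder
API_KEY = "YOUR-API-KEY"                                     # placeholder

def forecast(history, horizon=3):
    """Send historical values to the cloud service and return its forecast."""
    resp = requests.post(
        ENDPOINT,
        json={"history": history, "horizon": horizon},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["forecast"]

# e.g. forecast([120, 135, 150, 160]) -> the next three months' predicted values,
# ready to be written back into the spreadsheet and charted.
```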

But as valuable as those applications and services are, more and more of the value derived from computing over time will come from the data itself, not all those tech underpinnings. “In the future a huge part of the value generated from computing will come from the data as opposed to storage and operating systems and basic infrastructure,” he noted on this week’s podcast. Which is why one topic under discussion at next month’s Structure Data show will be who owns all the data flowing betwixt and between various systems, the internet of things and so on.

When it comes to getting corporations to run these new systems, [company]Microsoft[/company] may have an ace in the hole, because so many of them already use key Microsoft tools: Active Directory, SQL Server, Excel. That gives them a pretty good on-ramp to Microsoft Azure and its resident services. Sirosh makes a compelling case, and we’ll talk to him more on stage at Structure Data next month in New York City.

In the first half of the show, Derrick Harris and I talk about how the Hadoop world has returned to its feisty and oh-so-interesting roots. When Pivotal announced its plan to offload support of Hadoop to [company]Hortonworks[/company] and to work with that company, along with [company]IBM[/company] and [company]GE[/company], on the Open Data Platform, Cloudera CEO Mike Olson responded with a blog post laying out his take.

Also on the docket: @WalmartLabs’ massive OpenStack production private cloud implementation.

Joseph Sirosh

 

SHOW NOTES

Hosts: Barb Darrow and Derrick Harris.

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

PREVIOUS EPISODES:

No, you don’t need a ton of data to do deep learning 

VMware wants all those cloud workloads “marooned” in AWS

Don’t like your cloud vendor? Wait a second.

Hilary Mason on taking big data from theory to reality

On the importance of building privacy into apps and Reddit AMAs

How AI can help build a universal real-time translator

The breakthroughs in natural language processing and machine translation brought by deep learning might enable us to build a staple of science fiction: a universal real-time translator that fits within the human ear. Geoff Hinton, one of the godfathers of deep learning and neural networks, explained how it could be done at the Association for the Advancement of Artificial Intelligence conference held in Austin, Texas, on Wednesday, at the tail end of a talk he gave about the history and future of artificial intelligence.

He wasn’t clear on the timeline, although he did say that he could only anticipate the future about five years out, so perhaps we’re closer than we think to this concept. Here’s how he explained it in his talk, for a translation from English to French.

You start with recurrent neural networks, which excel at text analysis and natural language processing. Recurrent neural networks have been responsible for some of the significant improvements in language understanding, including the machine translation that powers Microsoft’s Skype Translate and Google’s word2vec libraries.

Essentially, you have a recurrent neural network for each language. The English network takes your sentence and parses it word by word, then hands the entire encoded sentence over to the French recurrent neural network for decoding. There, the decoder takes the concept represented by the sentence and starts with the first word to be translated. Once it has translated that word, it weighs both the statistical probability of the likeliest word to follow it and a distribution over the likeliest translations of the next word, and picks the best match.

It continues to do this until you get a translation. Hinton explained that the neural networks are trained using random words, and after training the recurrent neural networks for one man-year, which equated to a few students working for about three months, the Hinton recurrent neural network translator matched state-of-the-art databases.
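What Hinton is describing is what’s now called a sequence-to-sequence, or encoder-decoder, architecture: one recurrent network reads the English sentence into a fixed representation, and a second recurrent network emits the French sentence one word at a time, with each choice conditioned on that representation and on the words already produced. The PyTorch sketch below is a generic, minimal version of that idea, not Hinton’s actual model; the vocabulary sizes, dimensions and token ids are arbitrary assumptions.

```python
# Minimal encoder-decoder (sequence-to-sequence) sketch in PyTorch.
# Sizes and token ids are arbitrary; this is an illustration, not Hinton's model.
import torch
import torch.nn as nn

EN_VOCAB, FR_VOCAB, EMB, HID = 10_000, 10_000, 256, 512
BOS, EOS = 1, 2  # start/end-of-sentence ids in the French vocabulary

class Encoder(nn.Module):
    """Reads the English sentence word by word into a single hidden state."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(EN_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src_ids):                     # src_ids: (batch, src_len)
        _, hidden = self.rnn(self.embed(src_ids))
        return hidden                                # the sentence "concept"

class Decoder(nn.Module):
    """Emits one French word at a time, conditioned on the hidden state."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(FR_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, FR_VOCAB)

    def forward(self, prev_ids, hidden):             # prev_ids: (batch, 1)
        output, hidden = self.rnn(self.embed(prev_ids), hidden)
        return self.out(output), hidden              # scores over French words

def translate(encoder, decoder, src_ids, max_len=30):
    """Greedy decoding: at each step pick the likeliest next French word,
    given the encoded sentence and the words produced so far."""
    hidden = encoder(src_ids)
    word = torch.tensor([[BOS]])
    result = []
    for _ in range(max_len):
        scores, hidden = decoder(word, hidden)
        word = scores.argmax(dim=-1)                 # likeliest next word
        if word.item() == EOS:
            break
        result.append(word.item())
    return result

# e.g. translate(Encoder(), Decoder(), torch.tensor([[4, 17, 253, 9]]))
# returns a list of French word ids (meaningless here, since the nets are untrained).
```

A trained version of this loop, scaled up and squeezed onto a low-power chip, is essentially the Babel fish Hinton is imagining.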

Hinton added that the more languages one adds, the better the neural network gets, because it helps the computer narrow the probabilities it has to look at. Hinton concluded, “In few years time we will put it on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”

For those who aren’t Douglas Adams fans, the Babel fish was an alien fish that the hero of his Hitchhiker’s Guide to the Galaxy books slipped into his ear at the beginning of his journey so he could instantly understand all of the alien languages he encountered.