Voices in AI – Episode 41: A Conversation with Rand Hindi

In this episode, Byron and Rand discuss intelligence, AGI, consciousness and more.
[podcast_player name="Episode 41: A Conversation with Rand Hindi" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-04-10-(01-00-04)-rand-hindi.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card-2.jpg"]
Byron Reese: This is “Voices in AI” brought to you by GigaOm. I’m Byron Reese. Today I’m excited that our guest is Rand Hindi. He’s an entrepreneur and a data scientist. He’s also the founder and the CEO of Snips; they’re building an AI assistant that protects your privacy. He started coding when he was 10 years old, founded a social network at 14, founded a web agency at 15, showed an interest in machine learning at 18, and began work on a Ph.D. in bioinformatics at age 21. He’s been elected by MIT Technology Review as one of their “35 Innovators Under 35,” was a Forbes “30 Under 30” in 2015, was named a rising star by the Founders Forum, and is a member of the French Digital Council. Welcome to the show, Rand.
Rand Hindi: Hi Byron. Thanks for having me.
That’s a lot of stuff in your bio. How did you get such an early start with all of this stuff?
Well, to be honest, I don’t deserve any credit, right? My parents pushed me into technology very young. I used to hack around the house, dismantling everything from televisions to radios, to try to figure out how these things worked. We had a computer at home when I was a kid, and so at some point my mom came to me and gave me a coding book, and she said, “You should learn how to program the machine, instead of just figuring out how to break it, pretty much.” And from that day, I just kept going. I mean, it’s as if someone told you, when you were 10, that here’s something amazing that you can use as a tool to do anything you ever had in mind.
And so, how old are you now? I would love to work backwards just a little bit.
I’m 32 today.
Okay, you mean you turned 32 today, or you happen to be 32 today?
I’m sorry, I am 32. My birthday is in January.
Okay. When did you first hear about artificial intelligence, and get interested in that?
So, after I started coding, I guess like everybody who starts coding as a teenager, I got interested in hacking, security, and these things. But when I went to university to study computer science, I was actually so bored, because, obviously, I already knew quite a lot about programming, that I wanted to take up a challenge, and so I started taking master’s classes, and one of them was in artificial intelligence and machine learning. And the day I discovered that, it was mind-blowing. It’s as if for the first time someone had shown me that I no longer had to program computers; I could just teach them what I want them to do. And this completely changed my perspective on computer science, and from that day I knew that my thing wasn’t going to be to code, it was to do AI.
So let’s start, let’s deconstruct artificial intelligence. What is intelligence?
Well, intelligence is the ability for a human to perform some task in a very autonomous way. Right, so the way that I…
But wait a second, to perform it in an autonomous way that would be akin to winding up a car and letting it just “Ka, ka, ka, ka, ka” across the floor. That’s autonomous. Is that intelligent?
Well, I mean of course you know, we’re not talking about things which are automated, but rather about the ability to make decisions by yourself, right? So, the ability to essentially adapt to the context you’re in, the ability to, you know, abstract what you’ve been learning and reuse it somewhere else—all of those different things are part of what makes us intelligent. And so, the way that I like to define artificial intelligence is really just as the ability to reproduce a human intelligent behavior in a machine.
So my cat food dish that when it runs out of cat food, and it can sense that there is no food in it, it opens a little door, and releases more food—that’s artificial intelligence?
Yep, I mean, you can consider it one form of AI, and I think it’s important to really distinguish between what we currently have, narrow AI, and strong AI.
Sure, sure, we’ll get to that in due time. So where do you say we are when people say, “I hear a lot about artificial intelligence, what is the state of the art?” Are we kind of at the very beginning just doing the most rudimentary things? Or are we kind of like half-way along and we’re making stuff happen? How would you describe today’s state of the art?
What we’re really good at today is building and teaching machines to do one thing and to do it better than humans. But those machines are incapable of second-degree thinking, like we do as humans, for example. So, I think we really have to think about it this way: you’ve got a specific task for which you would traditionally have programmed a machine, right? And now you can essentially have a machine look at examples of that behavior, reproduce it, and execute it better than a human would. This is really the state of the art. It’s not yet about intelligence in a human sense; it’s about a task-specific ability to execute something.
So I posted an article recently on GigaOm where I have an Amazon Echo and a Google Assistant on my desk, and almost immediately I noticed that they would answer the same factual question differently. So, if I said, “How many minutes are in a year?” they gave me different answers. If I said, “Who designed the American flag?” they gave me different answers. And they did so because, for the minutes in a year, one of them interpreted it as a solar year, and one of them interpreted it as a calendar year. And with regard to the flag, one of them gave the schoolbook answer of Betsy Ross, and one of them gave the answer of who designed the 50-state configuration of the stars. So, in both of those cases, would you say I asked a bad question that was inherently ambiguous? Or would you say the AI should have tried to disambiguate and figure it out, and that this is an illustration of the limit you were just talking about?
Well, I mean, the question you’re really asking here is what ground truth the AI should have, and I don’t think there is one. Because, as you correctly said, the computer interpreted an ambiguous question in a different way, which is correct because there are two different answers depending on context. And I think this is also a key limitation of what we currently have with AI: you and I, we disambiguate what we’re saying because we have cultural references, contextual references to things that we share. And so, when I tell you something—I live in New York half the time—so if you ask me who created the flag, we’d both have the same answer because we live in the same country. But someone on a different side of the world might have a different answer, and it’s exactly the same thing with AI. Until we’re able to bake in contextual awareness, cultural awareness, or even things like, very simply, knowing what is the most common answer that people would give, we are going to have those kinds of weird side effects that you just observed here.
So isn’t it, though, the case that all language is inherently ambiguous? I mean once you get out of the realm of what is two plus two, everything like, “Are you happy? What’s the weather like? Is that pretty?” [are] all like, anything you construct with language has inherent ambiguity, just by the nature of words.
And so how do you get around that?
As humans, the way that we get around that is that we actually have a sort of probabilistic model in our heads of how we should interpret something. And sometimes it’s actually funny, because I might say something and you’re going to take it wrong, not because I meant it wrong, but because you understood it in a different contextual reference frame. But fortunately, people who usually interact together usually share similar contextual reference points, and based on this we’re able to communicate in a very natural way without having to explain the logic behind everything we say. So, language in itself is very ambiguous. If I tell you something such as, “The football match yesterday was amazing,” this sentence grammatically and syntactically is very simple, but the meaning only makes sense if you and I were watching the same thing yesterday, right? And this is exactly why computers are still unable to understand human language the same way we do: they’re unable to understand this notion of context unless you give it to them. And I think this is going to be one of the most active fields of research in natural language processing: basically, baking contextual awareness into natural language understanding.
So you just said a minute ago, at the beginning of that, that humans have a probabilistic model that they’re running in their heads. Is that really true, though? Because if I just come up to a stranger and ask how many minutes are in a year, they’re not going to say, “Well, there is an 82.7% chance he’s referring to a calendar year, and a 17.3% chance he’s referring to a solar year.” Most people instantly have only one association with that question, right?
Of course.
And so they don’t actually have a probabilistic—are you saying it’s a de-facto one—
Talk to that for just a second.
I mean, how it’s actually encoded in the brain, I don’t know. But the fact is that depending on the way I ask the question, depending on the information I’m giving you about how you should think about the question, you’re going to think about a different answer. So, let’s say I ask, “How many minutes are in the year?” This is the most common way of asking the question, which means I’m expecting you to give me the most common answer to it. But if I give you more information, if I ask, “How many minutes are in a solar year?” then I’ve specified extra information, and that will change the answer you’re going to give me, because now the probability is no longer that I’m asking the general question, but rather that I’m asking you a very specific one. And so you have all these connections built into your brain, and depending on which of those elements are activated, you’re going to give me a different response. So, think about it as if you have a kind of graph of knowledge in your head, and whenever I ask something, you’re going to give me a response by picking the most likely answer.
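Rand’s “pick the most likely answer” idea can be sketched in a few lines of code. This is purely a toy illustration: the `answer` function, the prior probabilities, and the interpretation table are all invented here, and no real assistant works this simply:

```python
# Toy model: an ambiguous question maps to several interpretations, each with
# an answer and a (made-up) prior probability of being the intended one.
INTERPRETATIONS = {
    "calendar year": (525_600, 0.9),  # 365 days * 24 hours * 60 minutes
    "solar year": (525_949, 0.1),     # ~365.2425 days, rounded to minutes
}

def answer(question: str) -> int:
    """Answer 'how many minutes are in a year?'-style questions.

    If the wording explicitly names an interpretation (e.g. 'solar'),
    use it; otherwise fall back on the most probable interpretation.
    """
    for name, (value, _prior) in INTERPRETATIONS.items():
        if name.split()[0] in question.lower():  # 'calendar' or 'solar' mentioned
            return value
    # No disambiguating context: pick the interpretation with the highest prior.
    return max(INTERPRETATIONS.values(), key=lambda v: v[1])[0]

print(answer("How many minutes are in a year?"))        # most common reading
print(answer("How many minutes are in a solar year?"))  # explicit reading
```

The extra word “solar” plays exactly the role Rand describes: it shifts which node of the little knowledge graph gets activated.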
So this is building up to—well, let me ask you one more question about language, and we’ll start to move past this a little bit, but I think this is fascinating. So, the question is often raised, “Are there other intelligent creatures on Earth?” You know the other sorts of animals and what not. And one school of thought says that language is an actual requirement for intelligence. That without language, you can’t actually conceive of abstract ideas in your head, you can’t do any of that, and therefore anything that doesn’t have language doesn’t have intelligence. Do you agree with that?
I guess if you’re talking about general intelligence, yes. Because language is really just a universal interface for, you know, representing things. This is the beauty of language. You and I speak English, and we don’t have to learn a specific language for every topic we want to talk about. What we can do instead is we can use the sync from the mental interface, the language, to express all kinds of different ideas. And so, the flexibility of natural language means that you’re able to think about a lot more different things. And so this, inherently, I believe, means that it opens up the amount of things you can figure out—and hence, intelligence. I mean it makes a lot of sense. To be honest, I’ve never thought about it exactly like this, but when you think about it, if you have a very limited interface to express things, you’re never going to be able to think about that many things.
So Alan Turing famously made the Turing Test, in which he said that if you are at a terminal, in a conversation with something in another room, and you can’t tell if it’s a person or a machine—interestingly, he said a machine only needs to fool you 30% of the time—then we have to say the machine is thinking. Do you interpret that as language indicating that it is thinking, or as language being the thinking itself?
I was talking about this recently, actually. Just because a machine can generate an answer that looks human doesn’t mean that the machine actually understands the answer it has given. I think the depth of understanding of the semantics and the context goes beyond the ability to generate something that makes sense to a human. So, it really depends on what you’re asking the machine. If you’re asking something trivial, such as how many days are in a year, or whatever, then of course, I’m sure the machine can generate a very simple, well-structured answer, exactly like a human would. But if you start digging in further, if you start having a conversation, if you start essentially brainstorming with the machine, if you start asking for an analysis of something, then this is where it’s going to start failing, because the answers it gives you won’t have context, won’t have abstraction, won’t have all of these other things which make us really human. And so I think it’s very, very hard to determine where you should draw the line. Is it about the ability to write answers in a way that is syntactically and grammatically correct? Or is it the ability to actually have an intelligent conversation, like a human would? The former, we can definitely do in the near future. The latter will require AGI, and I don’t think we’re there yet.
So you used the word “understanding,” and that of course immediately calls up the Chinese Room Problem, put forth by John Searle. For the benefit of the listener, it goes like this: There’s a man who’s in a room, and it’s full of these many thousands of these very special books. The man doesn’t speak any Chinese, that’s the important thing to know. People slide questions in Chinese underneath the door, he picks them out, and he has this kind of algorithm. He looks at the first symbol; he finds a matching symbol on the spine of one of the books. He looks up the second book, that takes him to a third book, a fourth book, a fifth book, all the way up. So he gets to a book that he knows to copy some certain symbols from and he doesn’t know what they mean, he slides it back under the door, and the punch line is, it’s a perfect answer, in Chinese. You know it’s profound, and witty, and well-written and all of that. So, the question that Searle posed and answered in the negative is, does the man understand Chinese? And of course, the analogy is that that’s all a computer can do, and therefore a computer just runs this deterministic program, and it can never, therefore, understand anything. It doesn’t understand anything. Do you think computers can understand things? Well let’s just take the Chinese Room, does the man understand Chinese?
No, he doesn’t. I think actually this is a very, very good example; it’s a very good way to put it. Because what the person has done in that case, to give a response in Chinese, is literally follow an algorithm to produce an answer. This is exactly how machine learning currently works. Machine learning isn’t about understanding what’s going on; it’s about replicating what other people have done, which is a fundamental difference. It’s subtle, but it’s fundamental, because if you can understand, you can also replicate, de facto, right? But being able to replicate doesn’t mean that you’re able to understand. And the machine learning models we build today are not meant to have a deep understanding of what’s going on; they’re meant to produce a very appropriate, human-understandable response. I think this is exactly what happens in this thought experiment. It’s exactly the same thing, pretty much.
Without going into general intelligence, the way I like to see this today is: machine learning is not about building human-like intelligence yet. It’s about replacing the need to program a computer to perform a task. Up until now, when you wanted to make a computer do something, you first had to understand the phenomenon yourself. So, you had to become an expert in whatever you were trying to automate, and then you would write computer code with those rules. And the problem is that doing this takes a while, because a human has to understand what’s going on, which can take a while. And of course, another problem is that not everything is understandable by humans, at least not easily. Machine learning completely replaces the need to become an expert. Instead of understanding what’s going on and then programming the machine, you’re just collecting examples of the behavior and feeding them to the machine, which will then figure out a way to reproduce it. A simple example: show me a pattern where the number five is written five times, and ask me what the pattern is, and I’ll learn that it’s five. So this is really about getting rid of the need to understand what you’re trying to make the machine do, and instead just giving it examples that it can figure out by itself.
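The five-times-five example can be made concrete with a deliberately tiny sketch: instead of an expert hand-coding a rule, the “model” recovers the rule from examples alone. The `learn_pattern` helper is invented for this illustration; real machine learning is of course far more sophisticated than a frequency count:

```python
from collections import Counter

# Traditional programming: a human expert studies the phenomenon,
# then encodes the rule by hand.
def programmed_next(_sequence):
    return 5  # the expert decided the answer is always 5

# Machine learning, in miniature: no human encodes the rule;
# it is inferred from observed examples.
def learn_pattern(examples):
    """'Train' by returning the most frequent value seen in the examples."""
    return Counter(examples).most_common(1)[0][0]

observed = [5, 5, 5, 5, 5]       # the pattern: five, written five times
learned = learn_pattern(observed)
print(learned)  # the machine recovered the rule from data alone
```

Both functions give the same answer here; the difference is that only the first one required a human to understand the phenomenon first.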
So we began with my wind-up car, then the cat food dish, and we’re working up to understanding…eventually we have to get to consciousness because consciousness is this thing, people say we don’t know what it is. But we know exactly what it is, we just don’t know how it comes about. So, what it is, is that we experience the world. We can taste the pineapple or see the redness of the sunset in a way that’s different than just sensing the world…we experience. Two questions: do you have any personal theory on where consciousness comes from, and second, is consciousness key to understanding, and therefore key to an AGI?
I think so. I think there is no question that consciousness is linked to general intelligence, because general intelligence means that you need to be able to create an abstraction of the world, which means that you need to go beyond observing it and be able to understand it and to experience it. So, I think that is a very simple way to put it. What I’m actually wondering is whether consciousness was a consequence of biology, and whether we need to replicate that in a machine to make it intelligent like a human being is intelligent. So essentially, the way I’m thinking about this is: is there a way to build a human intelligence that would seem human? And do we want it to seem human? Because if it’s just about reproducing the way intelligence works in a machine, then we shouldn’t care if it feels human or not; we should just care about the ability of the machine to do something smart. So, I think the question of consciousness in a machine really comes down to the question of whether or not we want to make it human. There are many technologies we’ve built for which we have examples in nature that perform the same task but don’t work the same way. Birds and planes, for example: I’m pretty sure a bird needs to have some sort of consciousness of itself so it doesn’t fly into a wall, whereas we didn’t need to replicate all those tiny bits for the actual plane to fly. It’s just a very different way of doing things.
So do you have a theory as to how it is that we’re conscious?
Well, I think it probably comes from the fact that we had to evolve as a species with other individuals, right? How would you actually understand where to position yourself in society, and therefore how best to build a very coherent, stable, strong community, if you don’t have consciousness of other people, of nature, of yourself? So, I think, inherently, the fact that we lived in a kind of ecosystem of human beings, humans and nature, and humans and animals meant that we had to develop consciousness. I think it was probably part of a very positive evolutionary strategy. Whether that comes from your neurons, or whether it comes more from a combination of different things, including your senses, I’m not sure. But I feel that the need for consciousness definitely came from the need to integrate yourself into a broader structure.
And so not to put words in your mouth, but it sounds like you think, you said “we’re not close to it,” but it is possible to build an AGI, and it sounds like you think it’s possible to build, hypothetically, a conscious computer and you’re asking the question of would we want to?
Yes. The question is whether or not it would make sense for whatever we have in mind for it. I think we probably should do it; we should try to do it just for the science. I’m just not sure it’s going to be the most useful thing to do, or whether we’re going to figure out an even more general form of general intelligence, one which doesn’t have only human traits but something beyond them, and which would be a lot more powerful.
Hmmm, what would that look like?
Well, that is a good question. I have clearly no idea, because it is very hard to think about a bigger intelligence than the intelligence we are limited to, in a sense. But it’s very possible that we might end up concluding that, well, human intelligence is great for being a human, but maybe a machine doesn’t have to have the same constraints. Maybe a machine can have a different type of intelligence, which would make it a lot better suited for the type of things we’re expecting the machine to do. And I don’t think we’re expecting machines to be human. I think we’re expecting machines to augment us, to help us, to solve problems humans cannot solve. So why limit them to a human intelligence?
So, the people I talk to say, “When will we get an AGI?” The predictions vary by two orders of magnitude—you can read everything from 5 to 500 years. Where do you come down on that? You’ve made several comments that you don’t think we’re close to it. When do you think we’ll see an AGI? Will you live to see an AGI, for instance?
This is very, very hard to tell. You know, there is this funny artifact that everybody makes a prediction 20 years in the future, and it’s actually because most people, when they make those predictions, have about 20 years left in their careers. So, nobody is able to think beyond their own lifetime, in a sense. I don’t think it’s 20 years away, at least not in the sense of real human intelligence. Are we going to be able to replicate parts of AGI, such as the ability to transfer learning from one task to another? Yes, and I think this is short-term. Are we going to be able to build machines that can go one level of abstraction higher to do something? Yes, probably. But it doesn’t mean they’re going to be as versatile, as generalist, as horizontally-thinking as we are as humans. I think for that, we really, really have to figure out once and for all whether a human intelligence requires a human experience of the world, which means the same senses, the same rules, the same constraints, the same energy, the same speed of thinking, or not. So, we might just bypass it, as I said: AI might go from narrow AI to a different type of intelligence that is neither human nor narrow. It’s just different.
So you mentioned transfer learning. I could show you a small statue of a falcon, and then I could show you a hundred photographs, some of which have the falcon under water, on its side, in different light, upside down, and all these other things. Humans have no problem saying, “There it is, there it is, there it is,” kind of like finding Waldo, but with the falcon. In other words, humans can train with a sample size of one, primarily because we have a lot of experience seeing other things in low light and all of that. So, if that’s transfer learning, it sounds like you think we’re going to be able to do that pretty quickly, and that’s kind of a big deal if we can really teach machines to generalize the way we do. Or is the kind of generalization I just went through actually part of our general intelligence at work?
I think transfer learning is necessary to build AGI, but it’s not enough, because at the end of the day, just because a machine can learn to play a game and then have a starting point to play another game doesn’t mean that it will make the choice to learn this other game. It will still be you telling it, “Okay, here is a task I need you to do; use your existing learning to perform it.” It’s still pretty much task-driven, and this is a fundamental difference. It is extremely impressive, and to be honest I think it’s absolutely necessary, because right now, when you look at what you do with machine learning, you need to collect a bunch of different examples, and you’re feeding those to the machine, and the machine is learning from those examples to reproduce that behavior, right? When you do transfer learning, you’re still teaching a lot of things to the machine, but you’re teaching it to reuse other things so that it doesn’t need as much data. So, I think inherently the biggest benefit of transfer learning will be that we won’t need to collect as much data to make the computers do something new. It solves, essentially, the biggest friction point we have today, which is: how do you access enough data to make the machine learn the behavior? In some cases, the data does not exist. And so I think transfer learning is a very elegant and very good solution to that problem.
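The data-efficiency point Rand makes can be illustrated with a deliberately simplified sketch: scores learned on a data-rich task are reused on a new domain where almost no data exists. Both helper functions are invented for this illustration; real transfer learning reuses learned neural-network weights, not word counts:

```python
# Task A (data-rich): learn per-word sentiment scores from movie reviews.
def learn_word_scores(labelled_texts):
    """Accumulate a +1/-1 score per word from (text, label) examples."""
    scores = {}
    for text, label in labelled_texts:
        for word in text.split():
            scores[word] = scores.get(word, 0) + (1 if label == "pos" else -1)
    return scores

# Task B (data-poor): classify in a NEW domain by reusing the learned scores.
def classify(text, scores, bias=0):
    """Only `bias` would need fitting on the new domain's few examples."""
    total = sum(scores.get(w, 0) for w in text.split()) + bias
    return "pos" if total >= 0 else "neg"

# Plenty of data for task A...
movie_reviews = [("great film", "pos"), ("awful plot", "neg"),
                 ("great acting", "pos"), ("awful pacing", "neg")]
scores = learn_word_scores(movie_reviews)

# ...and essentially none for task B (restaurant reviews): the scores transfer.
print(classify("great pasta", scores))
print(classify("awful service", scores))
```

The second task needed no training corpus of its own, which is exactly the friction point, collecting enough data, that transfer learning is meant to relieve.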
So, the last question I want to ask you about AGI, before we turn the clock back and talk about issues closer at hand, is as follows: It sounds like you’re saying an AGI is more than 20 years off, if I can infer that from what you just said. And I am curious, because the human genome is 3 billion base pairs; it’s something like 700 MB of information, most of which we share with plants, bananas, and what-not. And if you look at our intelligence versus a chimp’s, or something, only a fraction of 1% of the DNA is different. What that seems to suggest, to me at least, is that if the genome is 700 MB, and the 1% difference gives us an AGI, then the code to create an AGI could be as small as 7 MB.
Pedro Domingos wrote a book called The Master Algorithm, where he says that there probably is a single algorithm that can solve a whole world of problems and get us really close to AGI. Then other people at the other end of the spectrum, like Marvin Minsky, suggest that we don’t even have a general intelligence ourselves, that we’re just 200 different hacks, kind of 200 narrow intelligences that pull off this trick of seeming like a general intelligence. I’m wondering if you think that an AGI could be relatively simple, that it’s not a matter of more data or more processing, but just a better algorithm?
So, just to be clear, I don’t consider a machine that can perform 200 different tasks to be an AGI. It’s just an ensemble of narrow AIs.
Right, and that school of thought says that therefore we are not an AGI. We only have this really limited set of things we can do that we like to pass off as “ah, we can do anything,” but we really can’t. We’re 200 narrow AIs, and the minute you ask us to do things outside of that, they’re off our radar entirely.
For me, the simplest definition of how to differentiate between a narrow AI and an AGI is this: an AGI is capable of zooming out of what it knows, of having basically a second-degree view of the facts it has learned, and then reusing that to do something completely different. And I think we have this capacity as humans. We did not have to learn every possible permutation, every single zoomed-out view of every fact in the world, to be able to do new things. So, I definitely agree that as humans, we are AGIs. I just don’t think that a computer that can learn to do two hundred different things would be one. You would still need to figure out this ability to zoom out, this ability to create an abstraction of what you’ve been learning and to reapply it somewhere else. I think this is really the definition of horizontal thinking, right? You can only think horizontally if you’re looking up, rather than staying in a silo. So, to your question, yeah, why not? Maybe the algorithm for AGI is simple. I mean, think about it: deep learning, and machine learning in general, are deceptively simple in terms of mathematics. We don’t really understand how they work yet, but the mathematics behind them is very, very easy. So, we did not have to come up with some crazy solution. We just came up with an algorithm that turned out to be simple, and that worked really well when given a ton of information. So, I’m pretty sure that AGI doesn’t have to be that much more complicated, right? It might be one of those E = mc² sorts of insights that we’re going to figure out.
That was certainly the hope, way back, because physics itself obeys such simple laws that were hidden from us and then, once elucidated, seemed like something any 11th-grade high-school student could learn. Maybe so. So, pulling back more toward the here and now: in ’97, Deep Blue beat Kasparov; then after that we had Ken Jennings lose in Jeopardy; then you had AlphaGo beat Lee Sedol; then you had some top-ranked poker players beaten; and then you just had another AlphaGo victory. So, AI does really well at games, presumably because they have a very defined, narrow rule set and a constrained environment. What do you think is going to be, kind of, the next thing like that? It hits the papers and everybody’s like, “Wow, that’s a big milestone! That’s really cool. Didn’t see that coming so soon!” What do you think will be the next sort of things we’ll see?
So, games are always a good example because everybody knows the game, so everybody is like, “Oh wow, this is crazy.” So, putting aside, I guess, the sort of PR and buzz factor, I think we’re going to solve things like medical diagnosis. We’re going to solve things like understanding voice very, very soon. I think we’re going to get to a point very soon, for example, where somebody is going to be calling you on the phone and it’s going to be very hard for you to distinguish whether it’s a human or a computer talking. I think this is definitely short-term, as in less than 10 years in the future, which poses a lot of very interesting questions around authentication, privacy, and so forth. But the whole realm of natural language is something that people always look at as a failure of AI: “Oh, it’s a cute robot, it barely knows how to speak, it has a really funny-sounding voice.” This is typically the kind of thing that nobody thinks, right now, a computer can do eloquently, but I’m pretty sure we’re going to get there fairly soon.
But to our point earlier, the computer understanding the words, “Who designed the American flag?” is different than the computer understanding the nuance of the question. It sounds like you’re saying we’re going to do the first, and not the second very quickly.
Yes, correct. Somewhere the computer will need to have a knowledge base of how to answer, and I’m sure we’re going to figure out which answer is the most common. So, you’re going to have this sort of graph of knowledge baked into the assistants people are going to be interacting with. I think, from a human perspective, what is going to be very different is that your experience of interacting with a machine will become a lot more seamless, just like with a human. Nobody today believes that when someone calls them on the phone, it’s a computer. I think this is a fundamental thing that nobody is seeing coming, but it is going to shift very soon. I can feel there is something happening around voice which is going to make it very ubiquitous in the near future, and therefore indistinguishable from a human perspective.
I’m already getting those calls, frankly. I get these calls, and I go, “Hello,” and it’s like, “Hey, this is Susan, can you hear me okay?” and I’m supposed to say, “Yes, Susan.” Then Susan says, “Oh good, by the way, I just wanted to follow up on that letter I sent you,” and we have those now. But that’s not really a watershed event. It’s not the kind of thing where you wake up one day and the world has changed, the way it does when they say there was this game we thought computers wouldn’t be able to play for so long, and they just did it, and it definitively happened. The way you’re phrasing it, that we’re going to master voice in that way, it sounds like you’re saying we’re going to have a machine that passes the Turing Test.
I think we’re going to have a machine that will pass the Turing Test, for simple tasks. Not for having a conversation like we’re having right now. But a machine that passes the Turing Test in, let’s say, a limited domain? I’m pretty sure we’re going to get there fairly soon.
Well, anybody who has listened to other episodes of this knows my favorite question for those systems. So far, I’ve never found one that could answer it. My first question is always, “What’s bigger, a nickel or the sun?” and right now they can’t even do that. The sun could be s-u-n or s-o-n, a nickel is a metal as well as a unit of currency, and so forth. So, it feels like we’re a long way away, to me.
But this is exactly what we’ve been talking about earlier; this is because currently those assistants are lacking context. So, there are two parts to it, right? There’s the part which is about understanding and speaking: understanding a human talking, and speaking in a way that a human wouldn’t realize it’s a computer speaking. This is more the voice side. And then there is the understanding side. Now you hear some words, and you want to be able to give a response that is appropriate. Right now that response is based on a syntactic and grammatical analysis of the sentence and is lacking context. But if you plug it into a database of knowledge that it can tap into, just like a human does, by the way, then the answers it can provide will be more and more intelligent. It will still not be able to think, but it will be able to give you the correct answers, because it will have the same contextual references you do.
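The idea Rand describes here, resolving a parsed question against structured knowledge rather than relying on syntax alone, can be sketched roughly. This is a toy illustration, not the design of any real assistant: the “knowledge graph” is just a hypothetical Python dict with two hand-entered facts, and the `answer` function and its keys are made up for this sketch.

```python
# A minimal sketch of a knowledge-backed assistant: look the parsed
# question up in a small knowledge base instead of guessing from grammar.
# The facts and key structure here are purely illustrative.
knowledge = {
    ("designed", "american flag"): "Betsy Ross (by popular tradition)",
    ("capital", "france"): "Paris",
}

def answer(predicate, entity):
    # Resolve the (predicate, entity) pair against the knowledge base;
    # admit ignorance on a miss rather than producing a fluent wrong answer.
    return knowledge.get((predicate, entity.lower()),
                         "I don't know that yet.")

print(answer("designed", "American flag"))  # Betsy Ross (by popular tradition)
print(answer("invented", "teleporter"))     # I don't know that yet.
```

The point of the sketch is the division of labor Rand outlines: the language side parses the question into a structured form, and the knowledge side supplies the contextual references a grammar alone cannot.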
It’s interesting because, at the beginning of the call, I noted about the Turing Test that Turing only put a 30% benchmark. He said if the machine gets picked 30% of the time, we have to say it’s thinking. And I think he said 30% because the question isn’t, “Can it think as well as a human,” but “Can it think?” The really interesting milestone in my mind is when it hits 51%, 52% of the time, because that would imply that it’s better at being human than we are, or at least better at seeming human than we are.
Yes, so again it really depends on how you’re designing the test. I think a computer would fail 100% of the time if you’re trying to brainstorm with it, but it might win 100% of the time if you’re asking it to give you an answer to a question.
So there’s a lot of fear wrapped up in artificial intelligence, and it falls into two buckets. One is the Hollywood fear of “killer robots” and all of that, but the much more here-and-now fear, the one that dominates the debate and discussion, is the effect that artificial intelligence, and therefore automation, will have on jobs. And there are three broad schools of thought on this. One is that there is a certain group of people who are going to be unable to compete with these machines and will be permanently unemployed, lacking the skills to add economic value. The second theory says that is actually what’s going to happen to all of us: that there is nothing, in theory, a human can do that a machine can’t. And then a final school of thought says we have 250 years of empirical data of people using transformative technologies, like electricity, simply to augment their own productivity and thereby their standard of living. You’ve alluded a couple of times to machines working with humans, AIs working with humans, but I want to give you a blank slate to answer that question: which of those three schools of thought are you most closely aligned with, and why?
I’m 100% convinced that we have to be thinking human plus machine, and there are many reasons for this. Just for the record, it turns out I actually know quite a bit about this topic, because I was asked by the French government, a few months ago, to work on their AI strategy for employment. The government wanted to know, “What should we do? Is this going to be disruptive?” The short answer is that every country will be impacted in a different way, because countries don’t have the same relationship to automation, based on how people work and what they are doing, essentially. For France in particular, which is what I can talk about here, the first thing which is important to keep in mind is that we’re talking about the next ten years. So, the government does not care about AGI. We’ll never get to AGI if we can’t fix the short-term issues that narrow intelligence is already bringing to the table. The point is, if you destroy society because of narrow AI, you’re never going to get to AGI anyway, so why think about it? So, we really focused on the next 10 years and what we should do with narrow AI. The first thing we realized is that narrow AI is much better than humans at performing whatever it has learned to do, but humans are much more resilient to edge cases and to things which are not very obvious, because we are able to do horizontal thinking. So, the best combination you can have in any system will always be human plus machine. Human plus machine is strictly better, in every single scenario, than human-alone or machine-alone. Beyond that, human and machine are just not going to be good at the same things. Neither one is better than the other; they’re just different.
And so we designed a framework to figure out which jobs are going to be completely replaced by machines, which ones are going to be complementary between human and AI, and which ones will be purely human. The criteria we have in the framework are very simple.
The first one is: do we actually have the technology, or the data, to build such an AI? Sometimes you might want to automate something, but the data does not exist, or the sensors to collect the data do not exist; there are many examples of that. The second is: does the task you want to automate require very complicated manual intervention? It turns out that robotics is not following the same exponential trends as AI, so if your job mostly consists of using your hands to do very complicated things, it’s very hard to build an intelligence that can replicate that. The third is, very simply, whether or not we require general intelligence to solve the specific task. Are you more of a system designer thinking about the global picture of something, or are you a very, very focused, narrow-task worker? The more horizontal your job is, obviously, the safer it is, because until we get AGI, computers will never be able to do this horizontal thinking.
The last two are quite interesting, too. The first is: do we actually want to automate a task? Is it socially acceptable? Just because you can automate something doesn’t mean it’s what we will want to do. For instance, you could get a computer to diagnose that you have cancer and just email you the news, but do we want that? Or would we prefer that at least a human gives us that news? Another good example, which is quite funny, is the soccer referee. Soccer is very big in Europe, not as much in the U.S., and we already have technology today that could just look at the video feed and do real-time refereeing. It would apply the rules of the game; it would say, “Here’s a foul, here’s whatever.” But people don’t want that, because it turns out that a human referee makes judgments on the fly based on other factors he understands because he’s human, such as, “Is this a good time to let them play on? Because if I stop it here, it will just make the game boring.” So, if we automated the referee of a soccer match, the game would be extremely boring and nobody would watch it. Nobody wants that to be automated. And finally, the last criterion is the importance of emotional intelligence in your job. If you’re a manager, your job is to connect emotionally with your team and make sure everything is going well. A very simple way to think about it is: if your job is mostly soft skills, a machine will not be able to do it in your place. If your job is mostly hard skills, there is a chance we can automate it.
So, when you take those five criteria and you look at the distribution of jobs in France, what you realize is that only about 10% of those jobs will be completely automated, another 40% won’t change much, because they will still be mostly done by humans, and about 50% of those jobs will be transformed. So you’ve got 10% of jobs that machines will take, 40% of jobs that humans will keep, and 50% of jobs that will change, because they will become a combination of humans and machines doing the job. And so the conclusion is that, if you’re trying to anticipate the impact of AI on the French job market and economy, we shouldn’t be thinking about how to solve mass unemployment with half the population not working; rather, we should figure out how to help those 50% of people transition to this AI-plus-human way of working. And so it’s all about continuous education. It’s all about breaking this idea that you learn one thing for the rest of your life. It’s about getting into a much more fluid, flexible sort of work life where humans focus on what they are good at, working alongside machines, which do the things that machines are good at. So the recommendation we gave to the government is: figure out the best way to make humans and machines collaborate, and educate people to work with machines.
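As a rough illustration only (this is not the actual framework Rand’s group built for the French government), the five screening criteria above can be sketched as a small classifier that sorts a job into the three buckets he describes. The thresholds, the function name, and the three example calls are entirely made up for this sketch.

```python
# Toy sketch of the five screening criteria described above. The criteria
# and buckets follow the conversation; the thresholds are invented.

def classify_job(tech_and_data_exist, needs_manual_dexterity,
                 needs_horizontal_thinking, automation_acceptable,
                 needs_emotional_intelligence):
    # Count the factors that block full automation: missing tech/data,
    # complicated manual work, horizontal thinking, social unacceptability,
    # or reliance on soft skills.
    blockers = sum([
        not tech_and_data_exist,
        needs_manual_dexterity,
        needs_horizontal_thinking,
        not automation_acceptable,
        needs_emotional_intelligence,
    ])
    if blockers == 0:
        return "automate"          # the ~10% bucket
    if blockers == 1:
        return "human + machine"   # the ~50% bucket
    return "human"                 # the ~40% bucket

# Repetitive data entry: nothing blocks automation.
print(classify_job(True, False, False, True, False))    # automate
# Medical diagnosis: feasible, but machine-only delivery may be unacceptable.
print(classify_job(True, False, False, False, False))   # human + machine
# Management: horizontal thinking plus emotional intelligence.
print(classify_job(True, False, True, True, True))      # human
```

The design choice worth noting is that a single blocker is enough to pull a job out of the “automate” bucket, which mirrors Rand’s point that the largest bucket is jobs transformed into human-plus-machine collaboration rather than jobs eliminated.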
There are a couple of pieces of legislation, or proposed legislation, to be clear, that we’ve read about in Europe that I would love to get your thoughts on. One of them is treating robots, or certain agents of automation, as legal persons so that they can be taxed at a rate similar to a worker’s. I guess the idea being: why should humans be the only ones paying taxes? Why shouldn’t the automation, the robots, the artificial intelligences, pay taxes as well? Two questions: practically, what do you think will happen, and what do you think should happen?
So, for taxing robots, I think it’s a stupid idea for a very simple reason: how do you define what a machine is, right? It’s easy when you’re talking about an assembly line with a physical machine, because you can touch it. But how many machines are in an image-recognition app? How do you define that? So if you’re trying to tax machines like you would tax humans for labor, you’re going to end up not being able to actually define what a machine is. Therefore, you’re not going to actually tax the machines; you’re going to have to find a more meta way of taxing the impact of machines, which basically means increasing the corporate taxes, the tax on the profits companies make, as a kind of catch-all. And if you do that, you’re impeding investment and innovation, and you’re removing the incentive to automate at all. So I think it makes no sense whatsoever to try to tax robots, because the net consequence is that you’re just going to increase the taxes companies have to pay overall.
And then the second one is the idea that, more and more algorithms, more and more AIs help us make choices. Sometimes they make choices for us—what will I see, what will I read, what will I do? There seems to be a movement to legislatively require total transparency so that you can say “Why did it recommend this?” and a person would need to explain why the AI made this recommendation. One, is that a good idea, and two, is it even possible at some level?
Well, this [was] actually voted [upon] last year, and it comes into effect next year as part of a bigger privacy regulation called GDPR, which applies to any company that wants to do business with a European citizen. So whether you’re American, Chinese, or French, it doesn’t matter; you’re going to have to comply. And in effect, one of the things this regulation imposes is that for any automated treatment that results in a significant impact on your life (a medical diagnosis, an insurance price, an employment decision, a promotion), you have to be able to explain how the algorithm made that choice. By the way, this law [has] existed in France since 1978, so it’s new in Europe, but it has existed in France for 40 years already. The reason they put this in is very simple: they want to avoid people being excluded because a machine learned a bias in the population, with that person essentially unable to go to court and say, “There’s a bias; I was unfairly treated.”
So essentially, the reason they want transparency is that they want accountability against potential biases that might be introduced, which I think makes a lot of sense, to be honest. And that poses a lot of questions, of course, about what counts as an algorithm that has an impact on your life. Is your Facebook newsfeed impacting your life? You could argue it does, because the choice of news that you see influences you, and Facebook knows that; they’ve experimented with it. Does a search result in Google have an impact on your life? Yes it does, because it limits the scope of what you’re seeing. My feeling is that, as you keep pushing this, you’re going to end up realizing that a lot of the systems that exist today will not be able to rely on black-box machine learning models, but rather will have to use other types of methods. And so one field of study which is very exciting is making deep learning understandable, for precisely that reason.
Which it sounds like you’re in favor of, but you also think that that will be an increasing trend, over time.
Yeah, I believe that what’s happening in Europe is going to permeate to a lot of other places in the world: the right to privacy, the right to be forgotten, the right to have transparent algorithms when they’re important, and the right to transferability of your personal data. That last one is very important. The same regulation means that, for all the data I have with a provider, I can tell that provider to send it to another provider in a way the other provider can use. Just like when you change carriers you can keep your phone number without worrying about how it works, this will now apply to every single piece of personal data companies hold about you if you’re a European citizen.
So, this is huge, right? Think about it: what this means is that if you have a key algorithm for making a decision, you now have to publish and make that algorithm transparent. That means someone else could replicate the algorithm exactly the way you’re doing it. This, plus the transferability of personal data, means that you could have two exactly equivalent services, with the same data about you, that you could use. So that completely breaks any technological monopoly on things that are important for your life. And I think this is very, very interesting, because the impact it will have on AI is huge. People are racing to get the best AI algorithm and the best data. But at the end of the day, if I can copy your algorithm because it’s an important thing for my life and it has to be transparent, and if I can transfer my data from you to another provider, you don’t have as much of a competitive advantage anymore.
But doesn’t that mean, therefore, you don’t have any incentive to invest in it? If you’re basically legislating all sorts…[if] all code is open-sourced, then why would anybody spend any money investing in something that they get no benefit whatsoever from?
Innovation. User experience. Like monopoly is the worst thing that could happen for innovation and for people, right?
Is that necessarily true? I mean, patents are a form of monopoly, right? We let drug companies have a monopoly on a drug for some period of time because they need an economic incentive to invest in it. A whole body of law is built around monopoly, in one form or another, in the idea of patents. If you’re saying there’s an entire area that’s worth trillions of dollars, but we’re not going to let anybody profit from it, because anything you do, you have to share with everybody else, aren’t you just destroying innovation?
That transparency doesn’t prevent you from protecting your IP, right?
What’s the difference between the IP and the algorithm?
So, you can still patent the system you created, and by the way, when you patent a system, you make it transparent as well, because anybody can read the patent. So, if anything, I don’t think that changes the protection over time. I think what it fundamentally changes is that you’re no longer going to be limited to a black-box approach that you have no visibility into. I think the Europeans want the market to become a lot more open; they want people to have choices, and they want people to be able to say no to a company whose values they don’t share and whose treatment of them they don’t like.
So obviously privacy is something near and dear to your heart. Snips is an AI assistant designed to protect privacy. Can you tell us what you’re trying to do there, and how far along you are?
So when we started the company in 2013, we started it as a research lab in AI, and one of the first things we focused on was the intersection between AI and privacy: how do you guarantee privacy in the way that you’re building those AIs? That eventually led us to what we’re doing now, which is selling a voice platform for connected devices. If you’re building a car and you want people to talk to it, you can use our technology to do that, but we’re doing it in a way that all the data of the user, their voice and their personal data, never leaves the device the user has interacted with. So whereas Alexa and Siri and Google Assistant run in the cloud, we run completely on the device itself. Not a single piece of your personal data goes to a server. And this is important because voice is biometric; voice is something that identifies you uniquely and that you cannot change. It’s not like a cookie in a browser; it’s more like a fingerprint. When you send biometric data to the cloud, you’re exposing yourself to having your voice copied, potentially, down the line, and you’re increasing the risk that someone might break into one of those servers and essentially pretend to be a million people on the phone, with their banks, their kids, whatever. So for us, privacy is extremely important as part of the game, and by the way, doing things on-device means that we can guarantee privacy by design, which also means that we are currently the only technology on the planet that is 100% compliant with those new European regulations. Everybody else is in a gray area right now.
And so where are you in your lifecycle of your product?
We’ve been building this for quite some time, and we’ve had quite a few clients use it. We officially launched it a few weeks ago, and the launch was really amazing. We even have a web version that people can use to build prototypes for the Raspberry Pi. Our technology, by the way, can run completely on a Raspberry Pi; we do everything from speech recognition to natural language understanding on the actual Raspberry Pi, and we’ve had over a thousand people start building assistants on it. I mean, it was really, really crazy. So it’s a very, very mature technology. We benchmarked it against Alexa, against Google Assistant, and against every other voice technology provider out there, and we’ve actually gotten better performance than they have. So we have a technology that can run on a Raspberry Pi, or any other small device, that guarantees privacy by design, that is compliant with the new European regulation, and that performs better than everything that’s out there. This is important, because there is this false dichotomy that you have to trade off AI and privacy, but this is wrong; it’s actually not true at all. You can really have the two together.
Final question, do you watch or read, or consume any science fiction, and if so, do you know any views of the future that you think are kind of in alignment with yours or anything you look at and say “Yes, that’s what could happen!”
I think there are bits and pieces in many science fiction books, and actually this is the reason why I’m thinking about writing one myself now.
All right, well Rand this has been fantastic. If people want to keep up with you, and follow all of the things you’re doing and will do, can you throw out some URLs, some Twitter handles, whatever it is people can use to keep an eye on you?
Well, the best way to follow me I guess would be on Twitter, so my handle is RandHindi, and on Medium, my handle is RandHindi. So, I blog quite a bit about AI and privacy, and I’m going to be announcing quite a few things and giving quite a few ideas in the next few months.
All right, well this has been a far-reaching and fantastic hour. I want to thank you so much for taking the time, Rand.
Thank you very much. It was a pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 37: A Conversation with Mike Tamir

In this episode, Byron and Mike talk about AGI, Turing Test, machine learning, jobs, and Takt.
[podcast_player name=”Episode 37: A Conversation with Mike Tamir” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-03-27-(00-55-21)-mike-tamir.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-1.jpg”]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. I’m excited today, our guest is Mike Tamir. He is the Chief Data Science Officer at Takt, and he’s also a lecturer at UC Berkeley. If you look him up online and read what people have to say about him, you notice that some really, really smart people say Mike is the smartest person they know. Which implies one of two things: Either he really is that awesome, or he has dirt on people and is not above using it to get good accolades. Welcome to the show, Mike!
Mark Cuban came to Austin, where we’re based, and gave a talk at South By Southwest where he said the first trillionaires are going to be in artificial intelligence. And he said something very interesting, that if he was going to do it all over again, he’d study philosophy as an undergrad, and then get into artificial intelligence. You studied philosophy at Columbia, is that true?
I did, and also my graduate degree, actually, was a philosophy degree, cross-discipline with mathematical physics.
So how does that work? What was your thinking? Way back in the day, did you know you were going to end up where you were, and this was useful? That’s a pretty fascinating path, so I’m curious, what changed, you know, from 18-year-old Mike to today?
[Laughs] Almost everything. So, yeah, I think I can safely say I had no idea that I was going to be a data scientist when I went to grad school. In fact, I can safely say that the profession of data science didn’t exist when I went to grad school. Like a lot of people who joined the field around when I did, I kind of became a data scientist by accident. My degree, while it was in philosophy, was fairly technical; it was focused on mathematical physics, and I learned a little bit about machine learning while I was doing it.
Would you say studying philosophy has helped you in your current career at all? I’m curious about that.
Um, well, I hope so. My focus was very much the philosophy of science, so I think back to it all the time when we are designing experiments, when we are putting together different tests for different machine learning algorithms. I do think about what a scientifically sound way of approaching it is. That’s as much the physics background as it is the philosophy background, but it certainly does influence, I’d say daily, what we do in our data science work.
Even being a physicist who got into machine learning, how did that come about?
Well, a lot of my graduate research in physics touched a little bit on neural activity, but a good deal of it focused on quantum statistical mechanics, which really involved doing simulations and thinking about the world in terms of lots of random variables and unknowns that result in these emergent patterns. And in a lot of ways, what we do now at Takt actually involves writing a lot about group theory and how it can be used as a tool for analyzing the effectiveness of deep learning. There are a lot of, at least at a high level, similarities between trying to find those superpatterns in the signal in machine learning and the way you might think about emergent phenomena in physical systems.
Would an AGI be emergent? Or is it going to be just nuts and bolts brute force?
[Laughs] That is an important question. The more I find out about successes, at least the partial successes, that can happen with deep learning and with trying to recreate the sorts of sensitivities that humans have, that you would have with object recognition, with speech recognition, with semantics, with general, natural language understanding, the more sobering it is thinking about what humans can do, and what we do with our actual, with our natural intelligence, so to speak.
So do you think it’s emergent?
You know, I’m hesitant to commit. It’s fair to say that there is something like emergence there.
You know this subject, of course, a thousand times better than me, but my understanding of emergence is that there are two kinds: there’s a weak kind and a strong one. A weak one is where something happens that was kind of surprising—like you could study oxygen all your life, and study hydrogen but not be able to realize, “Oh, you put those together and it’s wet.” And then there’s strong emergence which is something that happens that is not deconstructable down to its individual components, it’s something that you can’t actually get to by building up—it’s not reductionist. Do you think strong emergence exists?
Yeah, that’s a very good question, and one that I’ve thought about quite a bit. My answer, I think, would be that it’s not as stark as it might seem. For most cases of strong emergence that you might point to, there are actually stories you can tell where it’s not as much of a category distinction, or a non-reducible phenomenon, as you might think. And that goes for things as well studied as phase transitions and criticality phenomena in the physics realm, as it does, possibly, for what we talk about when we talk about intelligence.
I’ll only ask you one more question on this, and then we’ll launch into AI. Do you have an opinion on whether consciousness is a strong emergent phenomenon? Because that’s going to speak to whether we can build it.
Yeah, so that’s a very good question again. I think that when we are able to recreate some of these at least semi-intelligent behaviors, or components of what integrated AI looks like (and we’re really just in the beginning stages in a lot of cases), it shows more about the magic we see when we see consciousness. It brings human consciousness closer to what we see in the machines, rather than the other way around.
That is to say, human consciousness is certainly remarkable, and it feels very special and very different from what imperatively constructed machine instructions are. There is another way of looking at it, though: by seeing how, say, a deep neural net is able to adapt to signals that are very sophisticated, and maybe almost impossible to really boil down, we may be seeing something that we might imagine our brains are doing all the time, just with a far, far larger number of parameters and network connections.
So, it sounds like you’re saying it may not be that machines are somehow endowed with consciousness, but that we discover we’re not actually conscious. Is that kind of what you’re saying?
Yeah, or maybe something in the middle.
Certainly, there’s no denying our personal experience of consciousness, and what we see when we interact with other humans, or other people more generally, and I don’t want to discount how special that is. At the same time, I think there is a much blurrier line, is the best way to put it, between the artificial intelligence that we are just now starting to get our arms around and what we actually see naturally.
So, the show’s called Voices in AI, so I guess I need to get over to that topic. Let’s start with a really simple question: What is artificial intelligence?
Hmm. So, until a couple years ago, I would say that artificial intelligence really is what we maybe now call integrated AI. So a dream of using maybe several integrated techniques of machine learning to create something that we might mistake for, or even accurately describe as, consciousness.
Nowadays, the term “artificial intelligence” has, I’d say, probably been a little bit whitewashed or diluted. You know, artificial intelligence can mean any sort of machine learning, or maybe even no machine learning at all. It’s a term that a lot of companies put in their VC deck, and it could be something as simple as just using a logistic regression (hopefully, a logistic regression that uses gradient descent as opposed to a closed-form solution). Right now, I think it’s become kind of indistinguishable from generic machine learning.
I, obviously, agree, but, take just the idea that you have in your head that you think is legit: is it artificial in the sense that artificial turf isn’t really grass, it just looks like it? Or is it artificial in the sense we made it. In other words, is it really intelligence, or is it just something that looks like intelligence?
Yeah, I’m sure people bring up the Turing test quite a bit when you broach this subject. You know, the Turing test is very coarsely… You know, how would you even know? How would you know the difference between something that is an artificial intelligence and something that’s a bona fide intelligence, whatever bona fide means. I think Turing’s point, or one way of thinking about Turing’s point, is that there’s really no way of telling what natural intelligence is.
And that again makes my point, that it’s a very blurry line, the difference between true or magic soul-derived consciousness, and what can be constructed maybe with machines, there’s not a bright distinction there. And I think maybe what’s really important is that we probably shouldn’t discount ostensible intelligence that can happen with machines, any more than we should discount intelligence that we observe in humans.
Yeah, Turing actually said, a machine may do it differently but we still have to say that the machine is thinking, it just may be different. He, I think, would definitely say it’s really smart, it’s really intelligent. Now of course the problem is we don’t have a consensus definition even of intelligence, so, it’s almost intractable.
If somebody asks you what’s the state of the art right now, where are we at? Henceforth, we’re just going to use your idea of what actual artificial intelligence is. So, if somebody said “Where are we at?” are we just starting, or are we actually doing some pretty incredible things, and we’re on our way to doing even more incredible things?
[Laughs] My answer is, both. We are just starting. That being said, we are far, we are much, much further along than I would have guessed.
When do you date, kind of, the end of the winter? Was there a watershed event or a technique? Or was it a gradualism based on, “Hey, we got faster processors, better algorithms, more data”? Like, was there a moment when the world shifted? 
Maybe harkening back to the discussion earlier, you know, as someone who comes from physics, there’s what we call the “miracle year,” when Einstein published a series of really remarkable papers, roughly just over a hundred years ago. You know, there is a miracle year, and then there’s also the moment when he finally was able to crack the code on general relativity. I don’t think we can safely say that there has been a miracle year, until far, far in the future, when it comes to the realm of deep learning and artificial intelligence.
I can say that, in particular, with natural language understanding, the ability to create machines that can capture semantics, the ability of machines to identify objects and to identify sounds and turn them into words, that’s important. The ability for us to create algorithms that are able to solve difficult tasks, that’s also important. But probably at the core of it is the ability for us to train machines to understand concepts, to understand language, and to assign semantics effectively. One of the big pushes that’s happened, I think, in the last several years, when it comes to that, is the ability to represent sequences of terms, and sentences, and entire paragraphs, in a rich, mathematically-representable way that we can then do things with. That’s been a big leap, and we’re seeing a lot of that progress with neural word embeddings and with sentence embeddings. Even as recently as a couple months ago, some of the work on sentence embeddings that’s coming out is certainly part of that watershed, and that move from the dark ages of trying to represent natural language in an intelligible way, to where we are now. And I think that we’ve only just begun.
There’s been a centuries-old dream in science to represent ideas and words and concepts essentially mathematically, so that they can be manipulated just like anything else can be. Is that possible, do you think?
Yeah. So one way of looking at the entire twentieth century is as a gross failure in the ability to accurately capture the way humans reason in Boolean logic, in the way we represent first-order logic, or more directly in code. That was a failure, and it wasn’t until we started thinking about the way we represent language in terms of how concepts are actually found in relation to one another, by training an algorithm to read all of Wikipedia and to start embedding that with Word2vec, that things changed. That’s been a big deal.
By doing that, we can now start capturing all of this. It’s sobering, but we now have algorithms that can, with sentence embeddings, detect things like logical implication, logical equivalence, or logical non-equivalence. That’s a huge step, and that’s a step that many tried to take before and failed.
Do you believe that we are on a path to creating an AGI, in the sense that what we need is some advances in algorithms, some faster machines, and more data, and eventually we’re going to get there? Or, is AGI going to come about, if it does, from a presently-unknown approach, a completely different way of thinking about knowledge?
That’s difficult to speculate on. Let’s take a step back. Five years ago, less than five years ago, if you wanted to propose a deep learning algorithm to an industry to solve a very practical problem, the response you would get was, “Stop being so academic; let’s focus on something a little simpler, a little bit easier to understand.” There’s been a dramatic shift, just in the last couple years, such that now the expectation is, if you’re someone in the role that I’m in, or that my colleagues are in, and you’re not considering things like deep learning, then you’re not doing your job. That’s something that seems to have happened overnight, but was really a gradual shift over the past several years.
Does that mean that deep learning is the way? I don’t know. What do you really need in order to create an artificial intelligence? Well, we have a lot of the pieces. You need to be able to observe maybe visually or with sounds. You need to be able to turn those observations into concepts, so you need to be able to do object recognition visually. Deep learning has been very successful in solving those sorts of problems, and doing object recognition, and more recently making that object recognition more stable under adversarial perturbation.
You need to be able to possibly hear and respond, and that’s something that we’ve gotten a lot better at, too. A lot of that work is being done by research labs; there’s been some fantastic work in making it more effective. You need to be able to not just identify those words or those concepts, but also put them together, and put them together not just in isolation but in the context of sentences. So, the work that’s coming out of Stanford, and some of the Stanford graduates at Einstein Labs, which is sort of at the forefront there, is doing a very good job in capturing not just semantics—in the sense of, what is represented in this paragraph and how can I pull out the most important terms?—but also doing abstractive text summarization, and, you know, being able to boil it down to terms and concepts that weren’t even in the paragraph. And you need to be able to do some sort of reasoning. Just like the example I gave before, you need to be able to use sentence embeddings to be able to classify—we’re not there yet, but—that this sentence is related to this sentence, and this sentence might even entail this sentence.
And, of course, if you want to create Cylons, so to speak, you also need to be able to do physical interactions. All of these solutions in many ways have to do with the general genre of what’s now called “deep learning,” of being able to add parameters upon parameters upon parameters to your algorithm, so that you can really capture what’s going on in these very sophisticated, very high dimensional spaces of tasks to solve.
No one has really gotten to the point where they can integrate all of these together. Is that going to come from something that is now very generic, that we call deep learning, which is really a host of lots of different techniques that just use high-dimensional parameter spaces, or is it going to be something completely new? I wouldn’t be able to guess.
So, there are a few things you left off your list, though. Presumably you don’t think an AGI would need to be conscious? Consciousness isn’t a part of our general intelligence?
Ah, well, you know, maybe that brings us back to where we started.
Right, right. Well how about creativity? That wasn’t in your list either. Is that just computational from those basic elements you were talking about? Seeing, recognizing, combining?
So, an important part of that is being able to work with language, I’d say, being able to do natural language understanding, and to do it at higher than the word level, at the sentence level. Certainly, anything that might be mistaken for, or identified as, thinking has to have that as a necessary component. And being able to interact, being able to hold conversations, to abstract, and to draw conclusions and inferences that aren’t necessarily there.
I’d say that that’s probably the sort of thing you would expect of a conscious intelligence, whether it’s manifested in a human or manifested in a machine.
So, you mentioned the Turing test earlier. And, you know, there are a lot of people who build chatbots and things that, you know, are not there yet, but people are working on it. And I always type in one, first question, it’s always the same, and I’ve never seen a system that even gets the question, let alone can answer it.
The question is, “What’s bigger, a nickel or the sun?” So, two questions, one, why is that so hard for a computer, and, two, how will we solve that problem?
Hmm. I can imagine how I would build a chatbot, and I have worked on this sort of project in the past. One of the things—and I mentioned earlier this allusion to a miracle year—is the advances that happened, in particular, in 2013, with figuring out ways of doing neural word embeddings. That’s so important, and one way of looking at why that’s so important is that, when we’re doing machine learning in general—this is what I tell my students, this is what drives a lot of our design—you have to manage the shape of your data. You have to make sure that the amount of examples you have, the density of data points you have, is commensurate with the number of degrees of freedom that you have representing your world, your model.
Until very recently, there have been attempts, but none of them as successful as what we’ve seen in the last five years. The baseline has been what’s called the one-hot vector encoding, where you have a different dimension for every word in your language—usually around a million words. Each vector is all zeros except for a single one: for the first word in the dictionary, the word “a,” which is spelled with the letter “a,” you have a one in the first dimension and zeros everywhere else. For the second word, you have a zero, then a one, and the rest zeros. So the point here, not to get too technical, is that your dimensions are just too many.
You have millions and millions of dimensions. When we talk with students about this, it’s called the curse of dimensionality, every time you add even one dimension, you need twice as many data points in order to maintain the same density. And maintaining that density is what you need in order to abstract, in order to generalize, in order to come up with an algorithm that can actually find a pattern that works, not just for the data that it sees, but for the data that it will see.
What happens with these neural word embeddings? Well, they solve the problem of the curse of dimensionality, or at least they’ve gotten their arms a lot further around it than ever before. In a million-dimensional vector space, all that rich information is still there, but it’s spread so thinly across so many dimensions that you can’t find patterns as easily as you can in a smaller number of dimensions. These embeddings enable us to represent terms, to represent concepts, in that smaller number of dimensions.
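The contrast between a one-hot encoding and a dense embedding can be sketched in a few lines of Python. This is a toy illustration only: the five-word vocabulary and the three-dimensional “embedding” values below are invented, whereas real embeddings are learned from data and have hundreds of dimensions.

```python
# Toy contrast between one-hot encoding and a dense embedding.
# The vocabulary and the 3-d embedding values are invented for illustration.

vocab = ["a", "aardvark", "apple", "nickel", "sun"]

def one_hot(word):
    """One dimension per vocabulary word: a 1 in that word's slot, 0 elsewhere."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

# With a real million-word vocabulary, this vector would have a million entries.
print(one_hot("sun"))  # [0, 0, 0, 0, 1]

# A learned embedding packs each word into a few dense dimensions instead.
embedding = {
    "nickel": [0.2, 0.8, 0.1],
    "sun":    [0.9, 0.1, 0.3],
}
print(len(one_hot("sun")), "->", len(embedding["sun"]))  # 5 -> 3
```

The point is purely about the shape of the data: the one-hot vector grows with the vocabulary, while the dense embedding stays small no matter how many words the model knows.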
Now, once you have that dimensionality reduction, once you’re able to compress language into a lower dimension, you can do all sorts of things with it that you just couldn’t do before. And that’s part of why we’ve seen this recent wave of chatbots; they probably have something like this technology. What does this have to do with your question? These embeddings, for the most part, happen not by giving instructions—well, nickels are this size, and they’re round, and they’re made of this sort of composite, and they have a picture of Jefferson stamped on the top—that’s not how you learn to mathematically represent these words at all.
What you do is you feed the algorithm lots and lots of examples of usage—you let it read all of Wikipedia, you let it read all of Reuters—and slowly but surely the algorithm will start to see these patterns of co-usage, and will start to learn how one word follows after another. And what’s really remarkable, and could be profound—at least I know that a lot of people would want to infer that—is that the semantics kind of come out for free.
You end up seeing the geometry of the way these words are embedded in such a way that you see, a famous example is a king vector minus a man vector plus a woman vector equals a queen vector, and that actually bears out in how the machine can now represent the language, and it did that without knowing anything about men, women, kings, or queens. It did it just by looking at frequencies of occurrence, how those words occur next to each other. So, when you talk about nickels and the sun, my first thought, given that running start, is that well, the machine probably hasn’t seen a nickel and a sun in context too frequently, and one of the dirty secrets about these neural embeddings is that they don’t do as well on very low-frequency terms, and they don’t always do well in being able to embed low frequency co-occurrences.
And maybe it’s just the fact that it hasn’t really learnt about, so to speak, it hasn’t read about, nickels and suns in context together.
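The “king minus man plus woman” arithmetic can be mimicked with hand-made toy vectors and cosine similarity. The three-dimensional vectors here are invented for illustration; real Word2vec vectors are learned from co-occurrence statistics, but the nearest-neighbor lookup works the same way.

```python
import math

# Toy demonstration of the "king - man + woman ~ queen" vector arithmetic.
# These 3-d vectors are hand-made for illustration, not learned from text.
vectors = {
    "king":  [0.9, 0.9, 0.1],
    "queen": [0.9, 0.1, 0.9],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def add_sub(a, b, c):
    """Compute a - b + c component-wise."""
    return [x - y + z for x, y, z in zip(a, b, c)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

target = add_sub(vectors["king"], vectors["man"], vectors["woman"])

# Find the vocabulary word whose vector is closest to king - man + woman.
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```

With learned embeddings the match is approximate rather than exact, but the geometry of the space is what makes this kind of analogy arithmetic possible at all.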
So, is it an added wrinkle that, for example, you take a word like “set,” s-e-t—I think the OED has two or three hundred definitions of it—you know, it’s something you do, it’s an object, etcetera. There’s a Wikipedia entry on a sentence, an eight-word-long, grammatically correct sentence, which is, “Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo,” which contains nouns, verbs, all of that. Is there any hope that if you took all the monkeys in all the universe typing cogent and coherent sentences, it would ever be enough to train it to what a human can do?
There’s a couple things there, and one of the key points that you’re making is that there are homonyms in our language, and so work should be done on disambiguating the homonyms. And it’s a serious problem for any natural language understanding project. And, you know, there are some examples out there of that. There’s one recently which is aimed at not just identifying a word but also disambiguating the usages or the context.
There are also others focused not just on how to mathematically pinpoint a representation of a word, but on how to represent the breadth of its usage. So imagine not a vector but a distribution, a cloud that’s denser around a focal point. All of those, I think, are a step in the right direction toward capturing what is probably more representative of how we use language. And disambiguation, in particular with homonyms, is a part of that.
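A crude version of that disambiguation idea can be sketched by giving each sense of a homonym its own set of cue words and scoring senses by overlap with the surrounding sentence. Everything here, the sense labels and the cue words alike, is invented for illustration; modern systems learn contextual sense representations rather than using hand-written lists.

```python
# Hypothetical sketch of word-sense disambiguation by context overlap.
# Each sense of the homonym "set" gets an invented bag of typical cue words.

senses = {
    "set (collection)": {"elements", "union", "subset", "math"},
    "set (to place)":   {"table", "down", "put", "place"},
    "set (tennis)":     {"match", "game", "serve", "tennis"},
}

def disambiguate(sentence_words):
    """Pick the sense whose cue words overlap most with the sentence."""
    scores = {sense: len(cues & sentence_words) for sense, cues in senses.items()}
    return max(scores, key=scores.get)

print(disambiguate({"she", "won", "the", "tennis", "match"}))  # set (tennis)
```

Replacing the hand-written cue sets with learned distributions over contexts is essentially what the sense-embedding work described above does.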
I only have a couple more questions in this highly theoretical realm, then I want to get down to the nitty gritty. I’m not going to ask you to pick dates or anything, but the nickel and the sun example, if you were just going to throw a number out, how many years is it until I type that question in something, and it answers it? Is that like, oh yeah we could do it if we wanted to, it’s just not a big deal, maybe give it a year? Or, is it like, “Oh, no that’s kind of tricky, wait five years probably.”
I think I remember hearing once, “Never make a prediction.”
Right, right. Well, just, is that a hard problem to solve?
The nickel and the sun is something that I’d hesitate to say is solvable in my lifetime, just to give a benchmark there, violating that maxim. I can’t say exactly when, what I can say is that the speed with which we are solving problems that I thought would take a lot longer to solve, is accelerating.
To me, while it’s a difficult problem and there are several challenges, we are still just scratching the surface in natural language understanding and word representation in particular, you know words-in-context representation. I am optimistic.
So, final question in this realm, I’m going to ask you my hard Turing test question, I wouldn’t even give this to a bot. And this one doesn’t play with language at all.
Dr. Smith is eating lunch at his favorite restaurant. He receives a call, takes it and runs out without paying his tab. Is management likely to prosecute? So you have to be able to infer it’s his favorite restaurant, they probably know who he is, he’s a doctor, that call was probably an emergency call. No, they’re not going to prosecute because that’s, you know, an understandable thing. Like, that doesn’t have any words that are ambiguous, and yet it’s an incredibly hard problem, isn’t it?
It is, and in fact, I think that is one of the true benchmarks—even more so than comparing a nickel and the sun—of real, genuine natural language understanding. It has all sorts of things: it has object permanence, it has tracking those objects through different sentences, it has ordering sequences of events, and it has “management,” mentioned in that last sentence, where you have to infer that “management” is somehow connected to the management of the restaurant.
That is a super hard one to solve for any machine. It’s also something we’re starting to make progress on, using LSTMs that do several passes through a sequence of sentences, and benchmark datasets of artificial sentences built for natural language understanding—Facebook’s bAbI dataset, which is out there to help train and benchmark these sorts of object-permanence tasks across multi-sentence threads. And we’ve made modest gains on that. There are algorithms, like the “Ask Me Anything” algorithm, that have shown that it’s at least possible to start tracking objects over time, and, with several passes, come up with the right answer to questions about objects in sentences across several different statements.
Pulling back to the here and now, and what’s possible and what’s not. Did you ever expect AI to become part of the daily conversation, just to be part of popular culture the way it is now?
About as much as I expect that, in a couple years, AI is going to be a term much like Big Data, which is to say, overused.
I think, with respect to my earlier comments, the sort of AI that you and I have been dancing around, which is fully-integrated AI, is not what we talk about in daily conversation now, or for the most part not what’s being talked about in this context. And so it might be a little bit of a false success, or a spurious usage of “AI,” given how frequently we see it.
That doesn’t mean that we haven’t made remarkable advances. It doesn’t mean that the examples that I’ve mentioned, in particular, in deep learning aren’t important, and aren’t very plausibly an early set of steps on the path. I do think that it’s a little bit of hype, though.
If you were a business person and you’re hearing all of this talk, and you want to do something that’s real and that’s actionable, and you walk around your business, department to department—you go to HR, and to Marketing, and you go to Sales, and Development—how do you spot something that would be a good candidate for the tools we have today, something that is real and actionable and not hype?
Ah, well, I feel like that is the job I do all the time. We’re constantly meeting with new companies, Fortune 500 CEOs and C-suite execs, talking about the problems that they want to solve, and thinking about ways of solving them. I think a best practice is always to keep it simple. There is a host of pre-deep-learning techniques for doing all sorts of things—classification, clustering, user-item matching—that are still tried-and-true, and that should probably be tried first.
And there are now a lot of great paths to using these more sophisticated algorithms, which means you should be considering them early. How exactly to tell one case from the other, I think, is partly practice. It’s actually one of the things I find when I talk to students about what they’re learning: they walk away with not just, “I know what the algorithm is, I know what the objective function is, and how to manage momentum in the right way when optimizing that function,” but also a sense of the similarity between, say, matching users and items in a recommender and abstracting the latent semantic associations in a bit of text or in an image. There are similarities, and certain algorithms solve all of those problems. And that, in a lot of ways, is practice.
You know, when the consumer web first came out and it became popularized, people had, you know, a web department, which would be a crazy thought today, right? Everything I’ve read about you, everybody says that you’re practical. So, from a practical standpoint, do you think that companies ought to have an AI taskforce? And have somebody whose job it is to do that? Or, is it more the kind of thing that it’s going to gradually come department by department by department? Or, is it prudent to put all of your thinking in one war room, as it were?
So, yeah, the general question is: what’s the best way to do organizational design with machine learning teams? The first answer is that there are several right ways and there are a couple of wrong ways. One of the wrong ways, common in the early days, is where you have a data science team that is completely isolated and is only responsible for R&D work, prototyping certain use cases, and then—to use a phrase you hear often—throwing it over the wall to engineering to go implement, because “I’m done with this project.” That’s a wrong way.
There are several right ways, and those right ways usually involve bringing the people who are working on machine learning closer to production, closer to engineering, and also bringing the people involved in engineering and production closer to the machine learning—overall, blurring those lines. You can do this with vertically-integrated small teams, you can do this with peer teams, you can do this with a mandate, as some larger companies like Google have, that is really focused on making all their engineers machine learning engineers. I think all those strategies can work.
It all sort of depends on the size and the context of your business, and what kind of issues you have. And depending on those variables, then, among the several solutions, there might be one or two that are most optimal.
You’re the Chief Data Science Officer at Takt, spelled T-A-K-T, and it’s takt.com if anybody wants to go there. What does Takt do?
So we do the backend machine learning for large-scale enterprises. You know, many of your listeners might go to Starbucks and use the app to pay for their coffee. We do all of the machine learning personalization for the offers, for the games, for the recommenders in that app. And the way we approach that is by creating a whole host of different algorithms for different use cases—this goes back to your earlier question of abstracting the same techniques across many different use cases—and then applying them for each individual customer. For the list-completion use case, we find a recurrent neural network approach works, where there’s a time series of opportunities: you can have an interaction with an end user, learn from that interaction, and follow up with another interaction, doing things like reinforcement learning to run several interactions in a row, which may or may not get a signal back, but which have been trained to work towards a goal over time without that direct feedback signal.
This is the same sort of algorithms, for instance, that were used to train AlphaGo, to win a game. You only get that feedback at the end of the game, when you’ve won or lost. We take all of those different techniques and embed them in different ways for these large enterprise customers.
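The delayed-reward setup described above can be sketched with tabular Q-learning on a tiny corridor world. This is a minimal, invented illustration, not anyone’s production system: the reward arrives only when the agent finally reaches the goal, and the learning update propagates that end-of-episode signal back to earlier steps.

```python
import random

# Minimal tabular Q-learning sketch of learning from a delayed reward.
# The agent walks a 5-cell corridor (positions 0..4) and is rewarded only
# when it reaches the right end, like a game scored only at win or loss.

random.seed(0)
N, GAMMA, ALPHA = 5, 0.9, 0.5
ACTIONS = (-1, +1)                           # step left / step right
Q = {(s, a): 0.0 for s in range(N - 1) for a in ACTIONS}

for _ in range(2000):
    s = 0
    for _ in range(50):                      # explore with a random policy
        a = random.choice(ACTIONS)
        s2 = min(max(s + a, 0), N - 1)
        done = (s2 == N - 1)
        reward = 1.0 if done else 0.0        # reward only at the goal
        # Q-learning update: bootstrap on the best next action's value.
        target = reward if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        if done:
            break
        s = s2

# The learned greedy policy should point right (+1) in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

The discount factor plays the role of credit assignment here: even though only the final step is rewarded, earlier state-action pairs accumulate value through the bootstrapped target, which is the essence of learning toward a goal without a direct per-step feedback signal.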
Are you a product company, a service company, a SaaS company—how does all that manifest?
We are a product company. We do tend to focus on the larger enterprises, which means that there is a little bit of customization involved—but there’s always going to be some customization involved when it comes to machine learning, unless it’s just a suite of tools, which we are not. And what that means is that you do have to train, and apply, and suggest the right kinds of use cases for the suite of machine learning tools that we have.
Two more questions, if I may. You mentioned Cylons earlier—a Battlestar Galactica reference, for those who don’t watch it. What science fiction do you think gets the future right? Like, when you watch it or read it, or what have you, you think, “Oh yeah, things could happen that way, I see that”?
[Laughs] Well, you know the physicist in me still is both hopeful and skeptical about faster-than-light travel, so I suppose that wouldn’t really be the point of your question, is more with computers and with artificial intelligence.
Right, like Her or Ex Machina or what have you.
You know, it’s tough to say which of these conscious-being robots is the most accurate. I think there are things worth observing that have already happened. Star Trek, you know—we created the iPad way before they had them in Star Trek time, so, good for reality. We also have all sorts of devices. I remember when, in the ’80s, to date myself, one of the Star Trek movies came out, and Scotty gets up in front of a computer, an ’80s computer, and picks up the mouse and starts speaking into it, saying, “Computer, please do this.”
And my son will not get that joke, because he can say “Hey, Siri” or “Okay, Google” or “Alexa” or whatever the device is, and the computer will respond. I like to focus on those smaller wins—that in some cases we’ve been able to accomplish these things dramatically quicker than forecast. I did see an example the other day about HAL, the 2001: A Space Odyssey artificial intelligence, where people were mystified that this computer program could beat a human in chess, but didn’t blink an eye that the computer program could not only hold a conversation, but had a very sardonic disposition towards the main character. That very well captures the dichotomy: there are several things that are very likely to be captured, that we can get to very quickly, and other things that we thought were easy but that take quite a lot longer than expected.
Final question, overall, are you an optimist? People worry about this technology—not just the killer robots scenario, but they worry about jobs and whatnot—but what do you think? Broadly speaking, as this technology unfolds, do you see us going down a dystopian path, or are you optimistic about the future?
I’ve spoken about this before a little bit. I don’t want to say, “I hope,” but I hope that Skynet will not launch a bunch of nuclear missiles. I can’t really speak with confidence to whether that’s a true risk or just an exciting storyline. What I can say is that the displacement of service jobs by automated machines is a very clear and imminent reality.
And that’s something that I’d like to think that politicians and governments and everybody should be thinking about—in particular how we think about education. The most important skill we can give our children is teaching them how to code, how to understand how computer programs work, and that’s something that we really just are not doing enough of yet.
And so will Skynet nuke everybody? I don’t know. Is it the case that I am teaching my son, at six years old, how to code already? Absolutely. And I think that will make a big difference in the future.
But wouldn’t coding be something relatively easy for an AI? I mean it’s just natural language, tell it what you want it to do.
Computers that program themselves. It’s a good question.
So you’re not going to suggest, I think you mentioned, your son be a philosophy major at Columbia?
[Laughs] You know what, as long as he knows some math and he knows how to code, he can do whatever he wants.
Alright, well we’ll leave it on that note, this was absolutely fascinating, Mike. I want to thank you, thank you so much for taking the time. 
Well thank you, this was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 36: A Conversation with Bill Mark

In this episode Byron and Bill talk about SRI International, aging, human productivity and more.
[podcast_player name=”Episode 36: A Conversation with Bill Mark” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-03-22-(00-59-22)-bill-mark.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-2.jpg”]
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Bill Mark. He heads up SRI International’s Information and Computing Sciences Division, which consists of two hundred and fifty researchers, in four laboratories, who create new technology in virtual personal assistants, information security, machine learning, speech, natural language, computer vision—all the things we talk about on the show. He holds a Ph.D. in computer science from MIT. Welcome to the show, Bill.
Bill Mark:  Good to be here.
So, let’s start off with a little semantics. Why is artificial intelligence, artificial? Is artificial because it’s not really intelligence, or what? 
No, it’s artificial, because it’s created by human beings as opposed to nature. So, in that sense, it’s an artifact, just like any other kind of physical artifact. In this case, it’s usually a software artifact.
But, at its core, it truly is intelligent and its intelligence doesn’t differ in substance, only in degree, from human intelligence?
I don’t think I’d make that statement. The definition of artificial intelligence to me is always a bit of a challenge. The artificial part, I think, is easy, we just covered that. The intelligence part, I’ve looked at different definitions of artificial intelligence, and most of them use the word “intelligence” in the definition. That doesn’t seem to get us much further. I could say something like, “it’s artifacts that can acquire and/or apply knowledge,” but then we’re going to have a conversation about what knowledge is. So, what I get out of it is it’s not very satisfying to talk about intelligence at this level of generality because, yes, in answer to your question, artificial intelligence systems do things which human beings do, in different ways and, as you indicated, not with the same fullness or level that human beings do. That doesn’t mean that they’re not intelligent, they have certain capabilities that we regard as intelligent.
You know it’s really interesting because at its core you’re right, there’s no consensus definition on intelligence. There’s no consensus definition on life or death. And I think that’s really interesting that these big ideas aren’t all that simple. I’ll just ask you one more question along these lines then. Alan Turing posed the question in 1950, Can a machine think? What would you say to that?
I would say yes, but now we have to wonder what “think” might mean, because “think” is one aspect of intelligent behavior, it indicates some kind of reasoning or reflection. I think that there are software systems that do reason and reflect, so I will say yes, they think.
All right, so now let’s get to SRI International. For the listeners who may not be familiar with the company can you give us the whole background and some of the things you’ve done to date, and why you exist, and when it started and all of that?
Great, just a few words about SRI International. SRI International is a non-profit research and development company, and that’s a pretty rare category. A lot of companies do research and development—fewer than used to, but still quite a few—and very few have research and development as their business, but that is our business. We’re also non-profit, which really means that we don’t have shareholders. We still have to make money, but all the money we make has to go into the mission of the organization, which is to do R&D for the benefit of mankind. That’s the general thing. It started out as part of Stanford—it was formerly the Stanford Research Institute. It’s been independent since 1970, and it’s one of the largest of these R&D companies in the world, about two thousand people.
Now, the information and computing sciences part, as you said, that’s about two hundred and fifty people, and probably the thing that we’re most famous for nowadays is that we created Siri. Siri was a spinoff of one of my labs, the AI Center. It was a spinoff company of SRI, that’s one of the things we do, and it was acquired by Apple, and has now become world famous. But we’ve been in the field of artificial intelligence for decades. Another famous SRI accomplishment would be Shakey the Robot, which was really the first robot that could move around and reason and interact. That was many years ago. We’ve also, in more recent history, been involved in very large government-sponsored AI projects which we’ve led, and we just have lots of things that we’ve done in AI.
Is it just a coincidence that Siri and SRI are just one letter different, or is that deliberate?
It’s a coincidence. When SRI starts companies we bring in entrepreneurs from the outside almost always, because it would be pretty unusual for an SRI employee to be the right person to be the CEO of the startup company. It does happen, but it’s unusual. Anyway, in this case, we brought in a guy named Dag Kittlaus, and he’s of Norwegian extraction, and he chose the name. Siri is a Norwegian woman’s name, and that became the name of the company. Actually, somewhat to our surprise, Apple retained that name when they launched Siri.
Let’s go through some of the things that your group works on. Could we start with those sorts of technologies? Are there other things in that family of conversational AI that you work on and are you working on the next generation of that?
Yes, indeed, in fact, we’ve been working on the next generation for a while now. I like to think about conversational systems in different categories. Human beings have conversations for all kinds of reasons. We have social conversations, where there’s not particularly any objective but being friendly and socializing. We have task-oriented kinds of conversations—those are the ones that we are focusing on mostly in the next generation—where you’re conversing with someone in order to perform a task or solve some problem, and what’s really going on is it’s a collaboration. You and the other person, or people, are working together to solve a problem.
I’ll use an example from the world of online banking because we have another spinoff called Kasisto that is using the next-generation kind of conversational interaction technology. So, let’s say that you walk into a bank, and you say to the person behind the counter, “I want to deposit $1,000 in checking.” And the person on the other side, the teller says, “From which account?” And you say, “How much do I have in savings?” And the teller says, “You have $1,500, but if you take $1,000 out you’ll stop earning interest.” So, take that little interaction. That’s a conversational interaction. People do this all the time, but it’s actually very sophisticated and requires knowledge.
If you now think of, not a teller, but a software system, a software agent that you’re conversing with—we’ll go through the same little interaction. The person says, “I want to deposit $1,000 in checking.” And the teller says, “From which account?” The software system has to know something about banking. It has to know that a deposit is a money transfer kind of interaction and it requires a from-account and a to-account. And in this case, the to-account has been specified but the from-account hasn’t been specified. In many cases the system would simply ask for that missing information, so that’s the first part of the interaction. So, again, the teller says, “From which account?” And the person says, “How much do I have in savings?” Well, that’s not an answer to the question. In fact, it’s another question being introduced by the person, and it’s actually a balance inquiry question. They want to know how much they have in savings. The reason I go through this twice is that, the first time, almost nobody even notices that that wasn’t an answer to the question; but if you try out a lot of the personal assistant systems that are out there, they tend to crater on that kind of interaction, because they don’t have enough conversational knowledge to be able to handle that kind of thing. And then the interaction goes on, where the teller is providing information, beyond what the person asked, about potentially losing interest, or it might be that they would get a fee or something like that.
That illustrates the point that we expect our conversational partners to be proactive, not just to simply answer our questions, but to actually help us solve the problem. That’s the kind of interaction that we’re building systems to support. It’s very different from the personal assistants that are out there, like Siri, and Cortana, and Google, which are meant to be very general. Siri doesn’t really know anything about banking, which isn’t a criticism; it’s not supposed to know anything about banking. But if you want to get your banking done over your mobile phone, then you’re going to need a system that knows about banking. That’s one example of next-generation conversational interaction.
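The banking exchange described above is a classic frame-based, slot-filling dialogue, and the mechanics can be sketched in a few lines. This is a minimal illustrative sketch, not Kasisto’s actual implementation; the intent names, slot names, and balances are all hypothetical.

```python
# Minimal frame-based slot-filling sketch of the banking dialogue above.
# All intents, slots, and balances are hypothetical examples.

BALANCES = {"checking": 200.0, "savings": 1500.0}

# Each task intent declares the slots it needs before it can execute.
REQUIRED_SLOTS = {"deposit": ["amount", "to_account", "from_account"]}

def missing_slots(intent, slots):
    """Return the required slots the user has not yet supplied."""
    return [s for s in REQUIRED_SLOTS[intent] if s not in slots]

def respond(intent, slots):
    """Prompt for the next missing slot, or complete the task."""
    missing = missing_slots(intent, slots)
    if missing:
        if missing[0] == "from_account":
            return "From which account?"
        return "Please provide the " + missing[0] + "."
    return "Transferring ${:.0f} from {} to {}.".format(
        slots["amount"], slots["from_account"], slots["to_account"])

# Turn 1: "I want to deposit $1,000 in checking" fills two of three
# slots, so the system asks for the missing one.
frame = {"amount": 1000.0, "to_account": "checking"}
print(respond("deposit", frame))  # From which account?

# Turn 2: "How much do I have in savings?" is not an answer to that
# question. A robust dialogue manager detects a new intent (a balance
# inquiry), answers it, and keeps the pending deposit frame alive
# instead of failing -- this is where many assistants "crater."
print("You have ${:.0f}.".format(BALANCES["savings"]))

# Turn 3: the user finally answers; the frame is complete, the task runs.
frame["from_account"] = "savings"
print(respond("deposit", frame))
```

The key design point is that the deposit frame survives the interjected balance inquiry: the system holds the partially filled frame while servicing the sub-dialogue, then resumes asking for the still-missing slot.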
How much are we going to be able to use transfer learning to generalize from that? You built that bot, that highly verticalized bot that knows everything about banking, does anything it learned make it easier now for it to do real estate, and then for it to do retail, and then all the other things? Or is it the case that like every single vertical, all ten thousand of them are going to need to start over from scratch?
It’s a really good question, and I would say, with some confidence, that it’s not about starting over from scratch because some amount of the knowledge will transfer to different domains. Real estate has transactions, if there’s knowledge about transactions some of that knowledge will carry over, some of it won’t.
You said, “the knowledge that it has learned,” and we need to get pretty specific about that. We do build systems that learn, but not all of their knowledge is picked up by learning. Some of it is built in, to begin with. So, there’s the knowledge that has been explicitly represented, some of which will transfer over. And then there’s knowledge that has been learned in other ways, some of that will transfer over as well, but it’s less clear-cut how that will work. But it’s not starting from scratch every time.
So, eventually though you get to something that could pass the Turing test. You could ask it, “So, if I went into the bank and wanted to move $1,000, what would be the first question you would ask me?” And it would say, “Oh, from what account?” 
My experience with every kind of candidate Turing test system, and nobody purports that we’re there by a long shot, but my first question is always, “What’s bigger, a nickel or the sun?” And I haven’t found a single one that can answer the question. How far away is that?
Well, first just for clarity, we are not building these systems in order to pass the Turing test, and in fact, something that you’ll find in most of these systems is that outside of their domain of expertise, say banking, in this case, they don’t know very much of anything. So, again, the systems that we build wouldn’t know things like what’s bigger, the nickel or the sun.
The whole idea of the Turing test is that it’s meant to be some form of evaluation, or contest for seeing whether you have created something that’s truly intelligent. Because, again, this was one of Turing’s approaches to answering this question of what is intelligence. He didn’t really answer that question but he said if you could develop an artifact that could pass this kind of test, then you would have to say that it was intelligent, or had human-like behavior at the very least. So, in answer to your question, I think we’re very far from that because we aren’t so good at getting the knowledge that, I would say, most people have into a computer system yet.
Let’s talk about that for a minute. Why is it so hard and why is it so, I’ll go out on a limb and say, easy for people? Like, a toddler can tell me what’s bigger the nickel or the sun, so why is it so hard? And what makes humans so able to do it?
Well, I don’t know that anyone knows the answer to that question. I certainly don’t. I will say that human beings spend time experiencing the world, and are also taught. Human beings are not born knowing that the sun is bigger than a nickel, however, over time they experience what the sun is and, at some point, they will experience what a nickel is, and they’ll be able to make that comparison. By the way, they also have to learn how to make comparisons. It would be interesting to ask toddlers that question, because the sun doesn’t look very big when you look up in the sky, so that brings in a whole other class of human knowledge which I’ll just broad-brush call book learning. I certainly would not know that the sun is really huge, unless I had learned that in school. Human beings have different ways of learning, only a very small sample of which have been implemented in artificial intelligence learning systems.
There’s a Calvin and Hobbes strip where his dad tells Calvin that it’s a myth that the sun is big, that it’s really only the size of a quarter. And he says, “Look, hold it up in the sky. They’re the same.” So, point taken.
But, let me ask it this way: human DNA is, I don’t know, I’m going to get this a little off, but it’s like 670MB of data. And if you look at how much that’s different than, say, a banana, it’s a small amount that is different. And then you say, well, how much of it is different than, say, a chimp, and it’s a minuscule amount. So, is that minuscule difference in code, just a few MBs, kind of, the secret to intelligence? Is that a proof point that there may be some very basic, simple ways to acquire generalized knowledge that we just haven’t stumbled across yet, and that there may be something that gives us this generalized learner that we can just plug into the Internet and the next day it knows everything?
I don’t make that jump. I think the fact that a relatively small amount of genetic material differentiates us from other species doesn’t indicate that there’s something simple out there, because the way those genes or the genetic material impacts the world is very complex, and leads to all kinds of things that could be very hard for us to understand and try to emulate. I also don’t know that there is a generalist learner anyway. I think, as I said, human beings seem to have different ways of learning things, and that doesn’t say to me that there is one general approach.
Back in the Dartmouth days, when they thought they could knock out a lot of AI problems in a summer, it was in the hope that intelligence followed a few simple laws, like how the laws of physics explain so much. It’s been kind of the consensus move to think that we’re kind of a hack of a thousand specialized things that we do that all come together and make generalized intelligence. And it sounds like you’re more in that camp that it’s just a bunch of hard work and we have to tackle these domains one at a time. Is that fair?
I’m actually kind of in between. I think that there are general methods, there are general representations, but there’s also a lot of specific knowledge that’s required to be competent in some activity. I’m into sort of a hybrid.
But you do think that building an AGI, generalized intelligence, that is as versatile as a human is theoretically possible I assume? 
You mentioned something when we were chatting earlier, that a child explores the world. Do you think embodiment is a pathway to that, that until we give machines a way, in essence, to “experience” the world, that will always limit what we’re able to do? Is that embodiment, that you identified as being important for humans, also important for computers?
Well, I would just differentiate the idea of exploration from embodiment. I think that exploration is a fundamental part of learning. I would say that we, yes indeed, will be missing something unless we design systems that can explore their world. From my point of view, they may or may not be embodied in the usual sense of that word, which means that they can move around and actuate within their environment. If you generalize that to software and say, “Are software agents embodied because they can do things in the world?” then, yeah, I guess I would say embodiment, but it doesn’t have to be physical embodiment.
Earlier when you were talking about digital assistants you said Siri, Cortana and then you said, “Oh, and Google.” And that highlights a really interesting thing: Amazon named theirs, you named yours, Microsoft named theirs, but Google’s is just the Google Assistant. And you’re undoubtedly familiar with the worries that Weizenbaum had with ELIZA. He thought that it was potentially problematic that we name these devices, and we identify with them as if they are human. He said, “When a computer says, ‘I understand,’ it’s just a lie. There’s no ‘I,’ and there’s nothing that understands anything.” How would you respond to Weizenbaum? Do you think that’s an area of concern or do you think he was just off?
I think it’s definitely an area of concern, and it’s really important in designing. I’ll go back to conversational systems, systems like that, which human beings interact with, it’s important that you do as much as possible to help the human being create a correct mental model of what it is that they’re conversing with. So, should it be named? I think it’s kind of convenient to name it, as you were just saying, it kind of makes it easier to talk about, but it immediately raises this danger of people over-reading into it: what it is, what it knows, etcetera. I think it’s very much something to be concerned about.
There’s that case in Japan, where there’s a robot that they were teaching how to navigate a mall, and they very quickly learned that it got bullied by children, who would hit it, curse at it, and all these things. And later, when they asked the children, “Did you think it was upset? Was it acting upset? Was it acting human-like or mechanical?” they overwhelmingly said it was human-like.
And I still have a bit of an aversion to interrupting the Amazon device (I can’t say its name because it’s on my desk right next to me) and telling it, “Stop!” And so I just wonder where it goes because, you’re right, it’s like the Tom Hanks movie Cast Away, when his only friend was a volleyball named “Wilson” that he personified.
I remember there was a case in the ‘40s where they would show students a film of circles and lines moving around, and ask them to construct stories, and they would attribute to these lines and circles personalities, and interactions, and all of that. It is such a tempting thing we do, and you can see it in people’s relationships to their pets that one wonders how that’s all going to sort itself out, or will we look back in forty years and think, “Well, that was just crazy.”
No, I think you’re absolutely right. I think that human beings are extremely good at giving characteristics to objects, systems, etcetera, and I think that will continue. And, as I said, that’s very much a danger in artificial intelligence systems, the danger being that people assume too much knowledge, capability, understanding, given what the system actually is. Part of the job of designing the system is, as I said before, to go as far as we can to give the person the right idea about what it is that they’re dealing with.
Another area that you seem to be focused on, as I was reading about you and your work, is AI and the aging population. Can you talk about what the goal is there and what you are doing, and maybe some successes or failures you’ve had along the way?
Yes, indeed, we are, SRI-wide actually, looking at what we can do to address the problem, the worldwide problem, of higher percentage of aging population, lower percentage of caregivers. We read about this in the headlines all the time. In particular, what we can do to have people experience an optimal life, the best that is possible for them as they age. And there’s lots of things that we’re looking at there. We were just talking about conversational systems. We are looking at the problem of conversational systems that are aimed at the aging population, because interaction tends to be a good thing and sometimes there aren’t caregivers around, or there aren’t enough of them, or they don’t pay attention, so it might actually be interesting to have a conversational system that elderly people can talk to and interact with. We’re also looking at ways to preserve privacy and unobtrusively monitor the health of people, using artificial intelligence techniques. This is indeed a big area for us.
Also, your laboratories work on information security and you mentioned privacy earlier, talk to me, if you would, about the state of the art there. Across all of human history, there’s been this constant battle between the cryptographers and the people who break the codes, and it’s unclear who has the upper hand in that. It’s the same thing with information security. Where are we in that world? And is it easier to use AI to defend against breaches, or to use that technology to do the breach?
Well, I think, the situation is very much as you describe—it’s a constant battle between attackers and defenders. I don’t think it’s any easier to use AI to attack, or defend. It can be used for both. I’m sure it is being used for both. It’s just one of the many sets of techniques that can be used in cybersecurity.
There’s a lot of concern wrapped up in artificial intelligence and its ability to automate a lot of work, and then the effect of that automation on employment. What’s your perspective on how that is going to unfold?
Well, my first perspective is it’s a very complex issue. I think it’s very hard to predict the effect of any technology on jobs in the long-term. As I reflect, I live in the Bay Area, a huge percentage of the jobs that people have in the Bay Area didn’t exist at all a hundred years ago, and I would say a pretty good percentage didn’t exist twenty years ago. I’m certainly not capable of projecting in the long run what the effect of AI and automation will be. You can certainly guess that it will be disruptive, all new technologies are disruptive, and that’s something as a society we need to take aboard and deal with, but how it’s going to work out in the long-term, I really don’t know.
Do you take any comfort in the fact that we’ve had transformative technologies aplenty? Right, we had the assembly line, which is a kind of artificial intelligence, we had the electrification of industry, we had the replacement of animal power with steam power. I mean, each of those was incredibly disruptive. And when you look back across history, each one of them happened incredibly fast and yet unemployment never surged from them. Unemployment in the US has always been between four and ten percent, other than the Depression. And you can’t point to a moment and say, “Oh, when this technology came out unemployment went briefly to fourteen percent,” or anything like that. Do you take comfort in that or do you say, “Well, this technology is materially different”?
I take comfort in it in the sense that I have a lot of faith in the creativity and agility of people. I think what that historical data is reflecting is the ability of individuals and communities to adapt to change and I expect that to continue. Now, artificial intelligence technology is different, but I think that we will learn to adapt and thrive with artificial intelligence in the world.
How is it different though, really? Because technology increases human productivity, that’s kind of what it does. That’s what steam did. That’s what electricity did. That’s what the Industrial Revolution did. And that’s what artificial intelligence does. How is it different?
I think in the sense that you’re talking about, it’s not different. It is meant to augment human capability. It’s augmenting now, to some extent, different kinds of human activity, although arguably that’s been going on for a long time, too. Calculators, printing presses, things like that have taken over human activities that were once thought to be core human things. It’s sort of a difference in degree, not a difference in kind.
One interesting thing about technology, and how the wealth that it produces is disseminated through culture, is that in one sense technology helps everybody (you get a better TV, or better brakes in your car, better deodorant, or whatever), but in two other ways, it doesn’t. If you’re somebody who sells your labor by the hour, and your company can produce a labor-saving device, that benefit doesn’t accrue to you; it generally would accrue to the shareholders of the company in terms of higher earnings. But if you’re self-employed, or you buy your own time as it were, you get to pocket all of the advances that technology gets you, because it makes your productivity higher and you get all of that. So, do you think that technology does inherently make the income-inequality situation worse, or am I missing something in that analysis?
Well, I don’t think that is inherent and I’m not sure that the fault lines will cut that way. We were just talking about the fact that there is disruption and what that tends to mean is that some people will benefit in the short-term, and some of the people will suffer in the short-term. I started by saying this is a complex issue. I think one of the complexities is actually determining what that is. For example, let’s take stuff around us now like Uber and other ride-hailing services. Clearly that has disrupted the world of taxi drivers, but on the other hand has created opportunities for many, many, many other drivers, including taxi drivers. What’s the ultimate cost-benefit there? I don’t know. Who wins and loses? Is it the cab companies, is it the cab drivers? I think it’s hard to say.
I think it was Niels Bohr that said, “Making predictions is hard, especially if they’re about the future.” And he was a Nobel Laureate.
The military, of course, is a multitrillion-dollar industry and it’s always an adopter of technology, and there seems to be a debate about making weapon systems that make autonomous kill decisions. How do you think that’s going to unfold?
Well, again, I think that this is a very difficult problem and is a touchpoint issue. It’s one manifestation of an overall problem of how we trust complex systems of any kind. This is, to me anyway, this goes way beyond artificial intelligence. Any kind of complex system, we don’t really know how it works, what its limitations are, etcetera. How do we put boundaries on its behavior and how do we develop trust in what it’s done? I think that’s one of the critical research problems of the next few decades.
You are somebody who believes we’re going to build a general intelligence, and it seems that when you read the popular media there’s a certain number of people that are afraid of that technology. You know all the names: Elon Musk says it’s like summoning the demon, Professor Hawking says it could be the last thing we do, Bill Gates says he’s in the camp of people who are worried about it and doesn’t understand why other people aren’t; so is Wozniak, and the list goes on and on. Then you have another list of people who just almost roll their eyes at those sorts of things, like Andrew Ng, who says it’s like worrying about overpopulation on Mars; the roboticist Rodney Brooks says that it’s not helpful; Zuckerberg; and so forth. So, two questions: why, among a roomful of incredibly smart people, is there such disagreement over it, and, two, where do you fall in that kind of debate?
Well, I think the reason for disagreements, is that it’s a complex issue and it involves something that you were just talking about with the Niels Bohr quote. You’re making predictions about the future. You’re making predictions about the pace of change, and when certain things will occur, what will happen when they occur, really based on very little information. I’m not at all surprised that there’s dramatic difference of opinion.
But to be clear, it’s not a roomful of people saying, “These are really complex issues,” it’s a roomful of people where half of them are saying, “I know it is a problem,” and half of them saying, “I know it is not a problem.”
I guess that might be a way of strongly stating a belief. They can’t possibly know.
Right, but with everything you’re saying, you’re taking a measured tone: “Well, we don’t know. It could happen this way or that way. It’s very complicated.” They are not taking that same tone.
Well, let me get to your second question, we can come back to the first one. So, my personal view, and here comes this measured response that you just accused me of is, yes, I’m worried about it, but, honestly, I’m worried about other things more. I think that this is something to be concerned about. It’s not an irrational concern, but there are other concerns that I think are more pressing. For example, I’m much more worried about people using technology for untoward purposes than I am about superintelligence taking over the world.
That is an inherent problem with technology’s ability to multiply human effort, if human effort is malicious. Is that an insoluble problem? If you can make an AGI you can, almost by definition, make an evil AGI, correct?
Yes. Just to go back a little bit, you asked me whether I thought AGI was theoretically possible, whether there are any theoretical barriers. I don’t think there are theoretical barriers. We can extrapolate and say, yes, someday that kind of thing will be created. When it is, you’re right, I think any technology, any aspect of human behavior can be done for good or evil, from the point of view of some people.
I have to say, another thing I think about when we talk about superintelligence, since I was relating it to complex systems in general: I think of big systems that exist today that we live with, like high-speed automated trading of securities, or weather forecasting. These are complex systems that definitely influence our behavior. I’m going to go out on a limb and say nobody knows what’s really going on with them. And we’ve learned to adapt to them.
It’s interesting, I think part of the difference of opinion boils down to a few very specific technical questions that we don’t know the answer to. One of them is, it seems like some people are kind of, I don’t want to say down on humans, but they don’t think human abilities, like creativity and all of that, are all that difficult, and machines are going to be able to master them. There’s a group of people who would say the amount of time before one of these systems is able to self-improve is short, not long. I think that some would say intelligence isn’t really that hard, that there are probably just a few breakthroughs. You stack enough of those together and you say, “Okay, it’s really soon.” But if you take the opposite side on those questions (creativity is very hard, intelligence is very hard), then you’re, kind of, in the other camp. I don’t doubt the sincerity of any of the parties involved.
On your comment about the theoretical possibility of a general intelligence, just to explore that for a moment, without any regard for when it will happen—we understand how a computer could, for instance, measure temperature, but we don’t really understand how a computer, or I don’t, could feel pain. For a machine to go from measuring the world to experiencing the world, we don’t really know that, and so is that required to make a general intelligence, to be able to, in essence, experience qualia, to be conscious, or not. 
Well, I think that if we’re truly talking about general intelligence in the sense that I think most people mean it, which is human-like intelligence, then one thing that people do is experience the world and react to it, and it becomes part of the way that we think and reason about the world. So, yes, I think, if we want computers to have that kind of capability, then we have to figure out a way for them to experience it.
The question then becomes—I think this is in the realm of the very difficult—when, to use your example, a human being or any animal experiences pain, there is some physical and then electrochemical reaction going on that is somehow interpreted in the brain. I don’t know how all of that works, but I believe that it’s theoretically possible to figure out how that works and to create artifacts that exhibit that behavior.
Because we can’t really confine it to how humans feel pain, right? But, I guess I’m still struggling over that. What would that even look like, or is your point, “I don’t know what it looks like, but that would be what’s required to do it.” 
I definitely don’t know what it looks like on the inside, but you can also look at the question of, “What is the value of pain, or how does pain influence behavior?” For a lot of things, pain is a warning that we should avoid something, touching a hot object, moving an injured limb, etcetera. There’s a question of whether we can get computer systems to be able to have that kind of warning sensation which, again, isn’t exactly the same thing as creating a system that feels pain in any way like an animal does, but it could get the same value out of the experience.
Your lab does work in robotics as well as artificial intelligence, is that correct?
Talk a little bit about that work and how those two things come together, artificial intelligence and robots.
Well, I think that, traditionally, artificial intelligence and robotics have been the same area of exploration. One of the features of any maturing discipline, which I think AI is, is that various specializations and specialty groups start forming naturally as the field expands and there’s more and more to know.
The fact that you’re even asking the question shows that there has become a specialization in robotics that is seen as separate from, some people may say, part of, some people may say, completely different from, artificial intelligence. As a matter of fact, although my labs work on aspects of robotics, other labs within SRI, that are not part of the information computing sciences division, also work on robotics.
The thing about robotics is that you’re looking at things like motion, manipulation, actuation, doing things in the world, and that is a very interesting set of problems that has created a discipline around it. Then on top of that, or surrounding it, is the kind of AI reasoning, perception, etcetera, that enables those things to actually work. To me, they are different aspects of the same problem of having, to go back to something you said before, some embodiment of intelligence that can interact with the real world.
The roboticist Rodney Brooks, who I mentioned earlier, says something to the effect that he thinks there’s something about biology, something very profoundly basic that we don’t understand, which he calls “the juice.” And to be clear, he’s 100% convinced that “the juice” is biology, that there’s nothing mystical about it, that it’s just something we don’t understand. And he says it’s the difference between, you put a robot in a box and it tries to get out; it just kind of runs through a protocol and tries to climb. But you put an animal in a box and it frantically wants out of that box: it’s scratching, it’s getting agitated and worked up. And that difference between those two systems he calls “the juice.” Do you think there is something like that, that we don’t yet know about biology, that would be beneficial to put in robots?
I think that there’s a whole lot that we don’t know about biology, and I can assure you there’s a huge amount that I don’t know about biology. Calling it “the juice,” I don’t know what we learn from that. Certainly, the fact that animals have motivations and built-in desires that make them desperately want to get out of the box, is part of this whole issue of what we were talking about before of how and whether to introduce that into artifacts, into artificial systems. Is it a good thing to have in robots? I would say, yes. This gets back to the discussion about pain, because presumably the animal is acting that way out of a desire for self-preservation, that something that it has inherited or learned tells it that being trapped in a box is not good for its long-term survival prospects. Yes, it would be good for robots to be able to protect themselves.
I’ll ask you another either/or question you may not want to answer. The human body uses one hundred watts, and we use twenty of that to power our brain and eighty of it to power our body. The biggest supercomputers in the world use twenty million watts and they’re not able to do what the brain does. Which of those is a harder thing to replicate: building a computer that operates with the capabilities of the human brain on just twenty watts, or building a robot that mimics the mobility of a human on only eighty watts?
Well, as you suggested when you brought this up, I can’t take that either/or. I think that they’re both really hard. The way you phrased that makes me think of somebody who came to give a talk at SRI a number of years ago, and was somebody who was interested in robotics. He said that, as a student, he had learned about the famous AI programs that had become successful in playing chess. And as he learned more and more about it, he realized that what was really hard was a human being picking up the chess piece and moving it around, not the thinking that was involved in chess. I think he was absolutely right about that because chess is a game that is abstract and has certain rules, so even though it’s very complex, it’s not the same thing as the complexities of actual manipulation of objects. But if you ask the question you did, which is comparing it not to chess, but to the full range of human activity then I would just have to say they’re both hard.
There isn’t a kind of Moore’s law of robotics, is there—the physical motors and materials and power, and all of that? Is that improving at a rate commensurate with our advances in AI, or is that taking longer and slower?
Well, I think that you have to look at that in more detail. There has been tremendous progress in the ability to build systems that can manipulate objects, using all kinds of interesting techniques. Cost is going down. Accuracy and flexibility are going up. In fact, that’s one of the specialty areas of the robotics part of SRI. That’s absolutely happening. There’s also been tremendous progress on aspects of artificial intelligence. But other parts of artificial intelligence are coming along much more slowly, and other parts of robotics are coming along much more slowly.
You’re about the sixtieth guest on the show, and I think that all of them, certainly all of them that I have asked, consume science fiction, sometimes quite a bit of it. Are you a science fiction buff? 
I’m certainly not a science fiction buff. I have read science fiction. I think I used to read a lot more science fiction than I do now. I think science fiction is great. I think it can be very inspiring.
Is there any vision of the future in a movie, TV show, or book, or anything, that you look at and say, “Yes, that could happen, that’s how the world might unfold”? You can say Her, or Westworld, or Ex Machina, or Star Trek, or any of those.
Nope. When I see things like that I think they’re very entertaining, they’re very creative, but they’re works of fiction that follow certain rules or best practices about how to write fiction. There’s always some conflict, there’s resolution; things like that are completely different from what happens in the real world.
All right, well, it has been a fantastically interesting hour. I think we’ve covered a whole lot of ground and I want to thank you for being on the show, Bill. 
It’s been a real pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 25: A Conversation with Matt Grob

In this episode, Byron and Matt talk about thinking, the Turing test, creativity, Google Translate, job displacement, and education.
[podcast_player name=”Episode 25: A Conversation with Matt Grob” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-12-04-(01-01-40)-matt-grob.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/12/voices-headshot-card_preview-2.jpeg”]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Matt Grob. He is the Executive Vice President of Technology at Qualcomm Technologies, Inc. Grob joined Qualcomm back in 1991 as an engineer. He also served as Qualcomm’s Chief Technology Officer from 2011 to 2017. He holds a Master of Science in Electrical Engineering from Stanford, and a Bachelor of Science in Electrical Engineering from Bradley University. He holds more than seventy patents. Welcome to the show, Matt.
Matt Grob: Thanks, Byron, it’s great to be here.
So what does artificial intelligence kind of mean to you? What is it, kind of, at a high level? 
Well, it’s the capability that we give to machines to sense and think and act, but it’s more than just writing a program that can go one way or another based on some decision process. Really, artificial intelligence is what we think of when a machine can improve its performance without being reprogrammed, based on gaining more experience or being able to access more data. If it can get better, if it can improve its performance, then we think of that as machine learning or artificial intelligence.
It learns from its environment, so every instantiation of it heads off on its own path, off to live its own AI life, is that the basic idea?
Yeah, for a long time we’ve been able to program computers to do what we want. Let’s say, you make a machine that drives your car or does cruise control, and then we observe it, and we go back in and we improve the program and make it a little better. That’s not necessarily what we’re talking about here. We’re talking about the capability of a machine to improve its performance in some measurable way without being reprogrammed, necessarily. Rather it trains or learns from being able to access more data, more experience, or maybe talking to other machines that have learned more things, and therefore improves its ability to reason, improves its ability to make decisions or drive errors down or things like that. It’s those aspects that separate machine learning, and these new fields that everyone is very excited about, from just traditional programming.
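That distinction — improving in a measurable way without being reprogrammed — can be sketched in a few lines. This is a minimal illustration with made-up numbers, not anything from Qualcomm’s systems: the code below never changes, but its estimate of a quantity gets measurably better simply because it has seen more data.

```python
import random

# Illustrative only: an estimator that improves with experience, not reprogramming.
random.seed(0)
true_value = 4.2   # the quantity the machine is trying to learn
estimate, n = 0.0, 0

def update(observation):
    """Fold one new noisy observation into the running estimate."""
    global estimate, n
    n += 1
    estimate += (observation - estimate) / n  # incremental mean update

errors = []
for step in range(1000):
    update(true_value + random.gauss(0, 1.0))  # a noisy sensor reading
    if step in (9, 99, 999):
        errors.append(abs(estimate - true_value))

# The same code, run on more data, produces a measurably better estimate.
print(errors)
```

Nothing is edited between the 10th and the 1,000th observation; only the machine’s experience grows, which is the sense of “learning” described above.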
When you first started all of that, you said the computer “thinks.” Were you using that word casually or does the computer actually think?
Well, that’s a subject of a lot of debate. I need to point out, my experience, my background, is actually in signal processing and communications theory and modem design, and a number of those aspects relate to machine learning and AI, but, I don’t actually consider myself a deep expert in those fields. But there’s a lot of discussion. I know a number of the really deep experts, and there is a lot of discussion on what “think” actually means, and whether a machine is simply performing a cold computation, or whether it actually possesses true imagination or true creativity, those sorts of elements.
Now in many cases, the kind of machine that might recognize a cat from a dog—and it might be performing a certain algorithm, a neural network that’s implemented with processing elements and storage taps and so forth—is not really thinking like a living thing would do. But nonetheless it’s considering inputs, it’s making decisions, it’s using previous history and previous training. So, in many ways, it is like a thinking process, but it may not have the full, true creativity or emotional response that a living brain might have.
You know it’s really interesting because it’s not just a linguistic question at its core because, either the computer is thinking, or it’s simulating something that thinks. And I think the reason those are different is because they speak to what are the limits, ultimately, of what we can build. 
Alan Turing way back in his essay was talking about, “Can a machine think?” He asked the question sixty-five years ago, and he said that the machine may do it a different way but you still have to call it “thinking.” So, with the caveat that you’re not at the vanguard of this technology, do you personally call the ball on that one way or the other, in terms of machine thought?
Yeah, I believe, and I think the prevailing view is, though not everyone agrees, that many of the machines that we have today, the agents that run in our phones, and in the cloud, and can recognize language and conditions are not really, yet, akin to a living brain. They’re very, very useful. They are getting more and more capable. They’re able to go faster, and move more data, and all those things, and many metrics are improving, but they still fall short.
And there’s an open question as to just how far you can take that type of architecture. How close can you get? It may get to the point where, in some constrained ways, it could pass a Turing Test, and if you only had a limited input and output you couldn’t tell the difference between the machine and a person on the other end of the line there, but we’re still a long way away. There are some pretty respected folks who believe that you won’t be able to get the creativity and imagination and those things by simply assembling large numbers of AND gates and processing elements; that you really need to go to a more fundamental description that involves quantum gravity and other effects, and most of the machines we have today don’t do that. So, while we have a rich roadmap ahead of us, with a lot of incredible applications, it’s still going to be a while before we really create a real brain.
Wow, so there’s a lot going on in there. One thing I just heard was, and correct me if I’m saying this wrong, that you don’t believe we can necessarily build an artificial general intelligence using, like, a Von Neumann architecture, like a desktop computer. And that what we’re building on that trajectory can get better and better and better, but it won’t ever have that spark, and that what we’re going to need is the next generation of quantum computer, or just a fundamentally different architecture, and maybe those can emulate the human brain’s functionality, not necessarily how it does it but what it can do. Is that fair? Is that what you’re saying?
Yeah, that is fair, and I think there are some folks who believe that is the case. Now, it’s not universally accepted. I’m kind of citing some viewpoints from folks like physicist Roger Penrose, and there’s a group around him—Penrose Institute, now being formed—that are exploring these things and they will make some very interesting points about the model that you use. If you take a brain and you try to model a neuron, you can do so, in an efficient way with a couple lines of mathematics, and you can replicate that in silicon with gates and processors, and you can put hundreds of thousands, or millions, or billions of them together and, sure, you can create a function that learns, and can recognize images, and control motors, and do things and it’s good. But whether or not it can actually have true creativity, many will argue that a model has to include effects of quantum gravity, and without that we won’t really have these “real brains.”
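The “couple lines of mathematics” for a model neuron really are just that: a weighted sum of inputs passed through a nonlinearity. A hedged sketch, with entirely illustrative weights (this is the generic textbook neuron, not any particular system discussed here):

```python
import math

def neuron(inputs, weights, bias):
    """A standard artificial neuron: weighted sum, then a sigmoid squashing function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # output between 0 and 1

# Two inputs, made-up weights: z = 1.0*0.8 + 0.5*(-0.4) + 0.1 = 0.7
out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(round(out, 3))
```

Replicating billions of these in silicon gives you networks that learn and recognize images; the open question in the passage is whether any number of them adds up to genuine creativity.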
You read in the press about both the fears and the possible benefits of these kinds of machines, that may not happen until we reach the point where we’re really going beyond, as you said, Von Neumann, or even other structures just based on gates. Until we get beyond that, those fears or those positive effects, either one, may not occur.
Let’s talk about Penrose for a minute. His basic thesis, and you probably know this better than I do, is that Gödel’s incompleteness theorem says that the system we’re building can’t actually duplicate what a human brain can do.
Or said another way, he says there are certain mathematical problems that are not able to be solved with an algorithm. They can’t be solved algorithmically, but that a human can solve them. And he uses that to say, therefore, a human brain is not a computational device that just runs algorithms, that it’s doing something more; and he, of course, thinks quantum tunneling and all of that. So, do you think that’s what’s going on in the brain, do you think the brain is fundamentally non-computational?
Well, again, I have to be a little reserved with my answer to that because it’s not an area that I feel I have a great deep background in. I’ve met Roger, and other folks around him, and some of the folks on the other side of this debate, too, and we’ve had a lot of discussions. We’ve worked on computational neuroscience at Qualcomm for ten years; not thirty years, but ten years, for sure. We started making artificial brains that were based on the spiking neuron technique, which is a very biologically inspired technique. And again, they are processing machines and they can do many things, but they can’t quite do what a real brain can do.
An example that was given to me was the proof of Fermat’s Last Theorem. If you’re familiar with Fermat’s Last Theorem, it was written down I think maybe two hundred years ago or more, and the creator, Fermat, a mathematician, wrote in the margin of his notebook that he had a proof for it, but then he never got to prove it. I think he lost his life. And it wasn’t until about twenty-some years ago that a researcher at Berkeley finally proved it. It’s claimed that the insight and creativity required to do that work would not be possible by simply assembling a sufficient number of AND gates and training them on previous geometry and math constructs, and then giving it this one and having the proof come out. It’s just not possible. There had to be some extra magic there, which Roger, and others, would argue requires quantum effects. And if you believe that—and I obviously find it very reasonable and I respect these folks, but I don’t claim that my own background informs me enough on that one—it seems very reasonable; it mirrors the experience we had here for a decade when we were building these kinds of machines.
I think we’ve got a way to go before some of these sci-fi type scenarios play out. Not that they won’t happen, but it’s not going to be right around the corner. But what is right around the corner is a lot of greatly improved capabilities as these techniques basically fundamentally replace traditional signal processing for many fields. We’re using it for image and sound, of course, but now we’re starting to use it in cameras, in modems and controllers, in complex management of complex systems, all kinds of functions. It’s really exciting what’s going on, but we still have a way to go before we get, you know, the ultimate.
Back to the theorem you just referenced, and I could be wrong about this, but I recall that he actually claimed to have a surprisingly simple proof of the theorem, which now some people say he was just wrong about, that there isn’t a simple proof for it. But because everybody believed there was a proof for it, we eventually solved it.
Do you know the story about a guy named Dantzig back in the ’30s? He was a graduate student in statistics, and his professor had written two famous unsolved problems on the chalkboard and said, “These are famous unsolved problems.” Well, Dantzig comes in late to class, and he sees them and just assumes they’re the homework. He writes them down, and takes them home, and, you can guess, he solves them both. He remarked later that they seemed a little harder than normal. So, he turned them in, and it was about two weeks before the professor looked at them and realized what they were. And it’s just fascinating to think that that guy has the same brain I have, I mean his is far better and all that, but when you think about all those capabilities that are somewhere probably in there.
Those are wonderful stories. I love them. There’s one about Gauss when he was six years old, or eight years old, and the teacher punished the class, told everyone to add up the numbers from one to one hundred. And he did it in an instant because he realized that 100 + 0 is 100, and 99 + 1 is 100, and 98 + 2 is 100, and you can multiply those by 50. The question is, “Is a machine based on neural nets, and coefficients, and logistic regression, and SVM and those techniques, capable of that kind of insight?” Likely it is not. And there is some special magic required for that to actually happen.
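Gauss’s trick as described above, spelled out: pairing 0 through 100 as (0, 100), (1, 99), …, (49, 51) gives fifty pairs that each sum to 100, with the middle value 50 left unpaired, for a total of 5,050.

```python
# Fifty pairs that each sum to 100, plus the unpaired middle value 50.
pairs = [(k, 100 - k) for k in range(50)]
total = sum(a + b for a, b in pairs) + 50

print(total)             # 5050
print(sum(range(101)))   # 5050, the brute-force check
```

The insight, of course, is that the pairing replaces a hundred additions with one multiplication — the kind of leap the question asks whether a machine can make.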
I will only ask you one more question on that topic and then let’s dial it back in more to the immediate future. You said, “special magic.” And again, I have to ask you, like I asked you about “think,” are you using “magic” colloquially, or is it just physics that we don’t understand yet?
I would argue it’s probably the latter. With the term “magic,” there’s the famous Arthur C. Clarke quote that “any sufficiently advanced technology is indistinguishable from magic.” I think, in this case, the structure of a real brain and how it actually works, we might think of it as magic until we understand more than we do now. But it seems like you have to go to a deeper level, and a simple function assembled from logic gates is not enough.
In the more present day, how would you describe where we are with the science? Because it seems we’re at a place where you’re still pleasantly surprised when something works. It’s like, “Wow, it’s kind of cool, that worked.” And as much as there are these milestone events like AlphaGo, or Watson, or the one that beat the poker players recently, how quickly do you think advances really are coming? Or is it the hope for those advances that’s really kind of what’s revved up?
I think the advances are coming very rapidly, because there’s an exponential nature. You’ve got machines that have processing power which is increasing in an exponential manner, and whether it continues to do so is another question, but right now it is. You’ve got memory, which is increasing in an exponential manner. And then you’ve also got scale, which is the number of these devices that exist and your ability to connect to them. And I’d really like to get into that a little bit, too, the ability of a user to tap into a huge amount of resource. So, you’ve got all of those combined with algorithmic improvements, and, especially right now, there’s such a tremendous interest in the industry to work on these things, so lots of very talented graduates are pouring into the field. The product of all those effects is causing very, very rapid improvement. Even though in some cases the fundamental algorithm might be based on an idea from the ’70s or ’80s, we’re able to refine that algorithm, and we’re able to couple it with far more processing power at a much lower cost than ever before. And as a result, we’re getting incredible capabilities.
I was fortunate enough to have a dinner with the head of the Google Translate project recently, and he told me—an incredibly nice guy—that that program is now one of the largest AI projects in the world, and has a billion users. So, a billion users can walk around with their device and basically speak any language and listen to any language or read it, and that’s a tremendous accomplishment. That’s really a powerful thing, and a very good thing. And so, yeah, those things are happening right now. We’re in an era of rapid, rapid improvement in those capabilities.
What do you think is going to be the next watershed event? We’re going to have these incremental advances, and there’s going to be more self-driving cars and all of these things. But these moments that capture the popular imagination, like when the best Go player in the world loses, what do you think will be another one of those for the future?
When you talk about AlphaGo and Watson playing Jeopardy and those things, those are significant events, but they’re machines that someone wheels in, and they are big machines, and they hook them up and they run, but you don’t really have them available in the mobile environment. We’re on the verge now of having that kind of computing power, not just available to one person doing a game show, or the Go champion in a special setting, but available to everyone at a reasonable cost, wherever they are, at any time. Also, the learning experience of one person can benefit everyone else. And so, that, I think, is the next step. It’s when you can use that capability, which is already growing as I described, and make it available in a mobile environment, ubiquitously, at reasonable cost, then you’re going to have incredible things.
Autonomous vehicles are an example, because that’s a mobile thing. A vehicle needs a lot of processing power, and it needs processing power local to it, on the device, but it also needs to access tremendous capability in the network, and it needs to do so at high reliability and at low latency, with some interesting details there—so vehicles are a very good example. Vehicles are also something that we need to improve dramatically, from a safety standpoint, versus where we are today. They’re critical to the economies of cities and nations, so there’s a lot of scale. So, yeah, that’s a good crucible for this.
But there are many others. Medical devices, huge applications there. And again, you want, in many cases, a very powerful capability in the cloud or in the network, but also at the device, there are many cases where you’d want to be able to do some processing right there, that can make the device more powerful or more economical, and that’s a mobile use case. So, I think there will be applications there; there can be applications in education, entertainment, certainly games, management of resources like power and electricity and heating and cooling and all that. It’s really a wide swath but the combination of connectivity with this capability together is really going to do it.
Let’s talk about the immediate future. As you know, with regard to these technologies, there’s kind of three different narratives about their effect on employment. One is that they’re going to take every single job, everybody from a poet on down; that doesn’t sound like something that would resonate with you because of the conversation we just had. Another is that this technology is going to replace a lot of low-skilled workers, that there are going to be fewer, quote, “low-skilled jobs,” whatever those are, and that you’re going to have this permanent underclass of unemployed people competing essentially with machines for work. And then there’s another narrative that says, “No, what’s going to happen is the same thing that happened with electricity, with motors, with everything else. People take that technology, they use it to increase their own productivity, and they go on to raise their income that way. And you’re not going to have essentially any disruption, just like you didn’t have any disruption when we went from animal power to machine power.” Which of those narratives do you identify with, or is there a different way you would say it?
Okay, I’m glad you asked this because this is a hugely important question and I do want to make some comments. I’ve had the benefit of participating in the World Economic Forum, and I’ve talked to Brynjolfsson and McAfee, the authors of The Second Machine Age, and the whole theme of the forum a year ago was Klaus Schwab’s book The Fourth Industrial Revolution and the rise of cyber-physical systems and what impact they will have. I think we know some things from history and the question is, is the future going to repeat that or not? We know that there’s the so-called Luddite fallacy, which says that, “When these machines come they’re going to displace all the jobs.” And we know that a thousand years ago, ninety-nine percent of the population was involved in food production, and today, I don’t know, don’t quote me on this, but it’s like 0.5 percent or something like that. Because we had massive productivity gains, we didn’t need to have that many people working on food production, and they found the ability to do other things. It’s definitely true that increases in unemployment did not keep pace with increases in productivity. Productivity went up orders of magnitude; unemployment did not go up, quote, “on the orders of magnitude,” and that’s been the history for a thousand years. And even more recently, if you look at the government statistics on productivity, they are not increasing; actually, some people are alarmed that they’re not increasing faster than they are. They don’t really reflect a spike that would suggest some of these negative scenarios.
Now, having said that, it is true that we are at a place now where machines, even with the processing that they use today, based on neural networks and SVMs and things like that, are able to replace a lot of existing manual or repetitive tasks. I think society as a whole is going to benefit tremendously, but there are going to be some groups that we’ll have to take some care about. There have been discussions of universal basic income, which I think is a good idea. Bill Gates recently had an article about some tax ideas for machines. It’s a good idea, of course, but very hard to implement, because you have to define what a robot is. You know, something like a car or a wheel: a wheel is a labor-saving device, so do you tax it? I don’t know.
So, to get back to your question, I think it is true that there will be some groups that are displaced in the short term, but there’s no horizon on which many of the things that people do, like caring for each other and teaching each other, go away; those kinds of jobs are in ever-increasing demand. So, there’ll be a migration, not necessarily a wholesale replacement. And we do have to take care with the transient effect of that, and maybe a universal type of wage might be part of an answer. I don’t claim to have the answer completely. I mean, it’s obviously a really hard problem that the world is grappling with. But I do feel, fundamentally, that the overall effect of all of this is going to be net positive. We’re going to make more efficient use of our resources, we’re going to provide services and capabilities that have never been possible before that everyone can have, and it’s going to be a net positive.
That’s an optimistic view, but it’s a very measured optimistic view. Let me play devil’s advocate from that side to say, why do you think there’ll be any disruption? What does that case look like? 
Because, if you think about it, in 1995 if somebody said, “Hey, you know what, if we take a bunch of computers and we connect them all via TCP/IP, and we build a protocol, maybe HTTP, to communicate, and maybe a markup language like HTML, you know what’s going to happen? Two billion people will connect and it’s going to create trillions and trillions and trillions of dollars of wealth. It’s going to create Google and eBay and Amazon and Baidu. It’s going to transform every aspect of society, and create an enormous number of jobs. And Etsy will come along, and people will be able to work from home. And all these thousands of things that flow out of it.” You never would have made those connections, right? You never would have said, “Oh, that logically flows from snapping a bunch of computers together.”
So, if we really are in a technological boom that’s going to dwarf that, won’t the problem really be an immense shortage of people? There are going to be all of these opportunities, and relatively few people to fill them. So, why the measured optimism, from somebody who just waxed so poetic about what a big deal these technologies are?
Okay, that’s a great question. I mean, that was super. You asked whether there will be any disruption at all. I completely believe that we really have not a job shortage, but a skills shortage; that is the issue. And so, the burden goes then to the educational system, and to the fabric of society, to be able to place a value on good education and stick to it long enough that you can come up to speed in the modern sense, and be able to contribute beyond what the machines do. That is going to be a shortage, and anyone who has those skills is going to be in a good situation. But you can have disruption even in that environment.
You can have an environment where you have a skills shortage not a job shortage, and there’s disruption because the skills shortage gets worse and there’s a lot of individuals whose previous skills are no longer useful and they need to change. And that’s the tough thing. How do you retrain, in a transient case, when these advancements come very quickly? How do you manage that? What is fair? How does society distribute its wealth? I mean the mechanisms are going to change.
Right now, it’s starting to become true that simply the manner in which you consume stuff, if that data is available, has value in itself, and maybe people should be compensated for it. Today, they mostly are not; they give it up when they sign in to these major cloud players’ services, and so those kinds of things will have to change. I’ll give you an anecdote.
Recently I went to Korea, and I met some startups there, and one of the things that happens, especially in non-curated app stores, is people develop games. They put in their effort and time to develop a game, they put it on the store, people download it for ninety-nine cents or whatever, and they get some money. But there are some bad actors that will see a new game, quickly download it, disassemble it back to the source, change a few little things, and republish that same game, so it looks and feels just like the original but the ninety-nine cents goes to a different place. They basically steal the work. So, this is a bad thing, and in response, there are startups now that make tools that make software difficult to disassemble. There are multiple startups that do what I just described, and I’m sitting here listening to them and realizing, “Wow, that job—in fact, that industry—didn’t even exist.” That is a new creation of the fact that there are un-curated app stores and mobile devices and games, and it’s an example of the kind of new thing that’s created, that didn’t exist before.
I believe that that process is alive and well, and we’re going to continue to see more of it, and there’s going to continue to be a skills shortage more than a job shortage, and so that’s why I have a fundamentally positive view. But it is going to be challenging to meet the demands of that skills shortage. Society has to place the right value on that type of education and we all have to work together to make that happen.
You have two different threads going on there. One is this idea that we have a skills shortage, and we need to rethink education. And another one that you touched on is the way that money flows, and can people be compensated for their data, and so forth. I’d like to talk about the first one, and again, I’d like to challenge the measured amount of your optimism. 
I’ll start off by saying I agree with you, that, at the beginning of the Industrial Revolution there was a vigorous debate in the United States about the value of post-literacy education. Like, think about that: is post-literacy education worth anything? Because in an agrarian society, maybe it wasn’t for most people. Once you learned to read, that was what you needed. And then people said, “No, no, the jobs of the future are going to need more education. We should invest in that now.” And the United States became the first country in the world to guarantee that every single person could graduate from high school. And you can make a really good case, that I completely believe, that that was a major source of our economic ascendancy in the twentieth century. And, therefore, you can extend the argument by saying, “Maybe we need grades thirteen and fourteen now, and they’re vocational, and we need to do that again.” I’m with you entirely, but we don’t have that right now. And so, what’s going to happen?
Here is where I would question the measured amount of your optimism, which is this: people often say to me, “Look, this technology creates all these new jobs at the high end, like graphic designers and geneticists and programmers, and it destroys jobs at the low end. Are those people down at the low end going to become programmers?” And, of course, the answer is not, “Yes.” The answer, and here’s my question, is that all that matters is: can everybody do a job just a little harder than the one they’re currently doing? If the answer to that is, “Yes,” then what happens is the college biology professor becomes a geneticist, the high school biology teacher becomes a college teacher, the substitute teacher gets backfilled into the biology job, and all the way down, so that everybody gets just a little step up. Everybody just has to push themselves a little more, and the whole system phase-shifts up, and everybody gets a raise and everybody gets a promotion. That’s really what happened in the Industrial Revolution, so why is it that you don’t think that that is going to be as smooth as I have just painted it?
Well, I think what you described does happen and is happening. If you look at—and again, I’m speaking from my own experience here as an engineer in a high-tech company—any engineer in a high-tech company, and you look at their output right now, and you compare it to a year or two before, they’ve all done what you describe, which is to do a little bit more, and to do something that’s a little bit harder. And we’ve all been able to do that because the fundamental processes involved improve. The tools, the fabric available to you to design things, the shared experience of the teams around you that you tap into—all those things improved. So, everyone is actually doing a job that’s a little bit harder than they did before, at least if you’re a designer.
You also cited some other examples, a teacher at one level going to the next level. That’s a kind of a queue, and there’s only so many spots at so many levels based on the demographics of the population. So not everyone can move in that direction, but they can all—at a given grade level—endeavor to teach more. Like, our kids, the math they do now is unbelievable. They are as much as a year or so ahead of when I was in high school, and I thought that we were doing pretty good stuff then, but now it’s even more.
I am optimistic that those things are going to happen, but you do have a labor force of certain types of jobs, where people are maybe doing them for ten, twenty, thirty years, and all of a sudden that is displaced. It’s hard to ask someone who’s done a repetitive task for much of their career to suddenly do something more sophisticated and different. That is the problem that we as a society have to address. We have to still value those individuals, and find a way—like a universal wage or something like that—so they can still have a good experience. Because if you don’t, then you really could have a dangerous situation. So, again, I feel overall positive, but I think there’s some pockets that are going to require some difficult thinking, and we’ve got to grapple with it.
Alright. I agree with your overall premise, but I will point out that that’s exactly what everybody said about the farmers—that you can’t take these people that have farmed for twenty or thirty years, and all of a sudden expect them to be able to work in a factory. The rhythm of the day is different, they have a supervisor, there’s bells that ring, they have to do different jobs, all of this stuff; and yet, that’s exactly what happened. 
I think there’s a tendency to sell human ability short. That being said, technological advance, interestingly, distributes its financial gains in a very unequal measure, and there is something in there that I do agree we need to think about.
Let’s talk about Qualcomm. You are the EVP of technology. You were the CTO. You’ve got seventy patents, like I said in your intro. What is Qualcomm’s role in this world? How are you working to build the better tomorrow? 
Okay, great. We provide connections between people, and increasingly between their worlds and between devices. Let me be specific about what I mean by that. When the company started—by the way, I’ve been at Qualcomm since ‘91, company started in ‘85-‘86 timeframe—one of the first things we did early on was we improved the performance and capacity of cellular networks by a huge amount. And that allowed operators like Verizon, AT&T, and Sprint—although they had different names back then—to offer, initially, voice services to large numbers of people at reasonably low cost. And the devices, thanks to the work of Qualcomm and others, got smaller, had longer battery life, and so forth. As time went on, it was originally connecting people with voice and text, and then it became faster and more capable so you could do pictures and videos, and then you could connect with social networks and web pages and streaming, and you could share large amounts of information.
We’re in an era now where I don’t just send a text message and say, “Oh, I’m skiing down this slope, isn’t this cool.” I can have a 360°, real-time, high-quality, low-latency sharing of my entire experience with another user, or users, somewhere else, and they can be there with me. And there’s all kinds of interesting consumer, industrial, medical, and commercial applications for that.
We’re working on that and we’re a leading developer of the connectivity technology, and also what you do with it on the endpoints—the processors, the camera systems, the user interfaces, the security frameworks that go with it; and now, increasingly, the machine learning and AI capabilities. We’re applying it, of course, to smartphones, but also to automobiles, medical devices, robotics, to industrial cases, and so on.
We’re very excited about the pending arrival of what we call 5G, which is the next generation of cellular technology, and it’s going to show up in the 2019-2020 timeframe. It’s going to be in the field maybe ten, fifteen years just like the previous generations were, and it’s going to provide, again, another big step in the performance of your radio link. And when I say “performance,” I mean the speed, of course, but also the latency will be very low—in many modes it can be millisecond or less. That will allow you to do functions that used to be on one side of the link, you can do on the other side. You can have very reliable systems.
There are a thousand companies participating in the standards process for this. It used to be just primarily the telecom industry, in the past with 3G and 4G—and of course, the telecom industry is very much still involved—but there are so many other businesses that will be enabled with 5G. So, we’re super excited about the impact it’s going to have on many, many businesses. Yeah, that’s what we’re up to these days.
Go with that a little more; paint us a picture. I don’t know if you remember those commercials back in the ‘90s saying, “Can you imagine sending a fax from the beach? You will!” and other “Can you imagine” scenarios. They kind of all came true, other than that there wasn’t as much faxing as I think they expected. But what do you think? Tell me some of the things that you think we’re going to be able to do in a reasonable amount of time, in five years, let’s say.
I’m so fascinated that you used that example, because that one I know very well. Those AT&T commercials, you can still watch them on YouTube, and it’s fun to do so. They did say people will be able to send a fax from the beach, and that particular ad motivated the operators to want to send fax over cellular networks. And we worked on that—I worked on that myself—and we used that as a way to build the fundamental Internet transport, and the fax was kind of the motivation for it. But later, we used the Internet transport for internet access and it became a much, much bigger thing. The next step will be sharing fully immersive experiences, so you can have high-speed, low-latency video in both directions.
Autonomous vehicles, but before we even get to fully autonomous—because there’s some debate about when we’re going to get to a car that you can get into with no steering wheel and it just takes you where you want to go; that’s still a hard problem. Before we have fully autonomous cars that can take you around without a steering wheel, we’re going to have a set of technologies that improve the safety of semiautonomous cars. Things like lane assist, and better cruise control, and better visibility at night, and better navigation; those sorts of things. We’re also working on vehicle-to-vehicle communication, which is another application of low-latency, and can be used to improve safety.
I’ll give you a quick anecdote on that. In some sense we already have a form of it, it’s called brake lights. Right now, when you’re driving down the highway, and the car in front puts on the lights, you see that and then you take action, you may slow down or whatever. You can see a whole bunch of brake lights, if the traffic is starting to back up, and that alerts you to slow down. Brake lights have transitioned from incandescent bulbs which take, like, one hundred milliseconds to turn on to LED bulbs which take one millisecond to turn on. And if you multiply a hundred milliseconds at highway speeds, it’s six to eight feet depending on the speed, and you realize that low-latency can save lives, and make the system more effective.
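As a quick back-of-the-envelope check of that arithmetic: the 100 ms and 1 ms turn-on times come from the discussion above, while the specific speeds below are illustrative.

```python
# Distance a car travels during a brake light's turn-on delay:
# ~100 ms for an incandescent bulb vs ~1 ms for an LED.

def distance_traveled_ft(speed_mph: float, latency_ms: float) -> float:
    """Feet traveled during `latency_ms` of delay at `speed_mph`."""
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * latency_ms / 1000

for mph in (45, 55, 65):
    incandescent = distance_traveled_ft(mph, 100)  # 45 mph -> 6.6 ft
    led = distance_traveled_ft(mph, 1)             # 45 mph -> 0.07 ft
    print(f"{mph} mph: {incandescent:.1f} ft vs {led:.2f} ft")
```

At 45 to 55 mph the incandescent delay costs roughly six to eight feet of warning distance, which matches the figure quoted in the conversation.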
That’s one of the hallmarks of 5G, is we’re going to be able to connect things at low-latency to improve the safety or the function. Or, in the case of machine learning, where sometimes you want processing to be done in the phone, and sometimes you want to access enormous processing in the cloud, or at the edge. When we say edge, in this context, we mean something very close to the phone, within a small number of hops or routes to get to that processing. If you do that, you can have incredible capability that wasn’t possible before.
To give you an example of what I’m talking about, I recently went to the Mobile World Congress America show in San Francisco, it’s a great show, and I walked through the Verizon booth and I saw a demonstration that they had made. In their demonstration, they had taken a small consumer drone, and I mean it’s a really tiny one—just two or three inches long—that costs $18. All this little thing does is send back video, live video, and you control it with Wi-Fi, and they had it following a red balloon. The way it followed it was, it sent the video to a very powerful edge processing computer, which then performed a sophisticated computer vision and control algorithm and then sent the commands back. So, what you saw was this little low-cost device doing something very sophisticated and powerful, because it had a low-latency connection to a lot of processing power. And then, just to really complete that, they switched it from edge computing, that was right there at the booth, to a cloud-based computing service that was fifty milliseconds away, and once they did that, the little demo wouldn’t function anymore. They were showing the power of low-latency, high-speed video and media-type communication, which enabled a simple device to do something similar to a much more complex device, in real time, and they could offer that almost like a service.
So, that paradigm is very powerful, and it applies to many different use cases. It’s enabled by high-performance connectivity which is something that we supply, and we’re very proficient at that. It impacts machine learning, because it gives you different ways to take advantage of the progress there—you can do it locally, you can do it on the edge, you can do it remotely. When you combine mobile, and all the investment that’s been made there, you leverage that to apply to other devices like automobiles, medical devices, robotics, other kinds of consumer products like wearables and assistant speakers, and those kinds of things. There’s just a vast landscape of technologies and services that all can be improved by what we’ve done, and what 5G will bring. And so, that’s why we’re pretty fired up about the next iteration here.
I assume you have done theoretical thinking about the absolute maximum rate at which data can be transferred. Are we one percent of the way there, or ten percent, or can’t we even measure it because it’s so small? Is this going to go on forever?
I am so glad you asked. It’s so interesting. This Monday morning, we just put a new piece of artwork in our research center—there’s a piece of artwork on every floor—and on the first floor, when you walk in, there’s a piece of artwork that has Claude Shannon and a number of his equations, including the famous one which is the Shannon capacity limit. That’s the first thing you see when you walk into the research center at Qualcomm. That governs how fast you can move data across a link, and you can’t beat it. There’s no way, any more than you can go faster than the speed of light. So, the question is, “How close are we to that limit?” If you have just two devices, two antennas, and a given amount of spectrum, and a given amount of power, then we can get pretty darn close to that limit. But the question is not that, the question is really, “Are we close to how fast of a service we can offer a mobile user in a dense area?” And to that question, the answer is, “We’re nowhere close.” We can still get significantly better; by that, I mean orders of magnitude better than we are now.
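For reference, the limit on that artwork is the Shannon–Hartley capacity, C = B · log₂(1 + S/N): bandwidth times the log of one plus the signal-to-noise ratio. A minimal sketch, where the 20 MHz and 20 dB figures are illustrative and not from the interview:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 20 dB SNR (a linear ratio of 100).
snr = 10 ** (20 / 10)
c = shannon_capacity_bps(20e6, snr)
print(f"{c / 1e6:.0f} Mbit/s")  # -> 133 Mbit/s
```

No modem can beat this figure for that single channel; the three avenues described next (better modems, more spectrum, denser spatial reuse) all work by approaching the limit or by changing B and the number of links rather than by violating it.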
I can tell you three ways that that can be accomplished, and we’re doing all three of them. Number one is, we continue to make better modems that are more efficient, better receivers, better equalizers, better antennas, all of those techniques, and 5G is an example of that.
Number two, we always work with the regulator and operators to bring more spectrum, more radio spectrum to bear. If you look at the overall spectrum chart, only a sliver of it is really used for mobile communication, and we’re going to be able to use a lot more of it, and use more spectrum at high frequencies, like millimeter wave and above, that’s going to make a lot more “highway,” so to speak, for data transfer.
And the third thing is, the average radius of a base station can shrink, and we can use that channel over and over and over again. So right now, if you drive your car, and you listen to a radio station, the radio industry cannot use that channel again until you get hundreds of miles away. In the modern cellular systems, we’re learning how to reuse that channel even when you’re a very short distance away, potentially only feet or tens of meters away, so you can use it again and again and again.
So, with those three pillars, we’re really not close, and everyone can look forward to faster, faster, faster modems. And every time we move that modem speed up, that, of course, is the foundation for bigger screens, and more video, and new use cases that weren’t possible before, at a given price point, which now become possible. We’re not at the end yet, we’ve got a long way to go.
You made a passing reference to Moore’s Law; you didn’t call it out, but you referenced exponential growth, and that the speed of computers would increase. Everybody always says, “Is Moore’s Law finally over?” You see those headlines all the time, and, like all headlines that are a question, the answer is almost always, “No.” You’ve made references to quantum computing and all that. Do we have opportunities to increase processor speed well into the future with completely different architectures?
We do. We absolutely do. And I believe that will occur. I mean, we’re not at the limit yet now. You can find “Moore’s Law is over” articles ten years ago also, and somehow it hasn’t happened yet. When we get past three nanometers, yeah, certain things are going to get really, really tough. But then there will be new approaches that will take us there, take us to the next step.
There are also architectural improvements, and other axes that can be exploited; the same thing I just described to you in wireless. Shannon said that we can only go so far between two antennas in a given amount of spectrum, with a given amount of power. But we can escape that by increasing the spectrum, increasing the number of antennas, and reusing the spectrum over and over again, and we can still get the job done without breaking any fundamental laws. So, at least for the time being, the exponential growth is still very much intact.
You’ve mentioned Claude Shannon twice. He’s a fascinating character, and one of the things he did that’s kind of monumental was the paper he wrote in 1949 or 1950 about how a computer could play chess, and he actually figured out an algorithm for that. What was really fascinating about it was that this was one of the first times somebody looked at a computer and saw something other than a calculator. Because up until that point people just did not, and he made that intuitive leap to say, “Here’s how you would make a computer do something other than math,” but it’s really doing math. There’s a fascinating new book about him called A Mind at Play, which I just read, that I recommend.
We’re running out of time here. We’re wrapping up. I’m curious: do you write, or do you have a place where people who want to follow you can keep track of what you’re up to?
Well, I don’t have a lot there, but I do have a Twitter, and once in a while I’ll share a few thoughts; I should probably do more of that than I do. I also have an internal blog which I should update more often. I’m sorry to say I’m not very prolific at external writing, but that is something I would love to do more of.
And my final question is, are you a consumer of science fiction? You quoted Arthur C. Clarke earlier, and I’m curious if you read it, or watch TV, or movies or what have you. And if so, do you have any visions of the future that are in fiction, that you kind of identify with? 
Yes, I will answer an emphatic yes to that. I love all forms of science fiction and one of my favorites is Star Trek. My name spelled backwards is “Borg.” In fact, our chairman Paul Jacobs—I worked for him most of my career—he calls me “Locutus.” Given the discussion we just had—if you’re a fan of Star Trek and, in particular, the Star Trek: The Next Generation shows that were on in the ‘80s and early ‘90s, there was an episode where Commander Data met Mr. Spock. And that was really a good one, because you had Commander Data, who is an android and wants to be human, wants to have emotion and creativity and those things that we discussed, but can’t quite get there, meeting Mr. Spock who is a living thing and trying to purge all emotion and so forth, to just be pure logic, and they had an interaction. I thought that was just really interesting.
But, yes, I follow all science fiction. I like the book Physics of Star Trek by Krauss, I got to meet him once. And it’s amazing how many of the devices and concepts from science fiction have become science fact. In fact, the only difference between science fiction and science fact, is time. Over time we’ve pretty much built everything that people have thought up—communicators, replicators, computers.
I know, you can’t see one of those in-ear Bluetooth devices and not see Uhura, right? That’s what she had.
Correct. That little earpiece is a Bluetooth device. The communicator is a flip phone. The little square memory cartridges were like a floppy disk from the ‘80s. 3-D printers are replicators. We also have software replicators that can replicate and transport. We kind of have the hardware but not quite the way they do yet, but we’ll get there.
Do you think that these science fiction worlds anticipate the world or inadvertently create it? Do we have flip phones because of Star Trek or did Star Trek foresee the flip phone? 
I believe their influence is undeniable.
I agree and a lot of times they say it, right? They say, “Oh, I saw that and I wanted to do that. I wanted to build that.” You know there’s an XPRIZE for making a tricorder, and that came from Star Trek.
We were the sponsor of that XPRIZE and we were highly involved in that. And, yep, that’s exactly right, the inspiration of that was a portable device that can make a bunch of diagnoses, and that is exactly what took place and now we have real ones.
Well, I want to thank you for a fascinating hour. I want to thank you for going on all of these tangents. It was really fascinating. 
Wonderful, thank you as well. I also really enjoyed it, and anytime you want to follow up or talk some more please don’t hesitate. I really enjoyed talking with you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 24: A Conversation with Deep Varma

In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
[podcast_player name=”Episode 24: A Conversation with Deep Varma” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-12-04-(00-55-19)-deep-varma.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/12/voices-headshot-card_preview-1.jpeg”]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma, he is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor’s of Science in Computer Science. He has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define artificial intelligence, it is intelligence created by machines based on human wisdom, to augment a human’s lifestyle and to help them make smarter choices. So that’s how I define artificial intelligence in very simple, layman’s terms.
But you just kind of used the word, “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think the intelligence part, what we need to understand is, when you think about human beings, most of the time, they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes we don’t see, is this: I still remember in the old days I used to have a conventional thermostat at my home, which turned on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. As soon as I put the Nest there, there’s intelligence. It is sensing whether someone is in the home or not, so there’s motion sensing. Then it is seeing what kind of temperature I like during summer time and during winter time. And so, artificially, the software, which is the brain that we have put on this device, is providing the intelligence, and saying, “Great, this is what I’m going to do.” So, in one way it augmented my lifestyle; rather than me making those decisions, it is helping me make the smart choices. So, that’s what I meant by the intelligence piece here.
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors there on those devices—think about the central nervous system, what human beings have, it is a small piece of a software which is embedded within that device, which is making decisions for you—so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step when you want to do computer vision or translation: do you start by saying, “OK, how do I do it?” Or do you start by saying, “Forget how a human does it; what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because the way human brains, or the central nervous system, the speed that they process the data, machines are still not there at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
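The feed-it-labeled-examples loop described here can be sketched in a few lines. This is a toy nearest-centroid classifier with made-up two-dimensional “features”; a real computer-vision system learns its features from pixels, but the train-then-predict shape is the same.

```python
# Minimal sketch of supervised learning: show the machine labeled
# examples, then ask it to label something it has not seen.
from statistics import mean

training_data = {
    "cat":   [[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]],  # labeled examples
    "mouse": [[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]],
}

# "Training": summarize each label by the average of its examples.
centroids = {
    label: [mean(dim) for dim in zip(*examples)]
    for label, examples in training_data.items()
}

def predict(features):
    """Return the label whose centroid is closest to `features`."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(predict([0.82, 0.28]))  # -> cat
```

No one tells the model “this is a cat” at prediction time; the label falls out of the examples it was fed, which is exactly the distinction being drawn in the conversation.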
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings fall under more of the general intelligence, and machines still fall under the artificial intelligence framework.
And the example you were giving, I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines, and you keep feeding them the big data with the hope that they will learn and they will say, “This is basically a picture of a mouse or this is a picture of a cat.”
This is where, I think, this artificial general intelligence is shaping up: we are going to abstract a level up and start conditioning, but I feel we haven’t cracked the code one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about that. That being said, everybody kind of likes to think about an AGI. 
AI is, what, six decades old, and we’ve been making progress. Do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or is AGI something completely different: not just a better narrow AI, not just a bunch of narrow AIs bolted together, but a completely different thing? What do you say?
Yes, so what I will say is, it is like in the software development of computer systems: we call something an object, and then we do inheritance of a couple of objects, and encapsulation of objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building computer vision for real estate. There are companies investing in building computer vision for cars, and all those things. We are in this state where all these disjointed, disassociated investments in our systems are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree is this: I believe AI is complementing us and AGI is replicating us. And this is where I tend to believe that the day AGI comes, meaning a singularity where machines reach the wisdom or the processing power of human beings, that, to me, seems like doomsday, right? Because those machines are going to be smarter than us, and they will control us.
And the reason I believe that, and there is a scientific reason for my belief, is that we know that in the central nervous system the core unit is the neuron, and we know neurons carry two signals, chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate the sensory signals: you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen; I’m close to confident. Thinking machines are going to come, IBM Watson as an example, so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side, which I call a chemical soup, is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves. We recently saw what happened with Facebook, where Facebook machines were talking to each other and started inventing their own language over a period of time. But I believe the chemical mix of humans is what is next to impossible to reproduce.
I mean, and I don’t want to take a hard stand, because we have seen over the decades that what people used to believe in the seventies has been proven to be right. I think the day we are able to find the chemical soup, it means we have found Nirvana; we have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to be another Deep Varma?” And if yes, then where is the emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings to the next level?
Well, I mean there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straight forward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions, and complementing you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments going in.
I read a paper of yours where you made a passing reference to Westworld.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers. Take just the first episode: do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting.
I saw that Elon Musk spoke at the governors’ convention and said something, and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk.
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that what took human beings millions of years to evolve can immediately be attained by AGI, and the other camp is more concerned with controlling that, asking: are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there are generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or a fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes to my mind is, last year I did a remodel of my house. And when I did the remodeling there were electrical wires, there were these water pipelines going inside my house and we had to replace them with copper pipelines, and I was thinking, can machines replace those jobs? I keep coming back to the answer that those skill-level jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help to cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us to make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and to improve our productivity to make the smarter choices.
So, you would say that there are, in most jobs, there are elements that automation cannot replace, but it can augment, like a plumber, or so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about the example of a business consultant, right? Companies hire business consultants to come, collect all the data, then prepare PowerPoints on what you should do, and what you should not do. I think those are the areas where artificial intelligence is going to come, and if you have tons of the data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale them to the next level. So, in the example I’ve just given, the business consultants, if they are doing an audit of a company with the financial books, look into the tools to help, so that an audit that used to take thirty days now takes ten days. Improve how fast and how accurately you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering into, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it, is you just start using it to increase your own productivity.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and know about customs in other countries and all of that, so that actually being a translator you get more business out of this, not less, so do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like do you get the loan, that you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) do you think it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers want to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will tell me, “Deep, your house is worth $1.5 million.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sold data.
And that, at the end of the day, gives confidence back to the consumer, and it also shows that this is not because the appraiser who came to my home didn’t like me for XYZ reason, and he ends up giving me something wrong; so, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and the machines take the bias out, and bring more unbiased decision making.
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take every bit of information about who paid their loan off, and you just pour it all into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t it the case that it’s weighing hundreds of factors with various weights, and, how do you tease out, “Oh, it was this”? Life isn’t quite that simple, is it?
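The “hundreds of factors with various weights” picture Byron describes maps naturally onto a linear scoring model, where each weight can at least be read off directly. A minimal sketch, shrunk to three factors; the feature names, weights, and bias here are invented for illustration and are not any lender’s actual model:

```python
# Hypothetical loan-scoring sketch: a logistic-regression-style model
# whose per-feature contributions can be listed, illustrating the kind
# of "why was this decision made?" transparency discussed above.
# All feature names and weights are assumptions, not real data.

import math

WEIGHTS = {
    "debt_to_income": -3.0,   # a higher ratio pushes the score down
    "years_employed": 0.4,    # longer employment pushes it up
    "prior_defaults": -1.5,   # each prior default pushes it down
}
BIAS = 1.0

def score(applicant):
    """Return (probability of repayment, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    z = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-z))  # logistic link
    return probability, contributions

prob, why = score({"debt_to_income": 0.45, "years_employed": 6, "prior_defaults": 0})
# "why" holds each factor's signed contribution, which is the simple,
# human-readable explanation a linear model can always provide.
```

With hundreds of interacting features, or a deep network in place of the linear sum, those individual contributions are exactly what becomes hard to tease out, which is the tension in the question above.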
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example of how this can go wrong. When we were training our computer vision system, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can look at any image and know this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window in the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into play, where small data can make all these predictions go completely wrong.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, it’s being positive. Realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is the most important is we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something that they don’t have to worry about, and over a period of time artificial intelligence is going to start becoming more and more a part of business, and those who are not able to catch up with this are going to see their unemployment rates increase. They’re going to see company losses increase because they’re not making some of their decisions in the right way.
You’re going to see companies, like Lehman Brothers, who were making all these data decisions for their clients not by using machines but by relying on humans, and these big companies fail because of that. So, I think that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data physically stored in systems is getting hacked. And now we’ll have all this data wirelessly transmitting, machines talking to each other, IoT devices talking to each other—how are we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, trying to make assumptions, and entering into my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble-gum you had better start using AI”?
I will say every industry. I think every industry needs to worry about it. Some industries may adapt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that, those businesses will be blindsided—be it small businesses or mom and pop shops or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that, as a remedy, we are building artificial intelligence.
But this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like phishing, which is a big problem: there are so many companies who are trying to solve the phishing problem in email, but we have seen technologies not able to solve it. So, I think AI is a remedy, but if we stay focused just on the big data, that’s, I think, completely wrong, because my fear is that a small data set can completely destroy the predictions built on a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision just collapsed with one photo of a dog. And this is where I feel that if we are putting so much emphasis on using the big data set, are there smaller data sets which we also need to worry about, to make sure that we are bridging the gap enough that our security is not compromised?
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day when we have built massive infrastructures that are reliant on computers—our generation of power and the supply of power and telecommunications—and suddenly there is a whole outage which can bring the world to a standstill, because there is a small hole which we never thought about? That, to me, is the bigger threat than the standalone individual things which are happening now.
That’s a hard problem to solve, there’s a small hole on the internet that we’ve not thought about that can bring the whole thing down, that would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring the artificial general intelligence back into a picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driven cars which are shaping up around us, and people were asking me when we will see those self-driven cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day,” and accidents happen with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time, and not able to get one signal, they can kill human beings much faster compared to what human drivers do, so that’s the rationale which I’m trying to put here.
So, one of my questions that I was going to ask you, is, do you think AI is a mania? Like it’s everywhere but it seems like, you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less, is that true?
That’s true.
There was a man in the ‘60s named Weizenbaum who made a program called ELIZA, which was a simple program that you would ask a question of, say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and then it would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their hearts to it, even though they knew it was a program. It really disturbed him that people developed an emotional attachment to ELIZA, and he said that when a computer says, “I understand,” that it’s a lie, that there’s no “I,” there’s nothing that understands anything.
Do you worry that if we build machines that can imitate human emotions, maybe the care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, it’s a very great question, and I think you also picked a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in the kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what’s the temperature look like today?” Alexa says, “Temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying, “Hey, don’t be rude, just say, ‘Alexa, stop.’” You see that connection? The connection is you’ve already started treating this machine as a respectful device, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends have been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, not by meeting individuals, but just by seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silo. There are too many silos happening. Like in my home, I have Alexa, I have a Nest, those plug-ins. I love, you know, where Alexa is talking to Nest, “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is that those devices are communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open, close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel that where we are heading is more of the connectivity with those devices, which will help us, again, compliment and make the smart choices, and our reliance on those assistants is going to increase.
Another example here: I get up in the morning and the first thing I do is come to the kitchen and say, “Alexa, put on the music,” and, “Alexa, what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine; my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to office because it’s a working day, Deep just came to kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in next few years. All these movies that we used to watch where people were sitting there, and watching everything happen in the real time, that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate, because this brings me home. So, I think in artificial intelligence, as you said, there are two aspects to it: one is for the consumer and one is internal, and I think for us AI helps us to better understand what our consumers are looking for in a home. How can we help move them faster in their search—that’s the consumer-facing tagline. An example is, “Byron is looking at two-bedroom, two-bath houses in a quiet neighborhood, in a good school district,” and basically using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer-facing.
Now when it comes to the internal facing, internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal-facing way we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I call it data-driven companies—there are two aspects of the data driven, one is the data-driven decision making, this is more of an analyst, and that’s the internal reference to your point, and the external is to the consumer-facing data-driven product company, which focuses on how do we understand the unique criteria and unique intent of you as a buyer—and that’s how we use artificial intelligence in the spectrum of Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative, like do you swing for the fences and miss a lot? Or, do you look for easy incremental wins? Or, are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better and in a much more accurate way, that’s how science helps you. And then, basically, there’s trial and error, or what we call an, “A/B testing” framework, which helps you to validate whether what science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is: we have invested in our computer vision, which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we have even trained them so they can say, “This is a kitchen with a wide granite countertop.” Now we have built this massive database. When a consumer comes to the Trulia site, what they do is share their intent; they say, “I want two bedrooms in Noe Valley,” and the first thing that they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, and times when those images did not match up with the intent of a consumer. So, what we did with our computer vision, we invested in something called “the most attractive image,” which basically takes three attributes—it looks into the quality of an image, it looks into the appropriateness of an image, and it looks into the relevancy of an image—and based on these three things we use our convolutional neural network models to rank the images and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing, we show the most attractive photo first. And that way, the consumer gets more engaged with that listing. And what we have seen—using the science, which is machine learning, deep learning, CNN models, and doing the A/B testing—is that this project increased our inquiries for the listing by double digits, so that’s one of the examples which I just wanted to share with you.
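The ranking step Deep describes combines three per-image scores into a single ordering. A minimal sketch of that combination; in practice each score would come from a trained vision model, and the blend weights here are assumed for illustration, not Trulia’s actual ones:

```python
# Hedged sketch of the "most attractive image" idea: each photo
# carries three scores in [0, 1] (quality, appropriateness,
# relevancy), and listings show the photo with the highest weighted
# blend first. Scores and weights below are invented examples.

BLEND = {"quality": 0.4, "appropriateness": 0.3, "relevancy": 0.3}

def rank_images(images):
    """Sort photo score-dicts best-first by the blended score."""
    def blended(img):
        return sum(BLEND[attr] * img[attr] for attr in BLEND)
    return sorted(images, key=blended, reverse=True)

photos = [
    {"id": "blurry_kitchen", "quality": 0.2, "appropriateness": 0.9, "relevancy": 0.8},
    {"id": "sharp_kitchen",  "quality": 0.9, "appropriateness": 0.9, "relevancy": 0.9},
    {"id": "street_view",    "quality": 0.8, "appropriateness": 0.4, "relevancy": 0.3},
]
best_first = rank_images(photos)  # the sharp, relevant kitchen wins
```

The A/B test Deep mentions would then compare engagement between listings served in this order versus the original photo order.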
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here, and I will use just a minute to tell you, is that we have built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria; we have built machine learning systems like computer vision, recommender systems, and the user engagement prediction model; and I think our next challenge will be to keep optimizing on the consumer intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking into?” So, if Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean he wants to buy a home there? Or, if Byron lives here in San Francisco, how do we understand the difference?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge because we have already built this, but it is the optimization—and make sure that our consumers get what they’re searching for, and keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example: Deep, basically, is looking into Noe Valley or San Francisco, and email and push notifications are the two channels, for us, where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, I think this is where we are going to keep optimizing on our consumers’ intent, and we’ll keep giving them the right content.
Alright, well that is fantastic, you write on these topics so, if people want to keep up with you Deep how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well I was just referring to your blog like I was reading some of your posts.
Yeah, so we have our tech blog, http://www.trulia.com/tech, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me, to be very candid—my data scientist team, and all those things. So, we write our blogs there, and I definitely ask people to follow us on those blogs. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels which people can follow. We also host data science meetups here at Trulia in San Francisco, on the seventh floor of our building; that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 20: A Conversation with Marie des Jardins

In this episode, Byron and Marie talk about the Turing test, Watson, autonomous vehicles, and language processing.
[podcast_player name=”Episode 20: A Conversation with Marie des Jardins” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(01-03-03)-marie-de-jardin.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-2.jpg”]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited that our guest is Marie des Jardins. She is an Associate Dean for Engineering and Information Technology as well as a professor of Computer Science at the University of Maryland, Baltimore County. She got her undergrad degree from Harvard, and a Ph.D. in computer science from Berkeley, and she’s been involved in the National Conference of the Association for the Advancement of Artificial Intelligence for over 12 years. Welcome to the show, Marie.
Marie des Jardins: Hi, it’s nice to be here.
I often open the show with “What is artificial intelligence?” because, interestingly, there’s no consensus definition of it, and I get a different kind of view of it from everybody. So I’ll start with that. What is artificial intelligence?
Sure. I’ve always thought about artificial intelligence as just a very broad term referring to trying to get computers to do things that we would consider intelligent if people did them. What’s interesting about that definition is it’s a moving target, because we change our opinions over time about what’s intelligent. As computers get better at doing things, they no longer seem that intelligent to us.
We use the word “intelligent,” too, and I’m not going to dwell on definitions, but what do you think intelligence is at its core?
So, it’s definitely hard to pin down, but I think of it as activities that human beings carry out, that we don’t know of lower order animals doing, other than some of the higher primates who can do things that seem intelligent to us. So intelligence involves intentionality, which means setting goals and making active plans to carry them out, and it involves learning over time and being able to react to situations differently based on experiences and knowledge that we’ve gained over time. The third part, I would argue, is that intelligence includes communication, so the ability to communicate with other beings, other intelligent agents, about your activities and goals.
Well, that’s really useful and specific. Let’s look at some of those things in detail a little bit. You mentioned intentionality. Do you think that intentionality is driven by consciousness? I mean, can you have intentionality without consciousness? Is consciousness therefore a requisite for intelligence?
I think that’s a really interesting question. I would decline to answer it mainly because I don’t think we ever can really know what consciousness is. We all have a sense of being conscious inside our own brains—at least I believe that. But of course, I’m only able to say anything meaningful about my own sense of consciousness. We just don’t have any way to measure consciousness or even really define what it is. So, there does seem to be this idea of self-awareness that we see in various kinds of animals—including humans—and that seems to be a precursor to what we call consciousness. But I think it’s awfully hard to define that term, and so I would be hesitant to put that as a prerequisite on intentionality.
Well, I think people agree what it is in a sense. Consciousness is the experience of things. It’s having a subjective experience of something. Isn’t the debate more like where does that come from? How does that arise? Why do we have it? But in terms of the simple definition, we do know that, don’t we?
Well, I don’t know. I mean, where does it come from, how does it arise, and do different people even have the same experience of consciousness as each other? I think when you start to dig down into it, we don’t have any way to tell whether another being is conscious or self-aware other than to ask them.
Let’s look at that for a minute, because self-awareness is a little different. Are you familiar with the mirror test that Professor Gallup does, where they take a sleeping animal, and paint a little red spot on its forehead, and then wait until it walks by a mirror, and if it stops and rubs its own forehead, then, according to the theory, it has a sense of self and therefore it is self-aware. And the only reason all of this matters is if you really want to build an intelligent machine, you have to start with what goes into that. So do you think that is a measure of self-awareness, and would a computer need to pass the mirror test, as it were?
That’s where I think we start to run into problems, right? Because it’s an interesting experiment, and it maybe tells us something about, let’s say, a type of self-awareness. But if an animal is blind, it can’t pass that test, so passing the test can’t be a prerequisite for intelligence.
Well, I guess the question would be if you had the cognitive ability and a fully functional set of senses that most of your species have, are you able to look at something else and determine that, “I am a ‘me’” and “That’s a reflection of me,” and “That actually is me, but I can touch my own forehead.”
I’m thinking, sorry. I’m being nonresponsive because I’m thinking about it, and I guess what I’m trying to say is that a test that’s designed for animals that have evolved in the wild is not necessarily a meaningful test for intelligent agents that we’ve engineered, because I could design a robot that can pass that test, that nobody would think was self-aware in any interesting and meaningful sense. In other words, for any given test you design, I can game and redesign my system to pass that test. But the problem is that the test measures something that we think is true in the wild, but as soon as we say, “This is the test,” we can build the thing that passes that test that doesn’t do what we meant for the agent to be able to do, to be self-aware.
Right. And it should be pointed out that there are those who look at the mirror test and say, “Well, if you put a spot on an animal’s hand, and they just kind of wipe their hand…” That it’s really more a test of whether they have the mental capability to understand what a mirror does, and it has nothing to do with…
Right. Exactly. It’s measuring something about the mirror and so forth.
Let’s talk about another thing in your intelligence definition, because I’m fascinated by what you just kind of outlined. You said that some amount of communication, therefore some language, is necessary. So do you think—at least before we get to applying it to machines—that language is a requisite in the animal kingdom for intelligence?
Well, I don’t think it has to be language in the sense of the English language or our human natural language, but there are different ways to communicate. You can communicate through gestures. You can communicate through physical interaction. So it doesn’t necessarily have to be spoken language, but I do think the ability to convey information to another being that can then receive the information that was conveyed is part of what we mean by intelligence. Languages for artificial systems could be very limited and constrained, so I don’t think that we necessarily have to solve the natural language problem in order to develop what we would call intelligent systems. But I think when you talk about strong AI, which is referring to sort of human level intelligence, at that point, I don’t think you can really demonstrate human level intelligence without being able to communicate in some kind of natural language.
So, just to be clear, are you saying language indicates intelligence or language is required for intelligence?
Language is required for intelligence.
There are actually a number of examples in the plant kingdom where the plants are able to communicate signals to other plants. Would you say that qualifies? If you’re familiar with any of those examples, do those qualify as language in a meaningful sense, or is that just like, “Well, you can call it language if you’re trying to do clever thought riddles, but it’s not really a language.”
Yeah, I guess I’d say, as with most interesting things, there’s sort of a spectrum. But one of the characteristics of intelligent language, I think, is the ability to learn the language and to adapt the language to new situations. So, you know, ants communicate with each other by laying down pheromones, but ants can’t develop new ways to communicate with each other. If you put them into a new environment, they’re biologically hardwired in how they communicate.
There’s an interesting philosophical argument that the species is intelligent, or evolution is intelligent at some level. I think those are interesting philosophical discussions. I don’t know that they’re particularly helpful in understanding intelligence in individual beings.
Well, I definitely want to get to computers here in a minute and apply all of this as best we can, but… By our best guess, humans acquired speech a hundred thousand years ago, roughly the same time we got fire. The theory is that fire allowed us to cook food, which allowed us to break down the proteins in it and make it more digestible, and that that allowed us to increase our caloric consumption, and we went all in on the brain, and that gave us language. Would your statement that language is a requirement for intelligence imply that a hundred and one thousand years ago, we were not intelligent?
I would guess that human beings were communicating with each other a hundred and one thousand years ago and probably two hundred thousand years ago. And again, I think intelligence is a spectrum. I think chimpanzees are intelligent and dolphins are intelligent, at some level. I don’t know about pigs and dogs. I don’t have strong evidence.
Interestingly, of all things, dogs don’t pass the red paint mirror test. Yet they are the only animal on the whole face of the earth—and by all means, any listener out there who knows otherwise, please email me—that, if you point at an object, will look at the object.
Yeah, even chimpanzees don’t do it. So it’s thought that they co-evolved with us as we domesticated them. That was something we selected for, not overtly but passively, because that’s useful. It’s like, “Go get that thing,” and then the dog looks over there at it.
It’s funny, there’s an old Far Side cartoon—you can’t get those things out of your head—where the dolphins are in the tank, and they’re writing down all the dolphins’ noises, and they’re saying things like, “Se habla español,” and “Sprechen sie Deutsch,” and the scientists are like, “Yeah, we can’t make any sense of it.”
So let’s get back to language, because I’m really fascinated by this and particularly the cognitive aspects of it. So, what do you think is meaningful, if anything, about the Turing test—which of course you know, but for the benefit of our listeners, is: Alan Turing put this out that if you’re on a computer terminal, and you’re chatting with somebody, typing, and you can’t tell if it’s a person or a machine, then you have to say that machine is intelligent.
Right, and of course, Alan Turing’s original version of that test was a little bit different, and more gendered, if you’re familiar with it.
He based it on the gendered test, right. You’re entirely right. Yes.
There’s a lot of objections to the Turing test. In fact, when I teach the Introductory AI class at UMBC, I have the students read some of Alan Turing’s work and then John Searle’s arguments against the Turing test.
Chinese Room, right?
The Chinese Room and so forth, and I have them talk about all of that. And, again, I think these are, sort of, interesting philosophical discussions that, luckily, we don’t actually need to resolve in order to keep making progress towards intelligence, because I don’t think this is one that will ever be resolved.
Here’s something I think is really interesting: when that test was proposed, and in the early years of AI, the way it was envisioned was based on the communication of the time. Today’s Turing tests are based in an environment in which we communicate very differently—we communicate very differently online than we do in person—than Alan Turing ever imagined we would. And so the kind of chat bots that do well at these Turing tests really probably wouldn’t have looked intelligent to an AI researcher in the 1960s, but I don’t think that most social media posts would have looked very intelligent, either. And so we’ve kind of adapted ourselves to this sort of cryptic, darting, illogical, jumping-around-in-different-topics way of conversing with each other online, where lapses in rationality and continuity are forgiven really easily. And when I see some of the transcripts of modern Turing tests, I think, well, this kind of reminds me a little bit of PARRY. I don’t know if you’re familiar with ELIZA and PARRY.
Weizenbaum’s 1960s Q&A, his kind of psychologist helper, right?
Right. So ELIZA was a pattern-recognition-based online psychologist that would use this, I guess, Freudian way of interrogating a patient, to ask them about their feelings and so forth. And when this was created, people were very taken in by it, because, you know, they would spill out their deepest, darkest secrets to what turned out to be, essentially, one of the earliest chat bots. There was a version of that that was created later. I can’t remember the researcher who created it, but it was studying paranoid schizophrenia and the speech patterns of paranoid schizophrenics, and that version of ELIZA was called PARRY.
If you read any transcripts by PARRY, it’s very disjointed, and it can get away with not having a deep semantic model, because if it doesn’t really understand anything, and if it can’t match anything, it just changes the topic. And that’s what modern Turing tests look like to me, mostly. I think if we were going to really use the Turing test as some measure of intelligence, I think maybe we need to put some rules on critical thinking and rationality. What is it that we’re chatting about? And what is the nature of this communication with the agent in the black box? Because, right now, it’s just degenerated into, again, this kind of gaming the system. Well, let’s just see if we can trick a human into thinking that we’re a person, but we get to take advantage of the fact that online communication is this kind of dance that we play that’s not necessarily logical and rational and rule-following.
I want to come back to that, because I want to go down that path with you, but beforehand, it should be pointed out, and correct me if I’m wrong because you know this a lot better than I do, but the people who interacted with ELIZA all knew it was a computer and that there was “nobody at home.” And that, in the end, is what freaked Weizenbaum out, and had him turn on artificial intelligence, because I think he said something to the effect that when the computer says, “I understand,” it’s a lie. It’s a lie because there is no “I,” and there’s nothing to understand. Was that the same case with PARRY that they knew full and well they were talking to a machine, but they still engaged with it as if it was another person?
Well, that was being used to try to model the behavior of a paranoid schizophrenic, and so my understanding is that they ran some experiments where they had psychologists, in a blind setting, interact with an actual paranoid schizophrenic or this model, and do a Turing test to try to determine whether this was a convincing model of paranoid schizophrenic interaction style. I think it was a scientific experiment that was being run.
So, you used the phrase, when you were talking about PARRY just now, “It doesn’t understand anything.” That’s obviously Searle’s whole question with the Chinese Room, that the non-Chinese speaker who can use these books to answer questions in Chinese doesn’t understand anything. Do you think even today a computer understands anything, and will a computer ever understand anything?
That’s an interesting question. So when we talk about this with my class, with my students, I use the analogy of learning a new language. I don’t know if you speak any foreign languages to any degree of fluency.
I’m still working on English.
Right. So, I speak a little bit of French and a little bit of German and a little bit of Italian, so I’m very conscious of the language learning process. When I was first learning Italian, anything I said in Italian was laboriously translated in my mind by essentially looking up rules. I don’t remember any Italian, so I can’t use Italian as an example anymore. I want to say, “I am twenty years old” in French, and so in order to do that, I just don’t say, “J’ai vingt ans”; I say to myself, “How do I say, ‘I am 20 years old’? Oh, I remember, they don’t say, ‘I am 20 years old.’ They say, ‘I have 20 years.’ OK. ‘I have’ is ‘J’ai,’ ‘twenty’ is ‘vingt’…” And I’m doing this kind of pattern-based look up in my mind. But doing that inside my head, I can communicate a little bit in French. So do I understand French?
Well, the answer to that question would be “no,” but what you understand is that process you just talked about, “OK, I need to deconstruct the sentence. I need to figure out what the subject is. I need to line that up with the verb.” So yes, you have a higher order understanding that allows you to do that. You understand what you’re doing, unquestionably.
And so the question is, at that meta-meta-meta-meta-meta level, will a computer ever understand what it’s doing?
And I think this actually kind of gets back to the question of consciousness. Is understanding—in the sense that Searle wants it to be, or Weizenbaum wanted it to be—tied up in our self-awareness of the processes that we’re carrying out, to reason about things in the world?
So, I only have one more Turing test question to ask, then I would love to change the subject to the state of the art today, and then I would love to talk about when you think we’re going to have certain advances, and then maybe we can talk about the impact of all this technology on jobs. So, with that looking forward, one last question, which is: when you were talking about maybe rethinking the Turing test, that we would have a different standard, maybe, today than Turing did. And by the way, the contests that they have where they really are trying to pass it, they are highly restricted and constrained, I think. Is that the case?
I am not that familiar with them, although I did read The Most Human Human, which is a very interesting book if you are looking for some light summer reading.
All right.
Are you familiar with the book? It’s by somebody who served as a human in the Loebner Prize Turing test, and sort of his experience of what it’s like to be the human.
No. I don’t know that. That’s funny. So, the interesting thing was that—and anybody who’s heard the show before will know I use this example—I always start everyone with the same question. I ask the same question of every system, and nobody ever gets it right, or even close. And because of that, I know within three seconds that I’m not talking to a human. The question is: “What’s larger? The sun or a nickel?” And no matter how “schizophrenic” or “disjointed,” to use your phrases, the person is, they answer, “The sun,” or “Duh,” or “Hmm.” But no machine can.
So, two questions: Is that question indicative of the state of the art, that we really are like in stone knives and bear skins with natural language? And second, do you think that we’re going to make strides forward that maybe someday you’ll have to wonder if I’m actually not a sophisticated artificial intelligence chatting with you or not?
Actually, I guess I’m surprised to hear you say that computers can’t answer that question, because I would think Watson, or a system like that, that has a big backend knowledge base that it’s drawing on would pretty easily be able to find that. I can Google “How big is the sun?” and “How big is a nickel?” and apply a pretty simple rule.
Well, you’re right. In all fairness, there’s not a global chat bot of Watson that I have found. I mean, the trick is that nickel is both a metal and a coin, and the sun is a homophone that could be a person’s son. But a person, a human, makes that connection. These are both round, and so they kind of look alike, and whatnot. When I say it, what I mean is you go to Cleverbot, or you go to the different chat bots that are entered in the Turing competitions and whatnot. You ask Google, you type that into Google, you don’t get the answer. So, you’re right, there are probably systems that can nail it. I just never bump into them.
And, you know, there’s probably context that you could provide in which the answer to that question would be the nickel. Right? So like I’ve got a drawing that we’ve just been talking about, and it’s got the sun in it, and it has a nickel in it, and the nickel is really big in the picture, and the sun is really small because it’s far away. And I say, “Which is bigger?” There might actually be a context in which the obvious answer isn’t actually the right answer, and I think that kind of trickiness is what makes people, you know, that’s the signal of intelligence, that we can kind of contextualize our reasoning. I think the question as a basic question, it’s such a factual question, that that’s the kind of thing that I think computers are actually really good at. What do you love more: A rose or a daisy? That’s a harder question.
You know, or what’s your mother’s favorite flower? Now there’s a tricky question.
Right. I have a book coming out on this topic at the end of the year, and I try to think up the hardest question, like what’s the last one. I’m sure listeners will have better ideas than I have. But one I came up with was: Dr. Smith is eating at her favorite restaurant when she receives a phone call. She rushes out, neglecting to pay her bill. Is management likely to prosecute? So we need to know: She’s probably a medical doctor. She probably got an emergency call. It’s her favorite restaurant, so she’s probably known there. She dashes out. Are they really going to go to all the effort to prosecute, not just get her to pay next time she’s in and whatnot? That is the kind of thing that has so many layers of experience that it would be hard for a machine to do.
Yeah, but I would argue that I think, eventually, we will have intelligent agents that are embedded in the world and interact with people and build up knowledge bases of that kind of common sense knowledge, and could answer that question. Or a similar type of question that was posed based on experience in the world and knowledge of interpersonal interactions. To me, that’s kind of the exciting future of AI. Being able to look up facts really fast, like Watson… Watson was exciting because it won Jeopardy, but let’s face it: looking up a lot of facts and being able to click on a buzzer really fast are not really the things that are the most exciting about the idea of an intelligent, human-like agent. They’re awfully cool, don’t get me wrong.
I think when we talk about commercial potential and replacing jobs, which you mentioned, I think those kinds of abilities to retrieve information really quickly, in a flexible way, that is something that can really lead to systems that are incredibly useful for human beings. Whether they are “strong AI” or not doesn’t matter. The philosophical stuff is fun to talk about, but there’s this other kind of practical, “What are we really going to build and what are we going to do with it?”
And it doesn’t require answering those questions.
Fair enough. In closing on all of that other part, I heard Ken Jennings speak at South by Southwest about it, and I will preface this by saying he’s incredibly gracious. He doesn’t say, “Well, it was rigged.” He did describe, though, that the buzzer situation was different, because that’s the one part that’s really hard to map. Because the buzzer’s the trick on Jeopardy, not the answers.
That’s right.
And that was all changed up a bit.
Ken is clearly the best human at the buzzer. He’s super smart, and he knows a ton of stuff, don’t get me wrong, I couldn’t win on Jeopardy. But I think it’s that buzzer that’s the difference. And so I think it would be really interesting to have a sort of Jeopardy contest in which the buzzer doesn’t matter, right? So, you just buzz in, and there’s some reasonable window in which to buzz in, and then it’s random who gets to answer the question, or maybe everybody gets to answer the question independently. A Jeopardy-like thing where that timed buzzing in isn’t part of it; it’s really the knowledge that’s the key. I suspect Watson would still do pretty well, and Ken would still do pretty well, but I’m not sure who would win in that case. It would depend a lot on the questions, I think.
So, you gave us a great segue just a minute ago when you said, “Is all of this talk about consciousness and awareness and self and Turing test and all that—does it matter?” And it sounded like you were saying, whether it does or doesn’t, there is plenty of exciting things that are coming down the pipe. So let’s talk about that. I would love to hear your thoughts on the state of the art. AI’s passed a bunch of milestones, like you said, there was chess, then Jeopardy, then AlphaGo, and then recently poker. What are some things, you think—without going to AGI which we’ll get to in a minute—we should look for? What’s the state of the art, and what are some things you think we’re going to see in a year, or two years, three years, that will dominate the headlines?
I think the most obvious thing is self-driving cars and autonomous vehicles, right? Which we already have out there on the roads doing a great deal. I drive a Volvo that can do lane following and can pretty much drive itself in many conditions. And that is really cool and really exciting. Is it intelligence? Well, no, not by the definitions we’ve just been talking about, but the technology to be able to do all of that very much came out of AI research and research directions.
But I guess there won’t be a watershed with that, like, in the way that one day we woke up and Lee Sedol had lost. I mean, won’t it be that in three years, the number one Justin Bieber song will have been written by an AI or something like that, where it’s like, “Wow, something just happened”?
Yeah, I guess I think it’s a little bit more like cell phones. Right? I mean, what was the moment for cell phones? I’m not sure there was one single moment.
Fair enough. That’s right.
It’s more of like a tipping point, and you can look back at it and say, “Oh, there’s this inflection point.” And I don’t know what it was for cell phones. I expect there was an inflection point when either cell phone technology became cheap enough, or cell tower coverage became prevalent enough that it made sense for people to have cell phones and start using them. And when that happened, it did happen very fast. I think it will be the same with self-driving cars.
It was very fast that cars started coming out with adaptive cruise control. We’ve had cruise control for a long time, where your car just keeps going at the same speed forever. But adaptive cruise control, where your car detects when there’s something in front of it and slows down or speeds up based on the conditions of the road, that happened really fast. It just came out and now lots of cars have that, and people are kind of used to it. GPS technology—I was just driving along the other day, and I was like, “Oh yeah, I’ve got a map in my car all the time.” And anytime I want to, I can say, “Hey, I’d like to go to this place,” and it will show me how to get to that place. We didn’t have that, and then within a pretty short span of time, we have that, and that’s an AI derivative also.
Right. I think that those are all incredibly good points. I would say with cell phones—I can remember, in the mid-2000s, the RAZR coming out, which was smaller, and it was like, “Wow.” You didn’t know you had it in your pocket. And then, of course, the iPhone was kind of a watershed thing.
Right. A smartphone.
Right. But you’re right, it’s a form of gradualism punctuated by a series of step functions up.
Definitely. Self-driving car technology, in particular, is like that, because it’s really a big ask to expect people to trust self-driving cars on the road. So there’s this process by which that will happen and is already happening, where individual bits of autonomous technology are being incorporated into human-driven cars. And meanwhile, there’s a lot of experimentation with self-driving cars under relatively controlled conditions. And at some point, there will be a tipping point, and I will buy a car, and I will be sitting in my car and it will take me to New York, and I won’t have to be in control.
Of course, one impediment to that is that whole thing where a vast majority of the people believe the statistical impossibility that they are above-average drivers.
That’s right.
I, on the other hand, believe I’m a below-average driver. So I’m going to be the first person—I’m a menace on the road. You want me off as soon as you can. It probably is good enough for that. I know prognostication is hard, and I guess cars are different, because I can’t get a free self-driving car with a two-year contract at $39.95 a month, right? So it’s a big capital shift, but do you have a sense—because I’m sure you’re up on all of this—when you think the first fully autonomous car will happen? And then the most interesting thing, when will it be illegal not to drive a fully autonomous car?
I’m not quite sure how it will roll out. It may be that it’s in particular locations or particular regions first, but ordinary people being able to drive a self-driving car? I would say within ten years.
I noticed you slipped that, “I don’t know when it’s going to roll out” pun in there.
Pun not intended. You see, if my AI could recognize that as a pun… Humor is another thing that intelligent agents are not very good at, and I think that’ll be a long time coming.
Right. So you have just confirmed that I’m a human.
So, next question. You’ve mentioned strong AI, also called artificial general intelligence: an intelligence as smart as a human. So, back to your earlier question of whether it matters, we’re going to be able to do things like self-driving cars and all this really cool stuff without answering those philosophical questions; but I think the big question is, can we make an AGI?
Because if you look at what humans are good at doing, we’re good at transfer learning, where we learn something in one domain and map it to another effortlessly. We are really good at taking one data point: you could show a human one data point of something, and then a hundred photos, and no matter how you change the lighting or the angle, a person will go, “There, there, there, and there.” So, do you think that an AGI is the sum total of a series of weak AIs bolted together? Or is there some, I’m going to use a loaded word, “magic,” and obviously I don’t mean magic, but is there some hitherto unknown magic that we’re going to need to discover or invent?
I think hitherto unknown magic, you know, using the word “magic” cautiously. I think there are individual technologies that are really exciting and are letting us do a lot of things. So right now, deep learning is the big buzzword, and it is kind of cool. We’ve taken old neural net technology, and we’ve updated it with qualitatively different ways of thinking about neural network learning that we couldn’t really think about before, because we didn’t have the hardware to do it at the scale, or with the kind of complexity, that deep learning networks have now. So, deep learning is exciting. But deep learning, I think, is just fundamentally not suited to do this single point generalization that you’re talking about.
Big data is a buzzword, but personally, I’ve always been more interested in tiny data. Or maybe it’s big data in the service of tiny data, so I experience lots and lots and lots of things, and by having all of that background knowledge at my disposal, I can do one-shot learning, because I can take that single instance and interpret it and understand what is relevant about that one single instance that I need to use to generalize to the next thing. One-shot learning works because we have vast experience, but that doesn’t mean that throwing vast experience at that one thing is, by itself, going to let us generalize from that single thing. I think we still really haven’t developed the cognitive reasoning frameworks that will let us take the power of deep learning and big data, and apply it in these new contexts in creative ways, using different levels of reasoning and abstraction. But I think that’s where we’re headed, and I think a lot of people are thinking about that.
So I’m very hopeful that the broad AI community, in its lushest, many-flowers-blooming way of exploring different approaches, is developing a lot of ideas that eventually are going to come together into a big intelligent reasoning framework, that will let us take all of the different kinds of technologies that we’ve built for special purpose algorithms, and put them together—not just bolt it together, but really integrate it into a more coherent, broad framework for AGI.
If you look at the human genome, it’s, in computer terms, 720MB, give or take. But a vast amount of that is useless, and a vast amount of it we share with banana trees. If you look at the part that’s uniquely human, which gives us our unique intelligence, it may be 4MB or 8MB; it’s really a small number. Yet in that little program are the instructions to make something that becomes an AGI. So do you take that to mean that there’s a secret, a trick—and again, I’m using those words metaphorically—something very simple we’re missing? Something you could write in a few lines of code? Maybe a short program that could make something that’s an AGI?
Yeah, we had a few hundred million years to evolve that. So, the length of something doesn’t necessarily mean that it’s simple. I don’t know enough about genomics to talk really intelligently about this, but I do think that the 4MB to 8MB that’s uniquely human interacts with everything else, with the rest of the genome, possibly with the parts that we think don’t do anything. Because there have been parts of the genome that we thought didn’t do anything, and it turns out some of them do. It’s the dark matter of the genome. Just because we don’t know what it’s doing doesn’t mean it’s not doing anything.
Well, that’s a really interesting point—the 4MB to 8MB may be highly compressed, to use the computer metaphor, and it may be decompressing to something that’s using all the rest. But let’s even say it takes 720MB, you’re still talking about something that will fit on an old CD-ROM, something smaller than most operating systems today.  
And I one hundred percent hear what you’re saying, which is nature has had a hundred million years to compress that, to make that really tight code. But, I guess the larger question I’m trying to ask is, do you think that an AGI may… The hope in AI had always been that, just like in the physical universe, there’s just a few laws that explain everything. Or is it that it’s like, no, we’re incredibly complicated, and it’s going to be this immense system that becomes a general intelligence, and it’s going to be of complexity we can’t wrap our heads around yet.
Gosh, I don’t know. I feel like I just can’t prognosticate that. I think if and when we have an AGI that we really think is intelligent, it probably will have an awful lot of components. The core that drives all of it may be, relatively speaking, fairly simple. But if you think about how human intelligence works, we have lots and lots of modules. Right?
There’s this sort of core mechanism by which the brain processes information, that plays out in a lot of different ways, in different parts of the brain. We have the motor cortex, and we have the language cortex, and they’re all specialized. We have these specialized regions and specialized abilities. But they all use a common substrate or mechanism. And so when I think of the ultimate AI, I think of there being some sort of architecture that binds together a lot of different components that are doing different things. And it’s that architecture, that glue, that we haven’t really figured out how to think about yet.
There are cognitive architectures. There are people who work on designing cognitive architectures, and I think those are the precursors of what will ultimately become the architecture for intelligence. But I’m not sure we’re really working on that hard enough, or that we’ve made enough progress on that part of it. And it may be that the way that we get artificial intelligence ultimately is by building a really, really, really big deep learning neural network, which I would find maybe a little bit disappointing, because I feel like if that’s how we get there, we’re not really going to know what’s going on inside of it. Part of what brought me into the field of AI was really an interest in cognitive psychology, and trying to understand how the human brain works. So, maybe we can create another human-like intelligence by just kind of replicating the human brain. But I, personally, just from my own research perspective, wouldn’t find that especially satisfying, because it’s really hard to understand what’s going on in the human brain. And it’s hard to understand what’s going on even in any single deep learning network that can do visual processing or anything like that.
I think that in order for us to really adopt these intelligence systems and embrace them and trust them and be willing to use them, we’ll have to find ways for them to be more explainable and more understandable to human beings. Even if we go about replicating human intelligence in that way, I still think we need to be thinking about understandability and how it really works and how we extract meaning.
That’s really fascinating. So you’re saying that if we made this big system that was huge and studied data, it would be kind of just brute force. There’s nothing elegant about that, and it doesn’t tell us anything about ourselves.
So my last theoretical question, and then I’d love to talk about jobs. You said at the very beginning that consciousness may be beyond our grasp, that somehow we’re too close to it, or it may be something we can’t agree on, we can’t measure, we can’t tell in others, and all of that. Is it possible that the same is true of a general intelligence? That in the end, this hope of yours that you said brought you into the field, that it’s going to give us deep insights into ourselves, actually isn’t possible?
Well, I mean, maybe. I don’t know. I think that we’ve already gained a lot of insight into ourselves, and because we’re humans, we’re curious. So if we build intelligent agents without fully understanding how they work or what they do, then maybe we’ll work side by side with them to understand each other. I don’t think we’re ever going to stop asking those questions, whether we get to some level of intelligent agents before then or after then. Questions about the universe are always going to be with us.
Onto the question that most people worry about in their day-to-day lives. They don’t worry as much about killer robots as they do about job-killing robots. What do you think the effect will be? So, you know the setup; you know both sides of this. Is artificial intelligence something brand new that replaces people, something that will reach a critical velocity where it can learn things faster than us and eventually surpass us in all fields? Or is it like other disruptive technologies—arguably equally disruptive as the mechanization of industry, the harnessing of steam power, of electricity—that came and went and never, ever budged unemployment even one iota, because people learned, almost instantly, how to use these new technologies to increase their own productivity? Which of those two, or some third option, do you think is most likely?
I’m not a believer in the singularity. I don’t see that happening—that these intelligent agents are going to surpass us and make us completely superfluous, or let us upload our brains into cyberspace or turn us into The Matrix. It could happen. I don’t rule it out, but that’s not what I think is most likely. What I really think is that this is like other technologies. It’s like the invention of the car or the television or the assembly line. If we use it correctly, it enhances human productivity, and it lets us create value at less human cost.
The question is not a scientific question or a technological question. The question is really a political question of how are we, as a society, going to decide to use that extra productivity? And unfortunately, in the past, we’ve often allowed that extra productivity to be channeled into the hands of a very few people, so that we just increased wealth disparity, and the people at the bottom of the economic pile have their jobs taken away. So they’re out of work, but more importantly, the benefit that’s being created by these new technologies isn’t benefiting them. And I think that we can choose to think differently about how we distribute the value that we get out of these new technologies.
The other thing is I think that as you automate various kinds of activities, the economy transforms itself. And we don’t know exactly how that is going to happen, and it would have been hard to predict before any historical technological disruption, right? You invent cars. Well, what happens to all the people who took care of the horses before? Something happened to them. That’s a big industry that’s gone. When we automate truck driving, this is going to be extremely disruptive, because truck driver is one of the most common jobs, in most of our country at least. So, what happens to the people who were truck drivers? It turns out that you’re automating some parts of that job, but not all of it. Because a truck driver doesn’t just sit at the wheel of a truck and drive it down the road. The truck driver also loads and offloads and interacts with people at either end. So, maybe the truck driver job becomes more of a sales job; there are fewer of them, but they’re doing different things. Or maybe it’s supplanted by different kinds of service roles.
I think we’re becoming more and more of a service economy, and that’s partly because of automation. We always need more productivity; there are always things that human society wants. And if we get some of those things with less human effort, that should let us create more of other things. I think we could use this productivity to support more art. That would be an amazing, transformational, twenty-first-century kind of thing to do. I look at our current politics and our current society, and I’m not sure that enough people are thinking that way, thinking about how to use these wonderful technologies to benefit everybody. I’m not sure that’s where we’re headed right now.
Let’s look at that. So there’s a wide range of options, and everybody’s going to be familiar with them all. On the one hand, you could say, you know, Facebook and Google made twelve billionaires between them. Why don’t we just take their money and give it to other people? All the way to the other extreme that says, look, all those truck drivers, or their corollaries, in the past, nobody in a top-down, heavy handed way reassigned them to different jobs. What happened was the market did a really good job of allocating technology, creating jobs, and recruiting them. So those would be two incredibly extreme positions. And then there’s this whole road in between where you’d say, well, we need more education. We need to help make it easier for people to become productive again. Where on that spectrum do you land? What do you think? What specific meat would you put on those bones?
I think taxes are not an inherently bad thing. Taxes are how we run our society, and our society is what protects people and enables people to invent things like Google. If we didn’t have taxes, and we didn’t have any government services, it would be extremely difficult for human society to invent things like Google, because to invent things like that requires collaboration, it requires infrastructure; it requires the support of people around you to make that happen. You couldn’t have Google if you didn’t have the Internet. And the Internet exists because the government invested in the Internet, and the government could invest in the Internet because we pay taxes to the government to create collective infrastructure. I think there’s always going to be a tension between how high should taxes be and how much should you tax the wealthy—how regressive, how progressive? Estate taxes; should you be able to build up a dynasty and pass along all of your wealth to your children? I have opinions about some of that, but there’s no right answer. It changes over time. But I do think that the reason that we come together as human beings to create governments and create societies is because we want to have some ability to have a protected place where we can pursue our individual goals. I want to be able to drive to and from my job on roads that are good, and have this interview with you through an Internet connection that’s maintained, and not to have marauding hordes steal my car while I’m in here. You know, we want safety and security and shared infrastructure. And I think the technology that we’re creating should let us do a better job at having that shared infrastructure and basic ability for people to live happy and productive lives.
So I don’t think that just taking money from rich people and giving it to poor people is the right way to do that, but I do think investing in a better society makes a lot of sense. We have horribly decaying infrastructure in much of the country. So, doesn’t it make sense to take some of the capital that’s created by technology advances and use it to improve the infrastructure in the country and improve health care for people?
Right. And of course the countervailing factor is: do all of the above without diminishing people’s incentives to work hard and found these companies in the first place. That’s the historical tension. Well, I would like to close with one question for you, which is: are you optimistic about the future, or pessimistic, or how would you answer that?
I’m incredibly optimistic. I mean, you know, I’m pessimistic about individual things on individual days, but I think, collectively, we have made incredible strides in technology, and in making people’s quality of life better.
I think we could do a better job. There are places where people don’t have the education, or don’t have the infrastructure, or don’t have access to jobs or technology. I think we have real issues with diversity in technology, both in creating technology and in benefiting from it. I’m very, very concerned about the continuing under-representation of women and minority groups in computing and technology. And that’s partly because I think it’s socially unjust not to have everybody benefiting equally from good jobs and from technology. But it’s also because the technology solutions we create are influenced by the people who create them. When a very limited subset of the population creates technology, there’s a lot of evidence showing that the technology is not as robust, and doesn’t serve as broad a population of users, as technology created by diverse teams of engineers. I’d love to see more women coming into computer science. I’d love to see more African Americans and Hispanics coming into computer science. That’s something I work on a lot, and something I think matters a lot to our future. But I think we’re doing the right things in those areas, and people care about these things, and we’re pushing forward.
There’s a lot of really exciting stuff happening in the AI world right now, and it’s a great time to be an AI scientist because people talk about AI. I walk down the street, or I sit at Panera, and I hear people talking about the latest AI solution for this thing or that—it’s become a common term. Sometimes, I think it’s a little overused, because we sort of use it for anything that seems kind of cool, but that’s OK. I think we can use AI for anything that seems pretty cool, and I don’t think that hurts anything.
All right. Well, that’s a great place to end it. I want to thank you so much for covering this incredibly wide range of topics. This was great fun and very informative. Thank you for your time.
Yeah, thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
