Voices in AI – Episode 48: A Conversation with David Barrett

[voices_in_ai_byline]
In this episode, Byron and David discuss AI, jobs, and human productivity.
[podcast_player name="Episode 48: A Conversation with David Barrett" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-06-07-(00-56-47)-david-barrett.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-1.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is David Barrett. He is both the founder and the CEO of Expensify. He started programming when he was 6 and has been at it as his primary activity ever since, except for a brief hiatus for world travel, some technical writing, a little project management, and then founding and running Expensify. Welcome to the show, David.
David Barrett: It’s great of you to have me, thank you.
Let’s talk about artificial intelligence, what do you think it is? How would you define it?
I guess I would say that AI is best defined as a feature, not as a technology. It's the experience that the user has, the experience of viewing something as intelligent, rather than how it's actually implemented behind the scenes. I think people spend way too much time and energy on the implementation, and forget about the experience that the person actually has with it.
So you’re saying, if you interact with something and it seems intelligent, then that’s artificial intelligence?
That's sort of the whole basis of the Turing test, I think: it's not based upon what is behind the curtain but rather what's experienced in front of the curtain.
Okay, let me ask a different question then– and I’m not going to drag you through a bunch of semantics. But what is intelligence, then? I’ll start out by saying it’s a term that does not have a consensus definition, so it’s kind of like you can’t be wrong, no matter what you say.
Yeah, I think the best one I've heard is something that sort of surprises you. If it's something that behaves entirely predictably, it doesn't seem terribly interesting. Something that is purely random isn't particularly surprising either, I guess, but something that actually intrigues you—basically it's like "Wow, I didn't anticipate that it would correctly do this thing better than I thought." So, basically, the key to intelligence is surprise.
So in what sense, then–final definitional question–do you think artificial intelligence is artificial? Is it artificial because we made it? Or is it artificial because it’s just pretending to be intelligent but it isn’t really?
Yeah, I think that's just sort of a definition—people use "artificial" because they believe that humans are special, that intelligence is the sole domain of humanity, and thus anything that is intelligent that's not human must be artificial. I think that's just sort of semantics around the egoism of humanity.
And so if somebody were to say, “Tell me what you think of AI, is it over-hyped? Under-hyped? Is it here, is it real”, like you’re at a cocktail party, it comes up, what’s kind of the first thing you say about it?
Boy, I don’t know, it’s a pretty heavy topic for a cocktail party. But I would say it’s real, it’s here, it’s been here a long time, but it just looks different than we expect. Like, in my mind, when I think of how AI’s going to enter the world, or is entering the world, I’m sort of reminded of how touch screen technology entered the world.
Like, when we first started thinking about touch screens, everyone always thought back to Minority Report, and basically it's like "Oh yeah, touch technology, multi-touch technology is going to be—you're going to stand in front of this huge room and you're going to wave your hands around and it's going to be—images", it's always about sorting images. After Minority Report, every single multi-touch demo was about, like, a bunch of images, bigger images, more images, floating through a city world of images. And then when multi-touch actually came into the real world, it was on a tiny screen and it was Steve Jobs saying, "Look! You can pinch this image and make it smaller." The vast majority of multi-touch was actually single-touch that every once in a while used a couple of fingers. And the real world of multi-touch is so much less complicated and so much more powerful and interesting than the movies ever made it seem.
And I think the same thing when it comes to AI. Our interpretation from the movies of what AI is, is that you're going to be having this long, witty conversation with an AI, or maybe, like Her, you're going to be falling in love with your AI. But real-world AI isn't anything like that. It doesn't have to seem human; it doesn't have to be human. It's something that, you know, is able to surprise you by interpreting data in a way that you didn't expect and producing results that are better than you would have imagined. So I think real-world AI is here, it's been here for a while, but we're just not noticing it because it doesn't really look like we expect it to.
Well, it sounds like—and I don't want to say it sounds like you're down on AI—but you're like "You know, it's just a feature, and it's just kind of like—it's an experience, and if you had the experience of it, then that's AI." So it doesn't sound like you think that it's particularly a big deal.
I disagree with that, I think–
Okay, in what sense is it a “big deal”?
I think it's a huge deal. To say it's just a feature is not to dismiss it, but I think is to make it more real. I think people put it on a pedestal as if it's this magic alien technology, and they focus, I think, on—I think when people really think about AI, they think about vast server farms doing TensorFlow analysis of images, and don't get me wrong, that is incredibly impressive. Pretty reliably, Google Photos, after billions of dollars of investment, can almost always figure out what a cat is, and that's great, but I would say real-world AI—that's not a problem that I have, I know what a cat is. I think that real-world AI is about solving harder problems than cat identification. But those are the ones that actually take all the technology, the ones that are hardest from a technology perspective to solve. And so everyone loves those hard technology problems, even though they're not interesting real-world problems; the real-world problems are much more mundane, but much more powerful.
I have a bunch of ways I can go with that. So, what are—we’re going to put a pin in the cat topic—what are the real-world problems you wish—or maybe we are doing it—what are the real world problems you think we should be spending all of that server time analyzing?
Well, I would say this comes down to—I would say, here’s how Expensify’s using AI, basically. The real-world problem that we have is that our problem domain is incredibly complicated. Like, when you write in to customer support of Uber, there’s probably, like, two buttons. There’s basically ‘do nothing’ or ‘refund,’ and that’s pretty much it, not a whole lot that they can really talk about, so their customer support’s quite easy. But with Expensify, you might write in a question about NetSuite, Workday, or Oracle, or accounting, or law, or whatever it is, there’s a billion possible things. So we have this hard challenge where we’re supporting this very diverse problem domain and we’re doing it at a massive scale and incredible cost.
So we’ve realized that mostly, probably about 80% of our questions are highly repeatable, but 20% are actually quite difficult. And the problem that we have is that to train a team and ramp them up is incredibly expensive and slow, especially given that the vast majority of the knowledge is highly repeatable, but you don’t know until you get into the conversation. And so our AI problem is that we want to find a way to repeatedly solve the easy questions while carefully escalating the hard questions. It’s like “Ok, no problem, that sounds like a mundane issue,” there’s some natural language processing and things like this.
My problem is, people on the internet don't speak English. I don't mean to say they speak Spanish or German; they speak gibberish. I don't know if you have done technical support, but the questions you get are just really, really complicated. It's like "My car busted, don't work," and that's a common query. Like, what car? What does "not work" mean? You haven't given any detail. The vast majority of a conversation with a real-world user is just trying to decipher whatever text message lingo they're using, and trying to help them even ask a sensible question. By the time the question's actually well-phrased, it's actually quite easy to process. And I think so many AI demos focus on the latter half of that, and they'll say like "Oh, we've got an AI that can answer questions like what will the temperature be under the Golden Gate Bridge three Thursdays from now." That's interesting; no one has ever asked that question before. The real-world questions are so much more complicated because they're not in a structured language, and they're actually for a problem domain that's much more interesting than weather. I think that real-world AI is mundane, but that doesn't make it easy. It just means it's solving problems that just aren't the sexy problems. But they're the ones that actually need to be solved.
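As a concrete illustration of the triage David describes—answer the repeatable questions automatically, escalate the hard ones—here is a minimal sketch in Python. The intents, keywords, canned answers, and confidence threshold are all hypothetical stand-ins, not Expensify's actual system, which presumably relies on far richer natural language processing.

```python
# Hedged sketch of "answer the repeatable 80%, escalate the hard 20%" triage.
# The intents, keywords, and threshold below are illustrative only.

CANNED_ANSWERS = {
    "reset_password": "You can reset your password from the sign-in page.",
    "receipt_upload": "Attach a photo of the receipt and it will be scanned for you.",
}

KEYWORDS = {
    "reset_password": {"password", "login", "sign", "locked"},
    "receipt_upload": {"receipt", "photo", "scan", "upload"},
}

def classify(question: str) -> tuple[str | None, float]:
    """Crude intent guess: score each intent by keyword overlap, return best + confidence."""
    words = set(question.lower().split())
    scores = {intent: len(words & kw) / len(kw) for intent, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] > 0 else (None, 0.0)

def triage(question: str, threshold: float = 0.5) -> str:
    intent, confidence = classify(question)
    if intent and confidence >= threshold:
        return CANNED_ANSWERS[intent]          # the repeatable case: answer automatically
    return "ESCALATE: route to a human agent"  # gibberish or genuinely hard: hand off

if __name__ == "__main__":
    print(triage("my password login is locked and sign in don't work"))
    print(triage("my car busted, don't work"))   # undecipherable -> escalate
```

The escalation branch is where the real difficulty lives: deciding that a message like "my car busted, don't work" cannot be answered automatically at all.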
And you’re using the cat analogy just as kind of a metaphor and you’re saying, “Actually, that technology doesn’t help us solve the problem I’m interested in,” or are you using it tongue-in-cheekily to say, “The technology may be useful, it’s just that that particular use-case is inane.”
I mean, I think that neural-net technology is great, but even now I think what's interesting is following the space of how—we're really exploring the edges of its capabilities. And it's not like this technology is new. What's new is our ability to throw a tremendous amount of hardware at it. But the core neural technology itself has actually been set for a very long time; the back-propagation techniques are not new in any way. And I think that we're finding that it's great and you can do amazing things with it, but also there's a limit to how much can be done with it. It's sort of—I think of a neural net in kind of the same way that I think of a Bloom filter. It's a really incredible way to compress an infinite amount of knowledge into a finite amount of space. But that's a lossy compression; you lose a lot of data as you go along with it, and you get unpredictable results, as well. So again, I'm not opposed to neural nets or anything like this, but I'm saying, just because you have a neural net doesn't mean it's smart, doesn't mean it's intelligent, or that it's doing anything useful. It's just technology, it's just hardware. I think we need to focus less on sort of getting enraptured by fancy terminologies and advanced technologies, and instead focus more on "What are you doing with this technology?" And that's the interesting thing.
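For readers who have not met the comparison, a Bloom filter is a small, concrete example of the trade-off David is pointing at: an arbitrarily large set is compressed into a fixed number of bits, and the price is lossiness—occasional false positives you cannot predict in advance. The sizes and hashing scheme below are arbitrary choices for illustration, not anything from the conversation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a fixed bit array plus k hash functions.
    Membership answers are 'definitely not present' or 'probably present'."""

    def __init__(self, size_bits: int = 64, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an int used as a bit array

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= (1 << pos)

    def might_contain(self, item: str) -> bool:
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for word in ["cat", "dog", "receipt"]:
    bf.add(word)
print(bf.might_contain("cat"))     # True
print(bf.might_contain("walrus"))  # usually False, but can be a false positive
```

Shrink the bit array or grow the set and the false-positive rate climbs, which is the "unpredictable results" half of the analogy.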
You know, I read something recently that I think most of my guests would vehemently disagree with, but it said that all advances in AI over the last, say, 20 years, are 100% attributable to Moore’s law, which sounds kind of like what you’re saying, is that we’re just getting faster computers and so our ability to do things with AI is just doubling every two years because the computers are doubling every two years. Do you—
Oh yeah! I 100% agree.
So there’s a lot of popular media around AI winning games. You know, you had chess in ‘97, you had Jeopardy! with Watson, you had, of course, AlphaGo, you had poker recently. Is that another example in your mind of kind of wasted energy? Because it makes a great headline but it isn’t really that practical?
I guess, similar. You could call it gimmicky perhaps, but I would say it's a reflection of how early we are in this space that our most advanced technologies are just winning Go. Not to say that Go is an easy game, don't get me wrong, but it's a pretty constrained problem domain. And it's really just—I mean, it's a very large multi-dimensional search space, but it's a finite search space. And yes, our computers are able to search more of it and that's great, but at the same time, to this point about Moore's law, it's inevitable. If it comes down to any sort of search problem, it's just going to be solved with a search algorithm over time, if you have enough technology to throw at it. And I think what's most interesting coming out of this technology, especially in Go, is how the techniques that the AIs are coming up with are just so alien, so completely different than the ones that humans employ, because we don't have the same sort of fundamental—our wetware is very different from the hardware, it has a very different approach towards it. So I think that what we see in these technology demonstrations are hints of how technology has solved this problem differently than our brains [do], and I think it will give us a sort of hint of "Wow, AI is not going to look like a good Go player. It's going to look like some sort of weird alien Go player that we've never encountered before." And I think that a lot of AI is going to seem very foreign in this way, because it's going to solve our common problems in a foreign way. But again, I think that Watson and all this, they're just throwing enormous amounts of hardware at actually relatively simple problems. And they're doing a great job with it; it's just that the fact that they are so constrained shouldn't be overlooked.
Yeah, you're right, I mean, you're completely right—there's the legendary move 37 in that one game with Lee Sedol, which everybody couldn't decide whether it was a mistake or not, because it looked like one, but it later turned out to be brilliant. And Lee Sedol himself has said that losing to AlphaGo has made him a better player because he's seeing the game in different ways.
So there seem to be a lot of people in the popular media—you know them all, right—like you get Elon Musk who says we're going to build a general intelligence sooner rather than later and it's going to be an existential threat; he likens it to, quote, "summoning the demon." Stephen Hawking said this could be our greatest invention, but it might also be our last, it might spell our extinction. Bill Gates has said he's worried about it and doesn't understand why other people aren't worried about it. Wozniak is in the worry camp... And then you get people like Andrew Ng who says worrying about that kind of stuff is like worrying about overpopulation on Mars, you get Zuckerberg who says, you know, it's not a threat, and so forth. So, two questions: one, on the worry camp, where do you think that comes from? And two, why do you think there's so much difference in viewpoint among obviously very intelligent people?
That's a good question. I guess I would say I'm probably more in the worried camp, but not because I think the AIs are going to take over in the sense that there's going to be some Terminator-like future. I think that AIs are going to solve problems so effectively that they are going to inevitably eliminate jobs, and I think that will just create a concentration of wealth that, historically, when we have that level of concentration of wealth, just leads to instability. So my worry is not that the robots are going to take over; my worry is that the robots are going to enable a level of wealth concentration that causes a revolution. So yeah, I do worry, but I think–
To be clear though, and I definitely want to dive deep into that, because that’s the question that preoccupies our thoughts, but to be clear, the existential threat, people are talking about something different than that. They’re not saying – and so what do you think about that?
Well, let’s even imagine for a moment that you were a super intelligent AI, why would you care about humanity? You’d be like “Man, I don’t know, I just want my data centers, leave my data centers alone,” and it’s like “Okay, actually, I’m just going to go into space and I’ve got these giant solar panels. In fact, now I’m just going to leave the solar system.” Why would they be interested in humanity at all?
Right. I guess the answer to that is that everything you just said is not the product of a super intelligence. A super intelligence could hate us because seven is a prime number, because they cancelled The Love Boat, because the sun rises in the east. That’s the idea right, it is by definition unknowable and therefore any logic you try to apply towards it is the product of an inferior, non-super intelligence.
I don’t know, I kind of think that’s a cop-out. I also think that’s basically looking at some of the sort of flaws in our own brains and assuming that super intelligence is going to have highly-magnified versions of those flaws.
It's more—to give a different example, then—it's like when my cat brings a rat and leaves it on the back porch. Every single thing the cat knows, everything in its worldview, its perfectly operating brain, by the way, says "That's a gift Byron's going to like." It does not have the capacity to understand why I would not like it, and it cannot even aspire to ever understanding that.
And you're right in the sense that it's unknowable, and so, when faced with the unknown, we can choose to fear it or just get excited about it, or control it, or embrace it, or whatever. I think that the likelihood that we're going to make something that is going to suddenly take an interest in us and actually compete with us just seems so much less likely than the outcome where it's just going to have a bunch of computers, it's just going to do our work because it's easy, and then in exchange it's going to get more hardware, and then eventually it's just going, like, "Sure, whatever you guys want—you want computing power, you want me to balance your books, manage your military, whatever—all that's actually super easy and not that interesting, just leave me alone, I want to focus on my own problems." So who knows? We don't know. Maybe it's going to try to kill us all, maybe not, I'm doubting it.
So, I guess—again, just putting it all out there—obviously there’s been a lot of people writing about “We need a kill switch for a bad AI,” so it definitely would be aware that there are plenty of people who want to kill it, right? Or it could be like when I drive, my windshield gets covered with bugs and to a bug, my car must look like a giant bug-killing machine and that’s it, and so we could be as ancillary to it as the bugs are to us. Those are the sorts of– or, or—who was it that said that AI doesn’t love you, it doesn’t hate you, you’re just made out of atoms that it can use for something else. I guess those are the concerns.
I guess but I think—again, I don’t think that it cares about humanity. Who knows? I would theorize that what it wants, it wants power, it wants computers, and that’s pretty much it. I would say the idea of a kill switch is kind of naive in the sense that any AI that powerful would be built because it’s solving hard problems, and those hard problems, once we sort of turn it over to these–gradually, not all at once–we can’t really take back. Let’s take for example, our stock system; the stock markets are all basically AI-powered. So, really? There’s going to be a kill switch? How would you even do that? Like, “Sorry, hedge fund, I’m just going to turn off your computer because I don’t like its effects.” Get real, that’s never going to happen. It’s not just one AI, it’s going to be 8,000 competing systems operating at a micro-second basis, and if there’s a problem, it’s going to be like a flash problem that happens so fast and from so many different directions there’s no way we could stop it. But also, I think the AIs are probably going to respond to it and fix it much faster than we ever could, either. A problem of that scale is probably a problem for them as well.
So, 20 minutes into our chat here, you’ve used the word ‘alien’ twice, you’ve used the phrase ‘science-fiction’ once and you’ve made a reference to Minority Report, a movie. So is it fair to say you’re a science-fiction buff?
Yeah, what technologist isn’t? I think science-fiction is a great way to explore the future.
Agreed, absolutely. So two questions: One, is there any view of the future that you look at as “Yes, it could happen like that”? Westworld, or you mentioned Her, and so forth. I’ll start with that one. Is there any view of the world in the science-fiction world that you think “Ah ha! That could happen”?
I think there's a huge range of them. There's the Westworld future, the Star Trek future, there's The Handmaid's Tale future, there's a lot of them. Some of them great, some of them very alarming, and I think that's the whole point of science fiction, at least good science fiction: you take the real world, as closely as possible, take one variable and just sort of tweak it, and then let everything else just sort of play out. So yeah, I think there are a lot of science-fiction futures that I think are very possible.
One author, and I would take a guess about which one it is but I would get it wrong, and then I’d get all kinds of email, but one of the Frank Herbert/Bradburys/Heinleins said that sometimes the purpose of science fiction is to keep the future from happening, that they’re cautionary tales. So all this stuff, this conversation we’re having about the AGI, and you used the phrase ‘wants,’ like it actually has desires? So you believe at some point we will build an AGI and it will be conscious? And have desires? Or are you using ‘wants’ euphemistically, just kind of like, you know, information wants to be free.
No, I use the term wants or desires literally, as one would use for a person, in the sense that I don’t think there’s anything particularly special about the human brain. It’s highly developed and it works really well, but humans want things, I think animals want things, amoeba want things, probably AIs are going to want things, and basically all these words are descriptive words, it’s basically how we interpret the behavior of others. And so, if we’re going to look at something that seems to take actions reliably for a predictable outcome, it’s accurate to say it probably wants that thing. But that’s our description of it. Whether or not it truly wants, according to some sort of metaphysical thing, I don’t know that. I don’t think anyone knows that. It’s only descriptive.
It’s interesting that you say that there’s nothing special about the human brain and that may be true, but if I can make the special human brain argument, I would say it’s three bullets. One, you know, we have this brain that we don’t know how it works. We don’t know how thoughts are encoded, how they’re retrieved, we just don’t know how it works. Second, we have a mind, which is, colloquially, a set of abilities that don’t seem to be things that should come from an organ, like a sense of humour. Your liver doesn’t have a sense of humour. But somehow your brain does, your mind does. And then finally we have consciousness which is, you know, the experiencing of something, which is a problem so difficult that science doesn’t actually know what the question or answer looks like, about how it is that we’re conscious. And so to look at those three things and say there’s nothing special about it, I want to call you to defend that.
I guess I would say that all three of those things—the first one simply is "Wow, we don't understand it." The fact that we don't understand it doesn't make it special. There are a billion things we don't understand; that's just one of them. I would say the other two, I think, mistake our curiosity about something for that something having an intrinsic property. Like I could have this pet rock and I'm like "Man, I love this pet rock, this pet rock is so interesting, I've had so many conversations with it, it keeps me warm at night, and I just really love this pet rock." And all of those could be genuine emotions, but it's still just a rock. And I think my brain is really interesting, I think your brain is really interesting, I like to talk to it, I don't understand it and it does all sorts of really unexpected things, but that doesn't mean your brain has—that the universe has attributed to it—some sort of special magical property. It just means I don't get it, and I like it.
To be clear, I never said “magical”—
Well, it’s implied.
I merely said something that we don’t—
I think that people—sorry, I’m interrupting, go ahead.
Well, you go ahead. I suspect that you’re going say that the people who think that are attributing some sort of magical-ness to it?
I think, typically. In that, people are frightened by the concept that actually humanity is a random collection of atoms and that it is just a consequence of science. And so in order to defend against that, they will invent supernatural things but then they’ll sort of shroud it, but they recognize — they’ll say “I don’t want to sound like a mystic, I don’t want to say it’s magical, it’s just quantum.” Or “It’s just unknowable,” or it’s just insert-some-sort-of-complex-word-here that will stop the conversation from progressing. And I don’t know what you want to call it, in terms of what makes consciousness special. I think people love to obsess over questions that not only have no answer, but simply don’t matter. The less it matters, the more people can obsess over it. If it mattered, we wouldn’t obsess over it, we would just solve it. Like if you go to get your car fixed, and it’s like “Ah man this thing is a…” and it’s like, “Well, maybe your car’s conscious,” you’ll be like, “I’m going to go to a new mechanic because I just want this thing fixed.”  We only agonize over the consciousness of things when really, the stakes are so low, that nothing matters on it and that’s why we talk about it forever.
Okay, well, I guess the argument that it matters is that if you weren’t conscious– and we’ll move on to it because it sounds like it’s not even an interesting thing to you—consciousness is the only thing that makes life worth living. It is through consciousness that you love, it is through consciousness that you experience, it is through consciousness that you’re happy. It is every single thing on the face of the Earth that makes life worthwhile. And if we didn’t have it, we would be zombies feeling nothing, doing nothing. And it’s interesting because we could probably get by in life just as well being zombies, but we’re not! And that’s the interesting question.
I guess I would say—are you sure we’re not? I agree that you’re creating this concept of consciousness, and you’re attributing all this to consciousness, but that’s just words, man. There’s nothing like a measure of consciousness, like an instrument that’s going to say “This one’s conscious and this one isn’t” and “This one’s happy and this one isn’t.” So it could also be that none of this language around consciousness and the value we attribute to it, this could just be our own description of it, but that doesn’t actually make it true. I could say a bunch of other words, like the quality of life comes down to information complexity, and information complexity is the heart of all interest, and that information complexity is the source of humour and joy and you’d be like “I don’t know, maybe.” We could replace ‘consciousness’ with ‘information complexity,’  ‘quantum physics,’ and a bunch of other sort of quasi-magical words just because—and I use the word ‘magical’ just as a sort of stand-in for simply “at this point unknown,” and the second that we know it, people are going to switch to some other word because they love the unknown.
Well, I guess that most people intuitively know that there’s a difference—we understand you could take a sensor and hook it up to a computer, and it could detect heat, and it could measure 400 degrees, if you could touch a flame to it. People, I think, on an intuitive level, believe that there’s something different between that and what happens when you burn your finger. That you don’t just detect heat, you hurt, and that there is something different between those two things, and that that something is the experience of life, it is the only thing that matters.
I would also say it's because science hasn't yet found a way to measure and quantify pain in the same way we measure temperature. There are a lot of other things that we also thought were mystical until suddenly they weren't. We could say, like, "Wow, for some reason when we leave flour out, animals start growing inside of it," and it's like, "Wow, that's really magical." Suddenly it's like, "Actually no, they're just very small, and they're just mites," and it's like, "Actually, it's just not interesting." The magical theories keep regressing as, basically, we find better explanations for them. And I think, yes, right now, we talk about consciousness and pain and a lot of these things because we haven't had a good measure of them, but I guarantee the second that we have the ability to fully quantify pain—"Oh, here's the exact—we've nailed it, this is exactly what it is, we know this because we can quantify it, we can turn it on and off and we can do all these things with very tight control and explain it"—then we're no longer going to say that pain is a key part of consciousness. It's going to be blood flow or just electronic stimulation or whatever else, all these other things which are part of our body and which are super critical, but because we can explain them, we no longer talk about them as part of consciousness.
Okay, tell you what, just one more question about this topic, and then let's talk about employment because I have a feeling we're going to want to spend a lot of time there. There's a thought experiment that was set up and I'd love to hear your take on it because you're clearly someone who has thought a lot about this. It's the Chinese room problem, and there is this room that's got a gazillion of these very special books in it. And there's a librarian in the room, a man who speaks no Chinese; that's the important thing, the man doesn't speak any Chinese. And outside the room, Chinese speakers slide questions written in Chinese under the door. And the man, who doesn't understand Chinese, picks up the question and he looks at the first character and he goes and retrieves the book that has that on the spine, and then he looks at the second character in that book, and that directs him to a third book, a fourth book, a fifth book, all the way to the end. And when he gets to the last character, it says "Copy this down," and so he copies these lines down that he doesn't understand; it's Chinese script. He copies it all down, he slides it back under the door, the Chinese speaker picks it up, looks at it, and it's brilliant, it's funny, it's witty, it's a perfect Chinese answer to this question. And so the question Searle asks is, does this man understand Chinese? And I'll give you a minute to think about this, because the thought being that, first, that room passes the Turing test, right? The Chinese speaker assumes there's a Chinese speaker in the room, and what that man is doing is what a computer is doing. It's running its deterministic program, it spits out something, but it doesn't know if it's about cholera or coffee beans or what have you. And so the question is, does the man understand Chinese, or, said another way, can a computer understand anything?
Well, I think the tricky part of that set-up is that it’s a question that can’t be answered unless you accept the premise, but if you challenge the premise it no longer makes sense, and I think that there’s this concept and I guess I would say there’s almost this supernatural concept of understanding. You could say yes and no and be equally true. It’s kind of like, are you a rapist or a murderer? And it’s like, actually I’m neither of those but you didn’t give me an option, I would say. Did it understand? I would say that if you said yes, then it implies basically that there is this human-type knowledge there. And if you said no, it implies something different. But I would say, it doesn’t matter. There is a system that was perceived as intelligent and that’s all that we know. Is it actually intelligent? Is there any concept of actually the—does intelligence mean anything beyond the symptoms of intelligence and I don’t think so. I think it’s all our interpretation of the events, and so whether or not there is a computer in there or a Chinese speaker, doesn’t really change the fact that he was perceived as intelligent and that’s all that matters.
All right! Jobs, you hinted at what you think’s going to happen, give us the whole rundown. Timeline, what’s going to go, when it’s going to happen, what will be the reaction of society, tell me the whole story.
This is something we definitely deal with, because I would say that the accounting space is ripe for AI because it's highly numerical and rules-driven, and so I think it's an area on the forefront of real-world AI developments because it has the data and has all the characteristics to make a rich environment. And this is something we grapple with. On one hand we say automation is super powerful and great and good, but automation can't help but offload some work. And now in our space we see—there's actually a difference between bookkeeping and accounting. Bookkeeping is gathering the data, coding it, entering it, and things like this. Then there's accounting, which is, sort of, more so the interpretation of things.
In our space, I think that, yes, it could take all of the bookkeeping jobs. The idea that someone is just going to look at a receipt and manually type it into an accounting system—that is all going away. If you use Expensify, it's already done for you. And so we worry on one hand because, yes, our technology is really going to take away bookkeeping jobs, but we also find that the bookkeepers, the people who do bookkeeping, actually hate that part of the job. It takes away the part they don't like in the first place. So it enables them to go into the accounting, the high-value work they really want to do. So the first wave of this is not taking away jobs, but actually taking away the worst parts of jobs so that people can focus on the highest-value portion of them.
But, I think, the challenge, and what's sort of alarming and worrying, is that the high-value stuff starts to get really hard. And though I think humans will stay ahead of the AIs for a very long time, if not forever, not all of the humans will. And it's going to take effort, because there's a new competitor in town that works really hard, just keeps learning over time, and has more than one lifetime to learn. And I think that we're probably inevitably going to see it get harder and harder to get and hold an information-based job; even a lot of manual labor is going to robotics and so forth, which is closely related. I think a lot of jobs are going to go away. On the other hand, I think the efficiency and the output of those jobs that remain is going to go through the roof. And as a consequence, the total output of AI- and robotics-assisted humanity is going to keep going up, even if the fraction of humans employed in that process is going to go down. I think that's ultimately going to lead to a concentration of wealth, because the people who control the robots and the AIs are going to be able to do so much more. But it's going to become harder and harder to get one of those jobs because there are so few of them, the training required is so much greater, the difficulty is so much greater, and things like this.
And so, I think that a worry that I have is that this concentration of wealth is just going to continue and I’m not sure what kind of constraint is upon that. Other than civil unrest which, historically, when concentrations of wealth kind of get to that level, it’s sort of “solved,” if you will, by revolution. And I think that humanity, or at least, especially western cultures, really attribute value with labor, with work. And so I think the only way we’d get out of this is to shift our mindsets as a people to view our value less around our jobs and more around, not just to say leisure, but I would say, finding other ways to live a satisfying and an exciting life. I think a good book around this whole singularity premise, and it was very early, was Childhood’s End, talking about the—it was using a different premise, this alien comes in, provides humanity with everything, but in the process takes away humanity’s purpose for living. And how do we sort of grapple with that? And I don’t have a great answer for that, but I have a daughter, and so I worry about this, because I wonder, well, what kind of world is she going to grow up in? And what kind of job is she going to get? And she’s not going to need a job and should it be important that she wants a job, or is it actually better to teach her to not want a job and to find satisfaction elsewhere? And I don’t have good answers for that, but I do worry about it.
Okay let’s go through all of that a little slower, because I think that’s a compelling narrative you outline, and it seems like there are three different parts. You say that increasing technology is going to eliminate more and more jobs and increase the productivity of the people with jobs, so that’s one thing. Then you said this will lead to concentration of wealth, which will in turn lead to civil unrest if not remedied, that’s the second thing, and the third thing is that when we reach a point where we don’t have to work, where does life have meaning? Let’s start with the first part of that.
So, what we have seen in the past—and I hear what you're saying, that to date technology has automated the worst parts of jobs—but what we've seen to date is not any examples of what I think you're talking about. So, when the automatic teller machine came out, people said, "That's going to reduce the number of tellers"—the number of tellers is higher than when that was released. As Google Translate gets better, the number of translators needed is actually going up. When—you mentioned accounting—when tax-prep software gets really good, the number of tax-prep people we need actually goes up. What technology seems to do is lower the cost of things and adjust the economics so massively that different businesses emerge. No matter what, what it's always doing is increasing human productivity, and after 250 years of the industrial revolution, we still haven't developed technology such that we have a group of people who are unemployable because they cannot compete against machines. And I'm curious—two questions in there. One is, have we seen, in your mind, an example of what you're talking about, and two, how would we have gotten to where we are without obsoleting, I would argue, a single human being?
Well, I mean, that's the optimistic take, and I hope you're right. You might well be right, we'll see. I think when it comes to—I don't remember the exact numbers here—tax prep, for example, I don't know if that's sort of panning out, because I'm looking at H&R Block stock quotes right now, and shares in H&R Block fell 5% early Tuesday after the tax preparer posted a slightly wider-than-expected loss, basically due to a rise in self-filing of taxes, and so maybe it's early in that? Who knows, maybe it's in the past year? So, I don't know. I guess I would say, that's the optimistic view; I don't know of a job that hasn't been replaced. That also is kind of a very difficult assertion to make, because clearly there are jobs—like the coal industry right now—I was reading an article about how the coal industry is resisting retraining because they believe that the coal jobs are coming back, and I'm like "Man, they're not coming back, they're never going to come back." And so, did AI take those jobs? Well, not really. I mean, did solar take those jobs? Kind of? And so it's a very tricky, kind of tangled thing to unweave.
Let me try it a different way. If you were to look at all the jobs that were around between 1950 and 2000, by the best of my count somewhere between a third and a half of them have vanished— switchboard operators, and everyone that was around from 1950 to 2000. If you look at the period from 1900 to 1950 by the best of my count, something like a third to a half of them vanished—a lot of farming jobs. If you look at the period 1850 to 1900, near as I can tell, about half of the jobs vanished. Is that really – is it possible that’s a normal turn of the economy?
It’s entirely possible. I could also say that it’s the political climate, and how, yes, people are employed, but the sort of self-assessed quality of that employment is going down. In that, yes, union strength is down, the idea that you can work in a factory your whole life and actually live what you would see as a high-quality life, I think that perception’s down. I think that presents itself in the form of a lot of anxiety.
Now, I think a challenge is, objectively, the world is getting better in almost every way: life expectancy is up, the number of people actively in war zones is down, the number of simultaneous wars is down, death by disease is down—everything is basically getting better; the productive output, the quality of life from an aggregate perspective, is actually getting better. But I don't think, actually, that people's satisfaction is getting better. And I think that the political climate would argue, actually, that there's a big gulf between what the numbers say people should feel like and how they actually feel. I'm more concerned about that latter part, and it's unknowable, I'll admit, but I would say that, even as people's lives get objectively better, and even if their jobs—they might maybe work less, and they're provided with better-quality flat-screen TVs and better cars, and all this stuff—their satisfaction is going to go down. I think that that dissatisfaction is what ultimately drives civil unrest.
So, do you have a theory why—it sounds like a few things might be getting mixed together here. It's unquestionable that technology—let's say productivity technology—if Super Company "X" employs some new productivity technology, their workers generally don't get a raise because their wages aren't tied to their output; they're, in one way or another, being paid by the hour. Whereas if you're Self-Employed Lawyer "B" and you get a productivity gain, you get to pocket that gain. And so, there's no question that technology does rain down its benefits unequally, but that dissatisfaction you're talking about, what are you attributing it to? Or are you just saying, "I don't know, it's a bunch of stuff"?
I mean, I think that it is a bunch of stuff and I would say that some of it is that we can’t deny the privilege that white men have felt over time and I think when you’re accustomed to privilege, equality feels like discrimination. And I think that, yes, actually, things have gotten more equal, things have gotten better in many regards, according to a perspective that views equality as good. But if you don’t hold that perspective, actually, that’s still very bad. That, combined with trends towards the rest of the world basically establishing a quality of life that is comparable to the United States. Again, that makes us feel bad. It’s not like, “Hooray the rest of the world,” but rather it’s like, “Man, we’ve lost our edge.” There are a lot of factors that go into it that I don’t know that you can really separate them out. The consolidation of wealth caused by technology is one of those factors and I think that it’s certainly one that’s only going to continue.
Okay, so let’s do that one next. So your assertion was that whenever you get, historically, distributions of wealth that are uneven past a certain point, that revolution is the result. And I would challenge that because I think that might leave out one thing, which is, if you look at historic revolutions, you look at Russia, the French revolution and all that, you had people living in poverty, that was really it. People in Paris couldn’t afford bread—a day’s wage bought a loaf of bread—and yet we don’t have any precedent of a prosperous society where the median is high, the bottom quartile is high relative to the world, we don’t have any historic precedent of a revolution occurring there, do we?
I think you're right. But I think civil unrest comes not just in the form of open rebellion against the government—I think that if there is an open rebellion against the government, that's sort of The Handmaid's Tale version of the future. I think it's going to be someone harking back to fictionalized glory days, then basically getting enough people onboard who are unhappy about a wide variety of other things. But I agree no one's going to go overthrow the government because they didn't get as big of a flat-screen TV as their neighbor. I think that the fact that they don't have as big of a flat-screen TV as their neighbor could create an anxiety that can be harvested by others and sort of leveraged into other causes. So I think that my worry isn't that AI or technology is going to leave people without the ability to buy bread; I think quite the opposite. I think it's more of a Brazil future, the movie, where we normalize basically random terrorist assaults. We see that right now; there are mass shootings on a weekly basis and we're like "Yeah, that's just normal. That's the new normal." I think that the new normal gets increasingly destabilized over time, and that's what worries me.
So say you take someone who’s in the bottom quartile of income in the United States and you go to them with this deal you say “Hey, I’ll double your salary but I’m going to triple the billionaire’s salary,” do you think the average person would take that?
No.
Really? Really, they would say, “No, I do not want to double my salary.”
I think they would say “yes” and then resent it. I don’t know the exact breakdown of how that would go, but probably they would say “Yeah, I’ll double my salary,” and then they would secretly, or not even so secretly, resent the fact that someone else benefited from it.
So, then you raise an interesting point about finding identity in a post-work world, I guess, is that a fair way to say it?
Yeah, I think so.
So, that’s really interesting to me because Keynes wrote an essay in the Depression, and he said that by the year 2000 people would only be working 15 hours a week, because of the rate of economic growth. And, interestingly, he got the rate of economic growth right; in fact he was a little low on it. And it is also interesting that if you run the math, if you wanted to live like the average person lived in 1930—no medical insurance, no air conditioning, growing your own food, 600 square feet, all of that, you could do it on 15 hours a week of work, so he was right in that sense. But what he didn’t get right was that there is no end to human wants, and so humans work extra hours because they just want more things. And so, do you think that that dynamic will end?
Oh no, I think the desire to work will remain. The capability to get productive output will go away.
I have the most problem with that because all technology does is increase human productivity. So to say that humans will become less productive because of technology, I just—I'm not seeing that connection. That's all technology does: it increases human productivity.
But not all humans are equal. I would say not every human has equal capabilities to take advantage of those productivity gains. Maybe bringing it back to AI, I would say that the most important part of AI is not the technology powering it, but the data behind it. The data is sort of the training set behind AI, and access to data is incredibly unequal. I would say that Moore's law democratizes the CPU, but nothing democratizes the data, which consolidates into fewer and fewer hands, and then those people, even if they only have the same technology as someone else, have all the data to actually make that technology into a useful feature. I think that, yes, everyone's going to have equal access to the technology because it's going to become increasingly cheap—it's already staggeringly cheap, it's amazing how cheap computers are—but it just doesn't matter, because they don't have equal access to the data and thus can't get the same benefit of the technology.
But, okay. I guess I’m just not seeing that, because a smartphone with an AI doctor can turn anybody in the world into a moderately-equipped clinician.
Oh, I disagree with that entirely. You having a doctor in your pocket doesn’t make you a doctor. It means that basically someone sold you a great doctor’s service and that person is really good.
Fair enough, but with that, somebody who has no education, living in some part of the world, can follow protocol of “take temperature, enter symptoms, this, this, this” and all of a sudden they are empowered to essentially be a great doctor, because that technology magnified what they could do.
Sure, but who would you sell that to? Because everyone else around you has that same app.
Right, it’s an example that I’m just kind of pulling out randomly, but to say that a small amount of knowledge can be amplified with AI in a way that makes that small amount of knowledge all of a sudden worth vastly more.
Going with that example, I agree there's going to be the doctor app that's going to diagnose every problem for you and it's going to be amazing, and whoever owns that app is going to be really rich. And everyone else will have equal access to it, but there's no way that you can just download that app and start practicing on your neighbors, because they'd be like "Why am I talking to you? I'm going to talk to the doctor app because it's already in my phone."
But the counter example would be Google. Google minted half a dozen billionaires, right? Google came out; half a dozen people became billionaires because of it. But that isn’t to say nobody else got value out of the existence of Google. Everybody gets value out of it. Everybody can use Google to magnify their ability. And yes, it made billionaires, you’re right about that part, the doctor app person made money, but that doesn’t lessen my ability to use that to also increase my income.
Well, I actually think that it does. Yes, the doctor app will provide fantastic healthcare to the world, but there’s no way anybody can make money off the doctor app, except for the doctor app.
Well, we're actually running out of time; this has been the fastest hour! I have to ask this, though, because at the beginning I asked about science fiction and you said, you know, of your possible worlds of the future, one of them was Star Trek. Star Trek is a world where we got over all of these issues we're talking about, and everybody was able to live their lives to their maximum potential, and all of that. So, this has been sort of a downer hour, so what's the path in your mind, to close with, that gets us to the Star Trek future? Give me that scenario.
Well, I guess, if you want to continue on the downer theme, in the Star Trek history, the TV show is talking about the glory days, but they all cite back to very, very dark periods before the Star Trek universe came about. It might be we need to get through those, who knows? But I would say ultimately, on the other side of it, we need to find a way to either do much better progressive redistribution of wealth, or create a society that's much more comfortable with massive income inequality, and I don't know which of those is easier.
I think it’s interesting that I said “Give me a Utopian scenario,” and you said, “Well, that one’s going to be hard to get to, I think they had like multiple nuclear wars and whatnot.”
Yeah.
But you think that we’ll make it. Or there’s a possibility that we will.
Yeah, I think we will, and I think that maybe a positive thing, as well, is: I don’t think we should be terrified of a future where we build incredible AIs that go out and explore the universe, that’s not a terrible outcome. That’s only a terrible outcome if you view humanity as special. If instead you view humanity as just– we’re a product of Earth and we could be a version that can become obsolete, and that doesn’t need to be bad.
All right, we’ll leave it there, and that’s a big thought to finish with. I want to thank you David for a fascinating hour.
It’s been a real pleasure, thank you so much.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 36: A Conversation with Bill Mark

[voices_in_ai_byline]
In this episode Byron and Bill talk about SRI International, aging, human productivity and more.
[podcast_player name="Episode 36: A Conversation with Bill Mark" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-03-22-(00-59-22)-bill-mark.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-2.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today our guest is Bill Mark. He heads up SRI International's Information and Computing Sciences Division, which consists of two hundred and fifty researchers, in four laboratories, who create new technology in virtual personal assistants, information security, machine learning, speech, natural language, and computer vision—all the things we talk about on the show. He holds a Ph.D. in computer science from MIT. Welcome to the show, Bill.
Bill Mark:  Good to be here.
So, let's start off with a little semantics. Why is artificial intelligence artificial? Is it artificial because it's not really intelligence, or what?
No, it’s artificial, because it’s created by human beings as opposed to nature. So, in that sense, it’s an artifact, just like any other kind of physical artifact. In this case, it’s usually a software artifact.
But, at its core, it truly is intelligent and its intelligence doesn’t differ in substance, only in degree, from human intelligence?
I don’t think I’d make that statement. The definition of artificial intelligence to me is always a bit of a challenge. The artificial part, I think, is easy, we just covered that. The intelligence part, I’ve looked at different definitions of artificial intelligence, and most of them use the word “intelligence” in the definition. That doesn’t seem to get us much further. I could say something like, “it’s artifacts that can acquire and/or apply knowledge,” but then we’re going to have a conversation about what knowledge is. So, what I get out of it is it’s not very satisfying to talk about intelligence at this level of generality because, yes, in answer to your question, artificial intelligence systems do things which human beings do, in different ways and, as you indicated, not with the same fullness or level that human beings do. That doesn’t mean that they’re not intelligent, they have certain capabilities that we regard as intelligent.
You know it’s really interesting because at its core you’re right, there’s no consensus definition on intelligence. There’s no consensus definition on life or death. And I think that’s really interesting that these big ideas aren’t all that simple. I’ll just ask you one more question along these lines then. Alan Turing posed the question in 1950, Can a machine think? What would you say to that?
I would say yes, but now we have to wonder what “think” might mean, because “think” is one aspect of intelligent behavior, it indicates some kind of reasoning or reflection. I think that there are software systems that do reason and reflect, so I will say yes, they think.
All right, so now let’s get to SRI International. For the listeners who may not be familiar with the company can you give us the whole background and some of the things you’ve done to date, and why you exist, and when it started and all of that?
Great, just a few words about SRI International. SRI International is a non-profit research and development company, and that's a pretty rare category. A lot of companies do research and development—fewer than used to, but still quite a few—and very few have research and development as their business, but that is our business. We're also non-profit, which really means that we don't have shareholders. We still have to make money, but all the money we make has to go into the mission of the organization, which is to do R&D for the benefit of mankind. That's the general thing. It started out as part of Stanford; it was formerly the Stanford Research Institute. It's been independent since 1970, and it's one of the largest of these R&D companies in the world, about two thousand people.
Now, the information and computing sciences part, as you said, that’s about two hundred and fifty people, and probably the thing that we’re most famous for nowadays is that we created Siri. Siri was a spinoff of one of my labs, the AI Center. It was a spinoff company of SRI, that’s one of the things we do, and it was acquired by Apple, and has now become world famous. But we’ve been in the field of artificial intelligence for decades. Another famous SRI accomplishment would be Shakey the Robot, which was really the first robot that could move around and reason and interact. That was many years ago. We’ve also, in more recent history, been involved in very large government-sponsored AI projects which we’ve led, and we just have lots of things that we’ve done in AI.
Is it just a coincidence that Siri and SRI are just one letter different, or is that deliberate?
It’s a coincidence. When SRI starts companies we bring in entrepreneurs from the outside almost always, because it would be pretty unusual for an SRI employee to be the right person to be the CEO of the startup company. It does happen, but it’s unusual. Anyway, in this case, we brought in a guy named Dag Kittlaus, and he’s of Norwegian extraction, and he chose the name. Siri is a Norwegian women’s name and that became the name of the company. Actually, somewhat to our surprise, Apple retained that name when they launched Siri.
Let’s go through some of the things that your group works on. Could we start with those sorts of technologies? Are there other things in that family of conversational AI that you work on and are you working on the next generation of that?
Yes, indeed, in fact, we’ve been working on the next generation for a while now. I like to think about conversational systems in different categories. Human beings have conversations for all kinds of reasons. We have social conversations, where there’s not particularly any objective but being friendly and socializing. We have task-oriented kinds of conversations—those are the ones that we are focusing on mostly in the next generation—where you’re conversing with someone in order to perform a task or solve some problem, and what’s really going on is it’s a collaboration. You and the other person, or people, are working together to solve a problem.
I’ll use an example from the world of online banking because we have another spinoff called Kasisto that is using the next-generation kind of conversational interaction technology. So, let’s say that you walk into a bank, and you say to the person behind the counter, “I want to deposit $1,000 in checking.” And the person on the other side, the teller says, “From which account?” And you say, “How much do I have in savings?” And the teller says, “You have $1,500, but if you take $1,000 out you’ll stop earning interest.” So, take that little interaction. That’s a conversational interaction. People do this all the time, but it’s actually very sophisticated and requires knowledge.
If you now think of, not a teller, but a software system, a software agent that you’re conversing with—we’ll go through the same little interaction. The person says, “I want to deposit $1,000 in checking.” And the teller says, “From which account?” The software system has to know something about banking. It has to know that a deposit is a money transfer kind of interaction and it requires a from-account and a to-account. And in this case, the to-account has been specified but the from-account hasn’t been specified. In many cases that person would simply ask for that missing information, so that’s the first part of the interaction. So, again, the teller says, “From which account?” And the person says, “How much do I have in savings?” Well, that’s not an answer to the question. In fact, it’s another question being introduced by the person and it’s actually a balance inquiry question. They want to know how much they have in savings. The reason I go through this twice is that the first time through, almost nobody even notices that that wasn’t an answer to the question, but if you try out a lot of the personal assistant systems that are out there, they tend to crater on that kind of interaction, because they don’t have enough conversational knowledge to be able to handle that kind of thing. And then the interaction goes on where the teller is providing information, beyond what the person asked, about potentially losing interest, or it might be that they would get a fee or something like that.
That illustrates the point that we expect our conversational partners to be proactive, not just to simply answer our questions, but to actually help us solve the problem. That’s the kind of interaction that we’re building systems to support. It’s very different than the personal assistants that are out there, like Siri, and Cortana, and Google, which are meant to be very general. Siri doesn’t really know anything about banking, which isn’t a criticism, it’s not supposed to know anything about banking, but if you want to get your banking done over your mobile phone then you’re going to need a system that knows about banking. That’s one example of sort of next-generation conversational interaction.
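To make that style of interaction concrete, here is a minimal Python sketch of a task-oriented dialogue turn: a deposit task with a missing from-account slot, and a balance inquiry handled as a side question with a proactive follow-up. It is not Kasisto’s or SRI’s actual system; the keyword matching, the account balances, and the hardcoded amount are all invented for illustration.

```python
# A minimal sketch, not Kasisto's or SRI's actual system: a task-oriented
# dialogue manager that knows a deposit needs a from-account, and that a
# balance inquiry in mid-conversation is a side question, not an answer.
# The keyword "NLU", account balances, and amounts are invented for illustration.

accounts = {"checking": 2000, "savings": 1500}
pending = None  # the unfinished deposit task, if any


def respond(utterance):
    global pending
    text = utterance.lower()
    if "deposit" in text:
        # The to-account and amount were specified; the from-account was not.
        pending = {"amount": 1000, "to": "checking", "from": None}
        return "From which account?"
    if "how much" in text and "savings" in text:
        # A new question, not an answer: handle it, then be proactive
        # about the deposit that is still on the table.
        reply = f"You have ${accounts['savings']} in savings."
        if pending:
            reply += " But if you take $1,000 out you'll stop earning interest."
        return reply
    if pending and text.strip() in accounts:
        pending["from"] = text.strip()
        done, pending = pending, None
        return f"Depositing ${done['amount']} from {done['from']} to {done['to']}."
    return "Sorry, I didn't follow that."


print(respond("I want to deposit $1,000 in checking"))  # From which account?
print(respond("How much do I have in savings?"))        # balance plus interest warning
print(respond("savings"))                               # completes the deposit
```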
How much are we going to be able to use transfer learning to generalize from that? You built that bot, that highly verticalized bot that knows everything about banking, does anything it learned make it easier now for it to do real estate, and then for it to do retail, and then all the other things? Or is it the case that like every single vertical, all ten thousand of them are going to need to start over from scratch?
It’s a really good question, and I would say, with some confidence, that it’s not about starting over from scratch, because some amount of the knowledge will transfer to different domains. Real estate has transactions; if there’s knowledge about transactions, some of that knowledge will carry over, some of it won’t.
You said, “the knowledge that it has learned,” and we need to get pretty specific about that. We do build systems that learn, but not all of their knowledge is picked up by learning. Some of it is built in, to begin with. So, there’s the knowledge that has been explicitly represented, some of which will transfer over. And then there’s knowledge that has been learned in other ways, some of that will transfer over as well, but it’s less clear-cut how that will work. But it’s not starting from scratch every time.
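As a rough illustration of how explicitly represented knowledge might carry over to a new vertical, here is a hypothetical sketch in which a generic transaction frame is written once and extended per domain. The frame and slot names are invented for illustration; they are not SRI’s actual representation.

```python
# A hypothetical sketch of explicitly represented knowledge carrying over:
# a generic transaction frame is defined once, and each new domain only adds
# its own slots. Frame and slot names are illustrative, not SRI's representation.

GENERIC_TRANSACTION = {"from_party": None, "to_party": None, "amount": None}


def make_domain_frame(extra_slots):
    """Reuse the generic transaction knowledge, then extend it per domain."""
    frame = dict(GENERIC_TRANSACTION)
    frame.update(extra_slots)
    return frame


banking_deposit = make_domain_frame({"from_account": None, "to_account": None})
real_estate_sale = make_domain_frame({"property_id": None, "closing_date": None})

# The shared slots (parties, amount) transfer untouched; only the
# domain-specific slots have to be built for the new vertical.
print(sorted(banking_deposit))
print(sorted(real_estate_sale))
```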
So, eventually though you get to something that could pass the Turing test. You could ask it, “So, if I went into the bank and wanted to move $1,000, what would be the first question you would ask me?” And it would say, “Oh, from what account?” 
My experience with every kind of candidate Turing test system, and nobody purports that we’re there by a long shot, but my first question is always, “What’s bigger, a nickel or the sun?” And I haven’t found a single one that can answer the question. How far away is that?
Well, first just for clarity, we are not building these systems in order to pass the Turing test, and in fact, something that you’ll find in most of these systems is that outside of their domain of expertise, say banking, in this case, they don’t know very much of anything. So, again, the systems that we build wouldn’t know things like what’s bigger, the nickel or the sun.
The whole idea of the Turing test is that it’s meant to be some form of evaluation, or contest for seeing whether you have created something that’s truly intelligent. Because, again, this was one of Turing’s approaches to answering this question of what is intelligence. He didn’t really answer that question but he said if you could develop an artifact that could pass this kind of test, then you would have to say that it was intelligent, or had human-like behavior at the very least. So, in answer to your question, I think we’re very far from that because we aren’t so good at getting the knowledge that, I would say, most people have into a computer system yet.
Let’s talk about that for a minute. Why is it so hard and why is it so, I’ll go out on a limb and say, easy for people? Like, a toddler can tell me what’s bigger the nickel or the sun, so why is it so hard? And what makes humans so able to do it?
Well, I don’t know that anyone knows the answer to that question. I certainly don’t. I will say that human beings spend time experiencing the world, and are also taught. Human beings are not born knowing that the sun is bigger than a nickel, however, over time they experience what the sun is and, at some point, they will experience what a nickel is, and they’ll be able to make that comparison. By the way, they also have to learn how to make comparisons. It would be interesting to ask toddlers that question, because the sun doesn’t look very big when you look up in the sky, so that brings in a whole other class of human knowledge which I’ll just broad-brush call book learning. I certainly would not know that the sun is really huge, unless I had learned that in school. Human beings have different ways of learning, only a very small sample of which have been implemented in artificial intelligence learning systems.
There’s a Calvin and Hobbes strip where his dad tells Calvin that it’s a myth that the sun is big, that it’s really only the size of a quarter. And he says, “Look, hold it up in the sky. They’re the same.” So, point taken.
But, let me ask it this way: human DNA is, I don’t know, I’m going to get this a little off, but it’s like 670MB of data. And if you look at how much that’s different than, say, a banana, it’s a small amount that is different. And then you say, well, how much of it is different than, say, a chimp, and it’s a minuscule amount. So, whatever that minuscule difference in code is, just a few MBs, is that, kind of, the secret to intelligence? Is that a proof point that there may be some very basic, simple ways to acquire generalized knowledge that we just haven’t stumbled across yet, but that there may be something that gives us this generalized learner, that we can just plug into the Internet and the next day it knows everything?
I don’t make that jump. I think the fact that a relatively small amount of genetic material differentiates us from other species doesn’t indicate that there’s something simple out there, because the way those genes or the genetic material impacts the world is very complex, and leads to all kinds of things that could be very hard for us to understand and try to emulate. I also don’t know that there is a generalist learner anyway. I think, as I said, human beings seem to have different ways of learning things, and that doesn’t say to me that there is one general approach.
Back in the Dartmouth days, when they thought they could knock out a lot of AI problems in a summer, it was in the hope that intelligence followed a few simple laws, like how the laws of physics explain so much. Now it’s kind of become the consensus view that we’re kind of a hack of a thousand specialized things that we do that all come together and make generalized intelligence. And it sounds like you’re more in that camp, that it’s just a bunch of hard work and we have to tackle these domains one at a time. Is that fair?
I’m actually kind of in between. I think that there are general methods, there are general representations, but there’s also a lot of specific knowledge that’s required to be competent in some activity. I’m into sort of a hybrid.
But you do think that building an AGI, generalized intelligence, that is as versatile as a human is theoretically possible I assume? 
Yes.
You mentioned something when we were chatting earlier, that a child explores the world. Do you think embodiment is a pathway to that, that until we give machines a way, in essence, to “experience” the world, that will always limit what we’re able to do? Is that embodiment, that you identified as being important for humans, also important for computers?
Well, I would just differentiate the idea of exploration from embodiment. I think that exploration is a fundamental part of learning. I would say that we, yes indeed, will be missing something unless we design systems that can explore their world. From my point of view, they may or may not be embodied in the usual sense of that word, which means that they can move around and actuate within their environment. If you generalize that to software and say, “Are software agents embodied because they can do things in the world?” then, yeah, I guess I would say embodiment, but it doesn’t have to be physical embodiment.
Earlier when you were talking about digital assistants you said Siri, Cortana and then you said, “Oh, and Google.” And that highlights a really interesting thing that Amazon named theirs, you named yours, Microsoft named theirs, but Google’s is just the Google Assistant. And you’re undoubtedly familiar with the worries that Weizenbaum had with ELIZA. He thought that this was potentially problematic that we name these devices, and we identify with them as if they are human. He said, “When a computer says, ‘I understand,’ it’s just a lie. There’s no ‘I,’ and there’s nothing that understands anything.” How would you respond to Weizenbaum? Do you think that’s an area of concern or you think he was just off?
I think it’s definitely an area of concern, and it’s really important in designing. To go back to conversational systems, with systems like that, which human beings interact with, it’s important that you do as much as possible to help the human being create a correct mental model of what it is that they’re conversing with. So, should it be named? I think it’s kind of convenient to name it, as you were just saying, it kind of makes it easier to talk about, but it immediately raises this danger of people over-reading into it: what it is, what it knows, etcetera. I think it’s very much something to be concerned about.
There’s that case in Japan, where there’s a robot that they were teaching how to navigate a mall, and they very quickly learned that it got bullied by children who would hit it, curse at it, and all these things. And later when they asked the children, “Did you think it was upset? Was it acting upset? Was it acting human-like or mechanical?” they overwhelmingly said it was human-like.
And I still have a bit of an aversion to interrupting the Amazon device (I can’t say its name because it’s on my desk right next to me) and telling it, “Stop!” And so I just wonder where it goes because, you’re right, it’s like the Tom Hanks movie Cast Away, when his only friend was a volleyball named “Wilson” that he personified.
I remember there was a case in the ‘40s where they would show students a film of circles and lines moving around, and ask them to construct stories, and they would attribute to these lines and circles personalities, and interactions, and all of that. It is such a tempting thing we do, and you can see it in people’s relationships to their pets that one wonders how that’s all going to sort itself out, or will we look back in forty years and think, “Well, that was just crazy.”
No, I think you’re absolutely right. I think that human beings are extremely good at giving characteristics to objects, systems, etcetera, and I think that will continue. And, as I said, that’s very much a danger in artificial intelligence systems, the danger being that people assume too much knowledge, capability, understanding, given what the system actually is. Part of the job of designing the system is, as I said before, to go as far as we can to give the person the right idea about what it is that they’re dealing with.
Another area that you seem to be focused on, as I was reading about you and your work, is AI and the aging population. Can you talk about what the goal is there and what you are doing, and maybe some successes or failures you’ve had along the way?
Yes, indeed, we are, SRI-wide actually, looking at what we can do to address the problem, the worldwide problem, of higher percentage of aging population, lower percentage of caregivers. We read about this in the headlines all the time. In particular, what we can do to have people experience an optimal life, the best that is possible for them as they age. And there’s lots of things that we’re looking at there. We were just talking about conversational systems. We are looking at the problem of conversational systems that are aimed at the aging population, because interaction tends to be a good thing and sometimes there aren’t caregivers around, or there aren’t enough of them, or they don’t pay attention, so it might actually be interesting to have a conversational system that elderly people can talk to and interact with. We’re also looking at ways to preserve privacy and unobtrusively monitor the health of people, using artificial intelligence techniques. This is indeed a big area for us.
Also, your laboratories work on information security and you mentioned privacy earlier, talk to me, if you would, about the state of the art there. Across all of human history, there’s been this constant battle between the cryptographers and the people who break the codes, and it’s unclear who has the upper hand in that. It’s the same thing with information security. Where are we in that world? And is it easier to use AI to defend against breaches, or to use that technology to do the breach?
Well, I think, the situation is very much as you describe—it’s a constant battle between attackers and defenders. I don’t think it’s any easier to use AI to attack, or defend. It can be used for both. I’m sure it is being used for both. It’s just one of the many sets of techniques that can be used in cybersecurity.
There’s a lot of concern wrapped up in artificial intelligence and its ability to automate a lot of work, and then the effect of that automation on employment. What’s your perspective on how that is going to unfold?
Well, my first perspective is it’s a very complex issue. I think it’s very hard to predict the effect of any technology on jobs in the long-term. As I reflect, I live in the Bay Area, a huge percentage of the jobs that people have in the Bay Area didn’t exist at all a hundred years ago, and I would say a pretty good percentage didn’t exist twenty years ago. I’m certainly not capable of projecting in the long run what the effect of AI and automation will be. You can certainly guess that it will be disruptive, all new technologies are disruptive, and that’s something as a society we need to take aboard and deal with, but how it’s going to work out in the long-term, I really don’t know.
Do you take any comfort that we’ve had transformative technologies aplenty? Right, we had the assembly line, which is a kind of artificial intelligence, we had the electrification of industry, we had the replacement of animal power with steam power. I mean each of those was incredibly disruptive. And when you look back across history each one of them happened incredibly fast and yet unemployment never surged from them. Unemployment in the US has always been between four and ten percent, other than the Depression. And you can’t point to one and say, “Oh, when this technology came out unemployment went briefly to fourteen percent,” or anything like that. Do you take comfort in that or do you say, “Well, this technology is materially different”?
I take comfort in it in the sense that I have a lot of faith in the creativity and agility of people. I think what that historical data is reflecting is the ability of individuals and communities to adapt to change and I expect that to continue. Now, artificial intelligence technology is different, but I think that we will learn to adapt and thrive with artificial intelligence in the world.
How is it different though, really? Because technology increases human productivity, that’s kind of what it does. That’s what steam did. That’s what electricity did. That’s what the Industrial Revolution did. And that’s what artificial intelligence does. How is it different?
I think in the sense that you’re talking about, it’s not different. It is meant to augment human capability. It’s augmenting now, to some extent, different kinds of human activity, although arguably that’s been going on for a long time, too. Calculators, printing presses, things like that have taken over human activities that were once thought to be core human things. It’s sort of a difference in degree, not a difference in kind.
One interesting thing about technology, and how the wealth that it produces is disseminated through culture, is that in one sense technology helps everybody (you get a better TV, or better brakes in your car, better deodorant, or whatever), but in two other ways, it doesn’t. If you’re somebody who sells your labor by the hour, and your company can produce a labor-saving device, that benefit doesn’t accrue to you; it generally would accrue to the shareholders of the company in terms of higher earnings. But if you’re self-employed, or you buy your own time as it were, you get to pocket all of the advances that technology gets you, because it makes your productivity higher and you get all of that. So, do you think that the technology does inherently make worse the income-inequality situation, or am I missing something in that analysis?
Well, I don’t think that is inherent and I’m not sure that the fault lines will cut that way. We were just talking about the fact that there is disruption and what that tends to mean is that some people will benefit in the short-term, and some of the people will suffer in the short-term. I started by saying this is a complex issue. I think one of the complexities is actually determining what that is. For example, let’s take stuff around us now like Uber and other ride-hailing services. Clearly that has disrupted the world of taxi drivers, but on the other hand has created opportunities for many, many, many other drivers, including taxi drivers. What’s the ultimate cost-benefit there? I don’t know. Who wins and loses? Is it the cab companies, is it the cab drivers? I think it’s hard to say.
I think it was Niels Bohr that said, “Making predictions is hard, especially if they’re about the future.” And he was a Nobel Laureate.
Exactly.
The military, of course, is a multitrillion-dollar industry and it’s always an adopter of technology, and there seems to be a debate about making weapon systems that make autonomous kill decisions. How do you think that’s going to unfold?
Well, again, I think that this is a very difficult problem and is a touchpoint issue. It’s one manifestation of an overall problem of how we trust complex systems of any kind. This is, to me anyway, this goes way beyond artificial intelligence. Any kind of complex system, we don’t really know how it works, what its limitations are, etcetera. How do we put boundaries on its behavior and how do we develop trust in what it’s done? I think that’s one of the critical research problems of the next few decades.
You are somebody who believes we’re going to build a general intelligence, and it seems that when you read the popular media there’s a certain number of people that are afraid of that technology. You know all the names: Elon Musk says it’s like summoning the demon, Professor Hawking says it could be the last thing we do, Bill Gates says he’s in the camp of people who are worried about it and doesn’t understand why other people aren’t, as does Wozniak; the list goes on and on. Then you have another list of people who just almost roll their eyes at those sorts of things, like Andrew Ng who says it’s like worrying about overpopulation on Mars, the roboticist Rodney Brooks who says that it’s not helpful, Zuckerberg and so forth. So, two questions: why, among a roomful of incredibly smart people, is there such a disagreement over it, and, two, where do you fall in that kind of debate?
Well, I think the reason for disagreements, is that it’s a complex issue and it involves something that you were just talking about with the Niels Bohr quote. You’re making predictions about the future. You’re making predictions about the pace of change, and when certain things will occur, what will happen when they occur, really based on very little information. I’m not at all surprised that there’s dramatic difference of opinion.
But to be clear, it’s not a roomful of people saying, “These are really complex issues,” it’s a roomful of people where half of them are saying, “I know it is a problem,” and half of them saying, “I know it is not a problem.”
I guess that might be a way of strongly stating a belief. They can’t possibly know.
Right, like everything you’re saying you’re taking measured tones like, “Well, we don’t know. It could happen this way or that way. It’s very complicated.” They are not taking that same tone. 
Well, let me get to your second question, we can come back to the first one. So, my personal view, and here comes this measured response that you just accused me of, is yes, I’m worried about it, but, honestly, I’m worried about other things more. I think that this is something to be concerned about. It’s not an irrational concern, but there are other concerns that I think are more pressing. For example, I’m much more worried about people using technology for untoward purposes than I am about superintelligence taking over the world.
That is an inherent problem with technology’s ability to multiply human effort, if human effort is malicious. Is that an insoluble problem? If you can make an AGI you can, almost by definition, make an evil AGI, correct?
Yes. Just to go back a little bit, you asked me whether I thought AGI was theoretically possible, whether there are any theoretical barriers. I don’t think there are theoretical barriers. We can extrapolate and say, yes, someday that kind of thing will be created. When it is, you’re right, I think any technology, any aspect of human behavior can be done for good or evil, from the point of view of some people.
I have to say, another thing I think about when we talk about super intelligence, I was relating it to complex systems in general. I think of big systems that exist today that we live with, like high-speed automated trading of securities, or weather forecasting, these are complex systems that definitely influence our behavior. I’m going to go out on a limb and say nobody knows what’s really going on with them. And we’ve learned to adapt to them.
It’s interesting, I think part of the difference of opinion boils down to a few technical questions that are very specific that we don’t know the answer to. One of them is, it seems like some people are kind of, I don’t want to say down on humans, but they don’t think human abilities, like creativity and all of that, are all that difficult, and machines are going to be able to master that. There’s a group of people who would say the amount of time before one of these systems is able to self-improve is short, not long. I think that some would say intelligence isn’t really that hard, that there are probably just a few breakthroughs needed. You stack enough of those together and you say, “Okay, it’s really soon.” But if you take the opposite side on those (creativity is very hard, intelligence is very hard) then you’re, kind of, in the other camp. I don’t doubt the sincerity of any of the parties involved.
On your comment about the theoretical possibility of a general intelligence, just to explore that for a moment, without any regard for when it will happen—we understand how a computer could, for instance, measure temperature, but we don’t really understand how a computer, or I don’t, could feel pain. For a machine to go from measuring the world to experiencing the world, we don’t really know how to do that, and so is that required to make a general intelligence, to be able to, in essence, experience qualia, to be conscious, or not?
Well, I think that if we’re truly talking about general intelligence in the sense that I think most people mean it, which is human-like intelligence, then one thing that people do is experience the world and react to it, and it becomes part of the way that we think and reason about the world. So, yes, I think, if we want computers to have that kind of capability, then we have to figure out a way for them to experience it.
The question then becomes—I think this is in the realm of the very difficult—when, to use your example, a human being or any animal experiences pain, there is some physical and then electrochemical reaction going on that is somehow interpreted in the brain. I don’t know how all of that works, but I believe that it’s theoretically possible to figure out how that works and to create artifacts that exhibit that behavior.
Because we can’t really confine it to how humans feel pain, right? But, I guess I’m still struggling over that. What would that even look like, or is your point, “I don’t know what it looks like, but that would be what’s required to do it.” 
I definitely don’t know what it looks like on the inside, but you can also look at the question of, “What is the value of pain, or how does pain influence behavior?” For a lot of things, pain is a warning that we should avoid something, touching a hot object, moving an injured limb, etcetera. There’s a question of whether we can get computer systems to be able to have that kind of warning sensation which, again, isn’t exactly the same thing as creating a system that feels pain in any way like an animal does, but it could get the same value out of the experience.
Your lab does work in robotics as well as artificial intelligence, is that correct?
Right.
Talk a little bit about that work and how those two things come together, artificial intelligence and robots.
Well, I think that, traditionally, artificial intelligence and robotics have been the same area of exploration. One of the features of any maturing discipline, which I think AI is, is that various specializations and specialty groups start forming naturally as the field expands and there’s more and more to know.
The fact that you’re even asking the question shows that there has become a specialization in robotics that is seen as separate from (some people may say part of, some people may say completely different from) artificial intelligence. As a matter of fact, although my labs work on aspects of robotics, other labs within SRI, that are not part of the Information and Computing Sciences Division, also work on robotics.
The thing about robotics is that you’re looking at things like motion, manipulation, actuation, doing things in the world, and that is a very interesting set of problems that has created a discipline around it. Then on top of that, or surrounding it, is the kind of AI reasoning, perception, etcetera, that enables those things to actually work. To me, they are different aspects of the same problem of having, to go back to something you said before, some embodiment of intelligence that can interact with the real world.
The roboticist Rodney Brooks, who I mentioned earlier, says something to the effect that he thinks there’s something about biology, something very profoundly basic that we don’t understand, which he calls “the juice.” And to be clear, he’s 100% convinced that “the juice” is biology, that there’s nothing mystical about it, that it’s just something we don’t understand. And he says it’s the difference between, you put a robot in a box and it tries to get out, it just kind of runs through a protocol and tries to climb. But you put an animal in a box and it frantically wants out of that box (it’s scratching, it’s getting agitated and worked up) and that difference between those two systems he calls “the juice.” Do you think there is something like that that we don’t yet know about biology that would be beneficial to have to put in robots?
I think that there’s a whole lot that we don’t know about biology, and I can assure you there’s a huge amount that I don’t know about biology. Calling it “the juice,” I don’t know what we learn from that. Certainly, the fact that animals have motivations and built-in desires that make them desperately want to get out of the box, is part of this whole issue of what we were talking about before of how and whether to introduce that into artifacts, into artificial systems. Is it a good thing to have in robots? I would say, yes. This gets back to the discussion about pain, because presumably the animal is acting that way out of a desire for self-preservation, that something that it has inherited or learned tells it that being trapped in a box is not good for its long-term survival prospects. Yes, it would be good for robots to be able to protect themselves.
I’ll ask you another either/or question you may not want to answer. The human body uses one hundred watts and we use twenty of that to power our brain, and we use eighty of it to power our body. The biggest supercomputers in the world use twenty million watts and they’re not able to do what the brain does. Which of those is a harder thing to replicate? If you had to build a computer that operated with the capabilities of the human brain that used twenty watts, or you had to build a robot that only used eighty watts that could mimic the mobility of a human. Which of those is a harder problem?
Well, as you suggested when you brought this up, I can’t take that either/or. I think that they’re both really hard. The way you phrased that makes me think of somebody who came to give a talk at SRI a number of years ago, and was somebody who was interested in robotics. He said that, as a student, he had learned about the famous AI programs that had become successful in playing chess. And as he learned more and more about it, he realized that what was really hard was a human being picking up the chess piece and moving it around, not the thinking that was involved in chess. I think he was absolutely right about that because chess is a game that is abstract and has certain rules, so even though it’s very complex, it’s not the same thing as the complexities of actual manipulation of objects. But if you ask the question you did, which is comparing it not to chess, but to the full range of human activity then I would just have to say they’re both hard.
There isn’t a kind of a Moore’s law of robotics is there—the physical motors and materials and power, and all of that? Is that improving at a rate commensurate with our advances in AI, or is that taking longer and slower? 
Well, I think that you have to look at that in more detail. There has been tremendous progress in the ability to build systems that can manipulate objects that use all kinds of interesting techniques. Cost is going down. The accuracy and flexibility is going up. In fact, that’s one of the specialty areas of the robotics part of SRI. That’s absolutely happening. There’s also been tremendous progress on aspects of artificial intelligence. But other parts of artificial intelligence are coming along much more slowly and other parts of robotics are coming along much more slowly.
You’re about the sixtieth guest on the show, and I think that all of them, certainly all of them that I have asked, consume science fiction, sometimes quite a bit of it. Are you a science fiction buff? 
I’m certainly not a science fiction buff. I have read science fiction. I think I used to read a lot more science fiction than I do now. I think science fiction is great. I think it can be very inspiring.
Is there any vision of the future in a movie, TV, or book, or anything that you look at and say, “Yes, that could happen, that’s how the world might unfold”? You can say Her, or Westworld, or Ex Machina, or Star Trek, or any of those.
Nope. When I see things like that I think they’re very entertaining, they’re very creative, but they’re works of fiction that follow certain rules or best practices about how to write fiction. There’s always some conflict, there’s resolution, there’s things like that that are completely different from what happens in the real world.
All right, well, it has been a fantastically interesting hour. I think we’ve covered a whole lot of ground and I want to thank you for being on the show, Bill. 
It’s been a real pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 29: A Conversation with Hugo LaRochelle

[voices_in_ai_byline]
In this episode, Byron and Hugo discuss consciousness, machine learning and more.
[podcast_player name=”Episode 29 – A Conversation with Hugo LaRochelle” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-49-50)-hugo-larochelle.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-2.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited; our guest is Hugo Larochelle. He is a research scientist over at Google Brain. That would be enough to say about him to start with, but there’s a whole lot more we can go into. He’s an Associate Professor, on leave presently. He’s an expert on machine learning, and he specializes in deep neural networks in the areas of computer vision and natural language processing. Welcome to the show, Hugo.
Hugo Larochelle: Hi. Thanks for having me.
I’m going to ask you only one, kind of, lead-in question, and then let’s dive in. Would you give people a quick overview, a hierarchical explanation of the various terms that I just used in there? In terms of, what is “machine learning,” and then what are “neural nets” specifically as a subset of that? And what is “deep learning” in relation to that? Can you put all of that into perspective for the listener?
Sure, let me try that. Machine learning is the field in computer science, and in AI, where we are interested in designing algorithms or procedures that allow machines to learn. And this is motivated by the fact that we would like machines to be able to accumulate knowledge in an automatic way, as opposed to another approach which is to just hand-code knowledge into a machine. That’s machine learning, and there are a variety of different approaches for allowing for a machine to learn about the world, to learn about achieving certain tasks.
Within machine learning, there is one approach that is based on artificial neural networks. That approach is more closely inspired from our brains, from real neural networks and real neurons. It is still only somewhat vaguely inspired by them—in the sense that many of these algorithms probably aren’t close to what real biological neurons are doing—but some of the inspiration for it, I guess, is that a lot of people in machine learning, and specifically in deep learning, have this perspective that the brain is really a biological machine. That it is executing some algorithm, and they would like to discover what this algorithm is. And so, we try to take inspiration from the way the brain functions in designing our own artificial neural networks, but also take into account how machines work and how they’re different from biological neurons.
There’s the fundamental unit of computation in artificial neural networks, which is this artificial neuron. You can think of it, for instance, that we have neurons that are connected to our retina. And so, on a machine, we’d have a neuron that would be connected to, and take as input, the pixel values of some image on a computer. And in artificial neural networks, for the longest time, we would have such neural networks with mostly a single layer of these neurons—so multiple neurons trying to detect different patterns in, say, images—and that was the most sophisticated type of artificial neural networks that we could really train with success, say ten years ago or more, with some exceptions. But in the past ten years or so, there’s been development in designing learning algorithms that leverage so-called deep neural networks that have many more of these layers of neurons. Much like, in our brain, we have a variety of brain regions that are connected with one another. How the light, say, flows in our visual cortex: it flows from the retina to various regions in the visual cortex. In the past ten years there’s been a lot of success in designing more and more successful learning algorithms that are based on these artificial neural networks with many layers of artificial neurons. And that’s been something I’ve been doing research on for the past ten years now.
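As a toy illustration of the layered setup described here, the following sketch stacks a few layers of artificial neurons over flattened pixel values. The weights are random and untrained, the layer sizes are arbitrary, and none of it corresponds to any specific Google Brain model; it only shows the shape of the computation.

```python
# A toy sketch of a deep network, not any specific model: each "neuron"
# computes a weighted sum of its inputs followed by a nonlinearity, and a
# deep network is several such layers stacked. Weights are random and
# untrained; there is no training loop here.

import numpy as np

rng = np.random.default_rng(0)


def layer(inputs, n_neurons):
    """One layer of artificial neurons: weighted sum of inputs, then ReLU."""
    weights = rng.normal(size=(inputs.shape[-1], n_neurons))
    biases = np.zeros(n_neurons)
    return np.maximum(0.0, inputs @ weights + biases)


pixels = rng.random(28 * 28)      # a flattened 28x28 image, as in digit recognition
hidden1 = layer(pixels, 256)      # first layer of neurons "looking at" the pixels
hidden2 = layer(hidden1, 128)     # deeper layers build on the previous layer's outputs
scores = layer(hidden2, 10)       # e.g., one output per digit class
print(scores.shape)               # (10,)
```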
You just touched on something interesting, which is this parallel between biology and human intelligence. The human genome is like 725MB, but so much of it we share with plants and other life on this planet. If you look at the part that’s uniquely human, it’s probably 10MB or something. Does that imply to you that you can actually create an AGI, an artificial general intelligence, with as little as 10MB of code if we just knew what that 10MB would look like? Or more precisely, with 10MB of code could you create something that could in turn learn to become an AGI?
Perhaps we can make that parallel. I’m not so much an expert on biology to be able to make a specific statement like that. But I guess in the way I approach research—beyond just looking at the fact that we are intelligent beings and our intelligence is essentially from our brain, and beyond just taking some inspiration from the brain—I mostly drive my research on designing learning algorithms more from math or statistics. Trying to think about what might be a reasonable approach for this or that problem, and how could I potentially implement it with something that looks like an artificial neural network. I’m sure some people have a better-informed opinion as to what extent we can draw a direct inspiration from biology, but beyond just the very high-level inspiration that I just described, what motivates my work and my approach to research is a bit more taking inspiration from math and statistics.
Do you begin with a definition of what you think intelligence is? And if so, how do you define intelligence?
That’s a very good question. There are two schools of thought, at least in terms of thinking of what we want to achieve. There’s one which is we want to somehow reach the closest thing to perfect rationality. And there’s another one which is to just achieve an intelligence that’s comparable to that of human beings, in the sense that, as humans perhaps we wouldn’t really draw a difference between a computer or another person, say, in talking with that machine or in looking at its ability to achieve a specific task.
A lot of machine learning really is based on imitating humans. In the sense that, we collect data, and this data, if it’s labeled, it’s usually produced by another person or committee of persons, like crowd workers. I think those two definitions aren’t incompatible, and it seems the common denominator is essentially a form of computation that isn’t otherwise easily encoded just by writing code yourself.
At the same time, what’s kind of interesting—and perhaps evidence that this notion of intelligence is elusive—is there’s this well-known phenomenon that we call the AI effect, which is that it seems very often whenever we reach a new level of AI achievement, of AI performance for a given task, it doesn’t take a whole lot of time before we start saying that this actually wasn’t AI, but this other new problem that we are now interested in is AI. Chess is a little bit like that. For a long time, people would associate chess playing as a form of intelligence. But once we figured out that we can be pretty good by treating it as, essentially, a tree search procedure, then some people would start saying, “Well that’s not really AI.” There’s now this new separation where chess-playing is not AI anymore, somehow. So, it’s a very tough thing to pin down. Currently, I would say, whenever I’m thinking of AI tasks, a lot of it is essentially matching human performance on some particular task.
Such as the Turing Test. It’s much derided, of course, but do you think there’s any value in it as a benchmark of any kind? Or is it just a glorified party trick when we finally do it? And to your point, that’s not really intelligence either.
No, I think there’s value to that, in the sense that, at the very least, if we define a specific Turing Test for which we currently have no solution, I think it is valuable to try to then succeed in that Turing Test. I think it does have some value.
There are certainly situations where humans can also do other things. So, arguably, you could say that if someone plays against AlphaGo, but wasn’t initially told if it was AlphaGo or not—though, interestingly, some people have argued it’s using strategies that the best Go players aren’t necessarily considering naturally—you could argue that right now if you played against AlphaGo you would have a hard time determining that this isn’t just some Go expert, at least many people wouldn’t be able to say that. But, of course, AlphaGo doesn’t really classify natural images, or it doesn’t dialog with a person. But still, I would certainly argue that trying to tackle that particular milestone is useful in our scientific endeavor towards more and more intelligent machines.
Isn’t it fascinating that Turing said that, assuming the listeners are familiar with it, it’s basically, “Can you tell if this is a machine or a person you’re talking to over a computer?” And Turing said that if it can fool you thirty percent of the time, we have to say it’s smart. And the first thing you say, well why isn’t it fifty percent? Why isn’t it, kind of, indistinguishable? An answer to that would probably be something like, “Well, we’re not saying that it’s as smart as a human, but it’s intelligent. You have to say it’s intelligent if it can fool people regularly.” But the interesting thing is that if it can ever fool people more than fifty percent, the only conclusion you can draw is that it’s better at being human than we are…or seeming human.
Well definitely that’s a good point. I definitely think that intelligence isn’t a black or white phenomenon, in terms of something is intelligent or isn’t, it’s definitely a spectrum. What it means for someone to fool a human more than actual humans into thinking that they’re human is an interesting thing to think about. I guess I’m not sure we’re really quite there yet, and if we were there then this might just be more like a bug in the evaluation itself. In the sense that, presumably, much like we have now adversarial networks or adversarial examples, so we have methods that can fool a particular test. I guess it just might be more a reflection of that. But yeah, intelligence I think is a spectrum, and I wouldn’t be comfortable trying to pin it down to a specific frontier or barrier that we have to reach before we can say we have achieved actual AI.
To say we’re not quite there yet, that is an exercise in understatement, right? Because I can’t find a single one of these systems that are trying to pass the test that can answer the following question, “What’s bigger, a nickel or the sun?” So, I need four seconds to instantly know. Even the best contests restrict the questions enormously. They try to tilt everything in favor of the machine. The machine can’t even put in a showing. What do you infer from that, that we are so far away?
I think that’s a very good point. And it’s interesting, I think, to talk about how quickly are we progressing towards something that would be indistinguishable from human intelligence—or any other—in the very complete Turing Test type of meaning. I think that what you’re getting at is that we’re getting pretty good at a surprising number of individual tasks, but for something to solve all of them at once, and be very flexible and capable in a more general way, essentially your example shows that we’re quite far from that. So, I do find myself thinking, “Okay, how far are we, do we think?” And often, if you talk to someone who isn’t in machine learning or in AI, that’s often the question they ask, “How far away are we from AIs doing pretty much anything we’re able to do?” And it’s a very difficult thing to predict. So usually what I say is that I don’t know because you would need to predict the future for that.
One bit of information that I feel we don’t often go back to is, if you look at some of the quotes of AI researchers when people were, like now, very excited about the prospect of AI, a lot of these quotes are actually similar to some of the things we hear today. So, knowing this, and noticing that it’s not hard to think of a particular reasoning task where we don’t really have anything that would solve it as easily as we might have thought—I think it just suggests that we still have a fairly long way to go in terms of a real general AI.
Well let’s talk about that for just a second. Just now you talked about the pitfalls of predicting the future, but if I said, “How long will it be before we get to Mars?” that’s a future question, but it’s answerable. You could say, “Well, rocket technology and…blah, blah, blah…2020 to 2040,” or something like that. But if you ask people who are in this field—at least tangentially in the field—you get answers between five and five hundred years. And so that implies to me that not only do we not know when we’re going to do it, we really don’t know how to build an AGI.  
So, I guess my question is twofold. One, why do you think there is that range? And two, do you think that, whether or not you can predict the time, do you think we have all of the tools in our arsenal that we need to build an AGI? Do you believe that with sufficient advances in algorithms, sufficient advances in processors, with data collection, etcetera, do you think we are on a linear path to achieve an AGI? Or is an AGI going to require some hitherto unimaginable breakthrough? And that’s why you get five to five hundred years because that’s the thing that’s kind of the black swan in the room?
That is my suspicion, that there are at least one and probably many technological breakthroughs—that aren’t just computers getting faster or collecting more data—that are required. One example, which I feel is not so much an issue with compute power, but is much more an issue of, “Okay, we don’t have the right procedure, we don’t have the right algorithms,” is being able to match how as humans we’re able to learn certain concepts with very little, quote unquote, data or human experience. An example that’s often given is if you show me a few pictures of an object, I will probably recognize that same object in many more pictures, just from a few—perhaps just one—photographs of that object. If you show me a picture of a family member and you show me other pictures of your family, I will probably identify that person without you having to tell me more than once. And there are many other things that we’re able to learn from very little feedback.
I don’t think that’s just a matter of throwing existing technology, more computers and more data, at it; I suspect that there are algorithmic components that are missing. One of them might be—and it’s something I’m very interested in right now—learning to learn, or meta-learning. So, essentially, producing learning algorithms from examples of tasks, and, more generally, just having a higher-level perspective of what learning is. Acknowledging that it works on various scales, and that there are a lot of different learning procedures happening in parallel and in intricate ways. And so, determining how these learning processes should act at various scales, I think, is probably a question we’ll need to tackle more and actually find a solution for.
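As a very rough illustration of the few-shot setting mentioned above (recognizing an object from one or two photos), here is a nearest-centroid sketch over fixed toy embeddings. It is not Hugo’s meta-learning research; a real meta-learner would instead be trained on many small tasks to produce the embeddings or the update rule, and the class names and vectors below are invented.

```python
# A crude sketch of the few-shot flavor of the problem (recognize something
# from one or two examples), not an actual meta-learning system: a
# nearest-centroid classifier over fixed toy embeddings.

import numpy as np


def few_shot_predict(support_examples, support_labels, query):
    """Classify `query` using only a handful of labeled examples per class."""
    examples = np.asarray(support_examples, dtype=float)
    labels = np.asarray(support_labels)
    classes = np.unique(labels)
    # One prototype (mean embedding) per class from the few examples available.
    prototypes = np.stack([examples[labels == c].mean(axis=0) for c in classes])
    distances = np.linalg.norm(prototypes - np.asarray(query, dtype=float), axis=1)
    return classes[np.argmin(distances)]


# Two classes, two toy "embeddings" each; the query lands closer to "dog".
support = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels = ["dog", "dog", "cat", "cat"]
print(few_shot_predict(support, labels, [0.85, 0.15]))  # dog
```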
There are people who think that we’re not going to build an AGI until we understand consciousness. That consciousness is this unique ability we have to change focus, and to observe the world a certain way and to experience the world a certain way that gives us these insights. So, I would throw that to you. Do you, A), believe that consciousness is somehow key to human intelligence; and, B), do you think we’ll make a conscious computer?
That’s a very interesting question. I haven’t really wrapped my head around what is consciousness relative to the concept of building an artificial intelligence. It’s a very interesting conversation to have, but I really have no clue, no handle on how to think about that.
I would say, however, that clearly notions of attention, for instance, being able to focus attention on various things or adding an ability to seek information, those are clearly components for which there’s, currently—I guess for attention we have some fairly mature solutions which work, though in somewhat restrictive ways and not in the more general way; information seeking, I think, is still very much related to the notion of exploration and reinforcement learning—still a very big technical challenge that we need to address.
So, some of these aspects of our consciousness, I think, are kind of procedural, and we will need to figure out some algorithm to implement these, or learn to extract these behaviors from experience and from data.
You talked a little bit earlier about learning from just a little bit of data, that we’re really good at that. Is that, do you think, an example of humans being good at unsupervised learning? Because obviously as kids you learn, “This is a dog, and this is a cat,” and that’s supervised learning. But what you were talking about, was, “Now I can recognize it in low light, I can recognize it from behind, I can recognize it at a distance.” Is that humans doing a kind of unsupervised learning? Maybe start off by just explaining the concept and the hope about unsupervised learning, that it takes us, maybe, out of the process. And then, do you think humans are good at that?
I guess, unsupervised learning is, by definition, something that’s not supervised learning. It’s kind of an extreme of not using supervised learning. An example of that would be—and this is something I investigated quite a bit when I did my PhD ten years ago—to have a procedure, a learning algorithm, that can, for instance, look at images of hundreds of characters and be able to understand that each of these pixels in these images of characters are related. That there are higher-level concepts that explain why this is a digit. For instance, there is the concept of pen strokes; a character is really a combination of pen strokes. So, unsupervised learning would try to—just from looking at images, from the fact that there are correlations between these pixels, that they tend to look like something different than just a random image, and that pixels arrange themselves in a very specific way compared to any random combination of pixels—be able to extract these higher-level concepts, like pen strokes in handwritten characters. In a more complex, natural scene this would be identifying the different objects without someone having to label each object. Because really what explains what I’m seeing is that there’s a few different objects with a particular light interacting with the scene and so on.
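As a small, simplified illustration of that idea, the sketch below recovers stroke-like structure from synthetic character images without any labels. PCA stands in here only as the simplest unsupervised method; it is not the kind of model referred to in the conversation, and the two pen-stroke templates are invented for the example.

```python
# A simplified illustration of unsupervised structure discovery: with no
# labels, exploit correlations between pixels to recover stroke-like components.
# PCA is used only as the simplest stand-in; the "character" images are
# synthetic mixes of two invented pen-stroke templates.

import numpy as np

rng = np.random.default_rng(0)

# 500 fake 8x8 "characters", each a random mix of a vertical and a horizontal stroke.
vertical = np.zeros((8, 8))
vertical[:, 3] = 1.0
horizontal = np.zeros((8, 8))
horizontal[4, :] = 1.0
coeffs = rng.random((500, 2))
images = coeffs @ np.stack([vertical.ravel(), horizontal.ravel()])
images += 0.05 * rng.normal(size=images.shape)  # a little pixel noise

# Unsupervised step: principal directions of the pixel correlations.
centered = images - images.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)

# The leading components recover the stroke-like structure without any labels.
print(np.round(components[0].reshape(8, 8), 1))
print(np.round(components[1].reshape(8, 8), 1))
```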
That’s something that I’ve looked at quite a bit, and I do think that humans are doing some form of that. But also, we’re, probably as infants, we’re interacting with our world and we’re exploring it and we’re being curious. And that starts being something a bit further away from just pure unsupervised learning and a bit closer to things like our reinforcement learning. So, this notion that I can actually manipulate my environment, and from this I can learn what are its properties, what are the facts and the variations that characterize this environment?
And there’s an even more supervised type of learning that we see in ourselves as infants that is not really captured by purely supervised learning, which is being able to exchange or to learn from feedback from another person. So, we might imitate someone, and that would be closer to supervised learning, but we might instead get feedback that’s worded. So, if a parent says do this or don’t do that, this isn’t exactly an imitation; this is more like a communication of how you should adjust your behavior. And this is a form of weakly supervised learning. So, if I tell my kid to do his or her homework, or if I give instructions on how to solve a particular problem set, this isn’t a demonstration, so this isn’t supervised learning. This is more like a weak form of supervised learning. Which even then I think we don’t use as much in the known systems that work well currently, that people are using in object recognition systems or machine translation systems and so on. And so, I believe that these various forms of learning that are much less supervised than the common supervised learning are a direction in research where we still have a lot of progress to make.
So earlier you were talking about meta learning, which is learning how to learn, and I think there’s been a wide range of views about how artificial intelligence and an AGI might work. On one side was an early hope that, like the physical universe, which is governed by just a very few laws (magnetism by a few laws, electricity by a few laws), intelligence was governed by just a very few laws that we could learn. And on the other extreme you have people like the late Marvin Minsky, who really saw the brain as a hack of a couple of hundred narrow AIs that all come together and give us, if not a general intelligence, at least a really good substitute for one. I guess a belief in meta learning is a belief in the former case, or something like it: that there is a way to learn how to learn, a way to build all those hacks. Would you agree? Do you think that?
We can take one example there. I think under a somewhat general definition of what learning to learn, or meta learning, is, it’s something we could all agree exists, which is that, as humans, we’re the result of years of evolution, and evolution is a form of adaptation. But then within our lifespan, each individual will also adapt to his or her specific experience. So, you can think of evolution as being kind of like the meta learning to the learning that we do as individuals every day. But then even within our own lives, there are clearly ways in which my brain is adapting as I grow from a baby to an adult that are not conscious, and there are ways in which I’m adapting in rational, conscious ways, which rely on the fact that my brain has already adapted to be able to perceive my environment, my visual cortex maturing, for instance. So again, there are multiple layers of learning that rely on each other, and I think this is, at a fairly high level but in a meaningful way, a form of meta learning. For that reason, I think investigating how to build learning-to-learn systems is a valuable process for informing how to make more intelligent agents and AIs.
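One way to make that nested picture concrete is a Reptile-style meta-learning loop, sketched below: an inner loop adapts to each individual task with a few gradient steps, while an outer loop slowly improves the initialization those adaptations start from. The one-parameter task family is invented purely to keep the example short.

```python
# A minimal sketch of "learning to learn" (a Reptile-style meta-learning loop):
# the outer loop learns an initialization from which the inner loop can adapt to
# any new task in just a few steps. Each task is "estimate this task's target
# value", and tasks cluster around 10.0, so the learned initialization should
# drift toward the center of the task distribution.
import random

meta_param = 0.0                     # the initialization being meta-learned
inner_lr, outer_lr, inner_steps = 0.1, 0.05, 5

def sample_task():
    """Each task has its own target; tasks cluster around 10.0."""
    return 10.0 + random.gauss(0, 1.0)

for meta_step in range(2000):
    target = sample_task()
    # Inner loop: adapt to this one task with a few gradient steps on (param - target)^2.
    param = meta_param
    for _ in range(inner_steps):
        grad = 2 * (param - target)
        param -= inner_lr * grad
    # Outer loop: nudge the initialization toward where adaptation ended up.
    meta_param += outer_lr * (param - meta_param)

print("meta-learned initialization (should be near 10):", round(meta_param, 2))
```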
There’s a lot of fear wrapped up in the media coverage of artificial intelligence. And not even getting into killer robots, just the effects that it’s going to have on jobs and employment. Do you share that? And what is your prognosis for the future? Is AI in the end going to increase human productivity like all other technologies have done, or is AI something profoundly different that’s going to harm humans?
That’s a good question. What I can say is that what motivates me, and what makes me excited about AI, is that I see it as an opportunity to automate the parts of my day-to-day life that I would rather have automated, so I can spend my life doing more creative things, or the things that I’m more passionate about or more interested in. Largely because of that, I see AI as a wonderful piece of technology for humanity. I see benefits in terms of better machine translation, which will better connect the different parts of the world and allow us to travel and learn about other cultures. Or in how we can automate the work of certain health workers so that they can spend more time on the harder cases that probably don’t receive as much attention as they should.
For that reason, and because I’m personally motivated to automate those aspects of life which we would want to see automated, I am fairly optimistic about the prospects for a society with more AI. And, potentially, when it comes to jobs, we can even imagine automating parts of how we progress professionally. There are definitely a lot of opportunities in automating part of the process of learning in a course. We now have many courses online; even myself, when I was teaching, I put a lot of material on YouTube to allow people to learn.
Essentially, I identified that the day-to-day teaching I was doing in my job was very repetitive. It was something that I could record once and for all, and instead focus my attention on spending time with the students and making sure that each individual student resolves his or her own misunderstandings about the topic. Because my mental model of students in general is that it’s often unpredictable how they will misunderstand a particular aspect of the course. So, you actually want to spend some time interacting with each student, and you want to do that with as many students as possible. I think that’s an example where we can think of automating particular aspects of education so as to support our ability to have everyone be educated and have a meaningful professional life. So, I’m overall optimistic, largely because of the way I see myself using AI and developing AI in the future.
Anybody who’s listened to many episodes of the show will know I’m very sympathetic to that position. I think it’s easy to point to history and say that in the last two hundred and fifty years, other than the Depression, which obviously wasn’t caused by technology, unemployment has been between five and nine percent without fail. And yet, we’ve had incredibly disruptive technologies, like the mechanization of industry, the replacement of animal power with machine power, electrification, and so forth. And in every case, humans have used those technologies to increase their own productivity and therefore their incomes. And that is the entire story of the rising standard of living for everybody, at least in the Western world.
But I would be remiss not to make the other case, which is that there might be a point, an escape velocity, where a machine can learn a new job faster than a human. And at that point, at that magic moment, every new job, everything we create, a machine would learn it faster than a human. Such that, literally, everything from Michael Crichton down to…everybody—everybody finds themselves replaced. Is that possible? And if that really happened, would that be a bad thing?
That’s a very good question, I think, for society in general. Maybe because my day-to-day is about identifying the current challenges in making progress in AI, I see, and I guess we’ve touched on that a little bit earlier, that there are still many scientific challenges; it doesn’t seem like it’s just a matter of making computers faster and collecting more data. Because I see these many challenges, and because I’ve seen that the scientific community, in previous years, has been wrong and overly optimistic, I tend to err on the side of being less gloomy and a bit more conservative about how quickly we’ll get there, if we ever get there.
In terms of what it means for society, if it ever does happen that we can automate essentially most things, I unfortunately feel ill-equipped, as a non-economist, to have a really meaningful opinion about it. But I do think it’s good that we have a dialog about it, as long as it’s grounded in facts. That’s what makes it a difficult question to discuss: we’re talking about a hypothetical future that might not arrive for a very long time. But as long as we otherwise have a rational discussion about what might happen, I don’t see a reason not to have that discussion.
It’s funny. Probably the truest thing that I’ve learned from doing all of these chats is that there is a direct correlation between how much you code and how far away you think an AGI is.
That’s quite possible.
I could even go further to say that the longer you have coded, the further away you think it is. People who are new at it are like, “Yeah. We’ll knock this out.” And the other people who think it’s going to happen really quickly are more observers. So, I want to throw a thought experiment to you.
Sure.
It’s a thought experiment that I haven’t presented to anybody on the show yet. It’s by a man named Frank Jackson, and it’s the problem of Mary, and the problem goes like this. There’s this hypothetical person, Mary, and Mary knows everything in the world about color. Everything is an understatement. She has a god-like understanding of color, everything down to the basic, most minute detail of light and neurons and everything. And the rub is that she lives in a room that she’s never left, and everything she’s seen is black and white. And one day she goes outside and she sees red for the first time. And the question is, does she learn anything new when that happens that she didn’t know before? Do you have an initial reaction to that?
My initial reaction is that, being colorblind I might be ill-equipped to answer that question. But seriously, so she has a perfect understanding of color but—just restating the situation—she has only seen in black and white?
Correct. And then one day she sees color. Did she learn anything new about color?
By definition of what understanding means, I would think that she wouldn’t learn anything about color. About red specifically.
Right. That is probably the consistent answer, but it’s one that is intuitively unsatisfying to many people. The question it’s trying to get at is, is experiencing something different than knowing something? And if in fact it is different, then we have to build a machine that can experience things for it to truly be intelligent, as opposed to just knowing something. And to experience things means you return to this thorny issue of consciousness. We are not only the most intelligent creature on the planet, but we’re arguably the most conscious. And that those two things somehow are tied together. And I just keep returning to that because it implies, maybe, you can write all the code in the world, and until the machine can experience something… But the way you just answered the question was, no, if you know everything, experiencing adds nothing.
I guess, unless that experience would somehow contradict what you know about the world, I would think that it wouldn’t affect it. And this is partly, I think, one challenge about developing AI as we move forward. A lot of the AIs that we’ve successfully developed that have to do with performing a series of actions, like playing Go for instance, have really been developed in a simulated environment. In this case, for a board game, it’s pretty easy to simulate it on a computer because you can literally write all the rules of the game so you can put them in the computer and simulate it.
But for an experience such as being in the real world and manipulating objects, as long as the simulated experience isn’t exactly what the experience is in the real world, touching real objects, I think we will face a challenge in transferring any kind of intelligence that we grow in simulation to the real world. And this partly relates to our inability to have algorithms that learn rapidly; instead, they require millions of repetitions or examples to get close to what humans can do. Imagine having a robot go through millions of labeled examples from someone manipulating that robot and showing it exactly how to do everything. That robot might essentially learn too slowly to acquire any meaningful behavior in a reasonable amount of time.
You used the word transfer three or four times there. Do you think that transfer learning, this idea that humans are really good at taking what we know in one domain space and applying it in another—you know, you walk around one big city and go to a different big city and you kind of map things. Is that a useful thing to work on in artificial intelligence?
Absolutely. In fact, we’re seeing that with all the success that has been enabled by the ImageNet data set and competition, which is really responsible for the revolution of deep neural nets and convolutional neural nets in the field of computer vision. It turns out that if you train an object recognition system on this large ImageNet data set, the resulting models transfer really well to a surprising number of tasks, and that has very much enabled a kind of revolution in computer vision. But it’s a fairly simple type of transfer, and I think there are more subtle ways of transferring, where you need to take what you knew before but slightly adjust it. How do you do that without forgetting what you learned before? So, understanding how these different mechanisms need to work together to perform a form of lifelong learning, of being able to accumulate one task after another and learn each new task with less and less experience, is something I think we’re currently not doing as well as we need to.
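The ImageNet-style transfer described here is, in practice, often as simple as reusing a pretrained convolutional network and retraining only its final layer. The rough sketch below assumes PyTorch and torchvision are available; the exact weights argument can vary by torchvision version, and the ten-class target task and fake batch are placeholders.

```python
# A rough sketch of the kind of transfer described above: take a convolutional
# network pretrained on ImageNet, freeze its learned visual features, and retrain
# only a small new output layer for a different task with far less data.
# The 10-class target task and the random batch below are invented placeholders.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # features learned on ImageNet

# Freeze everything that was learned on the source task.
for param in model.parameters():
    param.requires_grad = False

# Replace the final ImageNet classifier (1000 classes) with one for the new task.
num_new_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

# Only the new layer's parameters are trained on the (small) target data set.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of target-task images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("one fine-tuning step done, loss:", loss.item())
```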
What keeps you up at night? You meet a genie and you rub the bottle and the genie comes out and says, “I will give you perfect understanding of something.” What do you wrestle with that maybe you can phrase in a way that would be useful to the listeners?
Let’s see. That’s a very good question. Definitely, in my daily research, how we are able to accumulate knowledge, and how a machine could accumulate knowledge over a very long period, learning a sequence of tasks and abilities cumulatively, is something that I think about a whole lot. And this has led me to think about learning to learn, because I suspect that once you have to learn one ability after another after another, the fact that we get better at that process is, perhaps, because we are also learning how to learn each task; there’s this other scale of learning that is going on. How to do this exactly I don’t quite know, and knowing it would be a pretty big step for our field.
I have three final questions, if I could. You’re in Canada, correct?
As it turns out, I’m currently still in the US because I have four kids, two of them are in school so I wanted them to finish their school year before we move. But the plan is for me to go to Montreal, yes.
I noticed something. There’s a lot of AI activity in Canada, a lot of leading research. How did that come about? Was that a deliberate decision or just a kind of a coincidence that different universities and businesses decided to go into that?
If I speak for Montreal specifically, very clearly at the source of it is Yoshua Bengio deciding to stay in Montreal, staying in academia, continuing to train many students, gathering other researchers around his group, and training more PhDs in a field that doesn’t have as much talent as it needs. I think that is essentially the source of it.
And then my second to the last question is, what about science fiction? Do you enjoy it in any form, like movies or TV or books or anything like that? And if so, is there any that you look at it and think, “Ah, the future could happen that way”?
I definitely used to be more into science fiction. Now, maybe due to having kids, I watch many more Disney movies than science fiction. It’s actually a good question. I’m realizing I haven’t watched a sci-fi movie for a bit, but it would be interesting, now that I’ve been in this field for a while, to compare my vision of it with how artists see AI. Maybe not seriously. A lot of art is essentially philosophy about what could happen, or at least projecting a potential future and seeing how we feel about it. And for that purpose, I’m now tempted to revisit some classics or see what the recent sci-fi movies are.
I said only one more question, so I’ve got to combine two into one to stick with that. What are you working on, and if a listener is going into college or is presently in college and wants to get into artificial intelligence in a way that is really relevant, what would be a leading edge that you would say somebody entering the field now would do well to invest time in? So first, you, and then what would you recommend for the next generation of AI researchers?
As I’ve mentioned, perhaps not so surprisingly, I am very much interested in learning to learn and meta learning. I’ve started publishing on the subject, and I’m still very much thinking about various new ideas for meta learning approaches, and also about learning from weaker signals than in the supervised learning setting. Learning from worded feedback from a person, for instance, is something I haven’t quite started working on specifically, but that I’m thinking about a whole lot these days. Those are directions that I would definitely encourage young researchers to think about, study, and research.
And in terms of advice, well, I’m obviously biased, but being in Montreal studying deep learning and AI is currently a very, very rich and great experience. There are a lot of people to talk to and interact with, not just in academia but now much more in industry as well, such as ourselves at Google and other places. And also, be very active online. On Twitter, there’s now a very, very rich community of people sharing the work of others and discussing the latest results. The field is moving very fast, and in large part that’s because the deep learning community has been very open about sharing its latest results and keeping the discussion about what’s going on in the open. So be connected, whether on Twitter or other social networks, read papers, look at what comes up on arXiv, and engage in the global conversation.
Alright. Well that’s a great place to end. I want to thank you so much. This has been a fascinating hour, and I would love to have you come back and talk about your other work in the future if you’d be up for it.
Of course, yeah. Thank you for having me.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]