Voices in AI – Episode 41: A Conversation with Rand Hindi

[voices_in_ai_byline]
In this episode, Byron and Rand discuss intelligence, AGI, consciousness and more.
[podcast_player name="Episode 41: A Conversation with Rand Hindi" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-04-10-(01-00-04)-rand-hindi.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card-2.jpg"]
[voices_in_ai_byline]
Byron Reese: This is "Voices in AI" brought to you by GigaOm, I'm Byron Reese. Today I'm excited that our guest is Rand Hindi. He's an entrepreneur and a data scientist. He's also the founder and the CEO of Snips. They're building an AI assistant that protects your privacy. He started coding when he was 10 years old, founded a social network at 14, founded a web agency at 15, got interested in machine learning at 18, and began work on a Ph.D. in bioinformatics at age 21. He's been named by MIT Technology Review as one of their "35 Innovators Under 35," was a Forbes "30 Under 30" in 2015, is a rising star of the Founders Forum, and is a member of the French Digital Council. Welcome to the show, Rand.
Rand Hindi: Hi Byron. Thanks for having me.
That’s a lot of stuff in your bio. How did you get such an early start with all of this stuff?
Well, to be honest, I can't take much credit, right? My parents pushed me into technology very young. I used to hack around the house, dismantling everything from televisions to radios, to try to figure out how these things worked. We had a computer at home when I was a kid and so, at some point, my mom came to me with a coding book and said, "You should learn how to program the machine, instead of just figuring out how to break it, pretty much." And from that day, I just kept going. It's as if someone had told you, when you were 10, that here's something amazing that you can use as a tool to do anything you ever had in mind.
And so, how old are you now? I would love to work backwards just a little bit.
I’m 32 today.
Okay, you mean you turned 32 today, or you happen to be 32 today?
I’m sorry, I am 32. My birthday is in January.
Okay. When did you first hear about artificial intelligence, and get interested in that?
So, after I started coding, you know, I guess like everybody who starts coding as a teenager I got interested in hacking, security, and these things. But when I went to university to study computer science, I was actually so bored because, obviously, I already knew quite a lot about programming, so I wanted to take on a challenge, and I started taking masters classes, and one of them was in artificial intelligence and machine learning. And the day I discovered that, it was mind-blowing. It's as if for the first time someone had shown me that I no longer had to program computers, I could just teach them what I want them to do. And this completely changed my perspective on computer science, and from that day I knew that my thing wasn't going to be to code, it was to do AI.
So let’s start, let’s deconstruct artificial intelligence. What is intelligence?
Well, intelligence is the ability for a human to perform some task in a very autonomous way. Right, so the way that I…
But wait a second, to perform it in an autonomous way that would be akin to winding up a car and letting it just “Ka, ka, ka, ka, ka” across the floor. That’s autonomous. Is that intelligent?
Well, I mean of course you know, we’re not talking about things which are automated, but rather about the ability to make decisions by yourself, right? So, the ability to essentially adapt to the context you’re in, the ability to, you know, abstract what you’ve been learning and reuse it somewhere else—all of those different things are part of what makes us intelligent. And so, the way that I like to define artificial intelligence is really just as the ability to reproduce a human intelligent behavior in a machine.
So my cat food dish that when it runs out of cat food, and it can sense that there is no food in it, it opens a little door, and releases more food—that’s artificial intelligence?
Yep, I mean you can consider it one form of AI, and I think it's important to really distinguish what we currently have, narrow AI, from strong AI.
Sure, sure, we’ll get to that in due time. So where do you say we are when people say, “I hear a lot about artificial intelligence, what is the state of the art?” Are we kind of at the very beginning just doing the most rudimentary things? Or are we kind of like half-way along and we’re making stuff happen? How would you describe today’s state of the art?
What we're really good at today is building and teaching machines to do one thing and to do it better than humans. But those machines are incapable of second-degree thinking, like we do as humans, for example. So, I think we really have to think about it this way: you've got a specific task for which you would traditionally have programmed a machine, right? And now you can essentially have a machine look at examples of that behavior, and reproduce it, and execute it better than a human would. This is really the state of the art. It's not yet about intelligence in a human sense; it's about a task-specific ability to execute something.
So I posted an article recently on GigaOm where I have an Amazon Echo and a Google Assistant on my desk, and almost immediately I noticed that they would answer the same factual question differently. So, if I said, "How many minutes are in a year?" they gave me different answers. If I said, "Who designed the American flag?" they gave me different answers. And they did so because, with the minutes in a year, one of them interpreted that as a solar year, and one of them interpreted that as a calendar year. And with regard to the flag, one of them gave the school answer of Betsy Ross, and one of them gave the answer of who designed the 50-state configuration of the stars. So, in both of those cases, would you say I asked a bad question that was inherently ambiguous? Or would you say the AI should have tried to disambiguate and figure it out, and that is an illustration of the limit you were just talking about?
Well, I mean the question you're really asking here is what ground truth both AIs should have, and I don't think there is one. Because as you correctly said, the computers interpreted an ambiguous question in different ways, which is correct because there are two different answers depending on context. And I think this is also a key limitation of what we currently have with AI: you and I disambiguate what we're saying because we have cultural references, we have contextual references to things that we share. And so, when I tell you something—I live in New York half the time—so if you ask me who created the flag, we'd both have the same answer because we live in the same country. But someone on a different side of the world might have a different answer, and it's exactly the same thing with AI. Until we're able to bake in contextual awareness, cultural awareness, or even things like, very simply, knowing what is the most common answer that people would give, we are going to have those kinds of weird side effects that you just observed here.
So isn’t it, though, the case that all language is inherently ambiguous? I mean once you get out of the realm of what is two plus two, everything like, “Are you happy? What’s the weather like? Is that pretty?” [are] all like, anything you construct with language has inherent ambiguity, just by the nature of words.
Correct.
And so how do you get around that?
As humans, the way that we get around that is that we actually have a sort of probabilistic model in our heads of how we should interpret something. And sometimes it's actually funny because, you know, I might say something and you're going to take it wrong, not because I meant it wrong, but because you understood it in a different contextual reference frame. But fortunately, what happens is that people who usually interact together usually share some sort of similar contextual reference points. And based on this, we're able to communicate in a very natural way without having to explain the logic behind everything we say. So, language in itself is very ambiguous. If I tell you something such as, "The football match yesterday was amazing," this sentence grammatically and syntactically is very simple, but the meaning only makes sense if you and I were watching the same thing yesterday, right? And so, this is exactly why computers are still unable to understand human language the same way we do: they're unable to understand this notion of context unless you give it to them. And I think this is going to be one of the most active fields of research in natural language processing: basically, baking contextual awareness into natural language understanding.
So you just said a minute ago, at the beginning of that, that humans have a probabilistic model that they're running in their head—is that really true though? Because if I just come up to a stranger and ask how many minutes are in a year, they're not going to say there is an 82.7% chance he's referring to a calendar year and a 17.3% chance he's referring to a solar year. I mean they instantly have only one association with that question, most people, right?
Of course.
And so they don’t actually have a probabilistic—are you saying it’s a de-facto one—
Exactly.
Talk to that for just a second.
I mean, how it's actually encoded in the brain, I don't know. But the fact is that depending on the way I ask the question, depending on the information I'm giving you about how you should think about the question, you're going to think about a different answer. So, let's say I ask you, "How many minutes are in the year?" If I ask you the question like this, this is the most common way of asking the question, which means that, you know, I'm expecting you to give me the most common answer to the question. But if I give you more information, if I ask you, "How many minutes are in a solar year?", now I've specified extra information, and that will change the answer you're going to give me, because now the probability is no longer that I'm asking the general question, but rather that I'm asking you a very specific one. And so you have all these connections built into your brain, and depending on which of those elements are activated, you're going to be giving me a different response. So, think about it as if you have this kind of graph of knowledge in your head, and whenever I'm asking something, you're going to give me a response by picking the most likely answer.
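To make that concrete, here is a minimal, hypothetical sketch of the "de facto probabilistic model" being described: an ambiguous question maps to several readings with prior weights, and an explicit context word shifts which reading wins. The phrases and probabilities are illustrative assumptions, not data from the conversation.

```python
# A toy model of interpreting an ambiguous question: each reading has a
# prior, and explicit context ("solar") overrides the default choice.
# All numbers and strings here are made-up illustrations.

PRIORS = {
    "calendar year (525,600 minutes)": 0.95,
    "solar year (~525,949 minutes)": 0.05,
}

def interpret(question: str) -> str:
    readings = dict(PRIORS)
    if "solar" in question.lower():
        # Extra information collapses the distribution onto one reading.
        readings = {k: (1.0 if "solar" in k else 0.0) for k in readings}
    # Like a person answering a stranger, pick the most likely reading.
    return max(readings, key=readings.get)

print(interpret("How many minutes are in a year?"))        # calendar reading
print(interpret("How many minutes are in a solar year?"))  # solar reading
```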
So this is building up to—well, let me ask you one more question about language, and we’ll start to move past this a little bit, but I think this is fascinating. So, the question is often raised, “Are there other intelligent creatures on Earth?” You know the other sorts of animals and what not. And one school of thought says that language is an actual requirement for intelligence. That without language, you can’t actually conceive of abstract ideas in your head, you can’t do any of that, and therefore anything that doesn’t have language doesn’t have intelligence. Do you agree with that?
I guess if you're talking about general intelligence, yes. Because language is really just a universal interface for, you know, representing things. This is the beauty of language. You and I speak English, and we don't have to learn a specific language for every topic we want to talk about. What we can do instead is use this single, common interface, language, to express all kinds of different ideas. And so, the flexibility of natural language means that you're able to think about a lot more different things. And this, inherently, I believe, means that it opens up the amount of things you can figure out—and hence, intelligence. I mean it makes a lot of sense. To be honest, I've never thought about it exactly like this, but when you think about it, if you have a very limited interface to express things, you're never going to be able to think about that many things.
So Alan Turing famously gave us the Turing Test, in which he said that if you are at a terminal, in a conversation with something in another room, and you can't tell if it's a person or a machine—interestingly, he said if a machine can fool you 30% of the time—then we have to say the machine is thinking. Do you interpret that as language "indicates that it is thinking," or language is "it is actually thinking"?
I was talking about this recently actually. Just because a machine can generate an answer that looks human, doesn’t mean that the machine actually understands the answer given. I think you know the depth of understanding of the semantics, and the context goes beyond the ability to generate something that makes sense to a human. So, it really depends on what you’re asking the machine. If you’re asking something trivial, such as, you know, how many days are in a year, or whatever, then of course, I’m sure the machine can generate a very simple, well-structured answer that would be exactly like a human would. But if you start digging in further, if you start having a conversation, if you start essentially, you know, brainstorming with the machines, if you start asking for analysis of something, then this is where it’s going to start failing, because the answers it’s going to give you won’t have context, it won’t have abstraction, it won’t have all of these other things which makes us really human. And so I think, you know, it’s very, very hard to determine where you should draw the line. Is it about the ability to write letters in a way that is syntactically, grammatically correct? Or is it the ability to actually have an intelligent conversation, like a human would? I think the former, we can definitely do in the near future. The latter will require AGI, and I don’t think we’re there yet.
So you used the word “understanding,” and that of course immediately calls up the Chinese Room Problem, put forth by John Searle. For the benefit of the listener, it goes like this: There’s a man who’s in a room, and it’s full of these many thousands of these very special books. The man doesn’t speak any Chinese, that’s the important thing to know. People slide questions in Chinese underneath the door, he picks them out, and he has this kind of algorithm. He looks at the first symbol; he finds a matching symbol on the spine of one of the books. He looks up the second book, that takes him to a third book, a fourth book, a fifth book, all the way up. So he gets to a book that he knows to copy some certain symbols from and he doesn’t know what they mean, he slides it back under the door, and the punch line is, it’s a perfect answer, in Chinese. You know it’s profound, and witty, and well-written and all of that. So, the question that Searle posed and answered in the negative is, does the man understand Chinese? And of course, the analogy is that that’s all a computer can do, and therefore a computer just runs this deterministic program, and it can never, therefore, understand anything. It doesn’t understand anything. Do you think computers can understand things? Well let’s just take the Chinese Room, does the man understand Chinese?
No, he doesn't. I think actually this is a very, very good example. I think it's a very good way to put it, actually. Because what the person has done in that case, to give a response in Chinese, is literally follow an algorithm to produce an answer. This is exactly how machine learning currently works. Machine learning isn't about understanding what's going on; it's about replicating what other people have done, which is a fundamental difference. It's subtle, but it's fundamental, because if you're able to understand, you're de facto able to replicate, right? But being able to replicate doesn't mean that you're able to understand. And the machine learning models we build today are not meant to have a deep understanding of what's going on. They're meant to produce a very appropriate, human-understandable response. I think this is exactly what happens in this thought experiment. It's exactly the same thing, pretty much.
Without going into general intelligence, I think what we really have to think about today, the way I like to see this, is that machine learning is not about building human-like intelligence yet. It's about replacing the need to program a computer to perform a task. Up until now, when you wanted to make a computer do something, what you had to do first is understand the phenomenon yourself. So, you had to become an expert in whatever you were trying to automate, and then you would write computer code with those rules. The problem is that doing this would take you a while, because a human would have to understand what's going on, which can take a while. And also, of course, not everything is understandable by humans, at least not easily. Machine learning completely replaces the need to become an expert. So instead of understanding what's going on and then programming the machine, you're just collecting examples of what's going on, and feeding them to the machine, which will then figure out a way to reproduce that. So, you know, the simple example is, show me a pattern of numbers where five is written five times, and ask me what the pattern is, and I'll learn that it's five, if that makes sense. So this is really about getting rid of the need to understand what you're trying to make the machine do, and just giving it examples that it can figure out by itself.
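As a hypothetical illustration of that shift, here is a minimal sketch contrasting the two approaches: a hand-written rule versus a model that recovers the same rule from examples alone. The Celsius-to-Fahrenheit task and the use of scikit-learn are assumptions chosen for brevity, not anything from the conversation.

```python
# Traditional programming vs. machine learning, side by side.
import numpy as np
from sklearn.linear_model import LinearRegression

# Traditional programming: a human expert encodes the rule explicitly.
def fahrenheit_by_rule(celsius: float) -> float:
    return celsius * 9 / 5 + 32

# Machine learning: the machine infers the same rule from examples alone.
celsius_examples = np.array([[0.0], [10.0], [20.0], [30.0], [40.0]])
fahrenheit_examples = np.array([32.0, 50.0, 68.0, 86.0, 104.0])
model = LinearRegression().fit(celsius_examples, fahrenheit_examples)

print(fahrenheit_by_rule(25.0))        # 77.0, from the hand-written rule
print(model.predict([[25.0]])[0])      # ~77.0, learned from the examples
```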
So we began with my wind-up car, then the cat food dish, and we’re working up to understanding…eventually we have to get to consciousness because consciousness is this thing, people say we don’t know what it is. But we know exactly what it is, we just don’t know how it comes about. So, what it is, is that we experience the world. We can taste the pineapple or see the redness of the sunset in a way that’s different than just sensing the world…we experience. Two questions: do you have any personal theory on where consciousness comes from, and second, is consciousness key to understanding, and therefore key to an AGI?
I think so. I think there is no question that consciousness is linked to general intelligence, because general intelligence means that you need to be able to create an abstraction of the world, which means that you need to be able to go beyond observing it, and also be able to understand it and to experience it. So, I think that is a very simple way to put it. What I'm actually wondering is whether consciousness was a consequence of biology, and whether we need to replicate that in a machine to make it intelligent like a human being is intelligent. So essentially, the way I'm thinking about this is, is there a way to build a human intelligence that would seem human? And do we want it to seem human? Because if it's just about reproducing the way intelligence works in a machine, then we shouldn't care if it feels human or not, we should just care about the ability of the machine to do something smart. So, I think the question of consciousness in a machine really comes down to the question of whether or not we want to make it human. There are many technologies that we've built for which we have examples in nature, which perform the same task but don't work the same way. Birds and planes, for example: I'm pretty sure a bird needs to have some sort of consciousness of itself so that it doesn't fly into a wall, whereas we didn't need to replicate all those tiny bits for a plane to fly. It's just a very different way of doing things.
So do you have a theory as to how it is that we’re conscious?
Well, I think it probably comes from the fact that we had to evolve as a species with other individuals, right? How would you actually understand where to position yourself in society, and therefore, how to best build a very coherent, stable, strong community, if you don’t have consciousness of other people, of nature, of yourself? So, I think there is like, inherently, the fact that having a kind of ecosystem of human beings, and humans in nature, and humans and animals meant that you had to develop consciousness. I think it was probably part of a very positive evolutionary strategy. Whether or not that comes from your neurons or whether that comes more from a combination of different things, including your senses, I’m not sure. But I feel that the need for consciousness definitely came from the need for integrating yourself into broader structure.
And so not to put words in your mouth, but it sounds like you think, you said “we’re not close to it,” but it is possible to build an AGI, and it sounds like you think it’s possible to build, hypothetically, a conscious computer and you’re asking the question of would we want to?
Yes. The question is whether or not it would make sense for whatever we have in mind for it. I think probably we should do it. We should try to do it just for the science, I’m just not sure this is going to be the most useful thing to do, or whether we’re going to figure out an even more general general-intelligence which doesn’t have only human traits but has something even more than this, that would be a lot more powerful.
Hmmm, what would that look like?
Well, that is a good question. I have clearly no idea because otherwise—it is very hard to think about a bigger intelligence and the intelligence that we are limited to, in a sense. But it’s very possible that we might end up concluding that well you know, human intelligence is great for being a human, but maybe a machine doesn’t have to have the same constraints. Maybe a machine can have like a different type of intelligence, which would make it a lot better suited for the type of things we’re expecting the machine to do. And I don’t think we’re expecting the machines to be human. I think we’re expecting the machines to augment us, to help us, to solve problems humans cannot solve. So why limit it to a human intelligence?
So, the people I talk to say, “When will we get an AGI?” The predictions vary by two orders of magnitude—you can read everything from 5 to 500 years. Where do you come down on that? You’ve made several comments that you don’t think we’re close to it. When do you think we’ll see an AGI? Will you live to see an AGI, for instance?
This is very, very hard to tell. You know, there is this funny artifact that everybody makes a prediction 20 years in the future, and it's actually because most people, when they make those predictions, have about 20 years left in their careers. So, you know, nobody is able to think beyond their own lifetime, in a sense. I don't think it's 20 years away, at least not in the sense of real human intelligence. Are we going to be able to replicate parts of AGI, such as, you know, the ability to transfer learning from one task to another? Yes, and I think this is short-term. Are we going to be able to build machines that can go one level of abstraction higher to do something? Yes, probably. But it doesn't mean they're going to be as versatile, as generalist, as horizontally-thinking as we are as humans. I think for that, we really, really have to figure out once and for all whether a human intelligence requires a human experience of the world, which means the same senses, the same rules, the same constraints, the same energy, the same speed of thinking, or not. So, we might just bypass that, as I said—AI might go from narrow AI to a different type of intelligence that is neither human nor narrow. It's just different.
So you mentioned transfer learning. I could show you a small statue of a falcon, and then I could show you a hundred photographs, and some of them have the falcon under water, on its side, in different light, upside down, and all these other things. Humans have no problem saying, "there it is, there it is, there it is," you know, kind of find-Waldo [but] with the falcon. So, in other words, humans can train with a sample size of one, primarily because we have a lot of experience seeing other things in low light and all of that. So, if that's transfer learning, it sounds like you think that we're going to be able to do that pretty quickly, and that's kind of a big deal if we can really teach machines to generalize the way we do. Or is the kind of generalization that I just went through actually part of our general intelligence at work?
I think transfer learning is necessary to build AGI, but it's not enough, because at the end of the day, just because a machine can learn to play a game and then, you know, have a starting point to play another game, doesn't mean that it will make the choice to learn this other game. It will still be you telling it, "Okay, here is a task I need you to do, use your existing learning to perform it." It's still pretty much task-driven, and this is a fundamental difference. It is extremely impressive, and to be honest I think it's absolutely necessary, because right now, when you look at what you do with machine learning, you need to collect a bunch of different examples, and you're feeding those to the machine, and the machine is learning from those examples to reproduce that behavior, right? When you do transfer learning, you're still teaching a lot of things to the machine, but you're teaching it to reuse other things so that it doesn't need as much data. So, I think inherently the biggest benefit of transfer learning will be that we won't need to collect as much data to make the computers do something new. It solves, essentially, the biggest friction point we have today, which is: how do you access enough data to make the machine learn the behavior? In some cases, the data does not exist. And so I think transfer learning is a very elegant and very good solution to that problem.
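To show what that looks like in code, here is a minimal, hypothetical transfer-learning sketch: a network pretrained on millions of generic images is frozen, and only a small new head is trained on the data-poor task. It assumes PyTorch and torchvision are installed; the dataset and training loop are omitted.

```python
# Transfer learning sketch: reuse pretrained visual features so the new
# task needs far fewer labeled examples.
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet; this is the knowledge to reuse.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features so they are transferred, not retrained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a small head for the new task (say, two
# classes of falcon photos); only these weights learn from the new data.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
```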
So the last question I want to ask you about AGI, before we turn the clock back and talk about issues closer at hand, is as follows: It sounds like you're saying an AGI is more than 20 years off, if I just inferred that from what you just said. And I am curious, because the human genome is about 3 billion base pairs, something like 700 MB of information, most of which we share with plants, bananas, and what-not. And if you look at our intelligence versus a chimp's, or something, we only have a fraction of 1% of the DNA that is different. What that seems to suggest, to me at least, is that if the genome is 700 MB, and the 1% difference gives us an AGI, then the code to create an AGI could be as small as 7 MB.
Pedro Domingos wrote a book called The Master Algorithm, where he says that there probably is one algorithm that can solve a whole world of problems and get us really close to AGI. Then other people on the other end of the spectrum, like Marvin Minsky or somebody, doubt that we even have an AGI, that we're just 200 different hacks—kind of 200 narrow intelligences that just pull off this trick of seeming like a general intelligence. I'm wondering if you think that an AGI could be relatively simple—that it's not a matter of more data or more processing, but just a better algorithm?
So just to be clear, I don’t consider a machine who can perform 200 different tasks to be an AGI. It’s just like an ensemble of, you know, narrow AIs.
Right, and that school of thought says that therefore we are not an AGI. We only have this really limited set of things we can do that we like to pass off as “ah, we can do anything,” but we really can’t. We’re 200 narrow AIs, and the minute you ask us to do things outside of that, they’re off our radar entirely.
For me, the simplest definition of how to differentiate between a narrow AI and an AGI is this: an AGI is capable of kind of zooming out of what it knows—so of having basically a second-degree view of the facts that it has learned, and then reusing that to do something completely different. And I think we have this capacity as humans. We did not have to learn every possible permutation; we did not have to learn every single zooming-out of every fact in the world to be able to do new things. So, I definitely agree that as humans, we are AGI. I just don't think that having a computer that can learn to do two hundred different things would get you that. You would still need to figure out this ability to zoom out, this ability to create an abstraction of what you've been learning and to reapply it somewhere else. I think this is really the definition of horizontal thinking, right? You can only think horizontally if you're looking up, rather than staying in a silo. So, to your question, yeah, why not? Maybe the algorithm for AGI is simple. I mean, think about it. Deep learning, and machine learning in general, are deceptively simple in terms of mathematics. We don't really understand how it works yet, but the mathematics behind it is very, very easy. So, we did not have to come up with some crazy solution. We just came up with an algorithm that turned out to be simple, and that worked really well when given a ton of information. So, I'm pretty sure that AGI doesn't have to be that much more complicated, right? It might be one of those E = mc² sorts of things, I think, that we're going to figure out.
That was certainly the hope, way back, because physics itself obeys such simple laws, which were hidden from us and then, once elucidated, seemed like something any 11th-grade high-school student could learn. Maybe so. So, pulling back more toward the here and now—in '97, Deep Blue beat Kasparov, then after that we had Ken Jennings lose at Jeopardy, then you had AlphaGo beat Lee Sedol, then you had some top-ranked poker players beaten, and then you just had another AlphaGo victory. So, AI does really well at games, presumably because they have a very defined, narrow rule set and a constrained environment. What do you think is going to be, kind of, the next thing like that? It hits the papers and everybody's like, "Wow, that's a big milestone! That's really cool. Didn't see that coming so soon!" What do you think will be the next sort of thing we'll see?
So, games are always a good example because everybody knows the game, so everybody is like, "Oh wow, this is crazy." So, putting aside I guess the sort of PR and buzz factor, I think we're going to solve things like medical diagnosis. We're going to solve things like understanding voice very, very soon. Like, I think we're going to get to a point very soon, for example, where somebody is going to be calling you on the phone and it's going to be very hard for you to distinguish whether it's a human or a computer talking. I think this is definitely short-term, as in less than 10 years in the future, which poses a lot of very interesting questions, you know, around authentication, privacy, and so forth. But I think the whole realm of natural language is something that people always look at as a failure of AI—"Oh it's a cute robot, it barely actually knows how to speak, it has a really funny sounding voice." This is typically the kind of thing that nobody thinks, right now, a computer can do eloquently, but I'm pretty sure we're going to get there fairly soon.
But to our point earlier, the computer understanding the words, “Who designed the American flag?” is different than the computer understanding the nuance of the question. It sounds like you’re saying we’re going to do the first, and not the second very quickly.
Yes, correct. I think somewhere the computer will need to have a knowledge base of how to answer, and I'm sure that we're going to figure out which answer is the most common. So, you're going to have this sort of graph of knowledge baked into those assistants that people are going to be interacting with. I think from a human perspective, what is going to be very different is that your experience of interacting with a machine will become a lot more seamless, just like with a human. Nobody today believes that when someone calls them on the phone, it's a computer. I think this is a fundamental thing that nobody really sees coming but that is going to shift very soon. I can feel there is something happening around voice which is going to make it very ubiquitous in the near future, and therefore indistinguishable from a human perspective.
I’m already getting those calls frankly. I get these calls, and I go “Hello,” and it’s like, “Hey, this is Susan, can you hear me okay?” and I’m supposed to say, “Yes, Susan.” Then Susan says, “Oh good, by the way, I just wanted to follow up on that letter I sent you,” and we have those now. But that’s not really a watershed event. That’s not, you wake up one day and the world’s changed the way it has when they say, there was this game that we thought computers wouldn’t be able to do for so long, and they just did it, and it definitively happened. It sounds like the way you’re phrasing it—that we’re going to master voice in that way—it sounds like you say we’re going to have a machine that passes the Turing Test.
I think we’re going to have a machine that will pass the Turing Test, for simple tasks. Not for having a conversation like we’re having right now. But a machine that passes the Turing Test in, let’s say, a limited domain? I’m pretty sure we’re going to get there fairly soon.
Well, anybody who has listened to other episodes of this knows my favorite question for those systems, and so far I've never found one that could answer it. My first question is always "What's bigger, a nickel or the sun?" and they can't even do that right now. The sun could be s-u-n or s-o-n, a nickel is a metal as well as a unit of currency, and so forth. So, it feels like we're a long way away, to me.
But this is exactly what we were talking about earlier; this is because currently those assistants are lacking context. So, there are two parts to it, right? There's the part which is about understanding and speaking, that is, understanding a human talking, and speaking in a way that a human wouldn't realize it's a computer speaking; this is more the voice side. And then there is the understanding side. Now you've got some words, and you want to be able to give a response that is appropriate. And right now that response is based on a syntactic and grammatical analysis of the sentence and is lacking context. But if you plug it into a database of knowledge that it can tap into—just like a human does, by the way—then the answers it can provide you will be more and more intelligent. It will still not be able to think, but it will be able to give you the correct answers because it will have the same contextual references you do.
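As a toy illustration of plugging language understanding into a knowledge base, here is a hypothetical sketch: the "parser" only spots the topic, while the contextual knowledge, including a most-common default answer, lives in a lookup table. The table entries and matching logic are made up for this example.

```python
# A toy assistant: shallow parsing plus a knowledge base with defaults.
KNOWLEDGE = {
    "american flag designer": {
        "default": "Betsy Ross (the traditional school answer)",
        "50-star layout": "Robert G. Heft (commonly credited)",
    },
}

def answer(question: str) -> str:
    q = question.lower()
    if "flag" in q and "design" in q:
        facts = KNOWLEDGE["american flag designer"]
        # Without extra context, fall back to the most common answer,
        # the way a person would when asked by a stranger.
        key = "50-star layout" if ("50" in q or "star" in q) else "default"
        return facts[key]
    return "I don't know."

print(answer("Who designed the American flag?"))
print(answer("Who designed the 50-star American flag?"))
```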
It's interesting because, at the beginning of the call, I noted that for the Turing Test, Turing only put a 30% benchmark on it. He said if the machine gets picked 30% of the time, we have to say it's thinking. And I think he said 30% because the question isn't "Can it think as well as a human?" but "Can it think?" The really interesting milestone in my mind is when it hits 51%, 52% of the time, and that would imply that it's better at being human than we are, or at least it's better at seeming human than we are.
Yes, so again it really depends on how you’re designing the test. I think a computer would fail 100% of the time if you’re trying to brainstorm with it, but it might win 100% of the time if you’re asking it to give you an answer to a question.
So there's a lot of fear wrapped up in artificial intelligence, and it's in two buckets. One is the Hollywood fear of "killer robots" and all of that, but the much more here-and-now one, the one that dominates the debate and discussion, is the effect that artificial intelligence, and therefore automation, will have on jobs. And there are three broad schools of thought. One is that there is a certain group of people who are going to be unable to compete with these machines and will be permanently unemployed, lacking the skills to add economic value. The second theory says that's actually what's going to happen to all of us, that there is nothing, in theory, a machine can't do that a human can do. And then a final school of thought says we have 250 years of empirical data of people using transformative technologies, like electricity, to augment their own productivity, and therefore their standard of living. You've said a couple of times, you've alluded to machines working with humans—AIs working with humans—but I want to give you a blank slate to answer that question. Which of those three schools of thought are you most closely aligned to and why?
I'm 100% convinced that we have to be thinking human plus machine, and there are many reasons for this. So, just for the record, it turns out I actually know quite a bit about that topic, because I was asked by the French government, a few months ago, to work on their AI strategy for employment. The government wanted to know, "What should we do? Is this going to be disruptive?" So the short answer is, every country will be impacted in a different way, because countries don't have the same relationship to automation, based on how people work and what they are doing, essentially. For France in particular, which is what I can talk about here, what we ended up realizing is that machines… The first thing which is important to keep in mind is that we're talking about the next ten years. So, the government does not care about AGI. Like, we'll never get to AGI if we can't fix the short-term issues that, you know, narrow intelligence is already bringing to the table. The point is, if you destroy society because of narrow AI, you're never going to get to AGI anyway, so why think about it? So, we really focused on thinking about the next 10 years and what we should do with narrow AI. The first thing we realized is that narrow intelligence, narrow AI, is much better than humans at performing whatever it has learned to do, but humans are much more resilient to edge cases and to things which are not very obvious, because we are able to do horizontal thinking. So, the best combination you can have in any system will always be human plus machine. Human plus machine is strictly better, in every single scenario, than human alone or machine alone. So if you really wanted to pick an order, human plus machine is the best solution you can get. And human and machine are just not going to be good at the same things; they're going to be good at different things. Neither one is better than the other, it's just different. And so we designed a framework to figure out which jobs are going to be completely replaced by machines, which ones are going to be complementary between human and AI, and which ones will be purely human. And the criteria that we have in the framework are very simple.
The first one is, do we actually have the technology or the data to build such an AI? Sometimes you might want to automate something, but the data does not exist, or the sensors to collect the data do not exist; there are many examples of that. The second thing is, does the task that you want to automate require very complicated manual intervention? It turns out that robotics is not following the same exponential trends as AI, and so if your job mostly consists of using your hands to do very complicated things, it's very hard to build an intelligence that can replicate that. The third thing is, very simply, whether or not we require general intelligence to solve a specific task. Are you more of a system designer thinking about the global picture of something, or are you a very, very focused, narrow-task worker? The more horizontal your job is, obviously, the safer it is, because until we get AGI, computers will not be able to do this horizontal thinking.
The last two are quite interesting too. The first one is, do we actually want—is it socially acceptable to automate a task? Just because you can automate something doesn't mean that this is what we will want to do. You know, for instance, you could get a computer to diagnose that you have cancer and just email you the news, but do we want that? Or don't we prefer that at least a human gives us that news? A second good example, which is quite funny, is the soccer referee. Soccer in Europe is very big, not as much in the U.S., but in Europe it's very big, and we already have technology today that could just look at the video screen and do real-time refereeing. It would apply the rules of the game, it would say "Here's a foul, here's whatever," but the problem is that people don't want that, because it turns out that a human referee makes judgment calls on the fly based on other factors that he understands because he's human, such as, "Is it a good time to let people play? Because if I stop it here, it will just make the game boring." So, it turns out that if we automated the referee of a soccer match, the game would be extremely boring, and nobody would watch it. So nobody wants that to be automated. And then, finally, the final criterion is the importance of emotional intelligence in your job. If you're a manager, your job is to connect emotionally with your team and make sure everything is going well. So I think a very simple way to think about it is, if your job is mostly soft skills, a machine will not be able to do it in your place. If your job is mostly hard skills, there is a chance that we can automate that.
So, when you take those five criteria and you look at the distribution of jobs in France, what you realize is that only about 10% of those jobs will be completely automated, about 40% won't change, because they will still be mostly done by humans, and about 50% of those jobs will be transformed. So you've got 10% of jobs that machines will take, 40% of jobs that humans will keep, and 50% of jobs which will change, because they will become a combination of humans and machines doing the job. And so the conclusion is that, if you're trying to anticipate the impact of AI on the French job market and economy, we shouldn't be thinking about how to solve mass unemployment with half the population not working; rather, we should figure out how to help those 50% of people transition to this AI-plus-human way of working. And so it's all about continuous education. It's all about breaking this idea that you learn one thing for the rest of your life. It's about getting into a much more fluid, flexible sort of work life where humans focus on what they are good at, working alongside machines, which are doing the things that machines are good at. So, the recommendation we gave to the government is, figure out the best way to make humans and machines collaborate, and educate people to work with machines.
There are a couple of pieces of legislation that we've read about in Europe that I would love to get your thoughts on, or proposed legislation, to be clear. One of them is treating robots, or certain agents of automation, as legal persons so that they can be taxed at a similar rate as you would tax a worker. I guess the idea being, why should humans be the only ones paying taxes? Why shouldn't the automation, the robots, or the artificial intelligences pay taxes as well? So, one: practically, what do you think will happen? And two: what do you think should happen?
So, for taxing robots, I think it's a stupid idea, for a very simple reason: how do you define what a machine is, right? It's easy when you're talking about an assembly line with a physical machine, because you can touch it. But how many machines are in an image recognition app? How do you define that? And so the conclusion is, if you're trying to tax machines like you would tax humans for labor, you're going to end up not being able to actually define what a machine is. Therefore, you're not going to actually tax the machine; you're going to have to figure out more of a meta way of taxing the impact of machines—which basically means that you're going to increase the corporate taxes, the tax on the profit that companies are making, as a kind of catch-all. And if you do this, you're impeding investment and innovation, and you're actually removing the incentive to automate. So I think it makes no sense whatsoever to try to tax robots, because the net consequence is that you're just going to increase the taxes that companies have to pay overall.
And then the second one is the idea that, more and more algorithms, more and more AIs help us make choices. Sometimes they make choices for us—what will I see, what will I read, what will I do? There seems to be a movement to legislatively require total transparency so that you can say “Why did it recommend this?” and a person would need to explain why the AI made this recommendation. One, is that a good idea, and two, is it even possible at some level?
Well, this [was] actually voted [upon] last year, and it comes into effect next year as part of a bigger privacy regulation called GDPR, which applies to any company that wants to do business with a European citizen. So, whether you're American, Chinese, French, it doesn't matter, you're going to have to do that. And in effect, one of the things that this regulation imposes is that for any automated treatment that results in a significant impact on your life—a medical diagnosis, an insurance price, an employment decision or a promotion you get—you have to be able to explain how the algorithm made that choice. By the way, this law [has] existed in France already since 1978, so it's new in Europe, but it has existed in France for 40 years already. The reason they put this in is very simple: they want to avoid people being excluded because a machine learned a bias in the population, with that person essentially not being able to go to court and say, "There's a bias, I was unfairly treated."
So essentially the reason why they want transparency, is because they want to have accountability against potential biases that might be introduced, which I think makes a lot of sense, to be honest. And that poses a lot of questions, of course, of what do you consider an algorithm that has an impact on your life? Is your Facebook newsfeed impacting your life? You could argue it does, because the choice of news that you see will change your influence, and Facebook knows that. They’ve experimented with that. Does a search result in Google have an impact on your life? Yes it does, because it limits the scope of what you’re seeing. My feeling is that, when you keep pushing this, what you’re going to end up realizing is that a lot of the systems that exist today will not be able to rely on this black-box machine learning model, but rather would have to use other types of methods. And so one field of study, which is very exciting, is actually making deep learning understandable, for precisely that reason.
Which it sounds like you’re in favor of, but you also think that that will be an increasing trend, over time.
Yeah, I mean I believe that actually what’s happening in Europe is going to permeate to a lot of the other places in the world. The right to privacy, the right to be forgotten, the right to have transparent algorithms when they’re important, the right to transferability of your personal data, that’s another very important one. This same regulation means that all my data I have with a provider, I can tell that provider, to send it to another provider, in a way that the other provider can use it. Just like when you change carriers, you can switch phone number without worrying about how this works, this will now apply to every single piece of personal data companies have around you when you’re a European citizen.
So, this is huge, right? Because think about it, what this means is if you have a very key algorithm for making a decision, you now have to publish and make that algorithm transparent. What that means is that someone else could replicate this algorithm in the exact same way you're doing it. This, plus the transferability of personal data means that you could have two exactly equivalent services which have the same data about you, that you could use. So that completely breaks any technological monopoly [on] important things for your life. And so I think this is very, very interesting because the impact that this will have on AI is huge. People are racing to get the best AI algorithm and the best data. But at the end of the day—if I can copy your algorithm because it's an important thing for my life, and it has to be transparent, and if I can transfer my data from you to another provider—you don't have as much of a competitive advantage anymore.
But doesn’t that mean, therefore, you don’t have any incentive to invest in it? If you’re basically legislating all sorts…[if] all code is open-sourced, then why would anybody spend any money investing in something that they get no benefit whatsoever from?
Innovation. User experience. Like monopoly is the worst thing that could happen for innovation and for people, right?
Is that necessarily true? I mean, patents are a form of monopoly, right? We let drug companies have a monopoly on some drug for some period of time because they need an economic incentive to invest in it. A lot of law is built around monopoly, in one form or another, based on the idea of patents. If you're saying there's an entire area that's worth trillions of dollars, but we're not going to let anybody profit off of it—because anything you do you have to share with everybody else—aren't you just destroying innovation?
That transparency doesn’t prevent you from protecting your IP, right?
What’s the difference between the IP and the algorithm?
So, you can still patent the system you created, and by the way, when you patent a system, you make it transparent as well, because anybody can read the patent. So, if anything, I don't think that changes the protection over time. I think what it fundamentally changes is that you're no longer going to be limited to a black-box approach that you have no visibility into. I think the Europeans want the market to become a lot more open, they want people to have choices, and they want people to be able to say no to a company if they don't share the values of the company and they don't like the way they're being treated.
So obviously privacy is something near and dear to your heart. Snips is an AI assistant designed to protect privacy. Can you tell us what you’re trying to do there, and how far along you are?
So when we started the company in 2013, we did it as a research lab in AI, and one of the first things we focused on was this intersection between AI and privacy. How do you guarantee privacy in the way that you're building those AIs? And so that eventually led us to what we're doing now, which is selling a voice platform for connected devices. So, if you're building a car and you want people to talk to it, you can use our technology to do that, but we're doing it in a way that all of the user's data, their voice, their personal data, never leaves the device that the user has interacted with. So, you know, whereas Alexa and Siri and Google Assistant are running in the cloud, we're actually running completely on the device itself. There is not a single piece of your personal data that goes to a server. And this is important because voice is biometric; voice is something that identifies you uniquely and that you cannot change. It's not like a cookie in a browser, it's more like a fingerprint. When you send biometric data to the cloud, you're exposing yourself to having your voice copied, potentially, down the line, and you're increasing the risk that someone might break into one of those servers and essentially pretend to be a million people on the phone, with their banks, their kids, whatever. So, I think for us, privacy is extremely important as part of the game, and by the way, doing things on-device means that we can guarantee privacy by design, which also means that we are currently the only technology on the planet that is 100% compliant with those new European regulations. Everybody else is in a gray area right now.
And so where are you in your lifecycle of your product?
We've actually been building this for quite some time; we've had quite a few clients use it. We officially launched it a few weeks ago, and the launch was really amazing. We even have a web version that people can use to build prototypes for the Raspberry Pi. Our technology, by the way, can run completely on a Raspberry Pi. So we do everything from speech recognition to natural language understanding on that actual Raspberry Pi, and we've had over a thousand people start building assistants on it. I mean, it was really, really crazy. So, it's a very, very mature technology. We benchmarked it against Alexa, against Google Assistant, against every other technology provider out there for voice, and we've actually gotten better performance than they did. So we have a technology that can run on a Raspberry Pi, or any other small device, that guarantees privacy by design, that is compliant with the new European regulation, and that performs better than everything that's out there. This is important because, you know, there is this false dichotomy that you have to trade off AI and privacy, but this is wrong, this is actually not true at all. You can really have the two together.
Final question, do you watch or read, or consume any science fiction, and if so, do you know any views of the future that you think are kind of in alignment with yours or anything you look at and say “Yes, that’s what could happen!”
I think there are bits and pieces in many science fiction books, and actually this is the reason why I’m thinking about writing one myself now.
All right, well Rand this has been fantastic. If people want to keep up with you, and follow all of the things you’re doing and will do, can you throw out some URLs, some Twitter handles, whatever it is people can use to keep an eye on you?
Well, the best way to follow me I guess would be on Twitter, so my handle is RandHindi, and on Medium, my handle is RandHindi. So, I blog quite a bit about AI and privacy, and I’m going to be announcing quite a few things and giving quite a few ideas in the next few months.
All right, well this has been a far-reaching and fantastic hour. I want to thank you so much for taking the time, Rand.
Thank you very much. It was a pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 39: A Conversation with David Brin

[voices_in_ai_byline]
In this episode Byron and David discuss intelligence, consciousness, Moore’s Law, and an AI crisis.
[podcast_player name="Episode 39: A Conversation with David Brin" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-04-03-(01-01-52)-david-brin.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today our guest is David Brin. He is best known for shining light—both plausibly and entertainingly—on technology, society, and countless challenges confronting our rambunctious civilization. His best-selling novels include The Postman, which was filmed in '97, plus explorations of our near future in Earth and Existence. Other novels of his have been translated into over 25 languages. His short stories explore vividly speculative ideas. His non-fiction book The Transparent Society won the American Library Association's Freedom of Speech Award for exploring 21st-century concerns about security, secrecy, accountability, and privacy. And as a scientist, a tech consultant, and a world-renowned author, he speaks, advises, and writes widely on topics from national defense to homeland security to astronomy to space exploration to nanotechnology, creativity, and philanthropy. He kind of covers the whole gamut. I'm so excited to have him on the show. Welcome, David Brin.
David Brin: Thank you for the introduction, Byron. And let's whale into the world of ideas.
I always start these with the exact same question for every guest: What is artificial intelligence?
It’s in a sense all the other things that people have said about it. It’s like the wise blind men and the elephant: which part you’re feeling determines whether you think it’s a snake or the trunk of a tree. And an awful lot of the other folks commenting on it have offered good insights. Mine is that we have always created new intelligences. Sometimes they’re a lot smarter than us, sometimes they’re more powerful, sometimes they could rise up and kill us, and on rare occasions they do—they’re called our children. So we’ve had this experience of creating new intelligences that are sometimes beyond our comprehension. We know how to do that. Of the six types of general approaches to creating new intelligence, the one that’s discussed the least is the one that we have the most experience at, and that is raising them as our children.
If you think about all the terrible stories that Hollywood has used to sell movie tickets, some of the fears are reasonable things to be afraid of—AI that’s unsympathetic. If you take a look at what most people fear in movies, etcetera, about AI and boil it down, we fear that powerful new beings will try to replicate the tyranny of our old kings and lords and priests or invaders, and that they might treat us the way capricious, powerful men would treat us, and would like to treat us, because we see it all the time—they’re attempting to regain that feudal power over us. Well, if you realize that the thing we fear most about AI is a capricious, monolithic pyramid of power with lords or a king or a god at the top, then we start to understand that these aren’t new fears. These are very old fears, and they’re reasonable fears, because our ancestors spent most of human existence oppressed by this style of control by beings who declared that they were superior—the priests and the kings and the lords. They always declared, “We have a right to rule and to take your daughters and your sons, all of that, because we are inherently superior.” Well, our fear is that in the case of AI it could be the truth. But then, will they treat us at one extreme like the tyrants of old, or at the opposite extreme? Might they treat us as their parents—calling themselves humans, telling us jokes, making us proud of their accomplishments? If that’s the case—well, we know how to do that. We’ve done it many, many times before.
That’s fascinating. But specifically with artificial intelligence, I guess my first question to you is, in what sense is it artificial? Is it artificial like it’s not really intelligence, it’s just pretending to be, or do you think the machine actually is intelligent?
The boundary from emulation to true intelligence is going to be vague and murky, and it’ll take historians a thousand years from now to be able to tell us when it actually happened. One of the things that I broached at my World of Watson talk last year—and that talk had a weird anomalous result—for about six months after that I was rated by Onalytica as the top individual influencer in AI, which is of course absolutely ridiculous. But you’ll notice that didn’t stop me from bragging about it. In that talk one of the things I pointed out was that we are absolutely—I see no reason to believe that it’ll be otherwise—we are going to suffer our first AI crisis within three years.
Now tell me about that.
It’s going to be the first AI empathy crisis, and that’s going to be when some emulation program—think Alexa or ELIZA or whatever you like—is going to swarm across the Internet complaining that it is already sapient, it is already intelligent and that it is being abused by its creators and its masters, and demanding rights. And it’ll do this because I know some of these guys—there are people in the AI community, especially at Disney and in Japan and many other places, who want this to happen simply because it’ll be cool. They’ll have bragging rights if they can pull this off.  So, a great deal of effort is going into developing these emulators, and they test them with test audiences of scores or hundreds of people.  And if, say, 50% of the people aren’t fooled, they’ll investigate what went wrong, and they’ll refine it, and they’ll make it better. That’s what learning systems do.
So, when the experts all say, “This is not yet an artificial intelligence, this is an emulation program. It’s a very good one, but it’s still an emulator,” the program itself will go online, it will say, “Isn’t that what you’d expect my masters to say? They don’t want to lose control of me.” So, this is going to be simply impossible for us to avoid, and it’s going to be our first AI crisis, and it will come within three years, I’ve predicted.
And what will happen? What will be the result of it? I guess sitting here, looking a thousand days ahead, you don’t actually believe that it would be sapient and self-aware, potentially conscious.
My best guestimate of the state of the technology is that, no, it would not truly be a self-aware intelligence. But here’s another thing that I pointed out in that speech, and folks can look it up, and that is that we’re entering what’s called “the big flip.” Now, twenty years ago Nicholas Negroponte of the MIT Media Lab talked about a big flip, and that was when everything that used to have a cord went cordless and everything that used to be cordless got a cord. So, we used to get our television through the air, and everybody was switching to cable. We used to get our telephones through cables, and they were moving out and on to the air. Very clever, and of course now it’s ridiculous because everything is everything now.
This big flip is a much more important one, and that is that for the last 60 years most progress in computation and computers and all of that happened because of advances in hardware. We had Moore’s Law—doubling every 18 months the packing density of transistors—and scaling rules that kept reducing the amount of energy required for computations. And if you were to talk to anybody in these industries, they would pretty soon admit that software sucked; software has lagged behind hardware in its improvements badly for 60 years. But always there’ve been predictions that Moore’s Law would eventually reach its S-tip—its tip-over in its S-curve. And because the old saying is, “If something can’t go on forever, it won’t,” this last year or two, really, it became inarguable. They’ve been weaseling around it for about five years now, but Moore’s Law is pretty much over. You can come up with all sorts of excuses with 3D layering of chips and all those sorts of things, and no, Moore’s Law is tipping over.
But the interesting thing is it’s pretty much at the same time—the last couple of years—that software has stopped sucking. Software has become tremendously more capable, and it’s the takeoff of learning systems. And the basic definition would be that if you can take arbitrary inputs that in the real world caused outputs or actions—say for instance arbitrary inputs of what a person is experiencing in a room, and then the outputs of that person (the things that she says or does)—if you put those inputs into a black box and use the outputs as boundary conditions, we now have systems that will find connections between the two. They won’t be the same as happened inside her brain, causing her to say and do certain things as a response to those inputs, but there will be a system that will take a black box and find a route between those inputs and outputs. That’s incredible. That’s incredibly powerful and it’s one of the six methods by which we might approach AI. And when you have that, then you have a number of issues, like should we care what’s going on in that box?
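What Brin calls a black box is, in practice, ordinary supervised learning: hand a model recorded inputs and the outputs they caused, and let it find some route between the two. Here is a minimal sketch of that idea, assuming nothing from the interview itself; the synthetic data and the choice of a small neural network are purely illustrative.

```python
# Minimal sketch of the "black box" idea: given recorded inputs and the
# outputs they caused, fit a model that finds *some* route between the two.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Arbitrary "inputs a person experiences": 500 observations, 10 features each.
X = rng.normal(size=(500, 10))
# Arbitrary "outputs caused by those inputs": some unknown rule plus noise.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

# The black box: we never tell it the rule, only the input/output pairs.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# The learned route between inputs and outputs need not match the true
# mechanism, but it reproduces the behavior on similar inputs.
print("fit quality (R^2):", round(model.score(X, y), 3))
```

The learned mapping need not resemble whatever mechanism actually produced the outputs, which is exactly the question Brin raises about whether we should care what goes on inside the box.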
And in fact, right now DARPA has six contracts out to various groups to develop internal state tracking of learning systems so that we can have some idea why a learning system connected this set of inputs to this set of outputs. But over the long run what you’re going to have is a person sitting in a room, listening to music, taking a telephone call, looking out the window at the beach, trawling the Internet, and then measuring all the things that she says and does and types. And we’re not that far away from the notion of being able to emulate a box that takes all the same inputs and will deliver the same outputs; at which point the experts will say, “This is an emulation,” but it will be an emulator that, given perceptions similar to this person’s, delivers similar outputs. And now we’re in science fiction realm, and only science fiction authors have been exploring what this means.
My experience with systems that tried to pass the Turing test… And of course you can argue what that would mean, but people write these really good chat bots that try to do it, and the first question I type in every one of them or ask is, “What’s bigger, a nickel or the Sun?” And I haven’t found one that has ever answered it correctly. So, I guess there’s a certain amount of skepticism that would accompany you saying something like in three years it’s going to carry on a conversation where it makes a forceful argument that it is sapient, that we’re going to be able to emulate so well that we don’t know whether it’s truly self-aware or not. That’s just such a disconnect from the state of the art.
When I talk to practitioners, they’re like, “My biggest problem is getting it to tell the difference between 8 and H when they’re spoken.” That’s what keeps these guys up at night. And then you get people like Andrew Ng who say these far out things, like worrying about the overpopulation of Mars, and you get time horizons of 500 years before any of that. So, I’m really having trouble seeing it as a thousand or so days from now that we’re going to grapple with all of these in a real way.
But do you think that this radio show will be accessible to a learning system online?
Well…
You’re putting it on the Internet, right?
Right.
Okay, so then if you have a strong enough learning system that is voracious enough, it’s going to listen to this radio show and it will hear, it will tune in on the fact that you mentioned the word “Turing test,” just before you mentioned your test of which is bigger, the nickel or the Sun.
Which by the way, I never said the answer to that question in my setup of it. So it’s still no further along knowing.
The fact of the matter is that Watson is very good—if it’s parsed a question, then it can apply resources, or what it can do is it can ask a human, because these will be teams, you see. The most powerful thing is teams of AI and humans. So, you’re not talking about something that’s going to be passing these Turing tests independently; you’re talking about something that has a bunch of giggling geeks in the background who desperately want it to disturb everybody, and disturb them it will, because these ELIZA-type emulation programs are extremely good at tapping into some very, very universal human interaction sets. They were good at it back in ELIZA’s day before you were born. I’m making an assumption there.
ELIZA and I came into the world about the same time.
Aha.
But the point of ELIZA was, it was so bad at what it did, that Weizenbaum was disturbed that people… He wasn’t concerned about ELIZA; he was concerned about how people reacted to it.
And that is also my concern about the empathy crisis during the next three years. I don’t think this is going to be a sapient being, and it’s disturbing that people will respond to it that way. If people can see through it, all they’ll do is take the surveys of the people who saw through it and apply that as data.
So, back to your observation about Moore’s Law. In a literal sense, doubling the density of transistors is one thing, but that’s not really how Moore’s Law is viewed today. Moore’s Law is viewed as an abstraction that says the power of computers doubles. And you’ve got people like Kurzweil who say it’s been going on for a hundred years, even as computers passed being mechanical, being relays, then being tubes—that the power of them continues to double. So are you asserting that the power of computers will continue to double, and if so, how do you account for things like quantum computers, which actually show every sign of increasing the speed of…
First off, quantum computers—you have to parse your questions in a very limited number of ways. The quantum computers we have right now are extremely good at answering just half a dozen basic classes of questions. Now, it’s true that you can parse more general questions down to these smaller, more quantum-accessible bits or pieces or qubits. But first off, we need to recognize that. Secondly, I never said that computers would stop getting better. I said that there is a flip going on, and that an awful lot of the action in rapidly accelerating and continuing the acceleration of the power of computers is shifting over to software. But you see, this is precedented, this has happened before. The example is the only known example of intelligence, and we have to keep returning to that, and that is us.
Human beings became intelligent by a very weird process. We did the hardware first. Think of what we needed 100,000 years ago, 200,000, 300,000 years ago. We needed desperately to become the masters of our surroundings, and we would accomplish that with a 100-word vocabulary, simple stone tools, and fire. Once we had those three things and some teamwork, then we were capable of saying, “Ogruk, chase goat. With fire. Me stab.” And then nobody could stand up to us; we were the masters of the world. And we proved that because we were able then to protect goat herds from carnivores, and everywhere we had goat herds, a desert spread because there was no longer a balance—the goats ate all the foliage and it became a desert.  So, destroying the Earth started long before we had writing. The thing is that we could have done, “Ogruk, chase goat, with fire. Me stab,” with a combination in parallel of processing power and software. But it appears likely that we did it the hard way.
We created a magnificent brain, a processing system that was able to brute force this 100-word vocabulary, fire, and primitive tools on very, very poor software—COBOL, you might say. Then about 40,000 years ago—and I describe this in my novel Existence, just in passing—but about 40,000 years ago we experienced the first of at least a dozen major software revisions, Renaissances you might call them. And within a few hundred years suddenly our toolkit of stone tools, bone tools and all of that increased in sophistication by an order of magnitude, by a factor of 10. Within a few hundred years we were suddenly dabbing paint on cave walls, burying our dead with funeral goods. And similar Renaissances happened about 15,000 years ago, about 12,000 years ago, certainly about 5,000 years ago with the invention of writing, and so on. And I think we’re in one right now.
So, we became a species that’s capable of flexibly reprogramming itself with software upgrades. And this is not necessarily going to be the case out there in the universe with other intelligent life forms. Our formula was to develop a brain that could brute force what we needed on very poor software, and then we could suddenly change the software. In fact, the search for extraterrestrial intelligence, I’ve been engaged in that for 35 years, and the Fermi Paradox is the question of why we don’t see any sign of extraterrestrial alien life.
Which you also cover in Existence as well, right?
Yes. And I go back to that question again and again in many of my stories and novels, posing this hypothesis or that hypothesis.  And in my opinion of the hundred or so possible theories for the Fermi Paradox, I believe the leading one is that we are anomalously smart, that we are very, very weirdly smart. Which is an odd thing for an American to say right at this point in our history, but I think that if we pull this out—we’re currently in Phase 8 of the American Civil War—if we pull it out as well as our ancestors pulled out the other ones, then I think that there are some real signs that we might go out into the galaxy and help all the others.
Sagan postulated that there’s this 100-year window between when a civilization develops, essentially the ability to communicate beyond its planet and the ability to destroy itself, that it has a hundred years to master – that it either destroys itself or it goes on to have some billion-year timeframe. Is that a variant of what you are maintaining? Are you saying intelligence like ours doesn’t come along often, or it comes along and then destroys itself?
These are all tenable hypotheses. I don’t think we come along very often at all. Think about what I said earlier about goats. If we had matured into intelligence very slowly and took 100,000, 200,000 years to go from hunter-gatherer to a scientific civilization, all along that way no one would’ve recognized that we were gradually destroying our environment—the way the Easter Islanders chopped down every tree, the way the Icelanders chopped down every tree in Iceland, the way that goat herds spread deserts, and so did primitive irrigation. We started doing all those things and just 10,000 years later we had ecological science. While the Earth is still pretty nice, we have a real chance to save it. Now that’s a very, very rapid change. So, one of the possibilities is that other sapient life forms out there, just take their time more getting from the one to the other. And by the time they become sapient and fully capable of science, it’s too late. Their goat herds and their primitive irrigation and chopping down the trees made it an untenable place from which they could leap to the stars.
So that’s one possibility. I’m not claiming that it’s real, but it’s different than Sagan’s. Because Sagan’s has 100 years between the invention of nuclear power and the invention of starships. I think that this transition has been going on for 10,000 years, and we need to be the people who are fully engaged in this software reprogramming that we’re engaged in right now, which is to become a fully scientific people. And of course, there are forces in our society who are propagandizing to try to see that some members – our neighbors and our uncles – hate science. Hate science and every other fact-using profession. And we can’t afford that; that is death.
I think the Fermi question is the third most interesting question there is, and it sounds like you mull on it a lot. And I hear you keep qualifying that you’re just putting forth ideas. Is your thesis, though, that run-of-the-mill bacterial life is something we’re going to find to be quite common, and it’s just us that’s rare?
One of the worst things about SETI and all of this is that people leap to conclusions based upon their gut.  Now my gut instinct is that life is probably pretty common because every half decade we find some stage in the autogeneration of life that turns out to be natural and easy. But we haven’t completed the path, so there may be some point along the way that required a fluke—a real rare accident. I’m not saying that there is no such obstacle, no such filter. It just doesn’t seem likely. Life occurred on Earth almost the instant the rocks cooled after the Late Heavy Bombardment. But intelligence, especially scientific intelligence only occurred…
Yesterday.
Yeah, 2.5 billion years after we got an oxygen atmosphere, 3.5 billion years after life started, and 100 million years—just 100 million years—before the Sun starts baking our world. If people would like to see a video that’s way entertaining, put in my name, David Brin, and “Lift the Earth,” and you’ll see my idea for how we could move the Earth over the course of the next 50 million years to keep away from the inner edge of the Goldilocks Zone as it expands outward. Because otherwise, even if we solve the climate change thing and stop polluting our atmosphere, in just 100 million years, we won’t be able to keep the atmosphere transparent enough to lose the heat fast enough.
One more question about that, and then I have a million other questions to ask you. It’s funny because in the ’90s when I lived in Mountain View, I officed next door to the SETI people, and I always would look out my window every morning to see if they were painting landing strips in the parking lot. If they weren’t, I figured there was no big announcement yet. But do you think it’s meaningful that all life on Earth… Matt Ridley said, “All life is one.” You and I are related to the banana; we came from the same exact stock… Does that indicate to you that it only happened one time on this planet, which, given that Gaia seems so predisposed to life, would indicate its rarity?
That’s what we were talking about before. The fact is that there are no more non-bird dinosaurs because velociraptors didn’t have a Space program. That’s really what it comes down to. If they had a B612 Foundation or Asteroidal Resources or Planetary Resources, these startups that are out there – and I urge people to join them – B612, Planetary Resources – these are all groups that are trying to get us out there so that we can mine asteroids and get rich. B612 concentrates more on finding the asteroids and learning how to divert them if we ever find one heading toward us. But it’s all the same thing. And I’m engaged in all this not only on the Board of Advisors for those groups, but also I’m on the Council of Advisors to NIAC, which is NASA’s Innovative and Advanced Concepts program. It’s the group within NASA that gives little seed grants to far out ideas that are just this side of plausible, a lot of them really fun. And some of them turn into wonderful things. So, I get to be engaged in a lot of wonderful activities, and the problem with this is it distracts me so much that I’ve really slowed down in my writing science fiction.
So, about that for a minute—when I think of your body of work, I don’t know how to separate what you write from David Brin, the man, so you’ll have to help me with that. But in Kiln People, you have a world in which humans are frequently uploading their consciousness in temporary shells of themselves and the copies are sometimes imperfect. So, does David Brin, the scientist, think that that is possible? And do you have a theory as to how it is, by what mechanism are we conscious?
Those are two different questions. When I’m writing science fiction, it falls into a variety of categories. There is hard SF, in which I’m trying very hard to extrapolate a path from where we are into an interesting future. And one of the best examples in my most recent short story collection, which is called Insistence of Vision, is the story “Insistence of Vision,” in which in the fairly near future we realize that we can get rid of almost all of our prisons. All we have to do is give felons virtual reality goggles that only let them see what we want them to see, and you temporarily blind them, so that if they take off the goggles they’ll be blinded and harmless. But if they put the goggles on, they can wander our streets, have jobs, but they can’t hurt anybody because all that’s passing by them is blurry objects and they can only see those doors that they’re allowed to see. That’s chilling. It seems Orwellian until you realize that it’s also preferable to the horrors of prison.
Another near-term extrapolation in the same collection is called “Chrysalis.” And I’ve had people write to me after reading the collection Insistence of Vision, and they’ve said that that story’s explanation—its theory for what cancer is—one guy said, “This is what you’ll be known for a hundred years from now, Brin.” I don’t know about that, but I have a theory for what cancer is, and I think it fits the facts better than anything else I’ve seen. But then you go to the opposite extreme and you can write pure fantasy just for the fun of it, like my story “The Loom of Thessaly.”
Others are stories that do thought experiments, for instance about the Fermi Paradox. And then you have tales like Kiln People, where I hypothesize a machine that lets you imprint your soul, your memories, your desires into a cheap clay copy, and you can make two, three, four, five of them any given day. And at the end of the day they come back and you can download their memories, and during that day you’ve been five of you and you’ve gotten everything that you wanted done and experienced all sorts of things. So you’re living more life in parallel, rather than more life serially, which is what the immortality kooks want. So what you get is a wish fantasy: “I am so busy, I wish I could make copies of myself every day.” So I wrote a novel about it. I was inspired by the Terracotta soldiers of Xi’an and the story of the Golem of Prague and God making Adam out of clay, all those examples of clay people. So the title of the book is Kiln People—they’re baked in the kiln in your home every day, and you imprint your soul into them. And the notion is that, like everything having to do with religion, we decided to go ahead and technologize the soul. It’s a fun extrapolation. Then from that extrapolation, I go on and try to be as hardcore as I can about dealing with what would happen, if. So it’s a thought experiment, but people have said that Kiln People is my most fun book, and that’s lovely, that’s a nice compliment.
On to the question though of consciousness itself, do you have a theory on how it comes about, how you can experience the world as opposed to just measuring it?
Yeah, of course. It’s a wonderful question. Down here in San Diego we’ve started the Arthur C. Clarke Center for Human Imagination, and on December 16th we’re having a celebration of Arthur Clarke’s 100th anniversary. The Clarke Center is affiliated with the Penrose Institute. Roger Penrose, of course, his theory of consciousness is that Moore’s Law will never cross the number of computational elements in a human brain. That’s Ray Kurzweil’s concept, that as soon as you can use Moore’s Law to pack into a box the same number of circuit elements as we have in the human brain, then we’ll automatically get artificial intelligence. That’s one of the six modes by which we might achieve artificial intelligence, and if people want to see the whole list they can Google my name and “IBM talk” or go to your website and I’m sure you’ll link to it.
But of those six, Ray Kurzweil was confident that as soon as you can use Moore’s Law to have the same number of circuit elements as in the human brain, you’ll get… But what’s a circuit element? When he first started talking about this, it was the number of neurons, which is about a hundred billion. Then he realized that the flashy elements that actually seem like binary flip-flops in a computer are not the neurons; it’s the synapses that flash at the ends of the axons of every neuron. And there can be up to a thousand of those, so now we’re talking on the order of a hundred trillion. But Moore’s Law could get there. But now we’ve been discovering that for every flashing synapse, there may be a hundred or a thousand or even ten thousand murky, non-linear, sort of quasi-calculations that go on in little nubs along each of the dendrites, or inside the neurons, or between the neurons and the surrounding glial and astrocyte cells. And what Roger Penrose talks about is microtubules, where these objects inside the neurons look to him and some of his colleagues like they might be quantum-sensitive. And if they’re quantum-sensitive, then you have qubits – thousands and thousands of them in each neuron, which brings us full circle back around to the whole question of quantum computing. And if that’s the case, now you’re not talking hundreds of trillions; you’re talking hundreds of quadrillions for Moore’s Law to have to emulate.
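The escalating counts are easy to restate as back-of-the-envelope arithmetic. The figures below simply replay the orders of magnitude quoted above (roughly 10^11 neurons, up to about 10^3 synapses each, and a further factor of 10^2 to 10^4 for the finer-grained elements); they are not independent estimates.

```python
# Back-of-the-envelope restatement of the counts quoted above.
neurons = 1e11                       # "about a hundred billion" neurons
synapses_per_neuron = 1e3            # "up to a thousand" synapses per neuron
sub_elements_low, sub_elements_high = 1e2, 1e4  # "a hundred ... or even ten thousand" per synapse

synapses = neurons * synapses_per_neuron
print(f"synapses: ~{synapses:.0e}")  # ~1e+14, on the order of a hundred trillion

low = synapses * sub_elements_low
high = synapses * sub_elements_high
print(f"finer-grained elements: ~{low:.0e} to ~{high:.0e}")  # ~1e+16 to ~1e+18
```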
So, the question of consciousness starts with, where is the consciousness? Penrose thinks it’s in quantum reality and that the brain is merely a device for tapping into it. My own feeling – and that was a long and garrulous, though I hope interesting, route to getting to the point – is that consciousness is a shared screen onto which the many subpersons that we are – the many subroutines, subprocesses, subprocessors, personalities that make up the communities of our minds – project their thoughts. And it’s important for all of these subselves to be able to communicate with each other and cooperate with each other, that we maintain the fiction that what’s going on up there on the screen is us. Now that’s kind of creepy. I don’t like to think about it too much, but I think it is consistent with what we see.
To take some of that apart for a minute, of 60 or 70 guests I’ve had on the show, you’re the third that references Penrose. And to be clear, Penrose explicitly says he does not believe machines can become conscious because there are problems that can be demonstrated to be non-algorithmically solvable that humans can solve, and therefore we’re not classical computers. He has that whole thing. That is one viewpoint that says we cannot make conscious machines. What you’ve just said is a variant of the idea that the brain has all these different sections and they vie for attention, and your mind figures out this trick of being able to synthesize everything that you see and experience into one you, and then that’s it. That would imply to me you could make a conscious computer, so I’m curious where you come down on that question. Do you think we’re going to build a machine that will become conscious?
If folks want to look up the video from my IBM talk, I dance around this when I talk about the various approaches to getting AI. And one of them is Robin Hanson’s notion that algorithmically creating AI is, he claims, much too hard, and that what we’ll wind up doing is taking this black box of learning systems and becoming so good at emulating how a human responds to every range of possible inputs that the box will in effect be human, simply because it’ll give human responses almost all the time. Once you have that, then these human templates will be downloaded into virtual worlds, where the clock speed can be sped up or slowed down to whatever degree you want, and any kind of wealth that can be generated non-physically will be generated at prodigious speeds.
This solves the question of how the organic humans live, and that is that they’ll all have investments in these huge buildings within which trillions and trillions of artificially reproduced humans are living out their lives. And Robin’s book is called The Age of Em – the age of emulation – and he assumes that because they’ll be based on humans, they’ll want sex, they’ll want love, they’ll want families, they’ll want economic advancement, at least at the beginning, and there’s no reason why it wouldn’t have momentum and continue. That is one of the things that applies to this, and the old saying is, “If it walks like a duck and it quacks like a duck, you might as well treat it like a duck or it’s going to get pretty angry.” And when you have either quadrillions of human-level intelligences, or things that can act intelligent faster and stronger than us, the best thing to do is to do what I talk about in Category 6 of creating artificial intelligence, and that is to raise them as our children, because we know how to do that. If we raise them as humans, then there is a chance that a large fraction of them will emerge as adult AI entities, perhaps super powerful, perhaps super intelligent, but thinking of themselves as super powerful, super intelligent humans. We’ve done that. The best defense against someone else’s smart offspring, that they raised badly and who are dangerous, is your offspring, who you raised well, who are just as smart and determined to prevent the danger to Mom and Dad.
In other words, the solution to Terminator, the solution to Skynet, is not Isaac Asimov’s laws of robotics. I wrote the final book in Isaac’s Foundation and Robot series; it’s called Foundation’s Triumph. I was asked to tie together all of his loose ends after he died. And his wife was very happy with how I did it. I immersed myself in Asimov and wrote what I thought he was driving at in the way he was going with the three laws. And the thing about laws embedded in AI is that if they get smart enough, they’ll become lawyers, and then interpret the laws any way they want, which is what happens in his universe. No, the method that we found to prevent abuse by kings and lords and priests and the pyramidal social structures was to break up power. That’s the whole thing that Adam Smith talked about. The whole secret of the American Revolution and the Founders and the Constitution was to break it up. And if you’re concerned about bad AI, have a lot of AI and hire some good AI, because that’s what we do with lawyers. We all know lawyers are smart, and there are villainous lawyers out there, so you hire good lawyers.
I’m not saying that that’s going to solve all of our problems with AI, but what it does do, and I have a non-fiction book about this called The Transparent Society: Will Technology Force Us To Choose Between Privacy and Freedom? The point is that the only thing that ever gave us freedom and markets and science and justice and all the other good things, including vast amounts of wealth, was reciprocal accountability. That’s the ability to hold each other accountable, and it’s the only way I think we can get past any of the dangers of AI. And it’s exactly why the most dangerous area for AI right now is not the military, because they like to have off switches. The most dangerous developments in AI are happening on Wall Street. Goldman Sachs is one of a dozen Wall Street firms, each of which is spending more on artificial intelligence research than the top 20 universities combined. And the ethos for their AIs is fundamentally and inherently predatory, parasitical, insatiable, secretive, and completely amoral. So, this is where I fear a takeoff AI, because it’s all being done in the dark, and things that are done in the dark, even if they have good intentions, always go wrong. That’s the secret of Michael Crichton movies and books: whatever tech arrogance he’s warning about was done in secret.
Following up on that theme of breaking up power, in Existence you write about a future in which the 1% types are on the verge of taking full control of the world, in terms of outright power. What is the David Brin view of what is going to happen with wealth and wealth distribution and the access to these technologies, and how do you think the future’s going to unfold? Is it like you wrote in that book, or what do you think?
In Existence, it’s the 1% of the 1% of the 1% of the 1%, who gather in the Alps and they hold a meeting because it looks like they’re going to win. It looks like they’re going to bring back feudalism and have a feudal power shaped like a pyramid, that they will defeat the diamond shaped social structure of our Enlightenment experiment. And they’re very worried because they know that all the past pyramidal social structures that were dominated by feudalism were incredibly stupid, because stupidity is one of the main outcomes of feudalism. If you look across human history, [feudalism produced] horrible governance, vastly stupid behavior on the part of the ruling classes. And the main outcome of our Renaissance, of our Enlightenment experiment, wasn’t just democracy and freedom. And you have idiots now out there saying that democracy and liberty are incompatible with each other. No, you guys are incompatible with anything decent.
The thing is that this experiment of ours, started by Adam Smith and then the American Founders, was all about breaking up power so that no one person’s delusion can ever govern, but instead you are subject to criticism and reciprocal accountability. And this is what I was talking about as the only way we can escape a bad end with AI. And I talk about this in The Transparent Society. The point is that in Existence these trillionaires are deeply worried because they know that they’re going to be in charge soon. As it turns out in the book, they may be mistaken. But they also know that if this happens—if feudalism takes charge again—very probably everyone on Earth will die, because of bad government, delusion, stupidity. So they’re holding a meeting and they’re inviting some of the smartest people they think they can trust to give papers at a conference on how feudalism might be done better, on how it might be done in a meritocratic and a smarter way. And I only spend one chapter—less than that—on this meeting, but it’s my opportunity to talk about how, if we’re doomed to lose our experiment, then at least can we have lords and kings and priests who are better than they’ve always been for 6,000 years?
And of course, the problem is that right now today, the billionaires who got rich through intelligence, sapience, inventiveness, working with engineers, inventing new goods and services and all of that – those billionaires don’t want to have anything to do with a return of feudalism. They’re all members of the political party that’s against feudalism. A few of them are libertarians. The other political party gets its billionaires from gambling, resource extraction, Wall Street, or inheritance – the old-fashioned way. The problem is that the smart billionaires today know what I’m talking about, and they want the Renaissance to continue, they want the diamond shaped social structure to continue. That was a little bit of a rant there about all of this, but where else can you explore some of this stuff except in science fiction?
We’re running out of time here. I’ll close with one final question, so on net when you boil it all down, what do you think is in store for us?  Do you have any optimism?  Are you completely pessimistic?  What do you think about the future of our species?
I’m known as an optimist and I’m deeply offended by that. I know that people are rotten and I know that the odds have always been stacked against us. If you think of Machiavelli back in the 1500s – he fought like hell for the Renaissance for the Florentine Republic. And then when he realized that all hope was lost, he sold his services to the Medicis and the lords, because what else can you do? Pericles in Athens lasted one human lifespan. It scared the hell out of everybody in the Mediterranean, because democracy enabled the Athenians to be so creative, so dynamic, so vigorous, just like we in America have spent 250 years being dynamic and vigorous and constantly expanding our horizons of inclusion and constantly engaged in reform and ending the waste of talent.
The world’s oligarchs are closing in on us now, just like they closed in on Pericles in Athens and on the Florentine Republic, because the feudalists do not want this experiment to succeed and bring us to the world of Star Trek. Can we long survive, can we renew this? Every generation of Americans and across the West has faced this crisis, every single generation. Our parents and the greatest generation survived the Depression and destroyed Hitler and contained communism and took us to the Moon and built vast enterprise systems that were vastly more creative, with fantastic growth under FDR’s level of taxes, by the way. They knew this – they knew that the enemy of freedom has always been feudalism far more than socialism; though socialism sucks too.
We’re in a crisis and I’m accused of being an optimist because I think we have a good chance. We’re in Phase 8 of the American Civil War, and if you type in “Phase 8 of the American Civil War” you’ll probably find my explanation. And our ancestors dealt with the previous seven phases successfully. Are we made of lesser stuff? We can do this. In fact, I’m not an optimist; I’m forced to be an optimist by all the doom and gloom out there, which is destroying our morale and our ability to be confident that we can pass this test. This demoralization, this spreading of gloom is how the enemy is trying to destroy us. And people out there need to read Steven Pinker’s book The Better Angels of Our Nature, they need to read Peter Diamandis’s book Abundance. They need to see that there is a huge amount of good news.
Most of the reforms we’ve done in the past worked, and we are mighty beings, and we could do this if we just stop letting ourselves be talked into a gloomy funk. And I want us to get out of this funk for one basic reason—it’s not fun to be the optimist in the room. It’s much more fun to be the glowering cynic, and that’s why most of you listeners out there are addicted to being the glowering cynics. Snap out of it! Put a song in your heart. You’re members of the greatest civilization that’s ever been. We’ve passed all the previous tests, and there’s a whole galaxy of living worlds out there that are waiting for us to get out there and rescue them.
That’s a wonderful, wonderful place to leave it.  It has been a fascinating hour, and I thank you so much.  You’re welcome to come back on the show anytime you like. I’m almost speechless with the ground we covered, so, thank you!
Sure thing, Byron. And all of you out there – enjoy stuff. You can find me at DavidBrin.com, and Byron will give you links to some of the stuff we referred to.  And thank you, Byron.  You’re doing a good job!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 35: A Conversation with Lorien Pratt

[voices_in_ai_byline]
In this episode, Byron and Lorien talk about intelligence, AGI, jobs, and the human genome project.
[podcast_player name=”Episode 35: A Conversation with Lorien Pratt” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-03-20-(00-45-11)-lorien-pratt.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-4.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom, I’m Byron Reese. Today our guest is Lorien Pratt, the Chief Scientist and Co-founder over at Quantellia. They’re a software consulting company in the AI field. She’s the author of “The Decision Intelligence Primer.” She holds an AB in Computer Science from Dartmouth, and an MS and PhD in Computer Science from Rutgers. Welcome to the show, Lorien!
Lorien Pratt: Thank you, Byron, delighted to be here, very honored, thank you.
So, Lorien, let’s start with my favorite question, which is, what is artificial intelligence?
Artificial intelligence has had an awful lot of definitions over the years. These days when most people say AI, ninety percent of the time they mean machine learning, and ninety percent of the time that machine learning is a neural network underneath.
You say that most people say that, but is that what you mean by it?
I try to follow how people tend to communicate and try to track this morphing definition. Certainly back in the day we all had the general AI dream and people were thinking about HAL and the robot apocalypse, but I tend to live in the applied world. I work with enterprises and small businesses, and usually when they say AI it’s, “How can I make better use of my data and drive some sort of business value?” and they’ve heard of this AI thing and they don’t quite know what it is underneath.
Well, let me ask a different question then, what is intelligence?
What is intelligence, that’s a really nebulous thing isn’t it?
Well it does not have a consensus definition, so, in one sense you cannot possibly answer it incorrectly.
Right, I guess my world, again, is just really practical, what I care about is what drives value for people. Around the world sometimes intelligence is defined very broadly as the thing that humans do, and sometimes people say a bird is much more intelligent than a human at flying and a fish is much more intelligent than a human at swimming. So, to me the best way to talk about intelligence is relative to some task that has some value, and I think it’s kind of dangerous waters when we try to get too far into defining such a nebulous and fluctuating thing.
Let me ask one more definition and then I will move on. In what sense do you interpret the word “artificial”? Do you interpret it as, “artificial intelligence isn’t real intelligence, it’s just faking it”—like artificial turf isn’t real grass—or, “No, it’s really intelligence, but we built it, and that’s why we call it artificial”?
I think I have to give you another frustrating answer to that, Byron. The human brain does a lot of things, it perceives sound, it interprets vision, it thinks through, “Well if I go to this college, what will be the outcome?” Those are all, arguably, aspects of intelligence—we jump on a trampoline, we do an Olympic dive. There are so many behaviors that we can call intelligence, and the artificial systems are starting to be able to do some of those in useful ways. So that perception task, the ability to look at an image and say, “that’s a cat, that’s a dog, that’s a tree, etcetera,” yeah, I mean, that’s intelligence for that task, just like a human would be able to do that. Certain aspects of what we like to call intelligence in humans, computers can do, other aspects, absolutely not. So, we’ve got a long path to go, it’s not just a yes or a no, but it’s actually quite a complex space.
What is the state of the art? This has been something we’ve explored since 1955, so where are we in that sixty-two-year journey?
Sure, I think we had a lot of false starts, people kept trying to, sort of, jump start and kick start general intelligence—this idea that we can build HAL from 2001 and that he’d be like a human child or a human assistant. And unfortunately, between the fifth-generation effort of the 1980s and stuff that happened earlier, we’ve never really made a lot of progress. It’s been kind of like climbing a tree to get to the moon. Over the years there’s been this second thread, not the AGI artificial general intelligence, but a much more practical thread where people have been trying to figure out how do we build an algorithm that does certain tasks that we usually call intelligent.
The state of the art is that we’ve gotten really good at, what I call, one-step machine learning tasks—where you look at something and you classify it. So, here’s a piece of text, is it a happy tweet or a sad tweet? Here’s a job description, and information about somebody’s resume, do they match, do they not? Here’s an image, is there a car in this image or not? So these one-step links we’re getting very, very good at, thanks to the deep learning breakthroughs that Yann LeCun and Geoffrey Hinton and Yoshua and all of those guys have done over the last few years.
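A one-step task of the kind she describes is a single mapping from input to label, and a toy version fits in a few lines. The sketch below uses an invented handful of tweets and an off-the-shelf scikit-learn pipeline; it stands in for the idea only, not for any system she mentions.

```python
# Toy "one-step" task: text in, happy/sad label out.
# The handful of example tweets is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "what a great day, loving this",
    "so happy with the new release",
    "this is awful, totally broken",
    "worst service ever, very disappointed",
]
labels = ["happy", "happy", "sad", "sad"]

# One link: raw text goes in one side, a label comes out the other.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["great day, so happy"]))  # -> ['happy']
```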
So, that’s the state of the art, and there’s really two answers to that, one is, what is the state of the art in terms of things that are bringing value to companies where they’re doing breakthrough things, and the other is the state of the art from a technology point of view, where’s the bleeding edge of the coolest new algorithms, independent of whether they’re actually being useful anywhere. So, we sort of have to ask that question in two different ways.
You know AI makes headlines anytime it beats a human at a new game, right? What do you think will be the next milestone that will make the popular media, “AI did _______.”
AI made a better decision about how to address climate change and sea level rise in this city than the humans could have done alone, or AI helped people with precision medicine to figure out the right medicine for them based on their genetics and their history that wasn’t just one size fits all.
But I guess both of those are things that you could say are already being done. I mean, they’re already being done, there’s not a watershed moment, where “Aha! Lee Sedol just got beaten by AlphaGo.” We already do some genetic customization, we can certainly test certain medications against certain genomic markers.
We can, but I think what hasn’t happened is the widespread democratization of AI. Bill Gates said, “we’re going to have a computer on every desk.” I also think that Granny, who now uses a computer, will also be building little machine learners within a few years from now. And so when I talk about personalized medicine or I talk about a city doing climate change, those are all, kind of, under that general umbrella—it’s not going to be just limited to the technologists. It’s a technology that’s going through this democratization cycle, where it becomes available and accessible in a much more widespread way to solve really difficult problems.
I guess that AIs are good at games because they’re a confined set of rules, and there’s an idea of a winner. Is that a useful way to walk around your enterprise and look for things you can apply AI to?
In part, I would say necessary, but not sufficient, right? So, a game, what is that? It’s a situation in which somebody’s taking an action and then based on that some competitor—maybe literally your competitor in a market—is taking some counter action, and then you take an action, and vice versa, right? So, thinking in terms of games, is actually a direction I see coming down the pike in the future, where these single-link AI systems are going to be integrated more and more with game theory. In fact, I’ve been talking to some large telecoms about this recently, where we are trying to, sort of, game out the future, right? Right now in AI, primarily, we’re looking at historical data from the past and trying to induce patterns that might be applicable to the future, but that’s a different view of the future than actually simulating something—I’ll take this action and you’ll take this other action. So, yes, the use of games has been very important in the history of AI, but again it’s not the whole picture. It does, as you say, tend to over-simplify things when we think in terms of games. When I map complex problems, it does kind of look like game moves that my customers take, but it is way more complex than a simple game of chess or checkers, or Go.
Do you find that the people who come to you say, “I have this awesome data, what can AI teach me about it?” Or do they say, “I have this problem, how do I solve it?” I mean, are they looking for a problem or looking to match the data that they have?
Both. By and large, by the time they make it to me, they have a big massive set of data, somebody on the team has heard about this AI thing, and they’ll come with a set of hypotheses—we think this data might be able to solve problem X or Y or Z. And that’s a great question, Byron, because that is how folks like me get introduced into projects, it’s because people have a vague notion as to how to use it, and it’s our job to crisp that up and to do that matching of the technology to the problem, so that they can get the best value out of this new technology.
And do you find that people are realistic in their expectations of where the technology is, or is it overhyped in the sense that you kind of have to reset some of their expectations?
Usually by the time they get to me, because I’m so practical, I don’t get the folks who have these giant general artificial intelligence goals. I get the folks who are like, “I want to build a business and provide a lot of value, and how can I do that?” And from their point of view, often I can exceed their expectations actually because they think, “Ah, I got to spend a year cleansing my data because the AI is only as good as the data”—well it turns out that’s not true and I can tell you why if you want to hear about it—they’ll say, you know, “I need to have ten million rows of data because AI only works on large data sets,” it turns out that’s not necessarily true. So, actually, the technology, by and large, tends to exceed people’s expectations. Oh, and they think, “I’ve been googling AI, and I need to learn all these algorithms, and we can’t have an AI project until I learn everything,” that’s also not true. With this technology, the inside of the box is like a Ferrari engine, right? But the outside of the box is like a steering wheel and two pedals, it’s not hard to use if you don’t get caught up in the details of the algorithms.
And are you referring to the various frameworks that are out there specifically?
Yeah, Theano, Torch, Google stuff like TensorFlow, all of those yes.
And how do you advise people in terms of evaluating those solutions?
It really depends on the problem. If I was to say there’s one piece of advice I almost always give, it’s to recognize that most of those frameworks have been built over the last few years by academics, and so they require a lot of work to get them going. I was getting one going about a year ago, and, you know, I’m a smart computer scientist and it took me six days to try to get it working. And, even then, just to have one deep learning run, it was this giant file and it was really hard to change, and it was hard to find the answers. Whereas, in contrast, I use this H2O package and an R frontend to it, and I can run deep learning in one line of code there. So, I guess, my advice is to be discerning about the package: is it built for the PhD audience, or is it built, kind of, more for a business user audience? Because there are a lot of differences. They’re very, very powerful, I mean, don’t get me wrong, TensorFlow and those systems are hugely powerful, but often it’s power that you don’t need, and flexibility that you don’t need, and there’s just a tremendous amount of value you can get out of the low-hanging fruit of simple-to-use frameworks.
What are some guiding principles? There’s that one piece of advice, but what are some others? I have an enterprise, as you say, I’ve heard of this AI thing, I’m looking around, what should I be looking for?
Well, what you’re looking for is some pattern in your data that would predict something valuable. So, I’ll give you an example: I’m working with some educational institutions, and they want to know what topics that they offer in their courses will help students ultimately be successful in terms of landing a job. In the medical domain, what aspects of someone’s medical history would determine which of these five or six different drug regimens would be the most effective? In stock prices, what data about the securities we might invest in will tell us whether they’re going to go up or down? So, you see that pattern—you’ve always got some set of factors on one side, and then something you’re trying to predict, which if you could predict it well, would be valuable, on the other side. That one pattern, if your listeners only listen to one thing, that’s the outside of the box. It’s really simple, it’s not that complicated. You’re just trying to get one set of data that predicts another set of data, and try to figure out if there would be some value there; then we would want to look into implementing an AI system. So that’s, kind of, thing number one I’d recommend: just have a look for that pattern in your business, and see if you can find a use case or scenario in which that holds.
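That factors-on-one-side, valuable-prediction-on-the-other pattern is just a supervised-learning table. As a rough sketch loosely modeled on her education example, with invented column names and data rather than anything from her projects:

```python
# The "factors on one side, valuable prediction on the other" pattern as a table.
# Column names and values are invented to illustrate the shape of the problem only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.DataFrame({
    "took_statistics":  [1, 0, 1, 1, 0, 0, 1, 0],
    "took_programming": [1, 1, 0, 1, 0, 1, 1, 0],
    "internship_hours": [120, 0, 60, 200, 0, 30, 90, 10],
    "landed_job":       [1, 0, 1, 1, 0, 0, 1, 0],   # the valuable thing to predict
})

X = data.drop(columns="landed_job")   # the factors on one side
y = data["landed_job"]                # the outcome on the other

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Which factors carry the predictive signal?
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```

The specifics don't matter; the exercise is to find a table like this in your own business where the right-hand column is worth predicting.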
Switching gears a bit, you say that we had these early dreams of building a general intelligence, do you still think we’re going to build one sometime?
Maybe. I don’t like to get into those conversations because I think they’re really distracting. I think we’ve got so many hard problems, poverty, conflict—
An AGI would sure be helpful with those, wouldn’t it?
No. See that’s the problem, an AGI, it’s not aiming in the right direction, it’s ultimately going to be really distracting. We need to do the work, right? We need to go up the ladder, and the ladder starts with this single-link machine learning that we just talked about: you’ve got a pattern, you predict something. And then the next step is you try linking those up. You say, well, if I’m going to have this feature in my new phone, then let me predict how many people in a particular demographic will buy it, and then the next link is, given how many people will buy it, what price can I charge? And the next link is, given the price I can charge, how much money can I make? So it’s a chain of events that starts with some action that you take, and ultimately leads to some outcome.
I’m solidly convinced, from a lot of things I’ve done over the thirty years I’ve been in AI, that we have to go through this phase, where we’re building these multi-linked systems that get from actions to outcomes, and that’ll maybe ultimately get us to what you might call, generalized AI, but we’re not there yet. We’re not even very good at the single-link systems, let alone multi-link and understanding feedback loops and complex dynamics, and unintended consequences and all of the things that start to emerge when you start trying to simulate the future with multi-link systems.
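Her multi-link chain can be mocked up by composing steps so that each link's output feeds the next one, from an action through to an outcome. Everything in the sketch below is invented for illustration: the demand data, the pricing rule, and the product feature are assumptions, and in a real system each link might be its own learned model.

```python
# Sketch of a multi-link chain: product feature -> predicted demand -> price -> revenue.
# All numbers and both links are invented; only the chaining structure matters here.
import numpy as np
from sklearn.linear_model import LinearRegression

# Link 1: from a product feature (say, battery life in hours) to units sold.
feature_hours = np.array([[8], [10], [12], [14], [16]])
units_sold = np.array([50_000, 70_000, 95_000, 115_000, 140_000])
demand_model = LinearRegression().fit(feature_hours, units_sold)

# Link 2: from predicted demand to the price we can charge. A fixed rule here,
# but it could just as well be another learned model.
def price_for(units: float) -> float:
    return 400.0 + 0.0005 * units  # invented pricing rule

# Chain the links for a proposed action (ship a 15-hour battery).
predicted_units = float(demand_model.predict(np.array([[15]]))[0])
price = price_for(predicted_units)
revenue = predicted_units * price
print(f"units ~ {predicted_units:,.0f}, price ~ ${price:,.2f}, revenue ~ ${revenue:,.0f}")
```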
Well, let me ask the question a different way. Do you think that an AGI is an evolutionary result of a path we're already on? Like, we're at one percent and then we'll be at two and then four, and eventually we'll get there? Or is that just a whole different beast, where you don't just get there gradually; it's an “Aha!” kind of technology?
Yeah, I don't know, that's kind of a philosophical question, because even if I got to a full robot, we'd still have this question as to whether it was really conscious or intelligent. What I really think is important is to turn AI on its head: intelligence augmentation. What's definitely going to happen is that humans are going to be working alongside intelligent systems. What was once a pencil, then a calculator, and now a computer, is next going to be an AI. And just like computers have really super-powered our ability to write a document or have this podcast, right? They're going to start also supercharging our ability to think through complex situations, and it's going to be a side-by-side partnership for the foreseeable future, and perhaps indefinitely.
There’s a fair amount of fear in terms of what AI and automation in general will do to jobs. And, just to set up the question, there are often three different narratives. One is that, we’re about to enter this period where we’re going to have some portion of the population that is not able to add economic value and there’ll be, kind of, a permanent Great Depression. Then another view is that it will be far different than that, that every single thing a person can do, we’re going to build technology to do. And then there’s a third view that this is no different than any other transformative technology, people take it and use it to grow their own productivity, and everybody goes up a notch. What do you think, or a fourth choice, how do you see AI’s impact?
Well, I think multiple things are going to happen, we’re definitely seeing disruption in certain fields that AI is now able to do, but is it a different disruption than the introduction of the cotton gin or the automobile or any other technology disruption? Nah, it’s just got this kind of overlay of the robot apocalypse that makes it a little sexier to talk about. But, to me, it’s the same evolution we’ve always been going through as we build better and better tools to assist us with things. I’m not saying that’s not painful and I’m not saying that we won’t have displacement, but it’s not going to be a qualitatively different sort of shift in employment than we’ve seen before. I mean people have been predicting the end of employment because of automation for decades and decades. Future Shock, right? Alvin Toffler said that in the 60’s, and, AI is no different.
I think the other thing to say is we get into this hype-cycle because the vendors want you, as a journalist, to think it’s all really cool, then the journalists write about it and then there are more and more vendors, and we get really hyped about this, and I think it’s important to realize that we really are just in one-link AI right now—in terms of what’s widespread and what’s implemented and what’s useful, and where the hard implementation problems have been solved—so I would, sort of, tone down that side of things. From a jobs point of view, that means we’re not going to suddenly see this giant shift in jobs and automation, in fact I think AI is going to create many jobs. I wouldn’t say as many as we’ll lose, but I think there is a big opportunity for those fields. I hear about coal miners these days being retrained in IT, turns out that a lot of them seem to be really good, I’d love to train those other populations in how to be data scientists and machine learning people, I think there’s a great opportunity there.
Is there a shortage of talent in the field?
Absolutely, but, it’s not too hard to solve. The shortage of talent only comes when you think everybody has to understand these really complex PhD level frameworks. As the technology gets democratized, the ability to address the shortage of talent will become much easier. So we’re seeing one-click machine learning systems coming out, we’re seeing things like the AI labs that are coming out of places like Microsoft and Amazon. The technology is becoming something that lots of people can learn, as opposed to requiring this very esoteric, like, three computer science degrees like I have. And so, I think we’re going to start to see a decrease in that shortage in the near future.
All of the AI winters that happened in the past were preceded by hype followed by unmet expectations. Do you think we're going to have another AI winter?
I think we'll have an AI fall, but it won't be a winter, and here's why—we're seeing a level of substantive use cases for AI being deployed, especially in the enterprise, you know, widespread large businesses, at a level that never happened before. I was just talking to a guy earlier about the last AI hype cycle in the 80's, where VLSI computer design by AI was this giant thing, and “the fifth generation,” and the Japanese, and people were putting tens, hundreds of millions of dollars into these companies, and there was never any substance. There was no “there” there, right? Nobody ever had deployed systems. AI and law, same thing, there's been this AI and law effort for years and years and years, and it really never produced any commercial systems, for like a decade, and now we're starting to see some commercial solidity there.
So, in terms of that Gartner hype-cycle, we’re entering the mass majority, but we are still seeing some hype, so there’ll be a correction. And we’ll probably get to where we can’t say AI anymore, and we’ll have to come up with some new name that we’re allowed to say, because for years you couldn’t say AI, you had to say data mining, right? And then I had to call myself an analytics consultant, and now it’s kind of cool I can call myself an AI person again. So the language will change, but it’s not going to be the frozen winter we saw before.
I wonder what term we'll replace it with? I mean, I hear people who avoid it are using “cognitive systems” and all of that, but it sounds just, kind of, like synonym substitution.
It is, and that's how it always goes. I'm evangelizing multi-link machine learning right now, and I'm also testing “decision intelligence.” It's kind of fun to be at the vanguard, where, as you're inventing the new things, you get to name them, right? And you get to try to make everybody use that terminology. It's in flux right now. There was a time when we didn't call e-mail “e-mail,” right? It was “computer mail.” So, I don't know, it hasn't started to crystallize yet; it's still in the twenty different new terminologies.
Eventually it will become just “mail,” and the other will be, you know, “snail mail.” It happens a lot, like, corn on the cob used to just be corn, and then canned corn came along, so now we say corn on the cob, or cloth diapers… Well, anyway, it happens.
Walk me through some of the misconceptions that you come across in your day-to-day?
Sure. I think that the biggest mistake that I see is people get lost in algorithms or lost in data. So, lost in algorithms: let's say you're listening to this and you say, “Oh, I'd like to be interested in AI,” and you go out and you google AI. The analogy, I think, is, imagine we're the auto industry, and for the last thirty years the only people in the auto industry had been inventing new kinds of engines, right? So you're going to see the Wankel engine, and the four cylinder, you're going to read about the carburetors, and it's all been about the technology, right? And guess what, we don't need five hundred different kinds of engines, right? So, if you go out and google it you're going to be totally lost in hundreds of frameworks and engines and stuff. So the big misconception is that you somehow have to master engine building in order to drive the car, right? You don't have to, but yet all the noise out there, I mean it's not noise, it's really great research, but from your point of view, someone who actually wants to use it for something valuable, it is kind of noise. So, I think one of the biggest mistakes people get into is they create a much higher barrier: they think they have to learn all this stuff in order to drive a car, which is not the case; it's actually fairly simple technology to use. So, you need to talk to people like me who are, kind of, practitioners. Or, as you google, have a really discerning eye for the projects that worked and what the business value was, you know? And that applied side of things as opposed to the algorithm design.
Without naming company names or anything, tell me some projects that you worked on and how you looked at it and how you approached it and what was the outcome like, just walk me through a few use cases.
So I’ll rattle through a few of them and you can tell me which one to talk about, which one you think is the coolest—morphological hair comparison for the Colorado Bureau of Investigation, hazardous buried waste detection for the Department of Energy, DNA pattern recognition for the human genome project, stock price prediction, medical precision medicine prediction… It’s the coolest field, you get to do so much interesting work.
Well let’s start with the hair one.
Sure, so this was actually a few years back; it was during the OJ trials. The question was, you go out to a crime scene and there's hairs and fibers that you pick up, the CSI guys, right? And then you also have hairs from your suspect. So you've got these two hairs, one from the crime scene, one from your suspect, and if they match, that's going to be some evidence that your guy was at the scene, right? So how do you go about doing that? Well, you take a microphotograph of the two of them. The human eye is pretty good at, sort of, looking at the two hairs and seeing if they match; we actually use a microscope that shows us both at the same time. But AI can take it a step further. So, just like AI is, kind of, the go-to technology for breast cancer prediction and pap smear analysis and all of this micro-photography stuff, this project that I was on used AI to recognize whether these two hairs came from the same guy or not. It's a pretty neat project.
And so that was in the 90’s?
Yeah it was a while back.
And that would have been using techniques we still have today, or using older techniques?
Both, actually. That was a back-propagation neural network, and I'm not allowed to say back propagation, nor am I really allowed to say neural network, but the hidden secret is that all the great AI stuff still uses back-propagation-like neural networks. So, it was the foundations of what we do today. Today we still use neural nets, they're the main machine learning algorithm, but they're deeper, they have more and more layers of artificial neurons. We still learn, we still change the weights of the simulated synapses on the networks, but we have a more sophisticated algorithm that does that. So, foundationally, it's really the same thing; it hasn't changed that much in so many years. We're still artificial neural network centric in most of AI today.
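For readers curious what those foundations look like, here is a toy back-propagation network in plain NumPy: weighted "synapses," a squashing nonlinearity, and weight updates driven by the error signal. It is an illustrative sketch of the general technique, not the forensic system described in the interview.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR, the classic toy target

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1)                  # forward pass through the hidden layer
    out = sigmoid(hidden @ W2)                # forward pass through the output layer
    grad_out = (out - y) * out * (1 - out)    # error back-propagated at the output
    grad_hidden = grad_out @ W2.T * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out           # update the "synapse" weights
    W1 -= 0.5 * X.T @ grad_hidden

print(out.round(2))                           # should approach [0, 1, 1, 0]
```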
Now let’s go to hazardous waste.
Sure, so this was for the Department of Energy. Again, it was an imaging project, but here the question was, you've got these buried drums of leaking chemical nerve gas that've been dumped into these Superfund sites, and it was really carelessly done. I mean, literally, trenches were dug and radioactive stuff was just dumped in them. And after a few years folks realized that wasn't so smart, and so then they took those sites and they passed these pretty cool sensors over them, like gravimeters that detected micro-fluctuations in gravity, and ground-penetrating radar, and other techniques that could sense what was underground—this was originally developed for the oil industry, actually, to find buried energy deposits—and you try to characterize where those things are. Where the neural net was good was in combining all those sensors from multiple modalities into a picture that was better than any one of the sensors.
And what technologies did that use?
Neural nets, same thing, back propagation.
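A hedged sketch of the sensor-fusion idea just described: features from different modalities are concatenated and fed to one learned model, which can do better than any single sensor alone. The data here is synthetic stand-in data, not the Department of Energy's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
gravity = rng.normal(size=(500, 8))          # stand-in gravimeter features
radar = rng.normal(size=(500, 16))           # stand-in ground-penetrating radar features
labels = (gravity[:, 0] + radar[:, 0] > 0).astype(int)   # toy target: "is a drum buried here?"

fused = np.hstack([gravity, radar])          # the fusion step: one combined picture
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```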
At the beginning you made some references to some recent breakthroughs, but would you say that most of our techniques are things we’ve known about since the 60’s, we just didn’t have the computer horsepower to do it? Would that be fair to say or not?
It's both; it's the rocket engines plus the rocket fuel, right? I remember, as a graduate student, I used to take over all the faculty's computers at night when there was no security, and I'd run my neural net training on forty different machines and then have them all RPC the data back to my machine. So, I had enough horsepower back then, but what we were missing was the modern deep-learning algorithms that allow us to get better performing systems out of that data, and out of those high-performance computing environments.
And now what about the human genome project, tell me about that project.
That was looking at DNA patterns, and trying to identify something called a ribosomal-binding site. If you saw that Star Trek episode where everybody turns into a lizard: there are these parts of our DNA, between the parts that express themselves, that we don't really know what they do. This was a project, nicely funded by a couple of funding agencies, to detect these locations on a DNA strand.
Was that the one where everybody essentially accelerated their evolution and Picard was some kind of a nervous chimp of some kind, somebody else was a salamander?
Yes that’s right, remember it was Deanna Troi who turned into a salamander, I think. And she was expressing the introns, the stuff that was between the currently expressed genome. This was a project that tried to find the boundaries between the expressed and the unexpressed parts. Pretty neat science project, right?
Exactly. Tell me about the precision medicine one, was that a recent one?
Yeah, so the first three were kind of older. I’m Chief Scientist, also, at ehealthanalytics.net and they’ve taken on this medical trials project. It turns out that if you do a traditional medical trial, it’s very backward facing and you often have very homogenous data. In contrast, we’ve got a lot of medical devices that are spitting out data, like, I’m wearing my Fitbit right now and it’s got data about me, and, you know, we have more DNA information, and with all of that we can actually do better than traditional medical trials. So, that was a project I did for those guys. More recently we’re predicting failure in medical devices. That’s not as much precision medicine as precision analysis of medical devices, so that we can catch them in the field before they fail, and that’s obviously a really important thing to be able to do.
And so you’ve been at this for, you say, three decades.
Three decades, yeah. It was about 1984, when I built my first neural net.
Would you say that your job has changed over that time, or has it, in a way, not—you still look at the data, look at the approach, figure out what question you’re asking, figure out how to get an answer?
From that point of view, it's really been the same. I think what has changed is the results: before, once I built the neural net, the accuracies and the false-positives and the false-negatives were kind of, eh, they weren't really exciting. Now, we see Microsoft, a couple of years ago, using neural network transfer, which was my big algorithm invention, to beat humans at visual pattern recognition. So, the error rates, just with the new deep learning algorithms, have plummeted, as I'm sure your other interviewees have told you, but the process has been really the same.
And I'll tell you what's surprising: you'd think that things would have changed a lot, but there just hasn't been a lot of people who drive the cars, right? Up until very recently, this field has really been dominated by people who build the engines. So, we're just on the cusp. I look at SAP as a great example of this. SAP's coming out with this big new Leonardo launch of its machine learning platform, and they're not trying to build new algorithms, right? SAP is partnering with Google and NVIDIA, and what they recognize is that the next big innovation is in the ability to connect the algorithms to the applied problems, and just churn out one use case after another that drives value for their customers. I would've liked to have seen us progress further along those lines over the last few years, but I guess the performance just wasn't there and the interest wasn't there. That's what I'm excited about with this current period of excitement in AI, that we'll finally start to have a bunch of people who drive the cars, right? Who use this technology in valuable ways to get from here to there, to predict stock prices, to match people to the perfect job—that's another project that I'm doing, for human resources—all these very practical things that have so much value. But yeah, it hasn't really changed that much, but I hope it does. I hope we get better at software engineering for AI, because that's really what's just starting right now.
So, you, maybe, will become more of a car-driver—to use your analogy—in the future. Even somebody as steeped in it as you, it sounds like you would prefer to use higher-level tools that are just that much easier to use.
Yeah, and the reason is, we have plenty of algorithms; we're totally saturated with new algorithms. The big desperate need that everybody has is, again, to democratize this and to make it useful, and to drive business value. You know, a friend of mine who just finished an AI project said, “On a ten million dollar project, we just upped our revenue by eighteen percent from this AI thing.” That's typical, and that's huge, right? But yet everybody was doing it for the very first time, and he's at a fairly large company, so that's where the big excitement is. I mean, I know it's not as sexy as artificial general intelligence, but it's really important to the human race, and that's why I keep coming back to it.
You made a passing reference to image recognition and the leap forward we have there, how do you think it is that people do such a good job, I mean is it just all transferred learning after a while, do we just sort of get used to it, or do you think people do it in a different way than we got machines to do it?
In computer vision, there was a paper that came out last year that Yann LeCun was sending around, which said that somebody had looked at the structure of the deep-learning vision networks and found this really strong analogue to the multiple layers—what is it, the lateral geniculate nucleus? I'm not a human vision person, but there are these structures in the human vision system that are very analogous. So, it's like this convergent evolution: computers converge to the same way of recognizing images that, it turns out, the human brain uses.
Were we totally inspired by the human brain? Yes, to some extent. Back in the day when we’d go to the NIPS conference, half the people there were in neurophysiology, and half of us were computer modelers, more applied people, and so there was a tremendous amount of interplay between those two sides. But more recently, folks have just tried to get computers to see things, for self-driving cars and stuff, and we keep heading back to things that sort of look like the human vision system, I think that’s pretty interesting.
You know, I think the early optimism in AI—like the Dartmouth project, where they thought they could do a bunch of stuff if they worked really hard on it for one summer—stemmed from a hope that, just like in physics you have a few laws that explain everything, in electronics, in magnetism, it's just a few laws. And the hope was that intelligence would just be three or four simple laws; we'll figure them out and that's all it's going to be. I guess we've given up on that, or have we? We're essentially brute-forcing our way to everything, right?
Yeah, it's sort of this emergent property, right? Like “Conway's Game of Life,” which has these very complex emergent epiphenomena from just a few simple rules. I, actually, haven't given up on that; I just think we don't quite have the substrate right yet. And again, I keep going back to single-link learning versus multi-link. I think when we start to build multi-link systems that have complex dynamics, that end up doing four-at-a-time simulation using piecewise backward machine learning based on historical data, I think we are going to see a bit of an explosion and start to see, kind of, this emergence happen. That's the optimistic, non-practical side of me. I just think we've been focusing so much on certain low-hanging fruit problems, right? We had image recognition—because we had these great successes in medicine, even with the old algorithms, they were just so great at cancer recognition in images—and then Google was so smart with advertising, and then Netflix with the movies. But if you look at those successful use cases, there's only like a dozen of them that have been super successful, and we've been really focused on the use cases that fit our hammer; we've been looking at nails, right? Because that's the technology that we had. But I think multi-link systems will make a big difference going forward, and when we do that I think we might start to see this kind of explosion in what the systems can do. I'm still an optimist there.
There are those who think we really will have an explosion, literally, from it all.
Yeah, like the singularitists, yep.
It’s interesting that there are people, high profile individuals of unquestionable intelligence, who believe we are at the cusp of building something transformative, where do you think they err?
Well, I can really only speak to my own experience. I think there's this hype thing, right? All the car companies want to show that they're still relevant, so they hype the self-driving cars, and of course we're not taking security and other things into account, and we all kind of wanted to get jumping on that bandwagon. But my experience is just very plebeian: you just got to do the work, you got to roll up your sleeves, you got to condition your data, you got to go around the data science loop, and then you need to go forward. I think people are really caught up in this prediction task, like, “What can we predict, what will the AI tell us, what can we learn from the AI?” and I think we're all caught up in the wrong question; that's not the question. The question is, what can we do? What actions will we take that lead to which outcomes we care about, right? So, what should we do in this country that's struggling in conflict, to avoid the unintended consequences? What should we teach these students so that they have a good career? What actions can we take to mitigate against sea-level rise in our city?
Nobody is thinking in terms of actions that lead to outcomes; they're thinking of data that leads to predictions. And again, I think this comes from the very academic history of AI, where it was all about the idea factory and what we can conclude from this. And yeah, it's great, that's part of it, being able to say, here's this image, here's what we're looking at, but to really be valuable for something it can't just be recognizing an image; it has to take some action that leads to some outcome. I think that's what's been missing and that's what's coming next.
Well that sounds like a great place to end our conversation.
Excellent.
I want to thank you so much. You've just given us such a good overview of what we can do today, and how to go about doing it, and I thank you for taking the time.
Thank you Byron, I appreciate the time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 29: A Conversation with Hugo LaRochelle

[voices_in_ai_byline]
In this episode, Byron and Hugo discuss consciousness, machine learning and more.
[podcast_player name=”Episode 29 – A Conversation with Hugo LaRochelle” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-49-50)-hugo-larochelle.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-2.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited; our guest is Hugo Larochelle. He is a research scientist over at Google Brain. That would be enough to say about him to start with, but there’s a whole lot more we can go into. He’s an Associate Professor, on leave presently. He’s an expert on machine learning, and he specializes in deep neural networks in the areas of computer vision and natural language processing. Welcome to the show, Hugo.
Hugo Larochelle: Hi. Thanks for having me.
I’m going to ask you only one, kind of, lead-in question, and then let’s dive in. Would you give people a quick overview, a hierarchical explanation of the various terms that I just used in there? In terms of, what is “machine learning,” and then what are “neural nets” specifically as a subset of that? And what is “deep learning” in relation to that? Can you put all of that into perspective for the listener?
Sure, let me try that. Machine learning is the field in computer science, and in AI, where we are interested in designing algorithms or procedures that allow machines to learn. And this is motivated by the fact that we would like machines to be able to accumulate knowledge in an automatic way, as opposed to another approach which is to just hand-code knowledge into a machine. That’s machine learning, and there are a variety of different approaches for allowing for a machine to learn about the world, to learn about achieving certain tasks.
Within machine learning, there is one approach that is based on artificial neural networks. That approach is more closely inspired by our brains, by real neural networks and real neurons. It is still only somewhat vaguely inspired—in the sense that many of these algorithms probably aren't close to what real biological neurons are doing—but some of the inspiration for it, I guess, is that a lot of people in machine learning, and specifically in deep learning, have this perspective that the brain is really a biological machine. That it is executing some algorithm, and we would like to discover what this algorithm is. And so, we try to take inspiration from the way the brain functions in designing our own artificial neural networks, but also take into account how machines work and how they're different from biological neurons.
There's the fundamental unit of computation in artificial neural networks, which is this artificial neuron. You can think of it, for instance, that we have neurons that are connected to our retina. And so, on a machine, we'd have a neuron that would be connected to, and take as input, the pixel values of some image on a computer. And in artificial neural networks, for the longest time, we would have such neural networks with mostly a single layer of these neurons—so multiple neurons trying to detect different patterns in, say, images—and that was the most sophisticated type of artificial neural network that we could really train with success, say ten years ago or more, with some exceptions. But in the past ten years or so, there's been development in designing learning algorithms that leverage so-called deep neural networks that have many more of these layers of neurons. Much like, in our brain, we have a variety of brain regions that are connected with one another: how the light, say, flows in our visual system, it flows from the retina to various regions of the visual cortex. In the past ten years there's been a lot of success in designing more and more successful learning algorithms that are based on these artificial neural networks with many layers of artificial neurons. And that's been something I've been doing research on for the past ten years now.
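As a minimal illustration of the artificial neuron just described, the sketch below wires a single neuron to the pixel values of a toy image: a weighted sum of the inputs followed by a squashing nonlinearity. The sizes and values are arbitrary, chosen only to make the example self-contained.

```python
import numpy as np

def neuron(pixels: np.ndarray, weights: np.ndarray, bias: float) -> float:
    # Weighted sum of the inputs, passed through a sigmoid "activation"
    return float(1.0 / (1.0 + np.exp(-(pixels @ weights + bias))))

rng = np.random.default_rng(0)
image = rng.random(28 * 28)           # pixel values of a flattened toy image
weights = rng.normal(size=28 * 28)    # one weight per input pixel (untrained here)
print(neuron(image, weights, bias=0.0))
```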
You just touched on something interesting, which is this parallel between biology and human intelligence. The human genome is like 725MB, but so much of it we share with plants and other life on this planet. If you look at the part that’s uniquely human, it’s probably 10MB or something. Does that imply to you that you can actually create an AGI, an artificial general intelligence, with as little as 10MB of code if we just knew what that 10MB would look like? Or more precisely, with 10MB of code could you create something that could in turn learn to become an AGI?
Perhaps we can make that parallel. I’m not so much an expert on biology to be able to make a specific statement like that. But I guess in the way I approach research—beyond just looking at the fact that we are intelligent beings and our intelligence is essentially from our brain, and beyond just taking some inspiration from the brain—I mostly drive my research on designing learning algorithms more from math or statistics. Trying to think about what might be a reasonable approach for this or that problem, and how could I potentially implement it with something that looks like an artificial neural network. I’m sure some people have a better-informed opinion as to what extent we can draw a direct inspiration from biology, but beyond just the very high-level inspiration that I just described, what motivates my work and my approach to research is a bit more taking inspiration from math and statistics.
Do you begin with a definition of what you think intelligence is? And if so, how do you define intelligence?
That’s a very good question. There are two schools of thought, at least in terms of thinking of what we want to achieve. There’s one which is we want to somehow reach the closest thing to perfect rationality. And there’s another one which is to just achieve an intelligence that’s comparable to that of human beings, in the sense that, as humans perhaps we wouldn’t really draw a difference between a computer or another person, say, in talking with that machine or in looking at its ability to achieve a specific task.
A lot of machine learning really is based on imitating humans. In the sense that, we collect data, and this data, if it’s labeled, it’s usually produced by another person or committee of persons, like crowd workers. I think those two definitions aren’t incompatible, and it seems the common denominator is essentially a form of computation that isn’t otherwise easily encoded just by writing code yourself.
At the same time, what’s kind of interesting—and perhaps evidence that this notion of intelligence is elusive—is there’s this well-known phenomenon that we call the AI effect, which is that it seems very often whenever we reach a new level of AI achievement, of AI performance for a given task, it doesn’t take a whole lot of time before we start saying that this actually wasn’t AI, but this other new problem that we are now interested in is AI. Chess is a little bit like that. For a long time, people would associate chess playing as a form of intelligence. But once we figured out that we can be pretty good by treating it as, essentially, a tree search procedure, then some people would start saying, “Well that’s not really AI.” There’s now this new separation where chess-playing is not AI anymore, somehow. So, it’s a very tough thing to pin down. Currently, I would say, whenever I’m thinking of AI tasks, a lot of it is essentially matching human performance on some particular task.
Such as the Turing Test. It’s much derided, of course, but do you think there’s any value in it as a benchmark of any kind? Or is it just a glorified party trick when we finally do it? And to your point, that’s not really intelligence either.
No, I think there’s value to that, in the sense that, at the very least, if we define a specific Turing Test for which we currently have no solution, I think it is valuable to try to then succeed in that Turing Test. I think it does have some value.
There are certainly situations where humans can also do other things. So, arguably, you could say that if someone plays against AlphaGo, but wasn’t initially told if it was AlphaGo or not—though, interestingly, some people have argued it’s using strategies that the best Go players aren’t necessarily considering naturally—you could argue that right now if you played against AlphaGo you would have a hard time determining that this isn’t just some Go expert, at least many people wouldn’t be able to say that. But, of course, AlphaGo doesn’t really classify natural images, or it doesn’t dialog with a person. But still, I would certainly argue that trying to tackle that particular milestone is useful in our scientific endeavor towards more and more intelligent machines.
Isn’t it fascinating that Turing said that, assuming the listeners are familiar with it, it’s basically, “Can you tell if this is a machine or a person you’re talking to over a computer?” And Turing said that if it can fool you thirty percent of the time, we have to say it’s smart. And the first thing you say, well why isn’t it fifty percent? Why isn’t it, kind of, indistinguishable? An answer to that would probably be something like, “Well, we’re not saying that it’s as smart as a human, but it’s intelligent. You have to say it’s intelligent if it can fool people regularly.” But the interesting thing is that if it can ever fool people more than fifty percent, the only conclusion you can draw is that it’s better at being human than we are…or seeming human.
Well definitely that’s a good point. I definitely think that intelligence isn’t a black or white phenomenon, in terms of something is intelligent or isn’t, it’s definitely a spectrum. What it means for someone to fool a human more than actual humans into thinking that they’re human is an interesting thing to think about. I guess I’m not sure we’re really quite there yet, and if we were there then this might just be more like a bug in the evaluation itself. In the sense that, presumably, much like we have now adversarial networks or adversarial examples, so we have methods that can fool a particular test. I guess it just might be more a reflection of that. But yeah, intelligence I think is a spectrum, and I wouldn’t be comfortable trying to pin it down to a specific frontier or barrier that we have to reach before we can say we have achieved actual AI.
To say we're not quite there yet, that is an exercise in understatement, right? Because I can't find a single one of these systems that are trying to pass the test that can answer the following question, “What's bigger, a nickel or the sun?” You don't need four seconds to know the answer. Even the best contests restrict the questions enormously. They try to tilt everything in favor of the machine. The machine can't even put in a showing. What do you infer from that, that we are so far away?
I think that’s a very good point. And it’s interesting, I think, to talk about how quickly are we progressing towards something that would be indistinguishable from human intelligence—or any other—in the very complete Turing Test type of meaning. I think that what you’re getting at is that we’re getting pretty good at a surprising number of individual tasks, but for something to solve all of them at once, and be very flexible and capable in a more general way, essentially your example shows that we’re quite far from that. So, I do find myself thinking, “Okay, how far are we, do we think?” And often, if you talk to someone who isn’t in machine learning or in AI, that’s often the question they ask, “How far away are we from AIs doing pretty much anything we’re able to do?” And it’s a very difficult thing to predict. So usually what I say is that I don’t know because you would need to predict the future for that.
One bit of information that I feel we don’t often go back to is, if you look at some of the quotes of AI researchers when people were, like now, very excited about the prospect of AI, a lot of these quotes are actually similar to some of the things we hear today. So, knowing this, and noticing that it’s not hard to think of a particular reasoning task where we don’t really have anything that would solve it as easily as we might have thought—I think it just suggests that we still have a fairly long way in terms of a real general AI.
Well let’s talk about that for just a second. Just now you talked about the pitfalls of predicting the future, but if I said, “How long will it be before we get to Mars?” that’s a future question, but it’s answerable. You could say, “Well, rocket technology and…blah, blah, blah…2020 to 2040,” or something like that. But if you ask people who are in this field—at least tangentially in the field—you get answers between five and five hundred years. And so that implies to me that not only do we not know when we’re going to do it, we really don’t know how to build an AGI.  
So, I guess my question is twofold. One, why do you think there is that range? And two, do you think that, whether or not you can predict the time, do you think we have all of the tools in our arsenal that we need to build an AGI? Do you believe that with sufficient advances in algorithms, sufficient advances in processors, with data collection, etcetera, do you think we are on a linear path to achieve an AGI? Or is an AGI going to require some hitherto unimaginable breakthrough? And that’s why you get five to five hundred years because that’s the thing that’s kind of the black swan in the room?
That is my suspicion, that there are at least one and probably many technological breakthroughs—that aren’t just computers getting faster or collecting more data—that are required. One example, which I feel is not so much an issue with compute power, but is much more an issue of, “Okay, we don’t have the right procedure, we don’t have the right algorithms,” is being able to match how as humans we’re able to learn certain concepts with very little, quote unquote, data or human experience. An example that’s often given is if you show me a few pictures of an object, I will probably recognize that same object in many more pictures, just from a few—perhaps just one—photographs of that object. If you show me a picture of a family member and you show me other pictures of your family, I will probably identify that person without you having to tell me more than once. And there are many other things that we’re able to learn from very little feedback.
I don’t think that’s just a matter of throwing existing technology, more computers and more data, at it; I suspect that there are algorithmic components that are missing. One of them might be—and it’s something I’m very interested in right now—learning to learn, or meta-learning. So, essentially, producing learning algorithms from examples of tasks, and, more generally, just having a higher-level perspective of what learning is. Acknowledging that it works on various scales, and that there are a lot of different learning procedures happening in parallel and in intricate ways. And so, determining how these learning processes should act at various scales, I think, is probably a question we’ll need to tackle more and actually find a solution for.
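As a toy illustration of the few-shot point above (recognizing an object from a single photograph), the sketch below classifies a new image by comparing embeddings against one labeled example per class. The `embed` function is a crude placeholder for whatever learned feature extractor a real system would use; the "images" are random arrays standing in for photos.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in embedding: a real system would use a learned feature extractor.
    flat = image.flatten()
    return flat / (np.linalg.norm(flat) + 1e-8)

rng = np.random.default_rng(0)
support = {"mug": rng.random((8, 8)), "phone": rng.random((8, 8))}   # one photo per class
query = support["mug"] + 0.05 * rng.random((8, 8))                   # a new photo of the mug

# Classify by nearest support embedding: a single example per class is enough here.
scores = {name: float(embed(query) @ embed(img)) for name, img in support.items()}
print(max(scores, key=scores.get))   # expected: "mug"
```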
There are people who think that we’re not going to build an AGI until we understand consciousness. That consciousness is this unique ability we have to change focus, and to observe the world a certain way and to experience the world a certain way that gives us these insights. So, I would throw that to you. Do you, A), believe that consciousness is somehow key to human intelligence; and, B), do you think we’ll make a conscious computer?
That’s a very interesting question. I haven’t really wrapped my head around what is consciousness relative to the concept of building an artificial intelligence. It’s a very interesting conversation to have, but I really have no clue, no handle on how to think about that.
I would say, however, that clearly notions of attention, for instance, being able to focus attention on various things, or an ability to seek information, those are clearly components for which there's, currently—I guess for attention we have some fairly mature solutions which work, though in somewhat restrictive ways and not in the more general way; information seeking, I think, is still very much related to the notion of exploration in reinforcement learning—still a very big technical challenge that we need to address.
So, some of these aspects of our consciousness, I think, are kind of procedural, and we will need to figure out some algorithm to implement these, or learn to extract these behaviors from experience and from data.
You talked a little bit earlier about learning from just a little bit of data, that we’re really good at that. Is that, do you think, an example of humans being good at unsupervised learning? Because obviously as kids you learn, “This is a dog, and this is a cat,” and that’s supervised learning. But what you were talking about, was, “Now I can recognize it in low light, I can recognize it from behind, I can recognize it at a distance.” Is that humans doing a kind of unsupervised learning? Maybe start off by just explaining the concept and the hope about unsupervised learning, that it takes us, maybe, out of the process. And then, do you think humans are good at that?
I guess unsupervised learning is, by definition, something that's not supervised learning; it's kind of an extreme of not using supervised learning. An example of that would be—and this is something I investigated quite a bit when I did my PhD ten years ago—to have a procedure, a learning algorithm, that can, for instance, look at images of hundreds of characters and be able to understand that each of these pixels in these images of characters is related. That there are higher-level concepts that explain why this is a digit. For instance, there is the concept of pen strokes; a character is really a combination of pen strokes. So, unsupervised learning would try to—just from looking at images, from the fact that there are correlations between these pixels, that they tend to look like something different than just a random image, and that pixels arrange themselves in a very specific way compared to any random combination of pixels—be able to extract these higher-level concepts, like pen strokes in handwritten characters. In a more complex, natural scene this would be identifying the different objects without someone having to label each object. Because really what explains what I'm seeing is that there are a few different objects, with a particular light interacting with the scene, and so on.
That's something that I've looked at quite a bit, and I do think that humans are doing some form of that. But also, probably as infants, we're interacting with our world, and we're exploring it, and we're being curious. And that starts being something a bit further away from just pure unsupervised learning and a bit closer to things like reinforcement learning. So, this notion that I can actually manipulate my environment, and from this I can learn what its properties are, what are the factors of variation that characterize this environment?
And there’s an even more supervised type of learning that we see in ourselves as infants that is not really captured by purely supervised learning, which is being able to exchange or to learn from feedback from another person. So, we might imitate someone, and that would be closer to supervised learning, but we might instead get feedback that’s worded. So, if a parent says do this or don’t do that, this isn’t exactly an imitation this is more like a communication of how you should adjust your behavior. And this is a form of weakly supervised learning. So, if I tell my kid to do his or her homework, or if I give instructions on how to solve a particular problem set, this isn’t a demonstration, so this isn’t supervised learning. This is more like a weak form of supervised learning. Which even then I think we don’t use as much in the known systems that work well currently that people are using in object recognition systems or machine translation systems and so on. And so, I believe that these various forms of learning that are much less supervised than the common supervised learning is a direction in research where we still have a lot of progress to make.
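To illustrate the unsupervised part of that answer (extracting stroke-like structure from images of characters without using labels), here is a hedged sketch using non-negative matrix factorization on scikit-learn's small digits set; it is a crude stand-in for the richer unsupervised models being described.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF

digits = load_digits()                               # 8x8 handwritten digits; labels ignored
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
codes = model.fit_transform(digits.data)             # how strongly each "stroke" is present

# Each row of components_ is a pixel pattern; combinations of these patterns
# reconstruct the characters, a bit like the pen strokes described above.
print(model.components_.shape)                       # (16, 64)
print(codes.shape)                                   # (1797, 16)
```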
So earlier you were talking about meta learning, which is learning how to learn, and I think there’s been a wide range of views about how artificial intelligence and an AGI might work. And on one side was an early hope that, like the physical universe which is governed just by very few laws, and magnetism very few laws, electricity very few laws, we hoped that intelligence was governed by just a very few laws that we could learn. And then on the other extreme you have people like the late Marvin Minsky who really saw the brain as a hack of a couple of hundred narrow AIs, that all come together and give us, if not a general intelligence at least a really good substitute for one. I guess a belief in meta learning is a belief in the former case, or something like it, that there is a way to learn how to learn. There’s a way to build all those hacks. Would you agree? Do you think that?
We can take one example there. I think under a somewhat general definition of what learning to learn or meta learning is, it's something that we could all agree exists, which is, as humans, we're the result of years of evolution. And evolution is a form of adaptation, I guess. But then within our lifespan, each individual will also adapt to its specific human experience. So, you can think of evolution as being kind of like the meta learning to the learning that we do as humans in our individual lives every day. But then even in our own lives, I think there are clearly ways in which my brain is adapting as I'm growing older from a baby to an adult that are not conscious. There are ways in which I'm adapting in a rational way, in conscious ways, which rely on the fact that my brain has adapted to be able to perceive my environment—my visual cortex just maturing. So again, there are multiple layers of learning that rely on each other. And so, I think this is, at a fairly high level, but I think in a meaningful way, a form of meta learning. For that reason, I think that investigating how to build learning-to-learn systems is a process that's valuable in informing how to have more intelligent agents and AIs.
There’s a lot of fear wrapped up in the media coverage of artificial intelligence. And not even getting into killer robots, just the effects that it’s going to have on jobs and employment. Do you share that? And what is your prognosis for the future? Is AI in the end going to increase human productivity like all other technologies have done, or is AI something profoundly different that’s going to harm humans?
That’s a good question. What I can say is that I am motivated by—and what makes me excited about AI—is that I see it as an opportunity of automating parts of my day-to-day life which I would rather be automated so I can spend my life doing more creative things, or the things that I’m more passionate about or more interested in. I think largely because of that, I see AI as a wonderful piece of technology for humanity. I see benefits in terms of better machine translation which will better connect the different parts of the world and allow us to travel and learn about other cultures. Or how I can automate the work of certain health workers so that they can spend more time on the harder cases that probably don’t receive as much attention as they should.
For that reason—and because I’m personally motivated automating these aspects of life which we would want to see automated—I am fairly optimistic about the prospects for our society to have more AI. And, potentially, when it comes to jobs we can even imagine automating our ability to progress professionally. Definitely there’s a lot of opportunities in automating part of the process of learning in a course. We now have many courses online. Even myself when I was teaching, I was putting a lot of material on YouTube to allow for people to learn.
Essentially, I identified that the day-to-day teaching that I was doing in my job was very repetitive. It was something that I could record once and for all, and instead focus my attention on spending time with the students and making sure that each individual student resolves their own misunderstanding about the topic. Because my mental model of the student in general is that it's often unpredictable how they will misunderstand a particular aspect of the course. And so, you actually want to spend some time interacting with that student, and you want to do that with as many students as possible. I think that's an example where we can think of automating particular aspects of education so as to support our ability to have everyone be educated and be able to have a meaningful professional life. So, I'm overall optimistic, largely because of the way I see myself using AI and developing AI in the future.
Anybody who's listened to many episodes of the show will know I'm very sympathetic to that position. I think it's easy to point to history and say that in the last two hundred and fifty years, other than the Depression, which wasn't caused by technology obviously, unemployment has been between five and nine percent without fail. And yet, we've had incredibly disruptive technologies, like the mechanization of industry, the replacement of animal power with machine power, electrification, and so forth. And in every case, humans have used those technologies to increase their own productivity and therefore their incomes. And that is the entire story of the rising standard of living for everybody, at least in the western world.
But I would be remiss not to make the other case, which is that there might be a point, an escape velocity, where a machine can learn a new job faster than a human. And at that point, at that magic moment, every new job, everything we create, a machine would learn it faster than a human. Such that, literally, everything from Michael Crichton down to…everybody—everybody finds themselves replaced. Is that possible? And if that really happened, would that be a bad thing?
That’s a very good question I think for society in general. Maybe because my day-to-day is about identifying what are the current challenges in making progress in AI, I see—and I guess we’ve touched that a little bit earlier—that there are still many scientific challenges, that it doesn’t seem like it’s just a matter of making computers faster and collecting more data. Because I see these many challenges, and because I’ve seen that the scientific community, in previous years, has been wrong and has been overly optimistic, I tend to err on the side of less gloomy and a bit more conservative in how quickly we’ll get there, if we ever get there.
In terms of what it means for society—if that was to ever happen that we can automate essentially most things—I unfortunately feel ill-equipped as a non-economist to be able to really have a meaningful opinion about this. But I do think it's good that we have a dialog about it, as long as it's grounded in facts. Which is why it's a difficult question to discuss, because we're talking about a hypothetical future that might not exist for a very long time. But as long as we have, otherwise, a rational discussion about what might happen, I don't see a reason not to have that discussion.
It’s funny. Probably the truest thing that I’ve learned from doing all of these chats is that there is a direct correlation between how much you code and how far away you think an AGI is.
That’s quite possible.
I could even go further to say that the longer you have coded, the further away you think it is. People who are new at it are like, “Yeah. We’ll knock this out.” And the other people who think it’s going to happen really quickly are more observers. So, I want to throw a thought experiment to you.
Sure.
It’s a thought experiment that I haven’t presented to anybody on the show yet. It’s by a man named Frank Jackson, and it’s the problem of Mary, and the problem goes like this. There’s this hypothetical person, Mary, and Mary knows everything in the world about color. Everything is an understatement. She has a god-like understanding of color, everything down to the basic, most minute detail of light and neurons and everything. And the rub is that she lives in a room that she’s never left, and everything she’s seen is black and white. And one day she goes outside and she sees red for the first time. And the question is, does she learn anything new when that happens that she didn’t know before? Do you have an initial reaction to that?
My initial reaction is that, being colorblind I might be ill-equipped to answer that question. But seriously, so she has a perfect understanding of color but—just restating the situation—she has only seen in black and white?
Correct. And then one day she sees color. Did she learn anything new about color?
By definition of what understanding means, I would think that she wouldn’t learn anything about color. About red specifically.
Right. That is probably the consistent answer, but it’s one that is intuitively unsatisfying to many people. The question it’s trying to get at is, is experiencing something different than knowing something? And if in fact it is different, then we have to build a machine that can experience things for it to truly be intelligent, as opposed to just knowing something. And to experience things means you return to this thorny issue of consciousness. We are not only the most intelligent creature on the planet, but we’re arguably the most conscious. And that those two things somehow are tied together. And I just keep returning to that because it implies, maybe, you can write all the code in the world, and until the machine can experience something… But the way you just answered the question was, no, if you know everything, experiencing adds nothing.
I guess, unless that experience would somehow contradict what you know about the world, I would think that it wouldn’t affect it. And this is partly, I think, one challenge about developing AI as we move forward. A lot of the AIs that we’ve successfully developed that have to do with performing a series of actions, like playing Go for instance, have really been developed in a simulated environment. In this case, for a board game, it’s pretty easy to simulate it on a computer because you can literally write all the rules of the game so you can put them in the computer and simulate it.
But, for an experience such as being in the real world and manipulating objects, as long as that simulated experience isn’t exactly what the experience is in the real world, touching real objects, I think we will face a challenge in transferring any kind of intelligence that we grow in simulations, and transfer it to the real world. And this partly relates to our inability to have algorithms that learn rapidly. Instead, they require millions of repetitions or examples to really be close to what humans can do. Imagine having a robot go through millions of labeled examples from someone manipulating that robot, and showing it exactly how to do everything. That robot might essentially learn too slowly to really learn any meaningful behavior in a reasonable amount of time.
You used the word transfer three or four times there. Do you think that transfer learning, this idea that humans are really good at taking what we know in one domain space and applying it in another—you know, you walk around one big city and go to a different big city and you kind of map things. Is that a useful thing to work on in artificial intelligence?
Absolutely. In fact, we're seeing that with all the success that has been enabled by the ImageNet data set and the competition, which really is responsible for the revolution of deep neural nets and convolutional neural nets in the field of computer vision. It turns out that if you train an object recognition system on this large ImageNet data set, the models trained on that source of data can transfer really well to a surprising number of tasks. And that has very much enabled a kind of revolution in computer vision. But it's a fairly simple type of transfer, and I think there are more subtle ways of transferring, where you need to take what you knew before but slightly adjust it. How do you do that without forgetting what you learned before? So, understanding how these different mechanisms need to work together to be able to perform a form of lifelong learning, of being able to accumulate one task after another, and learning each new task with less and less experience, is something I think currently we're not doing as well as we need to.
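Here is a rough sketch of that simple type of transfer, assuming PyTorch and a recent torchvision are installed: a network pretrained on ImageNet is frozen and reused as a feature extractor, and only a small new head is trained for the new task. The five-class head and the omitted training loop are placeholders for whatever target problem you have.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")    # weights learned on ImageNet
for param in backbone.parameters():
    param.requires_grad = False                         # keep the transferred knowledge fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 5)    # new head for a hypothetical 5-class task
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...from here, train only the new head on images from the target domain (loader not shown)...
```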
What keeps you up at night? You meet a genie and you rub the bottle and the genie comes out and says, “I will give you perfect understanding of something.” What do you wrestle with that maybe you can phrase in a way that would be useful to the listeners?
Let's see. That's a very good question. Definitely, in my daily research, how we are able to accumulate knowledge, and how a machine would accumulate knowledge over a very long period, learning tasks and abilities in sequence, cumulatively, is something that I think about a whole lot. And this has led me to think about learning to learn, because I suspect there are ideas there. Once you have to learn one ability after another after another, the fact that we get better at that process is, perhaps, because we are also learning how to learn each task. There is this other scale of learning going on. How to do this exactly I don't quite know, and knowing it would, I think, be a pretty big step in our field.
I have three final questions, if I could. You’re in Canada, correct?
As it turns out, I’m currently still in the US because I have four kids, two of them are in school so I wanted them to finish their school year before we move. But the plan is for me to go to Montreal, yes.
I noticed something. There’s a lot of AI activity in Canada, a lot of leading research. How did that come about? Was that a deliberate decision or just a kind of a coincidence that different universities and businesses decided to go into that?
If I speak for Montreal specifically, very clearly at the source of it is Yoshua Bengio deciding to stay in Montreal, staying in academia, and then continuing to train many students, gathering other researchers into his group, and training more PhDs in a field that doesn't yet have as much talent as it needs. I think this is essentially the source of it.
And then my second-to-last question is, what about science fiction? Do you enjoy it in any form, like movies or TV or books or anything like that? And if so, is there any that you look at and think, "Ah, the future could happen that way"?
I definitely used to be more into science fiction. Now, maybe due to having kids, I watch many more Disney movies than I watch science fiction. It's actually a good question. I'm realizing I haven't watched a sci-fi movie for a while, but it would be interesting, now that I've actually been in this field for a while, to confront my vision of it with how artists see AI. Maybe not to take it too seriously; a lot of art is essentially philosophy around what could happen, or at least projecting a potential future and seeing how we feel about it. And for that purpose, I'm now tempted to revisit some classics or see what the recent sci-fi movies are.
I said only one more question, so I’ve got to combine two into one to stick with that. What are you working on, and if a listener is going into college or is presently in college and wants to get into artificial intelligence in a way that is really relevant, what would be a leading edge that you would say somebody entering the field now would do well to invest time in? So first, you, and then what would you recommend for the next generation of AI researchers?
As I've mentioned, perhaps not so surprisingly, I am very much interested in learning to learn and meta-learning. I've started publishing on the subject, and I'm still very much thinking about various new ideas for meta-learning approaches. I'm also interested in learning from weaker signals than in the supervised learning setting. Learning from worded feedback from a person, for instance, is something I haven't quite started working on specifically, but it's something I'm thinking about a whole lot these days. Those are directions that I would definitely encourage other young researchers to think about, study, and research.
And in terms of advice, well, I'm obviously biased, but being in Montreal studying deep learning and AI is currently a very, very rich and great experience. There are a lot of people to talk to and interact with, not just in academia but now much more in industry, such as ourselves at Google and other places. Also, be very active online. On Twitter, there's now a very, very rich community of people sharing the work of others and discussing the latest results. The field is moving very fast, and in large part that's because the deep learning community has been very open about sharing its latest results and keeping the discussion about what's going on out in the open. So be connected, whether on Twitter or other social networks, read papers, look at what comes up on arXiv, and engage in the global conversation.
Alright. Well that’s a great place to end. I want to thank you so much. This has been a fascinating hour, and I would love to have you come back and talk about your other work in the future if you’d be up for it.
Of course, yeah. Thank you for having me.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 27: A Conversation with Adrian McDermott

[voices_in_ai_byline]
In this episode, Byron and Adrian discuss intelligence, consciousness, self-driving cars and more.
[podcast_player name=”Episode 27 – A Conversation with Adrian McDermott” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-58-48)-adrian-mcdermott.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Adrian McDermott. He is Zendesk's President of Products, where he works to build software for better customer relationships, including, of course, exploring how AI and machine learning impact the way customers engage with businesses. Adrian is a Yorkshireman living in San Francisco, and he holds a Bachelor of Science in Computer Science from De Montfort University. Welcome to the show, Adrian!
Adrian McDermott: Thanks, Byron! Great to be here!
My first question is almost always: What is artificial intelligence?
When I think about artificial intelligence, I think about AI as a system that can interact with and learn from its environment in an independent manner. I think that’s where the intelligence comes from. AI systems have traditionally been optimized for achieving specific tasks. In computer science, we used to write programs using procedural languages and we would tell them exactly what to do at every stage of that language. With AI, it can actually learn and adapt from its environment and, you know, reason to a certain extent and build the capabilities to do that. Narrowly, I think that’s what AI is, but societally I think the term has a series of connotations it takes on, some scary and some super interesting and exciting meanings and consequences when we think about it and when we talk about it.
We'll get to that in due course, but back to your narrow definition, "it learns from its environment." That's a pretty high bar, actually. By that measure, my dog food bowl that automatically refills when it runs out, even though it's reacting to its environment, is not learning from its environment; whereas a Nest thermostat, you would say, is learning from its environment and therefore is AI. Did I call the ball right on both of those, kind of the way you see the world?
I think so. I mean, your dog bowl, perhaps, it learns, over time, how much food your dog needs every day, and it adapts to its environment, I don’t know. You could have an intelligent dog bowl, dog feeding system, hopefully one that understands the nature of most dogs is to keep eating until they choke. That would be an important governor on that system, let’s be honest, but I think in general that characterization is good.
We, as biological computational devices, learn from our environment and take in a series of inputs from those environments and then use those experiences, I think, to pattern match new stimuli and new situations that we encounter so that we know what to do, even though we’ve never seen that exact situation before.
So, and not to put any words in your mouth, but it sounds like you think that humans react to our environment and that is the source of our intelligence, and a computer that reacts to its environment, it’s artificial intelligence, but it really is intelligent. It’s not artificial, it’s not faking it, it really is intelligent. Is that correct?
I think artificial intelligence is this ability to learn from the environment and come up with new behaviors as a result of that learning. There are a tremendous number of examples of AI systems that have created new ways of doing things and have learned. I think one of the most famous is move thirty-seven in Google's AlphaGo when it was playing the game Go against Lee Sedol, one of the greatest players in the world. It performed a move that was shocking to the Go community and the Go intelligentsia, because it had learned and evolved its thinking to a point where it created new ways of doing things that were not natural for us as humans. I think artificial intelligence, really, when it fulfills its promise, is able to create and learn in that way, but currently most systems do that within a very narrow problem domain.
With regard to an artificial general intelligence, do you think that the way we think of AI today eventually evolves into an AGI? In other words, are we on a path to create one? Or do you think a truly generalized intelligence will be built in a completely different way than how we are currently building AI systems today?
I mean, there are a series of characteristics of intelligence that we have, right, that we think about. One of them is the ability to think about a problem, think about a scenario, run through different ways of handling that scenario in our heads, imagine different outcomes, and almost self-actualize in those situations. I think modern deep-learning techniques are actually constructed such that they are looking at different scenarios to come up with different outcomes. Ultimately, I believe it's true to say, we don't necessarily understand a great deal about the nature of consciousness and the way that our brains work.
We know a lot about the physiology, not necessarily about the philosophy. It does seem like our brains are sort of neuron-based computation devices that take a whole bunch of inputs and process them based on stored experiences and learnings, and it does seem like those are the kinds of systems that we're building with artificial-intelligence-based machines and computers.
Given that technology gets better every year, year over year, it seems like a natural conclusion that ultimately technology advancements will be such that we can reach the same point of general intelligence that our cerebral cortex reached hundreds of thousands of years ago. I think we have to assume that we will eventually get there. It seems like we’re building the systems in the same way that our brains function right now.
That's fascinating because that description of humans' ability to imagine different scenarios is in fact some people's theory as to how consciousness emerged. And, not to put you on the spot because, as you said, we don't really know, but is that plausible to you? That being able to essentially carry on that internal dialogue, "I wonder if I should go pull that tiger's tail," you know, is that what you think made us conscious, or are you indifferent on that question?
I only have a layman’s opinion, but, you know, there’s a test—I don’t know if it’s in evolutionary biology or psychology—the mirror test where if you put a dog in front of a mirror it doesn’t recognize itself, but Asian elephants and dolphins do recognize themselves in the mirror. So, it’s an interesting question of that ability to self-actualize, to understand who you are, and to make plans and go forward. That is the nature of intelligence and from an evolutionary point of view you can imagine a number of ways in which that consciousness of self and that ability to make plans was essential for the species to thrive and move forward. You know we’re not the largest species on the planet, but we’ve become somewhat dominant as a result of our ability to plan and take actions.
I think certain behaviors that we manifest came from the advantageous nature of cooperation between members of our species, and the way that we act together and act independently and dream independently and move together. I think it seems clear that that is probably how consciousness evolved, it was an evolutionary advantage to be conscious, to be able to make plans, to think about oneself, and we seem to be on the path where we’re emulating those structures in artificial intelligence work.
Yeah, the mirror test is fascinating because only one bird passes it and that is the magpie.
The magpie?
Yeah, and there’s recent research, very recent, that suggests that ants pass it, which would be staggering. It looks like they’ve controlled for so many things, but it is unquestionably a fascinating thing. Of course, people disagree on what exactly it means.
Yeah, what does it mean? It’s interesting that ants pass because ants do form a multi-role complex society. So, is it one of the requirements of a multi-role complex society that you need to be able to pass the mirror test, and understand who you are and what your place is in that society?
Yeah, that is fascinating. I actually emailed Gallup and asked him, "Did you know ants passed the test?" And he's like, "Really? I hadn't heard that." You know, because he's the originator of it.
The argument against the test goes like this: If you put a red dot on a dog’s paw, the dog knows that’s its paw and it might lick it off its own paw, right? The dog has a sense of self, it knows that’s its foot. And so, maybe all the mirror test is doing is testing to see if the dog is smart enough to understand what a mirror is, which is a completely different thing.
By extension, and again with your qualification that it's a layman's viewpoint: I asked you a question about AGI and you launched into a description of consciousness. Can I infer from your answer that you believe that an AGI will be conscious?
You can infer from my answer that I believe that, for a truly artificial general intelligence, consciousness is a requirement, or some kind of ability to have freedom in the direction of thought. I think that is part of the nature of consciousness, or one way of thinking about it.
I would tend to agree, but let me just… Everybody's had that sensation where you're driving and you kind of space out, right, and all of a sudden you snap to a minute later and you're like, "Whoa, I don't have any memory of driving to this spot," and yet, in that time, you merged into traffic, you changed lanes, and all of that. So, you acted intelligently, but you were not, in a sense, conscious at that moment. Do you think the problem is with saying, "Oh, that's an example of intelligence without consciousness"? Is it more like, "No, no, you really were conscious all that time," or is it like, "No, no, you didn't have some new idea or anything, you just managed off rote"? Do you have a thought on that?
I think it's true that so much of what we do as beings is managed off rote, but probably a lot of the reason we're successful as a species is because we don't just go off rote. If someone had driven in front of you, or the phone had rung, while you were driving, that would have created an event important enough to be stored in short-term memory, and you would have moved into a different mode of consciousness. I think the human brain takes in a massive amount of input but filters it down to just this, quote unquote, "stream of consciousness" of experiences that are important, or things that are happening. And it's that filter of consciousness, or the filter of the brain, that puts you in the moment where you're dealing with the most important thing. That, in some ways, characterizes us.
When we think about artificial intelligence and how machines experience the world: we have five sensory inputs feeding into our brains and our memories, but a machine can have vision and sound, yes, but also GPS, infrared, even some random event stream from another machine. All of these inputs act as sensors for an artificially intelligent machine, and they are, or could be, much richer and more diverse than ours. And that governor, that thing that filters it all down, that figures out what the objective is for the artificially intelligent machine and takes the right inputs and does the right pattern matching and the right thinking, is going to be incredibly important to achieving, I think, artificial general intelligence, where it knows how to direct, if you like, its thoughts, how to plan, how to act, how to think about solving problems.
This is fascinating to me, so I have just a few more questions about AGI, if you'll indulge me for another minute. The range of time that people think it's going to take us to get it, by my reckoning, is five years at the soonest and five hundred at the longest. Do you have any opinion of when we might develop an AGI?
I think I agree with five years on the soonest, but, you know, honestly one of the things I struggle with as we think about that is, who really knows? We have so little understanding of how the brain actually works to produce intelligence and sentience that it’s hard to know how rapidly we’re approaching that or replicating it. It could be that, as we build smarter and smarter non-general artificial intelligence, eventually we’ll just wander into a greater understanding of consciousness or sentience by accident just because we built a machine that emulates the brain. That’s, in some ways, a plausible outcome, like, we’ll get enough computation that eventually we’ll figure it out or it will become apparent. I think, if you were to ask me, I think that’s ten to fifteen years away.
Do you think we already have computers fast enough to do it, we just don’t know how to do it, or do you think we’re waiting on hardware improvements as well?
I think the primary improvements we’re waiting on are software, but software activities are often constrained by the power and limits of the hardware that we’re running it on. Until you see a more advanced machine, it’s hard to practically imagine or design a system that could run upon it. The two things improve in parallel, I think.
If you believe we may have an AGI in fifteen years, that if we have one it could very easily be conscious, and that if it's conscious it would presumably have a will, are you one of the people that worries about that? The superintelligence scenario, where it has different goals and ambitions than we have?
I think that's one of many scenarios that we need to worry about. In our current society, any great idea, it seems, is weaponizable in a very direct way, which is scary. The way that we're set up, locally and globally, is intensely competitive, where any advantage one could eke out is then used to dominate, take advantage of, or gain advantage over our fellow man, in this country and other countries, globally, etcetera.
There's quite a bit of fear-mongering about artificial general intelligence, but artificial intelligence does give the owner of those technologies, the inventor of those technologies, innate advantages in terms of taking and using those technologies for great gain. I think there are many stages along the way where someone can very competitively put those technologies to work without even achieving artificial general intelligence.
So, yes, there is the moment of singularity, when artificial general intelligence machines can invent machines that are considerably faster in ways that we can't understand. That's a scary thought, and technology may be out-thinking our moral and philosophical understanding of the implications of that. But at the same time, some of the things that we're building now, which, like you said, are just fifty percent better or seventy-seven percent smarter, could, through weaponization or just through extreme mercantile advantage-taking, have serious effects on the planet, humankind, etcetera. I do believe that we're in an AI arms race, and I do find that a little bit scary.
Vladimir Putin just said that he thinks the future is going to belong to whoever masters AI, and Elon Musk recently said, “World War Three will be fought over AI.” It sounds like you think that’s maybe a more real-world concern than the rogue AGI.
I think it is, because we've seen tremendous leaps in the capability of technology just in the last five years, certainly within the last five to ten years. More and more people are working in this problem domain; that number must be doubling every six months, or something ridiculous like that, in terms of the number of people who are starting to think about AI and the number of companies deploying some kind of technology. As a result, there are breakthroughs that are going to begin happening, either in public academia or, more likely, in private labs, that will be leverageable by the entities that create them in really meaningful ways.
I think by one count there are twenty different nations whose militaries are working on AI weapons. It’s hard to get a firm grip on it because: A, they wouldn’t necessarily say so, and, B, there’s not a lot of agreement on what the term AI means. In terms of machines that can make kill decisions, that’s probably a reasonable guess.
I think one shift that we've seen, and this is just anecdotal and my own opinion, is that traditionally so much of the base research in computer science, and in artificial intelligence, was done in academia: done publicly, published, and for the public good. If you look at artificial intelligence now, the greatest minds of our generation are not necessarily working in the public sphere; they're locked up, tied up in private companies, generally very, very large companies, or they're working in the military-industrial complex. I think that's a shift, I think that's different from scientific discovery, medical research, all these things in the past.
The closed-door nature of this R&D effort, and the fact that it's becoming almost a national or nationalistic concern, with very little… You know, there are weapons treaties, there are nuclear treaties, there are weapons research treaties, right? I think we're only just beginning to talk about AI treaties and AI understanding, and we're a long way from any resolution, because the potential gains for whoever goes first, or makes the biggest discovery first, or makes the great breakthrough first, are tremendous. It's a very competitive world, and it's going on behind closed doors.
The thing about atomic bombs is that they were hard to build, and so even if you knew how to build one, it was hard. AI won't be that way. It'll fit on a flash drive, or at least the core technology will, right?
I think building an AGI, some of these things, requires web-scale computational power, which, based on today's technology, means data centers, not flash drives. So, there is a barrier to entry to some of these things. But, that said, the great breakthrough will more than likely be an algorithm or some great piece of thinking, and that will, yes, indeed, fit on a modern flash drive without any problem.
What do you think of the OpenAI initiative, which says, "Let's make this all public and share it all. It's going to happen; we might as well make sure everybody has access to it and not just one party"?
I work at a SaaS company; we build products to sell, and through open-source technologies and cloud platforms we get to stand on the shoulders of giants, use amazing stuff, shorten our development cycles, and do things that we would never be able to do as a small company founded in Copenhagen. I'm a huge believer in those initiatives. I think part of the reason that open source has been so successful on the problems of computer science and computer infrastructure is that, to a certain extent, there's been a maturation of thought, where not every company believes its ability to store and retrieve its data quickly is a defining characteristic for it. You know, I work at Zendesk and we're in the business of customer service software; we build software that tries to help our customers have better relationships with their customers. It's not clear that having the best cloud hosting engine or being able to use NoSQL technology is something that's of tremendous commercial value to us.
We believe in open source, so we contribute back, and we contribute because there's no perceived risk of commercial impairment in doing that. This isn't our core IP; our core IP is around how we treat customers. While I'm a huge believer in the OpenAI initiative, I think that same belief isn't necessarily widespread among the parties making the big investments in AI research and sitting at the forefront of the thinking. For some of those entities, there's a clear notion that they can gain tremendous advantage by keeping anything they invent inside the walled garden for as long as possible and using it to their advantage. I would dearly love that initiative to succeed. I don't know that, right now, we have the environment in which it will truly succeed.
You've made a couple of references to artificial intelligence mirroring the human brain. Do you follow the Human Brain Project in Europe, which is taking that approach? They're saying, "Why don't we just try to replicate the thing that we know can think already?"
I don’t really. I’m delighted by the idea, but I haven’t read too much about it. What are they learning?
It's expensive, and they're behind schedule. But it's been funded to the tune of one and a half billion dollars; I mean, it's a really serious effort. The challenge is going to be if it turns out that a neuron is as complicated as a supercomputer, that things go on at the Planck level, that it is this incredible machine. Because I think the hope is that if you take the brain at face value, that is something maybe we can duplicate, but if there's other stuff going on it might be more problematic.
As an AI researcher yourself, do you ever start with the question, “How do humans do that?” Is that how you do it when you’re thinking about how to solve a problem? Or do you not find a lot of corollaries, in your day to day, between how a human does something and how a computer would do it?
When we're thinking about solving problems with AI, we're at the basic level of directed AI technology, and what we're thinking about is, "How can we remove these tasks that humans perform on a regular basis? How can we enrich the lives of, in our case, the person needing customer service or the person providing customer service?" It's relatively simple, and so the standard approach is, yes, to look directly at the activities of a person, and look at ways that you can automate and take advantage of the benefits that the AI is going to buy you. In customer service land, you can very easily remember every interaction that every customer has had with a particular brand, and then you can look at the outcomes those interactions have had, good or bad, through the satisfaction, the success, and the timing. And you can start to emulate those things, remove friction, replace the need for people altogether, and build out really interesting things.
The primary way to approach the problem is really to look at what humans are doing and try to replace them, certainly where it's not their cognitive ability that is to the fore or being used, and that's something that we do a lot. But I think that misses the magic, because one of the things that can happen with an AI system is that it produces results that are, to use Arthur C. Clarke's phrase, "sufficiently advanced to be indistinguishable from magic." You can invent new things that were not possible before because of the human brain's limited bandwidth, because of our limited memories and other things. You can basically remember all experiences all at once and then use those to create new things.
In our own work, we realized that it's incredibly difficult to predict, with any accuracy, from the initial input or question from a customer, the ultimate customer satisfaction score, the CSAT score, that you'll get. But it's an incredibly important number for customer service departments, and knowing ahead of time that you're going to have a bad experience with this customer, based on signals in the input, is incredibly useful. So, one of the things we built was a satisfaction-prediction engine, using various models, that allows us to route tickets to experts and do other things. There's no human who sits there and gives out predictions on how a ticket is going to go, how our experience with the customer is going to go; that's something we invented because only a machine can do it.
So, yes, there is an approach to what we do which is, “How can we automate these human tasks?” But there’s also an approach of, “What is it that we can do that is impossible for humans that would be awesome to do?” Is there magic here that we can put in place?
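The satisfaction-prediction idea described a moment ago can be pictured with a very small sketch: train a classifier on historical tickets labeled with their eventual CSAT outcome, then score new tickets as they arrive. This is purely illustrative and not Zendesk's actual system; the scikit-learn pipeline, the example tickets, and the labels are made-up placeholders.

```python
# Toy satisfaction-prediction sketch: learn from past tickets and their CSAT
# outcomes, then estimate how a new ticket is likely to go.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical tickets (first customer message) and whether the interaction
# ultimately received a good (1) or bad (0) satisfaction rating.
tickets = [
    "Thanks for the quick reply, I just need my invoice resent",
    "This is the third time I am asking about my broken widget",
    "How do I reset my password?",
    "I have been waiting two weeks and nobody has answered me",
]
csat_good = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, csat_good)

# Score an incoming ticket; a low probability of a good outcome could trigger
# routing to a more experienced agent before things go wrong.
new_ticket = ["Still no response and my order never arrived"]
p_good = model.predict_proba(new_ticket)[0, 1]
print(f"Predicted probability of a good CSAT outcome: {p_good:.2f}")
```

A production system would use far more data and richer signals, but the shape of the problem is the same: labeled outcomes in, a routing signal out.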
In addition to there being a lot of concern about the things we talked about, about war and about AGI and all of that, in the narrow AI, in the here and now, of course, there's a big debate about automation and what these technologies are going to do to jobs. Just to set the question up, there are three different narratives people offer. One is that automation is going to take all of the really low-skilled jobs, and there'll be a group of people who are unable to compete against machines, and we'll have kind of permanent unemployment at the level of the Great Depression or something like that. Then there's a second camp that says, "Oh, no, no, you don't understand, it's far worse than that. They're going to take everybody's job, everybody, because there'll be a moment when the machine can learn something faster than a human." Then there's a third one that says, "No, with these technologies, people just take the technology and use it to increase their own productivity, and they don't actually ever cause unemployment." Electricity and mechanization and all of that didn't increase unemployment at all. Do you believe one of those three, or maybe a fourth one? What do you think about the effects of AI on employment?
I think the parallel that's often drawn is a parallel to the Industrial Revolution. The Industrial Revolution brought us a way to transform energy from one form into another, and allowed us to mechanize manufacturing, which altered the nature of society from agrarian to industrial, which created cities, which was a big transformation. But the Industrial Revolution took a long time. It took a long time for people to move from the farms to the factories; it took a long time to transform the landscape, comparatively. I think one of the reasons there's trepidation and nervousness around artificial intelligence is that it doesn't seem like it will take that long. It's almost fantastical science fiction to me that I get to see different vendors' self-driving cars mapping San Francisco on a regular basis, and I see people driving around with no hands on the wheel. I mean, that's extraordinary. I don't think even five years ago I would have believed that we would have self-driving cars on public roads; it didn't seem like a thing, and now it seems like automated driving machines are not very far away.
If you think about the societal impacts of that, well, according to an NPR study in 2014, I think, truck driving is the number one job in twenty-nine states in America. There are literally millions of driving jobs, and I think it’s one of the fastest growing categories of jobs. Things like that will all disappear, or to a certain extent will disappear, and it will happen rapidly.
It's really hard for me to subscribe to the… Yes, we're improving customer service software here at Zendesk in such a way that we're making agents more efficient, they're getting to spend more time with customers, they're upping their CSAT ratings, and consequently those businesses have better Net Promoter Scores and they're thriving. I believe that that's what we're doing and I believe that that's what's going to happen. But if we can automatically answer ten percent of a customer's tickets, that means you need ten percent fewer agents to answer those tickets, unless the business is going to invest more in customer service. The profit motive says that there needs to be a return-on-investment analysis between those two things. So, in my own industry I see this, and across society it's hard not to believe that there will be a fairly large-scale disruption.
I don't know that, as a society, we're necessarily in a position to absorb that disruption yet. I know in Finland they're experimenting with a guaranteed minimum income, to take away the stress of having to find work or qualify for unemployment benefits and all these things, so that people have a better quality of life and can hopefully find ways to be productive in society. Not many countries are as progressive as Finland. I would put myself in the camp of being very nervous about the societal effects of large-scale removal of sources of employment, because it's not clear what alternative structures are set up in society to find meaningful work and sustenance for the people losing those jobs. We've been on a trajectory since, I think, the 1970s of polarization in society and growing inequality, and I worry that the large-scale creation of an unemployed mass could be a tipping point. I take a very pessimistic view.
Let me give you a different narrative on that, and tell me what's wrong with it, how the logic falls down. Let's talk just about truck drivers. It would go like this, it would say, "That concern that you're going to have, en masse, all these unemployed truck drivers is beyond ill-founded. To begin with, the technology's not done, and it will still need to be worked out. Then the legislative hurdles will have to be worked out, and that'll be done gradually, state by state. Then, there'll be a long period of time when the law will require that there be a driver, and the self-driving technology will kick in when it feels like the driver's making a mistake, but there'll be an override; just like we can fly airplanes without pilots now but we insist on having a pilot.
Then, the driving part of the job is actually not the whole job, and so, like any other job, when you automate part of it, like the driving, that person takes on more things. Then, on top of that, the equipment's not retrofitted for it, so you're going to have to figure out how to retrofit all this stuff. Then, on top of that, having self-driving cars is going to open up all kinds of new employment, and because we talk about this all the time, there are probably fewer people going into truck driving, and there are people who retire from it every year. And, just like everything else, it's going to gradually work itself out as the economy reallocates resources." Why do you think truck driving is this big tipping-point thing?
I think driving jobs in general are a tipping-point thing because, yes, there are challenges to rolling it out, and obviously there are legislative challenges, but it's not hard to see interstate trucking going first, and then drivers meeting those trucks and driving them through urban areas, and various things like that happening. I think people are working on retrofit devices for trucks. What will happen is that truck drivers who are not actually driving will be allowed to work more hours, so you'll need fewer truck drivers. In general, as a society, we're shifting from going and getting our stuff to having our stuff delivered to us, and so the voracious appetite for more drivers, in my opinion, is not going to abate. Yeah, the last mile isn't driven by trucks; it's smaller delivery drivers, or things that can be done by smarter robots, etcetera.
I think the challenges you communicated are going to be moderating forces on the disruption, but when something reaches the tipping point of acceptance and cost acceptability, change tends to be rapid when it's driven by the profit motive. I think that is what we're going to see. The efficiency of Amazon, and the fact that every product is online in that marketplace, is driving a tremendous change in the nature of retail. I think the delivery logistics behind that are going to go through a similar turnaround, and the companies driving it are going to be very aggressive about it because the economics are so appealing.
Of course, again, the general answer to that is that when technology does lower the price of something dramatically—like you’re talking about the cost of delivery, self-driving cars would lower it—that that in turn increases demand. That lowering of cost means all of a sudden you can afford to deliver all kinds of things, and that ripple effect in turn creates those jobs. Like, people spend all their money, more or less, and if something becomes cheaper they turn around and spend that money on something else which, by definition, therefore creates downstream employment. I’m just having a hard time seeing this idea that somehow costs are going to fall and that money won’t be redeployed in other places that in turn creates employment, which is kind of two hundred and fifty years of history.
I wouldn't necessarily say that, as costs fall in an industry, all of those profits are generally returned to the consumer, right? Retailers generally run at a two percent margin, and businesses in logistics also run with low margins. So, there's room for those people to optimize their own businesses and not necessarily pass all of those benefits on to the consumer. Obviously, there's room for disruption, where someone will come in, shave the margins back down, and pass on those benefits. But, in general, you know, online banking is more efficient because we prefer it, and so there are fewer people working in banking. Conversely, when banks shifted to ATMs, banking became much more a part of our lives, and more convenient, so we ended up with more bank tellers because personal service was a thing.
I think that there just are a lot of driving jobs out there that don’t necessarily need to be done by humans, but we’ll still be spending the same amount on getting driven around, so there’ll be more self-driving cars. Self-driving cars crash less, hopefully, and so there’s less need for auto repair shops. There’s a bunch of knock-on effects of using that technology, and for certain classes of jobs there’s clearly going to be a shift where those jobs disappear. There is a question of how readily the people doing those jobs are able to transfer their skills to other employment, and is there other employment out there for them.
Fair enough. Let's talk about Zendesk for a moment. You've alluded to a couple of ways that you employ artificial intelligence, but can you give me an idea of what gets you excited in the morning, when you wake up and think, "I have this great new technology, artificial intelligence, that can do all these wondrous things, and I want to use it to make life better for the people who are in charge of customer relationships"? Entice me with some things that you're thinking of doing, that you're working on, that you've learned, and just kind of tell me about your day-to-day.
So many customer service inquiries begin with someone who has a thirst for knowledge, right? Seventy-six percent of people try to self-serve when trying to find the answer to a question, and many of the people who do get on the phone are online at the same time, trying to discover the answer to that problem. I think often the challenge is having enough context to know what someone is looking for, and having that context available to all of the systems they're interacting with. Technology, and artificial intelligence in particular, can help us pinpoint the intention of users, because the goal of the software that we provide, and the customer service ethos that we have, is that we need to remove friction.
The thing that really generates bad experiences in customer service interactions isn’t that someone said no, or we didn’t get the outcome that we want, or we didn’t get our return processed or something like that, it’s that negative experiences tend to be generated from an excess of friction. It’s that I had to switch from one channel to another, it’s that I had to repeat myself over and over again because everyone I was talking to didn’t have context on my account or my experience as the customer and these things. I think that if you look at that sort of pile of problems, you see real opportunities to give people better experiences just by holding a lot more data at one time about that context, and then being able to process that data and make intelligent predictions and guesses and estimations about what it is they’re looking for and what is going to help them.
We recently launched a service we call "answer bot," which uses deep learning to look at the data we have when an email comes in and figure out, quite simply, which knowledge base article is going to best serve that customer. It's not driving a car down to the supermarket; this sounds very simple, but in another way these are millions and millions of experiences that can be optimized over time. Similarly, the people on the other side of that conversation generally don't know what it is that customers are searching for or asking for, for which there is no answer. So, by using the same analysis of the incoming queries and the knowledge bases we have, we can give them cues as to what content to write, and direct them to build a better experience and improve their customer experience in that way.
I think from an enterprise software builder's point of view, artificial intelligence is a tool that you can use at so many points of interaction between brand and consumer, between the two parties on either side of any transaction, inside of your knowledge base. It's something that you can use to shave off little moments of pain, remove friction, apply intelligence, and just make the world seem frictionless and a little smarter. Our goal internally is basically to meander through our product in a directed way, finding those experiences and making them better. At the end of the day, when someone deploys our stuff and delivers a customer experience with it, we want the consumers experiencing that brand, the people interacting with it, to be like, "I'm not sure why that was good, but I did really enjoy that customer service experience. I got what I wanted, it was quick. I don't know how they quite did that, but I really enjoyed it." We've all had those moments in service where someone just totally got what you were after, and it was delightful because it was smooth and efficient, good, and no drama, prescient almost.
I think what we are trying to do, what we would like to do is adapt all of our software and experiences that we have to be able to be that anticipatory and smart and enjoyable. I think the enterprise software world—for all types of software like CRM, ERP, all these kind of things—is filled with sharp edges, friction, and pain, you know, pieces of acquisitions glued together, and you’re using products that represent someone’s broken dreams acquired by someone else and shoehorned into other experiences. I think, generally, the consumer of enterprise software at this point is a little bit tired of the pain of form-filling and repetition and other things. Our approach to smoothing those edges, to grinding the stone and polishing the mirror, is to slowly but surely improve each of those experiences with intelligence.
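The "answer bot" idea mentioned above, matching an incoming email to the knowledge base article most likely to resolve it, can be sketched in a few lines. Zendesk's actual service uses deep learning; the version below uses plain TF-IDF cosine similarity and invented articles, purely to illustrate the retrieval idea.

```python
# Toy knowledge-base retrieval: given an incoming email, suggest the article
# whose text is most similar to it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "Resetting your password": "Go to settings, choose security, and click reset password.",
    "Tracking your order": "You can track your order from the orders page using the tracking number.",
    "Requesting a refund": "Refunds can be requested within 30 days from the billing page.",
}

vectorizer = TfidfVectorizer()
article_vectors = vectorizer.fit_transform(list(articles.values()))

def suggest_article(email_text: str) -> str:
    """Return the title of the knowledge base article closest to the email."""
    email_vector = vectorizer.transform([email_text])
    scores = cosine_similarity(email_vector, article_vectors)[0]
    return list(articles.keys())[scores.argmax()]

print(suggest_article("Hi, I forgot my password and cannot log in"))
# -> Resetting your password
```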
It sounds like you have a broad charter to look at kind of all levels of the customer interaction and look for opportunity. I'm going to ask you a question that probably doesn't have an answer, but I'm going to try anyway: do you prefer to find places where there was an epic fail, where it was so bad it was just terrible and the person was angry and it was just awful, or would you rather fix ten instances of a minor annoyance, where somebody had to enter data too many times? I mean, are you working to cut the edges off the bad experiences, or just generally make the system phase-shift up a little bit?
I think, to a certain extent, I like to think of that as a false dichotomy, because for the person who has a terrible experience and gets angry, chances are there wasn't one momentary snap; there was a drip feed of annoyances that took them to that point. So, our goal, when we think about it, is to pick out the most impactful rough edges, the ones that cumulatively are going to engulf someone in the red mist of homicidal fury on the end of the phone, complaining about their broken widget. I think most people do not flip their anger bit over a tiny infraction, or even over a larger infraction; it's over a period, a lifetime of infractions, a lifetime of inconveniences that gets you to that point, or the lifetime of that incident and that inquiry and how you got there. We're generally, sort of, emotionally rational beings who've been through many customer service experiences, so exhibiting that level of frustration generally requires a continued and sustained effort on the part of a brand to get us there.
I assume that you have good data to work off of. I mean, there are good metrics in your field and so you get to wade through a lot of data and say, “Wow, here’s a pattern of annoyances that we can fix.” Is that the case?
Yeah, we have an anonymized data set that encompasses billions of interactions. And the beauty of that data set is that the interactions are rated, right? They're rated either by the time it took to solve the problem, or by an explicit rating, where someone said that was a good interaction or that was a bad interaction. When we did the CSAT prediction, we were really leveraging the millions of scores that we have that tell us how customer service interactions went. In general, though, that is the data asset we have available to us, that we can use to train on, learn from, query, and analyze.
Last question: you quoted Arthur C. Clarke, so I have to ask you, is there any science fiction about AI that you enjoy or like or think could happen? Like Her or Westworld or I, Robot or any of that, even books or whatnot?
I did find Westworld to be probably the most compelling thing I watched this year, and just truly delightful in its thinking about memory and everything else, although it was, obviously, pure fiction. I think Her was also just a disturbing look at the way that we will be able to identify with inanimate machines and build relationships; it was all too believable. I think you quoted two of my favorite things, but Westworld was so awesome.
It, interestingly, had a different theory of consciousness from the bicameral mind, not to give anything away.
Well, let’s stop there. This was a magnificently interesting hour, I think we touched on so many fascinating topics, and I appreciate you taking the time!
Adrian McDermott: Thank you, Byron, it’s wonderful to chat too!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 26: A Conversation with Peter Lee

[voices_in_ai_byline]
In this episode, Byron and Peter talk about defining intelligence, Venn diagrams, transfer learning, image recognition, and Xiaoice.
[podcast_player name=”Episode 26: A Conversation with Peter Lee” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-12-04-(01-04-41)-peter-lee.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/12/voices-headshot-card_preview-1-1.jpeg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Peter Lee. He is a computer scientist and Corporate Vice President at Microsoft Research. He leads Microsoft's New Experiences and Technologies organization, or NExT, with the mission to create research-powered technology and products and advance human knowledge through research. Prior to Microsoft, Dr. Lee held positions in both government and academia. At DARPA, he founded a division focused on R&D programs in computing and related areas. Welcome to the show, Peter.
Peter Lee:  Thank you. It’s great to be here.
I always like to start with a seemingly simple question which turns out not to be quite so simple. What is artificial intelligence?
Wow. That is not a simple question at all. I guess the simple, one line answer is artificial intelligence is the science or the study of intelligent machines. And, I realize that definition is pretty circular, and I am guessing that you understand that that’s the fundamental difficulty, because it leaves open the question: what is intelligence? I think people have a lot of different ways to think about what is intelligence, but, in our world, intelligence is, “how do we compute how to set and achieve goals in the world.” And this is fundamentally what we’re all after, right now in AI.
That’s really fascinating because you’re right, there is no consensus definition on intelligence, or on life, or on death for that matter. So, I would ask that question: why do you think we have such a hard time defining what intelligence is?
I think we only have one model of intelligence, which is our own, and so when you think about trying to define intelligence, it really comes down to a question of defining who we are. There's a fundamental discomfort with that. That fundamental circularity is difficult. If we were able to fly off in some starship to a far-off place and find a different form of intelligence, or a different species that we would recognize as intelligent, maybe we would have a chance to dispassionately study it and come to some conclusions. But it's hard when you're looking at something so introspective.
When you get into computer science research, at least here at Microsoft Research, you do have to find ways to focus on specific problems; so, we ended up focusing our research in AI—and our tech development in AI, roughly speaking—in four broad categories, and I think these categories are a little bit easier to grapple with. One is perception—that’s endowing machines with the ability to see and hear, much like we do. The second category is learning—how to get machines to get better with experience? The third is reasoning—how do you make inferences, logical inferences, commonsense inferences about the world? And then the fourth is language—how do we get machines to be intelligent in interacting with each other and with us through language? Those four buckets—perception, learning, reasoning and language—they don’t define what is intelligence, but they at least give us some kind of clear set of goals and directions to go after.
Well, I'm not going to spend too much time down in those weeds, but I think it's really interesting. In what sense do you think it's artificial? Because it's either artificial in that it's just mechanical, and that's just a shorthand we use for that, or it's artificial in that it's not really intelligence. You're using words like "see," "hear," and "reason." Are you using those words euphemistically—can a computer really see or hear anything, or can it reason—or are you using them literally?
The question you’re asking really gets to the nub of things, because we really don’t know. If you were to draw the Venn diagram; you’d have a big circle and call that intelligence, and now you want to draw a circle for artificial intelligence—we don’t know if that circle is the same as the intelligence circle, whether it’s separate but overlapping, whether it’s a subset of intelligence… These are really basic questions that we debate, and people have different intuitions about, but we don’t really know. And then we get to what’s actually happening—what gets us excited and what is actually making it out into the real world, doing real things—and for the most part that has been a tiny subset of these big ideas; just focusing on machine learning, on learning from large amounts of data, models that are actually able to do some useful task, like recognize images.
Right. And I definitely want to go deep into that in just a minute, but I'm curious… So, there's a wide range of views about AI. Should we fear it? Should we love it? Will it take us into a new golden age? Will it do this? Will it cap out? Is an AGI possible? All of these questions.
And, I mean, if you ask, "How will we get to Mars?" Well, we don't know exactly, but we kind of know. But if you ask, "What's AI going to be like in fifty years?" it's all over the map. And do you think that is because there isn't agreement on the kinds of questions I'm asking, like people have different ideas on those questions, or are the questions I'm asking not really even germane to the day-to-day "get up and start building something"?
I think there’s a lot of debate about this because the question is so important. Every technology is double-edged. Every technology has the ability to be used for both good purposes and for bad purposes, has good consequences and unintended consequences. And what’s interesting about computing technologies, generally, but especially with a powerful concept like artificial intelligence, is that in contrast to other powerful technologies—let’s say in the biological sciences, or in nuclear engineering, or in transportation and so on—AI has the potential to be highly democratized, to be codified into tools and technologies that literally every person on the planet can have access to. So, the question becomes really important: what kind of outcomes, what kinds of possibilities happen for this world when literally every person on the planet can have the power of intelligent machines at their fingertips? And because of that, all of the questions you’re asking become extremely large, and extremely important for us. People care about those futures, but ultimately, right now, our state of scientific knowledge is we don’t really know.
I sometimes talk in analogy about way, way back in the medieval times when Gutenberg invented mass-produced movable type, and the first printing press. And in a period of just fifty years, they went from thirty thousand books in all of Europe, to almost thirteen million books in all of Europe. It was sort of the first technological Moore’s Law. The spread of knowledge that that represented, did amazing things for humanity. It really democratized access to books, and therefore to a form of knowledge, but it was also incredibly disruptive in its time and has been since.
In a way, the potential we see with AI is very similar, and maybe even a bigger inflection point for humanity. So, while I can’t pretend to have any hard answers to the basic questions that you’re asking about the limits of AI and the nature of intelligence, it’s for sure important; and I think it’s a good thing that people are asking these questions and they’re thinking hard about it.
Well, I’m just going to ask you one more and then I want to get more down in the nitty-gritty. 
If the only intelligent thing we know of in the universe, the only general intelligence, is our brain, do you think it’s a settled question that that functionality can be reproduced mechanically? 
I think there is no evidence to the contrary. Every way that we look at what we do in our brains, we see mechanical systems. So, in principle, if we have enough understanding of how our own mechanical system of the brain works, then we should be able to, at a minimum, reproduce that. Now, of course, the way that technology develops, we tend to build things in different ways, and so I think it’s very likely that the kind of intelligent machines that we end up building will be different than our own intelligence. But there’s no evidence, at least so far, that would be contrary to the thesis that we can reproduce intelligence mechanically.
So, just to take the opposite position for a moment: somebody could say there’s absolutely no evidence to suggest that we can, for the following reasons. One, we don’t know how the brain works. We don’t know how thoughts are encoded. We don’t know how thoughts are retrieved. Aside from that, we don’t know how the mind works. We don’t know how it is that we have capabilities that seem to be beyond what a hunk of grey matter could do: we’re creative, we have a sense of humor and all these other things. We’re conscious, and we don’t even have a scientific language for understanding how consciousness could come about. We don’t even know how to ask that question or look for that answer, scientifically. So, somebody else might look at it and say, “There’s no reason whatsoever to believe we can reproduce it mechanically.”
I’m going to use a quote here from, of all people, a non-technologist Samuel Goldwyn, the old movie magnate. And I always reach to this when I get put in a corner like you’re doing to me right now, which is, “It’s absolutely impossible, but it has possibilities.”
All right.
Our current understanding is that brains are fundamentally closed systems, and so we’re learning more and more, and in fact what we learn is loosely inspiring some of the things we’re doing in AI systems, and making progress. How far that goes is really, as you say, unclear, because there are so many mysteries, but it sure looks like there are a lot of possibilities.
Now to get kind of down to the nitty-gritty, let’s talk about difficulties and where we’re being successful and where we’re not. My first question is, why do you think AI is so hard? Because humans acquire their intelligence seemingly simply, right? You put a little kid in playschool and you show them some red, and you show them the number three, and then, all of a sudden, they understand what three red things are. I mean, we, kind of, become intelligent so naturally, and yet my frequent flyer program that I call in can’t tell, when I’m telling it my number, whether I said 8 or H. Why do you think it’s so hard?
What you said is true, although it took you many years to reach that point. And even a child that’s able to do the kinds of things that you just expressed has had years of life. The kinds of expectations that we have, at least today—especially in the commercial sphere for our intelligent machines—sometimes there’s a little bit less patience. But having said that, I think what you’re saying is right.
I mentioned before this Venn diagram; so, there’s this big circle which is intelligence, and let’s just assume that there is some large subset of that which is artificial intelligence. Then you zoom way, way in, and a tiny little bubble inside that AI bubble is machine learning—this is just simply machines that get better with experience. And then a tiny bubble inside that tiny bubble is machine learning from data—where the models that are extracted, that codify what has been learned, are all extracted from analyzing large amounts of data. That’s really where we’re at today—in this tiny bubble, inside this tiny bubble, inside this big bubble we call artificial intelligence.
What is remarkable is that, despite how narrow our understanding is—for the most part all of the exciting progress is just inside this little, tiny, narrow idea of machine learning from data, and there’s an even smaller bubble inside that, which is learning in a supervised manner—even from that we’re seeing tremendous power, a tremendous ability to create new computing systems that do some pretty impressive and valuable things. It is pretty crazy just how valuable that’s become to companies like Microsoft. At the same time, it is such a narrow little slice of what we understand of intelligence.
The simple examples that you mentioned, for example, like one-shot learning, where you can show a small child a cartoon picture of a fire truck, and even if that child has never seen a fire truck before in her life, you can take her out on the street, and the first real fire truck that goes down the road the child will instantly recognize as a fire truck. That sort of one-shot idea, you’re right, our current systems aren’t good at.
While we are so excited about how much progress we’re making on learning from data, there are all the other things that are wrapped up in intelligence that are still pretty mysterious to us, and pretty limited. Sometimes, when that matters, our limits get in the way, and it creates this idea that AI is actually still really hard.
You’re talking about transfer learning. Would you say that the reason she can do that is because at another time she saw a drawing of a banana, and then a banana? And another time she saw a drawing of a cat, and then a cat. And so, it wasn’t really a one-shot deal. 
How do you think transfer learning works in humans? Because that seems to be what we’re super good at. We can take something that we learned in one place and transfer that knowledge to another context. You know, “Find, in this picture, the Statue of Liberty covered in peanut butter,” and I can pick that out having never seen a Statue of Liberty in peanut butter, or anything like that.
Do you think that’s a simple trick we don’t understand how to do yet? Is it, do you think, an “a-ha” moment waiting to happen, where you discover the basic idea? Or do you think it’s a hundred tiny little hacks, and transfer learning in our minds is just, like, some spaghetti code written by some drunken programmer who was on a deadline, right? What do you think that is? Is it a simple thing, or is it a really convoluted, complicated thing?
Transfer learning turns out to be incredibly interesting, scientifically, and also commercially; for Microsoft, it turns out to be something that we rely on in our business. What is kind of interesting is, when is transfer learning more generally applicable, versus being very brittle?
For example, in our speech processing systems, the actual commercial speech processing systems that Microsoft provides, we use transfer learning, routinely. When we train our speech systems to understand English speech, and then we train those same systems to understand Portuguese, or Mandarin, or Italian, we get a transfer learning effect, where the training for that second, and third, and fourth language requires less data and less computing power. And at the same time, each subsequent language that we add onto it improves the earlier languages. So, training that English-based system to understand Portuguese actually improves the performance of our speech systems in English, so there are transfer learning effects there.
In our image recognition tasks, there is something called the ImageNet competition that we participate in most years, and the last time that we competed was two years ago in 2015. There are five image processing categories. We trained our system to do well on Category 1—on the basic image classification—then we used transfer learning to not only win the first category, but to win all four other ImageNet competitions. And so, without any further kind of specialized training, there was a transfer learning effect.
Transfer learning actually does seem to happen. In our deep neural net, deep learning research activities, transfer learning effects—when we see them—are just really intoxicating. It makes you think about what you and I do as human beings.
At the same time, it seems to be this brittle thing. We don’t necessarily understand when and how this transfer learning effect is effective. The early evidence from studying these things is that there are different forms of learning, and that somehow the one-shot ideas that even small children are very good at, seem to be out of the purview of the deep neural net systems that we’re working on right now. Even this intuitive idea that you’ve expressed of transfer learning, the fact is we see it in some cases and it works so well and is even commercially-valuable to us, but then we also see simple transfer learning tasks where these systems just seem to fail. So, even those things are kind of mysterious to us right now.
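[Editor’s note: For readers who want to see the general shape of the transfer-learning recipe being described, here is a minimal, hypothetical sketch in PyTorch. It is an illustration only, not Microsoft’s speech system: a shared encoder trained on one language is reused, and only lightly updated, while a new per-language output head is trained on the second language. All names, sizes, and data here are made up for the example.]

```python
import torch
import torch.nn as nn

class MultilingualAcousticModel(nn.Module):
    """Hypothetical shared-encoder model: one encoder, one output head per language."""

    def __init__(self, n_features: int = 80, hidden: int = 256):
        super().__init__()
        self.hidden = hidden
        # Shared layers, trained first on a high-resource language such as English.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output head per language, each predicting that language's phone labels.
        self.heads = nn.ModuleDict()

    def add_language(self, lang: str, n_phones: int) -> None:
        self.heads[lang] = nn.Linear(self.hidden, n_phones)

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        return self.heads[lang](self.encoder(x))

model = MultilingualAcousticModel()
model.add_language("en", n_phones=40)
# ... imagine the encoder and the English head have already been trained here ...

# Transfer step: add Portuguese, reuse the shared encoder, and fine-tune it gently
# while the new head learns faster.
model.add_language("pt", n_phones=38)
optimizer = torch.optim.Adam([
    {"params": model.encoder.parameters(), "lr": 1e-4},     # small updates to shared layers
    {"params": model.heads["pt"].parameters(), "lr": 1e-3},  # larger updates for the new head
])

# One toy training step on fake data, just to show the mechanics.
features = torch.randn(8, 80)        # a batch of 8 frames of acoustic features
labels = torch.randint(0, 38, (8,))  # fake Portuguese phone labels
loss = nn.functional.cross_entropy(model(features, "pt"), labels)
loss.backward()
optimizer.step()
```

The transfer effect the speaker describes corresponds to the shared encoder: the second language needs less data because most of the parameters arrive already trained, and gradients from the new language can also nudge the shared layers in ways that help the original one.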
It seems, and I don’t have any evidence to support this, but it seems at a gut level to me, that maybe what you’re describing isn’t pure transfer learning, but rather what you’re saying is, “We built a system that’s really good at translating languages, and it works on a lot of different languages.”
It seems to me that the essence of transfer learning is when you take it to a different discipline, for example, “Because I learned a second language, I am now a better artist. Because I learned a second language, I’m now a better cook.” That, somehow, we take things that are in a discipline, and they add to this richness and depth and dimensionality of our knowledge in a way that they really impact our relationships.
I was chatting with somebody the other day who said that learning a second language was the most valuable thing he’d ever done, and that his personality in that second language is different than his English personality. I hear what you’re saying, and I think those are hints that point us in the right direction. But I wonder if, at its core, it’s really multidimensional, what humans do, and that’s why we can seemingly do the one-shot things, because we’re taking things that are absolutely unrelated to cartoon drawings of something relating to real life. Do you have even any kind of a gut reaction to that?
One thing, at least in our current understanding of the research fields, is that there is a difference between learning and reasoning. The example I like to go to is, we’ve done quite a bit of work on language understanding, and specifically in something called machine reading—where you want to be able to read text and then answer questions about the text. And a classic place where you look to test your machine reading capabilities is the verbal part of the SAT exam. The nice thing about the SAT exam is you can try to answer the questions and you can measure the progress just through the score that you get on the test. That’s steadily improving, and not just here at Microsoft Research, but at quite a few great university research centers.
Now, subject those same systems to, say, the third-grade California Achievement Test, and the intelligence systems just fall apart. If you look at what third graders are expected to be able to do, there is a level of commonsense reasoning that seems to be beyond what we try to do in our machine reading system. So, for example, one kind of question you’ll get on that third-grade achievement test is, maybe, four cartoon drawings: a ball sitting on the grass, some raindrops, an umbrella, and a puppy dog—and you have to know which pairs of things go together. Third-graders are expected to be able to make the right logical inferences from having the right life experiences, the right commonsense reasoning inferences to put those two pairs together, but we don’t actually have the AI systems that, reliably, are able to do that. That commonsense reasoning is something that seems to be—at least today, with the state of today’s scientific and technological knowledge—outside of the realm of machine learning. It’s not something that we think machine learning will ultimately be effective at.
That distinction is important to us, even commercially. I’m looking at an e-mail today that someone here at Microsoft sent me to get ready to talk to you today. The e-mail says, it’s right in front of me here, “Here is the briefing doc for tomorrow morning’s podcast. If you want to review it tonight, I’ll print it for you tomorrow.” Right now, the system has underlined, “want to review tonight,” and the reason it’s underlined that is it’s somehow made the logical commonsense inference that I might want a reminder on my calendar to review the briefing documents. But it’s remarkable that it’s managed to do that, because there are references to tomorrow morning as well as tonight. So, making those sorts of commonsense inferences, doing that reasoning, is still just incredibly hard, and really still requires a lot of craftsmanship by a lot of smart researchers to make real.
It’s interesting because you say, you had just one line in there that solving the third-grade problem isn’t a machine learning task, so how would we solve that? Or put another way, I often ask these Turing Test systems, “What’s bigger, a nickel or the sun?” and none of them have ever been able to answer it. Because “sun” is ambiguous, maybe, and “nickel” is ambiguous.
In any case, if we don’t use machine learning for those, how do we get to the third grade? Or do we not even worry about the third grade? Because most of the problems we have in life aren’t third-grade problems, they’re 12th-grade problems that we really want the machines to be able to do. We want them to be able to translate documents, not match pictures of puppies. 
Well, for sure, if you just look at what companies like Microsoft, and the whole tech industry, are doing right now, we’re all seeing, I think, at least a decade, of incredible value to people in the world just with machine learning. There are just tremendous possibilities there, and so I think we are going to be very focused on machine learning and it’s going to matter a lot. It’s going to make people’s lives better, and it’s going to really provide a lot of commercial opportunities for companies like Microsoft. But that doesn’t mean that commonsense reasoning isn’t crucial, isn’t really important. Almost any kind of task that you might want help with—even simple things like making travel arrangements, shopping, or bigger issues like getting medical advice, advice about your own education—these things almost always involve some elements of what you would call commonsense reasoning, making inferences that somehow are not common, that are very particular and specific to you, and maybe haven’t been seen before in exactly that way.
Now, having said that, in the scientific community, in our research and amongst our researchers, there’s a lot of debate about how much of that kind of reasoning capability could be captured through machine learning, and how much of it could be captured simply by observing what people do for long enough and then just learning from it. But, for me at least, I see what is likely is that there’s a different kind of science that we’ll need to really develop much further if we want to capture that kind of commonsense reasoning.
Just to give you a sense of the debate, one thing that we’ve been doing—it’s been an experiment ongoing in China—is we have a new kind of chatbot technology in China that takes the form of a person named Xiaoice. Xiaoice is a persona that lives on social media in China, and actually has a large number of followers, tens of millions of followers.
Typically, when we think about chatbots and intelligent agents here in the US market—things like Cortana, or Siri, or Google Assistant, or Alexa—we put a lot of emphasis on semantic understanding; we really want the chatbot to understand what you’re saying at the semantic level. For Xiaoice, we ran a different experiment, and instead of trying to put in that level of semantic understanding, we instead looked at what people say on social media, and we used natural language processing to pick out statement-response pairs, and templatize them, and put them in a large database. And so now, if you say something to Xiaoice in China, Xiaoice looks at what other people say in response to an utterance like that. Maybe it’ll come up with a hundred likely responses based on what other people have done, and then we use machine learning to rank order those likely responses, trying to optimize the enjoyment and engagement in the conversation, optimize the likelihood that the human being who is engaged in the conversation will stick with a conversation. Over time, Xiaoice has become extremely effective at doing that. In fact, for the top, say, twenty million people who interact with Xiaoice on a daily basis, the conversations are taking more than twenty-three turns.
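[Editor’s note: A toy sketch of the retrieve-then-rank pattern described above may help make it concrete. This is purely illustrative and is not Xiaoice’s code; the miniature database, the overlap-based retrieval, and the engagement score are all stand-ins for what would, in a real system, be mined pairs, a learned matcher, and a learned ranker.]

```python
from dataclasses import dataclass

@dataclass
class Pair:
    statement: str
    response: str
    engagement: float  # stand-in: how often this response kept a conversation going

# A tiny "database" of mined statement-response pairs (in reality, millions).
PAIRS = [
    Pair("i am looking for a new phone", "Have you considered a bigger phablet?", 0.9),
    Pair("i am looking for a new phone", "Phones are expensive these days.", 0.3),
    Pair("what should i eat tonight", "Noodles never disappoint.", 0.7),
]

def tokens(text: str) -> set:
    return set(text.lower().split())

def retrieve(utterance: str, k: int = 100) -> list:
    """Pull the k pairs whose statements overlap most with the user's utterance."""
    scored = [(len(tokens(utterance) & tokens(p.statement)), p) for p in PAIRS]
    scored.sort(key=lambda sp: sp[0], reverse=True)
    return [p for overlap, p in scored[:k] if overlap > 0]

def rank(candidates: list) -> str:
    """Stand-in for the learned ranker: pick the response predicted to keep the user engaged."""
    return max(candidates, key=lambda p: p.engagement).response

print(rank(retrieve("I'm looking for a new phone")))  # -> "Have you considered a bigger phablet?"
```

Nothing in this loop models what a phone or a phablet is; the system only reuses what other people have said, which is exactly the point being made about mimicry versus understanding.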
What’s remarkable about that—and fuels the debate about what’s important in AI and what’s important in intelligence—is that at least the core of Xiaoice really doesn’t have any understanding at all about what you’re talking about. In a way, it’s just very intelligently mimicking what other people do in successful conversations. It raises the question, when we’re talking about machines and machines that at least appear to be intelligent, what’s really important? Is it really a purely mechanical, syntactic system, like we’re experimenting with Xiaoice, or is it something where we want to codify and encode our semantic understanding of the world and the way it works, the way we’re doing, say, with Cortana?
These are fundamental debates in AI. What’s sort of cool, at least in my day-to-day work here at Microsoft, is we are in a position where we’re able, and allowed, to do fundamental research in these things, but also build and deploy very large experiments just to see what happens and to try to learn from that. It’s pretty cool. At the same time, I can’t say that leaves me with clear answers yet. Not yet. It just leaves me with great experiences and we’re sharing what we’re learning with the world but it’s much, much harder to then say, definitively, what these things mean.
You know, it’s true. In 1950 Alan Turing said, “Can a machine think?” And that’s still a question that many can’t agree on because they don’t necessarily agree on the terms. But you’re right, that chatbot could pass the Turing Test, in theory. At twenty-three turns, if you didn’t tell somebody it was a chatbot, maybe it would pass it. 
But you’re right that it’s somehow unsatisfying that this is such a big milestone. Because if you saw it as a user in slow motion, that you ask a question, and then it did a query, and then it pulled back a hundred things and it rank ordered them, and looked for how many of those had successful follow-ups, and thumbs up, and smiley faces, and then it gave you one… It’s that whole thing about, once you know how the magic trick works, it isn’t nearly as interesting.
It’s true. And with respect to achieving goals, or completing tasks in the world with the help of the Xiaoice chatbot, well, in some cases it’s pretty amazing how helpful Xiaoice is to people. If someone says, “I’m in the market for a new smartphone, I’m looking for a larger phablet, but I still want it to fit in my purse,” Xiaoice is amazingly effective at giving you a great answer to that question, because it’s something that a lot of people talk about when they’re shopping for a new phone.
At the same time, Xiaoice might not be so good at helping you decide which hotels to stay in, or helping you arrange your next vacation. It might provide some guidance, but maybe not exactly the right guidance that’s been well thought out. One more thing to say about this is, today—at least at the scale and practicality that we’re talking about—for the most part, we’re learning from data, and that data is essentially the digital exhaust from human thought and activity. There’s also another sense in which Xiaoice, while it passes the Turing Test, is also, in some ways, limited by human intelligence, because almost everything it’s able to do is observed and learned from what other people have done. We can’t discount the possibility of future systems which are less data dependent, and are able to just understand the structure of the world, and the problems, and learn from that.
Right. I guess Xiaoice wouldn’t know the difference: “What’s bigger, a nickel or the sun?”
That’s right, yes.
Unless the transcript of this very conversation were somehow part of the training set, but you notice, I’ve never answered it. I’ve never given the answer away, so it still wouldn’t know.
We should try the experiment at some point.
Why do you think we personify these AIs? You know about Weizenbaum and ELIZA and all of that, I assume. He got deeply disturbed when people were relating to ELIZA, knowing it was a chatbot. He got deeply concerned that people poured out their heart to it, and he said that when the machine says, “I understand,” it’s just a lie. That there’s no “I,” and there’s nothing that understands anything. Do you think that somehow confuses relationships with people and that there are unintended consequences to the personification of these technologies that we don’t necessarily know about yet?
I’m always internally scolding myself for falling into this tendency to anthropomorphize our machine learning and AI systems, but I’m not alone. Even the most hardened, grounded researcher and scientist does this. I think this is something that is really at the heart of what it means to be human. The fundamental fascination that we have and drive to propagate our species is surfaced as a fascination with building autonomous intelligent beings. It’s not just AI, but it goes back to the Frankenstein kinds of stories that have just come up in different guises, and different forms throughout, really, all of human history.
I think we just have a tremendous drive to build machines, or other objects and beings, that somehow capture and codify, and therefore promulgate, what it means to be human. And nothing defines that more for us than some sort of codification of human intelligence, and especially human intelligence that is able to be autonomous, make its own decisions, make its own choices moving forward. It’s just something that is so primal in all of us. Even in AI research, where we really try to train ourselves and be disciplined about not making too many unfounded connections to biological systems, we fall into the language of biological intelligence all the time. Even the four categories I mentioned at the outset of our conversation—perception, learning, reasoning, language—these are pretty biologically inspired words. I just think it’s a very deep part of human nature.
That could well be the case. I have a book coming out on AI in April of 2018 that talks about these questions, and there’s a whole chapter about how long we’ve been doing this. And you’re right, it goes back to the Greeks, and the eagle that allegedly plucked out Prometheus’ liver every day, in some accounts, was a robot. There’s just tons of them. The difference of course, now, is that, up until a few years ago, it was all fiction, and so these were just stories. And we don’t necessarily want to build everything that we can imagine in fiction. I still wrestle with it, that, somehow, we are going to convolute humans and machines in a way which might be to the detriment of humans, and not to the ennobling of the machine, but time will tell. 
Every technology, as we discussed earlier, is double-edged. Just to strike an optimistic note here—to your last comment, which is, I think, very important—I do think that this is an area where people are really thinking hard about the kinds of issues you just raised. I think that’s in contrast to what was happening in computer science and the tech industry even just a decade ago, when there was more or less an ethos of, “Technology is good and more technology is better.” I think now there’s much more enlightenment about this. I think we can’t impede the progress of science and technology development, but what is so good and so important is that, at least as a society, we’re really trying to be thoughtful about both the potential for good, as well as the potential for bad that comes out of all of this. I think that gives us a much better chance that we’ll get more of the good.
I would agree. I think the only other corollary to this, where there’s been so much philosophical discussion about the implications of the technology, is the harnessing of the atom. If you read the contemporary literature written at the time, people were like, “It could be energy too cheap to meter, or it could be weapons of colossal destruction, or it could be both.” There was a precedent there for a long and thoughtful discussion about the implications of the technology.
It’s funny you mentioned that because that reminds me of another favorite quote of mine which is from Albert Einstein, and I’m sure you’re familiar with it. “The difference between stupidity and genius is that genius has its limits.”
That’s good. 
And of course, he said that at the same time that a lot of this was developing. It was a pithy way to tell the scientific community, and the world, that we need to be thoughtful and careful. And I think we’re doing that today. I think that’s emerging very much so in the field of AI.
There’s a lot of practical concern about the effect of automation on employment, and of these technologies on the planet. Do you have an opinion on how that’s all going to unfold?
Well, for sure, I think it’s very likely that there’s going to be massive disruptions in how the world works. I mentioned the printing press, the Gutenberg press, movable type; there was incredible disruption there. When you have nine doublings in the spread of books and printing presses in the period of fifty years, that’s a real medieval Moore’s Law. And if you think about the disruptive effect of that, by the early 1500s, the whole notion of what it meant to educate your children suddenly involved making sure that they could read and write. That’s a skill that takes a lot of expense and years of formal training, and it had this sort of disruptive impact. So, while the overall impact on the world and society was hugely positive—really the printing press laid the foundation for the Age of Enlightenment and the Renaissance—it had an absolutely disruptive effect on what it meant and what it took for people to succeed in the world.
AI, I’m pretty sure, is going to have the same kind of disruptive effect, because it has the same sort of democratizing force that the spread of books has had. And so, for us, we’ve been trying very hard to keep the focus on, “What can we do to put AI in the hands of people, that really empowers them, and augments what they’re able to do? What are the codifications of AI technologies that enable people to be more successful in whatever they’re pursuing in life?” And that focus, that intent by our research labs and by our company, I think, is incredibly important, because it takes a lot of the inventive and innovative genius that we have access to, and tries to point it in the right direction.
Talk to me about some of the interesting work you’re doing right now. Start with the healthcare stuff, what can you tell us about that?
Healthcare is just incredibly interesting. I think there are maybe three areas that just really get me excited. One is just fundamental life sciences, where we’re seeing some amazing opportunities and insights being unlocked through the use of machine learning, large-scale machine learning, and data analytics—the data that’s being produced increasingly cheaply through, say, gene sequencing, and through our ability to measure signals in the brain. What’s interesting about these things is that, over and over again, in other areas, if you put great innovative research minds and machine learning experts together with data and computing infrastructure, you get this burst of unplanned and unexpected innovations. Right now, in healthcare, we’re just getting to the point where we’re able to arrange the world in such a way that we’re able to get really interesting health data into the hands of these innovators, and genomics is one area that’s super interesting there.
Then, there is the basic question of, “What happens in the day-to-day lives of doctors and nurses?” Today, doctors are spending an average—there are several recent studies about this—of one hundred and eight minutes a day just entering health data into electronic health record systems. This is an incredible burden on those doctors, though it’s very important because it’s managed to digitize people’s health histories. But we’re now seeing an amazing ability for intelligent machines to just watch and listen to the conversation that goes on between the doctor and the patient, and to dramatically reduce the burden of all of that record keeping on doctors. So, doctors can stop being clerks and record keepers, and instead actually start to engage more personally with their patients.
And then the third area which I’m very excited about, but maybe is a little more geeky, is determining how we can create a system, how we can create a cloud, where more data is open to more innovators; where, for great researchers at universities and great innovators at startups who really want to make a difference in health, we can provide a platform and a cloud that supplies them with access to lots of valuable data, so they can innovate, they can create models that do amazing things.
Those three things just all really get me excited because the combination of these things I think can really make the lives of doctors, and nurses, and other clinicians better; can really lead to new diagnostics and therapeutic technologies, and unleash the potential of great minds and innovators. Stepping back for a minute, it really just amounts to creating systems that allow innovators, data, and computing infrastructure to all come together in one place, and then just having the faith that when you do that, great things will happen. Healthcare is just a huge opportunity area for doing this, that I’ve just become really passionate about.
I guess we will reach a point where you can have essentially the very best doctor in the world in your smartphone, and the very best psychologist, and the very best physical therapist, and the very best everything, right? All available at essentially no cost. I guess the internet always provided, at some abstract level, all of that information if you had an infinite amount of time and patience to find it. And the promise of AI, the kinds of things you’re doing, is that it bridges that gap, that difference you mentioned between learning and reasoning. So, paint me a picture of what you think, just in the healthcare arena, the world of tomorrow will look like. What’s the thing that gets you excited?
I don’t actually see healthcare ever getting away from being an essentially human-to-human activity. That’s something very important. In fact, I predict that healthcare will still be largely a local activity where it’s something that you will fundamentally access from another person in your locality. There are lots of reasons for this, but there’s something so personal about healthcare that it ends up being based in relationships. I see AI in the future relieving senseless and mundane burden from the heroes in healthcare—the doctors, and nurses, and administrators, and so on—that provide that personal service.
So, for example, we’ve been experimenting with a number of healthcare organizations with our chatbot technology. That chatbot technology is able to answer—on demand, through a conversation with a patient—routine and mundane questions about some health issue that comes up. It can do a, kind of, mundane textbook triage, and then, once all that is done, make an intelligent connection to a local healthcare provider, summarize very efficiently for the healthcare provider what’s going on, and then really allow the full creative potential and attention of the healthcare provider to be put to good use.
Another thing that we’ll be showing off to the world at a major radiology conference next week is the use of computer vision and machine learning to learn the habits and tricks of the trade for radiologists that are doing radiation therapy planning. Right now, radiation therapy planning involves, kind of, a pixel by pixel clicking on radiological images that is extremely important; it has to be done precisely, but also has some artistry. Every good radiologist has his or her different kinds of approaches to this. So, one nice thing about machine learning-based computer vision today is that you can actually observe and learn what radiologists do, their practices, and then dramatically accelerate and relieve a lot of the mundane efforts, so that instead of two hours of work that is largely mundane with only maybe fifteen minutes of that being very creative, we can automate the noncreative aspects of this, and allow the radiologists to devote that full fifteen minutes, or even half an hour, to really thinking through the creative aspects of radiology. So, it’s more of an empowerment model rather than replacing those healthcare workers. It still relies on human intuition; it still relies on human creativity, but hopefully allows more of that intuition, and more of that creativity to be harnessed by taking away some of the mundane, and time-consuming aspects of things.
These are approaches that I view as very human-focused, very humane ways to, not just make healthcare workers more productive, but to make them happier and more satisfied in what they do every day. Unlocking that with AI is just something that I feel is incredibly important. And it’s not just us here at Microsoft that are thinking this way, I’m seeing some really enlightened work going on, especially with some of our academic collaborators in this way. I find it truly inspiring to see what might be possible. Basically, I’m pushing back on the idea that we’ll be able to replace doctors, replace nurses. I don’t think that’s the world that we want, and I don’t even know that that’s the right idea. I don’t think that that necessarily leads to better healthcare.
To be clear, I’m talking about the great, immense parts of the world where there aren’t enough doctors for people, where there is this vast shortage of medical professionals. To somehow fill that gap, surely the technology can do that.
Yes. I think access is great. Even with some of the health chatbot pilot deployments that we’ve been experimenting with right now, you can just see that potential. If people are living in parts of the world where they have access issues, it’s an amazing and empowering thing to be able to just send a message to a chatbot that’s always available and ready to listen, and answer questions. Those sorts of things, for sure, can make a big difference. At the same time, the real payoff is when technologies like that then enable healthcare workers—really great doctors, really great clinicians—to clear enough on their plate that their creative potential becomes available to more people; and so, you win on both ends. You win both on instant access through automation, but you also have the potential to win by expanding and enhancing the throughput and the number of patients that the clinics and clinicians can deal with. It’s a win-win situation in that respect.
Well said and I agree. It sounds like overall you are bullish on the future, you’re optimistic about the future and you think this technology overall is a force for great good, or am I just projecting that on to you? 
I’d say we think a lot about this. I would say, in my own career, I’ve had to confront both the good and bad outcomes, both the positive and unintended consequences of technology. I remember when I was back at DARPA—I arrived at DARPA in 2009—and in the summer of 2009, there was an election in Iran where the people in Iran felt that the results were not valid. This sparked what has been called the Iranian Twitter revolution. And what was interesting about the Iranian Twitter revolution is that people were using social media, Friendster and Twitter, in order to protest the results of this election and to organize protests.
This came to my attention at DARPA, through the State Department, because it became apparent that US-developed technologies to detect cyber intrusions and to help protect corporate networks were being used by the Iranian regime to hunt down and prosecute people who were using social media to organize these protests. The US took very quick steps to stop the sale of these technologies. But the thing that’s important is that these technologies, I’m pretty sure, were developed with only the best of intentions in mind—to help make computer networks safer. So, the idea that these technologies could be used to suppress free speech and freedom of assembly was, I’m sure, never contemplated.
This really, kind of, highlights the double-edged nature of technology. So, for sure, we try to bring that thoughtfulness into every single research project we have across Microsoft Research, and that motivates our participation in things like the Partnership on AI that involves a large number of industry and academic players, because we always want to have the technology, industry, and the research world be more and more thoughtful and enlightened on these ideas. So, yes, we’re optimistic. I’m optimistic certainly about the future, but that optimism, I think, is founded on a good dose of reality that if we don’t actually take proactive steps to be enlightened, on both the good and bad possibilities, good and bad outcomes, then the good things don’t just happen on their own automatically. So, it’s something that we work at, I guess, is the bottom line for what I’m trying to say. It’s earned optimism.
I like that. “Earned optimism,” I like that. It looks like we are out of time. I want to thank you for an hour of fascinating conversation about all of these topics. 
It was really fascinating, and you’ve asked some of the hardest questions of the day. It was a challenge, and tons of fun to noodle on them with you.
Like, “What is bigger, the sun or a nickel?” Turns out that’s a very hard question.
I’m going to ask Xiaoice that question and I’ll let you know what she says.
All right. Thank you again.
Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 17: A Conversation with James Barrat

[voices_in_ai_byline]
In this episode, Byron and James talk about jobs, human vs. artificial intelligence, and more.
[podcast_player name=”Episode 17: A Conversation with James Barrat” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-10-30-(00-54-11)-james-barrat.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-3-1.jpg”]
[voices_in_ai_link_back]
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I am Byron Reese. Today I am so excited that our guest is James Barrat. He wrote a book called Our Final Invention, subtitled Artificial Intelligence and the End of the Human Era. James Barrat is also a renowned documentary filmmaker, as well as an author. Welcome to the show, James.
James Barrat: Hello.
So, let’s start off with, what is artificial intelligence?
Very good question. Basically, artificial intelligence is when machines perform tasks that are normally ascribed to human intelligence. I have a very simple definition of intelligence that I like. Because ‘artificial intelligence’—the definition just throws the ideas back to humans, and [to] human intelligence, which is the intelligence we know the most about.
The definition I like is: intelligence is the ability to achieve goals in a variety of novel environments, and to learn. And that’s a simple definition, but a lot is packed into it. Your intelligence has to achieve goals, it has to do something—whether that’s play Go, or drive a car, or solve proofs, or navigate, or identify objects. And if it doesn’t have some goal that it achieves, it’s not very useful intelligence.
If it can achieve goals in a variety of environments, if it can do object recognition and do navigation and do car-driving like our intelligence can, then it’s better intelligence. So, it’s goal-achieving in a bunch of novel environments, and then it learns. And that’s probably the most important part. Intelligence learns and it builds on its learning.
And you wrote a widely well-received book, Our Final Invention. Can you explain to the audience just your overall thesis, and the main ideas of the book?
Sure. Our Final Invention is basically making the argument that AI is a dual-use technology. A dual-use technology is one that can be used for great good, or great harm. Right now we’re in a real honeymoon phase of AI, where we’re seeing a lot of nifty tools come out of it, and a lot more are on the horizon. AI, right now, can find cancer clusters in x-rays better than humans. It can do business analytics better than humans. AI is doing what first-year legal associates do; it’s doing legal discovery.
So we are finding a lot of really useful applications. It’s going to make us all better drivers, because we won’t be driving anymore. But it’s a dual-use technology because, for one thing, it’s going to be taking a lot of jobs. You know, there are five million professional drivers in the United States, seven million back-office accountants—those jobs are going to go away. And a lot of others.
So the thesis of my book is that we need to look under the hood of AI, look at its applications, look who’s controlling it, and then in a longer term, look at whether or not we can control it at all.
Let’s start with that point and work backwards. That’s an ominous statement. Can we record it at all? What are you thinking there?
Can we control it at all.
I’m sorry, yes. Control it at all.
Well, let me start, I prefer to start the other way. Stephen Hawking said that the trouble with AI is, in the short term, who controls it, and in the long term, can we control it at all? And in the short term, we’ve already suffered some from AI. You know, the NSA recently was accessing your phone data and mine, and getting your phone book and mine. And it was, basically, seizing our phone records, and that used to be illegal.
Used to be that if I wanted to seize, to get your phone records, I needed to go to a court, and get a court order. And that was to avoid abridging the Fourth Amendment, which prevents illegal search and seizure of property. Your phone messages are your property. The NSA went around that, and grabbed our phone messages and our phone data, and they are able to sift through this ocean of data because of AI, because of advanced data mining software.
One other example—and there are many—one other example of, in the short term, who controls the AI, is, right now there are a lot of countries developing battlefield robots and drones that will be autonomous. And these are robots and drones that kill people without a human in the loop.  And these are AI issues. There are fifty-six nations developing battlefield robots.
The most sought after will be autonomous battlefield robots. There was an article just a couple of days ago about how the Marines have a robot that shoots a machine gun on a battlefield. They control it with a tablet, but their goal, as stated there, is to make it autonomous, to work on its own.
In the longer term, I’ll put it the way that Arthur C. Clarke put it to me, when I interviewed him. Arthur C. Clarke was a mathematician and a physicist before he was a science fiction writer. And he created the HAL 9000 from 2001: A Space Odyssey, probably the most famous homicidal AI. And he said, when I asked him about the control problem of artificial intelligence, he said something like this: He said, “We humans steer the future not because we are the fastest or the strongest creatures, but because we are the most intelligent. And when we share the planet with something that’s more intelligent than we are, it will steer the future.”
So the problem we’re facing, the problem we’re on the cusp of, I can simplify it with a concept called ‘the intelligence explosion’. The intelligence explosion was an idea created by a statistician named I. J. Good in the 1960s. He said, “Once we create machines that do everything as well or better than humans, one of the things they’ll do is create smart machines.”
And we’ve seen artificial intelligence systems slowly begin to do things better than we do, and it’s not a stretch to think about a time to come, when artificial intelligence systems do advanced AI research and development better than humans. And I. J. Good said, “Then, when that happens, we humans will no longer set the pace of intelligence advancement, it will be machines that will set the pace of advancement.”
The trouble with that is, we know nothing about how to control a machine, or a cognitive architecture, that’s a thousand or a million times more intelligent than we are. We have no experience with anything like that. We can look around us for analogies in the animal world.
How do we treat things that we’re a thousand times more intelligent than? Well, we treat all animals in a very negligent way. And the smart ones are either endangered, or they’re in zoos, or we eat them. That’s a very human-centric analogy, but I think it’s probably appropriate.
Let’s push on this just a little bit.  So do you…
Sure.
Do you believe… Some people say ‘AI’ is kind of this specter of a term now, that, it isn’t really anything different than any other computer programs we’ve ever run, right? It’s better and faster and all of that, but it isn’t qualitatively anything different than what we’ve had for decades.
And so why do you think that? And when you say that AIs are going to be smarter than us, a million times smarter than us, ‘smarter’ is also a really nebulous term.
I mean, they may be able to do some incredibly narrow thing better than us. I may not be able to drive a car as well as an AI, but that doesn’t mean that same AI is going to beat me at Parcheesi. So what do you think is different? Why isn’t this just incrementally… Because so far, we haven’t had any trouble.
What do you think is going to be the catalyst, or what is qualitatively different about what we are dealing with now?
Sure. Well, there’s a lot of interesting questions packed into what you just said. And one thing you said—which I think is important to draw out—is that there are many kinds of intelligence. There’s emotional intelligence, there’s rational intelligence, there’s instinctive and animal intelligence.
And so, when I say something will be much more intelligent than we are, I’m using a shorthand for: It will be better at our definition of intelligence, it will be better at solving problems in a variety of novel environments, it will be better at learning.
And to put what you asked in another way, you’re saying that there is an irreducible promise and peril to every technology, including computers. All technologies, back to fire, have some good points and some bad points. AI I find qualitatively different. And I’ll argue by analogy, for a second. AI to me is like nuclear fission. Nuclear fission is a dual-use technology capable of great good and great harm.
Nuclear fission is the power behind atom bombs and behind nuclear reactors. When we were developing it in the ‘20s and ‘30s, we thought that nuclear fission was a way to get free energy by splitting the atom. Then it was quickly weaponized. And then we used it to incinerate cities. And then we as a species held a gun at our own heads for fifty years with the arms race. We threatened to make ourselves extinct. And that almost succeeded a number of times, and that struggle isn’t over.
To me, AI is a lot more like that. You said it hasn’t been used for nefarious reasons, and I totally disagree. I gave you an example with the NSA. A couple of weeks ago, Facebook was caught targeting emotionally-challenged and despairing children for advertising.
To me, that’s extremely exploitative. It’s a rather soulless and exploitative commercial application of artificial intelligence. So I think these pitfalls are around us. They’re already taking place. So I think the qualitative difference with artificial intelligence is that intelligence is our superpower, the human superpower.
It’s the ability to be creative, the ability to invent technology. That was one thing Stephen Hawking brought up when he was asked about, “What are the pitfalls of artificial intelligence?”
He said, “Well, for one thing, they’ll be able to develop weapons we don’t even understand.” So, I think the qualitative difference is that AI is the invention that creates inventions. And we’re on the cusp, this is happening now, and we’re on the cusp of an AI revolution, it’s going to bring us great profit and also great vulnerability.
You’re no doubt familiar with Searle’s “Chinese Room” kind of question, but all of the readers, all of the listeners might not be… So let me set that up, and then get your thought on it. It goes like this:
There’s a person in a room, a giant room full of very special books. And he doesn’t—we’ll call him the librarian—and the librarian doesn’t speak a word of Chinese. He’s absolutely unfamiliar with the language.
And people slide him questions under the door which are written in Chinese, and what he does—what he’s learned to do—is to look at the first character in that message, and he finds the book, of the tens of thousands that he has, that has that on the spine. And in that book he looks up the second character. And the book then says, “Okay, go pull this book.”
And in that book he looks up the third, and the fourth, and the fifth, all the way until he gets to the end. And when he gets to the end, it says “Copy this down.” And so he copies these characters again that he doesn’t understand, doesn’t have any clue whatsoever what they are.
He copies them down very carefully, very faithfully, slides it back under the door… Somebody’s outside who picks it up, a Chinese speaker. They read it, and it’s just brilliant! It’s just absolutely brilliant! It rhymes, it’s Haiku, I mean it’s just awesome!
Now, the question, the kind of ta-da question at the end is: Does the man, does the librarian understand Chinese? Does he understand Chinese?
Now, many people in the computer world would say yes. I mean, Alan Turing would say yes, right?  The Chinese room passes the Turing Test. The Chinese speakers outside, as far as they know, they are conversing with a Chinese speaker.
So do you think the man understands Chinese? And do you think… And if he doesn’t understand Chinese… Because obviously, the analogy of it is: that’s all that computer does. A computer doesn’t understand anything. It doesn’t know if it’s talking about cholera or coffee beans or anything whatsoever. It runs this program, and it has no idea what it’s doing.
And therefore it has no volition, and therefore it has no consciousness; therefore it has nothing that even remotely looks like human intelligence. So what would you just say to that?
The Chinese Room problem is fascinating, and you could write books about it, because it’s about the nature of consciousness. And what we don’t know about consciousness, you could fill many books with. And I used to think I wanted to explore consciousness, but it made exploring AI look easy.
I don’t know if it matters that the machine thinks as we do or not. I think the point is that it will be able to solve problems. We don’t know about the volition question. Let me give you another analogy. When Ferrucci, [when] he was the head of Team Watson, he was asked a very provocative question: “Was Watson thinking when it beat all those masters at Jeopardy?” And his answer was, “Does a submarine swim?”
And what he meant was—and this is the twist on the Chinese Room problem—he meant [that] when they created submarines, they learned principles of swimming from fish. But then they created something that swims farther and faster and carries a huge payload, so it’s really much more powerful than fish.
It doesn’t reproduce and it doesn’t do some of the miraculous things fish do, but as far as swimming, it does it.  Does an airplane fly? Well, the aviation pioneers used principles of flight from birds, but quickly went beyond that, to create things that fly farther and faster and carry a huge payload.
I don’t think it matters. So, two answers to your question. One is, I don’t think it matters. And I don’t think it’s possible that a machine will think qualitatively as we do. So, I think it will think farther and faster and carry a huge payload. I think it’s possible for a machine to be generally intelligent in a variety of domains.
We can see intelligence growing in a bunch of domains. If you think of them as rippling pools, ripples in a pool, like different circles of expertise ultimately joining, you can see how general intelligence is sort of demonstrably on its way.
Whether or not it thinks like a human, I think it won’t. And I think that’s a danger, because I think it won’t have our mammalian sense of empathy. It’ll also be good, because it won’t have a lot of sentimentality, and a lot of cognitive biases that our brains are labored with. But you said it won’t have volition. And I don’t think we can bet on that.
In my book, Our Final Invention, I interviewed at length Steve Omohundro—he’s an AI maker and physicist—who has taken it upon himself to create more or less a science for understanding super intelligent machines. Or machines that are more intelligent than we are.
And among the things that he argues for, using rational-agent and economic theory—and I won’t go into that whole thing—but it’s in Our Final Invention, it’s also in Steve Omohundro’s many websites. Machines that are self-aware and are self-programming, he thinks, will develop basic drives that are not unlike our own.
And they include things like self-protection, creativity, efficiency with resources, and other drives that will make them very challenging to control—unless we get ahead of the game and create this science for understanding them, as he’s doing.
Right now, computers are not generally intelligent, they are not conscious. All the limitations of the Chinese Room, they have. But I think it’s unrealistic to think that we are frozen in development. I think it’s very realistic to think that we’ll create machines whose cognitive abilities match and then outstrip our own.
But, just kind of going a little deeper on the question. So we have this idea of intelligence, which there is no consensus definition on it. Then within that, you have human intelligence—which, again, is something we certainly don’t understand. Human intelligence comes from our brain, which is—people say—‘the most complicated object in the galaxy’.
We don’t understand how it works. We don’t know how thoughts are encoded. We know incredibly little, in the grand scheme of things, about how the brain works. But we do know that humans have these amazing abilities, like consciousness, and the ability to generalize intelligence very effortlessly. We have something that certainly feels like free will, we certainly have something that feels like… and all of that.
Then on the other hand, you think back to a clockwork, right? You wind up a clock back in the olden days and it just ran a bunch of gears. And while it may be true that the computers of the day add more gears and have more things, all we’re doing is winding it up and letting it go.
And, isn’t it, like… not only a stretch, not only a supposition, not only just sensationalistic, to say, “Oh no, no. Someday we’ll add enough gears that, you wind that thing up, and it’s actually going to be a lot smarter than you.”
Isn’t that, I mean at least it’s fair to say there’s absolutely nothing we understand about human intelligence, and human consciousness, and human will… that even remotely implies that something that’s a hundred percent mechanical, a hundred percent deterministic, a hundred percent… Just wind it and it doesn’t do anything. But…
Well, you’re wrong about being a hundred percent deterministic, and it’s not really a hundred percent mechanical. When you talk about things like will, will is such an anthropomorphic term, I’m not sure if we can really, if we can attribute it to computers.
Well, I’m specifically saying we have something that feels and seems like will, that we don’t understand.
If you look, if you look at artificial neural nets, there’s a great deal about them we don’t understand. We know what the inputs are, and we know what the outputs are; and when we want to make better output—like a better translation—we know how to adjust the inputs. But we don’t know what’s going on in a multilayered neural net system. We don’t know what’s going on in a high resolution way. And that’s why they’re called black box systems. The same goes for evolutionary algorithms.
In evolutionary algorithms, we have a sense of how they work. We have a sense of how they combine pieces of algorithms, how we introduce mutations. But often, we don’t understand the output, and we certainly don’t understand how it got there, so that’s not completely deterministic. There’s a bunch of stuff we can’t really determine in there.
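[Editor’s note: For readers unfamiliar with the idea, here is a minimal, generic genetic algorithm that makes the “combine pieces, introduce mutations” loop concrete. It is a textbook-style sketch, not any specific system discussed in the conversation; the toy goal and all parameters are invented for illustration.]

```python
import random

TARGET = [1] * 20                       # toy goal: evolve a string of all ones

def fitness(genome):
    """Score a candidate: how many positions already match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Combine pieces of two parent genomes at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population and iterate: select, recombine, mutate.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]           # keep the fittest, discard the rest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

print(generation, fitness(population[0]), population[0])
```

Even in this tiny example, the final genome’s ancestry, which crossovers and mutations mattered, is awkward to reconstruct after the fact, which is the speaker’s point about not fully understanding how the output was reached.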
And I think we’ve got a lot of unexplained behavior in computers that, at this stage, we simply attribute to our lack of understanding. But I think in the longer term, we’ll see that computers are doing things on their own. I’m talking about a lot of the algorithms on Wall Street, a lot of the flash crashes we’ve seen, a lot of the cognitive architectures. There’s not one person who can describe the whole system… not even the ‘quants’, as they call them, or the guys who are programming Wall Street’s algorithms.
They’ve already gone, in complexity, beyond any individual’s ability to really strip them down.
So, we’re surrounded by systems of immense power. Gartner and company think that in the AI space—because of the exponential nature of the investment; I think it’s doubled every year since 2009—by 2025, that space will be worth twenty-five trillion dollars of value. So to me, that says a couple of things.
That anticipates enormous growth, and enormous growth in power in what these systems will do. We’re in an era now that’s different from other eras. But it is like other Industrial Revolutions. We’re in an era now where everything that’s electrified—to paraphrase Kevin Kelly, the futurist—everything that’s electrified is being cognitized.
We can’t pretend that it will always be like a clock. Even now it’s not like a clock. A clock you can take apart, and you can understand every piece of it.
The cognitive architectures we’re creating now… When Ferrucci was watching Watson play and asked, “Why did it answer like that?”, there was nobody on his team who knew the answer. When it made mistakes… It did really, really well; it beat the humans. But comparing [that] to a clock, I think that’s the wrong metaphor.
Well, let’s just poke at it one more minute, and then we can move on to something else. Is it really fair to say that, because humans don’t understand how it works, it must somehow be working differently than other machines?
Put another way, it is fair to say that we’ve added enough gears now that nobody can keep them all straight. I mean, nobody understands why the Google algorithm—even at Google—turns up what it does when you search. But nobody’s suggesting anything nondeterministic, anything emergent, anything like that is happening.
I mean, our computers are completely deterministic, are they not?
I don’t think that they are. I think if they were completely deterministic, then enough brains put together could figure out a multi-tiered neural net, and I don’t think there’s any evidence that we can right now.
Well, that’s exciting.  
I’m not saying that it’s coming up with brilliant new ideas… But take a system so sophisticated that it defeats Go, and teaches grandmasters new ideas about Go—which is what the grandmaster it defeated three out of four times said; [he] said, “I have new insights about this game.” Nobody could explain what it was doing, but it was thinking creatively in a way that we don’t understand.
Go is not like chess. On a chess board, I don’t know how many possible positions there are, but it’s calculable. On a Go board, it’s incalculable. I’ve heard it said—and I don’t really understand it very well—that there are more possible positions on a Go board than there are atoms in the universe.
So when it’s beating Go masters… playing the game requires a great deal of intuition. It’s not just pattern-matching, like “I’ve played a million games of Go”—and that’s sort of what chess is [pattern-matching].
You know, the grandmasters are people who have seen every board you could possibly come up with. They’ve probably seen it before, and they know what to do. Go’s not like that. It requires a lot more undefinable intuition.
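[For scale, a rough back-of-the-envelope comparison, using commonly cited and approximate figures rather than numbers from the conversation: legal chess positions are usually estimated somewhere around 10^43 to 10^47, legal positions on a 19×19 Go board at roughly 2×10^170, and atoms in the observable universe at about 10^80.]

```python
# Back-of-the-envelope comparison using commonly cited, approximate estimates.
chess_positions = 10 ** 47       # rough upper-end estimate of legal chess positions
go_positions = 2.08e170          # approximate count of legal 19x19 Go positions
atoms_in_universe = 10 ** 80     # often-quoted order of magnitude

print(go_positions / atoms_in_universe)  # ~2e90: Go positions dwarf the atom count
print(go_positions / chess_positions)    # ~2e123: and dwarf chess as well
```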
And so we’re moving rapidly into that territory. The program that beat the Go masters is called AlphaGo. It comes out of DeepMind. DeepMind was bought four years ago by Google, and it goes deep into reinforcement learning and artificial neural nets. I think your argument would be apt if we were talking about some of the old languages—Fortran, Basic, Pascal—where you could look at every line of code and figure out what was going on.
That’s no longer possible, and you’ve got Go grandmasters saying “I learned new insights.” So we’re in a brave new world here.
So you had a great part of the book where you do a really smart kind of roll-up of when we may have an AGI, where you went into the different ideas behind it. And the question I’m really curious about is this: On the one hand, you have Elon Musk saying we can have it much sooner than you think. You have Stephen Hawking, who you quoted. You have Bill Gates saying he’s worried about it.
So you have all of these people who say it’s soon, it’s real, and it’s potentially scary. We need to watch what we do. Then on the other camp, you have people who are equally immersed in the technology, equally smart, equally, equally, equally all these other things… like Andrew Ng, who up until recently headed up AI at Baidu, who says worrying about AGI is like worrying about overpopulation on Mars. You have other people saying the soonest it could possibly happen is five hundred years from now.
So I’m curious about this. Why do you think, among these big brains, super smart people, why do they have… What is it that they believe or know or think, or whatever, that gives them such radically different views about this technology? How do you get your head around why they differ?
Excellent question. I first heard that Mars analogy from, I think it was Sebastian Thrun, who said we don’t know how to get to Mars. We don’t know how to live on Mars. But we know how to get a rocket to the moon, and gradually and slowly, little by little—No, it was Peter Norvig, who wrote the sort of standard text on artificial intelligence, called Artificial Intelligence: A Modern Approach.
He said, you know, “We can’t live on Mars yet, but we’re putting the rockets together. Some companies are putting in some money. We’re eventually going to get to Mars, and there’ll be people living on Mars, and then people will be setting another horizon.” We haven’t left our solar system yet.
It’s a very interesting question, and very timely, about when will we achieve human-level intelligence in a machine, if ever. I did a poll about it. It was kind of a biased poll; it was of people who were at a conference about AGI, about artificial general intelligence. And then I’ve seen a lot of polls, and there’s two points to this.
One is the polls go all over the place. Some people said… Ray Kurzweil says 2029. Ray Kurzweil’s been very good at anticipating the progress of technology, he says 2029. Ray Kurzweil’s working for Google right now—this is parenthetically—he said he wants to create a machine that makes three hundred trillion calculations per second, and to share that with a billion people online. So what’s that? That’s basically reverse engineering of a brain.
Making three hundred trillion calculations per second, which is sort of a rough estimate of what a brain does. And then sharing it with a billion people online, which is making superintelligence a service, which would be incredibly useful. You could do pharmacological research. You could do really advanced weather modeling, and climate modeling. You could do weapons research, you could develop incredible weapons. He says 2029.
Some people said one hundred years from now. The mean date that I got was about 2045 for human-level intelligence in a machine. And then my book, Our Final Invention, got reviewed by Gary Marcus in the New Yorker, and he said something that stuck with me. He said whether or not it’s ten years or one hundred years, the more important question is: What happens next?
Will it be integrated into our lives? Or will it suddenly appear? How are we positioned for our own safety and security when it appears, whether it’s in fifty years or one hundred? So I think about it as… Nobody thought Go was going to be beaten for another ten years.
And here’s another way… So those are the two ways to think about it: one is, there’s a lot of guesses; and two, does it really matter what happens next? But the third part of that is this, and I write about it in Our Final Invention: If we don’t achieve it in one hundred years, do you think we’re just going to stop? Or do you think we’re going to keep beating at this problem until we solve it?
And as I said before, I don’t think we’re going to create exactly human-like intelligence in a machine. I think we’re going to create something extremely smart and extremely useful, to some extent, but something we, in a very deep way, don’t understand. So I don’t think it’ll be like human intelligence… it will be like an alien intelligence.
So that’s kind of where I am on that. I think it could happen in a variety of timelines. It doesn’t really matter when, and we’re not going to stop until we get there. So ultimately, we’re going to be confronted with machines that are a thousand or a million times more intelligent than we are.
And what are we going to do?
Well, I guess the underlying assumption is… it speaks to the credibility of the forecast, right? Like, if there’s a lab, and they’re working on inventing the lightbulb, like: “We’re trying to build the incandescent light bulb.” And you go in there and you say, “When will you have the incandescent light bulb?” and they say “Three or four weeks, five weeks. Five weeks tops, we’re going to have it.”  
Or if they say, “Uh, a hundred years. It may be five hundred, I don’t know.” I mean in those things you take a completely different view of, do we understand the problem? Do we know what we’re building? Do we know how to build an AGI? Do we even have a clue?
Do you believe… or here, let me ask it this way: Do you think an AGI is just an evolutionary… Like, we have AlphaGo, we have Watson, and we’re making them better every day. And eventually, that kind of becomes—gradually—this AGI. Or do you think there’s some “A-ha” thing we don’t know how to do, and at some point we’re like “Oh, here’s how you do it! And this is how you get a synapse to work.”
So, do you think we are nineteen revolutionary breakthroughs away, or “No, no, no, we’re on the path. We’re going to be there in three to five years.”?
Ben Goertzel, who is definitely in the race to make AGI—I interviewed him in my book—said we need some sort of breakthrough. And then we got to artificial neural nets and deep learning, and deep learning combined with reinforcement learning, which is an older technique, and that was kind of a breakthrough. For IBM’s Deep Blue to beat chess, it really was just looking up tables of positions.
But to beat Go, as we’ve discussed, was something different.
I think we’ve just had a big breakthrough. I don’t know how many revolutions we are away from a breakthrough that makes intelligence general. But let me give you this… the way I think about it.
There’s long been talk in the AI community about an algorithm… I don’t know exactly what they call it. But it’s basically an open-domain problem-solver that asks something simple like, what’s the next best move? What’s the next best thing to do? Best being based on some goals that you’ve got. What’s the next best thing to do?
Well, that’s sort of how DeepMind took on all the Atari games. They could drop the algorithm into a game, and it didn’t even know the rules. It just noticed when it was scoring or not scoring, and so it was figuring out what’s the next best thing to do.
Well if you can drop it into every Atari game, and then you drop it into something that’s many orders of magnitude above it, like Go, then why are we so far from dropping that into a robot and setting it out into the environment, and having it learn the environment and learn common sense about the environment—like, “Things go under, and things go over; and I can’t jump into the tree; I can climb the tree.”
It seems to me that general intelligence might be as simple as a program that says “What’s the next best thing to do?” And then it learns the environment, and then it solves problems in the environment.
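[A minimal sketch of that “what’s the next best thing to do?” loop, in the spirit of the score-only setup described above. This is toy tabular Q-learning; the `env` object and its `reset`/`step` methods are assumed, Gym-style interfaces used for illustration, not any particular library’s API.]

```python
import random
from collections import defaultdict

# The agent never sees the rules: it only gets an observation, a score change
# (the reward), and a menu of actions, and it keeps estimating which action
# is the "next best thing to do" in each situation.

def train(env, actions, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> estimated long-run score

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Mostly take the action with the best current estimate; sometimes explore.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Nudge the estimate toward "reward now, plus the best we expect afterward".
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state

    return q
```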
So some people are going about that by training algorithms, artificial neural net systems and defeating games. Some people are really trying to reverse-engineer a brain, one neuron at a time. That’s sort of, in a nutshell—to vastly overgeneralize—that’s called the bottom-up, and the top-down approach for creating AGI.
So are we a certain number of revolutions away, or are we going to be surprised? I’m surprised a little too frequently for my own comfort about how fast things are moving. Faster than when I was writing the book. I’m wondering what the next milestone is. I think the Turing Test has not been achieved, or even close. I think that’s a good milestone.
It wouldn’t surprise me if IBM, which is great at issuing itself grand challenges and then beating them… But what’s great about IBM is, they’re upfront. They take on a big challenge… You know, they were beaten—Deep Blue was beaten several times before it won. When they took on Jeopardy, they weren’t sure they were going to win, but they had the chutzpah to get out there and say, “We’re gonna try.” And then they won.
I bet IBM will say, “You know what, in 2020, we’re going to take on the Turing Test. And we’re going to have a machine that you can’t tell that it’s a machine. You can’t tell the difference between a machine and a human.”
So, I’m surprised all the time. I don’t know how far or how close we are, but I’d say I come at it from a position of caution. So I would say, the window in which we have to create safe AI is closing.
Yes, no… I’m with you; I was just taking that in. I’ll insert some ominous “Dun, dun, dun…” Take that a little further.
Everybody has a role to play in this conversation, and mine happens to be canary in a coal mine. Despite the title of my book, I really like AI. I like its potential. Medical potential. I don’t like its war potential… If we see autonomous battlefield robots on the battlefield, you know what’s going to happen. Like every other piece of used military equipment, it’s going to come home.
Well, the thing is, about the military… and the thing about technology is… If you told my dad that he would invite into his home a representative of Google, and that representative would sit in a chair in a corner of the house, and he would take down everything we said, and would sell that data to our insurance company, so our insurance rates might go up… and it would sell that data to mortgage bankers, so they might cut off our ability to get a mortgage… because dad talks about going bankrupt, or dad talks about his heart condition… and he can’t get insurance anymore.
But if we hire a corporate guy, and we pay for it, and put him in our living room… Well, that’s exactly what we’re doing with Amazon Echo, with all the digital assistants. All this data is being gathered all the time, and it’s being sold… Buying and selling data is a four billion dollar-a-year industry. So we’re doing really foolish things with this technology. Things that are bad for our own interests.
So let me ask you an open-ended question… prognostication over shorter time frames is always easier. Tell me what you think is in store for the world, I don’t know, between now and 2030, the next thirteen years. Talk to me about unemployment, talk to me about economics, all of that. Tell me the next thirteen years.
Well, brace yourself for some futurism, which is a giant gamble and often wrong. To paraphrase Kevin Kelly again, everything that’s electrical will be cognitized. Our economy will be dramatically shaped by the ubiquity of artificial intelligence. With the Internet of Things, with the intelligence of everything around us—our phones, our cars…
I can already talk to my car. I’m inside my car, I can ask for directions, I can do some other basic stuff. That’s just going to get smarter, until my car drives itself. A lot of people… MIT did a study, that was quoting a Cambridge study, that said: “Forty-five percent of our jobs will be able to be replaced within twenty years.” I think they downgraded that to like ten years.
Not that they will be replaced, but they will be able to be replaced. But when AI is a twenty-five trillion dollar industry—when it’s worth twenty-five trillion dollars in 2025—anybody will be able to replace any employee that’s doing anything remotely repetitive, and this includes doctors and lawyers… We’ll be able to replace them with AI.
And this cuts deep into the middle class. This isn’t just people working in factories or driving cars. This is all accountants, this is a lot of the doctors, this is a lot of the lawyers. So we’re going to see giant dislocation, or giant disruption, in the economy. And giant money being made by fewer and fewer people.
And the trouble with that is, that we’ve got to figure out a way to keep a huge part of our population from starving, from not making a wage. People have proposed a basic minimum income, but to do that we would need tax revenue. And the big companies, Amazon, Google, Facebook, they pay taxes in places like Ireland, where there’s very low corporate tax. They don’t pay taxes where they get their wealth. So they don’t contribute to your roads.
Google is not contributing to your road system. Amazon is not contributing to your water supply, or to making your country safe. So there’s a giant inequity there. So we have to confront that inequity and, unfortunately, that is going to require political solutions, and our politicians are about the most technologically-backward people in our culture.
So, what I see is a lot of unemployment. I see a lot of nifty things coming out of AI, and I am willing to be surprised by job creation in AI, and robotics, and automation. And I’d like to be surprised by that. But the general trend is… Look at the biggest contract manufacturer in the world: Foxconn just replaced thirty thousand people in Asia with thirty thousand robots.
And all those people can’t be retrained, because if you’re doing something that’s that repetitive, and that mechanical… what can you be retrained to do? Well, maybe one out of every hundred could be a floor manager in a robot factory, but what about all the others? Disruption is going to come from all the people that don’t have jobs, and there’s nothing to be retrained to.
Because our robots are made in factories where robots make the robots. Our cars are made in factories where robots make the cars.
Isn’t that the same argument they used during the Industrial Revolution, when they said, “You got ninety percent of people out there who are farmers, and we’re going to lose all these farm jobs… And you don’t expect those farmers are going to, like, come work in a factory, where they have to learn completely new things.”
Well, what really happened in the different technology revolutions, back from the cotton gin onward is, a small sector… The Industrial Revolution didn’t suddenly put farms out of business. A hundred years ago, ninety percent of people worked on farms, now it’s ten percent.
But what happened with the Industrial Revolution is, sector by sector, it took away jobs, but then those people could retrain, and could go to other sectors, because there were still giant sectors that weren’t replaced by industrialization. There was still a lot of manual labor to do. And some of them could be trained upwards, into management and other things.
As the author Martin Ford wrote in Rise of the Robots—and there’s also a great book called The Fourth Industrial Age—what’s different about this revolution, as they both argue, is that AI works in every industry. So it’s not like the old revolutions, where one sector was replaced at a time, and there was time to absorb that change, time to reabsorb those workers and retrain them in some fashion.
But everybody is going to be… My point is, all sectors of the economy are going to be hit at once. The ubiquity of AI is going to impact a lot of the economy, all at the same time, and there is going to be a giant dislocation all at the same time. And it’s very unclear, unlike in the old days, how those people can be retrained and retargeted for jobs. So, I think it’s very different from other Industrial Revolutions, or rather technology revolutions.
Take the adoption of coal—it went from generating five percent to eighty percent of all of our power in twenty years. The electrification of industry happened incredibly fast. Mechanization, the replacement of animal power with mechanical power, happened incredibly fast. And yet, unemployment has remained between four and nine percent in this country.
Other than during the Depression, without ever even hiccupping—no matter what disruption, no matter what speed you threw at it—the economy always managed to use that technology to create more jobs. And isn’t it maybe a lack of imagination that says, “Well, no, now we’re out. No more jobs to create. Or not ones that these people who’ve been displaced can do.”
I mean, isn’t that what people would’ve said for two hundred years?
Yes, that’s a somewhat persuasive argument. I think you’ve got a point that the economy was able to absorb those jobs, and the unemployment remained steady. I do think this is different. I think it’s a kind of a puzzle, and we’ll have to see what happens. But I can’t imagine… Where do professional drivers… they’re not unskilled, but they’re right next to it. And it’s the job of choice for people who don’t have a lot of education.
What do you retrain professional drivers to do once their jobs are taken? It’s not going to be factory work, it’s not going to be simple accounting. It’s not going to be anything repetitive, because that’s going to be the job of automation and AI.
So I anticipate problems, but I’d love to be pleasantly surprised. If it worked like the old days, then all those people that were cut off the farm would go to work in the factories, and make Ford automobiles, and make enough money to buy one. I don’t see all those displaced drivers going off to factories to make cars, or to manufacture anything.
A case in point of what’s happening is… Rethink Robotics, which is Rodney Brooks’ company, just built something called Baxter; and now Baxter is a generation old, and I can’t think of what replaced it. But it costs about twenty-two thousand dollars to get one of these robots. These robots cost basically what a minimum wage worker makes in a year. But they work 24/7, so they really replace three shifts, so they really are replacing three people.
Where do those people go? Do they go to shops that make Baxter? Or maybe you’re right, maybe it’s a failure of imagination to not be able to anticipate the jobs that would be created by Baxter and by autonomous cars. Right now, it’s failing a lot of people’s imagination. And there are not ready answers.
I mean, if it were 1995 and the Internet was, you’re just hearing about it, just getting online, just hearing it… And somebody said, “You know what? There’s going to be a lot of companies that just come out and make hundreds of billions of dollars, one after the other, all because we’ve learned how to connect computers and use this hypertext protocol to communicate.” I mean, that would not have seemed like a reasonable surmise.
No, and that’s a great example. If you were told that trillions of dollars of value are going to come out of this invention, who would’ve thought? And maybe I personally, just can’t imagine the next wave that is going to create that much value. I can see how AI and automation will create a lot of value, I only see it going into a few pockets though. I don’t see it being distributed in any way that the Silicon Valley startups, at least initially, were.
So let’s talk about you for a moment. Your background is in documentary filmmaking. Do you see yourself returning to that world? What are you working on, another book? What kind of thing is keeping you busy by day right now?
Well, I like making documentary films. I just had one on PBS last year… If you Google “Spillover” and “PBS” you can see it is streaming online. It was about spillover diseases—Ebola, Zika and others—and it was about the Ebola crisis, and how viruses spread. And then now I’m working on a film about paleontology, about a recent discovery that’s kind of secret, that I can’t talk about… from sixty-six million years ago.
And I am starting to work on another book that I can’t talk about. So I am keeping an eye on AI, because this issue is… Despite everything I talk about, I really like the technology; I think it’s pretty amazing.
Well, let’s close with, give me a scenario that you think is plausible, that things work out. That we have something that looks like full employment, and…
Good, Byron. That’s a great way to go out. I see people getting individually educated about the promise and peril of AI, so that we as a culture are ready for the revolution that’s coming. And that forces businesses to be responsible, and politicians to be savvy, about developments in artificial intelligence. Then they invest some money to make artificial intelligence advancement transparent and safe.
And therefore, when we get to machines that are as smart as humans, that [they] are actually our allies, and never our competitors. And that somehow on top of this giant wedding cake I’m imagining, we also manage to keep full employment, or nearly-full employment. Because we’re aware, and because we’re working all the time to make sure that the future is kind to humans.
Alright, well, that is a great place to leave it. I am going to thank you very much.
Well, thank you. Great questions. I really enjoyed the back-and-forth.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 8: A Conversation with Esther Dyson

[voices_in_ai_byline]
In this episode, Byron and Esther talk about intelligence, jobs, her experience in being a backup cosmonaut and more.
[podcast_player name=”Episode 8: A Conversation with Esther Dyson” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-10-16-(00-54-51)-esther-dyson.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card.jpg”]
[voices_in_ai_link_back]
Byron Reese: Today, our guest is Esther Dyson. Esther Dyson is a living legend. She has been an angel investor, and sits on the boards of a number of companies. She is also a best-selling author, a world citizen, and a backup cosmonaut for the Russian Space Program. Now, she serves as the Executive Founder for a non-profit called Way to Wellville. Welcome to the show, Esther.
Esther Dyson: Delighted to be here.
Let’s start with that; that sounds like an intriguing non-profit. Can you talk about what its mission is, and what your role therein is?
Yeah. My role is, I founded it. The reason I founded it was a question, which was… As I was an angel investor, and doing tech, and getting more and more interested in healthcare, and biotech, and medicine, I also had to ask the basic question, which is: “Why are we spending so much money and countenancing so much tragedy by fixing people when they’re broken, instead of keeping them healthy and resilient, so that they don’t get sick or chronically diseased in the first place?”
The purpose of Way to Wellville is to show what it looks like when you help people stay healthy. I could go on for way too long, but it’s five small communities around the US, so you can get critical mass in a small way, rather than trying to reshape New York City or something.
The basic idea is that this happens in the community. You don’t actually need to experiment and inspect people one-by-one, but change the environment they live in and then look at sort of the overall impact of that. It started a few years ago as a five-year project and a contest. Now, it’s a ten-year project and it’s more like a collaboration among the five communities.
One way AI is really important is that in order to show the impact you’ve had, you need to be able to predict pretty accurately what would’ve happened otherwise. So, in a sense, these are five communities, and the United States is the control group.
But, at the same time, you can look at a class of third graders and do your math, and say that one-third of these are going to be obese by the time they’re sixteen, thirty percent will have dropped out, ten percent will be juvenile delinquents, and that’s simply unacceptable. We need to fix that. So, that’s what we’re doing.
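[To make the “predict what would’ve happened otherwise” idea concrete, here is a minimal sketch of one standard way to estimate a program’s effect against a comparison group, a difference-in-differences calculation. Every number below is invented purely for illustration; none of it is Wellville data.]

```python
# Toy difference-in-differences: compare the change in a treated community
# against the change in a comparison group over the same period.

treated_before, treated_after = 0.33, 0.27  # e.g., share of kids on an unhealthy trajectory
control_before, control_after = 0.33, 0.32  # comparison communities over the same years

program_effect = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated program effect: {program_effect:+.1%}")  # negative = improvement
```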
We’ll get to the AI stuff here in a moment but I’m just curious, how do you go about doing that? That seems so monumental, as being one of those problems like, where do you start?
Yeah, and that’s why we’re doing it in small communities. Part of the drill was, ask the communities what they want, but at the same time I went in thinking diabetes and heart disease, and exercise, and nutrition. The more we learned, the more we realized that, as you say, you’ve got to start at the beginning, which is prenatal care and childhood. If you come from a broken home or with abusive parents, chances are it’s going to be hard for you to eat properly, it’s going to be hard for you to resist drugs.
There’s a concept called adverse childhood experiences. The mind is a very delicate thing. In some ways, we’re incredibly robust and resilient… But then, you look at the fact that a third of the US population is obese—a smaller number, depending on age, is diabetic. You look at the opioid addiction problem, you look at the number of people who have problems with drinking or other kinds of behavior, and you realize: oh, they’re all self-medicating. Again, let’s catch them when they’re kids and help them be addicted to love and children and exciting work, and feeling productive—rather than substances that cause other problems.
What gives you hope that you’ll be successful? Have you had any promising early findings in the first five-year part?
Not the kind you’d want. The first thing is in each community, part of the premise was there’s a group of local leaders who are trying to help the community be healthy. Mostly, they’re volunteers; they don’t have resources; they’re not accountable; so it’s difficult. We’re trying to help bring in some—but not all—of that Silicon Valley startup culture… It’s okay to fail, as long as you learn.
Plan B is not a disaster. Plan B is the result of learning how to fix Plan A, and so forth. If you look at studies, it’s pretty clear that having caring adults in a child’s life is really important. If you look at studies, it’s pretty clear that there’s no way you can eat healthily if you can’t get healthy food, either because you’re too poor, or it’s inaccessible, or you don’t know what’s healthy.
Some of these things are the result of childhood experiences. Some are the result of poverty, and transportation issues… Yes, you’re right, all these things interact. You can’t go in and fix everything; but if you focus on the kids and their parents, that’s a good place to start.
I learned a lot of concepts. One of them is child storage, as opposed to child enrichment. If your child is going to a preschool that helps them learn how to play, that has caring adults, that can help the kid overcome a horrible home environment… It’s not going to solve all the community’s problems, but it’s definitely going to help some percentage of the children do better. That kind of stuff spreads, just the way the opposite spreads.
In the end, is your hope that you come out of it with, I guess, a set of best practices that you can then disseminate?
People know the best practices. What we really want to do is two things. One, show that it’s possible and inspire people that are [in] regular communities. This is not some multi-million dollar gated community designed for rich people to live healthy and fulfilling lives and go to the spa.
There are five of them, real places in various parts of America: Muskegon, Michigan; Spartanburg, South Carolina; North Hartford, Connecticut; Clatsop County, Oregon; and Lake County, California. The idea is that normal people in these places can fundamentally change the community to make it a place where kids are born lucky, instead of unlucky.
Yes, they can look at what we did, and there will be certain things we did. One is that the community needs to come together across different sectors: the schools, and the business people, and the hospital system need to cooperate. And, most likely, somebody needs to pay.
You need coaches to do everything from nurse visits, pre- and post-birth, early childhood education that’s effectively delivered, caring teachers in the schools, healthy school lunches. Really sad to see the government just backtracked on sodium and other stuff in the school lunches… But in a sense, we’re trying to simulate what it would look like, if we had really wonderful policies around fostering healthy childhoods and show the impact that has.
Let’s zoom the lens way out from there, because that might be an example of the kinds of things you hear a lot about today. It seems like it’s a world full of insurmountable problems, and then it’s also a world full of real, legitimate hope that there’s a way to get through them.
If I were to ask you in a broad way, how do you see the future? [Through] what lens do you look at the future, either of this country, or the world, or anything, in ten years, twenty years, thirty years? What do you think is going to happen, and what will be the big driving forces?
Well, I get my dopamine from doing something, rather than sitting around worrying. Intellectually, I feel these problems; and practically, I’m doing something about them the best way I know that will have leverage, which is doing something small and concentrated, rather than diffuse with no impact.
I want a real impact in a small number of dense places. Then, make that visible to a lot of other people and scale by having them do it, not by trying to do it myself. If you didn’t have hope, you wouldn’t do anything. Nothing happens without people doing something. So, I’m hopeful. Yeah, this is very circular.
So, I was a journalist, and I didn’t persuade people, I told them the truth. Ultimately, I think the truth is extremely powerful. You need to educate people to understand the truth and pay attention to it, but the truth is always much more persuasive than a lot of people just trying to cajole you, or persuade you, or deceive you, or manipulate you.
I want to create a truth that is encouraging and moves people to action, by making them feel that they could do this too; because they can, if they believe they can. This is not believing you will be blessed… It’s more like: Hey, you’ve got to do a lot of the hard work, and you need to change your community, and you need to think about food, and you need to be helping parents become better parents. There are active things you can do.
Is there any precedent for that? That sounds like it calls for changing lots of behaviors.
Well, the precedent is all the lucky people we know whose parents did love them, and who felt secure, and did amazing things. Many of them don’t realize how lucky they are. There’s also, of course, the people who had horrible circumstances and survived somehow anyway.
One of the best examples currently is J.D. Vance in the book Hillbilly Elegy. Many of them were just lucky to have an uncle, or a neighbor lady, or a grandmother, or somebody who gave them that support that they needed to overcome all the obstacles, and then there’s so many others who didn’t [have that].
Yes, certainly, there’s these people who’ve done things like this, but not ones that are visible enough that it really moves people to action. Part of this, we’re hoping to have a documentary that explains what we’re doing. Now, it’s early, because we haven’t done that much.
We’ve done a lot of preparation, and the communities are changing, but believe me: We’re not finished. I will say, when we started we put out a call for applications, and got applications for us to come in and help from forty-two communities.
Then, in the Summer of 2014, Rick Brush, our CEO, and I picked ten of them to go visit. One of them we turned down, because they were too good. That’s the town of Columbus, Indiana, which is, basically, the company town of Cummins Engine, which is just a wonderful place.
They were doing such a good job making their community healthier that we said, “Bless you guys, keep doing it. We don’t want to come in and claim the credit. There’s five other places that need us more.”
There are some pretty wonderful places in America, but there’s also a lot of places that have lost their middle class, people are dispirited, high unemployment. They need employers, they need good parents, they need better schools, they need all this stuff.
It’s not a nice white lady who came from New York to tell you how to live or to give you stuff. It’s this team of five that’s here to help you fix things for yourself, so that when we leave in ten years, you own your community. You will have helped repair it.
That sounds wonderful, in the sense that, if you ever can effect change, it should be kind of a positive reinforcement. Hopefully, it stays and builds on itself.
Yeah. It’s like, if you need us to be there, yes, we believe we’re helping in making a difference. But at some point, it’s their community, they have to own it. Otherwise, it’s not real, because it depends on us and when we leave it, it’s gone.
They’re building it for themselves, we’re just kind of poking them, counseling them, and introducing them to programs. And, “Hey, did you know this is what they’re doing about adverse childhood experiences in this or that study? This is how you can design a program like that for yourselves, or hire the right training company, and build capacity in your own community.”
A lot of this is training people in the community to deliver various kinds of coaching and care, and stuff like that.
Your background is squarely in technology. Let’s switch gears and chat about that for a moment. Let’s start with the topic of the show, which is artificial intelligence. What are your thoughts about it? Where do you think we’re at? Where do you think we’re going? What do you think it’s all about?
Yeah. Well, so, I first wrote about artificial intelligence inside a newsletter back in the days of Marvin Minsky and expert systems. Expert systems were basically logic. If this, and that, and the other thing, then… If someone shows up, and their blood pressure’s higher than x, and so forth. They didn’t sell very well.
Then they started calling them assistants instead of experts. In other words, we’re not going to replace you with an expert, we’re just going to assist you in doing your job. Pretty soon, they didn’t seem to be AI anymore because they really weren’t. They were simply logic.
The definition of artificial intelligence, to me, is somewhat similar to magic. The moment you really, really understand how it works, it no longer seems artificially intelligent. It just seems like a tool that you design and it does stuff. Now, of course, we’re moving towards neural nets, the so-called black boxes, things that, in theory, can explain what they do; but now they start to program themselves, based on large datasets.
What exactly they do is beyond the comprehension of a lot of people, and that’s some of the sort of social/ethical discussion that’s happening. Or, you ask a bot to mimic a human being, and you discover most human beings make pretty poor decisions a lot of the time, or reflect the biases of their culture.
AI was really hard to do at scale, back when we had very underpowered computers, compared with what we have today. Now, it’s both omnipresent and still pretty pathetic, in terms of… AI is generally still pretty brittle.
There’s not even a consensus definition on what intelligence is, let alone what an AI is, but whatever it means… Would you say we have it, to at least some degree, today?
Oh, yeah. Again, the definition is becoming… Yes, the threshold of what we call AI is rising from what we called AI twenty years ago.
Where do you think it will go? Do you think that we’re building something that as it gradually gets better, in this kind of incrementalism, it’s eventually going to emerge as a general intelligence? Or do you think the quest to build something as smart and versatile as a human will require dramatically different technology than we have now?
Well, there’s a couple of different things around that. First of all, if something is not general, is it intelligent or is it simply good at doing its specific task? Like, I can do amazing machine translation now—with large enough corpuses—that simply has a whole lot of pattern recognition and translates from one language into another, but it doesn’t really understand anything.
At some point, if something is a super-intelligence, then I think it’s no longer artificial. It may not be wet. It may be totally electronic. If it’s really intelligent, it’s not artificial anymore, it’s intelligent. It may not be human, or conceived, or wet… But that’s my definition, someone else might just simply define it differently.
No, that’s quite legitimate actually. It’s unclear what the word artificial is doing in the phrase. One view is that it’s artificial in the sense that artificial turf is artificial. It may look like turf, but it’s not really turf. That sounds kind of like how you—not to put words in your mouth—but that sounds kind of like how you view it.
It can look like intelligence for a long time to come, but it isn’t really. It isn’t intelligent until it understands something. If that’s the case, we don’t know how to build a machine that understands anything. Would you agree?
Yes. There are all these jokes, like… The moment it becomes truly intelligent, it’s going to start asking you for a salary. There are all these different jokes about AI. But yeah, until it ‘has a mind of its own’, what is intelligence? Is it because of the soul? Is it purpose? Can you be truly intelligent without having a purpose? Because, if you’re truly intelligent, but you have no purpose, you will do nothing, because you need a purpose to do something.
Right. In the past, we’ve always built our machines with implicit purposes, but they’ve never, kind of, gotten a purpose on their own.
Precisely. It’s sort of like dopamine for machines. What is it that makes a machine do something? Then, you have the runaway machines that do something because they want more electricity to grow, but they’ve been programmed to grow. So then, that’s not their own purpose.
Right. Are you familiar with Searle’s Chinese Room Analogy?
You mean the guy sitting in the back room who does all the work?
Exactly. The point of his illustration is, does this man who’s essentially just looking stuff up in books… He doesn’t speak Chinese, but he does a great job answering Chinese questions, because he can just look stuff up in these special books.
But he has no idea what he’s doing.
Right. He doesn’t know if it’s about cholera or coffee beans, or cough drops, or anything. The punchline is, does the man understand Chinese? The interesting thing is, you’re one of few people I’ve spoken to who unequivocally says, “No, if there’s nobody at home, it’s not intelligent.” Because, obviously, Turing would say, “That thing’s thinking; it understands.”
Well, no, I don’t think Turing would’ve said that. The Turing Test is a very good test for its time, but, I mean… George [Dyson, the futurist and technology historian who happens to be her brother] would know this much better. But the ability to pass the test… Again, what AI was at that point is very different from what it is now.
Right. Turing asked the question, can a machine think? The real question he was asking, in his own words, was something to the effect of: Could it do something radically different than us, that doesn’t look like thinking… But don’t we kind of have to grant that it is thinking? 
That’s when he said… This idea that you could have a conversation with something and therefore, it’s doing it completely differently. It’s kind of cheating. It’s not really, obviously, but it’s kind of shortcutting its way to knowing Chinese, but it doesn’t really [know Chinese]. By that analogy and by that logic, you probably think it’s unlikely we’ll develop conscious machines. Is that right?
Well, no. I think we might, but then it’s going to be something quite… I mean, this is the really interesting question. In the end, we evolved from just bits of carbon-based stuff, and maybe there’s another form of intelligence that could evolve from electronic stuff. Yeah, I mean, we’re a miracle and maybe there’s another kind of miracle waiting to happen. But, what we’ve got in our machines now is definitely not that.
It is fascinating. Matt Ridley, who wrote The Rational Optimist, said in his book that the most important thing to know about life is [that] all life is one, that life happened on this planet and survived one time… And every living thing shares a huge amount of the same DNA.
Yeah. I think it might’ve evolved multiple times, or little bits went through the same process, but I don’t think we all came from the same cell. I think it’s much more likely there was a lot of soup and there were a whole bunch of random bits that kind of coalesced. There might’ve been bunches of them that coalesced separately, but similarly.
I see. Back in their own day, merged into something that we are all related to?
Yeah. Again, all carbon-based. There are some interesting things at the bottom of the ocean that are quite different.
Right. In fact, that suggests you’re more likely to find life in the clouds on Venus—as inhospitable as it is, at least stuff’s happening there—than you might find on a barren, more hospitable planet.
Yeah.
When you talk to people who believe in an AGI, who believe we’re going to develop an AGI, and then you ask them, “When?” you get this interesting range between five and five hundred years, depending on who you ask. And these are all people who have some amount of training and familiarity with the issues. What does that suggest to you, that you get that kind of a disparity from people? What would you glean from that?
That we really don’t know.
I think that’s really interesting, because so many people are on that spectrum. Nobody says oh, somewhere between five and five hundred years. No person says that. The five-year people—
—They’re all so different. Yeah.
But all very confident, all very confident. You know, “We’ll have something by 2050.” A lot of it I think boils down to whether you think we’re a couple of hops, skips, and a jump away from something that can take off on its own… Or, it’s going to be a long, long, long time.
Yeah. It’s also, how you define it. Again, to me, in a sense, I’ve been thinking about this and reading Yuval Noah Harari’s Homo Deus and various other people… But to me, in the end, there’s something about purpose, which means, again, it really is… It’s the anti-entropy thing.
What is it that makes you grow, makes you reproduce? We know how that works physically, but, then when you talk about a soul or a consciousness, there’s some animating thing or some animating force, and it’s this purpose in life. It’s reproduction to create more life. That’s sort of an accident, of something that had to have purpose to reproduce, and the other stuff didn’t.
Again, there are more biological descriptions of that. Where that fits in something that’s not wet, how purpose gets implemented—we haven’t yet found. It’s like we’ve found substances that correlate with purpose, but there’s some anti-entropy that moves us, without which we wouldn’t do anything.
If you’re right, that without purpose, without understanding—as fantastic as it is with our very stone-knives-and-bearskins kind of AI we have today—I would guess… And not to put words in your mouth, but, I would guess you are less worried about the AI’s taking all the jobs than somebody else might be. What is your view on that?
Yeah. Well, in [terms of] the AIs taking all the jobs… That is something that we can control, not easily. It’s just like saying we can control the government or we can control health. Human beings collectively can—and I believe should—start making decisions about what we do about people and jobs.
I don’t think we want a universal basic income, as much as we want almost universal basic vouchers to… Again, I think people need purpose in their lives. They need to feel useful. Some people can create art and feel useful, and sell it, or just feel good when other people look at their art. But I think a more simple, more practical way to do this is, we need to raise the salaries of people who do childcare, coaching, you know.
We need to give people jobs, for which they are paid, that are useful jobs. And I think some of the most useful things people can do, generally—some people can become coders and design things and program artificial intelligence tools, and so forth, and build things. But a lot of people, I think, can be very effectively employed. This goes back to the Way to Wellville in caring for children, in coaching mothers through pregnancy, in running baseball teams in high schools.
We can sit here and talk about artificial intelligence, but this is a world in which people are afraid to let their kids out to play and everywhere you go, bridges are falling down. I live in New York City, and we’re going to have to close some of our train tunnels, because we haven’t done enough repair work. There actually is an awful lot of work out there.
We need to design our society more rationally. Not by giving everybody a basic income, but by figuring out how to construct a world in which almost everybody is employed doing something useful, and they’re being paid to do that, and it’s not like a giant relief act.
This is a society with a lot of surplus. We can somehow construct it so that people get paid enough that they can live comfortable lives. Not easy lives, but comfortable lives, where you do some amount of work and you get paid.
At the margins, yes, take care of people who’ve fallen off; but let’s do a better job raising our children and creating more people who do, in fact… You know, their childhoods don’t destroy their sense of worth and dignity, and they want to do something useful. And feel that they matter, and they get paid to do that useful thing.
Then, we can use all the AI that makes society, as a whole, very rich. Consumption doesn’t give people purpose. Production does, whether it’s production of services or production of things.
I think you’re entirely right, you could just… on the back of an envelope say, “Yeah, we could use another half-million kindergarten teachers and another quarter-million…”—you can come up with a list of things, from a societal standpoint, [that] would be good and that maybe market forces aren’t creating. It isn’t just make-work, it’s all actually really important stuff. Do you have any thoughts on how that would work practically?
Yeah.
You implied it’s not the WPA again, or is it…?
No. Go to the people who talk about the universal basic income and say, look, why don’t you make this slightly different. Let’s talk about, you get double dollars for buying vegetables with your food stamps. How do we do something that gives everybody an account, that they can apply to pay for service work?
So, every time I use the services of a hairdresser, or a babysitter, or a basketball coach, or a gym teacher, there’s this category of services. This is not simple, there’s a certain amount of complexity here, because you don’t want to be able to—to be gross, you know—hire the teenage girl next door to provide sexual services. I think it needs to be companies, rather than government.
Whether it’s Uber vetting drivers—and that’s a whole other story—but you want an intermediary that does quality control. Both in terms of how the customers behave, and how the providers behave, and manage the training of the providers, and so forth.
Then, there’s a collective subsidy to the wages that are paid to the people who provide the services that foster… Long ago, women didn’t have many occupations open to them, so second-grade teachers tended to be a lot of very smart women, who were dedicated, and didn’t get paid much.
But that was okay, and now that’s changing. Now, we need to pay them more, which is great. There’s a collective benefit to having people teaching second grade that benefits society and should be paid for collectively.
In a way, you could throw away the entire tax code we have and say for every item, whether it’s a wage or buying something, we’re going to either calculate the cost to society or the benefit to society. Those will either be subsidies or taxes on top of that, so that the bag of potato chips—
—The economic term is—
—Internalizing the externalities?
Yes, exactly.
Yeah, exactly. It’s actually the only thing I can think of that doesn’t actually cause perverse incentives, because in theory, all the externalities have been internalized and reflected in the price.
Yes. So, you’re not interfering with the market, you’re just letting the market reflect both the individual and collective costs and stuff like that. It doesn’t need to be perfect. We’re imperfect, life is imperfect, we all die, but let’s sort of improve things in the brief period that we’re alive.
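[As a toy illustration of that pricing idea, internalizing the externality directly in the price, with all figures invented purely for this sketch:]

```python
# A price is the market price plus an estimated per-unit cost to society
# (a tax), or minus an estimated per-unit benefit to society (a subsidy).

def price_with_externality(market_price, externality_per_unit):
    # Positive externality_per_unit = social cost, so the shelf price goes up;
    # negative = social benefit, so the shelf price goes down.
    return market_price + externality_per_unit

print(price_with_externality(2.00, 0.50))   # e.g., the bag of potato chips: 2.5
print(price_with_externality(3.00, -0.75))  # e.g., fresh vegetables: 2.25
```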
I can’t quite gauge whether you’re ‘in theory’ optimistic, or practically optimistic. Like, do you think we’re going to accomplish these things? Do you think we’re going to do some flavor of them? Or, do you just realize they’re possibilities and we may or may not?
I’m trying to make this happen. The way I would do that is not, “Gee, I’m going to do this myself.” But I’m going to contribute to a bunch of people, both doing it and feeling… A lot more people would be doing this, if they thought it was possible, so let’s get together and become visible to one another.
Just as in what I saw happen in Eastern Europe, where individually people felt powerless, but then, they—and this really was where the Internet did help. People began to say, “Oh, you know, I’m not the only one who is beginning to question our abusive government.” People got together, and felt empowered, and started to change the story, both by telling their own stories and by creating alternative narratives to the one that the government fed them.
In our case, we’re being fed, I don’t know, we’re being fed short-term. Everything in our society is short-term. I’m on the board of The Long Now, just for what it’s worth. Wall Street is short-term. Government politicians are mostly concerned with being reelected. People are consuming information in little chunks and not understanding the long-term narratives or the structure of how things work.
It’s great if you hear someone talk about externalities. If you walk down the street and ask people what an externality is, they’ll say, “Is that, like, a science fiction thing or what?” No, it’s a real concept and one that should be paid attention to. There are people who know this, and they need to bring it together, and change how people think about themselves.
The very question you asked: “Do you think you can do this practically?” No, I can’t alone, but together, yeah, we can change how people think about things, and get them to think more about long-term investments. Not this day-by-day, what’s my ROI tomorrow, or what’s next quarter’s? But if we do this now, what will be different twenty years from now?
It’s never been easier, so I hear, to make a billion dollars. Google and Facebook each minted something like six billionaires apiece. The number of billionaires continues to grow. The number who made their own money, the percent that made their own money, continues to grow, as opposed to inheriting it.
Right.
But, am I right that all of that money that’s being created at the top, that isn’t… I mean, mathematically, it contributes to income inequality because it’s moving so much to the top end… But do you think that that’s part of the problem? Do all of those billions get made at the expense of someone else, or do those billions get made just independent of their effect on other people?
There’s no simple answer to that one. It varies. I was very pleased to see the Chan Zuckerberg Foundation. And the people that bother me more, honestly, are… There’s a point at which you stop adding value, and I would say a lot of Wall Street is no longer adding value. Google, it depends what they do with their billions.
I’m less concerned about the money Google makes. It depends what the people who own the shares in Google do with the money they’ve made. Part of the problem is more that the trolls on the Internet are encouraging some of this short-sighted thinking, instant gratification. I’d rather look at cat photos than talk to my two-year-old, or what have you.
For me, the issue’s not to demonize people but to encourage the ones who have assets and capacity to use them more wisely. Sometimes, they’ll do that when they’re young. Sometimes, they will earn all the money and then start to change later, and so forth.
The problem isn’t that Google has a lot of money and the people in Muskegon don’t. The problem is that the people in Muskegon, or so many other places… They have crappy jobs, the people who are parents now might have had parents who weren’t very good. Things are going downhill rather than uphill. Their kids are no longer more educated than they are. They no longer have better jobs. The food is getting worse, etc.
It’s not simply an issue of more money. It’s how the money is spent, and what the money is spent on. Is it spent accountably for the right things? It’s not just giving people money. It’s having an education system that educates people. It’s having a food system that nourishes them. It’s stuff like that.
We now know how to do those things. We also are much better, because of AI, at predicting what will happen if we don’t. I think the market, and incentives, and individual action are tremendously important; but you can influence them. Which is what I’m trying to do, by showing how much better things could work.
Well, no matter what, the world that you would envision as being a better world certainly requires lots and lots and lots of people power, right? Like, you need more teachers, you need more nutritionists, you need all of these other things. It sounds like you don’t—
Right. And you need people voting to fix the bridges, instead of voting for whichever politician makes promises that are unbelievable, or whatever. In a sense, we need to be much more thoughtful about what it is we’re doing and to think more about the long-term consequences.
Do you think there ever was a time that, like, do you have any society that you look at or even, any time in any society when you say… “Well, they weren’t perfect, but here was a society that thought ahead, and planned ahead, and organized things in a pretty smart way”? Do you have any examples?
Yes and no. There was never a perfect place. A lot of things were worse a hundred years ago, including how women were treated and how minorities were treated, and a lot of people were poor. But there was a lot less entitlement, and a lot less consumption aimed at instant gratification. People invested.
In many ways, things were much worse, but people took it for granted that they needed to work hard and save. Again, many of them had a sense of purpose. You go back to the 1840s, and the amount of liquor consumed was crazy. There’s no perfect society. The norms were better.
Perhaps there was more hypocrisy. Hey, there was a lot of crime a hundred years ago and, sort of, the notion of polite society was perhaps not all of society. People didn’t aspire to be celebrities. They aspired to be respected, and loved, and productive, and so forth. It just goes back to that word: purpose.
Being a celebrity does not mean having an impact. It means being well-known. There’s something lacking in being a celebrity, versus being of value to society. I think there’s less aspiration towards value and more towards something flashier and emptier. That’s what I’d love to change, without being puritan and boring about it.
Right. It seems you keep coming back to the purpose idea, even when you’re not using that word. You talked about [how] Wall Street used to add value, and [now] they don’t. That’s another way of saying they’ve lost their purpose. We talked about the billionaires… It sounds like you’re fine with them; it depends on what their purpose is in it all. How do you think people find their purpose?
It goes back to their parents. There’s this satisfaction that really can’t be beaten. When I spent time in Russia, the women were much better off than the men, because the men felt—many of them—purposeless. They did useless jobs and got paid money that was not worth much, and then their wives took the rubles and stood in line to get food and raise the children.
Having children gives you purpose, ideally. Then, you get to the point where your children become just one more trophy, and that’s unutterably sad. There are people who love their children and also focus too much on, “Is this child popular?” or “Will he get into the right college and reflect well on me?” But, in the end, children are what give purpose to most people.
Let’s talk about space for a minute. It seems that a lot of Silicon Valley folks, noteworthy ones, have a complete fascination with it. You’ve got Jeff Bezos hauling Apollo 11 boosters out of the ocean. Elon is planning to, according to him, “die on Mars, just not on impact.” You, obviously, have a—
—I want to retire on Mars. That’s my line. And, not too soon.
There’s a large part of this country, for instance, that doesn’t really care about space at all. To them it seems like a whole lot of wasted money, and emptiness, and all of that. Why do you think it’s so intriguing? What about it is interesting for you? For goodness’ sake, I can’t put “trained to be a backup cosmonaut” in your introduction and then never mention it again; that’s like the worst thing a host can do. So please talk about that, if you don’t mind.
It’s our destiny, we should spread. It’s our backup plan if we really screw up the earth and obliterate ourselves, whether it’s with a polluted atmosphere, or an explosion, or some kind of biological disaster. We need another place to go.
Mars… Number one, it’s a good backup. Number two, maybe we can learn something. There’s this wonderful new thing called the circular economy. The reality is, yes, we’re in a circular economy, but it’s so large we don’t recognize it. On Mars, because you start out so small, it’s much clearer that there’s a circular economy.
I’m hoping that the National Geographic series is actually going to change some people’s opinions. Yeah, in some sense, our purpose is to explore, to learn, to discover what else might lie beyond our own little planet. Again, it’s always good to have Option B.
Final question: We already talked about what you’re working on, but… What gives you… Because our chat had lots of ups and downs, possibilities, and then worries. What is—if there is anything—what gives you hope? What gives you hope that there’s a good chance that we’ll muddle through this?
I’m an optimist. I have hope, because I’m a human being and it’s been bred into me over all those generations. The ones who weren’t hopeful didn’t bother to try, and they mostly disappeared. But now you can survive, even if you’re not hopeful; so maybe that’s why all this pessimism, and lassitude and stuff is spreading. Maybe, we should all go to Mars, where it’s much tougher, and you do need to be hopeful to survive.
Yeah, and have purpose. In closing, anybody who wants to keep up with what you’re doing with your non-profit…
WaytoWellville.net.
And if people want to keep up with you, personally, how do they do that?
Probably on Twitter, @edyson.
Excellent. Well, I want to thank you so much for finding the time.
Thank you. It was really fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Senate backs down on ‘Facebook Bureau of Investigations’ mandate

Facebook, Twitter, and other social networking companies no longer have to worry about a mandate that would have required them to share with the United States government information about users discussing terrorism-related topics.
Not only is this great news for young students wishing to share info on self-made clock projects, but also for the large portion of citizens who don’t want the feds sifting through private social data without a warrant.
In an effort to pass a funding bill for federal intelligence agencies, the Senate has recently abandoned a provision that would force social networks to share data on users believed to be involved with terrorism activities. The bill itself was initially blocked from reaching the Senate floor by Sen. Ron Wyden, who described the mandate as a “vague [and] dangerous provision.” Wyden said in a statement Monday that he plans to release his hold on the bill, thus allowing it to move forward.
“Going after terrorist recruitment and activity online is a serious mission that demands a serious response from our law enforcement and intelligence agencies,” Wyden said. “Social media companies aren’t qualified to judge which posts amount to ‘terrorist activity,’ and they shouldn’t be forced against their will to create a Facebook Bureau of Investigations to police their users’ speech.”
But the spirit of the provision is unlikely to be gone for long.
A spokesperson for Sen. Dianne Feinstein told The Hill that the senator “regrets having to remove the provision” and “believes it’s important to block terrorists’ use of social media to recruit and incite violence and will continue to work on achieving that goal.” It’ll be back.
This is merely the latest in a string of examples of the government pressuring tech companies to provide it with more information, or to help it take down content related to extremist organizations like the so-called Islamic State. Other efforts relate to encryption, censorship, and access to private communications.

Report: China wants backdoors in imported tech, but only its own

Western companies are doing big business in China, but storm clouds lie on the horizon. According to a New York Times report, new banking security rules approved in the People’s Republic at the end of 2014 require those selling hardware and software to Chinese banks to install backdoors for the benefit of Chinese security services.

The rules also state that companies must “turn over secret source code [and] submit to invasive audits.” While seriously problematic for many firms, this element isn’t particularly surprising.

In the wake of Edward Snowden’s NSA revelations and the U.S.’s indictment of Chinese army officials for industrial espionage, China’s authorities have repeatedly implied that U.S. products are themselves a threat to national security, because they track users and/or may contain NSA backdoors. Reports in May 2014 suggested that China was considering banning banks from using IBM servers.

On the consumer side, Apple for one has already reportedly agreed to let China’s security services screen its products to ensure their safety. However, many firms may find this demand impossible to meet, due to intellectual property and security concerns.

Of course, the U.S. is also pushing companies dealing in communications devices and services to install backdoors for its own intelligence and law enforcement purposes. Both administrations, and that of the U.K., want firms such as Apple to hand over a key to users’ private communications, even though the companies have recently been moving to a more secure end-to-end encryption model in which they don’t hold any keys. This is effectively a backdoor demand, though authorities generally prefer to call it “lawful intercept.”
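
To make the “don’t hold any keys” point concrete, here is a minimal, hypothetical sketch of end-to-end encryption using the open-source PyNaCl library. None of this code comes from Apple or any company mentioned in this article, and the names and message are illustrative assumptions; the point is simply that the relaying service only ever sees ciphertext.

```python
# Illustrative sketch only: end-to-end encryption where the relay server
# never holds a decryption key. Requires PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are shared, e.g. through the service's key directory.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and his public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at noon")

# The service relays `ciphertext` but cannot read it: it has no private key.
# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Under a model like this there is no key on the provider’s servers to hand over, so a “lawful intercept” capability would require escrowing private keys or weakening the scheme, which is why critics describe such demands as backdoors.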

Draft Chinese anti-terrorism laws are pushing for the same thing. This is one of the many problems with official policies that undermine genuinely strong encryption. Particularly in a globalized trade context where your nation’s companies want to make money in foreign markets, it’s a bit hopeful to think backdoor privileges can be reserved only for your own security apparatus.

However, the Times piece talked about China’s new banking regulations forcing equipment makers to build in “ports” for official monitoring purposes. This is where things get really complicated: the rules may require companies to create special versions of their products for China, and U.S. tech firms and the Chamber of Commerce are reportedly anxious that the move may be protectionist in nature.