Voices in AI – Episode 24: A Conversation with Deep Varma

[voices_in_ai_byline]
In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
[podcast_player name="Episode 24: A Conversation with Deep Varma" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-12-04-(00-55-19)-deep-varma.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/12/voices-headshot-card_preview-1.jpeg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma. He is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor of Science in Computer Science, he has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define artificial intelligence, it is intelligence created by machines, based on human wisdom, to augment a human’s lifestyle and help them make smarter choices. So that’s how I define artificial intelligence, in very simple, layman’s terms.
But you just kind of used the words “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think the intelligence part, what we need to understand is, when you think about human beings, most of the time, they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes we don’t see, is, I still remember in the old days I used to have this conventional thermostat at my home, which I had to turn on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. Now as soon as I put the Nest there, it’s an intelligence. It is sensing whether someone is there in the home or not, so there’s motion sensing. Then it is seeing what kind of temperature I like during summer time, during winter time. And so, artificially, the software, which is the brain that we have put on this device, is doing this intelligence, and saying, “great, this is what I’m going to do.” So, in one way it augmented my lifestyle—rather than me making those decisions, it is helping me make the smart choices. So, that’s what I meant by this intelligence piece here.
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors on those devices—think about the central nervous system that human beings have. There is a small piece of software embedded within that device, which is making decisions for you; so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step? When you want to do computer vision or translation, do you start by saying, “Okay, how do I do it?” Or, do you start by saying, “Forget how a human does it, what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because with the speed at which human brains, or the central nervous system, process data, machines are still not at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is the Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring, and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings fall under more of the general intelligence, and machines are still falling under the artificial intelligence framework.
And the example you were giving, I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines, and you keep feeding them the big data with the hope that they will learn and they will say, “This is basically a picture of a mouse or this is a picture of a cat.”
This is where, I think, this artificial general intelligence which is shaping up—that we are going to abstract a level up, and start conditioning—but I feel we haven’t cracked the code for one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about that. That being said, everybody kind of likes to think about an AGI. 
AI is, what, six decades old and we’ve been making progress. Do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or, is an AGI something completely different? It’s not just a better narrow AI, it’s not just a bunch of narrow AIs bolted together, it’s a completely different thing. What do you say?
Yes, so what I will say is, it is like in the software development of computer systems—we call this an object, and then we do inheritance of a couple of objects, and the encapsulation of the objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building the computer vision for real estate. There are companies investing in building the computer vision for cars, and all those things. We are in this state where all these dysfunctional, disassociated investments in our system are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree, I believe AI is complementing us and AGI is replicating us. And this is where I tend to believe that the day the AGI comes—that means it’s a singularity, that they are reaching the wisdom or the processing power of human beings—that, to me, seems like doomsday, right? Because those machines are going to be smarter than us, and they will control us.
And the reason I believe that, and there is a scientific reason for my belief; it’s because we know that in the central nervous system the core tool is the neurons, and we know neurons carry two signals—chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate these sensory signals—you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen, I’m close to confident. Thinking machines are going to come—IBM Watson, as an example—so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side—basically, I call it a chemical soup—is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves—we recently saw what happened with Facebook, where Facebook machines were talking to each other and they started inventing their own language over a period of time—but I believe the chemical mix of humans is what is next to impossible to reproduce.
I mean—and I don’t want to take a stand, because we have seen, over the decades, that what people used to believe in the seventies has been proven to be right—I think the day we are able to find the chemical soup, it means we have found the Nirvana; and we have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to be another Deep Varma?” and if yes, then where is this emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings onto the next level?
Well, I mean, there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straightforward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions, and complementing you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments going on.
I read a paper of yours where you made a passing reference to Westworld.
Right.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers—take just the first episode: do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting.
I saw that Elon Musk spoke at the governor’s convention and said something, and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk.
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that millions of years of how human beings evolved can immediately be taken by AGI, and the other camp is more concerned with controlling that, asking are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there are generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work, and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or a fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more, and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes to my mind is, last year I did a remodel of my house. And when I did the remodeling there were electrical wires, there were these water pipelines going inside my house, and we had to replace them with copper pipelines, and I was thinking, can machines replace those jobs? I keep coming back to the answer that those skill-level jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help to cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us to make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and to improve our productivity to make the smarter choices.
So, you would say that, in most jobs, there are elements that automation cannot replace, but that it can augment, like a plumber, and so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about the example of a business consultant, right? Companies hire business consultants to come in, collect all the data, then prepare PowerPoints on what you should do, and what you should not do. I think those are the areas where artificial intelligence is going to come in, and if you have tons of data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale them to the next level. So, in the example I’ve just given, the business consultants, if they are doing an audit of a company’s financial books, should look into tools that help, so that an audit which used to take thirty days now takes ten days. Improve how fast and how accurately you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering in, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it: you just start using it to increase your own productivity.
Yeah.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and to know about customs in other countries and all of that; so being a translator, you actually get more business out of this, not less. Do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like do you get the loan, that you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) do you think it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers want to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will tell me, “Deep, your house is worth $1.5 million.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sold data.
And that, at the end of the day, gives confidence back to the consumer, and it also shows that this is not because this appraiser who came to my home didn’t like me for XYZ reason and ended up giving me something wrong; so, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and the machines take the bias out, and bring more unbiased decision making.
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take every bit of information about who paid their loan off, and you just pour it all into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t that the case that it’s weighing hundreds of factors with various weights, and how do you tease out, “Oh, it was this”? Life isn’t quite that simple, is it?
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example of how this can go wrong. When we were training our computer vision system, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can look at any image and know this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window in the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into play, where small data can make all these predictions go completely wrong.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, I am being positive. Realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is most important is that we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something that they don’t have to worry about, and over a period of time artificial intelligence is going to start becoming more and more a part of business, and those who are not able to catch up with this are going to see the unemployment rate increase. They’re going to see company losses increase because they’re not making some of the decisions in the right way.
You’re going to see companies, like Lehman Brothers, who were making all these data decisions for their clients by not using machines but relying on humans, and these big companies fail because of that. So, I think that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get into the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data is physically stored in systems and it’s getting hacked. And now we’ll have all this data being transmitted wirelessly, machines talking to devices, IoT devices talking to each other—how are we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, and trying to make assumptions, and entering my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble-gum you had better start using AI”?
I will say every industry. I think every industry needs to worry about it. Some industries may adopt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that those businesses will be blindsided—be it small businesses or mom-and-pop shops or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming in and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that, to provide the remedy, we are building artificial intelligence.
But this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like phishing, which is a big problem. There are so many companies trying to solve the phishing problem with emails, but we have seen that the technologies are not able to solve it. So, I think AI is a remedy, but if we stay focused just on the big data, that is, I think, completely wrong, because my fear is that a small data set can completely destroy the predictions built by a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision just collapsed with one photo of a dog. And this is where I feel that if we are emphasizing so much on using the big data set, big data set, big data set, are there smaller data sets which we also need to worry about, to make sure that we are bridging the gap enough that our security is not compromised?
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day when we have built massive infrastructures that are reliant on computers—our generation of power and the supply of power and telecommunications—and suddenly there is a whole outage which can take the world to a standstill, because there is a small hole which we never thought about? That, to me, is the bigger threat than the standalone individual things which are happening now.
That’s a hard problem to solve. If there’s a small hole on the internet that we’ve not thought about that can bring the whole thing down, that would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring artificial general intelligence back into the picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driving cars which are shaping up around us, and people were asking me when we will see those self-driving cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day,” and accidents are happening with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time and not able to get one signal, it can kill human beings much faster compared to what human drivers are doing, so that’s the rationale which I’m trying to put here.
So, one of the questions that I was going to ask you is, do you think AI is a mania? It’s everywhere, but it seems like you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less. Is that true?
That’s true.
There was a man in the ’60s named Weizenbaum who made a program called ELIZA, which was a simple program where you would ask a question, say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and then it would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their hearts to it, even though they knew it was a program. It really disturbed him that people developed an emotional attachment to ELIZA, and he said that when a computer says, “I understand,” it’s a lie, that there’s no “I,” there’s nothing that understands anything.
Do you worry that if we build machines that can imitate human emotions, maybe care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, it’s a great question, and I think you also picked out a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in the kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what’s the temperature look like today?” Alexa says, “Temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying, “Hey, don’t be rude, just say, ‘Alexa, stop.’” You see that connection? The connection is you’ve already started treating this machine with respect, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends have been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, but not meeting individuals, just seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silo. There are too many silos happening. Like in my home, I have Alexa, I have a Nest, those plug-ins. I love, you know, where Alexa is talking to Nest, “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is that those devices are communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open, close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel that where we are heading is more of the connectivity with those devices, which will help us, again, complement and make the smart choices, and our reliance on those assistants is going to increase.
Another example here: I get up in the morning and the first thing I do is come to the kitchen and say, “Alexa, put on the music,” and, “Alexa, what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” then Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine, my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to office because it’s a working day, Deep just came to kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in next few years. All these movies that we used to watch where people were sitting there, and watching everything happen in the real time, that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate about this; it brings me home. So, I think in artificial intelligence, as you said, there are two aspects to it, one is for the consumer and one is internal, and I think for us AI helps us to better understand what our consumers are looking for in a home. How can we help move them faster in their search—that’s the consumer-facing tagline. And an example is, “Byron is looking at two-bedroom, two-bath houses in a quiet neighborhood, in a good school district,” and basically, using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer facing.
Now when it comes to the internal facing, the internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal-facing way we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I call these data-driven companies—there are two aspects of being data-driven: one is data-driven decision making, which is more for the analysts, and that’s the internal one, to your point; and the external one is the consumer-facing, data-driven product company, which focuses on how we understand the unique criteria and unique intent of you as a buyer—and that’s how we use artificial intelligence across the spectrum of Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative? Like, do you swing for the fences and miss a lot? Or, do you look for easy incremental wins? Or, are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better and in a much more accurate way; that’s how science helps you. And then, basically, there’s trial and error, or what we call an “A/B testing” framework, which helps you to validate whether what science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is, we have invested in our computer vision, which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we have even trained them so they can say, “This is a kitchen with a wide granite counter-top.” Now we have built this massive database. When a consumer comes to the Trulia site, what they do is share their intent, they say, “I want two bedrooms in Noe Valley,” and the first thing that they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, and there were times when those images did not match up with the intent of the consumer. So, what we did with our computer vision, we invested in something called “the most attractive image,” which basically takes three attributes—it looks into the quality of an image, it looks into the appropriateness of an image, and it looks into the relevancy of an image—and based on these three things we use our convolutional neural network models to rank the images and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing, we show the most attractive photo first. And that way, the consumer gets more engaged with that listing. And what we have seen—using the science, which is machine learning, deep learning, CNN models, and doing the A/B testing—is that this project increased our enquiries for the listing by double digits, so that’s one of the examples which I just wanted to share with you.
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here, and I will use just a minute to tell you, is that we have built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria. We have built machine learning systems like computer vision, recommender systems, and user engagement prediction models, and I think our next challenge will be to keep optimizing on the consumer intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking into?” So, if Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean he wants to buy a home there? Or, if Byron lives here in San Francisco, how do we understand the difference?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge because we have already built this, but it is the optimization—and make sure that our consumers get what they’re searching for, and keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example is, Deep is basically looking into Noe Valley or San Francisco, and email and push notifications are the two channels, for us, where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, I think this is where we are going to keep optimizing on our consumers’ intent, and we’ll keep giving them the right content.
Alright, well that is fantastic. You write on these topics, so if people want to keep up with you, Deep, how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well, I was just referring to your blog; I was reading some of your posts.
Yeah, so we have our tech blog, http://www.trulia.com/tech, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me, to be very candid—my data scientist team, and all those things. So, we write our blogs there, and I definitely ask people to follow us on those blogs. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels which people can follow. We also host data science meetups here at Trulia in San Francisco, on the seventh floor of our building, so that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 15: A Conversation with Daniel H. Wilson

[voices_in_ai_byline]
In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
[podcast_player name="Episode 15: A Conversation with Daniel H. Wilson" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-30-(00-57-18)-daniel-h-wilson.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-1-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse, and its sequel, Robogenesis, as well as other books, including How to Survive A Robot Uprising, A Boy And His Bot, and Amped. He earned a PhD in Robotics from Carnegie-Mellon University, and a master’s degree in AI and robotics, as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot of stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written somewhere. Someone reported that they had visited, and that was true. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bobo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? Think about Galatea, and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel—and it’s really fascinating to me—is our concept of the scientific method, and the idea of the world as a machine, and that we can pick up the pieces and build new things. And we can figure out underlying physical principles, and things like that. That’s a relatively new viewpoint, which human beings haven’t really had for that long.
Looking at automatons, I saw that there’s this sort of pattern, in that the longer we build these things, they really are living embodiments of the world as the machine, right? If you start to look at the automatons being built during the Middle Ages, the medieval times, and then up through to the beginning of the Industrial Revolution, you see that people like Descartes, and philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works, and the way that science works—they were inspired by automatons, because they showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.'” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about, specifically, medieval times is that the notions of how this stuff works were very set down, but they were also very magical. There were different types of magic; that’s what I really loved in my research, finding that whenever you see something like an aqueduct functioning, they would think of that as a natural kind of magic, whereas if you had some geometry, or pure math, they would think of that as a celestial type of magic.
But underneath all of it there were always angels or demons, and always there were the suspicions of a Necromantic art, that this lifelike thing is animated by a spirit of the dead. There’s so much magic and mystery that was laced into science at the time, that I think it really hindered the ability to develop iterative scientific advancements, at the time.
So picking up on that a bit, late eighteenth century, you’ve got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or like lifelike automaton? Yeah, I really can’t. I can think of lots of examples of stuff like Golem, or something like that, and they are all kind of created by magic, or by deities. I’m trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, they were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word “robot.”
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the ’20s in which, you know, a scientist creates a robot, and a race of robots. And of course, what do they do, they rise up and overthrow humanity and they kill every single one of us. It’s attributed as being the place where the term “robot” was coined, and yeah, it plays out in the way that a lot of the stories about robots have played out, ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot a fully-formed image in the way we think of them now? What would have been different about that view of robots?
Well, no. Those robots, they just looked like people, but I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal with lights and things on them—that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
The reason why I ask you if you thought they were fully modern: let me just read you this quote from the play, and tell me what it sounds like to you. This is Harry Domin, he’s one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play—we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything.
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now—”The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately, honestly. I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We’re getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter are choosing what they think would be most interesting to us, and I think this is very similar to what they’re describing in R.U.R.: this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the “candy age” of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don’t know how to get around in the cities that we live, because we depend on a mapping algorithm. We don’t know the viewpoints that our neighbors have, because we’ve never actually read an article that doesn’t tell us exactly what our worldview already is, there are a million examples. Talking to Alexa, I don’t have to say “please” or “thank you.” I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E leaning back in floating chairs, doing nothing and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, the quote that I read though, he said that “everybody will spend their time perfecting themselves.” And I assume you’ve seen Star Trek before?
Sure, yes.
There’s an episode where the Enterprise thaws some people out from the twentieth century, and one of the guys—his name is Offenhouse—he’s talking about what’s the challenge in a world where there are no material needs or hunger, and all of that? And Picard said, the challenge is to become a better person, and make the most of it. So that’s also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built that—oh God, and mine’s talking to me right now because I keep saying her name—is this AI that sits in your house and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s not going to be mutually exclusive—where the machines take over everything and you are free to be by yourself—because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves.
I think another example you can find easily is just looking at athletes. You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it for just a minute. Just to set this up for the listener, in the 1960s, there was a man named Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess we would think of it now—and you would say something like, “I’m having a bad day,” and it would say, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it would say, “What about your boyfriend is making you have a bad day?”
It’s really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their heart out to it, they would cry. And he turned on AI, as it were. He deleted ELIZA and said, when the computer says, “I understand,” it’s just a lie, because there’s no “I” and no understanding.
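For readers curious how a program like ELIZA works under the hood, here is a minimal pattern-matching sketch in Python; the rules, the respond function, and the example exchange are illustrative stand-ins, not Weizenbaum’s original code.

```python
import re

# A minimal ELIZA-style responder (illustrative only): reflect a statement
# back as a question using a couple of ordered pattern rules.
RULES = [
    (re.compile(r"i'm having a bad day because of (.*)", re.I),
     "What about {0} is making you have a bad day?"),
    (re.compile(r"i'm having a bad day", re.I),
     "Why are you having a bad day?"),
    (re.compile(r"i feel (.*)", re.I),
     "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    # Return the first matching rule's response, echoing captured text back.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I'm having a bad day because of my boyfriend"))
# -> "What about my boyfriend is making you have a bad day?"
```

The key point is how little is going on: there is no model of the speaker or the topic, just string substitution, which is why Weizenbaum found the emotional attachment people formed so troubling.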
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology. Every minute of our lives involves technology. Back then, you could say, “Oh, let’s erase the program, this is sick, this is messed up.” Well, guess what, man, that’s not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex, and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That’s kind of a new thing, right? We didn’t use to have to worry about consumer products—are people going to fall in love with a toaster, are people going to get upset when the toaster goes kaput, are people going to curse at the toaster and become worse versions of themselves? That wasn’t an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it’s a fascinating problem, and I think it’s something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice; one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—he looks this woman in the eyes, and he says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, “You’re the most beautiful woman in the world,” but his opinion doesn’t mean anything, right? But then you think about it for another second, and you realize: he means it. He means it with every fiber of his being, and there’s probably no human alive who could look at that exact woman, at that exact moment, say, “You’re the most beautiful woman alive,” and really mean it. So, there’s value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question. Do you remember that original Twilight Zone episode about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. It’s the part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you,” but I’m not interested in replacing human relationships that I have.
I don’t know how many friends you have, but I have a couple of really good friends. That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. Having written novels and comic books and screenplays and the occasional videogame, I can’t wait to interact with these types of agents in a storytelling setting, where the game, the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back. Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent—man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines? And by asking you the question, I am kind of implicitly asking you to define “life.”
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in—will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive—which would probably mean that they need rights, and things like that—then I think the proof is just in the comparison.
I’m making the assumption that every other human is conscious. I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don’t have a definition of “life” that everyone agrees on, period. So, throwing robots into that milieu, is just… I don’t know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious? And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of “alive.”
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So from a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave, what’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else, but there’s a baby on the left, and an elderly person on the right, what are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider, and we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My feeling is that I hope that by the time we get to a point where we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here. And hopefully, we will have already figured out exactly how we do want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what “aliveness” is.
The reason I ask the question… Up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, “That dog really looks like it’s hurting,” and we would be intensely curious to know whether it really is. And of course we call that sentience, the ability to sense something, generally pain, and we base a lot of our laws on it.
Human rights arrived, in part, because we are sentient. And animal cruelty laws arrived because animals are sentient. And yet, we don’t get in trouble for using antibiotics on bacteria, because they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, “Ouch, that hurt.” And either it didn’t, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically,” is not necessarily what I’m asking—I’m asking how we will know when that moment comes. It sounds like you are saying that if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there are going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to. I wouldn’t consider the CPU as feeling pain. My point is, every iteration that we have, until we reach that perfectly lifelike human robot that’s standing in front of us and saying, “You hurt my feelings, you should apologize,” is that the emotions that these things exhibit are only meaningful insomuch as they affect the human beings that are around them.
So I’m saying, to program a machine that says, “Ouch you hurt my feelings, apologize to me,” is very important, as long as it looks like a person. And there is some probability that by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then, towards the end of the call I figure out that this isn’t even a person; I will have felt tricked, and like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly, with the Tesla, we don’t care what decision it makes. We’re not worried about the impact on the Tesla. The Tesla would obviously kill a dog. If it was a human being who had the choice to kill a robot dog or a real dog, we would obviously choose the robot dog, because it would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum’s unease—not necessarily quite to the same extent—but some unease that if we start blurring the lines between what’s human and what’s not, that doesn’t necessarily ennoble the machine. It may actually be to our own detriment. We’ve had to go through thousands of years of civilization to get something we call human rights, and we have them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think it diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what the near-future holds. So, what does the near future hold?
Well, yeah, that’s kind of what I was ranting about before. Exactly what you were saying; I strongly agree with you that these interactions, and what happens between us and our machines, put a lot of power in the hands of the people who make this technology. Like this dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that’s really good for corporations. That’s not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But once we start understanding what good interactions look like… maybe as consumers, we can force these companies to make products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone’s always looking for something totally new, some sort of crazy app that’s going to come out of nowhere and blow our minds. It’s highly doubtful that anything like that is going to happen within the next five years, because science is incredibly iterative. Real breakthroughs are usually not some atomic thing created completely new that blows everybody away, but moments when you connect two things that already exist and suddenly realize, “Oh wow! Peanut butter and jelly! Here we go, it’s a whole new world!”
This Alexa thing, the smart assistants that are now physically manifesting themselves in our homes, in the places where we spend most of our time socially—in our kitchens, in my office, where I’m at right now—they have established a beachhead in our homes now.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads, slowly, and they add more ability to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to become a dialogue… I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked about what’s the best present to buy for my ten-year-old daughter with Alexa, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, and take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this called “Iterations.” I joined the XPRIZE Science Fiction Advisory Council, and they’re really focused on optimistic futures. They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, Utopias… Let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no…if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not first-hand knowledge, but any speculation—as to why that would be the case? I mean, I think Alexa has a hard “x,” and it’s a reference to the Library of Alexandria.
Yeah, that’s interesting. Well, also you literally want to choose a series of phonemes that are not high frequency, because you don’t want to constantly be waking the thing up. What’s also interesting about Alexa, is that it’s a “le” sound, which is difficult for children to make, so kids can’t actually use Alexa—I know this from extreme experience. Most of them can’t say “Alexa,” they say “Awexa” when they’re little, and so she doesn’t respond to little kids, which is crucial because little kids are the worst, and they’re always telling her to play these stupid songs that I don’t want to hear.
Can’t you change the trigger word, actually?
I think you can, but I think you’re pretty limited. I think you can change it to Echo.
Right.
I’m not sure why exactly Google would make that decision—I’m sure that it was a serious decision. It’s not the decision that every other company made—but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an “it.”
If I was defining “it” for a dictionary or something, I would obviously define the entity Alexa as an “it,” but she’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact, that’s what we have been evolved to do. So I am more than happy to bend to her wishes and utilize my interaction with her as naturally as I can, because she’s clearly trying to present herself as a female voice, living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here, you talked about optimism, and you came to this conclusion on different ways the future may unfold and that it may be hard to call the ball on whether that’s good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools—that’s the most fundamental thing about people—and that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the composition, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books really quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, so it’s out. It’s been out a couple weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. So the idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other part is set in the present. In the past, it imagines a race of humanlike machines built from automatons that are serving the great empires of antiquity, and they’re blending in with humanity, and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States; and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well I want to thank you so much for taking an hour, to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope that we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 2: A Conversation with Oren Etzioni

[voices_in_ai_byline]
In this episode Byron and Oren talk about AGI, Aristo, the future of work, conscious machines, and Alexa.
[podcast_player name=”Episode 2: A Conversation with Oren Etzioni” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(00-57-00)-oren-etzioni.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-1.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Oren Etzioni. He’s a professor of computer science who founded and ran University of Washington’s Turing Center. And since 2013, he’s been the CEO of the Allen Institute for Artificial Intelligence. The Institute investigates problems in data mining, natural language processing, and the semantic web. And if all of that weren’t enough to keep a person busy, he’s also a venture partner at the Madrona Venture Group. Business Insider called him, quote: “The most successful entrepreneur you’ve never heard of.”
Welcome to the show, Oren.
Oren Etzioni: Thank you, and thanks for the kind introduction. I think the key emphasis there would be, “you’ve never heard of.”
Well, I’ve heard of you, and I’ve followed your work and the Allen Institute’s as well. And let’s start there. You’re doing some fascinating things. So if you would just start off by telling us a bit about the Allen Institute, and then I would love to go through the four projects that you feature prominently on the website. And just talk about each one; they’re all really interesting.
Well, thanks. I’d love to. The Allen Institute for AI is really Paul Allen’s brainchild. He’s had a passion for AI for decades, and he’s founded a series of institutes—scientific institutes—in Seattle, which were modeled after the Allen Institute for Brain Science, which has been very successful running since 2003. We were founded—got started—in 2013. We were launched as a nonprofit on January 1, 2014, and it’s a great honor to serve as CEO. Our mission is AI for the common good, and as you mentioned, we have four projects that I’m really excited about.
Our first project is the Aristo project, and that’s about building a computer program that’s able to answer science questions of the sort that we would ask a fourth grader, and now we’re also working with eighth-grade science. And people sometimes ask me, “Well, gosh, why do you want to do that? Are you trying to put 10-year-olds out of work?” And the answer is, of course not.
We really want to use that test—science test questions—as a benchmark for how well are we doing in intelligence, right? We see tremendous success in computer programs like AlphaGo, beating the world champion in Go. And we say, “Well, how does that translate to language—and particularly to understanding language—and understanding diagrams, understanding science?”
And one way to answer that question is to, kind of, level the playing field with, “Let’s ask machines and people the same questions.” And so we started with these science tests, and we can see that, in fact, people do much better. It turns out, paradoxically, that things that are relatively easy for people are really quite hard for machines, and things that are hard for people—like playing Go at world championship level—those are actually relatively easy for the machine.
Hold on there a minute: I want to take a moment and really dissect this. Any time there’s a candidate chatbot that can make a go at the Turing test, I have a standard question that I start with, and none of them have ever answered it correctly.
It’s a question a four-year-old could answer, which is, “Which is bigger, a nickel or the sun?” So why is that a hard problem? Is what you’re doing, would it be able to answer that? And why would you start with a fourth grader instead of a four-year-old, like really go back to the most basic, basic questions? So the first part of that is: Is what you’re doing, would it be able to answer the question?
Certainly our goal is to give it the background knowledge and understanding ability to be able to answer those types of questions, which combine both basic knowledge, basic reasoning, and enough understanding of language to know that, when you say “a nickel,” you’re not referring to the metal, but you’re referring to a particular coin, with a particular size, and so on.
The reason that’s so hard for the machine is that it’s part of what’s called ‘common sense’ knowledge, right? Of course, the machine, if you programmed it, could answer that particular question—but that’s a stand-in for literally billions of other questions that you could ask about relative sizes, about animal behavior, about the properties of paper versus feathers versus furniture.
There’s really a seemingly infinite—or certainly a very, very large number—of basic questions that people, that certainly eight-year-olds can answer, or four-year-olds, but that machines struggle with. And they struggle with it because, what’s their basis for answering the questions? How would they acquire all that knowledge?
Now, to say, “Well, gosh, why don’t we build a four-year-old, or maybe even a one-year-old?” I’ve actually thought about that. So at the university, we investigated for a summer, trying to follow the developmental ladder, saying: “Let’s start with a six-month-old, and a one-year-old, etc., etc.”
And my interest, in particular, is in language. So I said, “Well, gosh, surely we can build something that can say ‘dada’ or ‘mama’, right?” And then work our way from there. What we found is that, even a very young child, their ability to process language and understand the world around them is so involved with their body—with their gaze, with their understanding of people’s facial expressions—that the net effect was that we could not build a one-year-old.
So, in a funny way, once you’re getting to the level of a fourth grader, who’s reading and answering multiple choice science questions, it gets easier and it gets more focused on language and semantics, and less on having a body, being able to crawl—which, of course, are challenging robotics problems.
So, we chose to start higher up in the ladder, and it was kind of a Goldilocks thing, right? It was more language-focused and, in a funny way, easier than doing a one-year-old, or a four-year-old. And—at the same time—not as hard as, say, college-level biology questions or AP questions, which involve very complicated language and reasoning.
So your thinking is that by focusing on school science examinations, in particular, you have a really, really narrow vocabulary to master and a really narrow set of objects whose properties you have to understand—is that the idea? Like, AI does well at games because they’re constrained worlds with fixed rules. Are you trying to build an analog to that?
It is an analog, right? In the sense that AI has done well with narrow tasks and, you know, limited domains. At the same time, “narrow” is probably not the right word, really. If you look at these questions—and this is something that we’ve learned—there is tremendous variety in them: not only variety in the ways of saying things, but also variety because these tests often require you to take something you may understand in the abstract—like gravity or photosynthesis—and then apply it to a particular situation.
“What happens if we take a plant and move it nearer to the window?” That combination of basic scientific knowledge with an application to a real-world situation means the questions are really quite varied. And it’s really a much harder AI problem to answer fourth-grade science questions than it is to solve Go.
I completely get that. I’m going to ask you a question, and it’s going to sound like I’m changing the topic, but it is germane. Do you believe that we’re on a path to building an AGI—a general intelligence? You’re going to learn things doing this, and is it, like, all we will need to do is scale them up more and more, faster, faster, better and better, and you’ll have an AGI? Is this on that trajectory, or is an AGI something completely unrelated to what you’re trying to do here?
That’s a very, very key question. And I would say that we are not on a path to building an AGI—in the sense that, if you build Aristo, and then you scale it to twelfth grade, and more complex vocabulary, and more complex reasoning, and, “Hey, if we just keep scaling this further, we’ll end up with artificial general intelligence, with an AGI.” I don’t think that’s the case.
I think there are many other problems that we have to solve, and this is a part of a very complex picture. And if it’s a path, it’s a very meandering one. But really, the point is that the word “research,” which is obviously what we’re doing here, has the word “search” in it. And that means that we’re iterating, we’re going here, we’re going there, we’re looking, you know.
“Oh, where did I put my keys?” Right? How many times do you retrace your steps and open that drawer, and say, “Oh, but I forgot to look under the socks,” or “I forgot to look under the bed”? It’s this very complex, uncertain process; it’s quite the opposite of, “Oh, I’m going down the path, the goal is clear, and I just have to go uphill for five miles, and I’ll get there.”
I’ve got a book on AI coming out towards the end of this year, and in it, I talk about the Turing test. And I talk about, like, the hardest question I can think of to ask a computer so that I could detect if it’s a computer or a person. And here’s a variant of what I came up with, which is:
“Doctor Smith is eating at his favorite restaurant, that he eats at frequently. He gets a call, an emergency call, and he runs out without paying his bill. Are the owners likely to prosecute?” So, if you think about that… Wow, you’ve got to know he’s a doctor, the call he got is probably a medical emergency, you have to infer that he eats there a lot, that they know who he is, they might even know he’s a doctor. Are they going to prosecute? So, it’s a gazillion social things that you have to know in order to answer that question.
Now, is that also on the same trajectory as solving twelfth grade science problems? Or is that question that I posed, would that require an AGI to answer?
Well, one of the things that we’ve learned is that, whenever you define a task—say, answering story-type questions that involve social nuance, and maybe ethical and practical considerations—that is on the trajectory of our research. You can imagine Aristo, over time, being challenged by these more nuanced questions.
But, again, we’ve gotten so good at identifying those tasks, building training sets, building models and then answering those questions, and that program might get good at answering those questions but still have a hard time crossing the street. Still have a hard time reading a poem or telling a joke.
So, the key to AGI is the “G”; the generality is surprisingly elusive. And that’s the amazing thing, because that four-year-old that we were talking about has generality in spades, even though she’s not necessarily a great chess player or a great Go player. So that’s what we learned.
As our AI technology evolves, we keep learning what the most elusive aspect of AI is. At first, if you read some of the stuff that was written in the ’60s and the ’70s, people were very skeptical that a program could ever play chess, because chess was really seen as something only very intelligent people are good at.
And then, that became solved, and people talked about learning. They said, “Well, gosh, but programs can’t learn.” And as we’ve gotten better, at least at certain kinds of learning, now the emphasis is on generality, right? How do we build a general program, given that all of our successes, whether it’s poker or chess or certain kinds of question answering, have been on very narrow tasks?
So, one sentence I read about Aristo says, “The focus of the project is explained by the guiding philosophy that artificial intelligence is about having a mental model for how things operate, and refining that mental model based on new knowledge.” Can you break that down for us? What do you mean?
Well, I think, again, lots of things. But I think a key thing not to forget—and it goes back to your favorite question about the nickel and the sun—is that so much of what we do makes use of background knowledge: extensive knowledge of facts, of words, of all kinds of social nuances, etc., etc.
And the hottest thing going is deep learning methods. Deep learning methods are responsible for the success in Go, but the thing to remember is that often, at least by any classical definition, those programs are very knowledge-poor. If you could talk to them and ask them, “What do you know?” you’d find out that—while they may have stored a lot of implicit information, say, about the game of Go—they don’t know a whole heck of a lot. And that, of course, touches on the topic of consciousness, which I understand is also covered in your book. If I asked AlphaGo, “Hey, did you know you won?” AlphaGo can’t answer that question. And it’s not because it doesn’t understand natural language. It’s not conscious.
Kasparov said that about Deep Blue. He said, “Well, at least it can’t gloat. At least it doesn’t know that it beat me.” To that point, Claude Shannon wrote about computers playing chess back in the ’50s, but it was an enormous amount of work. It took the best minds a long time to build something that could beat Kasparov. Do you think that something like that is generalizable to a lot of other things? Or am I hearing you correctly that that is not a step towards anything general? That’s a whole different kind of thing, and therefore Aristo is, kind of, doing something very different than AlphaGo or chess, or Jeopardy?
I do think that we can generalize from that experience. But I think that generalization isn’t always the one that people make. What we can generalize is that, when we have a very clear “objective function” or “performance criterion”—basically, it’s very clear who won and who lost—and we have a lot of data, then as computer scientists we’re very, very good—even if, as you mentioned, it took decades—at continuing to chip away at the problem with faster computers, more data, and more sophisticated algorithms, and ultimately solving it.
However, in the case of natural language: If you and I, let’s say we’re having a conversation here on this podcast—who won that conversation? Let’s say I want to do a better job if you ever invite me for another podcast. How do I do that? And if my method for getting better involves looking at literally millions of training examples, you’re not going to do millions of podcasts. Right?
So you’re right, that a very different thing needs to happen when things are vaguer, or more uncertain, or more nuanced, when there’s less training data, etc., etc.—all these characteristics that make Aristo and some of our other projects very, very different than chess or Go.
So, where is Aristo? Give me a question it can answer and a question it can’t. Or is that even a cogent question? Where are you with it?
First of all, we keep track of our scores. So, I can give you an example in a second. But when we look at what we call “non-diagram multiple choice”—questions that are purely in language, because diagrams can be challenging for the machine to interpret—we’ve been able to reach very close to eighty percent correctness. Eighty percent accuracy on non-diagram multiple choice questions for fourth grade.
When you include all the questions, we’re at sixty percent. Which is either great—because when we started, on all these questions with diagrams and what are called “direct answer” questions, where you have to answer with a phrase or a sentence rather than just choosing between four options, we were close to twenty percent. We were far lower.
So, we’ve made a lot of progress, so that’s on the glass-half-full side. And the glass-half-empty side, we’re still getting a D on a fourth-grade science test. So it’s all a question of how you look at it. Now, when you ask, “What questions can we solve?” We actually have a demo on our website, on AllenAI.org, that illustrates some of these.
If I go to the Aristo project there, and I click on “live demo,” I see questions like, “What is the main source of energy for the water cycle?” Or even, “The diagram below shows a food chain. If the wheat plants died, the population of mice would likely _______?” So, these are fairly complex questions, right?
But they’re not paragraph-long, and the thing that we’re still struggling with is what we call “brittleness.” If you take any one of these questions that we can answer, and then change the way you ask the question a bit, all of a sudden we fail. This is, by the way, a characteristic of many AI systems, this notion of brittleness—where a small change that a human might say, “Oh, that’s no different at all.” It can make a big difference to the machine.
It’s true. I’ve been playing around with an Amazon Alexa, and I noticed that if I say, “How many countries are there?” it gives me one number. If I say, “How many countries are there in the world?” it gives me a different number. Even though a human would see that as the same question. Is that the sort of thing you’re talking about?
That’s exactly the sort of thing I’m talking about, and it’s very frustrating. And, by the way, Alexa and Siri are a good way to take the pulse of AI—I mean, again, we’re one of the largest nonprofit AI research institutes in the world, but we’re still pretty small at 72 people—Alexa and Siri come from for-profit companies with thousands of people working on them, and it’s still the case that you can’t carry on a halfway decent dialogue with these programs.
And I’m not talking about the cutesy answers about, you know, “Siri, what are you doing tonight?” Or, “Are you better than Alexa?” I’m talking about, let’s say, the kind of dialogue you’d have with a concierge of a hotel, to help you find a good restaurant downtown. And, again, it’s because how do you score dialogues? Right? Who won the dialogue? All those questions, that are very easy to solve in games, are not even really well-posed in the context of a dialogue.
I penned an article about how—and I have to whisper her name, otherwise it will start talking to me—Alexa and Google Assistant give you different answers to factual questions.
So if you ask, “How many seconds are there in a year?” they give you different answers. And if you say, “Who designed the American flag?” they’ll give you different answers. Seconds in a year, you would think that’s an objective question, there’s a right and a wrong answer, but actually one gives you a calendar year, and one gives you a solar year, which is a quarter-day different.
And with the American flag, one says Betsy Ross, and the other one says the person who designed the 50-star configuration of the flag, which is our current flag. And in the end, both times those were the questioner’s fault, because the question itself is inherently vague, right? And so, even if the system is good, if the questions are poorly phrased, it still breaks, right? It’s still brittle.
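For readers who want to see the quarter-day gap behind those two answers, it is simple arithmetic; the snippet below is an illustrative sketch, assuming one assistant uses a 365-day calendar year and the other the commonly quoted 365.25-day figure.

```python
# Illustrative arithmetic behind the two different "seconds in a year" answers.
seconds_per_day = 24 * 60 * 60              # 86,400
calendar_year = 365 * seconds_per_day       # 31,536,000 seconds
julian_year = 365.25 * seconds_per_day      # 31,557,600 seconds
print(julian_year - calendar_year)          # 21,600 seconds -- a quarter of a day
```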
I would say that it’s the computer’s fault. In other words, again, an aspect of intelligence is being able to answer vague questions and being able to explain yourself. But these systems, even if their fact store is enormous—and one day, they’ll certainly exceed ours—if all it can do when you say, “Well, why did you give me this number?” is say, “Well, I found it here,” then really it’s a big lookup table.
It’s not able to deal with the vagueness, or to explain itself in a more meaningful way. What if you put the number three in that table? You ask, “How many seconds are there in a year?” The program would happily say, “Three.” And you say, “Does that really make sense?” And it would say, “Oh, I can’t answer that question.” Right? Whereas a person would say, “Wait a minute. It can’t be three seconds in a year. That just doesn’t make sense!” Right? So, we have such a long way to go.
Right. Well, let’s talk about that. You’re undoubtedly familiar with John Searle’s Chinese Room question, and I’ll set it up for the listener—because what I’m going to ask you is, is it possible for a computer to ever understand anything?
The setup, very briefly—I mean, I encourage people to look it up—is that there’s a person in a room and he doesn’t speak any Chinese, and he’s given Chinese questions, and he’s got all these books he can look it up in, but he just copies characters down and hands them back. And he doesn’t know if he’s talking about cholera or coffee beans or what have you. And the analogy is, obviously, that’s what a computer does. So can a computer actually understand anything?
You know, the Chinese Room thought experiment is really one of the most tantalizing and fun thought experiments in philosophy of mind. And so many articles have been written about it, arguing this, that or the other thing. In short, I think it does expose some of the issues, and the bottom line is when you look under the hood at this Chinese Room and the system there, you say, “Gosh, it sure seems like it doesn’t understand anything.”
And when you take a computer apart, you say, “Gosh, how could it understand? It’s just a bunch of circuits and wires and chips.” The only problem with that line of reasoning is, it turns out that if you look under the hood in a person’s mind—in other words, if you look at their brain—you see the same thing. You see neurons and ion potentials and chemical processes and neurotransmitters and hormones.
And when you look at it at that level, surely, neurons can’t understand anything either. I think, again, without getting to a whole other podcast on the Chinese Room, I think that it’s a fascinating thing to think about, but it’s a little bit misleading. Understanding is something that emerges from a complex technical system. That technical system could be built on top of neurons, or it could be built on top of circuits and chips. It’s an emergent phenomenon.
Well, then I would ask you, is it strong emergence or is it weak emergence? But, we’ve got three more projects to discuss. Let’s talk about Euclid.
Euclid is, really, a sibling of Aristo, and in Euclid we’re looking at SAT math problems. The Euclid problems are easier in the sense that you don’t need all this background knowledge to answer these pure math questions. You surely need a lot less of that. However, you really need to very fully and comprehensively understand the sentence. So, I’ll give you my favorite example.
This is a question that is based on a story about Ramanujan, the Indian number theorist. He said, “What’s the smallest number that’s the sum of two cubes in two different ways?” And the answer to that question is a particular number, which the listeners can look up on Google. But, to answer that correctly, you really have to fully parse that rather long and complicated sentence and understand “the sum of two cubes in two different ways.” What on earth does that mean?
And so, Euclid is working to have a full understanding of sentences and paragraphs, which are the kind of questions that we have on the SATs. Whereas often with Aristo—and certainly, you know, with things like Watson and Jeopardy—you could get away with a much more approximate understanding, “this question is sort of about this.” There’s no “sort of” when you’re dealing with math questions, and you have to give the answer.
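For anyone who would rather compute that Ramanujan answer than Google it, here is a minimal brute-force sketch; the search bound of 50 is an arbitrary illustrative choice, not part of the problem.

```python
from collections import defaultdict
from itertools import combinations_with_replacement

# Group numbers by the ways they can be written as a sum of two positive cubes,
# then take the smallest one that has at least two distinct representations.
sums = defaultdict(list)
for a, b in combinations_with_replacement(range(1, 50), 2):
    sums[a ** 3 + b ** 3].append((a, b))

answer = min(n for n, pairs in sums.items() if len(pairs) >= 2)
print(answer, sums[answer])  # 1729 [(1, 12), (9, 10)] -- the Hardy-Ramanujan number
```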
And so that is, as you say, a sibling to Aristo; but Plato, the third one we’re going to discuss, is something very different, right?
Right. Maybe if we’re using this family metaphor, Plato is Aristo’s and Euclid’s cousin, and what’s going on there is we don’t have a natural benchmark test, but we’re very, very interested in vision. We’ve realized that a lot of the questions that we want to address, a lot of the knowledge that is present in the world isn’t expressed in text, certainly not in any convenient way.
One great way to learn about the sizes of things—not just the sun and a nickel, but maybe even a giraffe and a butterfly—is through pictures. You’re not going to find the sentence that says, “A giraffe is much bigger than a butterfly,” but if you see pictures of them, you can make that connection. Plato is about extracting knowledge from images, from videos, from diagrams, and being able to reason over that to draw conclusions.
So, Ali Farhadi, who leads that project and who shares his time between us and the Allen School at the University of Washington, has done an amazing job generating result after result, where we’re able to do remarkable things based on images.
My favorite example of this—you kind of have to visualize it—imagine drawing a diagonal line and then a ball on top of that line. What’s going to happen to that ball? Well, if you can visualize it, of course the ball’s going to roll down the line—it’s going to roll downhill.
It turns out that most algorithms are actually really challenged to make that kind of prediction, because to make that kind of prediction, you have to actually reason about what’s going on. It’s not just enough to say, “There’s a ball here on a line,” but you have to understand that this is a slope, and that gravity is going to come into play, and predict what’s going to happen. So, we really have some of the state-of-the-art capabilities, in terms of reasoning over images and making predictions.
Isn’t video a whole different thing, because you’re really looking at the differences between images, or is it the same basic technology?
At a technical level, there are many differences. But actually, the elegant thing about video, as you intimated, a video is just a sequence of images. It’s really our eye, or our mind, that constructs the continuous motion. All it is, is a number of images shown per second. Well, for us, it’s a wonderful source of training data, because I can take the image at Second 1 and make a prediction about what’s going to happen in Second 2. And then I can look at what happened at Second 2, and see whether the prediction was correct or not. Did the ball roll down the hill? Did the butterfly land on the giraffe? So there’s a lot of commonalities, and video is actually a very rich source of images and training data.
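To make the idea of video as free training data a bit more concrete, here is a minimal sketch, assuming OpenCV is available and using a hypothetical model and comparison function, of how consecutive frames can be paired so that a later frame serves as the check on a prediction made from an earlier one. It is an illustration of the idea only, not AI2’s actual pipeline.

```python
# Minimal sketch: turning a video into (current frame, later frame) pairs so a
# model can predict what happens next and be checked against what actually
# happened. Helper names in the commented loop are hypothetical placeholders.
import cv2  # OpenCV, assumed available

def frame_pairs(video_path, seconds_apart=1.0):
    """Yield (frame at time t, frame at time t + seconds_apart) pairs."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * seconds_apart))
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    for i in range(0, len(frames) - step, step):
        yield frames[i], frames[i + step]  # input and ground-truth "future"

# Outline of the self-supervised loop: the model predicts the future frame (or
# some abstraction of it, e.g. "the ball is lower on the slope") from the
# current one, and the actual later frame supplies the supervision signal.
# for current, future in frame_pairs("ball_rolling.mp4"):
#     prediction = model.predict(current)   # hypothetical model
#     loss = compare(prediction, future)    # hypothetical comparison
```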
One of the challenges with images is—well, let me give an example, then we can discuss it. Say I lived on a cul-de-sac, and the couple across the street were expecting—the woman is nine months pregnant—and one time I get up at three in the morning and I look out the window and their car is gone. I would say, “Aha, they must have gone to the hospital.” In other words, I’m reasoning from what’s not in the image. That would be really hard, wouldn’t it?
Yes. You’re way ahead of Plato. It’s very, very true.
That anticipates Semantic Scholar; I want to make sure that we get to that. With Semantic Scholar, a number of the capabilities that we see in these other projects come together. Semantic Scholar is a scientific search engine; it’s available 24/7 at semanticscholar.org, and it allows people to look for computer science papers and neuroscience papers. Soon we’re going to be launching the ability to cover all the papers in biomedicine that are available on engines like PubMed.
And what we’re trying to do there is deal with the fact that there are so many, you know, over a hundred million scientific research papers, and more are coming out every day, and it’s virtually impossible for anybody to keep up. Our nickname for Semantic Scholar sometimes is Da Vinci, because we say Da Vinci was the last Renaissance man, right?
The person who, kind of, knew all of science. There are no Renaissance men or women anymore, because we just can’t keep up. And that’s a great place for AI to help us, to make scientists more efficient in their literature searches, more efficient in their abilities to generate hypotheses and design experiments.
That’s what we’re trying to do with Semantic Scholar, and that involves understanding language, and that involves understanding images and diagrams, and it involves a lot more.
Why do you think the semantic web hasn’t taken off more, and what is your prediction about the semantic web?
I think it’s important to distinguish between “semantics,” as we use it at Semantic Scholar, and “semantic” in the semantic web. In Semantic Scholar, we try to associate semantic information with text. For example, this paper is about a particular brain region, or this paper uses fMRI methodology, etc. These are pretty simple semantic distinctions.
The semantic web was a very rich notion of semantics that, frankly, is superhuman and is way, way, way beyond what we can do in a distributed world. So that vision by Tim Berners-Lee really evolved over the years into something called “linked open data,” where, again, the semantics is very simple and the emphasis is much more about different players on the web linking their data together.
I think that very, very few people are working on the original notion of the semantic web, because it’s just way too hard.
I’m just curious, and this is a somewhat frivolous question, but the names of your projects don’t seem to follow an overarching naming scheme. Is that because they were created and named elsewhere, or what?
Well, it’s because, you know, if you put a computer scientist, which is me, in charge of branding, you’re going to run into problems. So, I think, Aristo and Euclid are what we started with, and those were roughly analogous. Then we added Plato, which is an imperfect name, but still roughly in the same classical world. And then Semantic Scholar really is a play off of Google Scholar.
So Semantic Scholar is, if you will, really the odd duck here. And when we had a project where we were considering doing work on dialogue—which we still are—we called that project Socrates. But then I’m also thinking, “Do we really want all the projects to be named after men?” which is definitely not our intent. So, I think the bottom line is it’s an imperfect naming scheme and it’s all my fault.
So, the mission of the Allen Institute for AI is, quote: “Our mission is to contribute to humanity through high-impact AI research and engineering.” Talk to me about the “contribute to humanity” part of that. What do you envision? What do you hope comes of all of this?
Sure. So, I think that when we started, we realized that so often AI is vilified—particularly in Hollywood films, but also by folks like Stephen Hawking and Elon Musk—and we wanted to emphasize AI for the common good, AI for humanity, where we saw some real benefits to it.
And also, in a lot of for-profit companies, AI is used to target advertising, or to get you to buy more things, or to violate your privacy, if it’s being used by intelligence agencies or by aggressive marketing. And we really wanted to find places like Semantic Scholar, where AI can help solve some of humanity’s thorniest problems by helping scientists.
And so, that’s where it comes from; it’s a contrast to these other, either more negative uses, or more negative views of AI. And we’ve been really pleased that, since we were founded, organizations like OpenAI or the Partnership on AI, which is an industry consortium, have adopted missions that are very consistent and kind of echo ours, you know: AI to benefit humanity and society and things like that. So it seems like more and more of us in the field are really focused on using AI for good.
You mentioned fear of AI, and the fear manifests—and you can understand Hollywood, I mean, it’s drama, right—but the fear manifests in two different ways. One is what you alluded to, that it’s somehow bad, you know, Terminator or what have you. But the other one that is on everybody’s mind is, what do you think about AI’s effect on employment and jobs?
I think that’s a very serious concern. As you can tell, I’m not a big fan of the doomsday scenarios about AI. I tell people we should not confuse science with science fiction. But another reason why we shouldn’t concern ourselves with Skynet and doomsday scenarios is that we have a lot more realistic and pressing problems to worry about. And that, for example, is AI’s impact on jobs. That’s a very real concern.
We’ll see it in the transportation sector particularly soon, I predict, where truck drivers and Uber drivers and so on are going to be gradually squeezed out of the market, and that’s a very significant number of workers. And it’s a challenge, of course, to retrain these people and to help them find other jobs in an increasingly digital economy.
But, you know, in the history of the United States, at least over the past couple of hundred years, there have been a number of really disruptive technologies that have come along—the electrification of industry, the mechanization of industry, the replacement of animal power with steam—things that had an impact very quickly, and yet unemployment never once budged because of them. Because what happens is, people just use the new technology. And isn’t it at least possible that, as we move along with the development of artificial intelligence, it actually turns out to be an empowering technology that lets people increase their own productivity? Like, anybody could use it to increase their productivity.
I do think that AI will have that role, and I do think that, as you intimated, these technological forces have some real positives. So, the reason that we have phones and cars and washing machines and modern medicine, all these things that make our lives better and that are broadly shared through society, is because of technological advances. So I don’t think of these technological advances, including AI advances, as either a) negative; or b) avoidable.
If we say, “Okay, we’re not going to have AI,” or “We’re not going to have computers,” well, other countries will and they’ll overtake us. I think that it’s very, very difficult, if not impossible to stop broad-based technology change. Narrow technologies that are particularly terrible, like landmines or biological weapons, we’ve been able to stop. But I think AI isn’t stoppable because it’s much broader, and it’s not something that should be stopped, it’s not like that.
So I very much agree with what you said, but with one key caveat. We survived those things and we emerged thriving, but the disruption, over significant periods of time and for millions of people, was very, very difficult. As we went from a society that was ninety-something percent agricultural to one where only two percent of workers are in agriculture, people suffered and people were unemployed. And so, I do think that we need to have programs in place to help people with these transitions.
And I don’t think that they’re simple because some people say, “Sure, those old jobs went away, but look at all these great jobs. You know, web developer, computer programmer, somebody who leverages these technologies to make themselves more effective at their jobs.” That’s true, but the reality is a lot more complicated. Are all these truck drivers really going to become web developers?
Well, I don’t think that’s the argument, right? The argument is that everybody moves one small notch up. So somebody who was a math teacher in a college, maybe becomes a web developer, and a high school teacher becomes the college teacher, and then a substitute teacher gets the full time job.
Nobody says, “Oh, no, no, we’re going to take these people, you know, who have less training and we’re going to put them in these highly technical jobs.” That’s not what happened in the past either, right? The question is can everybody do a job a little more complicated than the one they have today? And if the answer to that is yes, then do we have a big disruption coming?
Well, first of all, you’re making a fair point. I was oversimplifying by mapping the truck drivers to the developers. But, at the same time, I think we need to remember that these changes are very disruptive. And so, the easiest example to give, because it’s fresh in my mind and, I think, in other people’s minds—let’s look at Detroit. That wasn’t technological change; it was more due to globalization and the shifting of manufacturing jobs out of the US.
But nevertheless, these people didn’t just each take a little step up or a little step to the right, whatever you want to say. These people and their families suffered tremendously. And it’s had very significant ramifications, including Detroit going bankrupt, including many people losing their health care, including the vote for President Trump. So I think if you think on a twenty-year time scale, will the negative changes be offset by positive changes? Yes, to a large extent. But if you think on shorter time scales, and you think about particular populations, I don’t think we can just say, “Hey, it’s going to all be alright.” I think we have a lot of work to do.
Well, I’m with you there, and if there’s anything that I think we can take comfort in, it’s that the country did that before. There used to be a debate in the country about whether post-literacy education was worth it. This was back when we were an agricultural society. And you can understand the logic, right? “Well once somebody learns to read, why do you need to keep them in school?” And then, people said, “Well, the jobs of the future are going to need a lot more skills.” That’s why the United States became the first country in the world to guarantee a high school education to every single person.
And it sounds like you’re saying something like that, where we need to make sure that our education opportunities stay in sync with the requirements of the jobs we’re creating.
Absolutely. I think we are agreeing that there’s a tremendous potential for this to be positive, you know? Some people, again, have a doomsday scenario for jobs and society. And I agree with you a hundred percent; I don’t buy into that. And it sounds like we also agree, though, that there are things that we could do to make these transitions smoother and easier on large segments of society.
And it definitely has to do with improving education and finding opportunities, etc., etc. So, I think it’s really a question of how painful this change will be, and how long it will take until we’re at a new equilibrium that, by the way, could be a fantastic one. Because, you know, the interesting thing about the truck jobs, the toll jobs that went away, and a lot of other jobs that went away is that some of these jobs are awful. They’re terrible, right? People aren’t excited about a lot of these jobs. They do them because they don’t have something better. If we can offer them something better, then the world will be a better place.
Absolutely. So we’ve talked about AGI. I assume you think that we’ll eventually build a general intelligence.
I do think so. I think it will easily take more than twenty-five years, and it could take as long as a thousand years, but I’m what’s called a materialist, which doesn’t mean that I like to shop on Amazon; it means that I believe that when you get down to it, we’re constructed out of atoms and molecules, and there’s nothing magical about intelligence. Sorry—there’s something tremendously magical about it, but there’s nothing ineffable about it. And, so, I think that, ultimately, we will build computer programs that can do and exceed what we can do.
So, by extension, you believe that we’ll build conscious machines as well?
Yes. I think consciousness emerges from it. I don’t think there’s anything uniquely human or biological about consciousness.
The range of time that people think it will be before we create an AGI, in my personal conversations, runs from five to five hundred years. Where in that spectrum would you cast your ballot?
Well, I would give anyone a thousand-to-one odds that it won’t happen in the next five years. I’ll bet ten dollars against ten thousand dollars, because I’m in the trenches working on these problems right now and we are just so, so far from anything remotely resembling an AGI. And I don’t know anybody in the field who would say or think otherwise.
I know there are some, you know, so-called futurists or what have you… But people actively working on AI don’t see that. And furthermore, even if somebody says some random thing, then I would ask them, “Back it up with data.” What’s your basis for saying that? Look at our progress rates on specific benchmarks and challenges; they’re very promising but they’re very promising for a very narrow task, like object detection or speech recognition or language understanding etc., etc.
Now, when you go beyond ten, twenty, thirty years, who can predict what will happen? So I’m very comfortable saying it won’t happen in the next twenty-five years, and I think that it is extremely difficult to predict beyond that, whether it’s fifty or a hundred or more, I couldn’t tell you.
So, do you think we have all the parts we need to build an AGI? Is it going to take some breakthrough that we can’t even fathom right now? Or with enough deep learning and faster processors and better algorithms and more data, could you say we are on a path to it now? Or is your sole reason for believing we’re going to build an AGI that you’re a materialist—you know, we’re made of atoms, we can build something made of atoms.
I think it’s going to require multiple breakthroughs which are very difficult to imagine today. And let me give you a pretty concrete example of that.
We want to take the information that’s in text and images and videos and all that, and represent that internally using a representation language that captures the meaning, the gist of it, like a listener to this podcast has kind of a gist of what we’ve talked about. We don’t even know what that language looks like. We have various representational languages, none of them are equal to the task.
Let me give you another way to think about it as a thought experiment. Let’s suppose I was able to give you a computer, a computer that was as fast as I wanted, with as much memory as I wanted. Using that unbelievable computer, would I now be able to construct an artificial intelligence that’s human-level? The answer is, “No.” And it’s not about me. None of us can.
So, if it were really just about the speed and so on, then I would be a lot more optimistic about doing it in the short term, because we’re so good at making things run two times faster, making them run ten times faster, building a faster computer, storing information. We used to store it on floppy disks, and now we store it here. Next we’re going to be storing it in DNA. This exponential march of technology under Moore’s Law—things keep getting faster and cheaper—is, in that sense, phenomenal. But that’s not enough to achieve AGI.
Earlier you said that you tell people not to confuse science with science fiction. But, speaking of science fiction, is there anything that you’ve seen, read, or watched that you actually think is a realistic scenario of what we may be able to do, of what the future may hold? Is there anything that you look at and say, well, it’s fiction, but it’s possible?
You know, one of my favorite pieces of fiction is the book Snow Crash, where it, kind of, sketches this future of Facebook and the future of our society and so on. If I were to recommend one book, it would be that. I think a lot of the books about AI are long on science fiction and short on what you call “hard science fiction”; short on reality.
And if we’re talking about science fiction, I’d love to end with a note where, you know, there’s this famous Arthur C. Clarke quote, “Any sufficiently advanced technology is indistinguishable from magic.” So, I think, to a lot of people AI seems like magic, right? We can beat the world champion in Go—and my message to people, again, as somebody who works in the field day in and day out, it couldn’t be further from magic.
It’s blood, sweat, and tears—and, by the way, human blood, sweat, and tears—of really talented people, to achieve the limited successes that we’ve had in AI. And AlphaGo, by the way, is the ultimate illustration of that. Because it’s not that AlphaGo defeated Lee Sedol, or the machine defeated the human. It’s this remarkably talented team of engineers and scientists at Google DeepMind, working for years; they’re the ones who defeated Lee Sedol, with some help from technology.
Alright. Well, that’s a great place to leave it, and I want to thank you so much. It’s been fascinating.
It’s a real pleasure for me, and I look forward both to listening to this podcast, to your other ones, and to reading your book.
Thank you.
[voices_in_ai_link_back]
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

THE INTERNET OF EATS

It was the Three-Course Dinner Gum that served as Violet Beauregarde’s downfall at Willy Wonka’s Chocolate Factory and also introduced multiple generations to the curious possibilities of food’s future. Now, more than fifty years since the publication of the Roald Dahl classic, we’re on the brink of innovations that might make twentieth-century fiction look more like a forecasting engine. As the way we cook, eat and interact with our food evolves, what does the future of eating look like?
Let’s start in the kitchen
Many an embroidered wall-hanging will tell you that the kitchen is the heart of the home. Today, that heart holds many possibilities for innovation, some of which are already in play. There are a growing number of smart refrigerators on the market, offering touch-screen, wi-fi enabled doors—yes, you can watch cat videos but you can also view how many eggs you have stocked while you’re at the market.
Similarly, wi-fi oven ranges are making it possible to adjust oven temperatures from afar and check if you left your burners on after you left the house. The connectivity plays out in a few different ways; some appliances will connect to your smartphone, but many are hooking up with smart home systems or digital assistants (see Whirlpool and Nest and GE and Alexa) and yet others plug into their own smart home systems (see Samsung’s Smartthings Hub).
But if you’re not ready to invest in new built-in appliances, there are other entry points to smarter cooking. Cuciniale, for example, promises to deliver perfectly cooked meats by connecting your steak to your smartphone through its multisensor probe. June Intelligent Oven also works with sensors to improve timing and preparation, but can also recognize what food it’s cooking.
These (as well as the bigger appliances) have the appeal of ease and convenience and may also elevate our cooking skills much in the same way digital has improved our photography. (Think of “seared” as a filter you can simply tap to apply to your tuna.)
Those holding out for a fully hands-off solution might find projects like UK startup Moley Robotics’ robotic kitchen of interest. Moley offers a pair of wall-embedded arms that can prepare and cook your meals. (No indication if it also does dishes.) Meanwhile, thanks to artificial intelligence, robots are learning how to cook the same way many humans are picking up tips: through YouTube. It’s all quite compelling, though, for now at least, it’s still more convenient to just order a pizza.
What about the actual food?
A more savory aspect of the future of food is, naturally, the food itself. One fairly easy trend to identify is the move toward more health-conscious eating—there are plenty of studies to support this, but you really only need to see that McDonald’s sells apple slices for confirmation. Technology is ready to enable this trend, with apps that offer calorie counts from pictures of food and devices like Nima that scan food for gluten and other allergens.
In a way that mirrors the fragmenting of media experiences, we’re also moving toward an era of more customized meals. That’s not simply Ethan-won’t-eat-zucchini-so-make-him-a-hot-dog customization, but rather food that is developed to mirror our specific preferences, adjust to allergies and even address specific nutritional deficiencies. Success here relies on access to dietary insights, be it through logged historical eating patterns, blood levels and/or gut microbiome data. (New York Magazine has an interesting piece on the use of microbiome data to create your own personal food algorithm.)
And while it’s easy to imagine more personalized diets at home, we can count on technology to support that same customized approach while we’re eating out. Increasingly, restaurants like Chili’s, Applebee’s, Olive Garden and Buffalo Wild Wings are introducing the tableside tablet to increase efficiency and accuracy in orders and payments. As restaurant-goers take more control of how food is ordered, it will be easy to expect more customization in what is ordered.
Are we redefining food?
Given the rise of allergies and food intolerance, it’s not difficult to imagine a world of highly-customized eating. More unexpected in the evolution of eating is the work being done in neurogastronomy. This is a field that is approaching flavor from a neural level—actually rewiring the brain to change our perception of taste. In other words, neurogastronomy could make a rice cake register as delicious as ice cream cake. By fundamentally changing the types of food from which we derive pleasure, neurogastronomy could essentially trick us into healthier eating.
Then there is the emerging camp that eschews eating in favor of more efficient food alternatives. Products like the provocatively named Soylent and the much humbler-sounding Schmilk offer a minimalist approach to nutrition (underscored by minimalist packaging), sort of like Marie Kondo for your diet. While this level of efficiency may have appeal in today’s cult of busy-ness, there’s something bittersweet about stripping food to the bare nutritional essentials, like eliminating the art of conversation in favor of plain, cold communication.
Another entry from the takes-some-time-to-get-used-to department comes from a team of Danish researchers. With the goal of addressing the costly challenge of food storage in space, CosmoCrops is working on a way to 3D-print food. There are already a number of products available that offer 3D-printed food (check out this Business Insider article for some cool food sculptures), but CosmoCrops is unique in its aim to reduce storage needs by printing food from bacteria. To that end, they are developing a ‘super-bacterium’ that can survive in space. (What could possibly go wrong?)
Where is the opportunity?
It’s probably too soon to tell if we’ll be more likely to nosh on bacteria burgers or pop nutritional powder pills come 2050. What is easier to digest today is the fact that connectivity is coming to eating. For the home kitchen, it won’t happen immediately—the turnover for built-in appliances isn’t as quick as, say, televisions and costs are still high. This means there’s still time for the contenders, both the appliance builders and the smart technology providers, to figure out which features will tip the kitchen in their favor.
From a dietary perspective, there is an opportunity in bridging the gap between our diet and technology. Restaurants will want to explore how to use technology to support more customized food preferences, but the broader question may be what will make it possible—and acceptable, in terms of privacy—to analyze personal data in order to develop meals that align with our unique food preferences as well as our specific nutritional needs? Maybe it’s a wearable that links your gut bacteria to ingredients stocked in the fridge, a toothbrush that reads your saliva, or (to really close the loop) the diagnostic toilet.
With innovation happening on many tracks, the possibilities for our future cooking and eating are both broad and captivating. What will lunch look like in the next fifty, twenty, or even ten years? To borrow from Willy Wonka (who actually borrowed from Oscar Wilde): “The suspense is terrible. I hope it’ll last.”

Enchanting Products and Spaces by Rethinking the Human-Machine Interface

At the Gigaom Change conference in Austin, Texas, on September 21-23, 2016, David Rose (CEO of Ditto Labs, MIT Media Lab researcher and author of Enchanted Objects), Mark Rolston (Founder and Chief Creative Officer at argodesign) and Rohit Prasad (Vice President and Head Scientist, Alexa Machine Learning) spoke with moderator Leanne Seeto about “enchanted” products, the power of voice-enabled interactions and the evolution of our digital selves.
There’s so much real estate around us for creating engaging interfaces. We don’t need to be confined to devices. Or at least that is the belief of Gigaom Change panelists David Rose, Rohit Prasad and Mark Rolston, who talked about the ideas and work being explored today that will change the future of human-machine interfaces, creating more enchanted objects in our lives.
With the emergence of the Internet of Things (IoT) and advances in voice recognition, touch and gesture-based computing, we are going to see new types of interfaces that look less like futuristic robots and more like the things we interact with daily.
Today we’re seeing this happen most in our homes, now dubbed the “smart home.” Window drapes that automatically close to give us privacy when we need it are just one example of how our homes and even our workspaces will soon come alive with what Rose and Rolston think of as Smart-Dumb Things (SDT). Another example might be an umbrella that can accurately tell you if or when it’s going to rain. In the near future, devices will emerge out of our phones and onto our walls, furniture and products. We may even see these devices added to our bodies. This supports the new thinking that devices and our interactions with them can be a simpler, more seamless and natural experience.
Rose gave an example from a collaboration he did with the architecture firm Gensler for the offices of Salesforce. He calls it a “conversational balance table.” It’s a device that helps subtly notify people who are speaking too much during meetings. “Both introverts and extraverts have good ideas. What typically happens, though, is that during the course of a meeting, extraverts take over the conversation, often not knowingly,” Rose explains, “so we designed a table with a microphone array around the edge that identifies who is speaking. There’s a constellation of LEDs embedded underneath the veneer so as people speak, LEDs illuminate in front of where you are. Over the course of 10 or 15 minutes you can see graphically who is dominating the conversation.”
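As a rough illustration of the bookkeeping behind such a balance table, the sketch below accumulates speaking time per seat and maps each seat’s share of the conversation to an LED brightness level. The microphone-array and LED-driver calls in the commented loop are hypothetical placeholders, not the actual Gensler/Salesforce implementation.

```python
# Minimal sketch of a "conversational balance" display: track how long each
# seat has been speaking and light the LEDs in front of the talkative seats
# more brightly. Device interfaces are hypothetical placeholders.
from collections import defaultdict

speaking_seconds = defaultdict(float)  # seat id -> cumulative talk time

def update(active_seat, interval=0.1):
    """Credit the seat that is currently speaking with `interval` seconds."""
    if active_seat is not None:
        speaking_seconds[active_seat] += interval

def led_levels():
    """Return per-seat LED brightness (0.0-1.0) proportional to talk share."""
    total = sum(speaking_seconds.values()) or 1.0
    return {seat: secs / total for seat, secs in speaking_seconds.items()}

# Main loop outline: the microphone array reports which seat is speaking; every
# tick we update the tally and redraw the LED constellation under the veneer.
# while True:
#     seat = mic_array.active_speaker()   # hypothetical sensor call
#     update(seat)
#     table_leds.render(led_levels())     # hypothetical LED driver
#     time.sleep(0.1)                     # requires `import time`
```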
So what about voice? Will we be able to talk to these devices too? VP and Head Scientist behind Amazon Alexa, Rohit Prasad, is working on vastly improving voice interactions with devices. Prasad believes voice will be the key feature in the IoT revolution that is happening today. Voice will allow us to access these new devices within our homes and offices more efficiently. As advances in speech recognition continue, voice technology will become more accurate and able to quickly understand our meaning and context.
Amazon is hoping to spur even faster advances in voice from the developer community through Alexa Skills Kit (ASK) and Alexa Voice Service (AVS), which allow developers to build voice-enabled products and devices using the same voice service that powers Alexa. All of this raises important questions. How far does this go? When does voice endow an object with the attributes of personhood? That is, when does an object become an “enchanted” object?
At some point, as Mark Rolston of argodesign has observed, users are changed in the process of interacting with these objects and spaces. Rolston believes that our digital selves will evolve into entities of their own — what he calls our “meta me,” a combination of both the real and the digital you. In the future, Rolston sees our individual meta me’s as more than just data: actually negotiating, transacting, organizing, and speaking on our behalf.
And while this is an interesting new concept for our personal identity, what is most interesting is using all of this information and knowledge to get decision support on who we are and what we want. The ability of these cognitive, connected applications to help us make decisions in our lives is huge. What we’re moving toward is creating always-there digital companions to help with our everyday needs. Imagine the future when AI starts to act as you, making the same decisions you would make.
As this future unfolds, we’re going to begin to act more like nodes in a network than simply users. We’ll have our own role in asking questions of the devices and objects around us, telling them to shut off, turn on, or help us with tasks; gesturing or touching them to initiate some new action. We’ll still call upon our smartphones and personal computers, but we won’t be as tethered to them as our primary interfaces.
We’ll begin to call on these enchanted devices, using them for specific tasks or even in concert together. When you ask Amazon’s Echo a simple question like “what’s for lunch?” you won’t be read a lengthy menu from your favorite restaurant. Instead, your phone will vibrate, letting you know it has the menu pulled up for you to scroll through and decide what to eat. Like the talking candlestick and teapot in Beauty and the Beast, IoT is going to awaken a new group of smart, interconnected devices that will forever change how we interact with our world.
By Royal Frasier, Gryphon Agency for Gigaom Change 2016