Voices in AI – Episode 55: A Conversation with Rob High


About this Episode

Episode 55 of Voices in AI features host Byron Reese and Rob High talking about IBM Watson and the history and future of AI. Rob High is an IBM Fellow and the Vice President and Chief Technology Officer of IBM Watson.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. August 12th, 1981. That was the day IBM released the IBM PC and who could have imagined what that would lead to? Who would’ve ever thought, from that vantage point, of our world today? Who could’ve imagined that eventually you would have one on every desktop and then they would all be connected? Who would have guessed that through those connections, trillions of dollars of wealth would be created? All the companies, you know, that you see in the news every day from eBay to Amazon to Google to Baidu to Alibaba, all of them have, in one way or the other, as the seed of their genesis, that moment on August 12th, 1981.
Now the interesting thing about that date, August of ‘81, that’s kind of getting ready to begin the school year, the end of the summer. And it so happens that our guest, Rob High, graduated from UC Santa Cruz in 1981, so he graduated about the same time, just a few months before this PC device was released. And he went and joined up with IBM. And for the last 36 or 37 years, he has been involved in that organization affecting what they’re doing, watching it all happen, and if you think about it, what a journey that must be. If you ever pay your respects to Elvis Presley and see his tombstone, you’ll see it says, “He became a living legend in his own time.” Now, I’ll be the first to say that’s a little redundant, right? He was either a living legend or a legend in his own time. That being said, if there’s anybody who can be said to be a living legend in his own time, it’s our guest today. It’s Rob High. He is an IBM Fellow, he is a VP at IBM, he is the Chief Technology Officer at IBM Watson and he is with us today. Welcome to the show, Rob!
Rob High: Yeah, thank you very much. I appreciate the references but somehow I think my kids would consider those accolades to be a little, probably, you know, not accurate.
Well, but from a factual standpoint, you joined IBM in 1981 when the PC was brand new.
Yeah – I’ve really been honored with having the opportunity to work on some really interesting problems over the years. And with that honor has come the responsibility to bring value to the solutions we have for those problems. And for that, I’ve always been well-recognized. So I do appreciate you bringing that up. In fact, it really takes more than just any one person in this world to make changes meaningful.
Well, so walk me back to that. Don’t worry, this isn’t going to be a stroll down memory lane, but I’m curious. In 1981, IBM was of course immense, as immense as it is now, and the PC had to be a kind of tiny part of that at that moment in time. It was new. When did your personal trajectory intersect with that, or did it ever? Had you always been on the bigger system side of IBM?
No, actually. It was almost immediate. I don’t know the exact number, but I was probably pretty close to the first one hundred or two hundred people that ordered a PC when it got announced. In fact, the first thing I did at IBM was to take the PC into work and show my colleagues what the potential was. I was just doing simple, silly things at the time, but I wanted to make an impression that this really was going to change the way that we were thinking about our roles at work and what technology was going to do to help change our trajectory there. So, no, I actually had the privilege of being there at the very beginning. I won’t say that I had the foresight to recognize its utility, but I certainly appreciated it, and I think that to some extent, my own career followed the trajectory of change that PCs set off back then, in other areas as well: web computing, service orientation, now cloud computing, and of course cognitive computing.
And so, walk me through that and then let’s jump into Watson. So, walk me through the path you went through as this whole drama of the computer age unfolded around you. Where did you go from point to point to point through that and end up where you are now?
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 24: A Conversation with Deep Varma

In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
Listen to this episode: https://voicesinai.s3.amazonaws.com/2017-12-04-(00-55-19)-deep-varma.mp3
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma, he is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor of Science in Computer Science. He has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define artificial intelligence, this is an intelligence created by machines based on human wisdom, to augment a human’s lifestyle and help them make smarter choices. So that’s how I define artificial intelligence in very simple, layman’s terms.
But you just kind of used the word, “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think the intelligence part, what we need to understand is, when you think about human beings, most of the time, they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes we don’t see, is, I still remember in the old days I used to have this conventional thermostat at my home, which I turned on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. Now as soon as I put the Nest there, there’s intelligence. It is sensing whether someone is there in the home or not, so there’s motion sensing. Then it is seeing what kind of temperature I like during summer time, during winter time. And so, artificially, the software, which is the brain that we have put on this device, is doing this intelligence, and saying, “great, this is what I’m going to do.” So, in one way it augmented my lifestyle—rather than me making those decisions, it is helping me make the smart choices. So, that’s what I meant by this intelligence piece here.
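(For readers who want to see that learning loop in code, here is a minimal, hypothetical toy in Python, not Nest's actual algorithm: it simply remembers the setpoints a user chose in each context of season and occupancy, and averages them to predict a preferred temperature.)

```python
# Hypothetical toy, not Nest's algorithm: learn a preferred setpoint per context.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default_temp=20.0):
        # Past user-chosen setpoints, keyed by (season, occupied) context.
        self.history = defaultdict(list)
        self.default_temp = default_temp

    def record_adjustment(self, season, occupied, setpoint):
        """The user manually set a temperature; remember it for this context."""
        self.history[(season, occupied)].append(setpoint)

    def target(self, season, occupied):
        """Predict the preferred setpoint; fall back to a default when unseen."""
        past = self.history[(season, occupied)]
        return sum(past) / len(past) if past else self.default_temp

stat = LearningThermostat()
stat.record_adjustment("winter", True, 21.5)
stat.record_adjustment("winter", True, 22.0)
print(stat.target("winter", True))   # 21.75, learned from past adjustments
print(stat.target("summer", False))  # 20.0, no data yet, so the default
```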
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors there on those devices—think about the central nervous system, what human beings have, it is a small piece of a software which is embedded within that device, which is making decisions for you—so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step when you want to do computer vision or translation, do you start by saying, “Ok, how do I do it?” Or, do you start by saying, “Forget how a human does it, what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because the way human brains, or the central nervous system, the speed that they process the data, machines are still not there at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
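(What Deep describes, feeding labeled examples until the machine generalizes, is what the field calls supervised learning. Here is a minimal sketch with scikit-learn; the feature vectors are random noise standing in for flattened images, just to show the shape of the train-then-predict loop.)

```python
# Minimal supervised-learning loop: "here is a cat, here is a cat, too..."
# The vectors below are random noise standing in for flattened images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))    # 200 labeled "images", 64 features each
y_train = rng.integers(0, 2, size=200)  # 1 = cat, 0 = not a cat

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_image = rng.normal(size=(1, 64))    # an image nobody labeled for the machine
print("cat" if model.predict(new_image)[0] == 1 else "not a cat")
```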
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is, like, learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings fall under more of the general intelligence, and machines are still falling under the artificial intelligence framework.
And the example you were giving, I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines, and you keep feeding them the big data with the hope that they will learn and they will say, “This is basically a picture of a mouse or this is a picture of a cat.”
This is where, I think, this artificial general intelligence which is shaping up—that we are going to abstract a level up, and start conditioning—but I feel we haven’t cracked the code for one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
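(The capability Byron is asking about is usually called transfer learning, or few-shot learning: reuse the representation trained on abundant data and retrain only a small final layer on a handful of new examples. A hedged sketch with PyTorch and torchvision follows; the random tensors stand in for the “three or four images of mice.”)

```python
# Transfer-learning sketch: keep features learned from abundant data,
# retrain only the final layer on a handful of new examples.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                   # freeze the learned features

model.fc = nn.Linear(model.fc.in_features, 2)     # new head: mouse vs. not-mouse

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for "three or four images of mice".
few_shot_images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([1, 1, 0, 0])

for _ in range(20):                               # a few quick passes suffice
    optimizer.zero_grad()
    loss = loss_fn(model(few_shot_images), labels)
    loss.backward()
    optimizer.step()
```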
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about the far-off questions. That being said, everybody kind of likes to think about an AGI.
AI is, what, six decades old and we’ve been making progress. Do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or, is an AGI something completely different? It’s not just a better narrow AI, it’s not just a bunch of narrow AIs bolted together, it’s a completely different thing. What do you say?
Yes, so what I will say, it is like in the software development of computer systems—we call this an object, and then we do inheritance of a couple of objects, and the encapsulation of the objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building the computer vision for real estate. There are companies investing in building the computer vision for cars, and all those things. We are in this state where all these disconnected, disassociated investments are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree, I believe AI is complementing us and AGI is replicating us. And this is where I tend to believe that the day the AGI comes—that means it’s a singularity, where machines are reaching the wisdom or the processing power of human beings—that, to me, seems like doomsday, right? Because those machines are going to be smarter than us, and they will control us.
And the reason I believe that, and there is a scientific reason for my belief; it’s because we know that in the central nervous system the core unit is the neuron, and we know neurons carry two signals—chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate these sensory signals—you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen, I’m close to confident. Thinking machines are going to come—IBM Watson, as an example—so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side—basically, I call it a chemical soup—is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves—we recently saw what happened with Facebook, where Facebook machines were talking to each other and they started inventing their own language, over a period of time—but I believe the chemical mix of humans is next to impossible to reproduce.
I mean—and I don’t want to take a stand, because we have seen, over the decades, that what people used to believe in the seventies has been proven to be right—I think the day we are able to find the chemical soup, it means we have found the Nirvana; and we have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to be another Deep Varma?” and if yes, then where is this emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings onto the next level?
Well, I mean there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straightforward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions, and complementing you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments going on.
I read a paper of yours where you made a passing reference to Westworld.
Right.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers—take just the first episode, do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting.
I saw that Elon Musk spoke at the governors’ convention and said something, and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk.
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that millions of years of human evolution can immediately be matched by AGI, and the other camp is more concerned with controlling that, asking are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there’s generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes to my mind is, last year I did a remodel of my house. And when I did the remodeling there were electrical wires and water pipelines going inside my house, and we had to replace them with copper pipelines, and I was thinking, can machines replace those jobs? I keep coming back to the answer that those skill-level jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help to cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us to make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and to improve our productivity to make the smarter choices.
So, you would say that there are, in most jobs, there are elements that automation cannot replace, but it can augment, like a plumber, or so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about the example of a business consultant, right? Companies hire business consultants to come, collect all the data, then prepare PowerPoints on what you should do, and what you should not do. I think those are the areas where artificial intelligence is going to come, and if you have tons of the data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale them to the next level. So, in the example I’ve just given, the business consultants, if they are doing an audit of a company’s financial books, should look into the tools that help, so that an audit that used to take thirty days now takes ten days. Improve how fast and how accurate you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering into, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it, is you just start using it to increase your own productivity.
Yeah.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and know about customs in other countries and all of that, so that actually being a translator you get more business out of this, not less, so do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like do you get the loan, that you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) do you think it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers want to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will tell me, “Deep, your house is worth $1.5 million.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sold data.
And that, at the end of the day, gives confidence back to the consumer, and also it shows that this is not because this appraiser who came to my home didn’t like me for XYZ reason and ended up giving me something wrong; so, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and the machines take the bias out, and bring more unbiased decision making.
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take every bit of information about who paid their loan off, and you just pour it all into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t that the case that it’s weighing hundreds of factors with various weights and, how do you tease out, “Oh it was this”? Life isn’t quite that simple, is it?
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example, of how this can go wrong. When we were training our computer vision system, you know, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can look at any image and know this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window on the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into play, where small data can make all these predictions go completely wrong.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
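(One common way to reconcile Byron’s “hundreds of weighted factors” with the transparency Deep endorses is to use an interpretable model and report per-feature contributions. Below is a minimal sketch; the feature names and data are hypothetical, not from any real lending system.)

```python
# Sketch: a loan model whose decision can be explained feature by feature.
# Feature names and data are hypothetical, not from any real lender.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed", "prior_defaults"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                                   # applicants
y = (X[:, 0] - X[:, 3] + rng.normal(size=500) > 0).astype(int)  # 1 = repaid

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # signed pull toward approve/deny
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:15s} {c:+.2f}")
# The signed contributions are a readable answer to "why was this decided?"
```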
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, I am being positive. Realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is most important is we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something that they don’t have to worry about, and over a period of time artificial intelligence is going to start becoming more and more a part of business, and those who are not able to catch up with this are going to see the unemployment rate increase. They’re going to see company losses increase because some of their decisions are not being made in the right way.
You’re going to see companies, like Lehman Brothers, who are making all these data decisions for their clients by not using machines but relying on humans, and these big companies fail because of that. So, I think that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data physically stored in a system is getting hacked. And now we’ll have all this data wirelessly transmitting, machines talking to other devices, IoT devices talking to each other—how are we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, and trying to make assumptions, and enter into my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble-gum you had better start using AI”?
I will say every industry. I think every industry needs to worry about it. Some industries may adopt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that those businesses will be blindsided—be it small businesses or mom and pop shops or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that, to provide the remedy, we are building artificial intelligence.
But, this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like phishing, which is a big problem. There are so many companies who are trying to solve the email phishing problem, but we have seen that technologies are not able to solve it. So, I think AI is a remedy, but if we stay just focused on the big data, that’s, I think, completely wrong, because my fear is, a small data set can completely destroy the predictions built by a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision just collapsed with one photo of a dog. And this is where I feel that if we are emphasizing so much on using the big data set, big data set, big data set, are there smaller data sets which we also need to worry about, to make sure that we are bridging the gap enough and that our security is not compromised?
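(Deep’s dog-overlay story is essentially an adversarial or out-of-distribution example: one small, targeted change to an input overturns a prediction learned from a huge training set. A toy illustration follows; the “classifier” and the “overlay” are stand-ins, not the Berkeley experiment itself.)

```python
# Toy: a model trained on lots of data flips on one small, targeted change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 100))    # stand-in "images"
y = (X[:, 0] > 0).astype(int)       # 1 = cat, 0 = dog; one telling feature
model = LogisticRegression().fit(X, y)

cat = np.zeros((1, 100))
cat[0, 0] = 3.0                     # clearly a "cat"
print(model.predict(cat))           # [1] -> cat

patched = cat.copy()
patched[0, 0] = -3.0                # a small "overlay" on the telling region
print(model.predict(patched))       # [0] -> dog; big data undone by small data
```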
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day when we have built massive infrastructures that are reliant on computers—our generation of power and the supply of power and telecommunications—and suddenly there is a whole outage which can bring the world to a standstill, because there is a small hole which we never thought about? That, to me, is the bigger threat than the standalone individual things which are happening now.
That’s a hard problem to solve, there’s a small hole on the internet that we’ve not thought about that can bring the whole thing down, that would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring the artificial general intelligence back into a picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driven cars which are shaping up around us, and people were asking me when we will see those self-driven cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day,” and accidents are happening with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time and not able to get one signal, it can kill human beings much faster compared to what human drivers are doing, so that’s the rationale which I’m trying to put here.
So, one of my questions that I was going to ask you, is, do you think AI is a mania? Like it’s everywhere but it seems like, you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less, is that true?
That’s true.
There was a man in the ‘60s named Weizenbaum who made a program called ELIZA, which was a simple program that you would ask a question, say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and then it would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their hearts to it, even though they knew it was a program. It really disturbed him that people developed an emotional attachment to ELIZA, and he said that when a computer says, “I understand,” it’s a lie, that there’s no “I,” there’s nothing that understands anything.
Do you worry that if we build machines that can imitate human emotions, maybe the care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, it’s a very great question. I think you also picked a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in the kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what’s the temperature look like today?” Alexa says, “Temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying, “Hey, don’t be rude, just say, ‘Alexa, stop.’” You see that connection? The connection is you’ve already started treating this machine with respect, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends have been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, but not meeting individuals, just seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silo. There are too many silos happening. Like in my home, I have Alexa, I have a Nest, those plug-ins. I love, you know, where Alexa is talking to Nest, “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is that those devices are communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open, close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel that where we are heading is more of the connectivity with those devices, which will, again, complement us and help us make the smart choices, and our reliance on those assistants is going to increase.
Another example here, I get up in the morning and the first thing I do is come to the kitchen and say, “Alexa, put on the music,” and, “Alexa, what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” then Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine, my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to the office because it’s a working day, Deep just came to the kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in the next few years. All these movies that we used to watch where people were sitting there, and watching everything happen in real time, that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate because this brings me home. So, I think in artificial intelligence, as you said, there are two aspects to it, one is for a consumer and one is internal, and I think for us AI helps us to better understand what our consumers are looking for in a home. How can we help move them faster in their search—that’s the consumer facing tagline. And an example is, “Byron is looking at two bedroom, two bath houses in a quiet neighborhood, in good school district,” and basically using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer facing.
Now when it comes to the internal facing, internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal-facing way we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I call it data-driven companies—there are two aspects of being data-driven, one is the data-driven decision making, this is more for the analysts, and that’s the internal reference to your point, and the external is the consumer-facing data-driven product company, which focuses on how we understand the unique criteria and unique intent of you as a buyer—and that’s how we use artificial intelligence in the spectrum of Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative, like do you swing for the fences and miss a lot? Or, do you look for easy incremental wins? Or, are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better and in a much more accurate way, that’s how science helps you. And then, basically, there’s trial and error, or what we call an “A/B testing” framework, which helps you to validate whether what science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is, we have invested in our computer vision, which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we have even trained them so that they can say, “This is a kitchen with a wide granite counter-top.” Now we have built this massive database. When a consumer comes to the Trulia site, what they do is share their intent, they say, “I want two bedrooms in Noe Valley,” and the first thing that they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, there were times when those images did not match up with the intent of a consumer. So, what we did with our computer vision, we invested in something called “the most attractive image,” which basically takes three attributes—it looks into the quality of an image, it looks into the appropriateness of an image, and it looks into the relevancy of an image—and based on these three things we use our convolutional neural network models to rank the images and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing we show the most attractive photo first. And that way, the consumer gets more engaged with that listing. And what we have seen—using the science, which is machine learning, deep learning, CNN models, and doing the A/B testing—is that this project increased our enquiries for the listing by double digits, so that’s one of the examples which I just wanted to share with you.
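(The combination-and-ranking step Deep describes can be sketched as follows. In Trulia’s system the three scores come from convolutional networks; here they are supplied directly, and the blending weights are illustrative assumptions, not Trulia’s.)

```python
# Rank listing photos by three per-image scores. In production the scores
# would come from convolutional networks; the blend weights are illustrative.

def most_attractive(images, weights=(0.4, 0.3, 0.3)):
    """Order images by a weighted blend of quality, appropriateness, relevancy."""
    wq, wa, wr = weights
    def blended(img):
        return (wq * img["quality"]
                + wa * img["appropriateness"]
                + wr * img["relevancy"])
    return sorted(images, key=blended, reverse=True)

photos = [
    {"id": "kitchen.jpg",  "quality": 0.9, "appropriateness": 0.8, "relevancy": 0.95},
    {"id": "blurry.jpg",   "quality": 0.2, "appropriateness": 0.9, "relevancy": 0.60},
    {"id": "bathroom.jpg", "quality": 0.7, "appropriateness": 0.7, "relevancy": 0.40},
]
print(most_attractive(photos)[0]["id"])  # kitchen.jpg, shown first on the listing
```

(An A/B test, as Deep notes, would then compare inquiry rates between listings ranked this way and a control group before rolling the model out.)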
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here, and I will use just a minute to tell you, is that we’ve built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria. We have built machine learning systems like computer vision, recommender systems, and the user engagement prediction model, and I think our next challenge will be to keep optimizing on the consumer intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking into?” So, if Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean he wants to buy a home there? Or if Byron lives here in San Francisco, how do we understand?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge because we have already built this, but it is the optimization—and make sure that our consumers get what they’re searching for, keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example, is Deep, basically, is looking into Noe Valley or San Francisco, and email and push notifications are the two channels, for us, where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, I think that this is where we are going to keep optimizing on our consumer’s intent, and we’ll keep giving them the right content.
Alright, well that is fantastic. You write on these topics, so if people want to keep up with you, Deep, how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well I was just referring to your blog like I was reading some of your posts.
Yeah, so we have our tech blog, http://www.trulia.com/tech, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me, to be very candid—my data scientist team, and all those things. So, we write our blogs there, and I definitely ask people to follow us on those blogs. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels which people can follow. We also host data science meetups here at Trulia in San Francisco, on the seventh floor of our building; that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 20: A Conversation with Marie des Jardins

In this episode, Byron and Marie talk about the Turing test, Watson, autonomous vehicles, and language processing.
Listen to this episode: https://voicesinai.s3.amazonaws.com/2017-11-20-(01-03-03)-marie-de-jardin.mp3
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited that our guest is Marie des Jardins. She is an Associate Dean for Engineering and Information Technology as well as a professor of Computer Science at the University of Maryland, Baltimore County. She got her undergrad degree from Harvard, and a Ph.D. in computer science from Berkeley, and she’s been involved in the National Conference of the Association for the Advancement of Artificial Intelligence for over 12 years. Welcome to the show, Marie.
Marie des Jardins: Hi, it’s nice to be here.
I often open the show with “What is artificial intelligence?” because, interestingly, there’s no consensus definition of it, and I get a different kind of view of it from everybody. So I’ll start with that. What is artificial intelligence?
Sure. I’ve always thought about artificial intelligence as just a very broad term referring to trying to get computers to do things that we would consider intelligent if people did them. What’s interesting about that definition is it’s a moving target, because we change our opinions over time about what’s intelligent. As computers get better at doing things, they no longer seem that intelligent to us.
We use the word “intelligent,” too, and I’m not going to dwell on definitions, but what do you think intelligence is at its core?
So, it’s definitely hard to pin down, but I think of it as activities that human beings carry out, that we don’t know of lower order animals doing, other than some of the higher primates who can do things that seem intelligent to us. So intelligence involves intentionality, which means setting goals and making active plans to carry them out, and it involves learning over time and being able to react to situations differently based on experiences and knowledge that we’ve gained over time. The third part, I would argue, is that intelligence includes communication, so the ability to communicate with other beings, other intelligent agents, about your activities and goals.
Well, that’s really useful and specific. Let’s look at some of those things in detail a little bit. You mentioned intentionality. Do you think that intentionality is driven by consciousness? I mean, can you have intentionality without consciousness? Is consciousness therefore a requisite for intelligence?
I think that’s a really interesting question. I would decline to answer it mainly because I don’t think we ever can really know what consciousness is. We all have a sense of being conscious inside our own brains—at least I believe that. But of course, I’m only able to say anything meaningful about my own sense of consciousness. We just don’t have any way to measure consciousness or even really define what it is. So, there does seem to be this idea of self-awareness that we see in various kinds of animals—including humans—and that seems to be a precursor to what we call consciousness. But I think it’s awfully hard to define that term, and so I would be hesitant to put that as a prerequisite on intentionality.
Well, I think people agree what it is in a sense. Consciousness is the experience of things. It’s having a subjective experience of something. Isn’t the debate more like where does that come from? How does that arise? Why do we have it? But in terms of the simple definition, we do know that, don’t we?
Well, I don’t know. I mean, where does it come from, how does it arise, and do different people even have the same experience of consciousness as each other? I think when you start to dig down into it, we don’t have any way to tell whether another being is conscious or self-aware other than to ask them.
Let’s look at that for a minute, because self-awareness is a little different. Are you familiar with the mirror test that Professor Gallup does, where they take a sleeping animal, and paint a little red spot on its forehead, and then wait until it walks by a mirror, and if it stops and rubs its own forehead, then, according to the theory, it has a sense of self and therefore it is self-aware. And the only reason all of this matters is if you really want to build an intelligent machine, you have to start with what goes into that. So do you think that is a measure of self-awareness, and would a computer need to pass the mirror test, as it were?
That’s where I think we start to run into problems, right? Because it’s an interesting experiment, and it maybe tells us something about, let’s say, a type of self-awareness. If an animal’s blind, it can’t pass that test. So, passing the test therefore can’t be a prerequisite for intelligence.
Well, I guess the question would be if you had the cognitive ability and a fully functional set of senses that most of your species have, are you able to look at something else and determine that, “I am a ‘me’” and “That’s a reflection of me,” and “That actually is me, but I can touch my own forehead.”
I’m thinking, sorry. I’m being nonresponsive because I’m thinking about it, and I guess what I’m trying to say is that a test that’s designed for animals that have evolved in the wild is not necessarily a meaningful test for intelligent agents that we’ve engineered, because I could design a robot that can pass that test, that nobody would think was self-aware in any interesting and meaningful sense. In other words, for any given test you design, I can game and redesign my system to pass that test. But the problem is that the test measures something that we think is true in the wild, but as soon as we say, “This is the test,” we can build the thing that passes that test that doesn’t do what we meant for the agent to be able to do, to be self-aware.
Right. And it should be pointed out that there are those who look at the mirror test and say, “Well, if you put a spot on an animal’s hand, and just because they kind of wipe their hand…” That it’s really more a test of, “Do they have the mental capability to understand what a mirror does?” And it has nothing to do with…
Right. Exactly. It’s measuring something about the mirror and so forth.
Let’s talk about another thing in your intelligence definition, because I’m fascinated by what you just kind of outlined. You said that some amount of communication, therefore some language, is necessary. So do you think—at least before we get to applying it to machines—that language is a requisite in the animal kingdom for intelligence?
Well, I don’t think it has to be language in the sense of the English language or our human natural language, but there are different ways to communicate. You can communicate through gestures. You can communicate through physical interaction. So it doesn’t necessarily have to be spoken language, but I do think the ability to convey information to another being that can then receive the information that was conveyed is part of what we mean by intelligence. Languages for artificial systems could be very limited and constrained, so I don’t think that we necessarily have to solve the natural language problem in order to develop what we would call intelligent systems. But I think when you talk about strong AI, which is referring to sort of human level intelligence, at that point, I don’t think you can really demonstrate human level intelligence without being able to communicate in some kind of natural language.
So, just to be clear, are you saying language indicates intelligence or language is required for intelligence?
Language is required for intelligence.
There are actually a number of examples in the plant kingdom where the plants are able to communicate signals to other plants. Would you say that qualifies? If you’re familiar with any of those examples, do those qualify as language in a meaningful sense, or is that just like, “Well, you can call it language if you’re trying to do clever thought riddles, but it’s not really a language.”
Yeah, I guess I’d say, as with most interesting things, there’s sort of a spectrum. But one of the characteristics of intelligent language, I think, is the ability to learn the language and to adapt the language to new situations. So, you know, ants communicate with each other by laying down pheromones, but ants can’t develop new ways to communicate with each other. If you put them into a new environment, they’re biologically hardwired to use communication.
There’s an interesting philosophical argument that the species is intelligent, or evolution is intelligent at some level. I think those are interesting philosophical discussions. I don’t know that they’re particularly helpful in understanding intelligence in individual beings.
Well, I definitely want to get to computers here in a minute and apply all of this as best we can, but… By our best guess, humans acquired speech a hundred thousand years ago, roughly the same time we got fire. The theory is that fire allowed us to cook food, which allowed us to break down the proteins in it and make it more digestible, and that that allowed us to increase our caloric consumption, and we went all in on the brain, and that gave us language. Would your statement that language is a requirement for intelligence imply that a hundred and one thousand years ago, we were not intelligent?
I would guess that human beings were communicating with each other a hundred and one thousand years ago and probably two hundred thousand years ago. And again, I think intelligence is a spectrum. I think chimpanzees are intelligent and dolphins are intelligent, at some level. I don’t know about pigs and dogs. I don’t have strong evidence.
Interestingly, of all things, dogs don’t pass the red paint mirror test. They are interestingly the only animal on the whole face of the earth—and by all means, any listener out there who knows otherwise, please email me—that if you point at an object, will look at the object.
Really?
Yeah, even chimpanzees don’t do it. So it’s thought that they co-evolved with us as we domesticated them. That was something we selected for, not overtly but passively, because that’s useful. It’s like, “Go get that thing,” and then the dog looks over there at it.
Right.
It’s funny, there’s an old Far Side cartoon—you can’t get those things out of your head—where the dolphins are in the tank, and they’re writing down all the dolphins’ noises, and they’re saying things like, “Se habla español,” and “Sprechen sie Deutsch,” and the scientists are like, “Yeah, we can’t make any sense of it.”
So let’s get back to language, because I’m really fascinated by this and particularly the cognitive aspects of it. So, what do you think is meaningful, if anything, about the Turing test—which of course you know, but for the benefit of our listeners, is: Alan Turing put this out that if you’re on a computer terminal, and you’re chatting with somebody, typing, and you can’t tell if it’s a person or a machine, then you have to say that machine is intelligent.
Right, and of course, Alan Turing's original version of that test was a little bit different and more gendered, if you're familiar with it.
He based it on the gendered test, right. You’re entirely right. Yes.
There’s a lot of objections to the Turing test. In fact, when I teach the Introductory AI class at UMBC, I have the students read some of Alan Turing’s work and then John Searle’s arguments against the Turing test.
Chinese Room, right?
The Chinese Room and so forth, and I have them talk about all of that. And, again, I think these are, sort of, interesting philosophical discussions that, luckily, we don’t actually need to resolve in order to keep making progress towards intelligence, because I don’t think this is one that will ever be resolved.
Here’s something I think is really interesting: when that test was proposed, and in the early years of AI, the way it was envisioned was based on the communication of the time. Today’s Turing tests are based in an environment in which we communicate very differently—we communicate very differently online than we do in person—than Alan Turing ever imagined we would. And so the kind of chat bots that do well at these Turing tests really probably wouldn’t have looked intelligent to an AI researcher in the 1960s, but I don’t think that most social media posts would have looked very intelligent, either. And so we’ve kind of adapted ourselves to this sort of cryptic, darting, illogical, jumping-around-in-different-topics way of conversing with each other online, where lapses in rationality and continuity are forgiven really easily. And when I see some of the transcripts of modern Turing tests, I think, well, this kind of reminds me a little bit of PARRY. I don’t know if you’re familiar with ELIZA and PARRY.
Weizenbaum’s 1960s Q&A, his kind of psychologist helper, right?
Right. So ELIZA was a pattern-recognition-based automated psychologist that would use this, I guess, Freudian way of interrogating a patient, asking them about their feelings and so forth. And when it was created, people were very taken in by it, because, you know, they would spill out their deepest, darkest secrets to what turned out to be, essentially, one of the earliest chat bots. A counterpart was created later—I can't remember the researcher who created it—that was studying paranoid schizophrenia and the speech patterns of paranoid schizophrenics, and that program was called PARRY.
If you read any transcripts of PARRY, it's very disjointed, and it can get away with not having a deep semantic model, because if it doesn't really understand anything, and it can't match anything, it just changes the topic. And that's what modern Turing tests mostly look like to me. If we were going to really use the Turing test as some measure of intelligence, I think maybe we need to put some rules on critical thinking and rationality. What is it that we're chatting about? And what is the nature of this communication with the agent in the black box? Because, right now, it's just degenerated into gaming the system: let's see if we can trick a human into thinking that we're a person, while taking advantage of the fact that online communication is this kind of dance that's not necessarily logical and rational and rule-following.
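For illustration, here is a minimal Python sketch of the kind of surface pattern matching ELIZA relied on. Everything below—the rules, the canned responses, the topic-changing fallback—is invented for the example, not taken from either program.

```python
import random
import re

# Each rule pairs a regex with canned response templates. There is no
# semantic model at all: if nothing matches, the program simply changes
# the subject, much as PARRY is described as doing above.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you get along with your {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?"]),
]

DEFLECTIONS = [  # topic changes used when no pattern matches
    "Let's talk about something else.",
    "Why are you asking me that?",
    "I would rather not discuss that.",
]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFLECTIONS)

print(respond("I feel anxious about work"))            # matched by a rule
print(respond("What's larger, the sun or a nickel?"))  # no match, so it deflects
```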
I want to come back to that, because I want to go down that path with you, but beforehand, it should be pointed out—and correct me if I'm wrong, because you know this a lot better than I do—that the people who interacted with ELIZA all knew it was a computer and that there was "nobody at home." And that, in the end, is what freaked Weizenbaum out and had him turn against artificial intelligence, because I think he said something to the effect that when the computer says, "I understand," it's a lie. It's a lie because there is no "I," and there's nothing to understand. Was that the same case with PARRY—that they knew full well they were talking to a machine, but they still engaged with it as if it were another person?
Well, that was being used to try to model the behavior of a paranoid schizophrenic, and so my understanding is that they ran some experiments where they had psychologists, in a blind setting, interact with an actual paranoid schizophrenic or this model, and do a Turing test to try to determine whether this was a convincing model of paranoid schizophrenic interaction style. I think it was a scientific experiment that was being run.
So, you used the phrase, when you were talking about PARRY just now, “It doesn’t understand anything.” That’s obviously Searle’s whole question with the Chinese Room, that the non-Chinese speaker who can use these books to answer questions in Chinese doesn’t understand anything. Do you think even today a computer understands anything, and will a computer ever understand anything?
That’s an interesting question. So when we talk about this with my class, with my students, I use the analogy of learning a new language. I don’t know if you speak any foreign languages to any degree of fluency.
I’m still working on English.
Right. So, I speak a little bit of French and a little bit of German and a little bit of Italian, so I'm very conscious of the language learning process. When I was first learning Italian, anything I said in Italian was laboriously translated in my mind by essentially looking up rules. I don't remember any Italian, so I can't use Italian as an example anymore. Say I want to say, "I am twenty years old" in French. In order to do that, I don't just say, "J'ai vingt ans"; I say to myself, "How do I say, 'I am 20 years old'? Oh, I remember, they don't say, 'I am 20 years old.' They say, 'I have 20 years.' OK. 'I have' is 'J'ai,' 'twenty' is 'vingt'…" And I'm doing this kind of pattern-based lookup in my mind. But doing that inside my head, I can communicate a little bit in French. So do I understand French?
Well, the answer to that question would be “no,” but what you understand is that process you just talked about, “OK, I need to deconstruct the sentence. I need to figure out what the subject is. I need to line that up with the verb.” So yes, you have a higher order understanding that allows you to do that. You understand what you’re doing, unquestionably.
Right.
And so the question is, at that meta-meta-meta-meta-meta level, will a computer ever understand what it's doing?
And I think this actually kind of gets back to the question of consciousness. Is understanding—in the sense that Searle wants it to be, or Weizenbaum wanted it to be—tied up in our self-awareness of the processes that we’re carrying out, to reason about things in the world?
So, I only have one more Turing test question to ask, then I would love to change the subject to the state of the art today, then talk about when you think we're going to see certain advances, and then maybe the impact of all this technology on jobs. So, with that looking forward, one last question: when you were talking about rethinking the Turing test, you suggested that we might hold it to a different standard today than Turing did. And by the way, the contests where they really are trying to pass it are highly restricted and constrained, I think. Is that the case?
I am not that familiar with them, although I did read The Most Human Human, which is a very interesting book if you are looking for some light summer reading.
All right.
Are you familiar with the book? It’s by somebody who served as a human in the Loebner Prize Turing test, and sort of his experience of what it’s like to be the human.
No. I don’t know that. That’s funny. So, the interesting thing was that—and anybody who’s heard the show before will know I use this example—I always start everyone with the same question. I always ask the same question to every system, and nobody ever gets it right, even close. And because of that, I know within three seconds that I’m not talking to a human. And the question is: “What’s larger? The sun or a nickel?” And no matter how, I think your phrase was “schizophrenic” or “disjointed” or what have you, the person is, they answer, “The sun” or “Duh” or “Hmm.” But no machine can.  
So, two questions: Is that question indicative of the state of the art, that we really are like in stone knives and bear skins with natural language? And second, do you think that we’re going to make strides forward that maybe someday you’ll have to wonder if I’m actually not a sophisticated artificial intelligence chatting with you or not?
Actually, I guess I’m surprised to hear you say that computers can’t answer that question, because I would think Watson, or a system like that, that has a big backend knowledge base that it’s drawing on would pretty easily be able to find that. I can Google “How big is the sun?” and “How big is a nickel?” and apply a pretty simple rule.
Well, you’re right. In all fairness, there’s not a global chat bot of Watson that I have found. I mean, the trick is nickel is both a metal and a coin, and the sun is a homophone that could be a person’s son. But a person, a human, makes that connection. These are both round and so they kind of look like alike and whatnot. When I say it, what I mean is you go to Cleverbot, or you go to the different chat bots that are entered in the Turing competitions and whatnot. You ask Google, you type that into Google, you don’t get the answer. So, you’re right, there are probably systems that can nail it. I just never bump into them.
And, you know, there’s probably context that you could provide in which the answer to that question would be the nickel. Right? So like I’ve got a drawing that we’ve just been talking about, and it’s got the sun in it, and it has a nickel in it, and the nickel is really big in the picture, and the sun is really small because it’s far away. And I say, “Which is bigger?” There might actually be a context in which the obvious answer isn’t actually the right answer, and I think that kind of trickiness is what makes people, you know, that’s the signal of intelligence, that we can kind of contextualize our reasoning. I think the question as a basic question, it’s such a factual question, that that’s the kind of thing that I think computers are actually really good at. What do you love more: A rose or a daisy? That’s a harder question.
Right.
You know, or what’s your mother’s favorite flower? Now there’s a tricky question.
Right. I have a book coming out on this topic at the end of the year, and I try to think up the hardest question—the last one a machine will be able to answer. I'm sure listeners will have better ideas than I have. But one I came up with was: Dr. Smith is eating at her favorite restaurant when she receives a phone call. She rushes out, neglecting to pay her bill. Is management likely to prosecute? So we need to know: she's probably a medical doctor. She probably got an emergency call. It's her favorite restaurant, so she's probably known there. She dashes out. Are they really going to go to all the effort of prosecuting, rather than just getting her to pay the next time she's in? That is the kind of thing that has so many layers of experience that it would be hard for a machine to do.
Yeah, but I would argue that I think, eventually, we will have intelligent agents that are embedded in the world and interact with people and build up knowledge bases of that kind of common sense knowledge, and could answer that question. Or a similar type of question that was posed based on experience in the world and knowledge of interpersonal interactions. To me, that’s kind of the exciting future of AI. Being able to look up facts really fast, like Watson… Watson was exciting because it won Jeopardy, but let’s face it: looking up a lot of facts and being able to click on a buzzer really fast are not really the things that are the most exciting about the idea of an intelligent, human-like agent. They’re awfully cool, don’t get me wrong.
I think when we talk about commercial potential and replacing jobs, which you mentioned, I think those kinds of abilities to retrieve information really quickly, in a flexible way, that is something that can really lead to systems that are incredibly useful for human beings. Whether they are “strong AI” or not doesn’t matter. The philosophical stuff is fun to talk about, but there’s this other kind of practical, “What are we really going to build and what are we going to do with it?”
Right.
And it doesn’t require answering those questions.
Fair enough. In closing on all of that other part, I heard Ken Jennings speak at South by Southwest about it, and I will preface this by saying he's incredibly gracious. He doesn't say, "Well, it was rigged." He did describe, though, that the buzzer situation was different, because that's the one part that's really hard to map, since the buzzer's the trick on Jeopardy, not the answers.
That’s right.
And that was all changed up a bit.
Ken is clearly the best human at the buzzer. He's super smart, and he knows a ton of stuff—don't get me wrong, I couldn't win on Jeopardy—but I think it's that buzzer that's the difference. And so I think it would be really interesting to have a sort of Jeopardy contest in which the buzzer doesn't matter, right? So, you just buzz in, and there's some reasonable window in which to buzz in, and then it's random who gets to answer the question, or maybe everybody gets to answer the question independently. A Jeopardy-like thing where timed buzzing in isn't part of it; it's really the knowledge that's the key. I suspect Watson would still do pretty well, and Ken would still do pretty well, but I'm not sure who would win in that case. It would depend a lot on the questions, I think.
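A quick sketch of the buzzer-neutral variant being proposed: everyone who buzzes within some window is pooled, and the answerer is drawn at random, so reaction time stops mattering. The window length and contestant names are made up for the example.

```python
import random

WINDOW_SECONDS = 3.0  # assumed "reasonable window" after the clue is read

def pick_answerer(buzz_times):
    """buzz_times maps contestant -> seconds after the clue (None = no buzz).
    Everyone who buzzed inside the window has an equal chance to answer."""
    eligible = [who for who, t in buzz_times.items()
                if t is not None and t <= WINDOW_SECONDS]
    return random.choice(eligible) if eligible else None

print(pick_answerer({"Watson": 0.01, "Ken": 0.35, "Brad": 4.2}))
# Watson's millisecond reflexes no longer help: Watson and Ken are equally likely.
```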
So, you gave us a great segue just a minute ago when you said, “Is all of this talk about consciousness and awareness and self and Turing test and all that—does it matter?” And it sounded like you were saying, whether it does or doesn’t, there is plenty of exciting things that are coming down the pipe. So let’s talk about that. I would love to hear your thoughts on the state of the art. AI’s passed a bunch of milestones, like you said, there was chess, then Jeopardy, then AlphaGo, and then recently poker. What are some things, you think—without going to AGI which we’ll get to in a minute—we should look for? What’s the state of the art, and what are some things you think we’re going to see in a year, or two years, three years, that will dominate the headlines?
I think the most obvious thing is self-driving cars and autonomous vehicles, right? Which we already have out there on the roads doing a great deal. I drive a Volvo that can do lane following and can pretty much drive itself in many conditions. And that is really cool and really exciting. Is it intelligence? Well, no, not by the definitions we’ve just been talking about, but the technology to be able to do all of that very much came out of AI research and research directions.
But I guess there won’t be a watershed with that, like, in the way that one day we woke up and Lee Sedol had lost. I mean, won’t it be that in three years, the number one Justin Bieber song will have been written by an AI or something like that, where it’s like, “Wow, something just happened”?
Yeah, I guess I think it's a little bit more like cell phones. Right? I mean, what was the moment for cell phones? I'm not sure there was one single moment.
Fair enough. That’s right.
It’s more of like a tipping point, and you can look back at it and say, “Oh, there’s this inflection point.” And I don’t know what it was for cell phones. I expect there was an inflection point when either cell phone technology became cheap enough, or cell tower coverage became prevalent enough that it made sense for people to have cell phones and start using them. And when that happened, it did happen very fast. I think it will be the same with self-driving cars.
Cars started coming out with adaptive cruise control very fast. We've had cruise control for a long time, where your car just keeps going at the same speed forever. But adaptive cruise control, where your car detects when there's something in front of it and slows down or speeds up based on the conditions of the road, happened really fast. It just came out, and now lots of cars have it, and people are used to it. GPS technology—I was just driving along the other day, and I thought, "Oh yeah, I've got a map in my car all the time." And anytime I want to, I can say, "Hey, I'd like to go to this place," and it will show me how to get there. We didn't have that, and then within a pretty short span of time, we did, and that's an AI derivative also.
Right. I think those are all incredibly good points. I would say with cell phones—I can remember the RAZR coming out in the mid 2000s, which was smaller, and it was like, "Wow." You didn't know you had it in your pocket. And then, of course, the iPhone was kind of a watershed thing.
Right. A smartphone.
Right. But you’re right, it’s a form of gradualism punctuated by a series of step functions up.
Definitely. Self-driving car technology, in particular, is like that, because it’s really a big ask to expect people to trust self-driving cars on the road. So there’s this process by which that will happen and is already happening, where individual bits of autonomous technology are being incorporated into human-driven cars. And meanwhile, there’s a lot of experimentation with self-driving cars under relatively controlled conditions. And at some point, there will be a tipping point, and I will buy a car, and I will be sitting in my car and it will take me to New York, and I won’t have to be in control.
Of course, one impediment to that is that whole thing where a vast majority of the people believe the statistical impossibility that they are above-average drivers.
That’s right.
I, on the other hand, believe I'm a below-average driver, so I'm going to be the first person to adopt it—I'm a menace on the road. You want me off it as soon as you can, and the technology probably is already good enough for that. I know prognostication is hard, and I guess cars are different, because I can't get a free self-driving car with a two-year contract at $39.95 a month, right? It's a big capital shift. But do you have a sense—because I'm sure you're up on all of this—of when the first fully autonomous car will happen? And then the most interesting thing: when will it be illegal not to drive a fully autonomous car?
I’m not quite sure how it will roll out. It may be that it’s in particular locations or particular regions first, but I think that ordinary people being able to drive a self-driving car; I would say within ten years.
I noticed you slipped that, “I don’t know when it’s going to roll out” pun in there.
Pun not intended. You see, if my AI could recognize that as a pun… Humor is another thing that intelligent agents are not very good at, and I think that’ll be a long time coming.
Right. So you have just confirmed that I’m a human.
So, next question, you’ve mentioned strong AI, also called an artificial general intelligence, that is an intelligence as smart as a human. So, back to your earlier question of does it matter, we’re going to be able to do things like self-driving cars and all this really cool stuff, without answering these philosophical questions; but I think the big question is can we make an AGI? 
Because if you look at what humans are good at doing, we're good at transfer learning, where we learn something in one domain and map it to another one effortlessly. We're really good at taking one data point: you could show a human one data point of something, and then a hundred photos, and no matter how you change the lighting or the angle, a person will go, "There, there, there, and there." So, do you think that an AGI is the sum total of a series of weak AIs bolted together? Or is there some—I'm going to use a loaded word—"magic," and obviously I don't mean magic, but is there some hitherto unknown magic that we're going to need to discover or invent?
I think hitherto unknown magic—using the word "magic" cautiously. I think there are individual technologies that are really exciting and are letting us do a lot of things. So right now, deep learning is the big buzzword, and it is kind of cool. We've taken old neural net technology and updated it with qualitatively different ways of thinking about neural network learning that we couldn't really entertain before, because we didn't have the hardware to do it at the scale, or with the kind of complexity, at which deep learning networks exist now. So, deep learning is exciting. But deep learning, I think, is just fundamentally not suited to do the single-point generalization you're talking about.
Big data is a buzzword, but personally, I've always been more interested in tiny data. Or maybe it's big data in the service of tiny data: I experience lots and lots and lots of things, and by having all of that background knowledge at my disposal, I can do one-shot learning, because I can take that single instance, interpret it, and understand what is relevant about it that I need to use to generalize to the next thing. One-shot learning works because we have vast experience, but that doesn't mean that throwing vast experience at that one thing is, by itself, going to let us generalize from it. I think we still really haven't developed the cognitive reasoning frameworks that will let us take the power of deep learning and big data and apply it in these new contexts in creative ways, using different levels of reasoning and abstraction. But I think that's where we're headed, and I think a lot of people are thinking about that.
So I’m very hopeful that the broad AI community, in its lushest, many-flowers-blooming way of exploring different approaches, is developing a lot of ideas that eventually are going to come together into a big intelligent reasoning framework, that will let us take all of the different kinds of technologies that we’ve built for special purpose algorithms, and put them together—not just bolt it together, but really integrate it into a more coherent, broad framework for AGI.
If you look at the human genome, it's, in computer terms, 720MB, give or take. But a vast amount of that is useless, and a vast amount of it we share with banana trees. If you look at the part that's uniquely human, the part that gives us our unique intelligence, it may be 4MB or 8MB; it's really a small number. Yet in that little program are the instructions to make something that becomes an AGI. So do you take that to mean that there's a secret, a trick—and again, I'm using words that I mean metaphorically—something very simple we're missing? Something you could write in a few lines of code? Maybe a short program that could make something that's an AGI?
Yeah, we had a few hundred million years to evolve that, so the length of something doesn't necessarily mean that it's simple. I don't know enough about genomics to talk really intelligently about this, but I do think that 4MB to 8MB that's uniquely human interacts with everything else—with the rest of the genome, possibly with the parts that we think don't do anything. There were parts of the genome that we thought didn't do anything, but it turns out some of them do. It's the dark matter of the genome. Just because we don't know what it's doing doesn't mean that it's not doing anything.
Well, that’s a really interesting point—the 4MB to 8MB may be highly compressed, to use the computer metaphor, and it may be decompressing to something that’s using all the rest. But let’s even say it takes 720MB, you’re still talking about something that will fit on an old CD-ROM, something smaller than most operating systems today.  
And I one hundred percent hear what you're saying, which is that nature has had hundreds of millions of years to compress that, to make that really tight code. But I guess the larger question I'm trying to ask is: the hope in AI had always been that, just as in the physical universe, there are just a few laws that explain everything. Or is it that, no, we're incredibly complicated, and an AGI is going to be this immense system of a complexity we can't wrap our heads around yet?
Gosh, I don’t know. I feel like I just can’t prognosticate that. I think if and when we have an AGI that we really think is intelligent, it probably will have an awful lot of component. The core that drives all of it may be, relatively speaking, fairly simple. But, if you think about how human intelligence works, we have lots and lots of modules. Right?
There’s this sort of core mechanism by which the brain processes information, that plays out in a lot of different ways, in different parts of the brain. We have the motor cortex, and we have the language cortex, and they’re all specialized. We have these specialized regions and specialized abilities. But they all use a common substrate or mechanism. And so when I think of the ultimate AI, I think of there being some sort of architecture that binds together a lot of different components that are doing different things. And it’s that architecture, that glue, that we haven’t really figured out how to think about yet.
There are cognitive architectures. There are people who work on designing cognitive architectures, and I think those are the precursors of what will ultimately become the architecture for intelligence. But I’m not sure we’re really working on that hard enough, or that we’ve made enough progress on that part of it. And it may be that the way that we get artificial intelligence ultimately is by building a really, really, really big deep learning neural network, which I would find maybe a little bit disappointing, because I feel like if that’s how we get there, we’re not really going to know what’s going on inside of it. Part of what brought me into the field of AI was really an interest in cognitive psychology, and trying to understand how the human brain works. So, maybe we can create another human-like intelligence by just kind of replicating the human brain. But I, personally, just from my own research perspective, wouldn’t find that especially satisfying, because it’s really hard to understand what’s going on in the human brain. And it’s hard to understand what’s going on even in any single deep learning network that can do visual processing or anything like that.
I think that in order for us to really adopt these intelligence systems and embrace them and trust them and be willing to use them, we’ll have to find ways for them to be more explainable and more understandable to human beings. Even if we go about replicating human intelligence in that way, I still think we need to be thinking about understandability and how it really works and how we extract meaning.
That’s really fascinating. So you’re saying if we made this big system that was huge and studied data, it’s kind of just brute force. We don’t have anything elegant about that. It doesn’t tell us anything about ourselves.
Yeah.
So my last theoretical question, and then I’d love to talk about jobs. You said at the very beginning that consciousness may be beyond our grasp, that somehow we’re too close to it, or it may be something we can’t agree on, we can’t measure, we can’t tell in others, and all of that. Is it possible that the same is true of a general intelligence? That in the end, this hope of yours that you said brought you into the field, that it’s going to give us deep insights into ourselves, actually isn’t possible?
Well, I mean, maybe. I don’t know. I think that we’ve already gained a lot of insight into ourselves, and because we’re humans, we’re curious. So if we build intelligent agents without fully understanding how they work or what they do, then maybe we’ll work side by side with them to understand each other. I don’t think we’re ever going to stop asking those questions, whether we get to some level of intelligent agents before then or after then. Questions about the universe are always going to be with us.
Onto the question that most people in their day-to-day lives worry about. They don't worry as much about killer robots as they do about job-killing robots. What do you think will be the effect? So, you know the setup. You know both sides of this. Is artificial intelligence something brand new that replaces people, and it's going to reach this critical velocity where it can learn things faster than us and eventually just surpass us in all fields? Or is it like other disruptive technologies—arguably as disruptive as the mechanization of industry, the harnessing of steam power, of electricity—that came and went and never, ever budged unemployment even one iota, because people learned, almost instantly, how to use these new technologies to increase their own productivity? Which of those two, or a third choice, do you think is most likely?
I’m not a believer in the singularity. I don’t see that happening—that these intelligent agents are going to surpass us and make us completely superfluous, or let us upload our brains into cyberspace or turn us into The Matrix. It could happen. I don’t rule it out, but that’s not what I think is most likely. What I really think is that this is like other technologies. It’s like the invention of the car or the television or the assembly line. If we use it correctly, it enhances human productivity, and it lets us create value at less human cost.
The question is not a scientific question or a technological question. The question is really a political question of how are we, as a society, going to decide to use that extra productivity? And unfortunately, in the past, we’ve often allowed that extra productivity to be channeled into the hands of a very few people, so that we just increased wealth disparity, and the people at the bottom of the economic pile have their jobs taken away. So they’re out of work, but more importantly, the benefit that’s being created by these new technologies isn’t benefiting them. And I think that we can choose to think differently about how we distribute the value that we get out of these new technologies.
The other thing is, I think that as you automate various kinds of activities, the economy transforms itself. And we don't know exactly how that is going to happen, and it would have been hard to predict before any historical technological disruption, right? You invent cars. Well, what happens to all the people who took care of the horses before? Something happened to them. That's a big industry that's gone. When we automate truck driving, it is going to be extremely disruptive, because truck driver is one of the most common jobs in much of our country, at least. So, what happens to the people who were truck drivers? It turns out that you're automating some parts of that job, but not all of it. A truck driver doesn't just sit at the wheel of a truck and drive it down the road. The truck driver also loads and offloads, and interacts with people at either end. So maybe the truck driver job becomes more of a sales job—there are fewer of them, but they're doing different things. Or maybe it's supplanted by different kinds of service roles.
I think we’re becoming more and more of a service economy, and that’s partly because of automation. We always need more productivity. There’s always things that human society wants. And if we get some of those things with less human effort, that should let us create more of other things. I think we could use this productivity and support more art. That would be an amazing, transformational, twenty-first century kind of thing to do. I look at our current politics and our current society, and I’m not sure that enough people are thinking that way, that we can think about how to use these wonderful technologies to benefit everybody. I’m not sure that’s where we’re headed right now.
Let’s look at that. So there’s a wide range of options, and everybody’s going to be familiar with them all. On the one hand, you could say, you know, Facebook and Google made twelve billionaires between them. Why don’t we just take their money and give it to other people? All the way to the other extreme that says, look, all those truck drivers, or their corollaries, in the past, nobody in a top-down, heavy handed way reassigned them to different jobs. What happened was the market did a really good job of allocating technology, creating jobs, and recruiting them. So those would be two incredibly extreme positions. And then there’s this whole road in between where you’d say, well, we need more education. We need to help make it easier for people to become productive again. Where on that spectrum do you land? What do you think? What specific meat would you put on those bones?
I think taxes are not an inherently bad thing. Taxes are how we run our society, and our society is what protects people and enables people to invent things like Google. If we didn’t have taxes, and we didn’t have any government services, it would be extremely difficult for human society to invent things like Google, because to invent things like that requires collaboration, it requires infrastructure; it requires the support of people around you to make that happen. You couldn’t have Google if you didn’t have the Internet. And the Internet exists because the government invested in the Internet, and the government could invest in the Internet because we pay taxes to the government to create collective infrastructure. I think there’s always going to be a tension between how high should taxes be and how much should you tax the wealthy—how regressive, how progressive? Estate taxes; should you be able to build up a dynasty and pass along all of your wealth to your children? I have opinions about some of that, but there’s no right answer. It changes over time. But I do think that the reason that we come together as human beings to create governments and create societies is because we want to have some ability to have a protected place where we can pursue our individual goals. I want to be able to drive to and from my job on roads that are good, and have this interview with you through an Internet connection that’s maintained, and not to have marauding hordes steal my car while I’m in here. You know, we want safety and security and shared infrastructure. And I think the technology that we’re creating should let us do a better job at having that shared infrastructure and basic ability for people to live happy and productive lives.
So I don’t think that just taking money from rich people and giving it to poor people is the right way to do that, but I do think investing in a better society makes a lot of sense. We have horribly decaying infrastructure in much of the country. So, doesn’t it make sense to take some of the capital that’s created by technology advances and use it to improve the infrastructure in the country and improve health care for people?
Right. And of course the countervailing factor is, do all of the above without diminishing people's incentives to work hard and found companies like the ones they created; that's the historical tension. Well, I would like to close with one question for you, which is: are you optimistic about the future, or pessimistic? How would you answer that?
I’m incredibly optimistic. I mean, you know, I’m pessimistic about individual things on individual days, but I think, collectively, we have made incredible strides in technology, and in making people’s quality of life better.
I think we could do a better job. There’s places where people don’t have the education or don’t have the infrastructure or don’t have access to jobs or technology. I think we have real issues with diversity in technology, both in creating technology and in benefiting from technology. I’m very, very concerned about the continuing under-representation of women and minority groups in computing and technology. And the reason for that is partly because I think it’s just socially unjust to not have everybody equally benefiting from good jobs, from the benefits of technology. But it’s also because the technology solutions that we create are influenced by the people who are creating them. When we have a very limited subset of the population creating technology, there’s a lot of evidence that shows that the technology is not as robust, and doesn’t serve as broad a population of users as technology that’s created by diverse teams of engineers. I’d love to see more women coming into computer science. I’d love to see more African Americans and Hispanics coming into computer science. That’s something I work on a lot. It’s something I think matters a lot to our future. But, I think we’re doing the right things in those areas, and people care about these things, and we’re pushing forward.
There’s a lot of really exciting stuff happening in the AI world right now, and it’s a great time to be an AI scientist because people talk about AI. I walk down the street, or I sit at Panera, and I hear people talking about the latest AI solution for this thing or that—it’s become a common term. Sometimes, I think it’s a little overused, because we sort of use it for anything that seems kind of cool, but that’s OK. I think we can use AI for anything that seems pretty cool, and I don’t think that hurts anything.
All right. Well, that’s a great place to end it. I want to thank you so much for covering this incredibly wide range of topics. This was great fun and very informative. Thank you for your time.
Yeah, thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Rob High talks Artificial Intelligence with Gigaom

Rob High
Rob High is an IBM Fellow, Vice President and Chief Technology Officer, IBM Watson. He has overall responsibility to drive Watson technical strategy and thought leadership. As a key member of the Watson Leadership team, Rob works collaboratively with the Watson engineering, research, and development teams across IBM.
Rob High will be speaking on the subject of artificial intelligence at Gigaom Change Leaders Summit in Austin, September 21-23rd. In anticipation of that, I caught up with him to ask a few questions about AI and its potential impact on the business world.
Byron Reese: Do you feel like we are on the path to building an AGI and if so, when do you think we will see it?
Rob High: Cognitive technologies, like Watson, apply reasoning techniques to domain-specific problems in things like Healthcare, Finance, Education, and Legal — anywhere there is an overwhelming amount of information that, if processed, can substantially improve the decisions or outcomes in that domain. For example, the work we’ve done with Oncologists to help them identify the most appropriate treatments for their cancer patients is based on having assessed what makes the patient unique; standard of care practices and clinical expertise that has been used to train the system; and the available clinical literature that can help doctors make better decisions. This helps to democratize that expertise to a wide swath of other doctors who do not have the benefit of having seen the thousands of patients that major cancer centers like Memorial Sloan Kettering or MD Anderson see.
The types of artificial intelligence used in these systems are spectacular in that they are able to draw inferences from literature written in natural language, and can be taught how to interpret the meaning in that language as it applies to bringing the right information, at the right time, to the doctor's fingertips. Unlike artificial general intelligence, our goal is to amplify human cognition — not to do our thinking for us, but to do the necessary research so that we can do our thinking better.
What do you make of all of the angst and concern being talked about in terms of why we should perhaps fear the AGI?

The concept of a machine-dominated world is inspired more by Hollywood and science fiction writers than by technologists and AI researchers. IBM has been firmly committed to responsible science and ethical best practices for over a hundred years – it's embedded in our DNA. Our focus is on applying cognitive computing to amplify human cognitive processes, not on replacing them.
The reality is AI and cognitive technologies will help mankind better understand our world and make more informed decisions. Cognitive computing will always serve to bolster, not replace, human decision-making, working side-by-side with humans to accelerate and improve our ability to act with confidence and authority. The industries where Watson is being applied today – healthcare, law, financial services, oil & gas – exist to benefit people working in those industries.
For example, Watson augments a doctor’s abilities by aggregating and producing the best available information to inform medical decisions and democratizing expertise. But it’s the human doctor who takes the information Watson produces and combines it with their own knowledge of a patient and the complex issues associated with each diagnosis. Ultimately, the doctor makes the recommendation, informed by Watson, and the patient makes the decision – so there will always be a complementary relationship between human and machine.
Do you think computers can or will become conscious?
Today, we are making significant advances in integrating embodied cognition into robotics through Watson, and that remains a primary focus. Our technology currently allows robots to – like humans – show expression, understand the nuances of certain interactions, and respond appropriately. There's still a need to teach robots certain skills: the skill of movement, the skill of seeing, the skill of recognizing the difference between a pot of potatoes that is boiling and a pot of potatoes that is boiling over.
However, we do believe that we’re only in the first few years of a computing era that will last for decades to come. We are currently assessing what’s doable, what’s useful and what will have economic interest in the future.
Great. We’ll leave it there. Thank you for taking the time to talk today.
Rob High will be speaking on the subject of artificial intelligence at Gigaom Change Leaders Summit in Austin, September 21-23rd.

IBM acquires deep learning startup AlchemyAPI

So much for AlchemyAPI CEO Elliot Turner’s statement that his company is not for sale. IBM has bought the Denver-based deep learning startup that delivers a wide variety of text analysis and image recognition capabilities via API.

IBM plans to integrate AlchemyAPI's technology into the core Watson cognitive computing platform. IBM will also use AlchemyAPI's technology to expand its set of Watson cloud APIs that let developers infuse their web and mobile applications with artificial intelligence. Eventually, the AlchemyAPI service will shut down as its capabilities are folded into the IBM Bluemix platform, said IBM Watson Group vice president and CMO Stephen Gold.

Love at first sight? AlchemyAPI CEO Elliot Turner (left) and IBM Watson vice president Stephen Gold (center) at Structure Data 2014.

Compared with Watson’s primary ability to draw connections and learn from analyzing textual data, AlchemyAPI excels at analyzing text for sentiment, category and keywords, and for recognizing objects and faces in images. Gold called the two platforms “a leather shoe fit” in terms of how well they complement each other. Apart from the APIs, he said AlchemyAPI’s expertise in unsupervised and semi-supervised learning systems (that is, little human oversight over model creation) will be a good addition to the IBM team.

We will discuss the burgeoning field of new artificial intelligence applications at our Structure Data conference later this month in New York, as well as at our inaugural Structure Intelligence event in September.

I have written before that cloud computing will be the key to IBM deriving the profits it wants from Watson, as cloud developers are the new growth area for technology vendors. Cloud developers might not result in multi-million-dollar deals, but they represent a huge user base in aggregate and, more importantly, can demonstrate the capabilities of a platform like Watson probably better than IBM itself can. AlchemyAPI already has more than 40,000 developers on its platform.

Other companies delivering some degree of artificial intelligence and deep learning via the cloud, and sometimes via API, include Microsoft, Google, MetaMind, Clarifai and Expect Labs.

AlchemyAPI’s facial recognition API can distinguish between Will Ferrell and Red Hot Chili Peppers drummer Chad Smith.

AlchemyAPI’s Turner said his company decided to join IBM, after spurning numerous acquisition offers and stating it wasn’t for sale, in part because it represents an opportunity to “throw rocket fuel on” the company’s long-term goals. Had the plan been to buy AlchemyAPI, kill its service and fold the team into research roles — like what happens with so many other acquisitions of deep learning talent — it probably would not have happened.

Gold added that IBM is not only keeping the AlchemyAPI services alive (albeit as part of the Bluemix platform) but also plans to use the company’s Denver headquarters as the starting point of an AI and deep learning hub in the city.

Video: https://www.youtube.com/embed/iHVeoJBtoIM?list=PLZdSb0bQCA7mpVy–2jQBxfjcbNrp9Fu4

Update: This post was updated at 9:10 a.m. to include quotes and information from Elliot Turner and Stephen Gold.

Now IBM is teaching Watson Japanese

IBM has struck a deal with SoftBank Telecom Corporation to bring the IBM Watson artificial intelligence (or, as IBM calls it, cognitive computing) system to Japan. The deal was announced on Tuesday.

Watson has already been trained in Japanese, so now it's a matter of getting its capabilities into production via specialized systems, apps or even robots running Watson APIs. As in the United States, early focus areas include education, banking, health care, insurance and retail.

IBM has had a somewhat difficult time selling Watson, so maybe the Japanese market will help the company figure out why. It could be that the technology doesn't work as well or as easily as advertised, or it could just be that American companies, developers and consumers aren't ready to embrace so many natural-language-powered applications.

The deal with SoftBank isn’t the first time IBM has worked to teach a computer Japanese. The company is also part of a project with several Japanese companies and agencies, called the Todai Robot, to build a system that runs on a laptop and can pass the University of Tokyo entrance exam.

We’ll be talking a lot about artificial intelligence and machine that can learn at our Structure Data conference in March, with speakers from Facebook, Spotify, Yahoo and other companies. In September, we’re hosting Gigaom’s inaugural Structure Intelligence conference, which will be all about AI.

New from Watson: Financial advice and a hardcover cookbook

IBM has recruited a couple of new partners in its quest to mainstream its Watson cognitive computing system: financial investment specialist Vantage Software and the Institute of Culinary Education, or ICE. While the former is exactly the kind of use case one might expect from Watson, the latter seems like a pretty savvy marketing move.

What Vantage is doing with Watson, through a new software program called Coalesce, is about the same thing IBM has been touting for years around the health care and legal professions. Only, replace health care and legal with financial services, and doctors and lawyers with financial advisers and investment managers. Coalesce will rely on Watson to analyze large amounts of literature and market data, which will complement experts' own research and possibly provide them with information or trends they otherwise might have missed.

The partnership with the culinary institute, though — on a hardcover cookbook — is much more interesting. It’s actually a tangible manifestation of work that IBM and ICE have been doing together for a few years. At last year’s South By Southwest event, in fact, Gigaom’s Stacey Higginbotham ate a meal from an IBM food truck with ingredients suggested by Watson and prepared by ICE chefs.

The IBM food truck.

But even if the cookbook doesn’t sell (although I will buy one when it’s released in April and promise to review at least a few recipes), it’s a good way to try and convince the world that Watson has promise beyond just fighting cancer. IBM is banking on cognitive computing (aka artificial intelligence) to become a multi-billion-dollar business, so it’s going to need more than a handful of high-profile users. It has already started down this path with its Watson cloud ecosystem and APIs, where partners have built applications for things including retail recommendations, travel and cybersecurity.

Watson isn’t IBM’s only investment in artificial intelligence, either. Our Structure Data conference in March will feature Dharmendra Modha, the IBM researcher who led development of the company’s SyNAPSE chip that’s modeled on the brain and designed to learn like a neural network while consuming just a fraction of the power normal microchips do.

However, although we’re on the cusp of an era of smart applications and smart devices, we’re also in an era of on-demand cloud computing and a user base that cut its teeth on Google’s product design. The competition over the next few years — and there will be lots of it — won’t just be about who has most-accurate text analysis or computer vision models, or who executes the best publicity stunts.

All the cookbooks and research projects in the world will amount to a lot of wasted time if IBM can’t deliver with artificial intelligence products and services that people actually want to use.

The 5 stories that defined the big data market in 2014

There is no other way to put it: 2014 was a huge year for the big data market. It seems years of talk about what’s possible are finally giving way to some real action on the technology front — and there’s a wave of cash following close behind it.

Here are the five stories from the past year that were meaningful in their own rights, but really set the stage for bigger things to come. We’ll discuss many of these topics in depth at our Structure Data conference in March, but until then feel free to let me know in the comments what I missed, where I went wrong or why I’m right.

5. Satya Nadella takes the reins at Microsoft

Microsoft CEO Satya Nadella has long understood the importance of data to the company’s long-term survival, and his ascendance to the top spot ensures Microsoft won’t lose sight of that. Since Nadella was appointed CEO in February, we’ve already seen Microsoft embrace the internet of things, and roll out new data-centric products such as Cortana, Skype Translate and Azure Machine Learning. Microsoft has been a major player in nearly every facet of IT for decades and how it executes in today’s data-driven world might dictate how long it remains in the game.

Microsoft CEO Satya Nadella speaks at a Microsoft Cloud event. Photo by Jonathan Vanian/Gigaom

4. Apache Spark goes legit

It was inevitable that the Spark data-processing framework would become a top-level project within the Apache Software Foundation, but the formal designation felt like an official passing-of-the-torch nonetheless. Spark promises to do for the Hadoop ecosystem all the things MapReduce never could around speed and usability, so it’s no wonder Hadoop vendors, open source projects and even some forward-thinking startups are all betting big on the technology. Databricks, the first startup trying to commercialize Spark, has benefited from this momentum, as well.

Spark co-creator and Databricks CEO Ion Stoica.

3. IBM bets its future on Watson

Big Blue might have abandoned its server and microprocessor businesses, but IBM is doubling down on cognitive computing and expects its new Watson division to grow into a $10 billion business. The company hasn’t wasted any time trying to get the technology into users’ hands — it has since announced numerous research and commercial collaborations, highlighted applications built atop Watson and even worked Watson tech into the IBM cloud platform and a user-friendly analytics service. IBM’s experiences with Watson won’t only affect its bottom line; they could be a strong indicator of how enterprises will ultimately use artificial intelligence software.

A shot of IBM’s new Watson division headquarters in Manhattan.

2. Google buys DeepMind

It’s hard to find a more exciting technology field than artificial intelligence right now, and deep learning is the force behind a lot of that excitement. Although there were a myriad of acquisitions, startup launches and research breakthroughs in 2014, it was Google’s acquisition of London-based startup DeepMind in January that set the tone for the year. The price tag, rumored to be anywhere from $450 million to $628 million, got the mainstream technology media paying attention, and it also let deep learning believers (including those at competing companies) know just how important deep learning is to Google.

Google’s Jeff Dean talks about early deep learning results at Structure 2013.

1. Hortonworks goes public

Cloudera’s massive (and somewhat convoluted) deal with Intel boosted the company’s valuation past $4 billion and sent industry-watchers atwitter, but the Hortonworks IPO in December was really a game-changer. It came faster than most people expected, was more successful than many people expected, and should put the pressure on rivals Cloudera and MapR to act in 2015. With a billion-plus-dollar market cap and public market trust, Hortonworks can afford to scale its business and technology — and maybe even steal some valuable mindshare — as the three companies vie to own what could be a humongous software market in a few years’ time.

Hortonworks rings the opening bell on its IPO day.

IBM bringing its skin-cancer computer vision system to hospitals

IBM says it has developed a machine learning system that identified images of skin cancer with better than 95 percent accuracy in experiments, and it's now teaming up with doctors to see how it can help them do the same. On Wednesday, the company announced a partnership with Memorial Sloan Kettering — one of IBM's early partners on its Watson system — to research how the computer vision technology might be applied in medical settings.

According to one study cited by IBM, diagnostic accuracy for skin cancer today is estimated at between 75 percent and 84 percent, even with computer assistance. If IBM's research results hold up in the real world, they would constitute a significant improvement.

As noted above, the skin cancer research is not IBM's first foray into applying machine learning and artificial intelligence techniques — which it prefers to call cognitive computing — in a health care setting. In fact, the company announced earlier this week a partnership with the Department of Veterans Affairs to investigate the utility of the IBM Watson system for analyzing medical records.

And IBM is certainly not the first institution to think about how advances in computer vision could be used to diagnose disease. Two startups — Enlitic and Butterfly Network — recently launched with the goal of improving diagnostics using deep learning algorithms, and the application of machine learning to medical imagery has been, and continues to be, the subject of numerous academic studies.

We will be discussing the state of the art in machine learning, and computer vision specifically, at our Structure Data conference in March with speakers from IBM, Facebook, Yahoo, Stanford and Qualcomm, among others.