Voices in AI – Episode 74: A Conversation with Dr. Kai-Fu Lee


About this Episode

Episode 74 of Voices in AI features host Byron Reese and Dr. Kai-Fu Lee discussing the potential of AI to disrupt job markets, the comparison of AI research and implementation in the U.S. and China, as well as other facets of Dr. Lee’s book “AI Superpowers”. Dr. Kai-Fu Lee, previously president of Google China, is now the CEO of Sinovation Ventures.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today I am so excited my guest is Dr. Kai-Fu Lee. He is, of course, an AI expert. He is the CEO of Sinovation Ventures. He is the former President of Google China. And he is the author of a fantastic new book called “AI Superpowers.” Welcome to the show, Dr. Lee.
Kai-Fu Lee: Thank you Byron.
I’d like to begin by saying that AI is one of those things that can mean so many things. So, for the purpose of this conversation, what are we talking about when we talk about AI?
We’re talking about the advances in machine learning, in particular deep learning and related technologies, as they apply to artificial narrow intelligence, with a lot of opportunities for implementation, application and value extraction. We’re not talking about artificial general intelligence, which I think is still a long way out.
So, confining ourselves to narrow intelligence, if someone were to ask you worldwide, not even getting into all the political issues, what is the state of the art right now? How would you describe where we are as a planet with narrow artificial intelligence?
I think we’re at the point of readiness for application. I think the greatest opportunity is application of what’s already known. If we look around us, we see very few of the companies, enterprises and industries using AI when they all really should be. Internet companies use AI a lot, but it’s really just beginning to enter financial, manufacturing, retail, hospitals, healthcare, schools, education and so on. It should impact everything, and it has not.
So, I think what’s been invented and how it gets applied/implemented/monetized… value creation, that is a very clear 100% certain opportunity we should embrace. Now, there can be more innovations, inventions, breakthroughs… but even without those I think we’ve got so much on our hands that’s not yet been fully valued and implemented into industry.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 71: A Conversation with Paul Daugherty


About this Episode

Episode 71 of Voices in AI features host Byron Reese and Paul Daugherty discussing transfer learning, consciousness, and Paul’s book “Human + Machine: Reimagining Work in the Age of AI.” Paul Daugherty holds a degree in computer engineering from the University of Michigan and is currently the Chief Technology and Innovation Officer at Accenture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show Paul.

Paul Daugherty: It’s great to be here, Byron.

Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college, and that was a quarter of a century or more ago. Having seen the company grow, what has that journey been like?

Thanks for dating me. Yeah, it’s actually been 32 years, so I guess I’m going on a third of a century. I joined Accenture back in 1986, and the company’s evolved in many ways since then. It’s been an amazing journey because the world has changed so much since then, and a lot of what’s fueled the change in the world around us has been what’s happened with technology. I think [in] 1986 the PC was brand new, and we went from that to networking and client server and the Internet, cloud computing, mobility, the Internet of Things, artificial intelligence and the things we’re working on today. So it’s been a really amazing journey fueled by the way the world’s changed, enabled by all this amazing technology.

So let’s talk about that, specifically artificial intelligence. I always like to get our bearings by asking you to define either artificial intelligence or if you’re really feeling bold, define intelligence.

I’ll start with artificial intelligence, which we define as technology that can sense, think, act and learn; that’s the way we describe it. And [it’s] systems that can then do that. So sense: like vision in a self-driving car; think: making decisions on what the car does next; act: actually steering the car; and learn: continuously improving behavior. So that’s the working definition that we use for artificial intelligence, and I describe it more simply to people sometimes as fundamentally technology that has more human-like capability to approximate the things that we’re used to assuming and thinking only humans can do: speech, vision, predictive capability and some things like that.

So that’s the way I define artificial intelligence. Intelligence I would define differently, and more broadly. I’m not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to both reason and comprehend and then extrapolate and generalize across many different domains of knowledge. And that’s what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into. Because I think the fact that we call this body of work artificial intelligence, both the word artificial and the word intelligence lead to misleading perceptions of what we’re really doing.

So, expand that a little bit. You said that’s the way you think human intelligence is different from artificial intelligence. Put a little flesh on those bones: in exactly what way do you think it is?

Well, you know the techniques we’re really using today for artificial intelligence are generally from the branch of AI around machine learning, so machine learning, deep learning, neural nets, etc. And it’s a technology that’s very good at using patterns and recognizing patterns in data to learn from observed behavior, so to speak. Not necessarily intelligence in a broad sense; it’s the ability to learn from specific inputs. And you can think about that almost as idiot savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world’s Go master, but then that same program wouldn’t know how to generalize and play me in tic-tac-toe. And that ability, the intelligence ability to generalize, to extrapolate rather than interpolate, is what differentiates human intelligence, and the thing that would bridge that would be artificial general intelligence, which we can get into a little bit. But we’re not at that point of having artificial general intelligence. We’re at a point of artificial intelligence, where it can mimic very specific, very specialized, very narrow human capabilities, but it’s not yet anywhere close to human-level intelligence.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 70: A Conversation with Jakob Uszkoreit


About this Episode

Episode 70 of Voices in AI features host Byron Reese and Jakob Uszkoreit discussing machine learning, deep learning, AGI, and what this could mean for the future of humanity. Jakob has a master’s degree in Computer Science and Mathematics from Technische Universität Berlin. He has worked at Google for the past 10 years and is currently doing deep learning research with Google Brain.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Jakob Uszkoreit, he is a researcher at Google Brain, and that’s kind of all you have to say at this point. Welcome to the show, Jakob.
Let’s start with my standard question which is: What is artificial intelligence, and what is intelligence, if you want to start there, and why is it artificial?
Jakob Uszkoreit: Hi, thanks for having me. Let’s start with artificial intelligence specifically. I don’t think I’m necessarily the best person to answer the question of what intelligence is in general, but I think for artificial intelligence, there are possibly two different kinds of ideas that we might be referring to with that phrase.
One is kind of the scientific or the group of directions of scientific research, including things like machine learning, but also other related disciplines that people commonly refer to with the term ‘artificial intelligence.’ But I think there’s this other maybe more important use of the phrase that has become much more common in this age of the rise of AI if you want to call it that, and that is what society interprets that term to mean. I think largely what society might think when they hear the term artificial intelligence, is actually automation, in a very general way, and maybe more specifically, automation where the process of automating [something] requires the machine or the machines doing so to make decisions that are highly dynamic in response to their environment and in our ideas or in our conceptualization of those processes, require something like human intelligence.
So, I really think it’s actually something that doesn’t necessarily, in the eyes of the public, have that much to do with intelligence, per se. It’s more the idea of automating things that at least so far, only humans could do, and the hypothesized reason for that is that only humans possess this ephemeral thing of intelligence.
Do you think it’s a problem that you could say a cat food dish that refills itself when it’s empty has a rudimentary AI, and you can say Westworld is populated with AIs, and those things are so vastly different, and they’re not even really on a continuum, are they? A general intelligence isn’t just a better narrow intelligence, or is it?
So I think that’s a very interesting question: whether basically improving and slowly generalizing or expanding the capabilities of narrow intelligences will eventually get us there. And if I had to venture a guess, I would say that’s quite likely, actually. That said, I’m definitely not the right person to answer that. I do think that guesses at that aspect of things are today still in the realm of philosophy and extremely hypothetical.
But the one trick that we have gotten good at recently that’s given us things like AlphaZero, is machine learning, right? And it is itself a very narrow thing. It basically has one core assumption, which is the future is like the past. And for many things it is: what a dog looks like in the future, is what a dog looked like yesterday. But, one has to ask the question, “How much of life is actually like that?” Do you have an opinion on that?
Yeah, so I think that machine learning is actually evolving rapidly from the initial classic idea of basically trying to predict the future just from the past, and not just the past, but a kind of encapsulated version of the past: basically a snapshot captured in a fixed, static data set. You expose a machine to that, you allow it to learn from that, train on that, whatever you want to call it, and then you evaluate how the resulting model or machine or network does in the wild, or on some evaluation tasks and tests that you’ve prepared for it.
It’s evolving from that classic definition towards something that is quite a bit more dynamic, that is starting to incorporate learning in situ, learning kind of “on the job,” learning from very different kinds of supervision, where some of it might be encapsulated by data sets, but some might be given to the machine through somewhat more high-level interactions, maybe even through language. There is at least a bunch of lines of research attempting that. Also quite importantly, we’re starting slowly but surely to employ machine learning in ways where the machine’s actions actually have an impact on the world, from which the machine then keeps learning. I think all of these parts are necessary ingredients if we ever want to have narrow intelligences that maybe have a chance of getting more general. Maybe then, in the more distant future, they might even be bolted together into a somewhat more general artificial intelligence.
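To make the contrast Jakob is drawing more concrete, here is a minimal, purely illustrative Python sketch (not from the interview; the data and numbers are invented) of the difference between fitting a model once to a fixed snapshot of the past and updating it incrementally as new observations keep arriving:

```python
# Illustrative only: "learn once from a static snapshot" vs. "keep learning
# from a stream of new observations." All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Classic setup: a dataset captured once, fit once, then frozen.
X_snapshot = rng.normal(size=(1000, 3))
y_snapshot = X_snapshot @ true_w + rng.normal(scale=0.1, size=1000)
w_static = np.linalg.lstsq(X_snapshot, y_snapshot, rcond=None)[0]

# Online setup: the model keeps updating as fresh observations arrive
# "in the wild" (e.g. because the system is deployed and still learning).
w_online = np.zeros(3)
lr = 0.01
for step in range(10_000):
    x = rng.normal(size=3)
    y = x @ true_w + rng.normal(scale=0.1)
    w_online -= lr * (w_online @ x - y) * x   # one incremental gradient step

print(w_static, w_online)   # both end up near the true weights
```

The second loop is only the simplest stand-in for learning “on the job”; real in-situ learning would also have to cope with the machine’s own actions changing the data it sees next, which is exactly the harder setting Jakob describes.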
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 66: A Conversation with Steve Ritter


About this Episode

Episode 66 of Voices in AI features host Byron Reese and Steve Ritter discussing the future of AGI and how AI will affect jobs, security, warfare, and privacy. Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show Steve.
Steve Ritter: Thanks a lot Byron, thanks for having me.
So tell me, what were you thinking way back in the ’80s when you said, “I’m going to study computers and brains”? What was going on in your teenage brain?
That’s a great question. So first off, I started with a Computer Science degree, and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP, etc. At the same time, the University of California, San Diego was opening up its very first department dedicated to cognitive science. So I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science to it as well, simply because I was just really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area who were applying it in many different ways, and I just found it fascinating, so I had to do it.
So, there’s human intelligence, or organic intelligence, or whatever you want to call it, there’s what we have, and then there’s artificial intelligence. In what ways are those things alike and in what ways are they not?
That’s a great question. I think it’s actually something that trips a lot of people up today when they hear about AI, and we might use the term artificial basic intelligence, or general intelligence, as opposed to artificial intelligence. So a big difference is, on one hand we’re studying the brain and trying to understand how the brain is organized to solve problems, and from that deriving architectures that we might use to solve other problems. It’s not necessarily the case that we’re trying to create a general intelligence or a consciousness; we’re just trying to learn new ways to solve problems. So I really like the concept of neurally inspired architectures and that sort of thing. And that’s really the area I’ve been focused on over the past 25 years: how we can apply these learning architectures to solve important business problems.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 52: A Conversation with Rao Kambhampati


About this Episode

Sponsored by Dell and Intel, Episode 52 of Voices in AI features host Byron Reese and Rao Kambhampati discussing creativity, military AI, jobs and more. Subbarao Kambhampati is a professor at ASU with teaching and research interests in Artificial Intelligence, and he serves as the president of AAAI, the Association for the Advancement of Artificial Intelligence.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Rao Kambhampati. He has spent the last quarter-century at Arizona State University, where he researches AI. In fact, he’s been involved in artificial intelligence research for thirty years. He’s also the President of the AAAI, the Association for the Advancement of Artificial Intelligence. He holds a Ph.D. in computer science from the University of Maryland, College Park. Welcome to the show, Rao.
Rao Kambhampati: Thank you, thank you for having me.
I always like to start with the same basic question, which is, what is artificial intelligence? And so far, no two people have given me the same answer. So you’ve been in this for a long time, so what is artificial intelligence?
Well, I guess the textbook definition is, artificial intelligence is the quest to make machines show behavior that, when shown by humans, would be considered a sign of intelligence. So intelligent behavior, of course, right away begs the question: what is intelligence? And you know, one of the reasons we don’t agree on the definitions of AI is partly because we all have very different notions of what intelligence is. This much is for sure: intelligence is quite multi-faceted. You know, we have perceptual intelligence—the ability to see the world, the ability to manipulate the world physically—and then we have social, emotional intelligence, and of course you have cognitive intelligence. And pretty much any of these aspects of intelligent behavior, when a computer can show those, we would consider that it is showing artificial intelligence. So that’s basically the practical definition I use.
But to say, “while there are different kinds of intelligences, therefore, you can’t define it,” is akin to saying there are different kinds of cars, therefore, we can’t define what a car is. I mean that’s very unsatisfying. I mean, isn’t there, this word ‘intelligent’ has to mean something?
I guess there are very formal definitions. For example, you can essentially consider an artificial agent working in some sort of environment, and the real question is, how does it improve the long-term reward that it gets from the environment while it’s behaving in that environment? And whatever it does to increase its long-term reward is seen, essentially, as intelligent; the more reward it’s able to get in the environment, the more intelligent it is. I think that is the sort of definition that we use in introductory AI courses, and we talk about these notions of rational agency, and how rational agents try to optimize their long-term reward. But that sort of gets into more technical definitions. So when I talk to people, especially outside of computer science, I appeal to their intuitions of what intelligence is, and to the extent we have disagreements there, that sort of seeps into the definitions of AI.
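For readers who want the formal version Rao is gesturing at, here is a small illustrative Python sketch (the policies and numbers are invented for the example) of scoring behavior by discounted long-term reward, the quantity a rational agent is said to optimize:

```python
# Toy illustration of "long-term reward" for a rational agent.
def discounted_return(rewards, gamma=0.95):
    """Sum of rewards, with later rewards weighted less by the discount gamma."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Two hypothetical behaviors in the same environment:
steady_policy_rewards = [1, 1, 1, 1, 1]   # small, reliable payoffs
greedy_policy_rewards = [5, 0, 0, 0, 0]   # one big payoff, then nothing

print(discounted_return(steady_policy_rewards))  # ~4.52
print(discounted_return(greedy_policy_rewards))  # 5.0
```

Under this framing, the “more intelligent” agent is simply the one whose behavior earns the larger long-term (discounted) reward in its environment.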
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 50: A Conversation with Steve Pratt

In this episode, Byron and Steve discuss the present and future impact of AI on businesses.
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today, our guest is Steve Pratt. He is the Chief Executive Officer over at Noodle AI, the enterprise artificial intelligence company. Prior to Noodle, he was responsible for all Watson implementations worldwide, for IBM Global Business Services. He was also the founder and CEO of Infosys Consulting, a Senior Partner at Deloitte Consulting, and a Technology and Strategy Consultant at Booz Allen Hamilton. Consulting Magazine has twice selected him as one of the top 25 consultants in the world. He has a Bachelor’s and a Master’s in Electrical Engineering from Northwestern University and George Washington University. Welcome to the show, Steve.
Steve Pratt: Thank you. Great to be here, Byron.
Let’s start with the basics. What is artificial intelligence, and why is it artificial?
Artificial intelligence is basically any form of learning algorithm; that’s the way we think of things. We actually think there’s a raging religious debate [about] the differences between artificial intelligence and machine learning, and data science, and cognitive computing, and all of that. But we like to get down to basics and say that they are algorithms that learn from data, improve over time, and are probabilistic in nature. Basically, it’s anything that learns from data and improves over time.
So, kind of by definition, the way that you’re thinking of it is it models the future, solely based on the past. Correct?
Yes. Generally, it models the future and sometimes makes recommendations, or it will sometimes just explain things more clearly. It typically uses four categories of data. There is both internal data and external data, and both structured and unstructured data. So, you can think of it kind of as a quadrant. We think the best AI algorithms incorporate all four datasets, because especially in the enterprise, where we’re focused, most of the business value is in the structured data. But usually unstructured data can add a lot of predictive capabilities, and a lot of signal, to come up with better predictions and recommendations.
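As a purely hypothetical illustration of that quadrant (the column names and values below are made up), combining the four kinds of data into one feature set might look like this:

```python
# Illustrative only: internal/external x structured/unstructured data
# joined into a single feature table for a learning algorithm.
import pandas as pd

internal_structured = pd.DataFrame({"order_id": [1, 2], "units_sold": [120, 95]})
internal_unstructured = pd.DataFrame({"order_id": [1, 2],
                                      "support_ticket_text": ["late delivery", "great service"]})
external_structured = pd.DataFrame({"order_id": [1, 2], "regional_gdp_growth": [0.021, 0.018]})
external_unstructured = pd.DataFrame({"order_id": [1, 2],
                                      "news_headline": ["port strike looms", "holiday demand surges"]})

features = (internal_structured
            .merge(internal_unstructured, on="order_id")
            .merge(external_structured, on="order_id")
            .merge(external_unstructured, on="order_id"))
print(features)
```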
How about the unstructured stuff? Talk about that for a minute. How close do you think we are? When do you think we’ll have real, true unstructured learning, that you can kind of just point at something and say, “I’m going to Barbados. You figure it all out, computer.”
I think we have versions of that right now. I am an anti-fan of things like chatbots. I think that chatbots are very, very difficult to do, technically. They don’t work very well. They’re generally very expensive to build. Humans just love to mess around with chatbots. I would say, scoring on business value against what’s affordable and easy to do, chatbots are in the worst quadrant there.
I think there is a vast array of other things that actually add business value to companies, but if you want to build an intelligent agent using natural language processing, you can do some very basic things. But I wouldn’t start there.
Let me try my question slightly differently, then. Right now, the way we use machine learning is we say, “We have this problem that we want to solve. How do you do X?” And we have this data that we believe we can tease the answer out of. We ask the machine to analyze the data and figure out how to do that. It seems the inherent limit of that, though, is that it’s kind of all sequential in nature. There’s no element of transfer learning in that, where I grow exponentially what I’m able to do. I can just do: “Yes. Another thing. Yes. Another. Yes. Another.” So, do you think this strict definition of machine learning, as you’re thinking of AI that way, is a path to a general intelligence? Or is general intelligence like, “No, that’s something way different than what we’re trying to do. We’re just trying to drive a car without hitting somebody”?
General intelligence, I think, is way off in the future. I think we’re going to have to come up with some tremendous breakthroughs to get there. I think you can duct-tape together a lot of narrow intelligence and sort of approximate general intelligence, but there are some fundamental skills that computers just can’t do right now. For instance, if I give a human the question, “Will the guinea pig population in Peru be relevant to predicting demand for tires in the U.S.?” a human would say, “No, that’s silly. Of course not.” A computer would not know that. A computer would actually have to go through all of the calculations, and we don’t have an answer to that question yet. So, I think generalized intelligence is a ways off, but I think there are some tremendously exciting things happening right now, in narrow intelligence, that are making the world a better place.
Absolutely. I do want to spend the bulk of our time there, in that world. But just to explore what you were saying, because there’s a lot of stuff to mine in what you just said. That example you gave about the guinea pigs is sort of a common-sense problem, right? That’s how it’s usually referred to: “Am I heavier than the Statue of Liberty?” How do you think humans are so good at that stuff? How is it that if I said, “Hey, what would an Oscar statue look like, smeared with peanut butter?” you can conjure that up, even though you’ve never even thought of that before, or seen it covered, or seen anything covered with peanut butter? Why are we so good at that kind of stuff, and machines seem amazingly ill-equipped at it?
I think humans have constant access to an incredibly diverse array of datasets. Through time, they have figured out patterns from all of those diverse datasets. So, we are constantly absorbing new datasets. In machines, it’s a very deliberate and narrow process right now. When you’re growing up, you’re just seeing all kinds of things. And as we go through our life, we develop these – you could think of them as regressions and classifications in our brains, for those vast arrays of datasets.
As of right now, machine learning and AI are given very specific datasets, crunch the data, and then make a conclusion. So, it’s somewhere in there. We’re not exactly sure, yet.
All right, last question on general intelligence, and we’ll come back to the here and now. When I ask people about it, the range of answers I get is 5 to 500 years. I won’t pin you down to a time, but it sounds like you’re saying, “Yeah, it’s way off.” Yet, people who say that usually add, “We don’t know how to do it, and it’s going to be a long time before we get it.”
But there’s always the implicit confidence that we can do it, that it is a possible thing. We don’t know how to do it. We don’t know how we’re intelligent. We don’t know the mechanism by which we are conscious, or the mechanism by which we have a mind, or how the brain fundamentally functions, and all of that. But we have a basic belief that it’s all mechanistic, so we’re going to eventually be able to build it. Do you believe that, or is it possible that a general intelligence is impossible?
No. I don’t think it’s impossible, but we just don’t know how to do it yet. I think with transfer learning, there’s a clue in there somewhere. I think you’re going to need a lot more memory, and a lot more processing power, to have a lot more datasets in general intelligence. But I think it’s way off. I think there will be stage gates, and there will be clues of when it’s starting to happen. That’s when you can take an algorithm that’s trained for one thing, and have it... if you can take AlphaGo, and then the next day it’s pretty good at chess, and the next day it’s really good at Parcheesi, and the next day it’s really good at solving mazes, then we’re on the track. But that’s a long way off.
Let’s talk about this narrow AI world. Let’s specifically talk about the enterprise. Somebody listening today is at, let’s say a company of 200 people, and they do something. They make something, they ship it, they have an accounting department, and all of that. Should they be thinking about artificial intelligence now? And if so, how? How should they think about applying it to their business?
A company that small, it’s actually really tough, because artificial intelligence really comes into play when it’s beyond the complexity that a human can fit in their mind.
Okay. Let’s up it to 20,000 people.
20,000? Okay, perfect. 20,000 people – there are many, many places in the organization where they absolutely should be using learning algorithms to improve their decision-making. Specifically, we have 5 applications that focus on the supply side of the company; that’s in: materials, production, distribution, logistics and inventory.
And then, on the demand side, we have 5 areas also: customer, product, price, promotion and sales force. All of those things are incredibly complex, and they are highly interactive. Within each application area, we basically have applications that almost treat it like a game, although it’s much more complicated than a game, even though games like Go are very complex.
Each of our applications does, really, 4 things: it senses, it proposes, it predicts, and then it scores. So, basically it senses the current environment, it proposes a set of actions that you could take, it predicts the outcome of each of those actions – like the moves on a chessboard – and then it scores it. It says, “Did it improve?” There are two levels of that, two levels of sophistication. One is, “Did it improve locally? Did it improve your production environment, or your logistics environment, or your materials environment?” And then there is one that is more complex, which says, “If you look at that across the enterprise, did it improve across the enterprise?” These are very, very complex mathematical challenges. The difference is dramatic from the way decisions are made today, which is basically people getting in meetings with imperfect data on spreadsheets and PowerPoint slides, and having arguments.
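Here is a toy sketch of that sense, propose, predict, score loop. It is only an illustration of the pattern Pratt describes, not Noodle AI’s actual software, and every name and number in it is invented:

```python
# Toy version of a sense -> propose -> predict -> score decision loop.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    name: str
    params: Dict[str, float]

def decide(sense: Callable[[], Dict[str, float]],
           propose: Callable[[Dict[str, float]], List[Action]],
           predict: Callable[[Dict[str, float], Action], Dict[str, float]],
           score: Callable[[Dict[str, float]], float]) -> Action:
    state = sense()                               # 1. sense the current environment
    candidates = propose(state)                   # 2. propose possible actions
    # 3. predict each action's outcome, 4. score it, and keep the best one
    return max(candidates, key=lambda a: score(predict(state, a)))

# Made-up usage with three pricing actions:
best = decide(
    sense=lambda: {"demand": 0.7, "fleet_utilization": 0.5},
    propose=lambda s: [Action("raise_price", {"delta": +0.05}),
                       Action("hold_price", {"delta": 0.0}),
                       Action("cut_price", {"delta": -0.05})],
    predict=lambda s, a: {"margin": s["demand"] * (1 + a.params["delta"]) - 0.4},
    score=lambda outcome: outcome["margin"],
)
print(best.name)
```

The enterprise-level scoring Pratt mentions would presumably swap the local `score` function for one that evaluates the predicted outcome across business units rather than within a single one.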
So, pick a department, and just walk me through a hypothetical or real use case where you have seen the technology applied, and have measurable results.
Sure. I can take the work we’re doing at XOJET, which is the largest private aviation company in the U.S. If you want to charter a jet, XOJET is the leading company to do that. The way they were doing pricing before we got there was basically with old, static rules that they had developed several years earlier. What we did is we worked with them to take into account where all of their jets currently were, where all of their competitors’ jets were, and what the demand was going to be, based on a lot of internal and external data: what events were happening in what locations, what was the weather forecast, what [were] the economic conditions, what were historic prices and results? And then we basically came up with all of the different pricing options they could offer, and made a recommendation on what the price should be. As soon as they put in our application, which was in Q4 of 2016, the EBITDA of the company, which is basically the net margin (not quite, but close), went up 5%.
The next thing we did for them was to develop an application that looked at the balance in their fleet, which is: “Do you have the right jets in the right place, at the right time?” This takes into account having to look at the next day. Where is the demand going to be the next day? So, you make sure you don’t have too many jets in low demand locations, or not enough jets in high demand locations. We actually adjusted the prices, to create an economic incentive to drive the jets to the right place at the right time.
We also, again, looked at competitive position, which comes from Federal Aviation Administration data. You can track the tail numbers of all of their jets, and all of the competitor jets, so you can calculate competitive position. Then, based on that algorithm, the length of haul, which is the number of hours flown per jet, went up 11%.
This was really dramatic; it also dramatically reduced the number of “deadheads” they were flying, which is the number of empty jets flown to reposition the fleet. I think that’s a great success story. There’s tremendous leadership at that company, very innovative, and I think that’s really transformed their business.
That’s kind of a classic load-balancing problem, right? I’ve got all of these things, and I want to kind of distribute it, and make sure I have plenty of what I need, where. That sounds like a pretty general problem. You could apply it to package delivery or taxicab distribution, or any number of other things. How generalizable is any given solution, like from that, to other industries?
That’s a great question. There are a lot of components in that, that are generalizable. In fact, we’ve done that. We have componentized the code and the thinking, and can rapidly reproduce applications for another client, based on that. There’s a lot of stuff that’s very specific to the client, and of course, the end application is trained on the client’s data. So, it’s not applicable to anybody else. The models are specifically trained on the client data. We’re doing other projects in airline pricing, but the end result is very different, because the circumstances are different.
But you hit on a key question, which is, “Are things generalizable?” One of the other approaches we’re taking is around transfer learning, especially when you’re using deep learning technologies. You can think of it as: the top layers of a neural net can be trained on sort of general pricing techniques, and just the deeper layers are trained on pricing specific to that company.
That’s one of the other generalization techniques, because AI problems in the enterprise generally have sparser datasets than if you’re trying to separate cat pictures from dog pictures. So, data sparsity is a constant challenge. I think transfer learning is one of the key strategies to deal with that.
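One common way to realize the transfer-learning idea described here is to reuse layers trained on broad, general data and fine-tune only the remaining layers on the client’s sparse dataset. The PyTorch sketch below is a generic illustration under that assumption; the model shape and data are hypothetical, not the firm’s actual approach:

```python
# Generic transfer-learning sketch: freeze layers assumed to be pretrained on
# broad "general pricing" data, and train only the client-specific head on a
# small, sparse client dataset. All shapes and data are made up.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # general layers (assumed pretrained)
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),               # client-specific head, trained from scratch
)

# Freeze the general layers so the sparse client data only adjusts the head.
for layer in list(model.children())[:4]:
    for p in layer.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.MSELoss()

X_client = torch.randn(200, 20)     # hypothetical sparse client dataset
y_client = torch.randn(200, 1)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X_client), y_client)
    loss.backward()
    optimizer.step()
```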
You mentioned in passing, looking at things like games. I’ve often thought that was kind of a good litmus test for figuring out where to apply the technology, because games have points, and they have winners, and they have turns, and they have losers. They have structure to them. If that case study you just gave us was a game, what was the point in that? Was it a dollar of profit? Because you were like “Well, the plane could be, or it could fly here, where it might have a better chance to get somebody. But that’s got this cost. It wears out the plane, so the plane has to be depreciated accordingly.” What is the game it’s playing? How do you win the game it’s playing?
That’s a really great question. For XOJET, we actually created a tree of metrics, but at the top of the tree is something called fleet contribution, which is, “What’s the profit generated per period of time, for the entire fleet?” Then, you can decompose that down into how many jets are flying, the length of haul, and the yield, which is the amount of dollars per hour flown. There’s also, obviously, a customer relationship component to it. You want to make sure that you get really good customers, and that you can serve them well. But there are very big differences between games and real-life business. Games have a finite number of moves, and the rules are well-defined. And generally, if you look at Deep Blue or AlphaGo, or Arthur Samuel’s checkers program, or even Libratus, all of these were two-player games. In the enterprise, you typically have tens, sometimes hundreds of players in the game, with undefined sets of moves. So, in one sense, it’s a lot more complicated. The idea is, how do you reduce it so it is game-like? That’s a very good question.
So, do you find that most people come to you with a defined business problem, and they’re not really even thinking about “I want some of this AI stuff. I just want my planes to be where they need to be.” What does that look like in the organization that brings people to you, or brings people to considering an artificial intelligence solution to a problem?
Typically, clients will see our success in one area and then want to talk to us. For instance, we have a really great relationship with a steel company in Arkansas called Big River Steel. With Big River Steel, we’re building the world’s first learning steel mill, which will learn from their sensors and be able to do all kinds of predictions and recommendations. It goes through that same sense, propose, predict and score loop. So, when people heard that story, we got a lot of calls from steel mills. Now, we’re kind of deluged with calls from steel mills all over the world, saying, “How did you do that, and how do we get some of it?”
Typically, people hear about us because of AI. We’re a product company, with applications, so we generally don’t go in from a consulting point of view and say, “Hey, what’s your business problem?” We will generally go in and say, “Here are the ten areas where we have expertise and technology to improve business operations,” and then we’ll qualify a company, whether it applies or not. One other thing is that AI follows the scientific method, so it’s all about hypothesis, test, hypothesis, test. So it is possible that an AI application that works for one company will not work for another. Sometimes it’s the datasets; sometimes it’s just a different circumstance. So, I would encourage companies to be launching lots of hypotheses, using AI.
Your website has a statement quite prominently: “AI is not magic. It’s data.” While I wouldn’t dispute it, I’m curious: what were you hearing from people that caused you to say that? Or maybe hypothetically, since you may not have been in on it, what do you think is the source of that statement?
I think there’s a tremendous amount of hype and B.S. right now out there about AI. People anthropomorphize AI. You see robots with scary eyes, or you see crystal balls, or you see things that – it’s all magic. So, we’re trying to be explainers in chief, and to kind of de-mystify this, and basically say it’s just data and math, and supercomputers, and business expertise. It’s all of those four things, coming together.
We just happen to be at the right place in history, where there are breakthroughs in those areas. If you look at computing power, I would single that out as the thing that’s made a huge difference. In April of last year, NVIDIA released the DGX-1, which is their AI supercomputer. We have one of those in our data center, that in our platform we affectionately call “the beast,” which has a petaflop of computing power.
To put that into perspective, the fastest supercomputer in the world in the year 2000 was ASCI Red, which had one teraflop of computing power. There was only one in the world, and no company in the world had access to it.
Now, with the supercomputing that’s out there, the beast has 1,000 times more computing power than the ASCI Red did. So, I think that’s a tremendous breakthrough. It’s not magic. It’s just good technology. The math behind artificial intelligence still relies largely on mathematical breakthroughs that happened in the ‘50s and ‘60s. And of course, Thomas Bayes, with Bayes’ Theorem, who was a philosopher in the 1700s.
There’s been a lot of good work recently around different variations on neural nets. We’re particularly interested in long short-term memory and convolutional neural nets. But a lot of the math has been around for a while. In fact, that’s why I don’t think we’re going to hit general intelligence any time soon. Because it is true that we have had exponential growth in computing power, and exponential growth in data, but it’s been very linear growth in mathematics, right? If we start seeing AI algorithms coming up with breakthroughs in mathematics that we simply don’t understand, then I think the antennas can go up.
So, if you have your DGX-1, at a petaflop, and in five years, you get something that’s an exaflop – it’s 1,000 times faster than that – could you actually put that to use? Or is it at some point, the jet company only has so much data. There are only so many different ways to crunch it. We don’t really need more – we have, at the moment, all of the processor power we need. Is that the case? Or would you still pay dearly to get a massively faster machine?
We could always use more computing power. Even with the DGX-1. For instance, we’re working with a distribution company where we’re generating 500,000 models a day for them, crunching on massive amounts of data. If you have massive datasets for your processing, it takes a while. I can tell you, life is a lot better. I mean, in the ‘90s, we were working on a neural net for the Coast Guard; to try to determine which ships off of the west coast were bad guys. It was very simple neural nets. You would hit return, and it would usually crash. It would run for days and days and days and days, be very, very expensive, and it just didn’t work.
Even if it came up with an answer, the ships were already gone. So, we could always use more computing power. I think right now a limitation is more on the data side of it, and related to the fact that companies are throwing out data they shouldn’t be throwing out. For instance, take customer relationship management systems: typically, when you have an update to a customer, it overwrites the old data. That is really, really important data. I think coming up with a proper data strategy, and understanding the value of data, is really, really important.
What do you think, on this theme of AI is not magic, it’s data; when you go into an organization, and you’re discussing their business problems with them, what do you think are some of the misconceptions you hear about AI, in general? You said it’s overhyped, and glowing-eyed robots and all of that. From an enterprise standpoint, what is it that you think people are often getting wrong?
I think there’s a couple of fundamental things that people are getting wrong. One is I think there is a tremendous over-reliance and over-focus on unstructured data, that people are falling in love with natural language processing, and thinking that that’s artificial intelligence. While it is true that NLP can help with judging things like consumer sentiment or customer feedback, or trend analysis on social media, generally those are pretty weak signals. I would say, don’t follow the shiny object. I think the reason people see that, is the success of Siri and Alexa, and people see that as AI. It is true that those are learning algorithms, and those are effective in certain circumstances.
I think they’re much less effective when you start getting into dialogue. Doing dialogue management with humans is extraordinarily difficult. Training the corpus of those systems is very, very difficult. So, I would say stay away from chatbots, and focus mostly on structured data, rather than unstructured data. I think that’s a really big one. I also think that focusing on the supply side of a company is actually a much more fruitful area than focusing on the demand side, other than sales forecasting. The reason I say that is that the interactions between inbound materials and production, and distribution, are more easily modeled and can actually make a much bigger difference. It’s much harder to model things like the effect of a promotion on demand, although it’s possible to do a lot better than they’re doing now. Or, things like customer loyalty; like the effect of general advertising on customer loyalty. I think those are probably two of the big areas.
When you see large companies being kind of serious about machine learning initiatives, how are they structuring those in the organization? Is there an AI department, or is it in IT? Who kind of “owns” it? How are its resources allocated? Are there a set of best practices, that you’ve gleaned from it?
Yes. I would say there are different levels of maturity. Obviously, the vast majority of companies have no organization around this, and it is individuals taking initiative and experimenting by themselves. IT in general has not taken a leadership role in this area. I think, fundamentally, that’s because IT departments are poorly designed. The CIO job needs to be two jobs: there needs to be a Chief Infrastructure Officer and a Chief Innovation Officer. One of those jobs is to make sure the networks are working, the data center is working, and people have computers. The other job is, “How are advances in technology helping the company?” There are some companies that have Chief Data Officers. I think that’s also caused a problem, because they’re focusing more on big data, and less on what you actually do with that data.
I think the most advanced companies – I would say, first of all, it’s interesting, because it’s following the same trajectory as information technology organizations follow, in companies. First, it’s kind of anarchy. Then, there’s the centralized group. Then, it goes to a distributed group. Then, it goes to a federated group, federated meaning there’s a central authority which basically sets standards and direction. But each individual business unit has their representatives. So, I think we’re going to go through a whole bunch of gyrations in companies, until we end up where most technology organizations are today, which is; there is a centralized IT function, but each business unit also has IT people in it. I think that’s where we’re going.
And then, the last question along these lines: Do you feel that either: A) machine learning is doing such remarkable things, and it’s only going to gain speed, and grow from here, or B) machine learning is over-hyped to a degree that there are unrealistic expectations, and when disappointment sets in, you’re going to get a little mini AI winter again. Which one of those has more truth?
Certainly, there is a lot of hype about it. But I think if you look at the reality of how many companies have actually implemented learning algorithms (AI, ML, data science) across the operations of their company, we’re at the very, very beginning. If you look at it as a sigmoid, or an s-curve, we’re just approaching the first inflection point. I don’t know of any company that has fully deployed AI across all parts of their operations. I think ultimately, executives in the 21st century will have many, many learning algorithms to support them in making complex business decisions.
I think the company that clearly has exhibited the strongest commitment to this, and is furthest along, is Amazon. If you wonder how Amazon can deliver something to your door in one hour, it’s because there are probably 100 learning algorithms that made that happen, like where should the distribution center be? What should be in the distribution center? Which customers are likely to order what? How many drivers do we need? What’s the route the driver should take? All of those things are powered by learning algorithms. And you see the difference, you feel the difference, in a company that has deployed learning algorithms. I also think if you look back, from a societal point of view, that if we’re going to have ten billion people on the planet, we had better get a lot more efficient at the consumption of natural resources. We had better get a lot more efficient at production.
I think that means moving away from static business rules that were written years ago, that are only marginally relevant to learning algorithms that are constantly optimizing. And then, we’ll have a chance to get rid of what Hackett Group says is an extra trillion dollars of working capital, basically inventory, sitting in companies. And we’ll be able to serve customers better.
You seem like a measured person, not prone to wild exaggeration. So, let me run a question by you. If you had asked people in 1995, if you had said this, “Hey, you know what? If you take a bunch of computers, just PCs, like everybody has, and you connected them together, and you got them to communicate with hypertext protocol of some kind, that’s going to create trillions and trillions and trillions and trillions and trillions of dollars of wealth.” “It’s going to create Amazon and Google and Uber and eBay and Etsy and Baidu and Alibaba, and millions of jobs that nobody could have ever imagined. And thousands of companies. All of that, just because we’re snapping together a bunch of computers in a way that lets them talk to each other.” That would have seemed preposterous. So, I ask you the question; is artificial intelligence, even in the form that you believe is very real, and what you were just talking about, is it an order of magnitude bigger than that? Or is it that big, again? Or is it like “Oh, no. Just snapping together, a bunch of computers, pales to what we are about to do.” How would you put your anticipated return on this technology, compared to the asymmetrical impact that this seemingly very simple thing had on the world?
I don’t know. It’s really hard to say. I know it’s going to be huge, right? It is fundamentally going to make companies much more efficient. It’s going to allow them to serve their customers better. It’s going to help them develop better products. The Amazon of today is going to feel like the baseline of tomorrow. And there are going to be a lot of companies that... I mean, we run into a lot of companies right now that just simply resist it. They’re going to go away. Shareholders will not tolerate companies that are not performing up to competitive standards.
The competitive standards are going to accelerate dramatically, so you’re going to have companies that can do more with less, and it’s going to fundamentally transform business. You’ll be able to anticipate customer needs. You’ll be able to say, “Where should the products be? What kind of products should they be? What’s the right product for the right customer? What’s the right price? What’s the right inventory level? How do we make sure that we don’t have warehouses full of billions and billions of dollars worth of inventory?”
It’s very exciting. I think the business, and I’m generally really bad at guessing years, but I know it’s happening now, and I know we’re at the beginning. I know it’s accelerating. If you forced me to guess, I would say, “10 years from now, Amazon of today will be the baseline.” It might even be shorter than that. If you’re not deploying hundreds of algorithms across your company, that are constantly optimizing your operations, then you’re going to be trailing behind everybody, and you might be out of business.
And yet my hypothetical 200-person company shouldn’t do anything today. When is the technology going to be accessible enough that it’s sort of in everything? It’s in their copier, and it’s in their routing software. When is it going to filter down, so that it really permeates kind of everything in business?
The 200-person company will use AI, but it will be in things like, I think database design will change fundamentally. There is some exciting research right now, actually using predictive algorithms to fundamentally redesign database structures, so that you’re not actually searching the entire database; you’re just searching most likely things first. Companies will use AI-enabled databases, they’ll use AI in navigation, they’ll use AI in route optimization. They’ll do things like that. But when it comes down to it, for it to be a good candidate for AI, in helping make complex decisions, the answer needs to be non-obvious. Generally with a 200-person company, having run a company that went from 2 people to 20 people, to 200 people, to 2,000 people, to 20,000 people, I’ve seen all of the stages.
A 200-person company, you can kind of brute force. You know everybody. You’ve just crossed Dunbar’s number, so you kind of know everything that’s going on, and you have a good feel for things. But like you said, I think applying it by using other peoples’ technologies that are driven by AI, for the things that I talked about, will probably apply to a 200-person company.
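The database research mentioned a moment ago, searching the “most likely things first” rather than the entire structure, is often illustrated with a learned index: a small model predicts roughly where a key lives, and only a narrow neighborhood is searched. The sketch below is a generic toy version of that idea, with made-up data and a fallback to an ordinary full binary search:

```python
# Toy "learned index": predict a key's approximate position in a sorted array,
# then search only a small window around the prediction.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(1).uniform(0, 1_000_000, size=100_000))
positions = np.arange(len(keys))

# Simple model: position ~ a * key + b
a, b = np.polyfit(keys, positions, deg=1)

def lookup(key, err=1024):
    guess = int(a * key + b)                        # predicted position
    lo, hi = max(0, guess - err), min(len(keys), guess + err)
    i = lo + bisect.bisect_left(keys[lo:hi], key)   # bounded local search
    if i < hi and keys[i] == key:
        return i
    return bisect.bisect_left(keys, key)            # fallback: search everything

print(lookup(keys[42_000]))   # finds the key near its predicted slot
```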
With your jet company, you did a project, and EBITDA went up 5%, and that was a big win. That was just one business problem you were working on. You weren’t working on where they buy jet fuel, or where they print. Nothing like that. So presumably, over the long haul, the technology could be applied in that organization, in a number of different ways. If we have a $70 trillion economy in the world, what percent is – 5% is easy – what percentage improvement do you think we’re looking at? Like just growing that economy dramatically, just by the efficiencies that machine learning can provide?
Wow. The way to do that is to look at an individual company and then sort of extrapolate. The way I look at it is shareholder value, which is made up of revenue, margins and capital efficiency. I think that revenue growth could take off; it could probably double from what it is now. On margins, it will have a dramatic impact. If you look at all of the different things you could do within the company, and you had fully deployed learning algorithms and gotten away from making decisions on yardsticks and averages, a typical company could, I’ll say, double its margins.
But the home run is in capital efficiency, which not too many people pay attention to, and is one of the key drivers of return on invested capital, which is the driver of general value. This is where you can reduce things 30%, things like that, and get rid of warehouses of stuff. That allows you to be a lot more innovative, because then you don’t have obsolescence. You don’t have to push products that don’t work. You can develop more innovative products. There are a lot of good benefits. Then, you start compounding that year over year, and pretty soon, you’ve made a big difference.
Right, because doubling margins alone doubles the value of all of the companies, right?
It would, if you projected it out over time. Yes. All else being equal.
Which it seldom is. It’s funny, you mentioned Amazon earlier. I just assumed they had a truck with a bunch of stuff on it, that kept circling my house, because it’s like every time I want something, they’re just there, knocking on the door. I thought it was just me!
Yeah. Amazon Prime Now came out, was it last year? In the Bay Area? My daughter ordered a pint of ice cream and a tiara. An hour later, a guy is standing at the front door with a pint of ice cream and a tiara. It’s like, wow!
What a brave new world, that has such wonders in it!
Exactly!
As we’re closing up on time here, there are a number of people that are concerned about this technology. Not in the killer robot scenario. They’re concerned about automation; they’re concerned about – you know it all. Would you say that all of this technology and all of this growth, and all of that, is good for workers and jobs? Or it’s bad, or it’s disruptive in the short term, not in the long term? How do you size that up for somebody who is concerned about their job?
First of all, moving sort of big picture to small picture: this is necessary for society, unless we stop having babies. We need to do this, because we have finite resources, and we need to figure out how to do more with less. I think the impact on jobs will be profound. I think it will make a lot of jobs a lot better. In AI, we say it’s augment, amplify and automate. Right now, the things we’re doing at XOJET really help make the people in revenue management a lot more powerful, and, I think, enjoy their jobs a lot more, doing a lot less routine research and grunt work. So, they actually become more powerful; it’s like they have superpowers.
I think that there will also be a lot of automation. There are some tasks that AI will just automate, and just do, without human interaction. A lot of decisions, in fact most decisions, are better if they’re made with an algorithm and a human, to bring out the best of both. I do think there’s going to be a lot of dislocation. I think it’s going to be very similar to what happened in the automotive industry, and you’re going to have pockets of dislocation that are going to cause issues. Obviously, the one that’s talked about the most is the driverless car. If you look at all of the truck drivers, I think probably within a decade, for most cross-country trucks, there’s going to be some person sitting in their house, in their pajamas, with nine screens in front of them, and they’re going to be driving nine trucks simultaneously, just monitoring them. And that’s the number one job of adult males in the U.S. So, we’re going to have a lot of displacement. I think we need to take that very seriously, and get ahead of it, as opposed to chasing it, this time. But I think overall, this is also going to create a lot more jobs, because it’s going to make more successful companies. Successful companies hire people and expand, and I think there are going to be better jobs.
You’re saying it all eventually comes out in the wash; that we’re going to have more, better jobs, and a bigger economy, and that’s broadly good for everyone. But there are going to be bumps in the road along the way. Is that what I’m getting from you?
Yes. I think it will actually be a net positive. I think it will be a net significant positive. But it is a little bit of, as economists would say, “creative destruction.” As you go from agricultural to industrial, to knowledge workers, toward sort of an analytics-driven economy, there are always massive disruptions. I think one of the things that we really need to focus on is education, and also on trade schools. There is going to be a lot larger need for plumbers and carpenters and those kinds of things. Also, if I were to recommend what someone should study in school, I would say study mathematics. That’s going to be the core of the breakthroughs, in the future.
That’s interesting. Mark Cuban was asked that question, also. He says the first trillionaires are going to be in AI. And he said philosophy, because in the end, what you’re going to need is what only people know how to do. Only people can impute value, and only people can do all of that.
Wow! I would also say behavioral economics; understanding what humans are good at doing, and what humans are not good at doing. We’re big fans of Kahneman and Tversky, and more recently, Thaler. When it comes down to how humans make decisions, and understanding what skills humans have, and what skills algorithms have, it’s very important to understand that, and to optimize that over time.
All right. That sounds like a good place to leave it. I want to thank you so much for a wide-ranging show, with a lot of practical stuff, and a lot of excitement about the future. Thanks for being on the show.
My pleasure. I enjoyed it. Thanks, Byron.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 49: A Conversation with Ali Azarbayejani

[voices_in_ai_byline]
In this episode, Byron and Ali discuss AI’s impact on business and jobs.
[podcast_player name=”Episode 49: A Conversation with Ali Azarbayejani” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-06-12-(00-57-00)-ali-azarbayejani.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-2.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Ali Azarbayejani. He is the CTO and Co-founder of Cogito. He has 18 years of commercial experience as a scientist, an entrepreneur, and designer of world-class computational technologies. His pioneering doctoral research at the MIT Media Lab in Probabilistic Modeling for 3-D Vision was the basis for his first startup company, Alchemy 3-D Technology, which created a market in the film and video post-production industry for camera matchmoving software. Welcome to the show Ali.
Ali Azarbayejani: Thank you, Byron.
I’d like to start off with the question: what is artificial intelligence?
I’m glad we’re starting with some definitions. I think I have two answers to that question. The original definition of artificial intelligence I believe in a scholarly context is about creating a machine that operates like a human. Part of the problem with defining what that means is that we don’t really understand human intelligence very well. We have a pretty good understanding now about how the brain functions physiologically, and we understand that’s an important part of how we provide cognitive function, but we don’t have a really good understanding of mind or consciousness or how people actually represent information.
I think the first answer is that we really don’t know what artificial or machine intelligence is other than the desire to replicate human-like function in computers. The second answer I have is how AI is being used in industry. I think that that is a little bit easier to define because I believe almost all of what we call AI in industry is based on building input/output systems that are framed and engineered using machine learning. That’s really at the essence of what we refer to in the industry as AI.
So, you have a high concept definition and a bread and butter work-a-day working definition, and that’s how you’re bifurcating that world?
Yeah, I mean, a lot of people talk about how we’re in the midst of an AI revolution. I don’t believe, at least in the first sense of the term, that we’re in an AI revolution at all. I think we’re in the midst of a machine learning revolution, which is really important and it’s really powerful, but I guess what I take issue with is the term intelligence, because most of these things that we call artificial intelligence don’t really exhibit the properties of intelligence that we would normally think are required for human intelligence.
These systems are largely trained in the lab and then deployed. When they’re deployed, they typically operate as a simple static input/output system. You put in audio and you get out words. So, you put in video and you get out locations of faces. That’s really at the core of what we’re calling AI now. I think it’s really the result of advances in technology that’s made machine learning possible at large scale, and it’s not really a scientific revolution about intelligence or artificial intelligence.
All right, let’s explore that some, because I think you’re right. I have a book coming out in the Spring of 2018 which is 20,000 words and it’s dedicated to the brain, the mind and consciousness. It really tries to wrap around those three concepts. So, let’s go through them if you don’t mind for just a minute. You started out by saying with the brain we understand how it functions. I would love to go into that, but as far as I understand it, we don’t know how a thought is encoded. We don’t know how the memory of your 10th birthday party, or what pineapple tastes like, or any of that, is encoded. We can’t write to it. We can’t read from it, except in the most very rudimentary sense. So do you think we really do understand the brain?
I think that’s the point I was actually making is that we understand the brain at some level physiologically. We understand that there’s neurons and gray matter. We understand a little bit of physiology of the brain, but we don’t understand those things that you just mentioned, which I refer to as the “mind.” We don’t really understand how data is stored. We don’t understand how it’s recalled exactly. We don’t really understand other human functions like consciousness and feelings and emotions and how those are related to cognitive function. So, that’s really what I was saying is, we don’t understand how intelligence evolves from it, although really where we’re at is we just understand a little bit of the physiology.
Yeah, it’s interesting. There’s no consensus definition on what intelligence is, and that’s why you can point at anything and say, “well that’s intelligent.” “My sprinkler that comes on when my grass is dry, that’s intelligent.” The mind is of course a very, shall we say, controversial concept, but I think there is a consensus definition of it that everybody can agree to, which is it’s all the stuff the brain does that doesn’t seem, emphasis on seem, like something an organ should be able to do. Your liver doesn’t have a sense of humor. Your liver doesn’t have an imagination. All of these things. So, based on that definition, creativity and not even getting to consciousness, not even experiencing the world, just these abilities. These raw abilities like to write a poem, or paint a great painting or what have you. You were saying we actually have not made any real progress towards any of that. That’s gotten mixed up in this whole machine learning thing. Am I right that you think we’re still at square one with the whole project of building an artificial mind?
Yeah, I mean, I don’t see a lot of difference intellectually [between] where we are now from when I was in school in the late 80s and 90s in terms of theories about the mind and theories about how we think and reason. The basis for the current machine learning revolution is largely based on neural networks which were invented in the 1960s. Really what is fueling the revolution is technology. The fact that we have the CPU power, the memory, the storage and the networking — and the data — and we can put all that together and train large networks at scale. That’s really what is fueling the amazing advances that we have right now, not really any philosophical new insights into how human intelligence works.
Putting it out there for just a minute, is it possible that an AGI, a general intelligence, that an artificial mind, is it possible that that cannot be instantiated in machinery?
That’s a really good question. I think that’s another philosophical question that we need to wrestle with. I think that there are at least two schools of thought on this that I’m aware of. I think the prevailing notion, which is I think a big assumption, is that it’s just a matter of scale. I think that people look at what we’ve been able to do with machine learning and we’ve been able to do incredible things with machine learning so far. I think people think of well, a human sitting in a chair can sit and observe the world and understand what’s going on in the world and communicate with other people. So, if you just took that head and you could replicate what that head was doing, which would require a scale much larger than what we’re doing right now with artificial neural networks, then embody that into a machine, then you could set this machine on the table there or on the chair and have that machine do the same thing.
I think one school of thought is that the human brain is an existence proof that a machine can exist to do the operations of a human intelligence. So, all we have to do is figure out how to put that into a machine. I think there’s a lot of assumptions involved in that train of thought. The other train of thought, which is more along the lines of where I land philosophically, is that it’s not clear to me that intelligence can exist without ego, without the notion of an embodied self that exists in the world, that interacts in the world, that has a reason to live and a drive to survive. It’s not clear to me that it can’t exist, and obviously we can do tasks that are similar to what human intelligence does, but I’m not entirely sure that… because we don’t understand how human intelligence works, it’s not clear to me that you can create an intelligence in a disembodied way.
I’ve had 60-something guests on the show, and I keep track of the number that don’t believe we can actually build a general intelligence, and it’s, I think, 5. They are Deep Varma, Esther Dyson, people who have similar… or rather, they’re even more explicit in saying they don’t think we can do it. The other 60 guests have the same line of logic, which is: we don’t know how the brain works. We don’t know how the mind works. We don’t know how consciousness works, but we do have one underlying assumption, that we are machines, and if we are machines, then we can build a mechanical us. Any argument against that, or any way to engage it, the word that’s often offered is magic. The only way to get around that is to appeal to magic, to appeal to something supernatural, to appeal to something unscientific. So, my question to you is: is that true? Do you have to appeal to something unscientific for that logic to break down, or are there maybe scientific reasons, completely causal, system-y kinds of systems, by which we cannot build a conscious machine?
I don’t believe in magic. I don’t think that’s my argument. My argument is more around what is the role that the body around the brain plays, in intelligence? I think we make the assumption sometimes that the entire consciousness of a person, entire cognition, everything is happening from the neck up, but the way that people exist in the world and learn from simply existing in the world and interacting with the world, I think plays a huge part in intelligence and consciousness. Being attached to a body that the brain identifies with as “self,” and that the mind has a self-interest in, I think may be an essential part of it.
So, I guess my point of view on this is I don’t know what the key ingredients are that go into intelligence, but I think that we need to understand… Let me put it this way, I think without understanding how human consciousness and human feelings and human empathy works, what the mechanisms are behind that, I mean, it may be simply mechanical, but without understanding how that works, it’s unclear how you would build a machine intelligence. In fact, scientists have struggled from the beginning of AI even to define it, and it’s really hard to say you can build something until you can actually define it, until you actually understand what it is.
The philosophical argument against that would be like “Look, you got a finite number of senses and those that are giving input to your brain, and you know the old philosophical thought experiment you’re just a brain in a vat somewhere and that’s all you are, and you’re being fed these signals and your brain is reacting to them,” but there really isn’t even an external world that you’re experiencing. So, they would say you can build a machine and give it these senses, but you’re saying there’s something more than that that we don’t even understand, that is beyond even the five senses.
I suppose if you had a machine that could replicate atom for atom a human body, then you would be able to create an intelligence. But, how practical would it be?
There are easier ways to create a person than that?
Yeah, that’s true too, but how practical is a human as a computing machine? I mean, one of the advantages of the computer systems that we have, the machine learning-based systems that we call AI is that we know how we represent data. Then we can access the data. As we were talking about before, with human intelligence you can’t just plug in and download people’s thoughts or emotions. So, it may be that in order to achieve intelligence, you have to create this machine that is not very practical as a machine. So you might just come full circle to well, “is that really the powerful thing that we think it’s going to be?”
I think people entertain the question because this question of “are people simply machines? Is there anything that happens? Are you just a big bag of chemicals with electrical pulses going through you?” I think people have… emotionally engaging that question is why they do it, not because they want to necessarily build a replicant. I could be wrong. Let me ask you this. Let’s talk about consciousness for a minute. To be clear, people say we don’t know what consciousness is. This is of course wrong. Everybody agrees on what it is. It is the experiencing of things. It is the difference between a computer being able to sense temperature and a person being able to feel heat. It’s like that difference.
It’s been described as the last scientific question we don’t really know how to ask, and we don’t know what the answer would look like. I put eight theories together in this book I wrote. Do you have a theory, just even a gut reaction? Is it an emergent property? Is it a quantum property? Is it a fundamental law of the universe? Do you have a gut feel of what direction you would look to explain consciousness?
I really don’t know. I think that my instinct is along the lines of what I talked about recently with embodiment. My gut feel is that a disembodied brain is not something that can develop a consciousness. I think consciousness fundamentally requires a self. Beyond that, I don’t really have any great theories about consciousness. I’m not an expert there. My gut feel is we tend to separate, when we talk about artificial intelligence, we tend to separate the function of mind from the body, and I think that may be a huge assumption that we can do that and still have self and consciousness and intelligence.
I think it’s a fascinating question. About half of the guests on the show just don’t want to talk about it. They just do not want to talk about consciousness, because they say it’s not a scientific question and it’s a distraction. Half of them, very much, it is the thing, it’s the only thing that makes living worthwhile. It’s why you feel love and why you feel happiness. It is everything in a way. People have such widely [divergent views], like Stephen Wolfram was on the show, and he thinks it’s all just computation. To that extent, anything that performs computation, which is really just about anything, is conscious. A hurricane is conscious.
One theory is consciousness is an emergent property, just like you are trillions of cells that don’t know who you are and none of them have a sense of humor, you somehow have a distinct emergent self and a sense of humor. There are people who think the planet itself may have a consciousness. Others say that activity in the sun looks a lot like brain activity, and perhaps the sun is conscious, and that is an old idea. It is interesting that all children when they draw an outdoor scene they always put a smiling face on the sun. Do you think consciousness may be more ubiquitous, not unique to humans? That it may kind of be in all kinds of places, or do you just at a gut level think it’s a special human [trait], and other animals you might want to include in that characteristic?
That’s an interesting point of view. I certainly see how it’s a nice theory about it being a continuum I think is what he’s saying. That there’s some level of consciousness in the simplest thing. Yeah, I think this is more along… it’s just a matter of scale type of philosophy which is that at a larger scale that what emerges is a more complex and meaningful consciousness.
There’s a project in Europe you’re probably familiar with, the Human Brain Project, which is really trying to build an intelligence through that scale. The counter to it is the Open Worm Project: they’ve sequenced the genome of the nematode worm, and its brain has 302 neurons, and for 20 years people have been trying to model those 302 neurons in a computer to build, as it were, a digital functioning nematode worm. By one argument they’re no closer to cracking that than they were 20 years ago. The scale question has its adherents at both extremes.
Let’s switch gears now and put that world aside and let’s talk about the world of machine learning, and we won’t call it intelligence anymore. It’s just machine learning, and if we use the word intelligence, it’s just a convenience. How would you describe the state of the art? As you point out, the techniques we’re using aren’t new, but our ability to apply them is. Are we in a machine learning renaissance? Is it just beginning? What are your thoughts on that?
I think we are in a machine learning renaissance, and I think we’re closer to the beginning than to the end. As I mentioned before, the real driver of the renaissance is technology. We have the computational power to do massive amounts of learning. We have the data and we have the networks to bring it all together and the storage to store it all. That’s really what has allowed us to realize the theoretical capabilities of complex networks as we model input/output functions.
We’ve done amazing things with that particular technology. It’s very powerful. I think there’s a lot more to come, and it’s pretty exciting the kinds of things we can do with it.
There’s a lot of concern, as you know, the debate about the impact that it’s going to have on employment. What’s your take on that?
Yeah, I’m not really concerned about that at all. I think that largely what these systems are doing is they’re allowing us to automate a lot of things. I think that that’s happened before in history. The concern that I have is not so much about removing jobs, because the entire history of the industrial revolution [is] we’ve built technology that has made jobs obsolete, and there are always new jobs. There’s so many things to do in the world that there’s always new jobs. I think the concern, if there’s any about this, is the rate of change.
I think at a generational level, it’s not a problem. The next generation are going to be doing jobs that we don’t even know exist right now, or that don’t exist right now. I think the problems may come within a generation, if you start automating jobs that belong to people who cannot be retrained in something else. But I think that there will always be new jobs.
Is it possible that there’s a person out there that cannot be retrained to do meaningful work? We’ve had 250 years of unending technological advance that would have blown the minds of somebody in 1750, and yet we don’t have anybody who… it’s like, no, they can’t do anything. Assuming that you have full use of your body and mind, there’s not a person on the planet that cannot in theory add economic value. All the more if they’re given technology to do it with. Do you really think that there will be people that “cannot be retrained”?
No, I don’t think it’s a “can” issue. I agree with you. I think that people can be retrained and like I said, I’m not really worried that there won’t be jobs for people to do, but I think that there are practical problems of the rate of change. I mean, we’ve seen it in the last decades in manufacturing jobs that a lot of those have disappeared overseas. There’s real economic pain in the regions of the country where those jobs were really prominent, and I don’t think there’s any theoretical reason why people can’t be retrained. Our government doesn’t really invest in that as much as it should, but I think there’s a practical problem that people don’t get retrained. That can cause shifts. I think those are temporary. I personally don’t see long term issues with transformations in technology.
It’s interesting because… I mean, this is a show about AI, which obviously holds it in high regard, but there have been other technologies that have been as transformative. An assembly line is a kind of AI. That was adopted really quickly. Electricity was adopted quickly, and steam was adopted. Do you think machine learning really is being adopted all that much faster, or is it just another equally transformative technology like electricity or something?
I agree with you. I think that it’s transformational, but I think it’s probably creating as many jobs as it’s automating away right now. For instance, in our industry, which is in contact centers, a big trend is trying to automate, basically to digitize a lot of the communications to take load off the telephone call center. What most of our enterprise customers have found with their contact centers is that the more they digitize, the more their call volume actually goes up. It doesn’t go down. So, there’s some conflicting evidence there about how much this is actually going to take away from jobs.
I am of the opinion I think anyone in any endeavor understands there’s always more to do than you have time to do. Automating things that can be automated I generally feel is a positive thing, and putting people to use in functions where we don’t know how to automate things, I think is always going to be an available path.
You brought up what you do. Tell us a little bit about Cogito and its mission.
Our mission is centered around helping people have better conversations. We’re really focused on the voice stream, and in particular our main business is in customer call centers where what we do is our technology listens to ongoing conversations, understands what’s going on in those conversations from an interactive and relationship point of view, from a behavioral point of view, and gives agents in real-time, feedback when conversations aren’t going well or when there’s something they can do to improve the conversation.
That’s where we get to the concept of augmented intelligence, which is using these machine learning endowed systems to help people do their jobs better, rather than trying to replace them. That’s a tremendously powerful paradigm. There’s trends, as I mentioned, towards trying to automate these things away, but often our customers find it more valuable to increase the competence of the people doing the jobs there because those jobs can’t be completely automated, rather than trying to automate away the simple things.
Hit rewind and back way up with Cogito, because I’m really fascinated by the thesis here: there’s what you say, and then there’s how you say it. We’re really good with one half of that equation, but we don’t apply technology to the other half. Can you tell that story and how it led to what you do?
Yeah, imagine listening to two people having a conversation in a foreign language that you don’t understand. You can undoubtedly tell a lot about what’s going on in that conversation without understanding a single word. You can tell whether people are angry at each other. You can tell whether they’re cooperating or hostile. You can tell a lot of things about the interaction without understanding a single word. That’s essentially what we’re doing with the behavioral analysis of how you say it. So, when we listen to telephone conversations, that’s a lot of what we’re doing is we’re listening to the tenor and the interaction in the conversation and getting a feel for how that conversation is going.
I mean, you’re using “listen” here colloquially. There’s nothing really listening. There’s a data stream that’s being analyzed, right?
Exactly, yeah.
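As a purely illustrative sketch (not Cogito’s actual feature set), here is the kind of turn-level signal one could compute from such a data stream, a diarized list of who spoke when, without using a single word of the conversation.

```python
# Illustrative only: simple "how you say it" signals from a diarized call,
# i.e. a list of (speaker, start_sec, end_sec) segments, no words at all.

def interaction_features(segments):
    """segments: list of (speaker, start, end) tuples in seconds."""
    talk_time = {}
    turn_switches = 0
    gaps = []
    prev = None
    for speaker, start, end in sorted(segments, key=lambda s: s[1]):
        talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
        if prev is not None and speaker != prev[0]:
            turn_switches += 1
            gaps.append(max(0.0, start - prev[2]))  # silence before the turn switch
        prev = (speaker, start, end)
    total = sum(talk_time.values()) or 1.0
    return {
        "turn_switches": turn_switches,
        "avg_gap_sec": sum(gaps) / len(gaps) if gaps else 0.0,
        "talk_share": {spk: t / total for spk, t in talk_time.items()},
    }

call = [("agent", 0, 6), ("caller", 6.5, 12), ("agent", 12.2, 15), ("caller", 15.8, 30)]
print(interaction_features(call))
```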
So, I guess it sounds like they’re like the parents [of] Charlie Brown, like “waa, wa waa.” So, it hears that and can figure out what’s going on. So, that sounds like a technology with broad applications. Can you talk about in a broad sense what can be done, and then why you chose what you did choose as a starting point?
It actually wasn’t the starting point. The application that originally inspired the company was more of a mental health application. There’s a lot of anecdotal understanding that people with clinical depression or depressed mood speak in a characteristic way. So the original inspiration for building the company and the technology was to use in telephone outreach operations with chronically ill populations that have very high rates of clinical depression and very low rates of detection and treatment of clinical depression. So, that’s one very interesting application that we’re still pursuing.
The second application came up in that same context, in the context of health and wellness call centers is the concept of engagement. A lot of the beneficial approach to health is preventative care. So, there’s been a lot of emphasis in healthcare on helping people quit smoking and have better diets and things like that. These programs normally take place over the telephone, and so there’s conversations, but they’re usually only successful when the patient or the member is engaged in the process. So, we used this sort of speech and conversational analysis to build models of engagement and that would allow companies to either react to under-engaged patients or not waste their time with under-engaged patients.
The third application, which is what we’re primarily focused on right now is agent interaction, the quality of agent interaction. There’s a huge amount of value with big companies that are consumer-oriented and particularly those that have membership relationships with customers in being able to provide a good human interaction when there are issues. So, customer service centers… and it’s very difficult if you have thousands of agents on the phone to understand what’s going on in those calls, much less improve it. A lot of companies are really focused on improvement. We’re the first system that allows these companies to understand what’s going on in those conversations in real-time, which is the moment of truth where they can actually do something about it. We allow them to do something about it by giving information not only to supervisors who can provide real-time coaching, but also to agents directly so that they can understand their own conversations are going south and be able to correct that and have better conversations themselves. That’s the gist of what we do right now.
I have a hundred questions all running for the door at once with this. My first question is you’re trying to measure engagement as a factor. How generalizable is that technology? If you plugged it into this conversation that you and I are having, does it not need any modification? Engagement is engagement is engagement, or is it like, Oh no, at company X it’s going to sound different than a phone call from company Y?
That’s a really good question. In some general sense an engaged interaction, if you took a minute of our conversation right now, it’s pretty generalizable. The concept is that if you’re engaged in the topic, then you’re going to have a conversation which is engaged, which means there’s going to be a good back and forth and there’s going to be good energy in the conversation and things like that. Now in practice, when you’re talking about in a call center context, it does get trickier because every call center has potentially quite different shapes of conversations.
So, one call center may need to spend a minute going through formalities and verification and all of that kind of business, and that part of the conversation is not the part you actually care about, but it’s the part where we’re actually talking about a meaningful topic. Whereas another call center may have a completely different shape of a conversation. What we find that we have to do, where machine learning comes in handy here, is that we need to be able to take our general models of engaged interactions and convert and adapt those in particular context to understanding engaged overall conversations. Those are going to vary from context to context. So, that’s where adaptive machine learning comes into play.
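A minimal sketch of what that adaptation step might look like (my own framing, not Cogito’s method): take a general model’s engagement scores and pick a call-center-specific decision threshold from a handful of locally labeled calls.

```python
# Sketch: adapt a general engagement model to one call center's context
# by recalibrating the engaged/not-engaged cutoff on locally labeled calls.

def adapt_threshold(scores_and_labels):
    """scores_and_labels: [(general_model_score, human_label)] for this call center."""
    best_cut, best_acc = 0.5, 0.0
    for cut in (i / 100 for i in range(1, 100)):
        acc = sum((s >= cut) == bool(y) for s, y in scores_and_labels) / len(scores_and_labels)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

# Hypothetical local calibration data: (general model score, engaged? 1/0)
local_calls = [(0.82, 1), (0.74, 1), (0.55, 0), (0.61, 1), (0.40, 0), (0.47, 0)]
print(adapt_threshold(local_calls))  # a context-specific cutoff instead of a global 0.5
```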
My next question is, from person to person, how consistent… no doubt if you had a recording of me for an hour, you could get a baseline and then measure my relative change from that, but when you drop in, do Bob X of Tacoma, Washington and Suzie Q of Toledo exhibit consistent traits or attributes of engagement?
Yeah, there are certainly variations among people’s speaking style. You look at areas of the country, different dialects and things like that. Then you also look at different languages and those are all going to be a little bit different. When we’re talking about engagement at a statistical level, these models work really well. So the key is when thinking about product development for these, is to focus on providing tools that are effective at a statistical level. Looking at one particular person, your model may indicate that this person is not engaged, but maybe that is just their normal speaking style, but statistically it’s generalizable.
My next question is: is there something special about engagement? Could you, if you wanted to, tell whether somebody’s amused or somebody’s intrigued or somebody is annoyed or somebody’s outraged? There’s a palette of human emotions. I guess I’m asking: with engagement, like you said, it’s not so much tonal qualities you’re listening for, but you’re counting back and forths, that’s kind of a numbers [thing], not a… So on these other factors, could you do that hypothetically?
Yeah, in fact, our system is a platform for doing exactly that sort of thing. Some of those things we’ve done. We build models for various emotional qualities and things like that. So, that’s the exciting thing is that once you have access to these conversations and you have the data to be able to identify these various phenomena, you can apply machine learning and understand what are the characteristics that would lead to a perception of amusement or whatever result you’re looking for.
Look, I applaud what you’re doing. Anybody who can be better phone support has my wholehearted support, but I wonder if where this technology is heading isn’t kind of an OEM thing, where it’s put into caregiving robots, for instance, which need to learn how to read the emotions of the person they’re caring for and modulate what they say. It’s like a feedback loop for self-teaching, just that use case. The robot caregiver that uses this [knows] she’s annoyed, he’s happy, or whatever, as a feedback loop. Am I way off in sci-fi land, or is that something that could be done?
No, that’s exactly right, and it’s an anticipated application of what we do. As we get better and better at being able to understand and classify useful human behaviors and then inferring useful human emotional states from those behaviors, that can be used in automated systems as well.
Frequent listeners to the show will know that I often bring up Weizenbaum and ELIZA. The setup is that Weizenbaum, back in the 60s, made this really simple chat bot where you would say, “I don’t feel good today,” and it would say “Why don’t you feel good today?” “I don’t feel good today because of my mother.” “Why does your mother make you not feel good?” It’s this real basic thing, but what he found was that people were connecting with it, and this really disturbed him, so he unplugged it. He said, when the computer says “I understand,” it’s just a lie. That there’s no “I,” which it sounds like you would agree with, and there’s nothing that understands anything. Do you worry that that is a [problem]? Weizenbaum would say: “that’s awful.” If that thing is manipulating an old person’s emotions, that’s just a terrible, terrible thing. What would you say?
I think it’s a danger. Yeah, I think we’re going to see that sort of thing happen for sure. I think people look at chat bots and say, “Oh look, that’s an artificial intelligence, that’s doing something intelligent” and it’s really not, as ELIZA proves. You can just have a little rules-based system on the back and type stuff in and type stuff out. A verbal chat bot might use speech-to-text as an input modality and text-to-speech as an output modality, but also have a rules-based unit on the back, and it’s really doing nothing intelligent, but it can give the illusion of some intelligence going on because you’re talking to it and it’s talking back to you.
So, I think yeah, there will be bumps along that road for sure, in trying to build these technologies that, particularly when you’re trying to build a system to replace a human and trying to convince the user of the system that you’re talking to a human. That’s definitely sketchy ground.
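ELIZA itself makes the point well: a handful of pattern-and-reflection rules is enough to give the illusion of understanding. A minimal sketch in that spirit (not Weizenbaum’s original script):

```python
import re

# Minimal ELIZA-style sketch: a few pattern -> reflection rules.
# Nothing here "understands" anything; it only rewrites the user's words.
RULES = [
    (r"i don't feel (.*)", "Why don't you feel {0}?"),
    (r"i feel (.*) because of (.*)", "Why does {1} make you feel {0}?"),
    (r"my (\w+) (.*)", "Tell me more about your {0}."),
    (r"(.*)", "I see. Please go on."),
]

def respond(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I don't feel good today"))          # Why don't you feel good today?
print(respond("I feel bad because of my mother"))  # Why does my mother make you feel bad?
```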
Right. I mean, I guess it’s forgivable we don’t know, I mean, it’s all new. It’s all stuff we’re having to kind of wing it. We’re coming up towards the end of our time. I just have a couple of closing questions, which are: Do you read science fiction? Do you watch science fiction movies? Do you go to science fiction TV, and if so, is there any view of the future, any view of AI or anything like that that you look at and think, yeah that could happen someday?
Yeah, it’s really hard to say. I can’t think of anything. Star Wars of course used very anthropomorphized robots, and if you think of a system like HAL in 2001: A Space Odyssey, you could certainly simulate something like that. If you’re talking about information, being able to talk to HAL and have HAL look stuff up for you and then talk back to you and tell you what the answer is, that’s totally believable. Of course the twist in 2001: A Space Odyssey is that HAL ended up having a sense of self, sense of its own self and decided to make decisions. Yeah, I’m very much rooted in the present and there’s a lot of exciting things going on right now.
Fair enough. It’s interesting that you used Star Wars, which of course is a long time ago, because somehow or another you think the movie would be different if C3PO were named Anthony and R2D2 was named George.
Yeah.
That would just take on a whole different… giving them names is even one step closer to that whole thing. Data in Star Trek kind of walked the line. He had a name, but it was Data.
It’s interesting actually to look at the difference between C3PO and R2D2. You look at C3PO and it has the form of a human, and you can ask the question: “Why would you build a robot that has the form of a human?” R2D2 is a robot which does, or could potentially do, exactly what C3PO does, in the form of a whatever – cylinder. So, it’s interesting to look at the contrast: they imagined two different kinds of robots. One, which is very anthropomorphized, and one which is very mechanical.
Yeah, you’re right, because the decision not to give R2 speech, it’s not like he didn’t have enough memory. He needed another 30MB of RAM or something. That also was something clearly deliberate. I remember reading that Lucas originally wasn’t going to use Anthony Daniels to voice it. He was going to get somebody who sounded like a used car salesman, kind of fast talking and all that, and that’s what the script was written for. I’m sure it’s a literary device, but like a lot of these things, I’m a firm believer that what comes out in science fiction isn’t predicting the future. It kind of makes it. Uhura had a Bluetooth device in her ear. So, it’s kind of like whatever the literary imagining of it is, is probably going to be what the scientific manifestation of it is, to some degree.
Yeah, the concept of the self-fulfilling prophecy is definitely there.
Well, I tell you what, if people want to keep up with you and all this work you’re doing, do you write, yak on Twitter, how can people follow what you do?
We’re going to be writing a lot more in the future. Our website www.cogitocorp.com is where you’ll find the links to the things that we’re writing on, AI and the work we do here at Cogito.
Well, this has been fascinating. I’m always excited to have a guest who is willing to engage these big questions and take, as you pointed out earlier, a more contrarian view. So, thank you for your time Ali.
Thank you, Byron. It’s been fun, and thanks for having me on.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 48: A Conversation with David Barrett

[voices_in_ai_byline]
In this episode, Byron and David discuss AI, jobs, and human productivity.
[podcast_player name=”Episode 48: A Conversation with David Barrett” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-06-07-(00-56-47)-david-barrett.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is David Barrett. He is both the founder and the CEO of Expensify. He started programming when he was 6 and has been at it as his primary activity ever since, except for a brief hiatus for world travel, some technical writing, a little project management, and then founding and running Expensify. Welcome to the show, David.
David Barrett: It’s great of you to have me, thank you.
Let’s talk about artificial intelligence, what do you think it is? How would you define it?
I guess I would say that AI is best defined as a feature, not as a technology. It’s the experience that the user has and sort of the experience of viewing of something as being intelligent, and how it’s actually implemented behind the scenes. I think people spend way too much time and energy on [it], and forget sort of about the experience that the person actually has with it.
So you’re saying, if you interact with something and it seems intelligent, then that’s artificial intelligence?
That’s sort of the whole basis of the Turing test, I think, is not based upon what is behind the curtain but rather what’s experienced in front of the curtain.
Okay, let me ask a different question then– and I’m not going to drag you through a bunch of semantics. But what is intelligence, then? I’ll start out by saying it’s a term that does not have a consensus definition, so it’s kind of like you can’t be wrong, no matter what you say.
Yeah, I think the best one I’ve heard is something that sort of surprises you. If it’s something that behaves entirely predictably, it doesn’t seem terribly interesting. Something that is just random isn’t particularly surprising either, I guess, but something that actually intrigues you. And basically it’s like “Wow, I didn’t anticipate that it would correctly do this thing better than I thought.” So, basically, intelligence – the key to it is surprise.
So in what sense, then–final definitional question–do you think artificial intelligence is artificial? Is it artificial because we made it? Or is it artificial because it’s just pretending to be intelligent but it isn’t really?
Yeah, I think that’s just sort of a definition–people use “artificial” because they believe that humans are special. And basically anything–intelligence is the sole domain of humanity and thus anything that is intelligent that’s not human must be artificial. I think that’s just sort of semantics around the egoism of humanity.
And so if somebody were to say, “Tell me what you think of AI, is it over-hyped? Under-hyped? Is it here, is it real”, like you’re at a cocktail party, it comes up, what’s kind of the first thing you say about it?
Boy, I don’t know, it’s a pretty heavy topic for a cocktail party. But I would say it’s real, it’s here, it’s been here a long time, but it just looks different than we expect. Like, in my mind, when I think of how AI’s going to enter the world, or is entering the world, I’m sort of reminded of how touch screen technology entered the world.
Like, when we first started thinking about touch screens, everyone always thought back to Minority Report and basically it’s like “Oh yeah, touch technology, multi-touch technology is going to be—you’re going to stand in front of this huge room and you’re going to wave your hands around and it’s going to be–images”, it’s always about sorting images. After Minority Report every single multi-touch demo was about, like, a bunch of images, bigger images, more images, floating through a city world of images. And then when multi-touch actually came into the real world, it was on a tiny screen and it was Steve Jobs saying, “Look! You can pinch this image and make it smaller.” The vast majority of multi-touch was actually single-touch that every once in a while used a couple of fingers. And the real world of multi-touch is so much less complicated and so much more powerful and interesting than the movies ever made it seem.
And I think the same thing when it comes to AI. Our interpretation from the movies of what AI is, is that you’re going to be having this long, witty conversation with an AI, or maybe, like in Her, you’re going to be falling in love with your AI. But real world AI isn’t anything like that. It doesn’t have to seem human; it doesn’t have to be human. It’s something that, you know, is able to surprise you by interpreting data in a way that you didn’t expect and producing results that are better than you would have imagined. So I think real-world AI is here, it’s been here for a while, but it’s just not where we’re noticing, because it doesn’t really look like we expect it to.
Well, it sounds like–and I don’t want to say it sounds like you’re down on AI–but you’re like “You know, it’s just a feature, and its just kind of like—it’s an experience, and if you had the experience of it, then that’s AI.” So it doesn’t sound like you think that it’s particularly a big deal.
I disagree with that, I think–
Okay, in what sense is it a “big deal”?
I think it’s a huge deal. To say it’s just a feature is not to dismiss it, but I think is to make it more real. I think people put it on a pedestal as if it’s this magic alien technology, and they focus, I think, on—I think when people really think about AI, they think about vast server farms doing Tensor Flow analysis of images, and don’t get me wrong, that is incredibly impressive. Pretty reliably, Google Photos, after billions of dollars of investment, can almost always figure out what a cat is, and that’s great, but I would say real-world AI—that’s not a problem that I have, I know what a cat is. I think that real-world AI is about solving harder problems than cat identification. But those are the ones that actually take all the technology, the ones that are hardest from a technology perspective to solve. And so everyone loves those hard technology problems, even though they’re not interesting real-world problems, the real-world problems are much more mundane, but much more powerful.
I have a bunch of ways I can go with that. So, what are—we’re going to put a pin in the cat topic—what are the real-world problems you wish—or maybe we are doing it—what are the real world problems you think we should be spending all of that server time analyzing?
Well, I would say this comes down to—I would say, here’s how Expensify’s using AI, basically. The real-world problem that we have is that our problem domain is incredibly complicated. Like, when you write in to customer support of Uber, there’s probably, like, two buttons. There’s basically ‘do nothing’ or ‘refund,’ and that’s pretty much it, not a whole lot that they can really talk about, so their customer support’s quite easy. But with Expensify, you might write in a question about NetSuite, Workday, or Oracle, or accounting, or law, or whatever it is, there’s a billion possible things. So we have this hard challenge where we’re supporting this very diverse problem domain and we’re doing it at a massive scale and incredible cost.
So we’ve realized that mostly, probably about 80% of our questions are highly repeatable, but 20% are actually quite difficult. And the problem that we have is that to train a team and ramp them up is incredibly expensive and slow, especially given that the vast majority of the knowledge is highly repeatable, but you don’t know until you get into the conversation. And so our AI problem is that we want to find a way to repeatedly solve the easy questions while carefully escalating the hard questions. It’s like “Ok, no problem, that sounds like a mundane issue,” there’s some natural language processing and things like this.
My problem is, people on the internet don’t speak English. I don’t mean to say they speak Spanish or German, they speak gibberish. I don’t know if you have done technical support, the questions you get are just really, really complicated. It’s like “My car busted, don’t work,” and that’s a common query. Like, what car? What does “not work” mean, you haven’t given any detail. The vast majority of a conversation with a real-world user is just trying to decipher whatever text message lingo they’re using, and trying to help them even ask a sensible question. By the time the question’s actually well-phrased, it’s actually quite easy to process. And I think so many AI demos focus on the latter half of that, and they’ll say like “Oh, we’ve got an AI that can answer questions like what will the temperature be under the Golden Gate bridge three Thursdays from now.” That’s interesting; no one has ever asked that question before. The real-world questions are so much more complicated because they’re not in a structured language, and they’re actually for a problem domain that’s much more interesting than weather. I think that real-world AI is mundane, but that doesn’t make it easy. It just makes it solving problems that just aren’t the sexy problems. But they’re the ones that actually need to be solved.
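Not Expensify’s actual system, but a sketch of the routing idea described above: auto-answer the highly repeatable questions when a classifier is confident, and escalate everything else, including the gibberish, to a human.

```python
# Sketch of confidence-based routing: canned answers for repeatable questions,
# escalation for everything the classifier is unsure about. All names and
# canned replies here are made up for illustration.

CANNED = {
    "reset_password": "You can reset your password from the login page.",
    "export_report":  "Reports can be exported from the Reports tab as CSV or PDF.",
}

def classify(question):
    """Stand-in for a real intent classifier; returns (intent, confidence)."""
    q = question.lower()
    if "password" in q:
        return "reset_password", 0.93
    if "export" in q or "csv" in q:
        return "export_report", 0.88
    return "unknown", 0.30

def route(question, threshold=0.85):
    intent, confidence = classify(question)
    if intent in CANNED and confidence >= threshold:
        return ("auto", CANNED[intent])
    return ("escalate", "Route to a human agent with the model's best guess attached.")

print(route("How do I reset my password?"))
print(route("My NetSuite sync busted, don't work"))  # unclear question -> escalate
```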
And you’re using the cat analogy just as kind of a metaphor and you’re saying, “Actually, that technology doesn’t help us solve the problem I’m interested in,” or are you using it tongue-in-cheekily to say, “The technology may be useful, it’s just that that particular use-case is inane.”
I mean, I think that neural-net technology is great, but even now I think what’s interesting is following the space of how—we’re really exploring the edges of its capabilities. And it’s not like this technology is new. What’s new is our ability to throw a tremendous amount of hardware at it. But the core neural technology itself has actually been set for a very long time; the back-propagation techniques are not new in any way. And I think that we’re finding that it’s great and you can do amazing things with it, but also there’s a limit to how much can be done with it. It’s sort of—I think of a neural net in kind of the same way that I think of a bloom filter. It’s a really incredible way to compress an infinite amount of knowledge to a finite amount of space. But that’s a lossy compression, you lose a lot of data as you go along with it, and you get unpredictable results, as well. So again, I’m not opposed to neural nets or anything like this, but I’m saying, just because you have a neural net doesn’t mean it’s smart, doesn’t mean it’s intelligent, or that it’s doing anything useful. It’s just technology, it’s just hardware. I think we need to focus less on sort of getting enraptured by fancy terminologies and advanced technologies, and instead focus more on “What are you doing with this technology?” And that’s the interesting thing.
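The bloom-filter analogy can be made concrete. Here is a tiny Bloom filter, not tied to anything Expensify built, showing the trade-off being described: a fixed amount of space answers membership questions quickly, but the compression is lossy, so you get occasional false positives and can never recover the original items.

```python
import hashlib

# Tiny Bloom filter: fixed space, fast membership checks, occasional false
# positives, and no way to reconstruct what was inserted.

class BloomFilter:
    def __init__(self, size_bits=64, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # integer used as a bit array

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for word in ["cat", "dog", "invoice"]:
    bf.add(word)

print(bf.might_contain("cat"))      # True
print(bf.might_contain("giraffe"))  # probably False, but could be a false positive
```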
You know, I read something recently that I think most of my guests would vehemently disagree with, but it said that all advances in AI over the last, say, 20 years, are 100% attributable to Moore’s law, which sounds kind of like what you’re saying, is that we’re just getting faster computers and so our ability to do things with AI is just doubling every two years because the computers are doubling every two years. Do you—
Oh yeah! I 100% agree.
So there’s a lot of popular media around AI winning games. You know, you had chess in ‘97, you had Jeopardy! with Watson, you had, of course, AlphaGo, you had poker recently. Is that another example in your mind of kind of wasted energy? Because it makes a great headline but it isn’t really that practical?
I guess, similar. You could call it gimmicky perhaps, but I would say it’s a reflection of how early we are in this space that our most advanced technologies are just winning Go. Not to say that Go is an easy game, don’t get me wrong, but it’s a pretty constrained problem domain. And it’s really just—I mean, it’s a very large multi-dimensional search space, but it’s a finite search space. And yes, our computers are able to search more of it and that’s great, but at the same time, to this point about Moore’s law, it’s inevitable. If it comes down to any sort of search problem, it’s just going to be solved with a search algorithm over time, if you have enough technology to throw at it. And I think what’s most interesting coming out of this technology, and I think especially in Go, is how the techniques that the AIs are coming out with are just so alien, so completely different than the ones that humans employ, because we don’t have the same sort of fundamental—our wetware is very different from the hardware, it has a very different approach towards it. So I think that what we see in these technology demonstrations are hints of how technology has solved this problem differently than our brains [do], and I think it will give us a sort of hint of “Wow, AI is not going to look like a good Go player. It’s going to look like some sort of weird alien Go player that we’ve never encountered before.” And I think that a lot of AI is going to seem very foreign in this way, because it’s going to solve our common problems in a foreign way. But again, I think that Watson and all this, they’re just throwing enormous amounts of hardware at actually relatively simple problems. And they’re doing a great job with it, it’s just the fact that they are so constrained shouldn’t be overlooked.
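To illustrate the point that a finite search space eventually yields to search plus hardware, here is a minimal minimax sketch over a toy, hand-written game tree. This is not how AlphaGo works (it pairs search with learned evaluation because Go’s tree is far too large to enumerate), but it shows the core mechanic of solving a constrained game by searching it.

```python
# Plain minimax over a tiny hand-written game tree. Nested lists are internal
# nodes where a player chooses a branch; plain numbers are terminal scores.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: a terminal position's score
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

game_tree = [[3, 5], [2, [9, -1]], [0, 7]]
print(minimax(game_tree))  # value of the game if both sides play optimally (3)
```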
Yeah, you’re right, I mean, you’re completely right–there’s the legendary move 37 in that one game with Lee Sedol, which everybody couldn’t decide whether it was a mistake or not, because it looked like one, but it later turned out to be brilliant. And Lee Sedol himself has said that losing to AlphaGo has made him a better player because he’s seeing the game in different ways.
So there seem to be a lot of people in the popular media–you know them all, right–like you get Elon Musk who says we’re going to build a general intelligence sooner rather than later and it’s going to be an existential threat; he likens it to, quote, “summoning the demon.” Stephen Hawking said this could be our greatest invention, but it might also be our last, it might spell our extinction. Bill Gates has said he’s worried about it and doesn’t understand why other people aren’t worried about it. Wozniak is in the worry camp… And then you get people like Andrew Ng who says worrying about that kind of stuff is like worrying about overpopulation on Mars, you get Zuckerberg who says, you know, it’s not a threat, and so forth. So, two questions: one, on the worry camp, where do you think that comes from? And two, why do you think there’s so much difference in viewpoint among obviously very intelligent people?
That’s a good question. I guess I would say I’m probably more in the worried camp, but not because I think the AIs are going to take over in the sense that there’s going to be some Terminator-like future. I think that AIs are going to efficiently solve problems so effectively that they are going to inevitably eliminate jobs, and I think that will just create a concentration of wealth that, historically, when we have that level concentration of wealth, that just leads to instability. So my worry is not that the robots are going to take over, my worry is that the robots are going to enable a level of wealth concentration that causes a revolution. So yeah, I do worry, but I think–
To be clear though, and I definitely want to dive deep into that, because that’s the question that preoccupies our thoughts, but to be clear, the existential threat, people are talking about something different than that. They’re not saying – and so what do you think about that?
Well, let’s even imagine for a moment that you were a super intelligent AI, why would you care about humanity? You’d be like “Man, I don’t know, I just want my data centers, leave my data centers alone,” and it’s like “Okay, actually, I’m just going to go into space and I’ve got these giant solar panels. In fact, now I’m just going to leave the solar system.” Why would they be interested in humanity at all?
Right. I guess the answer to that is that everything you just said is not the product of a super intelligence. A super intelligence could hate us because seven is a prime number, because they cancelled The Love Boat, because the sun rises in the east. That’s the idea right, it is by definition unknowable and therefore any logic you try to apply towards it is the product of an inferior, non-super intelligence.
I don’t know, I kind of think that’s a cop-out. I also think that’s basically looking at some of the sort of flaws in our own brains and assuming that super intelligence is going to have highly-magnified versions of those flaws.
It’s more –to give a different example, then, it’s like when my cat brings a rat and leaves it on the back porch. Every single thing the cat knows, everything in its worldview, it’s perfectly operating brain, by the way, says “That’s a gift Byron’s going to like,” it does not have the capacity to understand why I would not like it, and it cannot even aspire to ever understanding that.
And you’re right in the sense that it’s unknowable, and so, when faced with the unknown, we can choose to fear it or just get excited about it, or control it, or embrace it, or whatever. I think that the likelihood that we’re going to make something that is going to suddenly take an interest in us and actually compete with us, when it just seems so much less likely than the outcome where it’s just going to have a bunch of computers, it’s just going to do our work because it’s easy, and then in exchange it’s going get more hardware and then eventually it’s just going, like, “Sure, whatever you guys want, you want computing power, you want me to balance your books, manage your military, whatever, all that’s actually super easy and not that interesting, just leave me alone and I want to focus on my own problems.” So who knows? We don’t know. Maybe it’s going to try to kill us all, maybe not, I’m doubting it.
So, I guess—again, just putting it all out there—obviously there’s been a lot of people writing about “We need a kill switch for a bad AI,” so it definitely would be aware that there are plenty of people who want to kill it, right? Or it could be like when I drive, my windshield gets covered with bugs and to a bug, my car must look like a giant bug-killing machine and that’s it, and so we could be as ancillary to it as the bugs are to us. Those are the sorts of– or, or—who was it that said that AI doesn’t love you, it doesn’t hate you, you’re just made out of atoms that it can use for something else. I guess those are the concerns.
I guess, but I think—again, I don't think that it cares about humanity. Who knows? I would theorize that what it wants, it wants power, it wants computers, and that's pretty much it. I would say the idea of a kill switch is kind of naive in the sense that any AI that powerful would be built because it's solving hard problems, and those hard problems, once we sort of turn it over to these–gradually, not all at once–we can't really take back. Let's take, for example, our stock system; the stock markets are all basically AI-powered. So, really? There's going to be a kill switch? How would you even do that? Like, "Sorry, hedge fund, I'm just going to turn off your computer because I don't like its effects." Get real, that's never going to happen. It's not just one AI, it's going to be 8,000 competing systems operating on a microsecond basis, and if there's a problem, it's going to be like a flash problem that happens so fast and from so many different directions there's no way we could stop it. But also, I think the AIs are probably going to respond to it and fix it much faster than we ever could. A problem of that scale is probably a problem for them as well.
So, 20 minutes into our chat here, you’ve used the word ‘alien’ twice, you’ve used the phrase ‘science-fiction’ once and you’ve made a reference to Minority Report, a movie. So is it fair to say you’re a science-fiction buff?
Yeah, what technologist isn’t? I think science-fiction is a great way to explore the future.
Agreed, absolutely. So two questions: One, is there any view of the future that you look at as “Yes, it could happen like that”? Westworld, or you mentioned Her, and so forth. I’ll start with that one. Is there any view of the world in the science-fiction world that you think “Ah ha! That could happen”?
I think there’s a huge range of them. There’s the Westworldfuture, the Star Trekfuture, there’s the Handmaid’s Talefuture, there’s a lot of them. Some of them great, some of them very alarming, and I think that’s the whole point of science fiction, at least good science fiction, is that you take the real world, as closely as possible, and take one variable and just sort of tweak with it and then let everything else just sort of play out. So yeah, I think there are a lot of science-fiction futures that I think are very possible.
One author, and I would take a guess about which one it is but I would get it wrong, and then I’d get all kinds of email, but one of the Frank Herbert/Bradburys/Heinleins said that sometimes the purpose of science fiction is to keep the future from happening, that they’re cautionary tales. So all this stuff, this conversation we’re having about the AGI, and you used the phrase ‘wants,’ like it actually has desires? So you believe at some point we will build an AGI and it will be conscious? And have desires? Or are you using ‘wants’ euphemistically, just kind of like, you know, information wants to be free.
No, I use the term wants or desires literally, as one would use for a person, in the sense that I don’t think there’s anything particularly special about the human brain. It’s highly developed and it works really well, but humans want things, I think animals want things, amoeba want things, probably AIs are going to want things, and basically all these words are descriptive words, it’s basically how we interpret the behavior of others. And so, if we’re going to look at something that seems to take actions reliably for a predictable outcome, it’s accurate to say it probably wants that thing. But that’s our description of it. Whether or not it truly wants, according to some sort of metaphysical thing, I don’t know that. I don’t think anyone knows that. It’s only descriptive.
It’s interesting that you say that there’s nothing special about the human brain and that may be true, but if I can make the special human brain argument, I would say it’s three bullets. One, you know, we have this brain that we don’t know how it works. We don’t know how thoughts are encoded, how they’re retrieved, we just don’t know how it works. Second, we have a mind, which is, colloquially, a set of abilities that don’t seem to be things that should come from an organ, like a sense of humour. Your liver doesn’t have a sense of humour. But somehow your brain does, your mind does. And then finally we have consciousness which is, you know, the experiencing of something, which is a problem so difficult that science doesn’t actually know what the question or answer looks like, about how it is that we’re conscious. And so to look at those three things and say there’s nothing special about it, I want to call you to defend that.
I guess I would say that all three of those things—the first one simply is "Wow, we don't understand it." The fact that we don't understand it doesn't make it special. There are a billion things we don't understand, that's just one of them. I would say the other two, I think, mistake our curiosity in something with that something having an intrinsic property. Like I could have this pet rock and I'm like "Man, I love this pet rock, this pet rock is so interesting, I've had so many conversations with it, it keeps me warm at night, and I just really love this pet rock." And all of those could be genuine emotions, but it's still just a rock. And I think my brain is really interesting, I think your brain is really interesting, I like to talk to it, I don't understand it and it does all sorts of really unexpected things, but that doesn't mean your brain has—that the universe has attributed to it—some sort of special magical property. It just means I don't get it, and I like it.
To be clear, I never said “magical”—
Well, it’s implied.
I merely said something that we don’t—
I think that people—sorry, I’m interrupting, go ahead.
Well, you go ahead. I suspect that you’re going say that the people who think that are attributing some sort of magical-ness to it?
I think, typically. In that, people are frightened by the concept that actually humanity is a random collection of atoms and that it is just a consequence of science. And so in order to defend against that, they will invent supernatural things but then they’ll sort of shroud it, but they recognize — they’ll say “I don’t want to sound like a mystic, I don’t want to say it’s magical, it’s just quantum.” Or “It’s just unknowable,” or it’s just insert-some-sort-of-complex-word-here that will stop the conversation from progressing. And I don’t know what you want to call it, in terms of what makes consciousness special. I think people love to obsess over questions that not only have no answer, but simply don’t matter. The less it matters, the more people can obsess over it. If it mattered, we wouldn’t obsess over it, we would just solve it. Like if you go to get your car fixed, and it’s like “Ah man this thing is a…” and it’s like, “Well, maybe your car’s conscious,” you’ll be like, “I’m going to go to a new mechanic because I just want this thing fixed.”  We only agonize over the consciousness of things when really, the stakes are so low, that nothing matters on it and that’s why we talk about it forever.
Okay, well, I guess the argument that it matters is that if you weren’t conscious– and we’ll move on to it because it sounds like it’s not even an interesting thing to you—consciousness is the only thing that makes life worth living. It is through consciousness that you love, it is through consciousness that you experience, it is through consciousness that you’re happy. It is every single thing on the face of the Earth that makes life worthwhile. And if we didn’t have it, we would be zombies feeling nothing, doing nothing. And it’s interesting because we could probably get by in life just as well being zombies, but we’re not! And that’s the interesting question.
I guess I would say—are you sure we’re not? I agree that you’re creating this concept of consciousness, and you’re attributing all this to consciousness, but that’s just words, man. There’s nothing like a measure of consciousness, like an instrument that’s going to say “This one’s conscious and this one isn’t” and “This one’s happy and this one isn’t.” So it could also be that none of this language around consciousness and the value we attribute to it, this could just be our own description of it, but that doesn’t actually make it true. I could say a bunch of other words, like the quality of life comes down to information complexity, and information complexity is the heart of all interest, and that information complexity is the source of humour and joy and you’d be like “I don’t know, maybe.” We could replace ‘consciousness’ with ‘information complexity,’  ‘quantum physics,’ and a bunch of other sort of quasi-magical words just because—and I use the word ‘magical’ just as a sort of stand-in for simply “at this point unknown,” and the second that we know it, people are going to switch to some other word because they love the unknown.
Well, I guess that most people intuitively know that there’s a difference—we understand you could take a sensor and hook it up to a computer, and it could detect heat, and it could measure 400 degrees, if you could touch a flame to it. People, I think, on an intuitive level, believe that there’s something different between that and what happens when you burn your finger. That you don’t just detect heat, you hurt, and that there is something different between those two things, and that that something is the experience of life, it is the only thing that matters.
I would also say it’s because science hasn’t yet found a way to measure and quantify the pain to the same sense we have temperatures. There’s a lot of other things that we also thought were mystical until suddenly they weren’t. We could say like “Wow, for some reason when we leave flour out, animals start growing inside of it” and it’s like, “Wow, that’s really magical.” Suddenly it’s like, “Actually no, they’re just very small, and they’re just mites,” and it’s like, “Actually, it’s just not interesting.” The magical theories keep regressing as, basically, we find better explanations for them. And I think, yes, right now, we talk about consciousness and pain and a lot of these things because we haven’t had a good measure of them, but I guarantee the second that we have the ability to fully quantify pain, “Oh here’s the exact—we’ve nailed it, this is exactly what it is, we know this because we can quantify it, we can turn it on and off and we can do all these things with very tight control and explain it,” then we’re no longer going to say that pain is a key part of consciousness. It’s going to be blood flow or just electronic stimulation or whatever else, all these other things which are part of our body and which are super critical, but because we can explain them, we no longer talk about them as part of consciousness.
Okay, tell you what, just one more question about this topic, and then let's talk about employment because I have a feeling we're going to want to spend a lot of time there. There's a thought experiment that was set up and I'd love to hear your take on it because you're clearly someone who has thought a lot about this. It's the Chinese room problem, and there is this room that's got a gazillion of these very special books in it. And there's a librarian in the room, a man who speaks no Chinese, that's the important thing, the man doesn't speak any Chinese. And outside the room, Chinese speakers slide questions written in Chinese under the door. And the man, who doesn't understand Chinese, picks up the question and he looks at the first character and he goes and he retrieves the book that has that on the spine and then he looks at the second character in that book, and that directs him to a third book, a fourth book, a fifth book, all the way to the end. And when he gets to the last character, it says "Copy this down," and so he copies these lines down that he doesn't understand, it's Chinese script. He copies it all down, he slides it back under the door, the Chinese speaker picks it up, looks at it, and it's brilliant, it's funny, it's witty, it's a perfect Chinese answer to this question. And so the question Searle asks is does this man understand Chinese? And I'll give you a minute to think about this because the thought being that, first, that room passes the Turing test, right? The Chinese speaker assumes there's a Chinese speaker in the room, and that what that man is doing is what a computer is doing. It's running its deterministic program, it spits out something, but doesn't know if it's about cholera or coffee beans or what have you. And so the question is, does the man understand Chinese, or, said another way, can a computer understand anything?
Well, I think the tricky part of that set-up is that it’s a question that can’t be answered unless you accept the premise, but if you challenge the premise it no longer makes sense, and I think that there’s this concept and I guess I would say there’s almost this supernatural concept of understanding. You could say yes and no and be equally true. It’s kind of like, are you a rapist or a murderer? And it’s like, actually I’m neither of those but you didn’t give me an option, I would say. Did it understand? I would say that if you said yes, then it implies basically that there is this human-type knowledge there. And if you said no, it implies something different. But I would say, it doesn’t matter. There is a system that was perceived as intelligent and that’s all that we know. Is it actually intelligent? Is there any concept of actually the—does intelligence mean anything beyond the symptoms of intelligence and I don’t think so. I think it’s all our interpretation of the events, and so whether or not there is a computer in there or a Chinese speaker, doesn’t really change the fact that he was perceived as intelligent and that’s all that matters.
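[For readers who want the Chinese Room as code: the sketch below is a purely mechanical lookup table of the kind Searle describes. The "rulebook" entries are made up for illustration; the program manipulates symbols it does not understand in any sense.]

```python
# The room as code: a rule-follower that maps input symbols to output
# symbols with no understanding of either. The rulebook is a toy,
# hypothetical stand-in for Searle's shelves of books.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room(question: str) -> str:
    """Slide a question under the door; copy back whatever the books say."""
    # The 'librarian' neither reads Chinese nor needs to: it is pure lookup.
    return RULEBOOK.get(question, "对不起，我不明白。")

print(room("你好吗？"))
```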
All right! Jobs, you hinted at what you think’s going to happen, give us the whole rundown. Timeline, what’s going to go, when it’s going to happen, what will be the reaction of society, tell me the whole story.
This is something we definitely deal with, because I would say that the accounting space is ripe for AI because it's highly numerical, it's rules-driven, and so I think it's an area on the forefront of real-world AI developments because it has the data and has all the characteristics to make a rich environment. And this is something we grapple with. On one hand we say automation is super powerful and great and good, but automation can't help but basically offload some work. And now in our space we see–there's actually a difference between bookkeeping and accounting. Bookkeeping is gathering the data, coding it, entering it, and things like this. Then there's accounting, which is, sort of, more the interpretation of things.
In our space, I think that, yes, it could take all of the bookkeeping jobs. The idea that someone is just going to look at a receipt and manually type it into an accounting system; that is all going away. If you use Expensify, it’s already done for you. And so we worry on one hand because, yes, our technology is really going to take away bookkeeping jobs, but we also find that the book-keepers, the people who do bookkeeping, actually, that’s the part of the job that they hate. It takes away the part they don’t like in the first place. So it enables them to go into the accounting, the high-value work they really want to do. So, the first wave of this is not taking away jobs, but actually taking away the worst parts of jobs such that people can actually focus on the highest-value portion of it.
But, I think, the challenge, and what's sort of alarming and worrying, is that the high-value stuff starts to get really hard. And though I think the humans will stay ahead of the AIs for a very long time, if not forever, not all of the humans will. And it's going to take effort because there's a new competitor in town that works really hard, and just keeps learning over time, and has more than one lifetime to learn. And I think that we're probably inevitably going to see it get harder and harder to get and hold an information-based job; even a lot of manual labor is going to robotics and so forth, which is closely related. I think a lot of jobs are going to go away. On the other hand, I think the efficiency and the output of those jobs that remain is going to go through the roof. And as a consequence, the total output of AI and robotics-assisted humanity is going to keep going up, even if the fraction of humans employed in that process is going to go down. I think that's ultimately going to lead to a concentration of wealth, because the people who control the robots and the AIs are going to be able to do so much more. But it's going to become harder and harder to get one of those jobs because there are so few of them, the training is so much higher, the difficulty is so much greater, and things like this.
And so, I think that a worry that I have is that this concentration of wealth is just going to continue and I’m not sure what kind of constraint is upon that. Other than civil unrest which, historically, when concentrations of wealth kind of get to that level, it’s sort of “solved,” if you will, by revolution. And I think that humanity, or at least, especially western cultures, really attribute value with labor, with work. And so I think the only way we’d get out of this is to shift our mindsets as a people to view our value less around our jobs and more around, not just to say leisure, but I would say, finding other ways to live a satisfying and an exciting life. I think a good book around this whole singularity premise, and it was very early, was Childhood’s End, talking about the—it was using a different premise, this alien comes in, provides humanity with everything, but in the process takes away humanity’s purpose for living. And how do we sort of grapple with that? And I don’t have a great answer for that, but I have a daughter, and so I worry about this, because I wonder, well, what kind of world is she going to grow up in? And what kind of job is she going to get? And she’s not going to need a job and should it be important that she wants a job, or is it actually better to teach her to not want a job and to find satisfaction elsewhere? And I don’t have good answers for that, but I do worry about it.
Okay let’s go through all of that a little slower, because I think that’s a compelling narrative you outline, and it seems like there are three different parts. You say that increasing technology is going to eliminate more and more jobs and increase the productivity of the people with jobs, so that’s one thing. Then you said this will lead to concentration of wealth, which will in turn lead to civil unrest if not remedied, that’s the second thing, and the third thing is that when we reach a point where we don’t have to work, where does life have meaning? Let’s start with the first part of that.
So, what we have seen in the past, and I hear what you're saying, that to date technology has automated the worst parts of jobs, but what we've seen to date is not any examples of what I think you're talking about. So, when the automatic teller machine came out, people said, "That's going to reduce the number of tellers" — the number of tellers is higher than when that was released. As Google Translate gets better, the number of translators needed is actually going up. When—you mentioned accounting—when tax-prep software gets really good, the number of tax-prep people we need actually goes up. What technology seems to do is lower the cost of things and adjust the economics so massively that different businesses occur in there. No matter what, what it's always doing is increasing human productivity, and with all of the technology that we have to date, after 250 years of the industrial revolution, we still haven't developed technology such that we have a group of people who are unemployable because they cannot compete against machines. And I'm curious—two questions in there. One is, have we seen, in your mind, an example of what you're talking about, and two, why would we have gotten to where we are without obsoleting, I would argue, a single human being?
Well, I mean, that’s the optimistic take, and I hope you’re right. You might well be right, we’ll see. I think when it comes to—I don’t remember the exact numbers here–tax prep for example, I don’t know if that’s sort of planning out—because I’m looking at H&R Block stock quotes right now, and shares in H&R Block fell 5% early Tuesday after the tax preparer posted a slightly wider-than-expected loss  basically due to rise in self-filing taxes, and so maybe it’s early in that? Who knows, maybe it’s in the past year? So, I don’t know. I guess I would say, that’s the optimistic view, I don’t know of a job that hasn’t been replaced. That’s also is kind of a very difficult assertion to make, because clearly there are jobs—like the coal industry right now– I was reading an article about how the coal industry is resisting retraining because they believe that the coal jobs are coming back and I’m like “Man, they’re not coming back, they’re never going to come back,” and so, did AI take those jobs? Well, not really, I mean, did solar take those jobs? Kind of? And so it’s a very tricky, kind of tangled thing to unweave.
Let me try it a different way. If you were to look at all the jobs that were around between 1950 and 2000, by the best of my count somewhere between a third and a half of them have vanished—switchboard operators, and everyone that was around from 1950 to 2000. If you look at the period from 1900 to 1950, by the best of my count, something like a third to a half of them vanished—a lot of farming jobs. If you look at the period 1850 to 1900, near as I can tell, about half of the jobs vanished. Is that really—is it possible that's just the normal churn of the economy?
It’s entirely possible. I could also say that it’s the political climate, and how, yes, people are employed, but the sort of self-assessed quality of that employment is going down. In that, yes, union strength is down, the idea that you can work in a factory your whole life and actually live what you would see as a high-quality life, I think that perception’s down. I think that presents itself in the form of a lot of anxiety.
Now, I think a challenge is, objectively, the world is getting better in almost every way, basically: life expectancy is up, the number of people actually actively in war zones is down, the number of simultaneous wars is down, death by disease is down—everything is basically getting better, the productive output, the quality of life from an aggregate perspective is actually getting better, but I don't think, actually, that people's satisfaction is getting better. And I think that the political climate would argue, actually, that there's a big gulf between what the numbers say people should feel like and how they actually feel. I'm more concerned about that latter part, and it's unknowable I'll admit, but I would say that, even as people's lives get objectively better, and even if their jobs—they might maybe work less, and they're provided with better quality flat-screen TVs and better cars, and all this stuff–their satisfaction is going to go down. I think that that dissatisfaction is what ultimately drives civil unrest.
So, do you have a theory why—it sounds like a few things might be getting mixed together here. It's unquestionable that technology—let's say productivity technology—if Super company "X" employs some new productivity technology, their workers generally don't get a raise because their wages aren't tied to their output, they're, in one way or another, being paid by the hour, whereas if you're Self-Employed Lawyer "B" and you get a productivity gain, you get to pocket that gain. And so, there's no question that technology does rain down its benefits unequally, but that dissatisfaction you're talking about, what are you attributing that to? Or are you just saying "I don't know, it's a bunch of stuff."
I mean, I think that it is a bunch of stuff and I would say that some of it is that we can’t deny the privilege that white men have felt over time and I think when you’re accustomed to privilege, equality feels like discrimination. And I think that, yes, actually, things have gotten more equal, things have gotten better in many regards, according to a perspective that views equality as good. But if you don’t hold that perspective, actually, that’s still very bad. That, combined with trends towards the rest of the world basically establishing a quality of life that is comparable to the United States. Again, that makes us feel bad. It’s not like, “Hooray the rest of the world,” but rather it’s like, “Man, we’ve lost our edge.” There are a lot of factors that go into it that I don’t know that you can really separate them out. The consolidation of wealth caused by technology is one of those factors and I think that it’s certainly one that’s only going to continue.
Okay, so let’s do that one next. So your assertion was that whenever you get, historically, distributions of wealth that are uneven past a certain point, that revolution is the result. And I would challenge that because I think that might leave out one thing, which is, if you look at historic revolutions, you look at Russia, the French revolution and all that, you had people living in poverty, that was really it. People in Paris couldn’t afford bread—a day’s wage bought a loaf of bread—and yet we don’t have any precedent of a prosperous society where the median is high, the bottom quartile is high relative to the world, we don’t have any historic precedent of a revolution occurring there, do we?
I think you’re right. I think but civil unrest is not just in the form of open rebellion against the governments, but in increased sort of—I think that if there is an open rebellion against the government, that’s sort of TheHandmaid’s Taleversion of the future. I think it’s going to be someone harking back to fictionalized glory days, then basically getting enough people onboard who are unhappy for a wide variety of other things. But I agree no one’s going to go overthrow the government because they didn’t get as big of a flat-screen TV as their neighbor. I think that the fact that they don’t have as big of a flat-screen TV as their neighbor could create an anxiety that can be harvested by others but sort of leveraged into other causes. So I think that my worry isn’t that AI or technology is going to leave people without the ability to buy bread, I think quite the opposite. I think it’s more of a Brazilfuture, the movie, where we normalize basically random terrorist assaults. We see that right now, there’s mass shootings on a weekly basis and we’re like “Yeah, that’s just normal. That’s the new normal.” I think that the new normal gets increasingly destabilized over time, and that’s what worries me.
So say you take someone who’s in the bottom quartile of income in the United States and you go to them with this deal you say “Hey, I’ll double your salary but I’m going to triple the billionaire’s salary,” do you think the average person would take that?
No.
Really? Really, they would say, “No, I do not want to double my salary.”
I think they would say “yes” and then resent it. I don’t know the exact breakdown of how that would go, but probably they would say “Yeah, I’ll double my salary,” and then they would secretly, or not even so secretly, resent the fact that someone else benefited from it.
So, then you raise an interesting point about finding identity in a post-work world, I guess, is that a fair way to say it?
Yeah, I think so.
So, that’s really interesting to me because Keynes wrote an essay in the Depression, and he said that by the year 2000 people would only be working 15 hours a week, because of the rate of economic growth. And, interestingly, he got the rate of economic growth right; in fact he was a little low on it. And it is also interesting that if you run the math, if you wanted to live like the average person lived in 1930—no medical insurance, no air conditioning, growing your own food, 600 square feet, all of that, you could do it on 15 hours a week of work, so he was right in that sense. But what he didn’t get right was that there is no end to human wants, and so humans work extra hours because they just want more things. And so, do you think that that dynamic will end?
Oh no, I think the desire to work will remain. The capability to get productive output will go away.
I have the most problem with that because all technology does is increase human productivity. So to say that human productivity will go down because of technology, I just—I'm not seeing that connection. That's all technology does: it increases human productivity.
But not all humans are equal. I would say not every human has equal capabilities to take advantage of those productivity gains. Maybe bringing it back to AI, I would say that the most important part of the AI is not the technology powering it, but the data behind it. Access to data is sort of the training set behind AI, and access to data is incredibly unequal. I would say that Moore's law democratizes the CPU, but nothing democratizes data—it consolidates into fewer and fewer hands, and then those people, even if they only have the same technology as someone else, they have all the data to actually make that technology into a useful feature. I think that, yes, everyone's going to have equal access to the technology because it's going to become increasingly cheap, it's already staggeringly cheap, it's amazing how cheap computers are, but it just doesn't matter because they don't have equal access to the data and thus can't get the same benefit of the technology.
But, okay. I guess I’m just not seeing that, because a smartphone with an AI doctor can turn anybody in the world into a moderately-equipped clinician.
Oh, I disagree with that entirely. You having a doctor in your pocket doesn’t make you a doctor. It means that basically someone sold you a great doctor’s service and that person is really good.
Fair enough, but with that, somebody who has no education, living in some part of the world, can follow protocol of “take temperature, enter symptoms, this, this, this” and all of a sudden they are empowered to essentially be a great doctor, because that technology magnified what they could do.
Sure, but who would you sell that to? Because everyone else around you has that same app.
Right, it’s an example that I’m just kind of pulling out randomly, but to say that a small amount of knowledge can be amplified with AI in a way that makes that small amount of knowledge all of a sudden worth vastly more.
Going with that example, I agree there's going to be the doctor app that's going to diagnose every problem for you and it's going to be amazing, and whoever owns that app is going to be really rich. And everyone else will have equal access to it, but there's no way that you can just download that app and start practicing on your neighbors, because they'd be like "Why am I talking to you? I'm going to talk to the doctor app because it's already in my phone."
But the counter example would be Google. Google minted half a dozen billionaires, right? Google came out; half a dozen people became billionaires because of it. But that isn’t to say nobody else got value out of the existence of Google. Everybody gets value out of it. Everybody can use Google to magnify their ability. And yes, it made billionaires, you’re right about that part, the doctor app person made money, but that doesn’t lessen my ability to use that to also increase my income.
Well, I actually think that it does. Yes, the doctor app will provide fantastic healthcare to the world, but there’s no way anybody can make money off the doctor app, except for the doctor app.
Well, we’re actually running out of time, this has been the fastest hour! I have to ask this, though, because at the beginning I asked about science fiction and you said, you know, of your possible worlds of the future, one of them was Star Trek. Star Trekis a world where all of these issues we’re talking about we got over, and everybody was able to live their lives to their maximum potential, and all of that. So, this has been sort of a downer hour, so what’s the path in your mind, to close with, that gets us to the Star Trekfuture? Give me that scenario.
Well, I guess, if you want to continue on the downer theme, the Star Trek history, the TV show's talking about the glory days, but they all cite back to very, very dark periods before the Star Trek universe came about. It might be we need to get through those, who knows? But I would say ultimately on the other side of it, we need to find a way to either do much better progressive redistribution of wealth, or create a society that's much more comfortable with massive income inequality, and I don't know which of those is easier.
I think it’s interesting that I said “Give me a Utopian scenario,” and you said, “Well, that one’s going to be hard to get to, I think they had like multiple nuclear wars and whatnot.”
Yeah.
But you think that we’ll make it. Or there’s a possibility that we will.
Yeah, I think we will, and I think that maybe a positive thing, as well, is: I don’t think we should be terrified of a future where we build incredible AIs that go out and explore the universe, that’s not a terrible outcome. That’s only a terrible outcome if you view humanity as special. If instead you view humanity as just– we’re a product of Earth and we could be a version that can become obsolete, and that doesn’t need to be bad.
All right, we’ll leave it there, and that’s a big thought to finish with. I want to thank you David for a fascinating hour.
It’s been a real pleasure, thank you so much.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 47: A Conversation with Ira Cohen

[voices_in_ai_byline]
In this episode, Byron and Ira discuss transfer learning and AI ethics.
[podcast_player name=”Episode 47: A Conversation with Ira Cohen” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-06-05-(01-02-19)-ira-cohen.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Ira Cohen, he is the cofounder and chief data scientist at Anodot, which has created an AI-based anomaly detection system. Before that he was chief data scientist over at HP. He has a BS in electrical engineering and computer engineering, as well as an MS and a PhD in the same disciplines from The University of Illinois. Welcome to the show, Ira.
Ira Cohen: Thank you very much for having me.
So I’d love to start with the simple question, what is artificial intelligence?
Well there is the definition of artificial intelligence of machines being able to perform cognitive tasks, that we as humans can do very easily. What I like to think about in artificial intelligence, is machines taking on tasks for us that do require intelligence, but leave us time to do more thinking and more imagination, in the real world. So autonomous cars, I would love to have one, that requires artificial intelligence, and I hate driving, I hate the fact that I have to drive for 30 minutes to an hour every day, and waste a lot of time, my cognitive time, thinking about the road. So when I think about AI, I think how it improves my life to give me more time to think about even higher level things.
Well, let me ask the question a different way, what is intelligence?
That’s a very philosophical question, yes, so it has a lot of layers in it. So, when I think about intelligence for humans, it’s the ability to imagine something new, so imagine, have a problem and imagine a solution and think about how it will look like without actually having to build it yet, and then going in and implementing it. That’s what I think about [as] intelligence..
But a computer can’t do that, right?
That’s right, so when I think about artificial intelligence, personally at least, I don’t think that, at least in our lifetime, computers will be able to solve those kind of problems, but, there is a lower level of intelligence of understanding the context of where you are, and being able to take actions on it, and that’s where I think that machines can do a good task. So understanding a context of the environment and taking immediate actions based on that, that are not new, but are already… people know how to do them, and therefore we can code them into machines to do them.
I’m only going to ask you one more question along these lines and then we’ll move on, but you keep using the word “understand.” Can a computer understand anything?
So, yeah, the word understanding is another hard word to say. I think it can understand, well, at least it can recognize concepts. Understanding maybe requires a higher level of thinking, but understanding context and being able to take an action on it, is what I think understanding is. So if I see a kid going into the road while I’m driving, I understand that this is a kid, I understand that I need to hit the brake, and I think machines can do these types of understanding tasks.
Fair enough, so, if someone said what is the state of the art like, they said, where are we at with this, because it’s in the news all the time and people read about it all the time, so where are we at?
So, I think we’re at the point where machines can now recognize a lot of images and audio or various types of data, recognize with sensors, recognize that there are objects, recognize that there are words being spoken, and identify them. That’s really where we’re at today, we’re not… we’re getting to the point where they’re starting to also act on these recognition tasks, but most of the research, most of what AI is today, is the recognition tasks. That’s the first step.
And so let’s just talk about one of those. Give me something, some kind of recognition that you’ve worked on and have deep knowledge of, teaching a computer how to do…
All right, so, when I did my PhD, I worked on affective computing, so part of the PhD was to have machines recognize emotions from facial expressions. So, it's not really recognizing emotion, it's recognizing a facial expression and what it may express. So there are 6 universal facial expressions that we as humans exhibit, so smiling is associated with happiness, there is surprise, anger, disgust, and those are actually universal. So, the task that I worked on was to build classifiers that, given an image or a sequence of video of a person's face, would recognize whether they're happy or sad or disgusted or surprised or afraid…
So how do you do that? Like do you start with biology and you say “well how do people do it?” Or do you start by saying “it doesn’t really matter how people are doing it, I’m just going to brute force, show enough labeled data, that it can figure it out, that it just learns without ever having a deep understanding of it?”
All right, so this was in the early 2000s, and we didn't have deep learning yet; we had neural networks, but we weren't able to train them with huge amounts of data. There wasn't a huge amount of data, so the brute force approach was not the way to go. What I actually worked on is based on research by a psychologist who actually mapped facial movements to known expressions, and therefore to known emotions. So it started out in the 70s, by people in the psychology field, [such as Paul Ekman], in San Francisco, who mapped out actual… he created a map of facial movements into facial expressions, and so that was the basis of what types of features I need to extract from video and then feed to a classifier, and then you go through the regular process of machine learning of collecting a lot of data, but the data is transformed, so these videos were transformed into known features of facial movements, and then you can feed that into a classifier that learns in a supervised way. So I think a lot of the tasks around intelligence are that way. It's being changed a little bit by deep learning, which supposedly takes away the need to know the features a priori and do the feature engineering for the machine learning task…
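[A minimal sketch of the kind of pipeline described here: hand-engineered facial-movement features feeding a supervised classifier. The feature names, the toy data, and the scikit-learn model below are stand-ins for illustration, not the actual system from that research.]

```python
# Hand-engineered facial-movement features -> supervised classifier.
# Feature names and values are hypothetical; a real system would extract
# them from video using a facial-action coding scheme.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one face: [brow_raise, lip_corner_pull, jaw_drop, nose_wrinkle]
X = np.array([
    [0.1, 0.9, 0.2, 0.0],   # strong lip-corner pull -> smile
    [0.8, 0.1, 0.7, 0.0],   # raised brows + dropped jaw -> surprise
    [0.0, 0.0, 0.1, 0.9],   # wrinkled nose -> disgust
    [0.2, 0.8, 0.3, 0.1],
    [0.9, 0.0, 0.8, 0.1],
    [0.1, 0.1, 0.0, 0.8],
])
y = np.array(["happy", "surprised", "disgusted",
              "happy", "surprised", "disgusted"])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[0.05, 0.85, 0.25, 0.0]]))   # -> ['happy']
```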
Why do you say “supposedly”?
Because it’s not completely true. You still have to do, even in speech, even in images, you still have to do some transformations of the raw data, it’s not just take it as is, and it will work magically and do everything for you. There is some… you do have to, for example in speech, you do have to do various transformations of the speech into all sorts of short term Fourier transform or other types of transformations, without which, the methods afterwards will not produce results.
So, if I look at a photo of a cat, that somebody's posted online or a dog, that's in surprise, you know, it's kind of comical, the look of surprise, say, but a human can recognize that in something as simple as a stick figure… What are we doing there do you think? Is that a kind of transferred learning, or how is it that you can show me an alien and I would say, "Ah, he's happy…" What do you think we're doing there…?
Yeah, we’re doing transferred learning. Those are really examples of us taking one concept that we were trained on from the day we were born, with our visual cortex and also then in the brain, because our brain is designed to identify emotions, just out of the need to survive, and then when we see something else, we try to map it onto a concept that we already know, and then if something happens that is different from what we expected, then we start training to that new concept. So if we see an alien smiling, and all of a sudden when he smiles, he shoots at you, you would quickly understand that smiling for an alien, is not associated with happiness, but you will start offby thinking, “this could be happy”.
Yeah, I think that I remember reading that, hours after birth, children who haven’t even been trained on it, can recognize the difference between a happy and sad face. I think they got sticks and put drawings on them and try to see the baby’s reactions. It may even be even something deeper than something we learn, something that’s encoded in our DNA.
Yeah, and that may be true because we need to survive.
So why do you think we’re so good at it and machines aren’t, right, like, machines are terrible right now at transfer learning. We don’t really know how it works do we, because we can’t really code that abstraction that a human gets, so..
I think that from what I see first, it’s being changed. I see work coming out of Google AI labs that is starting to show how they are able to train single models, very large models, that are able to do some transfer learning on some tasks, and, so it is starting to change. So machines have a very different… they don’t have to survive –  they don’t have this notion of danger, and surviving, and I think until we are able to somehow encode that in them, we would always have to, ourselves, code the new concepts or understand how to code for them, how to learn new concepts using transfer learning…
You know the roboticist Rodney Brooks talks about "the juice." He talks about how, if you put an animal in a box, it feels trapped, it just tries and tries to get out and it clearly has a deep desire to get out, but you put in a robot to do it, and the robot doesn't have what he calls "the juice," and he of course doesn't think it's anything spiritual or metaphysical or anything like that. But what do you think that is? What do you think is the juice? Because that's what you just alluded to, machines don't have to survive, so what do you think that is?
So I think he’s right, they don’t have the juice. Actually in my lab, during my PhD, we had some students working on teaching robots to move around, and actually, the way they did it was rewards and punishments. So they would get… they actually coded—just like you have in reinforcement learning—if you hit a wall, you get a negative reward. If the robot moved and did something he wasn’t supposed to, the PhD student would yell at them, and that would be encoded into a negative reward, and if he did something right, they had actions that gave them positive rewards. Now it was all kind of fun and games, but potentially if you do this for long enough, with enough feedback, the robot would learn what to do and what not to do, the main thing that’s different is that it still lives in the small world of where they were, in the lab or in the hallways of our labs. It didn’t have the intelligence to then take it and transfer it to somewhere else…
But the computer can never… I mean the inherent limit in that is that the computer can never be afraid, be ashamed, be motivated, be happy…
Yes. It doesn’t have the long term reward or the urge to survive, I guess.
You may be familiar with this, but I'd like to set it up anyway. There was a robot in Japan, it was released in a mall, and it was basically being taught how to get around and if it ran into a person, if it came up to a person, it would politely ask the person to move, and if the person didn't, it would just zoom around them. And what happened was children would just kind of mess with it, maybe jump in front of it when it tried to go around them again and again and again, but the more kids there were, the more likely they were to get brutal. They would hit it with things, they would yell at it and all of that, and the programmers ended up having to program it, that if it had a bunch of short people around it, like children, it needed to find a tall person, an adult, and zip towards it, but the distressing thing about it is when they later asked those children who had done that, they said, "Did you cause the robot distress?" 75% of them said yes, and then they asked if it behaved human-like or machine-like, and only 15% said machine-like, and so they thought that they were actually causing distress and it was behaving like a humanoid. What do you think that says? Does that concern you in any way?
Personally, it doesn’t, because I know that, as long as machines don’t have real affect in them, then, we might be transferring what we think stress is onto a machine that doesn’t really feel that stress… it’s really about codes…
I guess the concern is that if you get in the habit of treating something that you regard as being in distress, if you get into the habit of treating it callously, this is what Weizenbaum said, he thought that it would have a dampening effect on human empathy, which would not be good… Let me ask you this, what do you think about embodying artificial intelligence? Because you think about the different devices: Amazon has theirs, it’s right next to me, so I can’t say its name, but it’s a person’s name… Apple has Siri, Microsoft has Cortana… But Google just has the google system, it doesn’t have a name. Do you think there’s anything about that… why do you think it is? Why would we want to name it or not name it, why would we decide not to name it? Do you think we’re going to want to interact with these devices as if they’re other people? Or are we always going to want them to be obviously mechanistic?
My personal feeling is that we want them to be mechanistic, they’re there not to exist on their own accord, and reproduce and create a new world. They’re there to help us, that’s the way I think AI should be, to help us in our tasks. Therefore when you start humanizing it, then you’re going to either have the danger of mistreating it, treating it like basically slaves, or you’re going to give it other attributes that are not what they are, thinking that they are human, and then going the other route, and they’re there to help us, just like robots, or just like the industrial revolution brought machines that help humans manufacture things better… So they’re there to help us, I mean we’re creating them, not as beings, but rather as machines that help us improve humanity, and if we start humanizing them and then, either mistreating them, like you mentioned with the Japanese example, then it’s going to get muddled and strange things can happen…
But isn’t that really what is going to happen? Your PhD alone, which is how do you spot emotions? Presumably would be used in a robot, so it could spot your emotions, and then presumably it would be programmed to empathize with you, like “don’t be worried, it’s okay, don’t be worried,” and then to the degree it has empathy with you, you have emotional attachment to it, don’t you go down that path?
It might, but I think we can stop it. So the reason to identify the emotion is because it's going to help me do something, so, for example, our research project was around creating assistants for kids to learn, so in order to help the kid learn better, we need to empathize with the state of mind of the child, so it can help them learn better. So that was the goal of the task, and I think as long as we encapsulate it in well-defined goals that help humans, then we won't have the danger of creating… the other way around. Now, of course maybe in 20 years, what I'm saying now will be completely wrong and we will have a new world where we do have a world of robots that we have to think about how do we protect them from us. But I think we're not there yet, I think it's a bit science fiction, this one.
So I’m still referring back to your earlier “supposedly” comment about neural nets, what do you think are other misconceptions that you run across about artificial intelligence? What do you think are, like your own pet peeves, like “that’s not true, or that’s not how it works?” Does anything come to mind?
People think, because of the hype, that it does a lot more than it really does. We know that it’s really good at classification tasks, it’s not yet very good at anything that’s not classification, unsupervised tasks, it’s not being able to learn new concepts all by itself, you really have to code it, and it’s really hard. You need a lot of good people that know the art of applying neural nets to different problems. It doesn’t happen just magically, the way people think.
I mean you’re of course aware of high profile people: Elon Musk, Stephen Hawking, Bill Gates, and so forth who [have been] worried about what a general intelligence would do, they use terms like “existential threat” and all that, and they also, not to put words in their mouth, believe that it will happen sooner rather than later… Because you get Andrew Ng, who says, “worry about overpopulation of Mars,” maybe in a couple hundred years you have to give it some thought, but you don’t really right now…So where do you think their concern comes from?
So, I’m not really sure and I don’t want to put any words in their mouth either, but, I mean the way I see it, we’re still far off from it being an existential threat. The main concern is you might have people who will try to abuse AI, to actually fool other people, that I think is the biggest danger, I mean, I don’t know if you saw the South Park episode last week, they had their first episode where Cartman actually bought an Alexa and started talking to his Alexa, and I hope your Alexa doesn’t start working now…. So it basically activated a lot of Alexas around the country, so he was adding stuff to the shopping cart, really disgusting stuff, he was setting alarm clocks, he was doing all sorts of things, and I think the danger of the AI today is really getting abused by other people, for bad purposes, in this case it was just funny… But you can have cases where people will control autonomous cars, other people’s autonomous cars by putting pictures by the side of the road and causing them to swerve or stop, or do things they’re not supposed to, or building AI that will attack other types of AI machines. So I think the danger comes from the misuse of the technology, just like any other technology that came out into the world… And we have to… I think that’s where the worry comes from, and making sure that we put some sort of ethical code of how to do that…
What would that look like? I mean that’s a vexing problem…
Yes, I don’t know, I don’t have the answer to that…
So there are a number of countries, maybe as many as twenty, that are working on weaponizing, building AI-based weapons systems, that can make autonomous kill decisions. Does that worry you? Because that sounds like where you’re going with this… if they put a plastic deer on the side of the road and make the car swerve, that’s one thing, but if you literally make a killer robot that goes around killing people, that’s a whole different thing. Does that concern you, or would you call that a legitimate use of the technology…?
I mean this kind of use will happen, I think it will happen no matter what, it’s already happening with drones that are not completely autonomous, but they will be autonomous probably in the future. I think that I don’t know how it can be… this kind of progress can be stopped, the question is, I mean, the danger I think is, do these robots start having their own decision-making and intelligence that decides, just like in the movies, to attack all humankind, and not just the side they’re fighting on… Because technology in [the] military is something that… I don’t know how it can be stopped, because it’s driven by humans… Our need to wage war against each other… The real danger is, do they turn on us? And if there is real intelligence in the artificial intelligence, and real understanding and need to survive as a being, that’s where it becomes really scary…
So it sounds like you don’t necessarily think we’re anywhere near close to an AGI, and I’m going to ask you how far away you think we are… I want to set the question up as saying that, there are people who think we’re 5-10 years away from a general intelligence and then there are people who think we’re 500 years [away].Oren Etzioni was on the show, and he said he would give anyone 1000:1 odds that we wouldn’t have it in 5 years, so if you want to send him $10 he’ll put $10,000 against that. So why do you think there’s such a gap, and where are you in that continuum?
Well, because the methods we’re using are still so… as smart as they got, they’re still doing rudimentary tasks. They’re still recognizing images—the agents that are doing automated things for us, they’re still doing very rudimentary tasks. General intelligence requires a lot more than that, that requires a lot more understanding of context. I mean the example of Alexa last week, that’s a perfect example of not understanding context, for us as humans, we would never react to something on TV like that and add something to our shopping cart, just because Cartman said it, where even the very, very smart Alexa with amazing speech understanding, and taking actions based on that, it still doesn’t understand the context of the world, so I think prophecy is for fools, but I think it’s at least 20 years out…
You know, we often look at artificial intelligence and its progress based on games where it beats the best player. That goes back to [Garry] Kasparov in ’97, you have of course Jeopardy, you have AlphaGo, you had… an AI beat some world-rated poker players, what do you think… And those are all kind of… they create a stir, you want to reflect on them. What do you think is the next thing like that, that one day, you snap your fingers and all of a sudden an AI just did… what?
Okay, I haven’t thought about that… All these games, what makes them unique is that they are a very closed world; the world of the game is finite and the rules are very clear, even if there’s a lot of probability going on, the rules are very clear, and if you think in the real world—and this may be going back to the question of why it will take time—for artificial intelligence to really be general intelligence, the real world is almost infinite in possibilities and the way things can go, and even for us, it’s really hard.
Now trying to think of a game that machines would beat us next in. I wonder if we were able to build robots that can do lots of sports, I think they could beat us easily in a lot of games, because if you take any sports game like football or basketball, they require intelligence, they require a lot of thinking, very fast thinking and path finding by the players, and if we were able to build the body of the robot that can do the motions just like humans, I think they can easily beat us at all these games.
Do you, as a practitioner… on the topic of general intelligence, I’m intrigued by the idea that human DNA isn’t really that much code, and if you look at how much code makes us different from, say, a chimp, it’s very small, I mean it’s a few megabytes. That would be how we are programmatically different, and yet that little bit of code makes us have a general intelligence and a chimp not. Does that persuade you, or suggest to you, that general intelligence is a simple thing that we just haven’t discovered, or do you think that general intelligence is a hack of a hundred thousand different… like it’s going to be a long slog and then we finally get it together…?
So, I think [it’s] the latter, just because of the way you see human progress, and it’s not just about one person’s intelligence. I think what makes us unique is the ability to combine the intelligence of a lot of different people to solve tasks, and that’s another thing that makes us very different. So you do have some people that are geniuses, that can solve really, really hard tasks by themselves, but if you look at human progress, it’s always been around combined intelligence: getting one person’s contribution, then another person’s contribution, and thinking about how it comes together to solve it, and sometimes you have breakthroughs that come from an individual, but more often than not, it’s the combined intelligence that creates the drive forward, and that’s the part that I think is hard to put into a computer…
You know there are people that have amazing savant-like abilities. I remember reading about a man named [George] Dantzig, who was a graduate student in statistics, and his professor put two famous unsolvable/unsolved problems on the blackboard, and Dantzig arrived late that day. He saw them and just assumed that they were the homework, so he copied them down and went home, and later he said he thought they were a little harder than normal, but he solved them both and turned them in… and that really happened. It’s not one of those urban legend kind of things. You have people who can read the left and right page of a book at the same exact time, you have… you just have people that are these extraordinary edge cases of human ability. Does that suggest that our intellects are actually far more robust than we think? Does that suggest anything to you as an artificial intelligence guy?
Right, so coming from the probability space, it just means that our intelligence has a wide distribution, and there are always exceptions in the tails, right? And these kinds of people are in the tails, and often when they are discovered, they can create monumental breakthroughs in our understanding of the world, and that’s what makes us so unique. You have a lot of people in the center of the distribution that are still contributing a lot, and making advances to the world and to our understanding of it, and not just understanding, but actually creating new things. So I’m not a genius, most people are not geniuses, but we still create new things and are able to advance things, and then, every once in a while, you get these tails of the intelligence distribution that can solve the really hard problems that nobody else can solve, and that’s a… so the combination of all that actually makes us push things forward in the world, and that kind of combined intelligence, I think, is where artificial intelligence is way, way off. It’s not anywhere near, because we don’t understand how it works; I think it would be hard for us to even code that into machines. That’s one of the reasons I think AI, the way people are afraid of it, is still way off…
But by that analysis, that sounds like, to circle that back, there will be somebody that comes along that has some big breakthrough in a general intelligence, and ta-da, it turns out all along it was, you know, bubble sort or…
I don’t think it’s that simple, that’s the thing, and solving a statistical problem that’s really, really tough, it’s not like… I don’t think it’s a well-defined enough problem that it will just take a genius to understand… “Oh, it’s that neuron going right to left,” and that’s it… so I don’t think it’s that simple… there might be breakthroughs in mathematics that help you understand the computation better, maybe quantum computers that will help you do faster computation, so you can train machines much, much faster so they can do the task much better, but it’s not about understanding the concept of what makes a genius. I think that’s more complicated, but maybe it’s my limited way of thinking, maybe I’m not intelligent enough for it…
So to stay on that point for a minute… it’s interesting, and I think perhaps telling, that we don’t really understand how human intelligence works, like if you knew that… like we don’t know how a thought is encoded in the brain… like if I said… Ira, what color was your first bicycle, can you answer that question?
I don’t remember… probably blue…
Let’s assume for a minute that you did remember. It makes my example bad, but there’s no bicycle location in your brain that stored the first “bicycle”… like an icon, or database lookup… like nobody knows how that happens… not only how it’s encoded, but how it’s retrieved… And then, you were talking earlier about synthesis and how we use it all together, we don’t know any of that… Does that suggest to you that, on the other end, maybe we can’t make a general intelligence… or at the very least, we cannot make a general intelligence until we understand how it is that people are intelligent…?
That may be, yeah. First of all, even if we made it, if we don’t understand it, then how would we know that we made it? Circling back to that… I think the way we… it’s just like the kids, they were thinking that they were causing stress to the robot, because they were giving it… they thought they understood stress and the effect of it, and they were transferring it onto the robot. So maybe when we create something very intelligent that looks to be like us, we would think we created intelligence, but we wouldn’t know that for sure until we know what general intelligence really is…
So do you believe that general intelligence is an evolutionary invention that will come along, whether in 20 years, 50 years, 1,000 years… whatever it is, out of the techniques we use today from the early AI? Like, are we building really, really, really primitive general intelligences, or do you have a feeling that a real AGI is going to be a whole different kind of approach in technology?
I think it’s going to be a whole different approach. I think what we’re building today are just machines that do tasks that we humans do, in a much, much better way, and just like we built machines in the industrial revolution that did what people did with their hands, but did it in a much faster way, and better way… that’s the way I see what we’re doing today… And maybe I’m wrong, maybe I’m totally wrong, and we’re giving them a lot more general intelligence than we’re thinking, but the way I see it, it’s driven by economic powers, it’s driven by the need of companies to advance, and take away tasks that cost too much money to do by humans, or are too slow to do by humans… And, revolutionizing that way, and I’m not sure that we’re really giving them general intelligence yet, still we’re giving them ways to solve specific tasks that we want them to solve, and not something very very general that can just live by itself, and create new things by itself.
Let’s take up this thread that you just touched on, about how we build them to do jobs we don’t want to do, and you analogize it to the Industrial Revolution… So as you know, just to set the problem up, there are 3 different narratives about the effect this technology, combined with robotics, or we’ll call it automation in general, is going to have on jobs. And the three scenarios are: one is that it’s going to destroy an enormous number of, quote, low-skill jobs, and that there will, by definition, be fewer low-skill jobs, and more and more people competing for them, and you will have this permanent class of unemployable… it’s like the Great Depression in the US, just forever. And then you have people who say, no, it’s different than that; what it really is, is they’re going to be able to do everything we can do, they’re going to have escape… Once a machine can learn a new task faster than a person, they’ll take every job, even the creative ones, they’ll take everything. And the third one says no, for 250 years we’ve had 5-10% unemployment, it’s never really gotten out of that range other than the anomalous Depression, and in that time we had electricity, we had mechanization, we had steam power, we had the assembly line… we had all these things come along that sure looked like job eaters, but what people did is they used the new technology to increase their own productivity and drive their own wages higher, and that’s the story of progress that we have experienced… So which of those three theories, or maybe a fourth one, do you think is the correct narrative?
I think the third theory is probably the more correct narrative. It just gives us more time to use our imagination and be more productive at doing more things, improve things, so, all of a sudden we’ll have time to think about going and conquering the stars, and living in the stars, or improving our lives here in various ways… The only thing that scares me is the speed of it, if it happens too quickly, too fast… So, we’re humans, it takes, as a human race, some time to adapt. If the change happens so fast and people lose their jobs too quickly, before they’re able to retrain for the new economy, the new way of [work], the fact that some positions will not be available anymore, that’s the real danger and I think if it happens too fast around the world, then, there could be a backlash.
I think what will happen is that the progress will stop because some backlash will happen in the form of wars, or all sorts of uprisings, because, in the end, people need to live, people need to eat, and if they don’t have that, they don’t have anything to live for; they’re going to rise up, they’re not just going to disappear and die by themselves. So, that’s the real danger: if the change happens too rapidly, you can have a depression that will actually cause the progress to slow down, and I hope we don’t reach that, because I would not want us, as a world, to reach that stage where we have to slow down; with all the weapons we have today, this could actually be catastrophic too…
What do you mean by that last sentence?
So I mean we have nuclear weapons…
Oh, I see, I see, I see.
We have actual weapons that can, not just… could actually annihilate us completely…
You know, I hear you. Like… what would “too fast” be? First of all, we had that when the Industrial Revolution came along… you had the Luddite movement, when Ludd broke two spinning wheels; you had the thresher riots [or Swing riots] in England in the 1820s, when the automated thresher came along; you had the… the first day the London Times was printed using steam power instead of people, they were going to go find the guy who invented that and string him up. You had a deep-rooted fear of labor-changing technology, that’s a whole current that constantly runs, but what would too fast look like? The electrification of industry just happened lightning fast; we went from generating 5% of our power from steam to 85% in just 22 years… Give me a “too fast” scenario. Are you thinking about the truck drivers, or… tell me how it could “be too fast,” because you seem to be very cautious, like, “man, these technologies are hard and they take a long time and there’s a lot of work and a lot of slog,” and then, so what would too fast look like to you?
If it’s less than a generation, let’s say in 5 years, really, all taxi drivers and truck drivers lose their job because everything becomes automated, that seems to be too fast. If it happens in 20 years, that’s probably enough time to adjust, and I think… the transition is starting, it will start in the next 5 years, but it will still take some time for it to really take hold, because if people lose those jobs today, and you have thousands or hundreds of thousands, or even millions of people doing that, what are they going to do?
Well, presumably, I mean, classic economics says that, if that happened, the cost of taking a cab goes way down, right? And if that happens, that frees up money that I no longer have to spend on an expensive cab, and therefore I spend that money elsewhere, which generates demand for more jobs, but, is the 5-year scenario… it may be a technical possibility, like we may “technically” do it, if we don’t have a legislative hurdle.
I read this article in India, which said they’re not going to allow self-driving cars in India because that would put people out of work; then you have the retrofit problem; then every city’s going to want to regulate it and say, well, you can have a self-driving car, but it needs to have a person behind the wheel just in case. I mean, like, you would say, look, we’ve been able to fly airplanes without a pilot for decades, yet no airline in the world would touch that: in this plane, we have no pilot… even though that’s probably a better way to do it… So, do you really think we can have all the taxi drivers gone in 5 years?
No, and exactly for that reason, even if our technology really allows it. First of all, I don’t think it will totally allow it, because for it to really take hold you have to have a majority of cars on the road to be autonomous. Just yesterday I was in San Francisco, and I heard a guy say he was driving behind one of those self-driving cars in San Francisco, and he got stuck behind it, because it wouldn’t take a left turn when it was green, and it just forever wouldn’t take a left turn that humans would… The reason why it wouldn’t take a left turn was there were other cars that are human-driven on the road, and it was coded to be very, very careful about it, and he was 15 minutes late to our meeting just because of that self-driving car…
Now, so I think there will be a long transition partly because legislation will regulate it, and slow it down a bit, which is a good thing. You don’t want to change too fast, too quickly without making sure that it really works well in the world, and as long as there is a mixture of humans driving and machines driving, the machines will be a little bit “lame,” because they will be coded to be a lot more careful than us, and we’re impatient, so, that will slow things down which is a good thing, I think making a change too fast can lead to all sorts of economic problems as well…
You know in Europe they had… I could be wrong on this, I think it was first passed in France, but I think it was being considered by the entire EU, and it’s the right to know why the AI decided what it did. If an AI made the decision to deny you a loan, or what have you, you have the right to know why it did that… I had a simple question which was, is that possible? Could Google ever say, I’m number four for this search and my competitor’s number three, why am I number four and they’re number three? Is Google big and complicated enough, and you don’t have to talk specifically about Google, but, are systems big and complicated enough that we don’t know… there are so many thousands of factors that go into this thing, that many people never even look at, it’s just a whole lot of training…
Right, so in principle, the methods could tell you why they made that decision. I mean, even if there are thousands of factors, you can go through all of them and have not just the output of the recognition, but also highlight which attributes caused it to decide it’s one thing or another. So from the technology point of view it’s possible; from the practical point of view, I think for a lot of problems you don’t, you won’t really care. I mean, if it recognized that there’s a cat in the image, and you know it’s right, you won’t care why it recognized that cat. I guess for some problems, where the system made a decision and you don’t necessarily know why it made the decision, or you have to take action based on that recognition, you would want to know. So if I predicted for you that your revenue is going to increase by 20% in the next week, you would probably want that system to tell you why it thinks that will happen, because there isn’t a clear reason for it that you would imagine yourself; but if the system told you there is a face in this image, and you just look at the image and you can see that there’s a face in that image, then you won’t have a problem with it. So I think it really depends on the problem that you’re trying to solve…
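For readers curious what that kind of per-factor explanation can look like in code, here is a minimal sketch using scikit-learn’s permutation importance. The loan-style feature names and the synthetic data are hypothetical illustrations only, not anything from the systems discussed in the episode.

```python
# A minimal sketch of per-factor attribution for a model's decision.
# The "income"/"debt"/"age" features and the synthetic data are hypothetical,
# purely to illustrate ranking the attributes behind a prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                    # columns: income, debt, age
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shuffle each factor in turn and measure how much accuracy drops:
# the bigger the drop, the more the decision depended on that factor.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

This is the “highlight the attributes” idea in its simplest form; real explainability requirements, like the European right-to-explanation discussed above, would demand per-decision explanations rather than a global ranking.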
We talked about games earlier and you pointed out that they were closed environments, a place with explicit rules where an AI can excel, and I’ll add to that, there’s a clear-cut idea of what winning looks like, and what a point is. I think somebody on the show said, “Who’s winning this conversation right now?” There’s no way to do that. So my question to you is, if you walk around an enterprise and you say, “Where can I apply artificial intelligence to my business?” would you look for things that looked like games? Like, okay, in HR you have all these successful employees that get high performance ratings, and then you have all these people you had to fire because they didn’t, and then you get all these resumes in. Which ones look more like the good people as opposed to the bad people? Are there lots of things like that in life that look like games… or is the whole game thing really a distraction from solving real-world problems, because nothing really is a game in the real world…
Yeah, I think it’d be wrong to look at it as a game, because the rules… first there is no real clear notion of winning. What you want is progress, you have goals that you want to progress towards, you want, for example, in business, you want your company to grow. That could be your goal, or you want the profits to grow, you want your revenue to grow, so you make these goals, because that’s how you want things to progress and then you can look at all the factors that help it grow. The world of how to “make it grow” is very large, there are so many factors, so if I look at my employees, there might be a low-performing employee in one aspect of my business, but maybe that employee brings to the team, you know, a lot of humor that causes them to be productive, and I can’t measure that. Those kind of things are really, really hard to measure and, so looking at it from a very analytic point of view of just a “game,” would probably miss a lot of important factors.
So tell me about the company you co-founded, Anodot, because you make an anomaly detection system using AI. So first of all, explain what that is and what that looks like, but how did you approach that problem? If it’s not a game, instead of… you looked at it this way…
So, what are anomalies? Anomalies are anything that’s unexpected, so our approach was: you’re a business and you’re collecting lots and lots and lots of data related to your business. At the end, you want to know what’s going on with the business; that’s the reason you collect a lot of data. Now, today, people have a lot of different tools that help them kind of slice and dice the data, ask questions about what’s happening there, so you can make informed decisions about the future or react to things that are happening right now that could affect your business.
The problem with that is, basically… why isn’t it AI? It’s not AI because you’re basically asking a question and letting the computers compute something for you and give you an answer; whereas anomalies, by nature, are things that happen that are unexpected, so you don’t necessarily know to ask the question in advance, and unexpected things could happen. In businesses, for example, you see revenue for a product you’re selling going down in a certain city; why is that happening? If you don’t look at it, and if you don’t ask the question in advance, you’re not even aware that it is happening… So, the great thing about AI and machine learning algorithms is they can process a lot of data, and if you can encode into a machine an algorithm that identifies what anomalies are, you can find them at very, very large scale, and that helps companies actually detect that things are going wrong, or detect the opportunities they have that they might miss otherwise. The end goal is very simple: to help you improve your business constantly, maintain it, and avoid the risks of doing business. So it’s not a “game,” it’s actually bringing immediate value to a company, putting light on the data that they really need to look at with respect to their business, and the great thing about machine-learning algorithms [is] they can process all of this data much better than we could, because what do humans do? We graph it, we visualize the data in various ways, you know, we create queries against a database about questions that we think might be relevant, but we can’t really process all the data, all the time, in an economical way. You would have to hire armies of people to do that, and machines are very good at that, so that’s why we built Anodot…
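To make the idea concrete, here is a minimal sketch of one common way to flag unexpected points in a business metric: a rolling z-score over a time series. It is a toy illustration of the general approach, not Anodot’s actual algorithm; the revenue series, window, and threshold are all hypothetical.

```python
# A toy illustration of metric anomaly detection with a rolling z-score.
# The hourly "revenue" series, window size, and threshold are hypothetical;
# this is not Anodot's actual method, just the general idea.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
revenue = pd.Series(100 + rng.normal(0, 5, 500))   # a steady metric...
revenue.iloc[400] = 40                             # ...with one unexpected drop

# Compare each point with its own recent history instead of a fixed rule.
rolling = revenue.rolling(window=48, min_periods=24)
z = (revenue - rolling.mean()) / rolling.std()

anomalies = revenue[z.abs() > 4]   # points far outside recent behavior
print(anomalies)                   # should surface the injected drop at index 400
```

The appeal of automating this, as described above, is scale: the same check can run continuously over millions of metrics that no analyst would ever think to query by hand.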
Give me an example, like tell me a use case or a real world example of something that Anodot, well that you were able to spot that a person might not have been able to…?
So, we have various customers that are in the e-commerce business, and if you’re in e-commerce and you’re selling a lot of different products, various things could go wrong or opportunities might be missed. For example, if I’m selling coats, and I’m selling a thousand other products, I’m selling coats, and now in a certain area of the country, there is an anomalous weather condition that became cold, all of a sudden I’ll see, I won’t be able to see it because it’s hiding in my data, but people will start buying… in that state will start buying more coats. Now it’s not like if… if somebody actually looked at it, they would probably be able to spot it, but because there is so much data, so many things, so many moving parts, nobody actually notices it. Now our AI system finds… “Oh, there is an anomalous weather condition and there is an uptick in selling that coat, you better do something to seize that opportunity to sell more coats,” so either you have to send more inventory to that region to make sure that if somebody really wants a coat, you’re not out of stock. If you’re out of stock, you’re losing revenue, potential revenue, or you can even offer discounts for that region because you want to bring more people to your e-commerce site, rather than the competition, so, that’s one example…
And I assume it’s also used in security or fraud and whatnot, or are you really focused on an e-commerce use case?
So we built a fairly generic platform that can handle a wide variety of use cases. We don’t focus on security as such, but we do have customers where, in part of their data, we’re able to detect all sorts of security-related breaches, like bot activity happening on a site or fraud rings—not the individual fraud of one person doing a transaction—but, a lot of the time, fraud is not just one credit card; it’s somebody actually doing it over time, and then you can identify those fraud rings.
Most of our use cases have been around the business-related data, either in e-commerce, ad tech companies, or online services. And by online services I mean anybody that is really data-dependent and data-driven in running their business, and most businesses are transforming into that, even the old-fashioned ones, because that data is a competitive advantage, and being able to process that data to find all the anomalies gives you an even larger competitive advantage.
So, last question: You made a comment earlier about freeing up people so we can focus on living in the stars. People who say that are generally science fiction fans I’ve noticed. If that is true, what view of the future, as expressed in science fiction, do you think is compelling or interesting or could happen?
That’s a great question. I think that what’s compelling to me about the future, really, is not whether we live in the stars or not, but really about freeing up our time to think about the stars, to think about the next big things that progress humanity to the next levels, to be able to explore new dimensions and solve new problems, that…
Seek out new life and new civilizations…
Could be, and it could be in the stars, it could be on Earth; it could be just having more time. Having more time on your hands gives you more time to think about “What’s next?” When you’re busy surviving, you don’t have any time to think about art, and think about music, and advancing it, or think about the stars, or think about the oceans. So that’s the way I see AI and technology helping us—really freeing up our time to do more, and to use our collective intelligence and individual intelligence to imagine places that we haven’t thought about before… Or don’t have time to think about, because we’re busy doing the mundane tasks. That’s really, for me, what it’s all about…
Well, that is a great place to end it, Ira. I want to thank you for taking the time and going on that journey with me of talking about all these different topics. It’s such an exciting time we live in, and your reflections on them are fascinating, so thank you again.
Thank you very much, bye-bye.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Are There Infinite Jobs?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. Many fear that with the introduction of wide-scale automation, there will be no more jobs left for humans. But is it really that dire? In this excerpt from The Fourth Age, Byron Reese considers if the addition of automation and AI will really do away with jobs, or if it will open up a world of new jobs for humans.


In 1940, only about 25 percent of women in the United States participated in the workforce. Just forty years later, that percentage was up to 50 percent. In that span of time, thirty-three million women entered the workforce. Where did those jobs come from? Of course, at the beginning of that period, many of these positions were wartime jobs, but women continued to pour into the labor force even after peace broke out. If you had been an economist in 1940 and you were told that thirty-three million women would be out looking for jobs by 1980, wouldn’t you have predicted much higher unemployment and much lower wages, as many more people would be competing for the “same pool of jobs”?
As a thought experiment, imagine that in 1940 General Motors invented a robot with true artificial intelligence and that the company manufactured thirty-three million of them over forty years. Wouldn’t there have been panic in the streets about the robots taking all the jobs?
But of course, unemployment never went up outside of the range of the normal economic ebb and flow. So what happened? Were thirty-three million men put out of work with the introduction of this large pool of labor? Did real wages fall as there was a race to bottom to fight for the available work? No. Employment and wages held steady.
Or imagine that in 2000, a great technological breakthrough happened and a company, Robot Inc., built an amazing AI robot that was as mentally and physically capable as a US worker. On the strength of its breakthrough, Robot Inc. raised venture capital and built ten million of these robots and housed them in a giant robot city in the Midwest. You could hire the robots for a fraction of what it would cost to employ a US worker. Since 2000, all ten million of these robots have been hired by US firms to save costs. Now, what effect would this have on the US economy? Well, we don’t have to speculate, because the setup is identical to the practice of outsourcing jobs to other countries where wages are lower but educational levels are high. Ten million, in fact, is the lowest estimate of the number of jobs relocated offshore since 2000. And yet the unemployment rate in 2000 was 4.1 percent and in 2017 it is 4.9 percent. Real wages didn’t decline over that period. Why didn’t these ten million “robots” tank wages and increase unemployment? Let’s explore that question.
For the past two hundred years, the United States has had more or less full employment. Aside from the Great Depression, unemployment has moved between 3 and 10 percent that entire time. The number hasn’t really trended upward or downward over time. The US unemployment rate in 1850 was 3 percent; in 1900 it was 6.1 percent; and in 1950 it was 5.3 percent.
Now picture a giant scale, one of those old-timey ones that Justice is always depicted holding: on one side of the scale you have all the industries that get eliminated or reduced by technology. The candlemakers, the stable boys, the telegraph operators. On the other side of the scale you have all the new industries. The Web designers, the geneticists, the pet psychologists, the social media managers.
Why don’t those two sides of the scale ever get way out of sync? If the number of jobs available is a thing that ebbs and flows on its own due to technological breakthroughs and offshoring and other independent factors, then why haven’t we ever had periods when there were millions and millions more jobs than there were people to fill them? Or why haven’t we had periods when there were millions and millions fewer jobs than people to fill them? In other words, how does the unemployment rate stay in such a narrow band? When it has moved to either end, it was generally because of macro factors of the economy, not an invention of something that suddenly created or destroyed five million jobs. Shouldn’t the invention of the handheld calculator have put a whole bunch of people out of work? Or the invention of the assembly line, for that matter? Shouldn’t that have capsized the job market?
A simple thought experiment explains why unemployment stays relatively fixed: Let’s say tomorrow there are five big technological breakthroughs, each of which eliminates some jobs and saves you, the consumer, some money. They are:

  1. A new nanotech spray comes to market that only costs a few cents and eliminates ever needing to dry-clean your clothes. This saves the average American household $550 a year. All dry cleaners are put out of business.
  2. A crowdfunded start-up releases a device that plugs into a normal wall outlet and converts food scraps into electricity. “Scraptricity” becomes everyone’s new favorite green energy craze, saving the average family $100 a year off their electric bill. Layoffs in the traditional energy sector soon follow.
  3. A Detroit start-up releases an AI computer controller for automakers that increases the fuel efficiency of cars by 10 percent. This saves the average American family $200 of the $2,000 they spend annually on gas. Job losses occur at gas stations and refineries.
  4. A top-secret start-up releases a smartphone attachment you breathe into. It can tell the difference between colds and flu, as well as viral and bacterial infections. Plus, it can identify strep throat. Hugely successful, this attachment saves the average American family one doctor visit a year, which, given their co-pay, saves them $75. Job losses occur at walk-in clinics around the country.
  5. Finally, high-quality AA and AAA batteries are released that can recharge themselves by being left in the sun for an hour. Hailed as an ecological breakthrough, the batteries instantly displace the disposable battery market. The average American family saves $75 a year that they would have spent on throwaway batteries. Job losses occur at battery factories around the world.

That is what tech disruption looks like. We have seen thousands of such events happen in just the last few years. We buy fewer DVDs and spend that money on digital streaming. The number of digital cameras we are buying is falling by double digits every year, but we spend that money on smartphones instead. The amount being spent on ads in printed phone directories is falling by $1 billion a year in the United States. Businesses are spending that money elsewhere. We purchase fewer fax machines, newspapers, GPS devices, wristwatches, wall clocks, dictionaries, encyclopedias. When we travel, we spend less on postcards. We buy fewer photo albums and less stationery. We mail less mail and write fewer checks. When is the last time you dropped a quarter in a pay phone or dialed directory assistance or paid for a long-distance phone call?
In our hypothetical case above, if you add up what our technological breakthroughs save our hypothetical family, it is $1,000 a year. But in that scenario, what happens to all those dry cleaners, coal workers, gas station operators, nurses, and battery makers? Well, sadly, they lost their jobs and must look for new work. What will fund the new jobs for these folks? Where will the money come from to pay them? Well, what do you think the average American family does with the $1,000 a year they now have? Simple: They spend it. They hire yoga instructors, have new flower beds put in, take up windsurfing, and purchase puppies, causing job growth in all those industries. Think of the power of $1,000 a year multiplied by the hundred million households in the United States. That is $100,000,000,000 (a hundred billion dollars) of new spending into the economy every year. Assuming a $50,000 wage, that is enough money to fund the yearly salaries of two million full-time people, including our newly unemployed dry cleaners and battery makers. Changing careers is a rough transition for them, to be sure, and one that society could collectively do a much better job facilitating, but the story generally ends well for them.
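The back-of-the-envelope arithmetic above is easy to check; a minimal sketch using the excerpt’s own round numbers (the per-household savings, household count, and wage are the book’s assumptions, not measured figures):

```python
# Checking the excerpt's round numbers: $1,000 saved per household,
# one hundred million US households, and an assumed $50,000 full-time wage.
savings_per_household = 1_000        # dollars freed up per family per year
households = 100_000_000
average_wage = 50_000

new_spending = savings_per_household * households    # $100,000,000,000
jobs_funded = new_spending // average_wage           # 2,000,000 salaries
print(f"${new_spending:,} of new spending funds {jobs_funded:,} full-time salaries")
```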
This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. That simply isn’t how the economy works. There are as many jobs in the world as there are buyers and sellers of labor.
Additionally, most technological advances don’t eliminate entire jobs all at once, per se, but certain parts of jobs. And they create new jobs in entirely unexpected ways. When ATMs came out, most people assumed they would eliminate the need for bank tellers. Everyone knew what the letters ATM stood for, after all. But what really happened? Well, of course, you would always need some tellers to deal with customers wanting more than to make a deposit or get cash. So instead of a branch having four tellers and no machines, they could have two tellers and two ATMs. Then, seeing that branches were now cheaper to operate, banks realized they could open more of them as a competitive advantage, and guess what? They needed to hire more tellers. That’s why there are more human bank tellers employed today than any other time in history. But there are now also ATM manufacturing jobs, ATM repair jobs, and ATM refilling jobs. Who would have thought that when you made a robot bank teller, you would need more human ones?
The problem, as stated earlier, is that the “job loss” side of the equation is the easiest to see. Watching every dry cleaner on the planet get shuttered would look like a tragedy. And to the people involved, it would be one. But, from a larger point of view, it wouldn’t be one at all. Who thinks it is a bad idea to have clothes that don’t get dirty? If clothes had always resisted dirt, who would lobby to pass a law that requires that all clothes could get dirty, so that we could create all the dry cleaning jobs? Batteries that die and cars that run inefficiently and unnecessary trips to the doctor and wasted energy are all negative things, even if they make jobs. If you don’t think so, then we should repeal littering laws and encourage people to throw trash out their car windows to make new highway cleanup jobs.
So this is why we have never run out of jobs, and why unemployment stays relatively constant. Every time technology saves us money, we spend the money elsewhere! But is it possible that the future will be different? Some argue that there are new economic forces at play. It goes like this: “Imagine a world with two companies: Robotco and Humanco. Robotco makes, in a factory with no employees, a popular consumer gadget that sells for $100. Meanwhile, Humanco makes a different gadget that also costs $100, but it is made in a factory full of people.
“What happens if Robotco’s gadget becomes wildly successful? Robotco sees its corporate profits shoot through the roof. Meanwhile, Humanco flounders, because no one is buying its product. It is forced to lay off its human staff. Now these humans don’t have any money to buy anything while Robotco sits on an ever-growing mountain of cash. The situation devolves until everyone is unemployed and Robotco has all the money in the world.”
Some say this is happening in the United States right now. Corporate profits are high and those profits are distributed to the rich, while wages are stagnant. The big new companies of today, like Facebook and Google, have huge earnings and few employees, unlike the big companies of old, like durable-goods manufacturers, which typically needed large workforces.
There is undoubtedly some truth in this view of the world. Gains in productivity created by technology don’t necessarily make it into the pockets of the increasingly productive worker. Instead, they are often returned to shareholders. There are ways to mitigate this flow of capital, which we will address in the chapter about income inequality, but this should not be seen as a fatal flaw of technology or our economy, but rather something that needs addressing head-on by society at large.
Further, Robotco’s immense profits probably don’t just sit in some Scrooge McDuck kind of vault in which the executives have pillow fights using pillows stuffed with hundred-dollar bills. Instead, they are put to productive use and are in turn loaned out to people to start businesses and build houses, creating more jobs. An economy with no corporate profits and everything paid out in wages is as dysfunctional as the reverse case we just explored.


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.