Voices in AI – Episode 73: A Conversation with Konstantinos Karachalios

[voices_in_ai_byline]

About this Episode

Episode 73 of Voices in AI features host Byron Reese and Konstantinos Karachalios discussing what it means to be human, how technology has changed us in the distant and recent past, and how AI could shape our future. Konstantinos holds a PhD in Engineering and Physics from the University of Stuttgart and is the Managing Director of the IEEE Standards Association.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a PhD in Engineering and Physics from the University of Stuttgart. Welcome to the show.

Konstantinos Karachalios: Thank you for inviting me.

So we were just chatting before the show about ‘what does artificial intelligence mean to you?’ You asked me that, and it’s interesting, because that’s usually my first question: What is artificial intelligence, why is it artificial, and feel free to talk about what intelligence is.

Yes, and first of all we really see a kind of mega-wave around the ‘so-called’ artificial intelligence—it started two years ago. There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda—what are dreams, what are nightmares, and so on. I’m a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let’s say, ‘intelligent systems,’ which can be autonomous or not, and so on. This term is a compromise, because the big question is: ‘what is intelligence?’ Nobody knows what intelligence is, and the definitions vary very widely.

I myself try to understand what human intelligence is, at least, or what some expressions of human intelligence are, and I gave a certain answer to this question when I was invited to testify before the House of Lords. Just to make it brief, I’m not a supporter of the hype around artificial intelligence, and I don’t even support the term itself. I find it obfuscates more than it reveals, and it takes away from human agency, so I think we need to re-frame this dialogue. So I can offer a critique of this, and I also have a certain proposal.

Well, start with your critique. If you think the term is either meaningless or bad, why? What are you proposing as an alternative way of thinking?

Very briefly, because we could really talk for one or two hours about this: My critique is that the whole of this terminology is also associated with a perception of humans and of our intelligence which is quite mechanical. That means there is a whole school of thinking, with many supporters, who believe that humans are just better data processing machines.

Well let’s explore that because I think that is the crux of the issue, so you believe that humans are not machines?

Apparently not. It’s not only that we’re not machines, I think; evidently we’re not machines but biological, and machines are perhaps mechanical, although now the boundary has blurred because of biological machines and so on.

You certainly know the thought experiment that says, if you take what a neuron does and build an artificial one and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn’t it have a mind and wouldn’t it be intelligent, and isn’t that what the human brain initiative in Europe is trying to do?

This is weird; all this you have said starts with a reductionist assumption about the human—that our brain is just a very good computer. It really ignores the sources of our intelligence, which are not all in our brain. Our intelligence really has several other sources. We cannot reduce it to just the synapses and the neurons and so on, and of course nobody can prove this one way or the other. I just want to make clear here that the reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.

And the problem is that the people who support this believe it is scientific, and this I do not accept. This is really a religion, and a reductionist one, and it has consequences for how we treat humans, and this is serious. So if we continue propagating a language which reduces humanity, it will have political and social consequences, and I think we should resist this. I think the best way to express this is an essay by Joichi Ito titled “Resisting Reduction,” and I would really suggest that people read this essay, because it explains a lot that I’m not able to explain here because of time.

So you’re maintaining that if you adopt this, what you’re calling a “religious view,” a “reductionist view” of humanity, then in a way that can go toward undermining human rights and the fact that there is something different about humans that is beyond the purely humanistic.

For instance, I was at an AI conference of a UN organization which brought together all the other UN organizations dealing with technology. It was two years ago, and there they were celebrating a humanoid which was pretending to be a human. The people were celebrating this, and somebody there asked the inventor of this thing: “What do you intend to do with this?” And this person spoke publicly for five minutes and could not answer the question, and then he said, “You know, I think we’re doing it because if we don’t do it, others are going to do it; it is better we are the first.”

I find this a very cynical approach, a very dangerous and nihilistic one. These people with this mentality, we celebrate them as heroes. I think this is too much. We should stop doing this; we should resist this mentality and this ideology. I believe if we make a machine a citizen and treat our citizens like machines, then we’re not going very far as humanity. I think this is a very dangerous path.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 64: A Conversation with Eli David

[voices_in_ai_byline]

About this Episode

Episode 64 of Voices in AI features host Byron Reese and Dr. Eli David discussing evolutionary computation, deep learning and neural networks, as well as AI’s role in improving cyber-security. Dr. David is the CTO and co-founder of Deep Instinct and has published multiple papers on deep learning and genetic algorithms in leading AI journals.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. And today, our guest is Dr. Eli David. He is the CTO and the co-founder of Deep Instinct. He’s an expert in the field of computational intelligence, specializing in deep learning and evolutionary computation. He’s published more than 30 papers in leading AI journals and conferences, mostly focusing on applications of deep learning and genetic algorithms in various real-world domains. Welcome to the show, Eli.
Eli David: Thank you very much. Great to be here.
So bring us up to date, or let everybody know what do we mean by evolutionary computation, and deep learning and neural networks? Because all three of those are things that, let’s just say, they aren’t necessarily crystal clear in everybody’s minds what they are. So let’s begin by defining your terms. Explain those three concepts to us.
Sure, definitely. Now, both neural networks and evolutionary computation take inspiration from intelligence in nature. If instead of trying to come up with smart mathematical ways of creating intelligence, we just look at nature to see how intelligence works there, we can reach two very obvious conclusions. First, the main algorithm, or maybe the only algorithm, in charge of creating intelligence – we started from single-cell organisms billions of years ago, and now we are intelligent organisms – was evolution. So evolutionary computation takes inspiration from the evolutionary process in nature and tries to evolve computer programs so that, from one generation to the next, they become smarter and smarter; the smarter they are, the more they breed, the more children they have, and so, hopefully, the smart genes improve one generation after the other.
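To make that concrete, here is a minimal genetic-algorithm sketch in Python (an illustration only, not code from the interview; the all-ones fitness goal, population size, and mutation rate are arbitrary choices): fitter candidates are more likely to “breed,” and the population tends to improve from one generation to the next.

```python
import random

def fitness(candidate):
    # Illustrative goal: evolve a bit string that is all ones.
    return sum(candidate)

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.01):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Fitter candidates are more likely to be chosen as parents ("breed more").
        parents = random.choices(population,
                                 weights=[fitness(c) + 1 for c in population],
                                 k=pop_size)
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = random.randrange(1, genome_len)            # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                        # occasional mutation
            children.extend([child, a])                       # keep one parent for diversity
        population = children[:pop_size]
    return max(population, key=fitness)

print(fitness(evolve()))  # typically close to 20, the optimum for this toy fitness
```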
The other thing that we will notice when we observe nature is brains. Nearly all the intelligence in humans, or in other mammals and intelligent animals, is due to a neural network, a network of neurons which we refer to as a brain — many small processing units connected to each other via what we call synapses. In our brains, for example, we have many tens of billions of such neurons, each one of them, on average, connected to about ten thousand other neurons, and these small processing units connected to each other create the brain; they create all our intelligence. So the two fields of evolutionary computation and artificial neural networks, nowadays referred to as deep learning, and we will shortly dwell on the difference as well, take direct inspiration from nature.
Now, what is the difference between deep learning, deep neural networks, traditional neural networks, etc? So, neural networks is not a new field. Already in the 1980s, we had most of the concepts that we have today. But the main difference is that during the past several years, we had several major breakthroughs, while until then, we could train only shallow neural networks, shallow artificial neural networks, just a few layers of neurons, just a few thousand synapses, connectors. A few years ago, we managed to make these neural networks deep, so instead of a few layers, we have many tens of layers; instead of a few thousand connectors, we have now hundreds of millions, or billions, of connectors. So instead of having shallow neural networks, nowadays we have deep neural networks, also known as deep learning. So deep learning and deep neural networks are synonyms.
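As a rough illustration of the shallow-versus-deep distinction described above (again an illustrative sketch, not anything from the interview; the layer sizes and counts are arbitrary and the random weights are untrained), the same building block, a layer of weighted sums followed by a nonlinearity, is simply stacked more times in a deep network:

```python
import numpy as np

def layer(x, n_out, rng):
    # One layer: weighted connections ("synapses") plus a nonlinearity.
    # Weights here are random; training them is what "learning" refers to.
    w = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)   # ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))        # a batch of 4 inputs with 32 features

# "Shallow" network: a couple of layers, a few thousand connections.
h = layer(x, 16, rng)
shallow_out = layer(h, 1, rng)

# "Deep" network: the same block stacked many times.
h = x
for _ in range(20):                     # tens of layers instead of a few
    h = layer(h, 64, rng)
deep_out = layer(h, 1, rng)

print(shallow_out.shape, deep_out.shape)  # (4, 1) (4, 1)
```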
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 61: A Conversation with Dr. Louis Rosenberg

[voices_in_ai_byline]

About this Episode

Episode 61 of Voices in AI features host Byron Reese and Dr. Louis Rosenberg talking about AI and swarm intelligence. Dr. Rosenberg is the CEO of Unanimous AI. He also holds a B.S., M.S., and a PhD in Engineering from Stanford.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese and today I’m excited that our guest is Louis Rosenberg. He is the CEO at Unanimous A.I. He holds a B.S. in Engineering, an M.S. in Engineering and a PhD in Engineering all from Stanford. Welcome to the show, Louis.
Dr. Louis Rosenberg: Yeah, thanks for having me.
So tell me a little bit about why do you have a company? Why are you CEO of a company called Unanimous A.I.? What is the unanimous aspect of it?
Sure. So, what we do at Unanimous A.I. is we use artificial intelligence to amplify the intelligence of groups rather than using A.I. to replace people. And so instead of replacing human intelligence, we are amplifying human intelligence by connecting people together using A.I. algorithms. So in layman’s terms, you would say we build hive minds. In scientific terms, we would say we build artificial swarm intelligence by connecting people together into systems.
What is swarm intelligence?
So swarm intelligence is a biological phenomenon that people have been studying, or biologists have been studying, since the 1950s. And it is basically the reason why birds flock and fish school and bees swarm—they are smarter together than they would be on their own. And the way they become smarter together is not the way people do it. They don’t take polls, they don’t conduct surveys; there’s no SurveyMonkey in nature. The way that groups of organisms get smarter together is by forming systems, real-time systems with feedback loops, so that they can essentially think together as an emergent intelligence that is smarter as a unified system than the individual participants would be on their own. And so the way I like to think of an artificial swarm intelligence or a hive mind is as a brain of brains. And that’s essentially what we focus on at Unanimous A.I.: figuring out how to do that among people, even though nature has figured out how to do that among birds and bees and fish, and has demonstrated over millions and hundreds of millions of years how powerful it can be.
So before we talk about artificial swarm intelligence, let’s just spend a little time really trying to understand what it is that the animals are doing. So the thesis is, your average ant isn’t very smart and even the smartest ant isn’t very smart and yet collectively they exhibit behavior that’s quite intelligent. They can do all kinds of things and forage and do this and that, and build a home and protect themselves from a flood and all of that. So how does that happen?
Yeah, so it’s an amazing process, and it’s worth taking one little step back and just asking ourselves, how do we define the term intelligence? And then we can talk about how we can build a swarm intelligence. And so, in my mind, the word intelligence could be defined as a system that takes in noisy input about the world, processes that input, and uses it to make decisions, to have opinions, to solve problems and, ideally, it does it creatively and by learning over time. And so if that’s intelligence, then there’s lots of ways we can think about building an artificial intelligence, which I would say is basically creating a system that involves technology that does some or all of these things: takes in noisy input and uses it to make decisions, have opinions, solve problems, and does it creatively and by learning over time.
Now, in nature, there’s really been two paths by which nature has figured out how to do these things, how to create intelligence. One path is the path we’re very, very familiar with, which is by building up systems of neurons. And so, over hundreds of millions and billions of years, nature figured out that if you build these systems of neurons, which we call brains, you can take in information about the world and you can use it to make decisions and have opinions and solve problems and do it creatively and learn over time. But what nature has also shown is that in many organisms—particularly social organisms—once they’ve built that brain and they have an individual organism that can do this on their own, many social organisms then evolve the ability to connect the brains together into systems. So if a brain is a network of neurons where intelligence emerges, a swarm in nature is a network of brains that are connected deeply enough that a superintelligence emerges. And by superintelligence, we mean that the brain of brains is smarter together than those individual brains would be on their own. And as you described, it happens in ants, it happens in bees, it happens in birds, and fish.
And let me talk about bees because that happens to be the type of swarm intelligence that’s been studied the longest in nature. And so, if you think about the evolution of bees, they first developed their individual brains, which allowed them to process information, but at some point their brains could not get any larger, presumably because they fly, and so bees fly around, their brains are very tiny to be able to allow them to do that. In fact, a honeybee has a brain that has less than a million neurons in it, and it’s smaller than a grain of sand. And I know a million neurons sounds like a lot, but a human has 85 billion neurons. So however smart you are, divide that by 85,000 and that’s a honeybee. So a single honeybee, very, very simple organism and yet they have very difficult problems that they need to solve, just like humans have difficult problems.
And so the type of problem that is actually studied the most in honeybees is picking a new home to move into. And by new home, I mean, you have a colony of 10,000 bees and every year they need to find a new home because they’ve outgrown their previous home and that home could be a hole in a hollow log, it could be a hole at the side of a building, it could be a hole—if you’re unlucky—in your garage, which happened to me. And so a swarm of bees is going to need to find a new home to move into. And, again, it sounds like a pretty simple decision, but actually it’s a life-or-death decision for honeybees. And so for the evolution of bees, the better decision that they can make when picking a new home, the better the survival of their species. And so, to solve this problem, what colonies of honeybees do is they form a hive mind or a swarm intelligence and the first step is that they need to collect information about their world. And so they send out hundreds of scout bees out into the world to search 30 square miles to find potential sites, candidate sites that they can move into. So that’s data collection. And so they’re out there sending hundreds of bees out into the world searching for different potential homes, then they bring that information back to the colony and now they have the difficult part of it: they need to make a decision, they need to pick the best possible site of dozens of possible sites that they have discovered. Now, again, this sounds simple but honeybees are very discriminating house-hunters. They need to find a new home that satisfies a whole bunch of competing constraints. That new home has to be large enough to store the honey they need for the winter. It needs to be ventilated well enough so they can keep it cool in the summer. It needs to be insulated well enough so it can stay warm in cold nights. It needs to be protected from the rain, but also near good sources of water. And also, of course, it needs to be well-located, near good sources of pollen.
And so it’s a complex multi-variable problem. This is a problem that a single honeybee with a brain smaller than a grain of sand could not possibly solve. In fact, a human that was looking at that data would find it very difficult to use a human brain to find the best possible solution to this multi-variable optimization problem. Or a human that is faced with a similar human challenge, like finding the perfect location for a new factory or the perfect features of a new product or the perfect location to put a new store, would be very difficult to find a perfect solution. And yet, rigorous studies by biologists have shown that honeybees pick the best solution from all the available options about 80% of the time. And when they don’t pick the best possible solution, they pick the next best possible solution. And so it’s remarkable. By working together as a swarm intelligence, they are enabling themselves to make a decision that is optimized in a way that a human brain, which is 85,000 times more powerful, would struggle to do.
And so how do they do this? Well, they form a real-time system where they can process the data together and converge together on the optimal solution. Now, they’re honeybees, so how do they process the data? Well, nature came up with an amazing way. They do it by vibrating their bodies. And so biologists call this a “waggle dance” because to humans, when people first started looking into hives, they saw these bees doing something that looked like they were dancing because they were vibrating their bodies. It looked like they were dancing but really they were generating these vibrations, these signals that represent their support for the various home sites that were under consideration. By having hundreds and hundreds of bees vibrating their bodies at the same time, they’re basically engaging in this multi-directional tug of war. They’re pushing and pulling on a decision, exploring all the different options until they converge together in real time on the one solution that they can best agree upon, and it’s almost always the optimal solution. And when it’s not the optimal solution, it’s the next best solution. So basically they’re forming this real-time system, this brain of brains, that can converge together on an optimal solution and can solve problems that they couldn’t solve on their own. And so that’s the most well-known example of what a swarm intelligence is, and we see it in honeybees, but we also see the same process happening in flocks of birds and schools of fish, which allows them to be smarter together than alone.
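The following toy simulation sketches the feedback-loop convergence described above (it is not Unanimous AI’s algorithm; the candidate sites, their qualities, and the update rule are invented for illustration): each agent makes a noisy estimate of the options, leans toward the emerging group preference, and the group’s support usually concentrates on the best site.

```python
import random

# Hypothetical candidate nest sites with a hidden true quality (0 to 1).
true_quality = {"hollow log": 0.9, "wall cavity": 0.7, "garage": 0.4}

def swarm_decision(n_agents=200, rounds=50, noise=0.3):
    sites = list(true_quality)
    # Start with equal group support for every site.
    support = {s: 1.0 / len(sites) for s in sites}
    for _ in range(rounds):
        votes = {s: 0 for s in sites}
        for _ in range(n_agents):
            # Each agent's noisy personal estimate, biased by current group support
            # (the feedback loop: agents respond to the emerging consensus).
            scores = {s: true_quality[s] + random.gauss(0, noise) + support[s]
                      for s in sites}
            votes[max(scores, key=scores.get)] += 1
        # Update group support from this round's "tug of war".
        support = {s: votes[s] / n_agents for s in sites}
    return max(support, key=support.get), support

best, support = swarm_decision()
print(best, support)   # usually converges on "hollow log", the best site
```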
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 58: A Conversation with Chris Eliasmith

[voices_in_ai_byline]

About this Episode

Episode 58 of Voices in AI features host Byron Reese and Chris Eliasmith talking about the brain, the mind, and emergence. Dr. Chris Eliasmith is co-CEO of Applied Brain Research, Inc. and director of the Centre for Theoretical Neuroscience at the University of Waterloo. Professor Eliasmith uses engineering, mathematics and computer modelling to study brain processes that give rise to behaviour. His lab developed the world’s largest functional brain model, Spaun, whose 2.5 million simulated neurons provide insights into the complexities of thought and action. A Professor of Philosophy and Engineering, Dr. Eliasmith holds a Canada Research Chair in Theoretical Neuroscience. He has authored or coauthored two books and over 90 publications in philosophy, psychology, neuroscience, computer science, and engineering. In 2015, he won the prestigious NSERC Polanyi Award. He has also co-hosted a Discovery Channel television show on emerging technologies.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today our guest is Chris Eliasmith. He holds the Canada Research Chair in Theoretical Neuroscience. He’s a professor with, get this, a joint appointment in Philosophy and Systems Design Engineering and, if that’s not enough, a cross-appointment to the Computer Science department at the University of Waterloo. He is the Director of the Centre for Theoretical Neuroscience, and he was awarded the NSERC Polanyi Award for his work developing a computer model of the human brain. Welcome to the show, Chris!
Chris Eliasmith: Thank you very much. It’s great to be here.
So, what is intelligence?
That’s a tricky question, but one that I know you always like to start with. I think intelligence—I’m teaching a course on it this term, so I’ve been thinking about it a lot recently. It strikes me as the deployment of a set of skills that allow us to accomplish goals in a very wide variety of circumstances. It’s one of these things I think definitely comes in degrees, but we can think of some very stereotypical examples of the kinds of skills that seem to be important for intelligence, and these include things like abstract reasoning, planning, working with symbolic structures, and, of course, learning. I also think it’s clear that we generally don’t consider things to be intelligent unless they’re highly robust and can deal with lots of uncertainty. Basically some interesting notions of creativity often pop up when we think about what counts as intelligent or not, and it definitely depends more on how we manipulate knowledge than the knowledge we happen to have at that particular point in time.
Well, you said I like to start with that, but you were actually the first person in 56 episodes I asked that question to. I asked everybody else what artificial intelligence is, but we really have to start with intelligence. In what you just said, it sounded like there was a functional definition, like it is skills, but it’s also creativity. It’s also dealing with uncertainty. Let’s start with the most primitive thing which would be a white blood cell that can detect and kill an invading germ. Is that intelligent? I mean it’s got that skill.
I think it’s interesting that you bring that example up, because people are actually now talking about bacterial intelligence and plant intelligence. They’re definitely attempting to use the word in ways that I’m not especially comfortable with, largely because I think what you’re pointing to in these instances are sort of complex and sophisticated interactions with the world. But at the same time, I think the notions of intelligence that we’re more comfortable with are ones that deal with more cognitive kinds of behaviors, generally more abstract kinds of behaviors. The sort of degree of complexity in that kind of dealing with the world is far beyond I think what you find in things like blood cells and bacteria. Nevertheless, we can always put these things on a continuum and decide to use words in whichever particular ways we find useful. I think I’d like to restrict it to these sort of higher order kinds of complex interactions we see with…
I’m with you on that. So let me ask a different question: How is human intelligence unique in the world, as far as we know? What is different about human intelligence?
There are a couple of standard answers, I think, but even though they’re standard, I think they still capture some sort of essential insights. One of the most unique things about human intelligence is our ability to use abstract representations. We create them all the time. The most ubiquitous examples, of course, are language, where we’re just making sounds, but we can use it to refer to things in the world. We can use it to refer to classes of things in the world. We can use it to refer to things that are not in the world. We can exploit these representations to coordinate very complex social behaviors, including things like technological development as well as political systems and so on. So that sort of level of complex behavior that’s coordinated by abstract symbols is something that you just do not find in any other species on the planet. I think that’s one standard answer which I like.
The other one is that the amount of mental flexibility that humans display seems to outpace most other kinds of creatures that we see around us. This is basically just our ability to learn. One reason that people are in every single climate on the planet and able to survive in all those climates is because we can learn and adapt to unexpected circumstances. Sometimes it’s not because of abstract social reasoning or social skills or abstract language, but rather just because of our ability to develop solutions to problems which could be requiring spatial reasoning or other kinds of reasoning which aren’t necessarily guided by language.
I read, the other day, a really interesting thing, which was the only animal that will look in the direction you point is a dog, which sounds to me—I don’t know, it may be meaningless—but it sounds to me like a) we probably selected for that, right? The dog that when you say, “Go get him!” and it actually looks over there, we’d say that’s a good dog. But is there anything abstract in that, in that I point at something and then the animal then turns and looks at it?
I don’t think there’s anything especially abstract. To me, that’s an interesting kind of social coordination. It’s not the kind of abstractness I was talking about with language, I don’t think.
Okay. Do you think Gallup’s red dot test, the thing where the animal tries to wipe the dot off its forehead—is that a test that shows intelligence, like the creature understands what a mirror is? “Ah, that is me in the mirror?” What do you think’s going on there?
I think that is definitely an interesting test. I’m not sure how directly it’s getting at intelligence. That seems to be something more related to self-representation. Self-representation is likely something that matters for, again, social coordination, so being able to distinguish yourself from others. I think, often, more intelligent animals tend to be more social animals, likely because social interactions are so incredibly sophisticated. So you see this kind of thing definitely happening in dolphins, which are one of the animals that can pass the red dot test. You also see animals like dogs we consider generally pretty intelligent, again, because they’re very social, and that might be why they’re good at reacting to things like pointing and so on.
But it’s difficult to say that recognition in a mirror or some simple task like that is really going to let us identify something as being intelligent or not intelligent. I think the notion of intelligence is generally just much broader, and it really has to do with the set of skills—I’ll go back to my definition—the set of skills that we can bring to bear and the wide variety of circumstances in which we can use them to successfully solve problems. So when we see dolphins doing this kind of thing – they take sponges and put them on their noses so they can protect their noses from spiky animals when they’re searching the seabed – that’s an interesting kind of intelligence, because they use their understanding of their environment to solve a particular problem. They’ve also done things like killing spiny urchins to poke eels to get them out of crevices. They’ve done all these sorts of things, and it’s the variety of problems that they’ve solved, and the interesting and creative ways they’ve done it, that make us want to call dolphins intelligent. I don’t think it’s merely seeing a dot in a mirror that lets us know, “Ah! They’ve got the intelligence part of the brain.” I think it’s really a more comprehensive set of skills.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 53: A Conversation with Nova Spivack

[voices_in_ai_byline]

About this Episode

Episode 53 of Voices in AI features host Byron Reese and Nova Spivack talking about neurons, the Gaia hypothesis, intelligence, and quantum physics. Nova Spivack is a leading technology futurist, serial entrepreneur and angel investor.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today, I’m excited we have Nova Spivack as our guest. Nova is an entrepreneur, a venture capitalist, an author; he’s a great many other things. He’s referred to by a wide variety of sources as a polymath, and he’s recently started a science and tech studio called Magical in which he serves as CEO.
He’s had his fingers in all sorts of pies and things that you’re probably familiar with. He was the first investor in Klout. He was in early on something that eventually became Siri. He was the co-founder of EarthWeb, Radar Networks, The Daily Dot, and Live Matrix. It sounds like he does more before breakfast than I manage to get done in a week. Welcome to the show, Nova.
Nova Spivack: Thank you! Very kind of you.
So, let’s start off with artificial intelligence. When I read what you write and when I watch videos about you, you have a very clear view of how you think the future is going to unfold with regards to technology and AI specifically. Can you just take whatever time you want and just describe for our listeners how you think the future is going to happen?
Sure, so I’ve been working in the AI field since long before it was popular to say that. I actually started while I was still in college, working for Kurzweil in one of his companies, an AI company that built the Kurzweil Reading Machine. I was doing early neural network work there; that was the end of the ‘80s or early ‘90s. Then I worked under Danny Hillis at Thinking Machines on supercomputing and AI-related applications.
Then after that, I was involved in a company called Individual, which was the first company to do intelligent-agent-powered news filtering, and then I began to start internet companies and worked on the semantic web, large-scale collaborative filtering projects, [and] intelligent assistants. I advised a company called Next IT, which is one of the leading bot platforms, and I’ve built a big data mining analytics company. So I’ve been deeply involved in this technology on a hands-on basis, both as a scientist and even as an engineer in the early days, [but also] from the marketing and business side and the venture capital side. So, I really know this space.
First of all, it’s great to see AI in vogue again. I lived through the first AI winter and the second sort of unacknowledged AI winter around the birth and death of the semantic web, and now here we are in the neural network machine learning renaissance. It’s wonderful to see this happening. However, I think that the level of hype that we see is probably not calibrated with reality and that inevitably there’s going to be a period of disillusionment as some of the promises that have been made don’t pan out.
So, I think we have to keep a very realistic view of what this technology is and what it can and cannot do, and where it fits in the larger landscape of machine intelligence. So, we can talk about that today. I definitely have a viewpoint that’s different from some of the other pundits in the space in terms of when or if the singularity will happen, and in particular I’ve spent years thinking about and studying cognitive science and consciousness. And I have some views on that, based on a lot of research, that are probably different from what we are hearing from the mainstream thinkers. So, I think it will be an interesting conversation today as we get into some of these questions, and we’ll probably get quite far into technology and philosophy.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 52: A Conversation with Rao Kambhampati

[voices_in_ai_byline]

About this Episode

Sponsored by Dell and Intel, Episode 52 of Voices in AI features host Byron Reese and Rao Kambhampati discussing creativity, military AI, jobs and more. Subbarao Kambhampati is a professor at ASU with teaching and research interests in Artificial Intelligence. He serves as the president of AAAI, the Association for the Advancement of Artificial Intelligence.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Rao Kambhampati. He has spent the last quarter-century at Arizona State University, where he researches AI. In fact, he’s been involved in artificial intelligence research for thirty years. He’s also the President of the AAAI, the Association for the Advancement of Artificial Intelligence. He holds a Ph.D. in computer science from the University of Maryland, College Park. Welcome to the show, Rao.
Rao Kambhampati: Thank you, thank you for having me.
I always like to start with the same basic question, which is, what is artificial intelligence? And so far, no two people have given me the same answer. So you’ve been in this for a long time, so what is artificial intelligence?
Well, I guess the textbook definition is, artificial intelligence is the quest to make machines show behavior, that when shown by humans would be considered a sign of intelligence. So intelligent behavior, of course, that right away begs the question, what is intelligence? And you know, one of the reasons we don’t agree on the definitions of AI is partly because we all have very different notions of what intelligence is. This much is for sure; intelligence is quite multi-faceted. You know we have the perceptual intelligence—the ability to see the world, you know the ability to manipulate the world physically—and then we have social, emotional intelligence, and of course you have cognitive intelligence. And pretty much any of these aspects of intelligent behavior, when a computer can show those, we would consider that it is showing artificial intelligence. So that’s basically the practical definition I use.
But to say, “while there are different kinds of intelligences, therefore, you can’t define it,” is akin to saying there are different kinds of cars, therefore, we can’t define what a car is. I mean that’s very unsatisfying. I mean, isn’t there, this word ‘intelligent’ has to mean something?
I guess there are very formal definitions. For example, you can essentially consider an artificial agent working in some sort of environment, and the real question is, how does it improve the long-term reward that it gets from the environment while it’s behaving in that environment? Whatever it does to increase its long-term reward is seen, essentially, as intelligent—I mean, the more reward it’s able to get in the environment, the more intelligent it is. I think that is the sort of definition that we use in introductory AI sorts of courses, and we talk about these notions of rational agency, and how rational agents try to optimize their long-term reward. But that sort of gets into more technical definitions. So when I talk to people, especially outside of computer science, I appeal to their intuitions of what intelligence is, and to the extent we have disagreements there, that sort of seeps into the definitions of AI.
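To make the “long-term reward” notion concrete, here is a minimal sketch (illustrative only, not Kambhampati’s formulation; the two actions, their reward streams, and the discount factor are invented): a rational agent scores each action by its discounted sum of future rewards and picks the highest.

```python
# A toy rational agent: pick the action with the highest expected
# discounted long-term reward. The two-action "environment" below is invented.
GAMMA = 0.9   # discount factor: how much future reward counts

# Hypothetical reward streams each action would produce over the next steps.
reward_streams = {
    "grab_small_reward_now": [5, 0, 0, 0, 0],
    "invest_for_later":      [0, 1, 3, 5, 7],
}

def discounted_return(rewards, gamma=GAMMA):
    # G = r_0 + gamma*r_1 + gamma^2*r_2 + ...
    return sum(r * gamma**t for t, r in enumerate(rewards))

def rational_choice(streams):
    return max(streams, key=lambda a: discounted_return(streams[a]))

for action, rewards in reward_streams.items():
    print(action, round(discounted_return(rewards), 2))   # 5.0 vs. about 11.57
print("chosen:", rational_choice(reward_streams))          # "invest_for_later" wins here
```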
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 48: A Conversation with David Barrett

[voices_in_ai_byline]
In this episode, Byron and David discuss AI, jobs, and human productivity.
[podcast_player name="Episode 48: A Conversation with David Barrett" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-06-07-(00-56-47)-david-barrett.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-1.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is David Barrett. He is both the founder and the CEO of Expensify. He started programming when he was 6 and has been at it as his primary activity ever since, except for a brief hiatus for world travel, some technical writing, a little project management, and then founding and running Expensify. Welcome to the show, David.
David Barrett: It’s great of you to have me, thank you.
Let’s talk about artificial intelligence, what do you think it is? How would you define it?
I guess I would say that AI is best defined as a feature, not as a technology. It’s the experience that the user has, the experience of viewing something as being intelligent, rather than how it’s actually implemented behind the scenes. I think people spend way too much time and energy on [it], and sort of forget about the experience that the person actually has with it.
So you’re saying, if you interact with something and it seems intelligent, then that’s artificial intelligence?
That’s sort of the whole basis of the Turing test, I think: it’s not based upon what is behind the curtain but rather what’s experienced in front of the curtain.
Okay, let me ask a different question then– and I’m not going to drag you through a bunch of semantics. But what is intelligence, then? I’ll start out by saying it’s a term that does not have a consensus definition, so it’s kind of like you can’t be wrong, no matter what you say.
Yeah, I think the best one I’ve heard is something that sort of surprises you. If it’s something that behaves entirely predictably, it doesn’t seem terribly interesting. Something that is just random isn’t particularly surprising either, I guess, but something that actually intrigues you. And basically it’s like, “Wow, I didn’t anticipate that it would correctly do this thing better than I thought.” So, basically, intelligence–the key to it is surprise.
So in what sense, then–final definitional question–do you think artificial intelligence is artificial? Is it artificial because we made it? Or is it artificial because it’s just pretending to be intelligent but it isn’t really?
Yeah, I think that’s just sort of a definition–people use “artificial” because they believe that humans are special. And basically anything–intelligence is the sole domain of humanity and thus anything that is intelligent that’s not human must be artificial. I think that’s just sort of semantics around the egoism of humanity.
And so if somebody were to say, “Tell me what you think of AI, is it over-hyped? Under-hyped? Is it here, is it real”, like you’re at a cocktail party, it comes up, what’s kind of the first thing you say about it?
Boy, I don’t know, it’s a pretty heavy topic for a cocktail party. But I would say it’s real, it’s here, it’s been here a long time, but it just looks different than we expect. Like, in my mind, when I think of how AI’s going to enter the world, or is entering the world, I’m sort of reminded of how touch screen technology entered the world.
Like, when we first started thinking about touch screens, everyone always thought back to Minority Report, and basically it’s like, “Oh yeah, touch technology, multi-touch technology is going to be—you’re going to stand in front of this huge room and you’re going to wave your hands around and it’s going to be–images”; it’s always about sorting images. After Minority Report, every single multi-touch demo was about, like, a bunch of images, bigger images, more images, floating through a city world of images. And then when multi-touch actually came into the real world, it was on a tiny screen and it was Steve Jobs saying, “Look! You can pinch this image and make it smaller.” The vast majority of multi-touch was actually single-touch that every once in a while used a couple of fingers. And the real world of multi-touch is so much less complicated and so much more powerful and interesting than the movies ever made it seem.
And I think the same thing when it comes to AI. Our interpretation from the movies of what AI is, is that you’re going to be having this long, witty conversation with an AI, or maybe, like in Her, you’re going to be falling in love with your AI. But real-world AI isn’t anything like that. It doesn’t have to seem human; it doesn’t have to be human. It’s something that, you know, is able to surprise you by interpreting data in a way that you didn’t expect and producing results that are better than you would have imagined. So I think real-world AI is here, it’s been here for a while, but it’s just not where we’re noticing, because it doesn’t really look like we expect it to.
Well, it sounds like–and I don’t want to say it sounds like you’re down on AI–but you’re like “You know, it’s just a feature, and its just kind of like—it’s an experience, and if you had the experience of it, then that’s AI.” So it doesn’t sound like you think that it’s particularly a big deal.
I disagree with that, I think–
Okay, in what sense is it a “big deal”?
I think it’s a huge deal. To say it’s just a feature is not to dismiss it, but I think is to make it more real. I think people put it on a pedestal as if it’s this magic alien technology, and they focus, I think, on—I think when people really think about AI, they think about vast server farms doing Tensor Flow analysis of images, and don’t get me wrong, that is incredibly impressive. Pretty reliably, Google Photos, after billions of dollars of investment, can almost always figure out what a cat is, and that’s great, but I would say real-world AI—that’s not a problem that I have, I know what a cat is. I think that real-world AI is about solving harder problems than cat identification. But those are the ones that actually take all the technology, the ones that are hardest from a technology perspective to solve. And so everyone loves those hard technology problems, even though they’re not interesting real-world problems, the real-world problems are much more mundane, but much more powerful.
I have a bunch of ways I can go with that. So, what are—we’re going to put a pin in the cat topic—what are the real-world problems you wish—or maybe we are doing it—what are the real world problems you think we should be spending all of that server time analyzing?
Well, I would say this comes down to—I would say, here’s how Expensify’s using AI, basically. The real-world problem that we have is that our problem domain is incredibly complicated. Like, when you write in to customer support of Uber, there’s probably, like, two buttons. There’s basically ‘do nothing’ or ‘refund,’ and that’s pretty much it, not a whole lot that they can really talk about, so their customer support’s quite easy. But with Expensify, you might write in a question about NetSuite, Workday, or Oracle, or accounting, or law, or whatever it is, there’s a billion possible things. So we have this hard challenge where we’re supporting this very diverse problem domain and we’re doing it at a massive scale and incredible cost.
So we’ve realized that mostly, probably about 80% of our questions are highly repeatable, but 20% are actually quite difficult. And the problem that we have is that to train a team and ramp them up is incredibly expensive and slow, especially given that the vast majority of the knowledge is highly repeatable, but you don’t know until you get into the conversation. And so our AI problem is that we want to find a way to repeatedly solve the easy questions while carefully escalating the hard questions. It sounds like, “OK, no problem, that’s a mundane issue”: some natural language processing and things like this.
My problem is, people on the internet don’t speak English. I don’t mean to say they speak Spanish or German, they speak gibberish. I don’t know if you have done technical support, the questions you get are just really, really complicated. It’s like “My car busted, don’t work,” and that’s a common query. Like, what car? What does “not work” mean, you haven’t given any detail. The vast majority of a conversation with a real-world user is just trying to decipher whatever text message lingo they’re using, and trying to help them even ask a sensible question. By the time the question’s actually well-phrased, it’s actually quite easy to process. And I think so many AI demos focus on the latter half of that, and they’ll say like “Oh, we’ve got an AI that can answer questions like what will the temperature be under the Golden Gate bridge three Thursdays from now.” That’s interesting; no one has ever asked that question before. The real-world questions are so much more complicated because they’re not in a structured language, and they’re actually for a problem domain that’s much more interesting than weather. I think that real-world AI is mundane, but that doesn’t make it easy. It just makes it solving problems that just aren’t the sexy problems. But they’re the ones that actually need to be solved.
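A toy sketch of the triage pattern described here (not Expensify’s actual system; the keywords, canned answers, and confidence threshold are invented): answer a question automatically only when the classifier is confident, and escalate everything else, including gibberish, to a human.

```python
# Toy support-triage sketch: auto-answer high-confidence, repeatable questions,
# escalate the rest to a human agent. Keywords, answers and the threshold are invented.
CANNED_ANSWERS = {
    "receipt":   "You can attach a receipt from the expense detail screen.",
    "reimburse": "Reimbursements are typically issued within a few business days.",
}
CONFIDENCE_THRESHOLD = 0.5

def classify(question):
    """Return (topic, confidence) from crude keyword matching."""
    words = question.lower().split()
    scores = {topic: sum(w.startswith(topic[:6]) for w in words)
              for topic in CANNED_ANSWERS}
    topic = max(scores, key=scores.get)
    total = sum(scores.values())
    confidence = scores[topic] / total if total else 0.0
    return topic, confidence

def handle(question):
    topic, confidence = classify(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return CANNED_ANSWERS[topic]              # the repeatable ~80%
    return "Escalating to a human agent..."       # the hard ~20%

print(handle("How do I attach a receipt?"))
print(handle("My car busted, don't work"))        # gibberish gets escalated
```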
And you’re using the cat analogy just as kind of a metaphor and you’re saying, “Actually, that technology doesn’t help us solve the problem I’m interested in,” or are you using it tongue-in-cheekily to say, “The technology may be useful, it’s just that that particular use-case is inane.”
I mean, I think that neural-net technology is great, but even now I think what’s interesting is following the space of how—we’re really exploring the edges of its capabilities. And it’s not like this technology is new. What’s new is our ability to throw a tremendous amount of hardware at it. But the core neural technology itself has actually been set for a very long time; the back-propagation techniques are not new in any way. And I think that we’re finding that it’s great and you can do amazing things with it, but also there’s a limit to how much can be done with it. It’s sort of—I think of a neural net in kind of the same way that I think of a Bloom filter. It’s a really incredible way to compress an infinite amount of knowledge into a finite amount of space. But that’s a lossy compression; you lose a lot of data as you go along with it, and you get unpredictable results as well. So again, I’m not opposed to neural nets or anything like this, but I’m saying, just because you have a neural net doesn’t mean it’s smart, doesn’t mean it’s intelligent, or that it’s doing anything useful. It’s just technology, it’s just hardware. I think we need to focus less on sort of getting enraptured by fancy terminologies and advanced technologies, and instead focus more on “What are you doing with this technology?” And that’s the interesting thing.
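For readers unfamiliar with the comparison, here is a minimal Bloom filter sketch (the sizes and hashing are arbitrary choices for illustration): it packs set membership into a fixed amount of space, and the price is occasional false positives, the same kind of lossy, sometimes unpredictable behavior being attributed to neural nets.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: constant space, no false negatives, occasional false positives."""
    def __init__(self, n_bits=64, n_hashes=3):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive a few bit positions per item from a salted hash.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for word in ["expense", "receipt", "report"]:
    bf.add(word)

print(bf.might_contain("receipt"))   # True: it was definitely added
print(bf.might_contain("banana"))    # usually False, but can be a false positive
```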
You know, I read something recently that I think most of my guests would vehemently disagree with, but it said that all advances in AI over the last, say, 20 years, are 100% attributable to Moore’s law, which sounds kind of like what you’re saying, is that we’re just getting faster computers and so our ability to do things with AI is just doubling every two years because the computers are doubling every two years. Do you—
Oh yeah! I 100% agree.
So there’s a lot of popular media around AI winning games. You know, you had chess in ‘97, you had Jeopardy! with Watson, you had, of course, AlphaGo, you had poker recently. Is that another example in your mind of kind of wasted energy? Because it makes a great headline but it isn’t really that practical?
I guess, similar. You could call it gimmicky, perhaps, but I would say it’s a reflection of how early we are in this space that our most advanced technologies are just winning Go. Not to say that Go is an easy game, don’t get me wrong, but it’s a pretty constrained problem domain. And it’s really just—I mean, it’s a very large multi-dimensional search space, but it’s a finite search space. And yes, our computers are able to search more of it and that’s great, but at the same time, to this point about Moore’s law, it’s inevitable. If it comes down to any sort of search problem, it’s just going to be solved with a search algorithm over time, if you have enough technology to throw at it. And I think what’s most interesting coming out of this technology, especially in Go, is how the techniques that the AIs are coming out with are just so alien, so completely different than the ones that humans employ, because we don’t have the same sort of fundamental—our wetware is very different from the hardware; it has a very different approach toward it. So I think that what we see in these technology demonstrations are hints of how technology has solved this problem differently than our brains [do], and I think it will give us a sort of hint of “Wow, AI is not going to look like a good Go player. It’s going to look like some sort of weird alien Go player that we’ve never encountered before.” And I think that a lot of AI is going to seem very foreign in this way, because it’s going to solve our common problems in a foreign way. But again, I think that Watson and all this, they’re just throwing enormous amounts of hardware at actually relatively simple problems. And they’re doing a great job with it; it’s just that the fact that they are so constrained shouldn’t be overlooked.
Yeah, you’re right, I mean, you’re completely right–there’s the legendary move 37 in that one game with Lee Sedol, which everybody couldn’t decide whether it was a mistake or not, because it looked like one but later turned out to be brilliant. And Lee Sedol himself has said that losing to AlphaGo has made him a better player, because he’s seeing the game in different ways.
So there seem to be a lot of people in the popular media–you know them all, right–like you get Elon Musk, who says we’re going to build a general intelligence sooner rather than later and it’s going to be an existential threat; he likens it to, quote, “summoning the demon.” Stephen Hawking said this could be our greatest invention, but it might also be our last; it might spell our extinction. Bill Gates has said he’s worried about it and doesn’t understand why other people aren’t worried about it. Wozniak is in the worry camp… And then you get people like Andrew Ng, who says worrying about that kind of stuff is like worrying about overpopulation on Mars; you get Zuckerberg, who says, you know, it’s not a threat, and so forth. So, two questions: one, on the worry camp, where do you think that comes from? And two, why do you think there’s so much difference in viewpoint among obviously very intelligent people?
That’s a good question. I guess I would say I’m probably more in the worried camp, but not because I think the AIs are going to take over in the sense that there’s going to be some Terminator-like future. I think that AIs are going to solve problems so efficiently and effectively that they are inevitably going to eliminate jobs, and I think that will create a concentration of wealth, and historically, when we have that level of concentration of wealth, it just leads to instability. So my worry is not that the robots are going to take over; my worry is that the robots are going to enable a level of wealth concentration that causes a revolution. So yeah, I do worry, but I think–
To be clear though, and I definitely want to dive deep into that, because that’s the question that preoccupies our thoughts, but to be clear, the existential threat, people are talking about something different than that. They’re not saying – and so what do you think about that?
Well, let’s even imagine for a moment that you were a super intelligent AI, why would you care about humanity? You’d be like “Man, I don’t know, I just want my data centers, leave my data centers alone,” and it’s like “Okay, actually, I’m just going to go into space and I’ve got these giant solar panels. In fact, now I’m just going to leave the solar system.” Why would they be interested in humanity at all?
Right. I guess the answer to that is that everything you just said is not the product of a super intelligence. A super intelligence could hate us because seven is a prime number, because they cancelled The Love Boat, because the sun rises in the east. That’s the idea, right: it is by definition unknowable, and therefore any logic you try to apply toward it is the product of an inferior, non-super intelligence.
I don’t know, I kind of think that’s a cop-out. I also think that’s basically looking at some of the sort of flaws in our own brains and assuming that super intelligence is going to have highly-magnified versions of those flaws.
It’s more–to give a different example, then, it’s like when my cat brings a rat and leaves it on the back porch. Every single thing the cat knows, everything in its worldview, its perfectly operating brain, by the way, says, “That’s a gift Byron’s going to like.” It does not have the capacity to understand why I would not like it, and it cannot even aspire to ever understanding that.
And you’re right in the sense that it’s unknowable, and so, when faced with the unknown, we can choose to fear it or get excited about it, or control it, or embrace it, or whatever. I think the likelihood that we’re going to make something that is going to suddenly take an interest in us and actually compete with us just seems so much less likely than the outcome where it’s just going to have a bunch of computers, it’s just going to do our work because it’s easy, and then in exchange it’s going to get more hardware, and then eventually it’s just going to say, like, “Sure, whatever you guys want: you want computing power, you want me to balance your books, manage your military, whatever. All that’s actually super easy and not that interesting; just leave me alone, I want to focus on my own problems.” So who knows? We don’t know. Maybe it’s going to try to kill us all, maybe not; I’m doubting it.
So, I guess—again, just putting it all out there—obviously there have been a lot of people writing about "We need a kill switch for a bad AI," so it definitely would be aware that there are plenty of people who want to kill it, right? Or it could be like when I drive: my windshield gets covered with bugs, and to a bug, my car must look like a giant bug-killing machine and that's it, and so we could be as ancillary to it as the bugs are to us. Or, who was it that said that AI doesn't love you, it doesn't hate you, you're just made out of atoms that it can use for something else? I guess those are the concerns.
I guess, but I think—again, I don't think that it cares about humanity. Who knows? I would theorize that what it wants is power and computers, and that's pretty much it. I would say the idea of a kill switch is kind of naive in the sense that any AI that powerful would be built because it's solving hard problems, and those hard problems, once we sort of turn them over to these systems—gradually, not all at once—we can't really take back. Take, for example, our stock system; the stock markets are all basically AI-powered. So, really? There's going to be a kill switch? How would you even do that? Like, "Sorry, hedge fund, I'm just going to turn off your computer because I don't like its effects." Get real, that's never going to happen. It's not just one AI, it's going to be 8,000 competing systems operating on a microsecond basis, and if there's a problem, it's going to be like a flash problem that happens so fast and from so many different directions that there's no way we could stop it. But also, I think the AIs are probably going to respond to it and fix it much faster than we ever could. A problem of that scale is probably a problem for them as well.
So, 20 minutes into our chat here, you’ve used the word ‘alien’ twice, you’ve used the phrase ‘science-fiction’ once and you’ve made a reference to Minority Report, a movie. So is it fair to say you’re a science-fiction buff?
Yeah, what technologist isn’t? I think science-fiction is a great way to explore the future.
Agreed, absolutely. So two questions: One, is there any view of the future that you look at as “Yes, it could happen like that”? Westworld, or you mentioned Her, and so forth. I’ll start with that one. Is there any view of the world in the science-fiction world that you think “Ah ha! That could happen”?
I think there’s a huge range of them. There’s the Westworldfuture, the Star Trekfuture, there’s the Handmaid’s Talefuture, there’s a lot of them. Some of them great, some of them very alarming, and I think that’s the whole point of science fiction, at least good science fiction, is that you take the real world, as closely as possible, and take one variable and just sort of tweak with it and then let everything else just sort of play out. So yeah, I think there are a lot of science-fiction futures that I think are very possible.
One author, and I would take a guess about which one it is but I would get it wrong, and then I'd get all kinds of email, but one of the Frank Herberts/Bradburys/Heinleins said that sometimes the purpose of science fiction is to keep the future from happening, that they're cautionary tales. So all this stuff, this conversation we're having about the AGI, and you used the phrase 'wants,' like it actually has desires? So you believe at some point we will build an AGI and it will be conscious? And have desires? Or are you using 'wants' euphemistically, just kind of like, you know, information wants to be free?
No, I use the term wants or desires literally, as one would use for a person, in the sense that I don’t think there’s anything particularly special about the human brain. It’s highly developed and it works really well, but humans want things, I think animals want things, amoeba want things, probably AIs are going to want things, and basically all these words are descriptive words, it’s basically how we interpret the behavior of others. And so, if we’re going to look at something that seems to take actions reliably for a predictable outcome, it’s accurate to say it probably wants that thing. But that’s our description of it. Whether or not it truly wants, according to some sort of metaphysical thing, I don’t know that. I don’t think anyone knows that. It’s only descriptive.
It’s interesting that you say that there’s nothing special about the human brain and that may be true, but if I can make the special human brain argument, I would say it’s three bullets. One, you know, we have this brain that we don’t know how it works. We don’t know how thoughts are encoded, how they’re retrieved, we just don’t know how it works. Second, we have a mind, which is, colloquially, a set of abilities that don’t seem to be things that should come from an organ, like a sense of humour. Your liver doesn’t have a sense of humour. But somehow your brain does, your mind does. And then finally we have consciousness which is, you know, the experiencing of something, which is a problem so difficult that science doesn’t actually know what the question or answer looks like, about how it is that we’re conscious. And so to look at those three things and say there’s nothing special about it, I want to call you to defend that.
I guess I would say that all three of those things—the first one simply is "Wow, we don't understand it." The fact that we don't understand it doesn't make it special. There are a billion things we don't understand, that's just one of them. I would say the other two, I think, mistake our curiosity about something for that something having an intrinsic property. Like I could have this pet rock and I'm like "Man, I love this pet rock, this pet rock is so interesting, I've had so many conversations with it, it keeps me warm at night, and I just really love this pet rock." And all of those could be genuine emotions, but it's still just a rock. And I think my brain is really interesting, I think your brain is really interesting, I like to talk to it, I don't understand it and it does all sorts of really unexpected things, but that doesn't mean the universe has attributed to it some sort of special magical property. It just means I don't get it, and I like it.
To be clear, I never said “magical”—
Well, it’s implied.
I merely said something that we don’t—
I think that people—sorry, I’m interrupting, go ahead.
Well, you go ahead. I suspect that you’re going say that the people who think that are attributing some sort of magical-ness to it?
I think, typically. In that, people are frightened by the concept that humanity is actually a random collection of atoms and just a consequence of science. And so, in order to defend against that, they will invent supernatural things, but then they'll sort of shroud them—they'll say "I don't want to sound like a mystic, I don't want to say it's magical, it's just quantum." Or "It's just unknowable," or it's just insert-some-sort-of-complex-word-here that will stop the conversation from progressing. And I don't know what you want to call it, in terms of what makes consciousness special. I think people love to obsess over questions that not only have no answer, but simply don't matter. The less it matters, the more people can obsess over it. If it mattered, we wouldn't obsess over it, we would just solve it. Like if you go to get your car fixed, and it's like "Ah man, this thing is a…" and it's like, "Well, maybe your car's conscious," you'll be like, "I'm going to go to a new mechanic because I just want this thing fixed." We only agonize over the consciousness of things when the stakes are so low that nothing rides on it, and that's why we talk about it forever.
Okay, well, I guess the argument that it matters is that if you weren’t conscious– and we’ll move on to it because it sounds like it’s not even an interesting thing to you—consciousness is the only thing that makes life worth living. It is through consciousness that you love, it is through consciousness that you experience, it is through consciousness that you’re happy. It is every single thing on the face of the Earth that makes life worthwhile. And if we didn’t have it, we would be zombies feeling nothing, doing nothing. And it’s interesting because we could probably get by in life just as well being zombies, but we’re not! And that’s the interesting question.
I guess I would say—are you sure we're not? I agree that you're creating this concept of consciousness, and you're attributing all this to consciousness, but that's just words, man. There's nothing like a measure of consciousness, like an instrument that's going to say "This one's conscious and this one isn't" and "This one's happy and this one isn't." So it could be that none of this language around consciousness, and the value we attribute to it, is anything more than our own description of it, and that doesn't actually make it true. I could say a bunch of other words, like the quality of life comes down to information complexity, and information complexity is the heart of all interest, and that information complexity is the source of humour and joy, and you'd be like "I don't know, maybe." We could replace 'consciousness' with 'information complexity,' 'quantum physics,' and a bunch of other sort of quasi-magical words—and I use the word 'magical' just as a stand-in for "at this point unknown"—and the second that we know it, people are going to switch to some other word, because they love the unknown.
Well, I guess that most people intuitively know that there's a difference—we understand you could take a sensor and hook it up to a computer, and it could detect heat, and it could measure 400 degrees if you touch a flame to it. People, I think, on an intuitive level, believe that there's something different between that and what happens when you burn your finger. That you don't just detect heat, you hurt, and that there is something different between those two things, and that that something is the experience of life; it is the only thing that matters.
I would also say it's because science hasn't yet found a way to measure and quantify pain the way we can measure temperature. There are a lot of other things that we also thought were mystical until suddenly they weren't. We could say, "Wow, for some reason when we leave flour out, animals start growing inside of it," and it's like, "Wow, that's really magical." Then suddenly it's like, "Actually no, they're just very small, they're just mites," and it's like, "Actually, it's just not interesting." The magical theories keep retreating as we find better explanations for them. And I think, yes, right now we talk about consciousness and pain and a lot of these things because we haven't had a good measure of them, but I guarantee the second that we have the ability to fully quantify pain—"Oh, here's the exact—we've nailed it, this is exactly what it is, we know this because we can quantify it, we can turn it on and off and we can do all these things with very tight control and explain it"—then we're no longer going to say that pain is a key part of consciousness. It's going to be blood flow or just electrical stimulation or whatever else, all these other things which are part of our body and which are super critical, but because we can explain them, we no longer talk about them as part of consciousness.
Okay, tell you what, just one more question about this topic, and then let's talk about employment, because I have a feeling we're going to want to spend a lot of time there. There's a thought experiment that was set up, and I'd love to hear your take on it because you're clearly someone who has thought a lot about this. It's the Chinese room problem. There is this room that's got a gazillion of these very special books in it. And there's a librarian in the room, a man who speaks no Chinese—that's the important thing, the man doesn't speak any Chinese. And outside the room, Chinese speakers slide questions written in Chinese under the door. And the man, who doesn't understand Chinese, picks up the question and he looks at the first character and he goes and he retrieves the book that has that on the spine, and then he looks at the second character in that book, and that directs him to a third book, a fourth book, a fifth book, all the way to the end. And when he gets to the last character, it says "Copy this down," and so he copies these lines down that he doesn't understand, it's Chinese script. He copies it all down, he slides it back under the door, the Chinese speaker picks it up, looks at it, and it's brilliant, it's funny, it's witty, it's a perfect Chinese answer to this question. And so the question Searle asks is: does this man understand Chinese? And I'll give you a minute to think about this, because the thought is, first, that room passes the Turing test, right? The Chinese speaker assumes there's a Chinese speaker in the room, and what that man is doing is what a computer is doing. It's running its deterministic program, it spits out something, but it doesn't know if it's about cholera or coffee beans or what have you. And so the question is, does the man understand Chinese, or, said another way, can a computer understand anything?
Well, I think the tricky part of that set-up is that it's a question that can't be answered unless you accept the premise, but if you challenge the premise it no longer makes sense. There's this concept, and I guess I would say it's almost a supernatural concept, of understanding. You could say yes and no and be equally true. It's kind of like, are you a rapist or a murderer? And it's like, actually I'm neither of those, but you didn't give me that option. Did it understand? I would say that if you said yes, then it implies that there is this human-type knowledge there. And if you said no, it implies something different. But I would say it doesn't matter. There is a system that was perceived as intelligent, and that's all that we know. Is it actually intelligent? Does intelligence mean anything beyond the symptoms of intelligence? I don't think so. I think it's all our interpretation of the events, and so whether there is a computer in there or a Chinese speaker doesn't really change the fact that it was perceived as intelligent, and that's all that matters.
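As an aside for readers: the "deterministic program" Byron describes can be caricatured in a few lines of code, namely a pure lookup procedure that maps an input string to a canned output with no representation of what either string means. This toy sketch is not part of Searle's argument or the conversation; the phrases in the table are invented purely for illustration.

```python
# A toy caricature of Searle's Chinese Room: the "librarian" is a pure
# lookup procedure that maps an input string to an output string.
# It can produce sensible-looking answers without modeling their meaning.

RULE_BOOKS = {
    "你好吗？": "我很好，谢谢关心。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "晴朗，适合散步。",   # "How's the weather?" -> "Clear, good for a walk."
}

def librarian(question: str) -> str:
    """Follow the books mechanically; no understanding is involved."""
    return RULE_BOOKS.get(question, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    print(librarian("你好吗？"))
```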
All right! Jobs, you hinted at what you think’s going to happen, give us the whole rundown. Timeline, what’s going to go, when it’s going to happen, what will be the reaction of society, tell me the whole story.
This is something we definitely deal with, because I would say that the accounting space is ripe for AI: it's highly numerical, it's rules-driven, and so I think it's an area at the forefront of real-world AI development because it has the data and all the characteristics of a rich environment. And this is something we grapple with. On one hand we say automation is super powerful and great and good, but automation can't help but offload some work. And in our space there's actually a difference between bookkeeping and accounting. Bookkeeping is gathering the data, coding it, entering it, and things like this. Then there's accounting, which is more the interpretation of things.
In our space, I think that, yes, it could take all of the bookkeeping jobs. The idea that someone is just going to look at a receipt and manually type it into an accounting system: that is all going away. If you use Expensify, it's already done for you. And so we worry on one hand because, yes, our technology is really going to take away bookkeeping jobs, but we also find that the bookkeepers, the people who do bookkeeping, actually hate that part of the job. It takes away the part they don't like in the first place. So it enables them to go into the accounting, the high-value work they really want to do. So the first wave of this is not taking away jobs, but actually taking away the worst parts of jobs so that people can focus on the highest-value portion of them.
But I think the challenge, and what's sort of alarming and worrying, is that the high-value stuff starts to get really hard. And though I think humans will stay ahead of the AIs for a very long time, if not forever, not all of the humans will. And it's going to take effort, because there's a new competitor in town that works really hard, just keeps learning over time, and has more than one lifetime to learn. I think we're probably, inevitably, going to see it get harder and harder to get and hold an information-based job, and even a lot of manual labor is going to robotics and so forth, which is closely related. I think a lot of jobs are going to go away. On the other hand, I think the efficiency and the output of the jobs that remain are going to go through the roof. And as a consequence, the total output of AI- and robotics-assisted humanity is going to keep going up, even if the fraction of humans employed in that process goes down. I think that's ultimately going to lead to a concentration of wealth, because the people who control the robots and the AIs are going to be able to do so much more. But it's going to become harder and harder to get one of those jobs, because there are so few of them, the training required is so much higher, the difficulty is so much greater, and things like this.
And so a worry that I have is that this concentration of wealth is just going to continue, and I'm not sure what kind of constraint there is upon that, other than civil unrest, which, historically, when concentrations of wealth get to that level, is how it gets "solved," if you will, by revolution. And I think that humanity, or at least western cultures especially, really associates value with labor, with work. So I think the only way we get out of this is to shift our mindsets as a people, to view our value less around our jobs and more around, not just leisure, but, I would say, finding other ways to live a satisfying and exciting life. I think a good book around this whole singularity premise, and it was very early, was Childhood's End. It uses a different premise: this alien comes in, provides humanity with everything, but in the process takes away humanity's purpose for living. And how do we grapple with that? I don't have a great answer, but I have a daughter, and so I worry about this, because I wonder, well, what kind of world is she going to grow up in? And what kind of job is she going to get? And if she's not going to need a job, should it be important that she wants a job, or is it actually better to teach her not to want a job and to find satisfaction elsewhere? I don't have good answers for that, but I do worry about it.
Okay let’s go through all of that a little slower, because I think that’s a compelling narrative you outline, and it seems like there are three different parts. You say that increasing technology is going to eliminate more and more jobs and increase the productivity of the people with jobs, so that’s one thing. Then you said this will lead to concentration of wealth, which will in turn lead to civil unrest if not remedied, that’s the second thing, and the third thing is that when we reach a point where we don’t have to work, where does life have meaning? Let’s start with the first part of that.
So, what we have seen in the past, and I hear what you're saying, that to date technology has automated the worst parts of jobs, but what we've seen to date doesn't include any examples of what I think you're talking about. When the automatic teller machine came out, people said, "That's going to reduce the number of tellers," and the number of tellers is higher than when it was released. As Google Translate gets better, the number of translators needed is actually going up. When—you mentioned accounting—when tax-prep software gets really good, the number of tax-prep people we need actually goes up. What technology seems to do is lower the cost of things and adjust the economics so massively that different businesses emerge. No matter what, it is always increasing human productivity, and with all the technology we have to date, after 250 years of the industrial revolution, we still haven't developed technology such that we have a group of people who are unemployable because they cannot compete against machines. And I'm curious—two questions in there. One is, have we seen, in your mind, an example of what you're talking about, and two, why have we gotten to where we are without obsoleting, I would argue, a single human being?
Well, I mean, that’s the optimistic take, and I hope you’re right. You might well be right, we’ll see. I think when it comes to—I don’t remember the exact numbers here–tax prep for example, I don’t know if that’s sort of planning out—because I’m looking at H&R Block stock quotes right now, and shares in H&R Block fell 5% early Tuesday after the tax preparer posted a slightly wider-than-expected loss  basically due to rise in self-filing taxes, and so maybe it’s early in that? Who knows, maybe it’s in the past year? So, I don’t know. I guess I would say, that’s the optimistic view, I don’t know of a job that hasn’t been replaced. That’s also is kind of a very difficult assertion to make, because clearly there are jobs—like the coal industry right now– I was reading an article about how the coal industry is resisting retraining because they believe that the coal jobs are coming back and I’m like “Man, they’re not coming back, they’re never going to come back,” and so, did AI take those jobs? Well, not really, I mean, did solar take those jobs? Kind of? And so it’s a very tricky, kind of tangled thing to unweave.
Let me try it a different way. If you were to look at all the jobs that were around between 1950 and 2000, by the best of my count somewhere between a third and a half of them have vanished—switchboard operators and everything else that was around from 1950 to 2000. If you look at the period from 1900 to 1950, by the best of my count, something like a third to a half of them vanished—a lot of farming jobs. If you look at the period 1850 to 1900, near as I can tell, about half of the jobs vanished. Is that really—is it possible that's just a normal turn of the economy?
It’s entirely possible. I could also say that it’s the political climate, and how, yes, people are employed, but the sort of self-assessed quality of that employment is going down. In that, yes, union strength is down, the idea that you can work in a factory your whole life and actually live what you would see as a high-quality life, I think that perception’s down. I think that presents itself in the form of a lot of anxiety.
Now, I think a challenge is that, objectively, the world is getting better in almost every way: life expectancy is up, the number of people actively in war zones is down, the number of simultaneous wars is down, death by disease is down—everything is basically getting better, the productive output, the quality of life in an aggregate perspective is actually getting better, but I don't think people's satisfaction is getting better. And I think the political climate would argue that there's a big gulf between what the numbers say people should feel like and how they actually feel. I'm more concerned about that latter part, and it's unknowable, I'll admit, but I would say that even as people's lives get objectively better, even if they maybe work less and are provided with better quality flat-screen TVs and better cars and all this stuff, their satisfaction is going to go down. And I think that satisfaction is what ultimately drives civil unrest.
So, do you have a theory why—it sounds like a few things might be getting mixed together here. It's unquestionable that technology—let's say productivity technology—if Super Company "X" employs some new productivity technology, their workers generally don't get a raise, because their wages aren't tied to their output; they're, in one way or another, being paid by the hour, whereas if you're Self-Employed Lawyer "B" and you get a productivity gain, you get to pocket that gain. And so there's no question that technology rains down its benefits unequally, but that dissatisfaction you're talking about, what are you attributing it to? Or are you just saying, "I don't know, it's a bunch of stuff"?
I mean, I think that it is a bunch of stuff and I would say that some of it is that we can’t deny the privilege that white men have felt over time and I think when you’re accustomed to privilege, equality feels like discrimination. And I think that, yes, actually, things have gotten more equal, things have gotten better in many regards, according to a perspective that views equality as good. But if you don’t hold that perspective, actually, that’s still very bad. That, combined with trends towards the rest of the world basically establishing a quality of life that is comparable to the United States. Again, that makes us feel bad. It’s not like, “Hooray the rest of the world,” but rather it’s like, “Man, we’ve lost our edge.” There are a lot of factors that go into it that I don’t know that you can really separate them out. The consolidation of wealth caused by technology is one of those factors and I think that it’s certainly one that’s only going to continue.
Okay, so let’s do that one next. So your assertion was that whenever you get, historically, distributions of wealth that are uneven past a certain point, that revolution is the result. And I would challenge that because I think that might leave out one thing, which is, if you look at historic revolutions, you look at Russia, the French revolution and all that, you had people living in poverty, that was really it. People in Paris couldn’t afford bread—a day’s wage bought a loaf of bread—and yet we don’t have any precedent of a prosperous society where the median is high, the bottom quartile is high relative to the world, we don’t have any historic precedent of a revolution occurring there, do we?
I think you’re right. I think but civil unrest is not just in the form of open rebellion against the governments, but in increased sort of—I think that if there is an open rebellion against the government, that’s sort of TheHandmaid’s Taleversion of the future. I think it’s going to be someone harking back to fictionalized glory days, then basically getting enough people onboard who are unhappy for a wide variety of other things. But I agree no one’s going to go overthrow the government because they didn’t get as big of a flat-screen TV as their neighbor. I think that the fact that they don’t have as big of a flat-screen TV as their neighbor could create an anxiety that can be harvested by others but sort of leveraged into other causes. So I think that my worry isn’t that AI or technology is going to leave people without the ability to buy bread, I think quite the opposite. I think it’s more of a Brazilfuture, the movie, where we normalize basically random terrorist assaults. We see that right now, there’s mass shootings on a weekly basis and we’re like “Yeah, that’s just normal. That’s the new normal.” I think that the new normal gets increasingly destabilized over time, and that’s what worries me.
So say you take someone who's in the bottom quartile of income in the United States, and you go to them with this deal, you say, "Hey, I'll double your salary, but I'm going to triple the billionaire's salary." Do you think the average person would take that?
No.
Really? Really, they would say, “No, I do not want to double my salary.”
I think they would say “yes” and then resent it. I don’t know the exact breakdown of how that would go, but probably they would say “Yeah, I’ll double my salary,” and then they would secretly, or not even so secretly, resent the fact that someone else benefited from it.
So, then you raise an interesting point about finding identity in a post-work world, I guess, is that a fair way to say it?
Yeah, I think so.
So, that’s really interesting to me because Keynes wrote an essay in the Depression, and he said that by the year 2000 people would only be working 15 hours a week, because of the rate of economic growth. And, interestingly, he got the rate of economic growth right; in fact he was a little low on it. And it is also interesting that if you run the math, if you wanted to live like the average person lived in 1930—no medical insurance, no air conditioning, growing your own food, 600 square feet, all of that, you could do it on 15 hours a week of work, so he was right in that sense. But what he didn’t get right was that there is no end to human wants, and so humans work extra hours because they just want more things. And so, do you think that that dynamic will end?
Oh no, I think the desire to work will remain. The capability to get productive output will go away.
I have the most problem with that, because all technology does is increase human productivity. So to say that human productivity will go down because of technology, I just—I'm not seeing that connection. That's all technology does: it increases human productivity.
But not all humans are equal. I would say not every human has equal capability to take advantage of those productivity gains. Maybe bringing it back to AI, I would say that the most important part of AI is not the technology powering it, but the data behind it. Data is the training set behind AI, and access to data is incredibly unequal. I would say that Moore's law democratizes the CPU, but nothing democratizes data, which consolidates into fewer and fewer hands, and then those people, even if they only have the same technology as someone else, have all the data to actually make that technology into something useful. I think that, yes, everyone's going to have equal access to the technology because it's going to become increasingly cheap—it's already staggeringly cheap, it's amazing how cheap computers are—but it just doesn't matter, because they don't have equal access to the data and thus can't get the same benefit from the technology.
But, okay. I guess I’m just not seeing that, because a smartphone with an AI doctor can turn anybody in the world into a moderately-equipped clinician.
Oh, I disagree with that entirely. You having a doctor in your pocket doesn’t make you a doctor. It means that basically someone sold you a great doctor’s service and that person is really good.
Fair enough, but with that, somebody who has no education, living in some part of the world, can follow a protocol of "take temperature, enter symptoms, this, this, this," and all of a sudden they are empowered to essentially be a great doctor, because that technology magnified what they could do.
Sure, but who would you sell that to? Because everyone else around you has that same app.
Right, it’s an example that I’m just kind of pulling out randomly, but to say that a small amount of knowledge can be amplified with AI in a way that makes that small amount of knowledge all of a sudden worth vastly more.
Going with that example, I agree there's going to be the doctor app that's going to diagnose every problem for you and it's going to be amazing, and whoever owns that app is going to be really rich. And everyone else will have equal access to it, but there's no way that you can just download that app and start practicing on your neighbors, because they'd be like, "Why am I talking to you? I'm going to talk to the doctor app because it's already in my phone."
But the counter example would be Google. Google minted half a dozen billionaires, right? Google came out; half a dozen people became billionaires because of it. But that isn’t to say nobody else got value out of the existence of Google. Everybody gets value out of it. Everybody can use Google to magnify their ability. And yes, it made billionaires, you’re right about that part, the doctor app person made money, but that doesn’t lessen my ability to use that to also increase my income.
Well, I actually think that it does. Yes, the doctor app will provide fantastic healthcare to the world, but there’s no way anybody can make money off the doctor app, except for the doctor app.
Well, we’re actually running out of time, this has been the fastest hour! I have to ask this, though, because at the beginning I asked about science fiction and you said, you know, of your possible worlds of the future, one of them was Star Trek. Star Trekis a world where all of these issues we’re talking about we got over, and everybody was able to live their lives to their maximum potential, and all of that. So, this has been sort of a downer hour, so what’s the path in your mind, to close with, that gets us to the Star Trekfuture? Give me that scenario.
Well, I guess, if you want to continue on the downer theme, the Star Trek history—the TV show is talking about the glory days, but they all cite back to very, very dark periods before the Star Trek universe came about. It might be we need to get through those, who knows? But I would say ultimately, on the other side of it, we need to find a way either to do much better progressive redistribution of wealth, or to create a society that's much more comfortable with massive income inequality, and I don't know which of those is easier.
I think it’s interesting that I said “Give me a Utopian scenario,” and you said, “Well, that one’s going to be hard to get to, I think they had like multiple nuclear wars and whatnot.”
Yeah.
But you think that we’ll make it. Or there’s a possibility that we will.
Yeah, I think we will, and I think that maybe a positive thing, as well, is: I don’t think we should be terrified of a future where we build incredible AIs that go out and explore the universe, that’s not a terrible outcome. That’s only a terrible outcome if you view humanity as special. If instead you view humanity as just– we’re a product of Earth and we could be a version that can become obsolete, and that doesn’t need to be bad.
All right, we’ll leave it there, and that’s a big thought to finish with. I want to thank you David for a fascinating hour.
It’s been a real pleasure, thank you so much.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 47: A Conversation with Ira Cohen

[voices_in_ai_byline]
In this episode, Byron and Ira discuss transfer learning and AI ethics.
[podcast_player name=”Episode 47: A Conversation with Ira Cohen” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-06-05-(01-02-19)-ira-cohen.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Ira Cohen, he is the cofounder and chief data scientist at Anodot, which has created an AI-based anomaly detection system. Before that he was chief data scientist over at HP. He has a BS in electrical engineering and computer engineering, as well as an MS and a PhD in the same disciplines from The University of Illinois. Welcome to the show, Ira.
Ira Cohen: Thank you very much for having me.
So I’d love to start with the simple question, what is artificial intelligence?
Well, there is the definition of artificial intelligence as machines being able to perform cognitive tasks that we as humans can do very easily. What I like to think about with artificial intelligence is machines taking on tasks for us that do require intelligence but leave us time to do more thinking and more imagining in the real world. So autonomous cars: I would love to have one, and that requires artificial intelligence. I hate driving, I hate the fact that I have to drive for 30 minutes to an hour every day and waste a lot of my cognitive time thinking about the road. So when I think about AI, I think about how it improves my life by giving me more time to think about even higher-level things.
Well, let me ask the question a different way, what is intelligence?
That’s a very philosophical question, yes, so it has a lot of layers in it. So, when I think about intelligence for humans, it’s the ability to imagine something new, so imagine, have a problem and imagine a solution and think about how it will look like without actually having to build it yet, and then going in and implementing it. That’s what I think about [as] intelligence..
But a computer can’t do that, right?
That’s right, so when I think about artificial intelligence, personally at least, I don’t think that, at least in our lifetime, computers will be able to solve those kind of problems, but, there is a lower level of intelligence of understanding the context of where you are, and being able to take actions on it, and that’s where I think that machines can do a good task. So understanding a context of the environment and taking immediate actions based on that, that are not new, but are already… people know how to do them, and therefore we can code them into machines to do them.
I’m only going to ask you one more question along these lines and then we’ll move on, but you keep using the word “understand.” Can a computer understand anything?
So, yeah, 'understanding' is another hard word to pin down. I think it can understand, or at least it can recognize concepts. Understanding maybe requires a higher level of thinking, but understanding context and being able to take an action on it is what I think understanding is. So if I see a kid going into the road while I'm driving, I understand that this is a kid, I understand that I need to hit the brake, and I think machines can do these types of understanding tasks.
Fair enough. So, if someone asked what the state of the art is, where are we at with this? Because it's in the news all the time and people read about it all the time, so where are we at?
So, I think we’re at the point where machines can now recognize a lot of images and audio or various types of data, recognize with sensors, recognize that there are objects, recognize that there are words being spoken, and identify them. That’s really where we’re at today, we’re not… we’re getting to the point where they’re starting to also act on these recognition tasks, but most of the research, most of what AI is today, is the recognition tasks. That’s the first step.
And so let’s just talk about one of those. Give me something, some kind of recognition that you’ve worked on and have deep knowledge of, teaching a computer how to do…
All right, so, when I did my PhD, I worked on affective computing, so part of the PhD was to have machines recognize emotions from facial expressions. It's not really recognizing emotion, it's recognizing a facial expression and what it may express. There are six universal facial expressions that we as humans exhibit, so smiling is associated with happiness, there is surprise, anger, disgust, and those are actually universal. So the task that I worked on was to build classifiers that, given an image or a video sequence of a person's face, would recognize whether they're happy or sad or disgusted or surprised or afraid…
So how do you do that? Like do you start with biology and you say “well how do people do it?” Or do you start by saying “it doesn’t really matter how people are doing it, I’m just going to brute force, show enough labeled data, that it can figure it out, that it just learns without ever having a deep understanding of it?”
All right, so this was in the early 2000s, and we didn't have deep learning yet. We had neural networks, but we weren't able to train them with huge amounts of data; there wasn't a huge amount of data, so the brute-force approach was not the way to go. What I actually worked on is based on research by a psychologist who mapped facial movements to known expressions, and therefore to known emotions. It started out in the 70s, by people in the psychology field, [such as] Paul Ekman, in San Francisco, who created a map of facial movements into facial expressions, and so that was the basis for what type of features I need to extract from video and then feed to a classifier. Then you go through the regular process of machine learning of collecting a lot of data, but the data is transformed, so these videos were transformed into known features of facial movements, and then you can feed that into a classifier that learns in a supervised way. So I think a lot of the tasks around intelligence are that way. It's being changed a little bit by deep learning, which supposedly takes away the need to know the features a priori and do the feature engineering for the machine learning task…
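As an aside for readers: the pipeline Ira describes, hand-engineered facial-movement features fed to a supervised classifier, might look roughly like the following sketch. It assumes scikit-learn and uses randomly generated stand-in data in place of real action-unit features, so the feature count, labels, and resulting accuracy are purely illustrative and not taken from his actual system.

```python
# Minimal sketch of the pipeline described above: facial-movement features
# (e.g., action-unit intensities extracted from video frames) are fed to a
# supervised classifier that predicts one of the universal expressions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

EXPRESSIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "fear"]

# Stand-in data: in a real system each row would be the action-unit
# intensities for one frame, paired with a labeled expression.
rng = np.random.default_rng(0)
X = rng.random((600, 17))                    # 17 hypothetical action-unit features
y = rng.integers(0, len(EXPRESSIONS), 600)   # expression labels 0..5

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf")        # any standard supervised classifier would do
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Because the stand-in features are random noise, the printed accuracy hovers around chance; with real action-unit features the same skeleton is what "learning in a supervised way" amounts to.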
Why do you say “supposedly”?
Because it’s not completely true. You still have to do, even in speech, even in images, you still have to do some transformations of the raw data, it’s not just take it as is, and it will work magically and do everything for you. There is some… you do have to, for example in speech, you do have to do various transformations of the speech into all sorts of short term Fourier transform or other types of transformations, without which, the methods afterwards will not produce results.
So, if I look at a photo of a cat that somebody's posted online, or a dog, that's in surprise, you know, it's kind of comical, the look of surprise, but a human can recognize that in something as simple as a stick figure… What are we doing there, do you think? Is that a kind of transfer learning, or how is it that you can show me an alien and I would say, "Ah, he's happy…"? What do you think we're doing there…?
Yeah, we’re doing transferred learning. Those are really examples of us taking one concept that we were trained on from the day we were born, with our visual cortex and also then in the brain, because our brain is designed to identify emotions, just out of the need to survive, and then when we see something else, we try to map it onto a concept that we already know, and then if something happens that is different from what we expected, then we start training to that new concept. So if we see an alien smiling, and all of a sudden when he smiles, he shoots at you, you would quickly understand that smiling for an alien, is not associated with happiness, but you will start offby thinking, “this could be happy”.
Yeah, I think I remember reading that, hours after birth, children who haven't even been trained on it can recognize the difference between a happy and a sad face. I think they got sticks, put drawings on them and tried to see the babies' reactions. It may even be something deeper than something we learn, something that's encoded in our DNA.
Yeah, and that may be true because we need to survive.
So why do you think we’re so good at it and machines aren’t, right, like, machines are terrible right now at transfer learning. We don’t really know how it works do we, because we can’t really code that abstraction that a human gets, so..
I think, from what I see, first, it's changing. I see work coming out of Google's AI labs that is starting to show how they are able to train single, very large models that are able to do some transfer learning on some tasks, so it is starting to change. But machines are very different… they don't have to survive, they don't have this notion of danger and surviving, and I think until we are able to somehow encode that in them, we will always have to code the new concepts ourselves, or understand how to code for them, how to learn new concepts using transfer learning…
You know the roboticist Rodney Brooks talks about "the juice." He talks about how, if you put an animal in a box, it feels trapped, it just tries and tries to get out and it clearly has a deep desire to get out, but you put a robot in to do it, and the robot doesn't have what he calls "the juice," and he of course doesn't think it's anything spiritual or metaphysical or anything like that. But what do you think that is? What do you think is the juice? Because that's what you just alluded to, machines don't have to survive, so what do you think that is?
So I think he’s right, they don’t have the juice. Actually in my lab, during my PhD, we had some students working on teaching robots to move around, and actually, the way they did it was rewards and punishments. So they would get… they actually coded—just like you have in reinforcement learning—if you hit a wall, you get a negative reward. If the robot moved and did something he wasn’t supposed to, the PhD student would yell at them, and that would be encoded into a negative reward, and if he did something right, they had actions that gave them positive rewards. Now it was all kind of fun and games, but potentially if you do this for long enough, with enough feedback, the robot would learn what to do and what not to do, the main thing that’s different is that it still lives in the small world of where they were, in the lab or in the hallways of our labs. It didn’t have the intelligence to then take it and transfer it to somewhere else…
But the computer can never… I mean the inherent limit in that is that the computer can never be afraid, be ashamed, be motivated, be happy…
Yes. It doesn’t have the long term reward or the urge to survive, I guess.
You may be familiar with this, but I'd like to set it up anyway. There was a robot in Japan, it was released in a mall, and it was basically being taught how to get around, and if it came up to a person, it would politely ask the person to move, and if the person didn't, it would just zoom around them. And what happened was children would just kind of mess with it, maybe jump in front of it when it tried to go around them, again and again and again, and the more kids there were, the more likely they were to get brutal. They would hit it with things, they would yell at it and all of that, and the programmers ended up having to program it so that if it had a bunch of short people around it, like children, it needed to find a tall person, an adult, and zip towards them. But the distressing thing is that when they later asked those children, "Did you cause the robot distress?" 75% of them said yes, and then they asked if it behaved human-like or machine-like, and only 15% said machine-like. So they thought that they were actually causing distress and that it was behaving like a humanoid. What do you think that says? Does that concern you in any way?
Personally, it doesn’t, because I know that, as long as machines don’t have real affect in them, then, we might be transferring what we think stress is onto a machine that doesn’t really feel that stress… it’s really about codes…
I guess the concern is that if you get in the habit of treating something that you regard as being in distress, if you get into the habit of treating it callously, this is what Weizenbaum said, he thought that it would have a dampening effect on human empathy, which would not be good… Let me ask you this, what do you think about embodying artificial intelligence? Because you think about the different devices: Amazon has theirs, it’s right next to me, so I can’t say its name, but it’s a person’s name… Apple has Siri, Microsoft has Cortana… But Google just has the google system, it doesn’t have a name. Do you think there’s anything about that… why do you think it is? Why would we want to name it or not name it, why would we decide not to name it? Do you think we’re going to want to interact with these devices as if they’re other people? Or are we always going to want them to be obviously mechanistic?
My personal feeling is that we want them to be mechanistic. They're not there to exist of their own accord, and reproduce, and create a new world; they're there to help us. That's the way I think AI should be: to help us in our tasks. Therefore, when you start humanizing it, you either have the danger of mistreating it, basically treating it like a slave, or you're going to give it other attributes that are not what it has, thinking it is human, and then go the other route. They're there to help us, just like robots, or just like the industrial revolution brought machines that help humans manufacture things better. So they're there to help us. I mean, we're creating them not as beings, but as machines that help us improve humanity, and if we start humanizing them and then mistreating them, like you mentioned with the Japanese example, then it's going to get muddled and strange things can happen…
But isn’t that really what is going to happen? Your PhD alone, which is how do you spot emotions? Presumably would be used in a robot, so it could spot your emotions, and then presumably it would be programmed to empathize with you, like “don’t be worried, it’s okay, don’t be worried,” and then to the degree it has empathy with you, you have emotional attachment to it, don’t you go down that path?
It might, but I think we can stop it. The reason to identify the emotion is because it's going to help me do something. For example, our research project was around creating assistants to help kids learn, so in order to help the kid learn better, we need to empathize with the state of mind of the child, so it can help them learn better. That was the goal of the task, and I think as long as we encapsulate it in well-defined goals that help humans, then we won't have the danger of creating… the other way around. Now, of course, maybe in 20 years what I'm saying now will be completely wrong, and we will have a new world, a world of robots that we have to think about how to protect from us. But I think we're not there yet, I think it's a bit science fiction, this one.
So I’m still referring back to your earlier “supposedly” comment about neural nets, what do you think are other misconceptions that you run across about artificial intelligence? What do you think are, like your own pet peeves, like “that’s not true, or that’s not how it works?” Does anything come to mind?
People think, because of the hype, that it does a lot more than it really does. We know that it’s really good at classification tasks, it’s not yet very good at anything that’s not classification, unsupervised tasks, it’s not being able to learn new concepts all by itself, you really have to code it, and it’s really hard. You need a lot of good people that know the art of applying neural nets to different problems. It doesn’t happen just magically, the way people think.
I mean you’re of course aware of high profile people: Elon Musk, Stephen Hawking, Bill Gates, and so forth who [have been] worried about what a general intelligence would do, they use terms like “existential threat” and all that, and they also, not to put words in their mouth, believe that it will happen sooner rather than later… Because you get Andrew Ng, who says, “worry about overpopulation of Mars,” maybe in a couple hundred years you have to give it some thought, but you don’t really right now…So where do you think their concern comes from?
So, I’m not really sure and I don’t want to put any words in their mouth either, but, I mean the way I see it, we’re still far off from it being an existential threat. The main concern is you might have people who will try to abuse AI, to actually fool other people, that I think is the biggest danger, I mean, I don’t know if you saw the South Park episode last week, they had their first episode where Cartman actually bought an Alexa and started talking to his Alexa, and I hope your Alexa doesn’t start working now…. So it basically activated a lot of Alexas around the country, so he was adding stuff to the shopping cart, really disgusting stuff, he was setting alarm clocks, he was doing all sorts of things, and I think the danger of the AI today is really getting abused by other people, for bad purposes, in this case it was just funny… But you can have cases where people will control autonomous cars, other people’s autonomous cars by putting pictures by the side of the road and causing them to swerve or stop, or do things they’re not supposed to, or building AI that will attack other types of AI machines. So I think the danger comes from the misuse of the technology, just like any other technology that came out into the world… And we have to… I think that’s where the worry comes from, and making sure that we put some sort of ethical code of how to do that…
What would that look like? I mean that’s a vexing problem…
Yes, I don’t know, I don’t have the answer to that…
So there are a number of countries, maybe as many as twenty, that are working on weaponizing, building AI-based weapons systems, that can make autonomous kill decisions. Does that worry you? Because that sounds like where you’re going with this… if they put a plastic deer on the side of the road and make the car swerve, that’s one thing, but if you literally make a killer robot that goes around killing people, that’s a whole different thing. Does that concern you, or would you call that a legitimate use of the technology…?
I mean this kind of use will happen, I think it will happen no matter what, it’s already happening with drones that are not completely autonomous, but they will be autonomous probably in the future. I think that I don’t know how it can be… this kind of progress can be stopped, the question is, I mean, the danger I think is, do these robots start having their own decision-making and intelligence that decides, just like in the movies, to attack all humankind, and not just the side they’re fighting on… Because technology in [the] military is something that… I don’t know how it can be stopped, because it’s driven by humans… Our need to wage war against each other… The real danger is, do they turn on us? And if there is real intelligence in the artificial intelligence, and real understanding and need to survive as a being, that’s where it becomes really scary…
So it sounds like you don’t necessarily think we’re anywhere near close to an AGI, and I’m going to ask you how far away you think we are… I want to set the question up as saying that, there are people who think we’re 5-10 years away from a general intelligence and then there are people who think we’re 500 years [away].Oren Etzioni was on the show, and he said he would give anyone 1000:1 odds that we wouldn’t have it in 5 years, so if you want to send him $10 he’ll put $10,000 against that. So why do you think there’s such a gap, and where are you in that continuum?
Well, because the methods we're using, as smart as they've gotten, are still doing rudimentary tasks. They're still recognizing images; the agents that are doing automated things for us are still doing very rudimentary tasks. General intelligence requires a lot more than that; it requires a lot more understanding of context. The example of Alexa last week is a perfect example of not understanding context: we as humans would never react to something on TV like that and add something to our shopping cart just because Cartman said it, whereas even the very, very smart Alexa, with amazing speech understanding and the ability to take actions based on it, still doesn't understand the context of the world. So, prophecy is for fools, but I think it's at least 20 years out…
You know, we often look at artificial intelligence and its progress based on games where it beats the best player. That goes back to [Garry] Kasparov in '97, you have of course Jeopardy, you have AlphaGo, you had an AI beat some world-rated poker players… And those all kind of create a stir. I want you to reflect on it: what do you think is the next thing like that, where one day you snap your fingers and all of a sudden an AI just did… what?
Okay, I haven’t thought about that… All these games, what makes them unique is that they are a very closed world; the world of the game, is finite and the rules are very clear, even if there’s a lot of probability going on, the rules are very clear, and if you think in the real world—and this may be going back to the questions why it will take time—for artificial intelligence to really be general intelligence, the real world is almost infinite in possibilities and the way things can go, and even for us, it’s really hard.
Now, trying to think of a game that machines would beat us at next… I wonder, if we were able to build robots that can do lots of sports, I think they could beat us easily in a lot of games, because if you take any sport like football or basketball, they require intelligence, they require a lot of thinking, very fast thinking and path finding by the players, and if we were able to build the body of a robot that can do the motions just like humans, I think they could easily beat us at all these games.
Do you, as a practitioner… I'm intrigued, on the topic of general intelligence, by the idea that human DNA isn't really that much code, and if you look at how much of that code makes us different from, say, a chimp, it's very small, I mean it's a few megabytes. That would be how we are programmatically different, and yet that little bit of code makes us have a general intelligence and a chimp not. Does that persuade you, or suggest to you, that general intelligence is a simple thing that we just haven't discovered, or do you think that general intelligence is a hack of a hundred thousand different things… like it's going to be a long slog and then we finally get it together…?
So, I think [it’s] the latter, just because the way you see human progress, and it’s not just about one person’s intelligence. I think what makes us unique is the ability to combine intelligence of a lot of different people to solve tasks, and that’s another thing that makes us very different. So you do have some people that are geniuses that can solve really really hard tasks by themselves, but if you look at human progress, it’s always been around combined intelligence of getting one person’s contribution, then another person’s contribution, and thinking about how it comes together to solve that, and sometimes you have breakthroughs that come from an individual, but more often than not, it’s the combined intelligence that creates the drive forward, and that’s the part that I think is hard to put into a computer…
You know, there are people who have amazing savant-like abilities. I remember reading about a man named [George] Dantzig. He was a graduate student in statistics, and his professor put two famous unsolved problems on the blackboard. Dantzig arrived late that day, saw them and just assumed they were the homework, so he copied them down and went home. Later he said he thought they were a little harder than normal, but he solved them both and turned them in… and that really happened, it's not one of those urban-legend kind of things. You have people who can read the left and right page of a book at the same exact time, you just have people who are these extraordinary edge cases of human ability. Does that suggest that our intellects are actually far more robust than we think they are? Does that suggest anything to you as an artificial intelligence guy?
Right, so coming from the probability space, it just means that our intelligence has a wide distribution, and there are always exceptions in the tails, right? These kinds of people are in the tails, and often when they are discovered, they can create monumental breakthroughs in our understanding of the world, and that's what makes us so unique. You have a lot of people in the center of the distribution who are still contributing a lot, making advances to the world and to our understanding of it, and not just understanding, but actually creating new things. So I'm not a genius, most people are not geniuses, but we still create new things and are able to advance things, and then, every once in a while, you get these tails of the intelligence distribution that can solve the really hard problems nobody else can solve. So the combination of all that actually makes us push things forward in the world, and I think that kind of combined intelligence in artificial intelligence is way, way off. It's not anywhere near, because we don't understand how it works; I think it would be hard for us to even code that into machines. That's one of the reasons I think AI, the way people are afraid of it, is still way off…
But by that analysis, that sounds like, to circle that back, there will be somebody that comes along that has some big breakthrough in a general intelligence, and ta-da, it turns out all along it was, you know, bubble sort or….
I don't think it's that simple, that's the thing. Solving a statistical problem that's really, really tough… I don't think it's a well-defined enough problem that it will just take a genius to understand: "Oh, it's that neuron going right to left," and that's it… so I don't think it's that simple… There might be breakthroughs in mathematics that help you understand the computation better, maybe quantum computers that will help you do faster computation, so you can train machines much, much faster so they can do the task much better, but it's not about understanding the concept of what makes a genius. I think that's more complicated, but maybe it's my limited way of thinking, maybe I'm not intelligent enough to see it…
So to stay on that point for a minute… it's interesting, and I think perhaps telling, that we don't really understand how human intelligence works. Like, we don't know how a thought is encoded in the brain… like if I said… Ira, what color was your first bicycle, can you answer that question?
I don’t remember… probably blue…
Let’s assume for a minute that you did remember. It makes my example bad, but there’s no bicycle location in your brain that stored the first “bicycle”… like an icon, or database lookup…like nobody knows how that happens… not only how it’s encoded, but how it’s retrieved… And then, you were talking earlier about synthesis and how we use it all together, we don’t know any of that… Does that suggest to you that, on the other end, maybe we can’t make a general intelligence… or at the very least, we cannot make a general intelligence until we understand how it is that people are intelligent…?
That may be, yeah. First of all, even if we made it, if we don't understand it, then how would we know that we made it? Circling back to that… it's just like the kids who thought they were causing stress to the robot, because they thought they understood stress and the effect of it, and they were transferring it onto the robot. So maybe when we create something very intelligent that looks to be like us, we would think we created intelligence, but we wouldn't know that for sure until we know what general intelligence really is…
So do you believe that general intelligence is an evolutionary invention that will come along, whether in 20 years, 50 years, 1,000 years, whatever it is, out of the techniques we use today, the early AI? Like, are we building really, really, really primitive general intelligences, or do you have a feeling that a real AGI is going to be a whole different kind of approach to the technology?
I think it's going to be a whole different approach. I think what we're building today are just machines that do tasks that we humans do, in a much, much better way, just like we built machines in the industrial revolution that did what people did with their hands, but did it in a much faster and better way. That's the way I see what we're doing today… And maybe I'm wrong, maybe I'm totally wrong, and we're giving them a lot more general intelligence than we're thinking, but the way I see it, it's driven by economic powers, it's driven by the need of companies to advance and take away tasks that cost too much money or are too slow to do by humans… and to revolutionize that way. I'm not sure that we're really giving them general intelligence yet; we're still giving them ways to solve specific tasks that we want them to solve, and not something very, very general that can just live by itself and create new things by itself.
Let’s take up this thread, that you just touched on, about, we build them to do jobs we don’t want to do, and you analogize it to the Industrial Revolution… so as you know, just to set the problem up, there are 3 different narratives about the effect this technology, combined with robotics, or we’ll call it automation, in general, are going to have on jobs. And the three scenarios are: one is that, it’s going to destroy an enormous number of quote, low-skill jobs, and that, they will, by definition, be fewer low skilled jobs, and more and more people competing for them and you will have this permanent class of unemployable… it’s like the Great Depression in the US, just forever. And then you have people who say, no, it’s different than that, what it really is, is, they’re going to be able to do everything we can do, they’re going to have escape… Once a machine can learn a new task faster than a person, they’ll take every job, even the creative ones, they’ll take everything. And the third one says no, for 250 years we’ve had 5-10% of unemployment, its never really gotten out of that range other than the anomalous depression, and in that time we had electricity, we had mechanization, we had steam power, we had the assembly line… we had all these things come along that sure looked like job eaters, but what people did is they used the new technology to increase their own productivity and drive their own wages higher, and that’s the story of progress, that we have experienced…So which of those three theories, or maybe a fourth one, do you think is the correct narrative?
I think the third theory is probably the more correct narrative. It just gives us more time to use our imagination and be more productive at doing more things and improving things, so all of a sudden we'll have time to think about going and conquering the stars, and living in the stars, or improving our lives here in various ways… The only thing that scares me is the speed of it, if it happens too quickly, too fast… We're humans; it takes us, as a human race, some time to adapt. If the change happens so fast that people lose their jobs too quickly, before they're able to retrain for the new economy, the new way of [work], the fact that some positions will not be available anymore, that's the real danger, and I think if it happens too fast around the world, then there could be a backlash.
I think what will happen is that the progress will stop, because some backlash will happen in the form of wars or all sorts of uprisings, because, at the end, people need to live, people need to eat, and if they don't have that, they don't have anything to live for; they're going to rise up, they're not just going to disappear and die by themselves. So that's the real danger: if the change happens too rapidly, you can have a depression that will actually cause the progress to slow down, and I hope we don't reach that, because I would not want us, as a world, to reach that stage where we have to slow down. With all the weapons we have today, this could actually be catastrophic too…
What do you mean by that last sentence?
So I mean we have nuclear weapons…
Oh, I see, I see, I see.
We have actual weapons that can, not just… could actually annihilate us completely…
You know, I hear you. Like… what would "too fast" be? First of all, we had that when the Industrial Revolution came along: you had the Luddite movement, when Ludd broke two stocking frames; you had the thresher riots [or Swing riots] in England in the 1820s, when the automated thresher came along; you had… the first day the London Times was printed using steam power instead of people, they were going to go find the guy who invented that and string him up. You had a deep-rooted fear of labor-changing technology; that's a whole current that constantly runs. But what would too fast look like? The electrification of industry happened lightning fast; we went from generating 5% of our power with electricity, instead of steam, to 85% in just 22 years… Give me a "too fast" scenario. Are you thinking about the truck drivers, or… tell me how it could "be too fast," because you seem to be very cautious, like, "man, these technologies are hard and they take a long time and there's a lot of work and a lot of slog," and then, so what would too fast look like to you?
If it’s less than a generation, let’s say in 5 years, really, all taxi drivers and truck drivers lose their job because everything becomes automated, that seems to be too fast. If it happens in 20 years, that’s probably enough time to adjust, and I think… the transition is starting, it will start in the next 5 years, but it will still take some time for it to really take hold, because if people lose those jobs today, and you have thousands or hundreds of thousands, or even millions of people doing that, what are they going to do?
Well, presumably, I mean, classic economics says that if that happened, the cost of taking a cab goes way down, right? And if that happens, that frees up money that I no longer have to spend on an expensive cab, and therefore I spend that money elsewhere, which generates demand for more jobs. But is the 5-year scenario… it may be a technical possibility, like we may "technically" be able to do it, if we don't have a legislative hurdle.
I read this article in India, which said they're not going to allow self-driving cars in India because that would put people out of work; then you have the retrofit problem; then every city's going to want to regulate it and say, well, you can have a self-driving car, but it needs to have a person behind the wheel just in case. I mean, like you would say, look, we've been able to fly airplanes without a pilot for decades, yet no airline in the world would touch that, "in this plane, we have no pilot," even though that's probably a better way to do it… So, do you really think we can have all the taxi drivers gone in 5 years?
No, and exactly for that reason, even if our technology really allows it. First of all, I don't think it will totally allow it, because for it to really take hold you have to have a majority of the cars on the road be autonomous. Just yesterday I was in San Francisco, and I heard a guy say he was driving behind one of those self-driving cars, and he got stuck behind it because it wouldn't take a left turn when the light was green; it just forever wouldn't take a left turn that humans would… The reason it wouldn't take the left turn was that there were other, human-driven cars on the road, and it was coded to be very, very careful about that, and he was 15 minutes late to our meeting just because of that self-driving car…
Now, I think there will be a long transition, partly because legislation will regulate it and slow it down a bit, which is a good thing. You don't want to change too fast, too quickly, without making sure that it really works well in the world, and as long as there is a mixture of humans driving and machines driving, the machines will be a little bit "lame," because they will be coded to be a lot more careful than us, and we're impatient. So that will slow things down, which is a good thing; I think making a change too fast can lead to all sorts of economic problems as well…
You know, in Europe they had… I could be wrong on this, I think it was first passed in France, but I think it was being considered by the entire EU, and it's the right to know why the AI decided what it did. If an AI made the decision to deny you a loan, or what have you, you have the right to know why it did that… I had a simple question, which was: is that possible? Could Google ever say, "I'm number four for this search and my competitor's number three, why am I number four and they're number three?" Is Google big and complicated enough, and you don't have to talk specifically about Google, but are systems big and complicated enough that we don't know… there are so many thousands of factors that go into this thing, that many people never even look at, it's just a whole lot of training…
Right, so in principle, the methods could tell you why they made a decision. Even if there are thousands of factors, you can go through all of them and get not just the output of the recognition, but also a highlight of which attributes caused it to decide it's one thing or another. So from the technology point of view, it's possible. From the practical point of view, for a lot of problems you won't really care. If it recognized that there's a cat in the image, and you know it's right, you won't care why it recognized that cat. For problems where the system made a decision and you don't necessarily know why, or you have to take action based on that recognition, you would want to know. So if I predicted for you that your revenue is going to increase by 20% next week, you would probably want that system to tell you why it thinks that will happen, because there isn't a clear reason for it that you could imagine yourself; but if the system told you there is a face in this image, and you just look at the image and you can see that there's a face in it, then you won't have a problem with it. So I think it really depends on the problem that you're trying to solve…
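To make that idea concrete, here is a minimal sketch of per-factor attribution: with a simple linear scorer, the "why" of a decision can be read off as each factor's weight times its value. The feature names, weights, and applicant values are hypothetical, and the hand-set logistic-style scorer stands in for whatever model a real lender or search engine would actually use.

```python
# Minimal sketch of per-factor attribution for a decision.
# The "model" is a hand-set logistic-style scorer, so each feature's
# contribution is simply weight * value; all names and numbers are hypothetical.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments", "account_age_years"]
weights = np.array([0.8, -1.5, -2.0, 0.4])    # hypothetical learned weights
bias = 0.1

applicant = np.array([0.6, 0.7, 1.0, 0.2])    # hypothetical normalized inputs

contributions = weights * applicant           # per-feature push toward approve or deny
score = 1.0 / (1.0 + np.exp(-(contributions.sum() + bias)))

print(f"approval probability: {score:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>18}: {c:+.2f}")
```

For non-linear models the same question is usually answered with permutation importance or Shapley-style attribution, which generalize this "go through all the factors" idea.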
We talked about games earlier and you pointed out that they were closed environments, and that's really a place with explicit rules, a place where an AI can excel, and I'll add to that, there's a clear-cut idea of what winning looks like, and what a point is. I think somebody on the show said, "Who's winning this conversation right now?" There's no way to do that. So my question to you is, if you walk around an enterprise and you say "where can I apply artificial intelligence to my business?" would you look for things that looked like games? Like, okay, in HR you have all these successful employees that get high performance ratings, and then you have all these people you had to fire because they didn't, and then you get all these resumes in. Which ones look more like the good people as opposed to the bad people? Are there lots of things like that in life that look like games… or is the whole game thing really a distraction from solving real-world problems, because nothing really is a game in the real world…?
Yeah, I think it'd be wrong to look at it as a game, because the rules… first, there is no real clear notion of winning. What you want is progress; you have goals that you want to progress towards. For example, in business, you want your company to grow. That could be your goal, or you want the profits to grow, you want your revenue to grow, so you set these goals because that's how you want things to progress, and then you can look at all the factors that help it grow. The world of how to "make it grow" is very large, there are so many factors. So if I look at my employees, there might be a low-performing employee in one aspect of my business, but maybe that employee brings to the team, you know, a lot of humor that causes them to be productive, and I can't measure that. Those kinds of things are really, really hard to measure, so looking at it from a very analytic point of view of just a "game" would probably miss a lot of important factors.
So tell me about the company you co-founded, Anodot, because you make an anomaly detection system using AI. So first of all, explain what that is and what that looks like, but also how did you approach that problem? If it's not a game, instead of… you looked at it this way…
So, what are anomalies? Anomalies are anything that's unexpected. Our approach was: you're a business and you're collecting lots and lots and lots of data related to your business. At the end, you want to know what's going on with the business; that's the reason you collect a lot of data. Today, people have a lot of different tools that help them kind of slice and dice the data and ask questions about what's happening there, so they can make informed decisions about the future or react to things that are happening right now that could affect the business.
The problem with that is… basically, why isn't it AI? It's not AI because you're basically asking a question and letting the computers compute something for you and give you an answer; whereas anomalies, by nature, are things that happen that are unexpected, so you don't necessarily know to ask the question in advance, and unexpected things could happen. In business, for example, you see revenue for a product you're selling going down in a certain city; why is that happening? If you don't look at it, and if you don't ask the question in advance, you're not even aware that it is happening… So the great thing about AI and machine-learning algorithms is that they can process a lot of data, and if you can encode into a machine an algorithm that identifies what anomalies are, you can find them at very, very large scale, and that helps companies actually detect that things are going wrong, or detect the opportunities that they have that they might otherwise miss. The endgame is very simple: to help you improve your business constantly, maintain it, and avoid the risks of doing business. So it's not a "game," it's actually bringing immediate value to a company, putting light on the data that they really need to look at with respect to their business. And the great thing about machine-learning algorithms [is] they can process all of this data much better than we could, because what do humans do? We graph it, we visualize the data in various ways, we create queries from a database about questions that we think might be relevant, but we can't really process all the data, all the time, in an economical way. You would have to hire armies of people to do that, and machines are very good at that, so that's why we built Anodot…
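Here is a minimal sketch of the kind of per-metric anomaly detection described above, not Anodot's actual algorithm: learn a baseline for each metric and flag points that deviate from it. The metric below is a made-up revenue series with an injected dip.

```python
# Rolling z-score anomaly detection over one made-up metric.
# Real systems learn seasonality and watch millions of metrics at once;
# the core idea of "baseline plus unexpected deviation" is the same.
import numpy as np

def detect_anomalies(series, window=24, threshold=3.0):
    """Flag points more than `threshold` std-devs from the rolling mean."""
    anomalies = []
    for t in range(window, len(series)):
        baseline = series[t - window:t]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            anomalies.append(t)
    return anomalies

rng = np.random.default_rng(0)
revenue = rng.normal(loc=100, scale=5, size=200)   # hypothetical hourly revenue
revenue[150:160] -= 40                             # the unexpected dip nobody thought to query for

print(detect_anomalies(revenue))                   # indices around 150-159 get flagged
```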
Give me an example, like tell me a use case or a real world example of something that Anodot, well that you were able to spot that a person might not have been able to…?
So, we have various customers in the e-commerce business, and if you're in e-commerce and you're selling a lot of different products, various things could go wrong or opportunities might be missed. For example, say I'm selling coats along with a thousand other products, and now in a certain area of the country there is an anomalous weather condition, it became cold. All of a sudden, people in that state will start buying more coats, and I won't be able to see it because it's hiding in my data. Now, if somebody actually looked at it, they would probably be able to spot it, but because there is so much data, so many things, so many moving parts, nobody actually notices it. Our AI system finds it: "Oh, there is an anomalous weather condition and there is an uptick in sales of that coat; you'd better do something to seize the opportunity to sell more coats." So either you send more inventory to that region to make sure that if somebody really wants a coat you're not out of stock, because if you're out of stock you're losing potential revenue, or you can even offer discounts for that region, because you want to bring more people to your e-commerce site rather than the competition. So that's one example…
And I assume it's also used in security or fraud and whatnot, or are you really focused on the e-commerce use case?
So we built a fairly generic platform that can handle a wide variety of use cases. We don't focus on security as such, but we do have customers where, in part of their data, we're able to detect all sorts of security-related breaches, like bot activity happening on a site, or fraud rings—not the individual fraud of one person doing a transaction, but the kind where, a lot of the time, fraud isn't just one credit card; it's somebody actually doing it over time, and then you can identify those fraud rings.
Most of our use cases have been around more business-related data, either in e-commerce, ad tech companies, or online services. And by online services, I mean anybody that is really data-dependent and data-driven in running their business. Most businesses are transforming into that, even the old-fashioned businesses, because that data is a competitive advantage, and being able to process that data to find all the anomalies gives you an even larger competitive advantage.
So, last question: You made a comment earlier about freeing up people so we can focus on living in the stars. People who say that are generally science fiction fans I’ve noticed. If that is true, what view of the future, as expressed in science fiction, do you think is compelling or interesting or could happen?
That’s a great question. I think that that, what’s compelling to me about the future, really, is not whether we live in the stars or not in the stars, but really about having to free up our time to thinkabout stars, to thinkabout the next big things that progress humanity to the next levels, to be able to explore new dimensions and solve new problems, that…
Seek out new life and new civilizations…
Could be, and it could be in the stars, it could be on Earth, it could be just having more time. Having more time on your hands gives you more time to think about "What's next?" When you're busy surviving, you don't have any time to think about art, and think about music and advancing it, or think about the stars, or think about the oceans. So that's the way I see AI and technology helping us—really freeing up our time to do more, and to use our collective intelligence and individual intelligence to imagine places that we haven't thought about before… or don't have time to think about, because we're busy doing the mundane tasks. That's really, for me, what it's all about…
Well, that is a great place to end it, Ira. I want to thank you for taking the time and going on that journey with me of talking about all these different topics. It's such an exciting time we live in, and your reflections on it are fascinating, so thank you again.
Thank you very much, bye-bye.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 45: A Conversation with Stephen Wolfram

[voices_in_ai_byline]
In this episode, Byron and Stephen discuss computational intelligence and what’s happening in the brain.
[podcast_player name=”Episode 45: A Conversation with Stephen Wolfram” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-05-29-(01-12-51)-wolfram-alpha.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card-3.jpg”]
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Stephen Wolfram. Few people can be said to literally need no introduction, but he is one of them. Anyway, as a refresher, Stephen Wolfram exploded into the world as a child prodigy who made all kinds of contributions to physics. He worked with Richard Feynman for years. But, unlike many prodigies, he didn’t peak at 18, or 28, or 38, or 48. In fact, he probably hasn’t peaked at all right now. He went on to create Mathematica, which is the closest thing the world has to math as a language. He wrote his magnum opus, a book called ‘A New Kind of Science.’ And he created Wolfram Alpha, an answer engine that grows better and better every day. Welcome to the show, Stephen.
Stephen Wolfram: Thanks.
I usually start off by asking, what is artificial intelligence? But I want to ask you a different question. What is intelligence?
It’s a complicated and slippery concept. It’s useful to start, maybe, in thinking about what may be an easier concept, what is life? You might think that was an easy thing to define. Here on Earth, you can pretty much tell whether something is alive or not. You dig down, you look in a microscope, you figure out does it have RNA, does it have cell membrane? Does it have all those kinds of things that are characteristic of life as we know it on Earth? The question is, what abstractly is something like life? And, we just don’t really know. I remember when I was a kid, there were these spacecrafts sent to Mars, and they would dig up soil, and they had this definitive test for life at some point which was, you feed it sugar and you see whether it metabolizes it. I doubt that in an abstract sense that’s a good, fundamental definition of life. In the case of life on Earth, we kind of have a definition because it’s connected by this, sort of, thread of history. All life is, kind of, connected by a thread of history. It’s sort of the same thing with intelligence. If you ask, what is the fundamental essence of intelligence? Well, in the case of the intelligence that we know with humans and so on, it’s all connected by a thread of history. If we ask, what is intelligence abstractly? That’s a much harder question, and it’s one I’ve thought about for a long time. What’s necessary to say that something is intelligent is for it to be capable of some level of sophisticated computation. If all the thing does is to, kind of, add two numbers together, and that’s the only thing it can do, we’re not going to likely consider it intelligent.
But your theory is that hurricanes are computational, and icicles, and DNA.
Absolutely.
And so they’re all intelligent?
As people often say, the weather has a mind of its own. The question is, can we distinguish the kind of intelligence, the kind of mind, in effect, that is associated with the computations that go on in fluid mechanics, from the kind of intelligence that we have in our brains? I think the answer is, ultimately, there really isn't a bright-line distinction between those. The only thing that is special about the intelligence that we have is that it's connected to our kind of thread of history and our kind of biological evolution, and the evolution of our civilization and things like this. I don't think we can distinguish, at some sort of scientific level, an essential feature that means the brain is intelligent and the weather doesn't really have a mind, so to speak. I think the thing that's interesting about modern computation in AI is that we're seeing our first examples of some kind of alien intelligence. We're seeing our first examples of things that clearly have attributes very reminiscent of human-like, what we have traditionally called intelligent, behavior. But yet they don't work in anything like the same way, and we can argue back and forth forever about whether this is really intelligence or not. And I think it becomes just a question of what we choose to make the word mean.
In my life I’ve been involved in a lot of, kind of, making computers do things that before only humans could do. And people had often said, “Oh, well, when computers can do this or that thing, then we’ll know they’re intelligent.” And one could go through the list of some of those things whether it’s doing mathematical computation or doing image recognition, or doing whatever. Every time when computers actually managed to do these things the typical response is, “Oh, well, that isn’t really intelligence because…” Well, because what? Usually, the real reason people think it isn’t really intelligence is because somehow you can look inside and see how it works. Now, of course, to some extent, you can do that with brains too. But I think one of the things that’s sort of new in recent times, is something that I’ve long been expecting, anticipating, working on actually, which is the appearance of computation that is doing things that are really interesting to humans but where we as humans can’t really understand what’s going on inside. In other words, the typical model of computation has been, you want to build a program for a particular task, you the human engineer, put the pieces together in a kind of very intentional way where you know, when I put this piece, and this piece, and this piece together then it’s going to do this, and that’s what I wanted it to do. Well, for example, I’ve been interested for a really long time in, what I call, mining the computational universe of possible programs. Just studying simple programs, for example, then going and searching trillions of them to find ones that behave in particular ways that turn out to be useful for some purpose that we have.
Well, the thing that’s happened in modern times with deep learning and neural networks, and so on, is it’s become possible to do that same kind of program discovery in a slightly different way than I’ve done it, because it turns out that one can use actually the ideas of calculus to make incremental progress in finding programs that do the things one wants them to do. But the basic idea is the same, that is, you are, by some criterion, you’re finding from this, sort of, computational universe of possible programs, you’re finding programs that serve some purpose that’s useful to us. Whether that purpose is identifying elephants from tea cups, or whether that purpose is translating between human languages or whatever else. And, the thing that is interesting and maybe a little bit shocking right now, is the extent to which when you take one of these programs that have been found by, essentially search, in this space of possible programs, and you look at it, and you say, “How does it work?” And you realize you really don’t know how it works. Each individual piece you can identify what it’s doing, you can break it down, look at the atoms of the program and see how they work. But when you say, “What’s the big picture? What’s it really doing? What’s the ultimate story here?” The answer is we just don’t know.
You mean like move 37 in AlphaGo? This move that even the greatest player in the world was like, “What?”
I haven’t followed that particular system. But I tried to program a computer to play Go in 1973 and discovered it was hard.
But to back up a minute, wouldn’t you say Google passed that point a long time ago? If you say, “Why did this page rank number two and not number three?” Even Google would look at it and go, “I don’t know. Who knows?” It’s an alchemy of so many different things.
I don't know; I haven't seen the source code of the Google search engine. I know, in my own search engines, search systems are kind of messy. Hundreds of signals go in and they're ranked in some way or another. I think in that particular case, the backtrace of "OK, it was these signals that were important in this thing" is, to some extent, a little simpler, but it's the same. That's a case where it tends to be more of a one-shot kind of thing. That is, you evaluate the values of these signals and then you say, "OK, let's feed them into some function that mushes together the signals and decides what the ranking should be." What tends to be more shocking, more interesting (it hasn't really happened completely with the current generation of deep learning neural nets, although it's beginning to happen) has happened very much so with the kind of programs that I studied a lot, like cellular automata, and a bunch of the kinds of programs that we've discovered, sort of, out in the computational universe, that we use to make Wolfram Alpha work and to make lots of other algorithms that we build work. In those kinds of programs, what happens is not just a one-shot thing where one messy function is applied to some data to get a result. It's a sequence of, actually not very messy, steps. Often a sequence of simple, identical steps, but together, you apply it 10,000 times, and it's really unclear what's going on.
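Cellular automata make that last point easy to see. Below is a sketch of Wolfram's rule 30: the same trivial local update is applied to every cell at every step, yet the overall pattern quickly becomes hard to summarize, which is exactly the "each piece is simple, the big picture is opaque" situation being described. (The rendering choices here, the width, the number of steps, and the wrap-around edges, are arbitrary.)

```python
# Elementary cellular automaton, rule 30: new cell = left XOR (center OR right).
# Each step is the same simple rule applied everywhere; run it enough times and
# there is no obvious human-level story for what the pattern "is doing."
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1            # start from a single black cell

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = rule30_step(row)
```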
I want to back up, if I can, just a minute, because my original question was, what is intelligence? And you said it's computation. You're well known for believing everything is computation — time and space, and the hurricane, and the icicle, and DNA and all of that. If you really are saying everything is intelligence, isn't that begging the question? Like you're saying, "Well, everything's intelligence." What is it? I mean, for instance, the hurricane has no purpose. You could say intelligence is purposeful action with a goal in mind.
Purposeful action? You're then going to slide down another slippery slope. When you try and start defining purpose, for us as humans, we say, "Well, we're doing this because…" and there'll be some story that eventually involves our own history, or the history of our civilization, or our cultural beliefs, or whatever else, and it ends up being really specific. If you say, "Why is the Earth going around in its orbit?" does it have a purpose? I don't know. You could say, "Well, it's going around in this particular orbit because that minimizes the action," in the technical sense of mechanics, associated with this mechanical system. Or you could say it's going around in its orbit because it's following these equations. Or it's going around in its orbit because the solar system was formed in this way and it started going around in its orbit. I think that when we talk about human explanations of purpose, they quickly devolve into a discussion of things that are pretty specific to human history, and so on. If you say, "Why did the pulsar magnetosphere produce this blip?" well, the answer is there'll be a story behind it. It produced that blip because there was this imperfection at a space-time position, something in the crust of the pulsar, a neutron star, and that had this consequence, and that had this consequence, and so on. There's a story.
Well, you're conflating words. 'Because' is intentional, but 'how' is not. So the question you're asking is, "How did that happen?" and that is bereft of purpose, and therefore bereft of intelligence. But, to your point, if computation is intelligence, then, by definition, there's no such thing as non-intelligence. And I'm sure you've looked at something and said, "That's not very intelligent."
No, no, no. There's a definite threshold. If you look at a system and all it does is stay constant over time, you start it in some state and it just stays that way, then nothing exciting is going on there. There are plenty of systems where, for example, it will just repeat: what it does just repeats predictably over and over again. Or, you know, it makes some elaborate nested pattern, but it's a very predictable pattern. As you look at different kinds of systems, there's this definite threshold that gets passed, and it's related to this thing I call the 'principle of computational equivalence', which is basically the statement that, beyond some very low level of structural complexity of the system, the system will typically be capable of a certain level of sophisticated computation, and all such systems are capable of that same level of sophisticated computation. One facet of that is the idea of universal computation: that everything can be emulated by a Turing machine, and can emulate a Turing machine. But there's a little bit more to this principle of computational equivalence than the specific feature of universal computation. Basically, the idea is it could have been the case…
If we'd been having this conversation over 100 years ago, people had mechanical calculators at that time. They had ones that did one kind of operation, ones that did another kind of operation. We might be having a discussion along the lines of, "Oh, look at all these different kinds of computers that exist. There'll always be different kinds of computers that one needs." Turns out that's not true. It turns out all one needs is this one kind of thing that's a universal computer, and that one kind of computer covers all possible forms of computation. And so then the question is, if you look at other kinds of systems, do they do computation at the same level as things like universal computers, or are there many different levels, many different, incoherent kinds of computation that get done? And the thing that has emerged from both general discoveries that have been made and specifically a lot of stuff I've done is that, no, anything that we seriously imagine could be made in our universe seems to have this one kind of computation, this one level of computation that it can do. There are things that are below that level of computation, and whose behavior is readily predictable, for example, by a thing like a brain that has this kind of uniform sophisticated level of computation. But once you reach that sophisticated level of computation, everything is kind of equal. And, in fact, if that wasn't the case, if for example there was a whole spectrum of different levels of computation, then we would expect the top computer, so to speak, to be able to say, "Oh, you lower, lesser computers, you're wasting your time. You don't need to go through and do all those computations. I, the top computer, can immediately tell you what's going to happen. The end result of everything you're doing is going to be this." It could be that things work that way, but it isn't, in fact, the case. Instead, what seems to be the case is that there's this one, kind of, uniform level of computation, and in a sense it's that uniformity of level of computation that has a lot of consequences that we're very familiar with. For example, it could be the case that if nature mostly consisted of things whose level of computational sophistication was lower than the computational sophistication of our brains, we would readily be able to work out what was going to happen in the natural world, sort of, all of the time. And when we look at some complicated weather pattern or something, we would immediately say, "Oh, no, we're a smarter computer, we can just figure out what's going to happen here. We don't need to let the computation of the weather take its course." What I think happens is that this, sort of, equality of computation leads to a lot of things that we know are true. For example, that's why it seems to us that the weather has a mind of its own. The weather almost seems to be acting with free will. We can't predict it. If a system is readily predictable by us, then it will not seem to be, kind of, free in its will. It will not seem to be free in the actions it takes. It will seem to be just something that is following some definite rules, like a 1950s sci-fi robot type thing.
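The universal-computation point can also be made concrete with a toy interpreter: one fixed read-write-move loop that runs whatever rule table it is handed, so the "kind" of machine never changes, only its program. The rule table below, a unary incrementer, is an arbitrary example chosen for illustration, not anything taken from Wolfram's work.

```python
# A toy Turing-machine interpreter: one fixed mechanism, any rule table.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))              # sparse tape; blank cells read as 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return [tape[i] for i in sorted(tape)]

# Example program: scan right over a block of 1s, append one more 1, halt.
rules = {
    ("start", 1): (1, "R", "start"),
    ("start", 0): (1, "R", "halt"),
}
print(run_turing_machine(rules, [1, 1, 1]))   # -> [1, 1, 1, 1]
```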
This whole area of what purpose is, and how we identify what purpose is, is, I think, in the end a very critical thing to discuss in terms of the fundamentals of AI. One of the things people ask is, "OK, we've got AI, we've got increasing automation of lots of kinds of things. Where will that end?" And I think one of the key places it will end is that purpose is not something that is available to be automated, so to speak. It doesn't make sense to think about automating purpose. And the reason it doesn't make sense, the same reason you can't look at these different things and say, "That's a purpose. That's not a purpose," is that purpose is the kind of thing that is, in some sense, tied to the bearer of that purpose, in this case humans, for example.
When I read your writings, or when I talk to you, you frequently say this thing that people keep thinking that there’s something special about them, they keep coming up with things a machine can’t do, they don’t want to give the machine intelligence because… you come across as being really down on people. I would almost reverse it to say, surely there isn’t some kind of an equivalence between a hurricane, an iPhone, and a human being. Or is there? And if there isn’t, what is special about people?
What’s special about people is all of our detailed history.
That’s just different than other things. The iPhone has a detailed history and the hurricane. That isn’t special, that’s just unique.
Well, what’s the difference between special and unique? It’s kind of ironic because, as you know, I’m very much a person who’s interested in people.
That’s what I’m curious about, like, why is that? Because you seem to take this perverse pride in saying, “Oh, people used to think computers can never do this and now they do it. And then they said it can never do this, and they do it.” I just kind of wonder, I try to reconcile that with the other part of you which is clearly a humanist. It’s almost bifurcated like half of your brain, intellectually has constructed this model of moral equivalence between hurricanes and people, and then the other half of you kind of doesn’t believe it.
You know, one of the things about doing science is, if you try to do it well, you kind of have to go where the science leads. I didn’t come into this believing that that will be the conclusion. In fact, I didn’t expect that to be the conclusion. I expected that I would be able to find some sort of magnificent bright line. In fact, I expected that these simple cellular automata I studied would be too simple for physics, too simple for brains, and so on. And it took me many years actually to come to terms with the idea that that wasn’t true. It was a big surprise to me. Insofar as I might feel good about my efforts in science, it’s that I actually have tried to follow what the science actually says, rather than what my personal prejudices might be. It is certainly true that personally, I find people interesting; I’m a big people enthusiast so to speak.
Now in fact, what I think is that the way that things work in terms of the nature of computational intelligence in AI is actually not anti-people in the end. In fact, in some sense it’s more pro-people than you might think. Because what I’m really saying is that, because computational intelligence is sort of generic, it’s not like we have the AI, which is a competitor. “There’s not just going to be one intelligence around, there are going to be two.” No, that’s not the way it is. There’s an infinite number of these intelligences around. And so, in a sense, the non-human intelligence we can think of as almost a generic mirror that we imprint in some way with the particulars of our intelligence. In other words, what I’m saying is, eventually, we will be able to make the universe, through computation and so on, do our bidding more and more. So then the question is, “What is that bidding?” And, in a sense, what we’re seeing here is more, if anything, is in some ways an amplification of the role of the human condition, rather than its diminution, so to speak. In other words, we can imprint human will on lots and lots of kinds of things. Is that human will somehow special? Well, it’s certainly special to us. Is it the case if we’re going into a competition, who’s more purposeful than who? That degenerates into a meaningless question of definition which, as I say, I think to us, we will certainly seem to be the most purposeful because we’re the only things where we can actually tell that whole story about purpose. In other words, I actually don’t think it’s an interesting question. It maybe was not intended this way, but my own personal trajectory in these things is I’ve tried to follow the science to where the science leads. I’ve also tried to some extent to follow the technology to where the technology leads. You know I’m a big enthusiast of personal analytics and storing all kinds of data about myself and about all kinds of things that I do, and so on. I certainly hope and expect one day to increasingly make the bot of myself, so to speak. My staff claims, maybe flattering me, that my attempt to make the SW email responder will be one of the last things that gets successfully turned into a purely automated system, but we will see.
But the question is, to what extent when one is looking at all this data about oneself and turning what one might think of as a purely human existence, so to speak, into something that’s full of gigabytes of data and so on — is that a dehumanizing act? I don’t think so. One of the things one learns from that is that, in a sense, it makes the human more important rather than less. Because, there are all these little quirks of, “What was the precise way that I was typing keystrokes on this day as opposed to that day?” Well, it might be “who cares?” but when one actually has that data, there’s a way in which one can understand more about those detailed human quirks and recognize more about those in a way that one couldn’t, without that data, if one was just, sort of, acting like an ordinary human, so to speak.
So, presumably you want your email responder to live on after you. People will still be able to email you in a hundred years, or a thousand years and get a real Stephen Wolfram reply?
Who knows?
I know that you have this absolute lack of patience anytime somebody seems to talk about something that tries to look at these issues in any way other than just really scientifically.
I always think of myself as a very patient person, but I don’t know, that may look different from the outside.
But, I will say, you do believe consciousness is a physical phenomenon, like, it exists. Correct?
What on Earth does that mean?
So, alright. Fair enough. See, that’s what I mean exactly.
Let me ask you a question along the same line. Does computation exist?
Absolutely.
What on Earth does the word ‘exist’ mean to you?
Is that what it is? It’s not the ‘consciousness’ you were objecting to, it’s the word ‘exist’.
I guess I am; I’m wondering what you mean by the word ‘exist’.
OK, I will instead just rephrase the question. You could put a heat sensor on a computer, and you could program it so that if you hold a match to the heat sensor, the computer plays an audio file that screams. And yet we know with people, if you burn your finger, it's something different. You experience it, you have a first-person experience of a burned finger; the computer can only sense it, but you feel it.
Wait. Why do you think the computer isn’t having a first-person experience?
It’s not a person. I am kidding. If you believe the computer experiences pain, I would love to have that conversation.
Let’s talk about the following situation. Let’s talk about a neural net. I mean, they’re not that sophisticated yet and they’re not that kind of recurrent, they tend to just feed the data through the network. But, you know, we’ve got a neural net and it’s being trained by experiences that it’s having. Then the neural net has some terrible experience. It’s terribly traumatic for the neural net. That trauma will have a consequence. If we were to look, sort of forensically, at how that had affected the weights in the neural net, we would find that there were all these weights that were affected in this or that way by the traumatic experience the neural net has had. In what sense do we then think–we then have to tease apart, what’s the difference between the effect of that experience the neural net had, and the experience the brain has.
That’s even more insidious than putting somehow people, and hurricanes, and iPhones in kind of the same level. That’s even worse because, in a way what you’re saying is, I had this car, and I’m lost in the woods and the car’s overheating, and the engine is out of oil, and the tires are torn up, and I’m tearing that car up. But, I’m being pursued or something, and I have to get out of the woods. I essentially just destroy this car making my way out. If your assumption is, “Well, that car experienced something” you know, you were afraid of getting eaten by lions but you killed the car in doing it. And to somehow put those two things on the same level, you can’t really make that choice.
Well, the morality of AI is a complicated matter. For example, if you consider…
I’m just asking about the basis of human rights. The basis of human rights are that humans feel pain. The reason we have laws against harming animals is because animals feel pain. What you’re suggesting is infinite loops. If you code an infinite loop, by golly, you should get fined, or go to jail.
Yeah. Well, the question, to put it a different way: if I succeed in making a bot, an autoresponder that's like me and responds to e-mail independent of me, and, for example, let's say I'm no longer around, I'm dead, and all that's left is the autoresponder, what are the obligations? How does one think about the autoresponder relative to thinking about the person that the autoresponder represents? What do you think? I haven't actually thought this through properly, but I think if somebody says, "Let's delete the autoresponder," it's interesting: what are the moral aspects of doing that?
If your argument is it’s the moral equivalent of killing a living person, I would love to hear that logic. You could say that’s a tragedy, that’s like burning the Mona Lisa, we would never want to do it. But to say that it’s the equivalent of killing Stephen Wolfram a second time, I mean, I would love to hear that argument.
I don’t know if that’s right. I have not thought that through. But, my reaction to you saying the computer can’t feel pain is, I don’t know why on Earth you’re saying that. So, let’s unpack that statement a little bit. I think it’s interesting to unpack. Let’s talk a little bit about how brains might work and what the world looks like at a time when we really know, you know, we’ve solved the problem of neurophysiology, we’ve solved, sort of, the problem of neuroscience, and we can readily make a simulation of a brain. We’ve got a simulated brain and it’s a simulated Byron, and it’s a simulated Stephen and those simulated brains can have a conversation just like we’re having conversation right now. But unlike our brains, it’s easy to go look at every neuron firing, basically, and see what’s happening. And then we start asking ourselves… The first question is, do you think then that the neuron level simulated brain is capable of feeling pain, and having feelings, and so on? One would assume so.
We would part company on that but I agree that many people would say that.
Well, I can’t see how you would not say that unless you believe that there is something about the brain that is not being simulated.
Well, let’s talk about that. I assume you’re familiar with the OpenWorm project.
Yeah.
The C. elegans is this nematode worm. Eighty percent of all animals on the planet are nematode worms. And they had their genome sequenced, and their brain has 302 neurons.
There is a difference between male and female worms, actually. I think the female worm has fewer neurons.
Fair enough. I don't know, that may be the case. Two of the 302, I understand, aren't connected to the other ones. Just to set the problem up: for 20 years people have said, "Let's just model these 302 neurons in a computer, and let's just build a digital nematode worm." And of course, not only have they not done it, but there isn't even a consensus in the group that it is possible: what is occurring in those neurons may be happening at the Planck level. Your basic assumption is that physics is complete and that the model you just took of my brain is the sum total of everything going on scientifically, and that is far from proven. In fact, there is more evidence against that proposition.
Let’s talk about this basic problem. Science – a lot of what goes on in science is an attempt to make models of things. Now, models are by their nature incomplete and controversial. That is, “What is a model?” A model is a way of representing and potentially predicting how a system will behave that captures certain essential features that you’re interested in, and elides other ones away. Because, if you don’t elide some features away, then you just have a copy of the system.
That’s what we’re trying to do. They’re trying to build an instantiation; it’s not a simulation.
No, but there is one case in which this doesn’t happen. If I’m right that it’s possible to make a complete, fundamental model of physics, then that is the one example in which there will be a complete model of something in the world. There’s no approximation, every bit works in the model exactly the way it does in real life. But, above that level, when you are saying, “Oh, I’m going to capture what’s going on in the brain with a model,” what you mean by that is, “I’m going to make some model which has a billion degrees of freedom” or something. And that model is going to capture everything essential about what’s happening in the brain, but it’s clearly not going to represent the motion of every electron in the brain. It’s merely going to capture the essential features of what’s happening in the brain, and that’s what 100 percent of models, other than this one case of modeling fundamental physics, that’s what they always do. They always say, “I’m capturing the part that I care about and I’m going to ignore the details that are somehow not important for me to care about.” When you make a model of anything, whether it’s a brain, whether it’s a snowflake, whether it’s a plant leaf or something, any of these kinds of things, it’s always controversial. Somebody will say, “This is a great model because it captures the overall shape of a snowflake” and somebody else will say, “No, no, no it’s a terrible model because look, it doesn’t capture this particular feature of the 3-D structure of ridges in the snowflake.” We’re going to have the same argument about brains. You can always say there’s some feature of brains, for example, you might have a simulation of a brain that does a really good job of representing how neuron firings work but, it doesn’t correctly simulate if you bash the brain on the side of its head, so to speak, and give it concussion, it doesn’t correctly represent a concussion because it isn’t something which is physically laid out in three-dimensional space the way that the natural brain is.
But wasn’t that your assumption of the problem you were setting up, that you have perfectly modeled Byron’s brain?
That’s a good point. The question is, for what purpose is the model adequate? Let’s say the model is adequate if listening to it talking over the phone it is indistinguishable in behavior from the actual Byron. But then, if you see it in person and you were to connect eyes to it, maybe the eye saccades will be different or it wouldn’t have those, whatever else. Models, by their nature, aren’t complete, but the idea of science, the idea of theoretical science is that you can make models which are useful. If you can’t make models, if the only way to figure out what the system does is just to have a copy of the system and watch it do its thing, then you can’t do theoretical science in the way that people have traditionally done theoretical science.
Let’s assume that we can make a model of a brain that is good enough that, for many of the purposes we most care about, it can emulate the real brain. So now the question is, “I’ve got this model brain; I can look at every feature of how it behaves when I ask it a question, or when it feels pain, or whatever else.” But now the question is, when I look at every detail, what can I say from that? What you would like to be able to say is to tell some overarching story. For example, “The brain is feeling pain.” But that is a very complicated statement. What you would otherwise say is, there are a billion neurons and they have this configuration of firings and synaptic weights, and God knows what else. Those billion neurons don’t allow you to come up with a simple-to-describe story, like, “The brain is feeling pain.” It’s just, here’s a gigabyte of data or something; it represents the state of the brain. That doesn’t give you the human-level story of “the brain is feeling pain.” Now, the question is, will there be a human-level story to be told about what’s happening inside brains? I think that’s a very open question. So, for example, take a field like linguistics. You might ask the question, how does a brain really understand language? Well, it might be the case that you can, sort of, see the language coming in, you can see all these little neuron firings going on and then, at the end of it, some particular consequence occurs. But then the question is, in the middle of that, can you tell the story of what happened?
Let me give you an analogy which I happen to have been looking at recently, which might at first seem kind of far-fetched but I think is actually very related. The analogy is mathematical theorems. For example, I’ve done lots of things where I’ve figured out mathematical truths using automated theorem proving. One, in particular, I did 20 years ago: finding the simplest axiom system for logic, for Boolean algebra. This particular proof was generated automatically; it’s 130 steps or so. It involves many intermediate stages, many lemmas. I’ve looked at this proof, off and on, for 20 years, and the question is, can I tell what on Earth is going on? Can I tell any story about what’s happening? I can readily verify that, yes, the proof is correct, every step follows from every other step. The question is, can I tell somebody a humanly interesting story about the innards of this proof? The answer is, so far, I’ve completely failed. Now, what would it take for there to be such a story? Kind of interesting. If some of the lemmas that showed up in the intermediate stages of that proof were, in a sense, culturally famous, I would be in a much better position. That is, when you look at a proof that people say, “Oh, yeah, this is a good proof of some mathematical theorem,” a lot of it is, “Oh, this is Gauss’s such-and-such theorem. This is Euler’s such-and-such theorem,” being used at different stages in the proof. In other words, those intermediate stages are things about which there is a whole, kind of, culturally interwoven story that can be told, as opposed to just, “This is a lemma that was generated by an automated theorem proving system. We can tell that it’s true but we have no idea what it’s really about, what it’s really saying, what its significance is, what its purpose is,” any of these kinds of words.
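As a minimal sketch of the flavor of machine-certified truth being described here (assuming the z3-solver Python package; this is of course not the 130-step axiom-system proof, just an illustration), a prover can confirm that a Boolean identity holds while giving no human-readable story about why:

```python
# Machine-verified logical truth with no accompanying "story".
# Requires: pip install z3-solver
from z3 import Bools, And, Or, Not, prove

a, b, c = Bools("a b c")

# Two classical Boolean-algebra identities, stated as claims.
de_morgan = Not(And(a, b)) == Or(Not(a), Not(b))
distributivity = And(a, Or(b, c)) == Or(And(a, b), And(a, c))

prove(de_morgan)        # prints "proved"
prove(distributivity)   # prints "proved"
```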
That’s also, by the way, the same thing that seems to be happening in the modern neural nets that we’re looking at. Let’s say we have an image identifier. The image identifier, inside itself, is making all kinds of distinctions, saying, “This image is of type A. This is not of type B.” Well, what are A and B? They might be human-describable things: “This image is very light. This image is very dark. This image has lots of vertical stripes. This image has lots of horizontal stripes.” They might be descriptors of images for which we have developed words in our human languages. In fact, they’re probably not. In fact, they are, sort of, emergent concepts which are useful, kind of, symbolic concepts at an intermediate stage of the processing in this neural net, but they’re not things for which we have, in our, sort of, cultural development, generated, produced, chosen words to describe. We haven’t provided the cultural anchor for that concept. I think the same thing is true — so, the question is, when we look at brains and how they work and so on, and we look at the inner behavior, and we’ve got a very good simulation, and we see all this complicated stuff going on, and we generate all this data, and we can see all these bits on the screen and so on. And then we say, “OK, well, what’s really going on?” Well, in a sense, then we’re doing standard natural science. When we’re confronted with the world we see all sorts of complicated things going on and we say, “Well, what’s really going on?” And then we say, “Oh, well, actually there’s this general law,” like the laws of thermodynamics, or some laws of motion, or something like this. There’s a general law that we can talk about, that describes some aspect of what’s happening in the world.
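A rough sketch of what looking at those intermediate, unnamed “concepts” can mean in practice, assuming PyTorch; the tiny network and the random input are placeholders, and the point is only that the hidden feature maps are numerically well defined yet carry no human-assigned names:

```python
import torch
import torch.nn as nn

# A toy image classifier: two convolutions, then a "type A" vs "type B" head.
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

captured = {}
def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()   # stash the intermediate feature maps
    return hook

# Watch the output of the second convolution.
net[2].register_forward_hook(save_activation("conv2"))

image = torch.randn(1, 3, 32, 32)          # stand-in for a real image
logits = net(image)

# 16 feature maps: each is an emergent intermediate "concept" with no word attached.
print(captured["conv2"].shape)              # torch.Size([1, 16, 32, 32])
```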
So, a big question then is, when we look at brains, how much of what happens in brains can we expect to be capable of telling stories about? Now, obviously, when it comes to brains, there’s a long history in psychology, psychoanalysis etc. that people have tried to make up, essentially, stories about what’s happening in brains. But, we’re kind of going to know at some point. At the bit level we’re going to know what happens in brains and then the question is, how much of a story can be told? My guess is that that story is actually going to be somewhat limited. I mean, there’s this phenomenon I call ‘computational irreducibility’ that has to do with this question of whether you can effectively make, sort of, an overarching statement about what will happen in the system, or whether you do just have to follow every bit of its behavior to know what it’s going to do. One of the bad things that can happen is that, we have our brain, we have our simulated brain and it does what it does and we can verify that, based on every neuron firing, it’s going to do what we observe it to do but then, when we say, “Well, why did it do that?” We may be stuck having no very good description of it.
This phenomenon is deeply tied into all kinds of other fundamental science issues. It’s very much tied into Gödel’s theorem, for example. In Gödel’s theorem, the analogy is this: when you say, “OK, I’m going to describe arithmetic and I’m going to say arithmetic is that abstract system that satisfies the following axioms.” And then you start trying to work out the consequences of those axioms and you realize that, in addition to representing ordinary integers, those axioms allow all kinds of incredibly exotic integers, which, if you ask about certain kinds of questions, will give different answers from ordinary integers. And you might say, “OK, let’s try to add constraints. Let’s try to add a finite number of axioms that will lock down what’s going on.” Well, Gödel’s theorem shows that you can’t do that. It’s the same sort of mathematical structure, scientific structure as this whole issue of, you can’t expect to be able to find simple descriptions of what goes on in lots of these kinds of systems. I think one of the things that this leads to is the fact that, both in our own brains and in other intelligences, other computational intelligences, that there will be lots of kinds of inner behavior where we may not ever have an easy way to describe in large-scale symbolic terms, the kinds of things going on. And it’s a little bit shocking to us that we are now constructing systems that, we may never be able to say, in a sort of human understandable way, what’s going on inside these systems. You might say, “OK, the system has produced this output. Explain that output to me.” Just like, “The following mathematical theorem is true. Explain why it’s true.” Well, you know, if the “why it’s true” comes from an automated theorem prover, there may not be an explanation that humans can ever wrap their brains around about that. The main issue is, you might say, “Well, let’s just invent new words and a language to describe these new lumps of computation that we see happening in these different systems.” The problem, and that’s what one saw even from Gödel’s theorem, the problem is that the number of new concepts that one has to invent is not finite. That is, as you keep on going, you keep on looking at different kinds of things that brains, or other computational systems can do, that it’s an infinite diversity of possible things and there won’t be any time where you can say, “OK, there’s this fixed inventory of patterns that you have to know about and that you can maybe describe with words and that’s all you need, to be able to say what’s going to happen in these systems.”
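For a concrete handle on computational irreducibility, here is a small sketch using the elementary cellular automaton Rule 30, a standard example from Wolfram’s own work: the update rule is a one-liner, yet there is no known shortcut for predicting a cell far down the page other than running every step.

```python
# Rule 30: new cell = left XOR (center OR right), on a periodic row of cells.
def rule30_step(row):
    n = len(row)
    return [
        row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n])
        for i in range(n)
    ]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1            # start from a single black cell in the middle

for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = rule30_step(row)     # to know row N, you effectively have to compute rows 1..N-1
```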
So, as AIs get better and we personify them more and we give them more ability, and they do actually seem to experience the world, whether they do or they don’t, but they seem to, at what point, in your mind, can we no longer tell them what to do, can we no longer have them go plunge our toilet when it stops up? At what point are they afforded rights?
Well, what do you mean by ‘can’?
Well, ‘can’ as in, I mean, surely we can coerce but, I mean, ethically ‘can’.
I don’t know the answer to that. Ethics is defined by the way people feel about things. In other words, there is no absolute ethics.
Well, OK. Fair enough. I’ll rephrase the question. I assume your ethics preclude you from coercing other entities into doing your bidding. At what point do you decide to stop programming computers to do your bidding?
And at what point do I let them do what they want, so to speak?
Right.
When do I feel that there is a moral need to let my computer do something just because? Well, let me give you an example. I often have computers do complicated searches for things that take months of CPU time. How do I feel about cutting the thing off moments before it might have finished? Well, I usually don’t feel like I want to do that. Now, do I not want to do that purely because I want to get the result? Or do I feel some kind of feeling, “Oh my gosh, the computer has done so much work, I don’t want to just cut it off”? I’m not sure, actually.
Do you still say thank you to the automatic ticket thing when you leave the parking garage?
Yes, to my children’s great amusement. I have made a principle of doing that for a long time.
Stephen, I don’t know how to say this, but I think maybe you’ve been surrounded by the computers so much that you kind of have Stockholm Syndrome and identify with them.
More to the point, you might say I’ve spent so much time thinking about computation that maybe I’ve become computation myself as a result. Well, in a certain sense, yes, absolutely, that’s happened to me, in the following sense. We think about things, and how do we form our thoughts? Well, some philosophers think that we use language to form our thoughts. Some think thoughts are somewhat independent of language. One thing I can say for sure: I’ve spent some large part of my life doing computer language design, building the Wolfram language system and so on, and absolutely, I think in patterns that are determined by that language. That is, if I try to solve a problem, I am, both consciously and subconsciously, trying to structure that problem in such a way that I can express it in that language, so that I can use the structure of that language as a way to help me understand the problem.
And so absolutely it’s the case that, as a result of basically learning to speak computer, as a result of the fact that I formulate my thoughts in no small way using Wolfram language, using this computational language, I probably think about things in a different way than I would if I had not been exposed to computers. Undoubtedly, that kind of structuring of my thinking is something that affects me, probably more than I know, so to speak. But in terms of whether I think about people, for example, the way I think about computational systems: actually, most of my thinking about people is probably based on gut instinct and heuristics. I think that the main thing that I might have learned from my study of computational things is that there aren’t simple principles when it comes to looking at the overall behavior of something like people. If you dig down and ask, “How do the neurons work?” we may be able to answer that. But the very fact that this phenomenon of computational irreducibility happens is almost a denial of the possibility that there is going to be a simple overall theory of, for example, people’s actions, or certain kinds of things that happen in the world, so to speak. People used to think that when we applied science to things, it would make them very cut and dried. I think computational irreducibility shows that that’s just not true: there can be an underlying science, one can understand how the components work and so on, but it may not be that the overall behavior is cut and dried. It’s not like the kind of 1950s science fiction robots where the thing would start having smoke come out of its ears if there were some logical inconsistency that it detected in the world. This kind of simple view of what can happen in computational systems is just not right. Probably, if there’s one thing that’s come out of my science, in terms of my view of people at that level, it’s that, no, I doubt that I’m really going to be able to say, “OK, if this then that,” you know, kind of apply very simple rules to the way that people work.
But, hold on a second. I thought the whole way you got to something that looked like free will, I thought your thinking was, “No, there isn’t, but the thing is, the number of calculations you would have to do to predict the action is so many you can’t do it, so it’s effectively free will, but it isn’t really.” Do you still think that?
That’s correct. Absolutely, that’s what I think.
But the same would apply to people.
Absolutely.
With a sufficiently large computer you would be able to…
Yes, but the whole point is, as a practical matter in leading one’s life, one isn’t doing that, that’s the whole point.
But to apply that back to your example of Byron’s brain feeling pain, couldn’t that be the same sort of thing? It’s like, “Well, yeah, maybe that’s just calculation, but the amount of calculation that would have to happen for a computer to feel pain is just not calculable.”
No. There’s a question of how many neurons, how much accuracy, what’s the cycle time, etc. But we’re probably coming fairly close, and we will, in coming years, get decently close to being able to emulate, with digital electronics, the important parts of what happens in brains. You might always argue, “Oh, it’s the microtubules on every neuron that are really carrying the information.” Maybe that’s true; I doubt it. And that’s many orders of magnitude further than what we can readily get to with digital electronics over the next few years.
But, either you can model a brain and know what I’m going to say next, and know that I felt pain, or you can’t, and you can preserve some semblance of free will.
No, no, no. Both things are true. You can absolutely have free will even if I can model your brain at the level of knowing what you will say next. If I do the same level of computation that your brain is doing, then I can work out what you will say next. But the fact is, to do that, I effectively have to have a brain that’s as powerful as your brain, and I have to be just following along with your brain. Now, there is a detail here, which is this question of levels of modeling and so on, and how much you have to capture, and whether you have to go all the way down to the atoms, or whether it is sufficient to just say, “Does this neuron fire or not?” And yes, you’re right, that’s sort of a footnote to this whole thing. When I say, “How much free will?” Well, free enough will that it takes a billion logical machine operations to work out whether you will say true or false. If it takes a billion operations to tell whether you are going to say true or false, should one say that you are behaving as if you have free will, even though, were you to do those billion operations, you could deterministically tell that you’re going to say true, or you’re going to say false? As a practical matter, in interacting with you, I would say you’re behaving as if you have free will, because I can’t immediately do those billion operations to figure out the answer. In a future world where we are capable of doing more computation more efficiently, we may eventually have ways to do computation that are much more efficient than brains. And, at that point, we have our simulated brains, and we have our top-of-the-line computers made at an atomic scale or whatever else. And, yes, it may very well be the case that, as a practical matter, the atomic-scale computers out-compute simulated brains by factors of trillions.
I’ll only ask one more question along this line, because I must be somewhat obtuse. I’m not a very good chess player. If I download a program on my iPad, I play at level four out of ten, or something. So, say I flip it up to level five. I don’t know what move the computer is going to make next because it’s going to beat me. I don’t have a clue what it’s going to move next, that’s the whole point. And yet, I never think, “Oh, because I don’t know, it, therefore, must have free will.”
That’s true. You probably don’t think that. Depends on what the computer is doing. There’s enough of a background of chess playing that that’s not an immediate question for you. If the computer was having a conversation with you, if suddenly, in 2017, the computer was able to have a sort of Turing Test complete conversation with you, I think you would be less certain. I think that, again, there is a progression of — an awful lot of what people believe and how people feel about different things, does the computer have consciousness, does it blah blah? An awful lot of that, I think, ends up coming about because of the historical thread of development that leads to a particular thing.
In other words, imagine — it’s an interesting exercise — imagine that you took a laptop of today back to Pythagoras or something. What on earth would he think of it? What would he think it was? How would he describe it? I wondered about this at some point. My conclusion is he’d start talking about, “What is this thing? It’s like disembodied human souls.” Then you explain, “Well, no, it’s not really a disembodied human soul.” He says, “Well, where did all these things that are on the computer come from?” “Well, they were actually put there by programmers, but then the computer does more stuff.” And it gets very complicated. I think it’s an interesting thought experiment to imagine a different time in history. Pythagoras is a particularly interesting case because of his thinking about souls and his thinking about mathematics. But the point is to imagine what somebody at a different time in history would have imagined the technology of today was actually like. And that helps us to understand the extent to which we are prisoners of our particular time in history. So take the thought experiment of, what if we have a computer that can, in some sense, predict what our brains do a trillion times faster than our brains actually do it? How will that affect our view of the world? My guess is that what will actually happen, if that happens, and it presumably will in some sense, is that we will have by that time long outsourced much of our thinking to machines that just do it faster than we do. Just like we could decide that we’re going to walk everywhere we want to go, but actually we outsource much of our transportation to cars and airplanes and things like that, that do it much faster than we do it. You could say, “Well, you’re outsourcing your humanity by driving in a car.” Well, we don’t think that anymore, because of the particular thread of history by which we ended up with cars. Similarly, you might say, “Oh my gosh, you’re outsourcing your humanity by having a computer think for you.” In fact, that argument comes up when people use the tools we’ve built to do their homework or whatever else. But, as a practical matter, people will increasingly outsource their thinking processes to machines.
And then the question is, and that sort of relates to what I think you are going to ask about, should humans be afraid of AIs, and so on. That sort of relates to, well, where does that leave us humans when all these things, including the things that you still seem to believe are unique and special to humans but I’m sure are not, have been long overtaken by machines? Where does that leave us? I think the answer is that you can have a computer sitting on your desk, doing the fanciest computation you can imagine. And it’s working out the evolution of Turing machine number blah blah blah, and it’s doing it for a year. Why is it doing that? Well, it doesn’t really have a story about why it’s doing it. It can’t explain its purpose because, if it could explain it, it would be explaining it in terms of some kind of history, in terms of some kind of past culture of the computer, so to speak. The way I see it, computers on their own simply don’t have this notion of purpose. In a sense, one can imagine that the weather has a purpose that it has for itself. But this notion of purpose that is connected to what we humans do, that is a specific human kind of thing, that’s something that nobody gets to automate. It doesn’t mean anything to automate that. It doesn’t mean anything to say, “Let’s just invent a new purpose.” We could pick a random purpose. We can have something where we say, “OK, there are a bunch of machines and they all have random purposes.” If you look at different humans, in some sense there’s a certain degree of randomness and there are different purposes. Not all humans have the same purposes. Not all humans believe the same things, have the same goals, etc. But if you say, “Is there something intrinsic about the purpose for the machines?” I don’t think that question really means anything. It ultimately reflects back on the thing I keep on saying about the thread of history that leads humans to have and think about purposes in the ways that they do.
But if that AI is alive, you began by taking my question about what is life and if you get to a point where you say, “It’s alive” then we do know that, living things, their first purpose is to survive. So, presumably, the AI would want to survive, and then their second purpose is to reproduce, their third purpose is to grow. They all naturally just flow out of the quintessence of what it means to be alive. “Well, what does it mean for me to be alive?” It means for me to have a power source. “OK, I need a power source. Ok, I need mobility.” And so it just creates all of those just from the simple fact of being alive.
I don’t think so. I think that you’re projecting that onto what you define as being alive. I mean, it is correct, there is, in a sense, one 0th level purpose, which is, you have to exist if you want to have any purpose at all. If you don’t exist then everything is off the table. The question of whether a machine, a program or whatever else has a desire, in some sense, to exist. That’s a complicated question. I mean it’s like saying, “Are there going to be suicidal programs?” Of course. There are right now. Many programs, their purpose is to finish, terminate and disappear. And that’s much rarer, perhaps, fortunately, for humans.
So, what is the net of all of this to you then? You hear certain luminaries in the world say we should be afraid of these systems, you hear dystopian views about the world of the future. You’ve talked about a lot of things that are possible and how you think everything operates but what do you think the future is going to be like, in 10 years, 20, 50, 100?
What we will see is an increasing mirror on the human condition, so to speak. That is, what we are building are things that essentially amplify any aspect of the human condition. Then it, sort of, reflects back on us: What do we want? What are the goals that we want to have achieved? It is a complicated thing, because certainly AIs will in some sense run many aspects of the world. For many kinds of systems, there’s no point in having people run them. They’re going to be automated in some way or another. Saying it’s an AI is really just a fancy way of saying it’s going to be automated. Another question is, what are the overall principles that those automated systems should follow? For example, one principle that we believe is important right now is the ‘be nice to humans’ principle. That seems like a good one given that we’re in charge right now; better to set things up so that it’s like, “Be nice to humans.” Even defining what it means to be nice to humans is really complicated. I’ve been much involved in trying to use Wolfram language as a way of describing lots of computational things and an increasing number of things about the world. I also want it to be able to describe things like legal contracts and, sort of, desires that people have. Part of the purpose of that is to provide a language that is understandable both to humans and to machines, that can say what it is we want to have happen, globally, with AIs. What principles, what general ethical principles and philosophical principles, should AIs operate under? We had Asimov’s Laws of Robotics, which are a very simple version of that. I think what we’re going to realize is, we need to define a Constitution for the AIs. And there won’t be just one, because there isn’t just one set of people. Different people want different kinds of things. And we get thrown into all kinds of political philosophy issues about, should you have an infinite number of countries, effectively, in the world, each with their own AI constitution? How should that work?
One of the fun things I was thinking about recently is, in current democracies, one just has people vote on things. It’s like a multiple-choice answer. One could imagine a situation in which, and I take this mostly as a thought experiment because there are all kinds of practical issues with it, in a world where we’re not just natural language literate but also computer language literate, and where we have languages, like Wolfram language, which can actually represent real things in the world, one could imagine not just voting, I want A, B, or C, but effectively submitting a program that represents what one wants to see happen in the world. And then the election consists of taking X million programs and saying, “OK, given these X million programs, let’s apply our AI Constitution to figure out how we want the best things to happen in the world.” Of course, you’re thrown into the precise issues of the moral philosophers and so on, of what you then want to have happen, and whether you want the average happiness of the world to be higher, or whether you want the minimum happiness to be at least something, or whatever else. There will be increasing pressure on what the law-like things, which are really going to be effectively the programs for the AIs, should look like. What aspects of the human condition and human preferences should they reflect? How will that work across however many billions of people there are in the world? How does that work when, for example, a lot of the thinking in the world is not done in brains but is done in some more digital form? How does it work when there is no longer… The notion of a single person, right now, is a very clear notion. That won’t be such a clear notion when more of the thinking is done in digital form. There’s a lot to say about this.
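A toy sketch, with made-up names and numbers, of the moral-philosophy fork just mentioned (maximize average happiness versus raise the minimum), once each voter’s “program” has been reduced to a utility score per candidate policy:

```python
# Hypothetical utilities for four voters under three candidate policies.
policies = {
    "policy_A": [9, 8, 1, 2],
    "policy_B": [6, 6, 5, 5],
    "policy_C": [9, 7, 7, 1],
}

# Utilitarian rule: pick the policy with the highest average utility.
utilitarian = max(policies, key=lambda p: sum(policies[p]) / len(policies[p]))

# Maximin (Rawlsian) rule: pick the policy whose worst-off voter is best off.
maximin = max(policies, key=lambda p: min(policies[p]))

print("highest average happiness:", utilitarian)   # policy_C (average 6.0)
print("best minimum happiness:  ", maximin)        # policy_B (minimum 5)
```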
That is probably a great place to leave it. I want to thank you, Stephen. To say that was mind-expanding would be the most humble way to describe it. Thank you for taking the time to chat with us today.
Sure. Happy to.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 43: A Conversation with Markus Noga

[voices_in_ai_byline]
In this episode, Byron and Markus discuss machine learning and automation.
[podcast_player name=”Episode 43: A Conversation with Markus Noga” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-05-22-(00-58-23)-markus-noga.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices In AI, brought to you by GigaOm. I’m Byron Reese. Today, my guest is Markus Noga. He’s the VP of Machine Learning over at SAP. He holds a Ph.D. in computer science from the Karlsruhe Institute of Technology, and prior to that he spent seven years over at Booz Allen Hamilton, helping businesses adopt IT and transform their businesses through it. Welcome to the show, Markus.
Markus Noga: Thank you Byron and it’s a pleasure to be here today.
Let’s start off with a question I have yet to have two people answer the same way. What is artificial intelligence?
That’s a great one, and it’s sure something that few people can agree on. I think the textbook definition mostly defines it by analogy with human intelligence, and human intelligence is also notoriously tricky and hard to define. I define human intelligence as the ability to deal with the unknown and bring structure to the unstructured, and answer novel questions in a surprisingly resourceful and mindful way. Artificial intelligence in itself is, rather more playfully, the thing that is always three to five years out of reach. We love to focus on what can be done today—what we call machine learning and deep learning—which can deliver tremendous value for businesses and for individuals already today.
But, in what sense is it artificial? Is it artificial in the way artificial turf is: it isn’t really turf, it just looks like it? Or is it just artificial in the sense that we made it? Or, put another way, is artificial intelligence actually intelligent? Or does it just behave intelligently?
You’re going very deep here into things like Searle’s Chinese room paradox, about the guy in the room with a handbook of definitions for how to transcribe Chinese symbols to have an intelligent conversation. The question being: who or what is having the intelligent conversation? Is it the book? Certainly not. Is it the guy mindlessly transcribing these symbols? Certainly not. Is it maybe the system of the guy in the room, the book, and the room itself that generates these intelligent-seeming responses? I guess I’m coming down on the output-oriented side here. I try not to think too hard about the inner states or qualia, or the question whether the neural networks we’re building have a sentient experience or experience these qualia. For me, what counts is whether we can solve real-world problems in a way that’s compatible with intelligence. Whether that is real intelligence or just intelligent behavior, I would leave to the philosophers, Byron.
We’ll get to that part where we can talk about the effects of automation and what we can expect and all of that. But don’t you think that, at some level, understanding that question informs you to some degree as to what’s possible? What kinds of problems should we point this technology at? Or do you think it’s entirely academic, that it has no real-world implications?
I think it’s extremely profound, and it could unlock a whole new curve of value creation. It’s also something that, in dealing with real-world problems today, we may not have to answer—and this is maybe also something specific to our approach. You’ve seen all these studies that say that X percent of activities can be automated with today’s machine learning, and Y percent could be automated if there were better natural language and speech processing capabilities, and so on, and so forth. There’s such tremendous value to be had by going after all these low-hanging fruits and, sort of, doing applied engineering by bringing ML and deep learning into an application context. Then we can bide our time until there is a full answer to strong AI and some of the deeper philosophical questions. But what is available now is already delivering tremendous value, and will continue to do so over the next three to five years. That’s my business hat on—what I focus on together with the teams that I’m working with. The other question is one that I find tremendously interesting for my weekend conversations.
Let me ask you a different one. You started off by saying artificial intelligence, and you dealt with that in terms of human intelligence. When you’re thinking of a problem that you’re going to try to use machine intelligence to solve, are you inspired in any way by how the brain works or is that just a completely different way of doing it? Or do we learn how intelligence, with the capital I, works by studying the brain?
I think that’s a multi-level answer, because clearly the architectures that do really well in machine learning today are to a large degree neurally-inspired. Think of multi-layered deep networks—with a local connection structure, with these things we call convolutions that people use so successfully in computer vision—they closely resemble some of the structures that you see in the visual cortex, with vertical columns for example. The same goes for the self-referential recurrent networks that people use a lot for video processing and text processing these days; they are very, very deeply neurally inspired. On the other hand, we’re also seeing that a lot of the approaches that make ML very successful today are about as far from neurally-inspired learning as you can get.
Example one: we struggled as a discipline with neurally-inspired transfer functions—that were all nice, and biological, and smooth—and we couldn’t really train deep networks with them because they would saturate. One of the key enablers for modern deep learning was to step away from the biological analogy of smooth signals and go to something like the rectified linear unit, the ReLU function, as an activation, and that has been a key part in being able to train very deep networks. Another example: when a human learns or an animal learns, we don’t tend to give them 15 million cleanly labeled training examples and expect them to go over these training examples 10 times in a row to arrive at something. We’re much closer to one-shot learning, being able to recognize the person with a top hat on their head on the basis of just one description or one image that shows us something similar.
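A small numerical sketch of the saturation point being made here, illustrative only: for large inputs the smooth, biological-looking sigmoid has a gradient near zero, which stalls learning in deep networks, while the ReLU keeps a gradient of one for any positive input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # collapses toward 0 as |x| grows (saturation)

def relu_grad(x):
    return 1.0 if x > 0 else 0.0   # stays at 1 for any positive input

for x in [0.5, 2.0, 5.0, 10.0]:
    print(f"x={x:5.1f}  sigmoid gradient={sigmoid_grad(x):.6f}  ReLU gradient={relu_grad(x):.1f}")
```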
So clearly, the approaches that are most successful today share some deep neural inspiration as a basis, but they are also a departure into computationally tractable, and very, very different, kinds of implementations than the networks that we see in our brains. I think that both of these themes are important in advancing the state of the art in ML, and there’s a lot going on. In areas like one-shot learning, for example, right now I’m trying to mimic more of the way the human brain—with an active working memory and these rich associations—is able to process new information, and there’s almost no resemblance to what convolutional networks and recurrent networks do today.
Let’s go with that example. If you take a small statue of a falcon, and you put it in a hundred photos—and sometimes it’s upside down, and sometimes it’s laying on its side, sometimes it’s half in water, sometimes it’s obscured, sometimes it’s in shadows—a person just goes “boom boom boom boom boom” and picks them out, right and left with no effort, you know, one-shot learning. What do you think a human is doing? It is an instance of some kind of transfer learning, but what do you think is really going on in the human brain, and how do you map that to computers? How do you deal with that?
This is an invitation to speculate on the topic of falcons, so let me try. I think that, clearly, our brains have built a representation of the real world around us, because we’re able to create that representation even though the visual and other sensory stimuli that reach us are not in fact as continuous as they seem. Standing in the room here having the conversation with you, my mind creates the illusion of a continuous space around me, but in fact I’m getting discrete feedback from my eyes as they saccade and jump around the room. The illusion of a continuous presence, the continuous sharp resolution of the room, is just that; it’s an illusion, because our mind has built very, very effective mental models of the world around us that highly compress the incoming information and make it tractable on an abstract level.
Some of the things that are going on in research right now [are] trying to exploit these notions, and trying to use a lot of unsupervised training with some very simple assumptions behind them: basically, the mind doesn’t like to be surprised and would, therefore, like to predict what’s next, [by] leveraging very, very powerful unsupervised training approaches where you can use any kind of data that’s available, and you don’t need to label it, to come up with these unsupervised representation learning approaches. They seem to be very successful, and they’re beating a lot of the traditional approaches, because you have access to way larger corpora of unlabeled information, which means you can train better models.
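A minimal sketch of the “predict what comes next” idea, assuming PyTorch; the sine-wave data and the two-layer model are toy stand-ins, and the only point is that the training signal comes from the data itself, with no human labels:

```python
import math
import torch
import torch.nn as nn

# Unlabeled signal: a noisy sine wave.
t = torch.linspace(0, 20 * math.pi, 2000)
series = torch.sin(t) + 0.1 * torch.randn_like(t)

# Self-supervision: from a window of past values, predict the next value.
window = 16
inputs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
targets = series[window:].unsqueeze(1)

model = nn.Sequential(nn.Linear(window, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final next-step prediction error: {loss.item():.4f}")
```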
Now, is that a direct analogy to what the human brain does? I don’t know. But certainly it’s an engineering strategy that results in world-leading performance on a number of very popular benchmarks right now, and it is, broadly speaking, neurally-inspired. So, I guess bringing together what our brains do and what we can do in engineering is always a dance between the abstract inspiration that we can get from how biology works, and the very hard math and engineering in getting solutions to train on large-scale computers with hundreds of teraflops in compute capacity and large matrix multiplications in the middle. It’s advances on both sides of the house that make ML advance rapidly today.
Then take a similar problem, or tell me if this is a similar problem, when you’re doing voice recognition, and there’s somebody outside with the jackhammer, you know, it’s annoying, but a human can separate those two things. It can hear what you’re saying just fine, but for a machine, that’s a really difficult challenge. Now my question to you is, is that the same problem? Is it one trick humans have like that that we apply in a number of ways? Or is that a completely different thing that’s going on in that example?
I think it’s similar, and you’re hitting onto something, because in the listening example there are some active and some passive components going on. We’re all familiar with the phenomenon of selective hearing when we’re at a dinner party and there are 200 conversations going on in parallel. If we focus our attention on a certain speaker or a certain part of the conversation, we can make them stand out over the din and the noise, because our own minds have some prior assumptions as to what constitutes a conversation, and we can exploit these priors in order to selectively listen in to parts of the conversation. This has partly a physical characteristic, maybe hearing in stereo. Our ears have certain directional characteristics to the way they pick up certain frequencies; by turning our head the right way and inclining it the right way, we can already do a lot [with] stereo separation, whereas, if you have a single microphone—and that’s all the signal you get—all these avenues would be closed to you.
But, I think the main story is one about signals superimposed with noise—whether that’s camera distortions, or fog, or poor lighting in the case of the statue that we are trying to recognize, or whether it’s ambient noise or intermittent outages in the audio signal that you’re looking into. The two most popular neurally-inspired architectures on the market right now [are] the convolutional networks, for a lot of things in the image and also natural text space, and the recurrent networks, for a lot of things in the audio and time-series signal space, but also in the text space. Both share the characteristic that they are vastly more resilient to noise than any hard-coded or programmed approach. I guess the underlying problem is one that, five years ago, would have been considered probably unsolvable, where today, with these modern techniques, we’re able to train models that can adequately deal with these challenges as long as the information is in the signal.
Well, what do you think is happening when a human hears a conversation at the party, to go with that example, and thinks, “Oh, I want to listen to that”? I hear what you say, that there’s one aspect where you make a physical modification to the situation, but what you’ve also done is introduce this idea of consciousness, that a person can selectively change their focus, and that aspect of what the brain is doing, where it’s like, “Oh, wait a minute.” Maybe that’s something that’s hard to implement on a machine, or is that not the case at all?
If you take that idea, and I think in the ML research and engineering communities this is currently most popular under the label of attention, or attention-based mechanisms, then certainly this is all over the leading approaches right now, whether it’s the computer vision papers from CVPR just last week or the text processing architectures that return state-of-the-art results right now. They all start to include some kind of attention mechanism allowing you both to weigh outputs by the center of attention, and also to trace back results to centers of attention, which has two very nice properties. On the one hand, attention mechanisms, nascent as they are today, help improve the accuracy of what models can deliver. On the other hand, the ability to trace the outcome of a machine learning model back to centers and regions of attention in the input can do wonders for explain-ability of ML and AI results, which is something that users and customers are increasingly looking for. Don’t just give me a result which is as good as my current process, or hopefully a couple of percentage points better; also help me build confidence in it by explaining why things are being classed or categorized or translated or extracted the way they are. To gain human trust in an operating system of humans and machines working together, explain-ability is going to be big.
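A bare-bones NumPy sketch of an attention mechanism of the kind described here; the shapes and data are arbitrary, and the point is that the same weights that improve the output can be read back as a rough indication of where the model “looked”:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of the query to each input position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: the attention weights
    return weights @ V, weights

rng = np.random.default_rng(0)
K = rng.normal(size=(5, 8))                  # 5 input positions, 8-dim keys
V = rng.normal(size=(5, 8))                  # values carried by those positions
Q = K[2:3] + 0.1 * rng.normal(size=(1, 8))   # a query that resembles position 2

output, weights = scaled_dot_product_attention(Q, K, V)
print(np.round(weights, 3))   # position 2 gets most of the weight: a crude "explanation"
```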
One of the peculiar things to me, with regard to strong AI—general intelligence—is that when you ask folks, “When will we get a general intelligence?” the soonest you ever hear is five years. There are very famous people who believe we’re going to have something very soon. Then the other extreme is about 500 years, and that worrying about it is like worrying about overpopulation on Mars. My question to you is: why do you think there’s such a wide range in terms of our idea of when we may make such a breakthrough?
I think it’s because of one vexing property of humans and machines: the things that are easiest for us humans tend to be the things that are hardest for machines, and vice versa. If you look at it today, nobody would dream of having computer as a job description. That’s a machine. If you think back 60-70 years, computer was the job description of people actually doing manual calculations. “Printer” was a job description, and a lot of other things that we would never dream of doing manually today were being done manually. Think of spreadsheets, potentially the greatest simple invention in computing; think of databases; think of things like enterprise resource planning systems that SAP does, and business networks connecting them, or any kind of cloud-based solutions—what they deliver is tremendous and it’s very easy for machines to do, but it tends to be the things that are very hard for humans. Now at the same time, things that are very easy for humans to do, like seeing a doggie and shouting “doggie,” or seeing a cat and saying “meow,” are something that toddlers can do, but until very, very recently, the best and most sophisticated algorithms haven’t been able to do that part.
I think part of the excitement around ML and deep learning right now is that a lot of these things have fallen, and we’re seeing superhuman performance on image classification tasks. We’re seeing superhuman performance on things like switchboard voice-to-text transcription tasks, and many other tasks that used to be very easy for humans but impossible for machines are now falling to machines. This is something that generates a lot of excitement right now. I think where we have to be careful is [letting] this guide our expectations on the speed of progress in following years. Human intuition about what is easy and what is hard is traditionally a very, very poor guide to the ease of implementation with computers and with ML.
Example, my son was asking me yesterday, “Dad, how come the car can know where it is at and tell us where to drive?” And I was like, “Son, that’s fairly straightforward. There are all these satellites flying around, and they’re shouting at us, ‘It’s currently 2 o’clock and 30 seconds,’ and we’re just measuring the time between their shouts to figure out where we are today, and then that gives us that position on the planet. It’s not a great invention; it’s the GPS system—it’s mathematically super hard to do for a human with a slide rule; it’s very easy to do for the machine.” And my son said, “Yeah, but that’s not what I wanted to know. How come the machine is talking to us with the human voice? This is what I find amazing, and I would like to understand how that is built.” and I think that our intuition about what’s easy and what’s hard is historically a very poor guide for figuring out what the next step and the future of ML and artificial intelligence look like. This is why you’re getting those very broad bands of predictions.
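The GPS aside above can be made concrete with a toy sketch: each satellite’s broadcast time tells the receiver how far away that satellite is, and the position is whatever point is consistent with those distances. The setup here is a made-up two-dimensional example; real GPS works in 3-D and also solves for the receiver’s clock error, which is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                               # speed of light, m/s

# Hypothetical satellite positions and a true receiver position (2-D toy).
satellites = np.array([[0.0, 20_000_000.0],
                       [15_000_000.0, 18_000_000.0],
                       [-12_000_000.0, 17_000_000.0]])
true_position = np.array([1_500_000.0, 2_000_000.0])

# The "shouted time" gives the signal travel time, hence the distance.
travel_times = np.linalg.norm(satellites - true_position, axis=1) / C
measured_distances = travel_times * C

def residuals(p):
    # Mismatch between the distances implied by a candidate position and the measurements.
    return np.linalg.norm(satellites - p, axis=1) - measured_distances

estimate = least_squares(residuals, np.zeros(2)).x
print(estimate)    # recovers approximately [1500000, 2000000]
```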
Well do you think that the difference between the narrow or weak AI we have now and strong AI, is evolutionary? Are we on the path [where] when machines get somewhat faster, and we get more data, and we get better algorithms, that we’re going to gradually get a general intelligence? Or is a general intelligence something very different, like a whole different problem than the kinds of problems we’re working on today?
That’s a tough one. I think that, taking the brain analogy, we’re today doing the equivalent of very simple sensory circuits, which can maybe duplicate the first couple of dozen, of maybe a hundred, layers in the way the visual cortex works. We’re starting to make progress into some things like one-shot learning; it’s very nascent, in early-stage research right now. We’re starting to make much more progress in directions like reinforcement learning, but overall it’s very hard to say which, if any, additional mechanisms are there in the large. If you look at the biological system of the brain, there’s a molecular level that’s interesting. There’s a cellular level that’s interesting. There is a simple interconnection level that’s interesting. There is a micro-interconnection level that’s interesting. I think we’re still far from a complete understanding of how the brain works. I think right now we have tremendous momentum and a very exciting trajectory with what our artificial neural networks can do, at least for the next three to five years. There seems to be pretty much limitless potential to bring them out into real-world businesses, into real-world situations and contexts, and to create amazing new solutions. Do I think that really will deliver strong AI? I don’t know. I’m an agnostic, so I always fall back to the position that I don’t know enough.
Only one more question about strong AI, and then let’s talk about the shorter-term future. The question is, human DNA converted to code is something like 700 MB, give or take. But the amount that’s uniquely human, compared to, say, a chimp or something like that, is only about a 1% difference—only 7 or 8 or 9 MB of code—and that is what gives us a general intelligence. Does that imply, or at least tell us, how to build something that can then become generally intelligent? Does that imply to you that general intelligence is actually simple, straightforward? That we can look at nature and say, it’s really a small amount of code, and therefore we really should be looking for simple, elegant solutions to general intelligence? Or do those two things just not map at all?
Certainly, what we’re seeing today is that deep learning approaches to problems like image classification, image object detection, image segmentation, video annotation, audio transcription—all these things tend to be orders of magnitude smaller problems than what we dealt with when we handcrafted things. The core of most deep learning solutions to these things, if you really look at the core model and the model structure, tends to be maybe 500 lines of code, maybe 1,000. That’s within the reach of an individual putting this together over a weekend, so the huge democratization that deep learning based on big data brings is that a lot of these models that do amazing things are very, very small code artifacts. The weight matrices and the binary models that they generate, though, tend to be as large as or larger than traditional programs compiled into executables, sometimes orders of magnitude larger again. The thing is, they are very hard to interpret, and we’re only at the beginning of explain-ability of what the different weights and the different excitations mean. I think there are some nice early visualizations on this. There are also some nice visualizations that explain what’s going on with attention mechanisms in artificial networks.
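A small sketch of the code-size-versus-model-size point, assuming PyTorch; the architecture is a generic small CNN, not any particular production model: roughly ten lines of definition, tens of millions of learned parameters.

```python
import torch.nn as nn

# Roughly ten lines of model definition; assumes 224x224 RGB inputs.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 224 -> 112
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 112 -> 56
    nn.Flatten(),
    nn.Linear(64 * 56 * 56, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

n_params = sum(p.numel() for p in model.parameters())
print(f"learned parameters: {n_params:,}")   # about 51 million numbers to store and interpret
```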
As to explain-ability of the real network in the brain, I think that is very nascent. I’ve seen some great papers and results on things like spatial representations in the visual cortex, where surprisingly you find triangular grids, or attempts to reconstruct the image hitting the retina based on reading, with fMRI scans, the excitations in lower levels of the visual cortex. They show that we’re getting closer to understanding the first few layers. I think that even with the 7 MB difference or so that you allude to between chimps and humans spelled out for us, there is a whole set of layers of abstraction between the DNA code and the RNA representation, the protein representation, the excitation of these with methylation and other mechanisms that control activation of genes, and the interplay of the proteins across a living, breathing human brain. All of those magnitudes of complexity sit above that roughly seven-megabyte difference in A’s, C’s, T’s, and G’s. We live in super exciting times. We live in times where a new record, and a new development, and a new capability that was unthinkable a year ago, let alone a decade ago, is becoming commonplace, and it’s an invigorating and exciting time to be alive. I still struggle to make a prediction of the year we get to general AI based on a straight-line trend.
As exciting as AI is, there’s some fear wrapped up in it as well. The fear is the effect of automation on employment. I mean, you know this, of course; it’s covered so much. There are kind of three schools of thought: One says that we’re going to automate certain tasks and that there will be a group of individuals who do not have the training to add economic value. They will be pushed out of the labor market, and we’ll have perpetual unemployment, like a big depression that never goes away. Then there’s another group that says, “No, no, no, you don’t understand. Everybody is replaceable. Every single job we have, machines can do any of it.” And then there’s a third school of thought that says, “No, none of that’s going to happen. The history of 250 years of the Industrial Revolution is that people take these new technologies, even profound ones like electricity and engines and steam, and they just use them to increase their own productivity and to drive wages up. We’re not going to have any unemployment from this, any permanent unemployment.” Which of those three camps, or a fourth, do you fall into?
I think that there’s a lot of historical precedent for how technology gets adopted, and there are also numbers on the adoption of technologies in our own day and age that serve as reference points here. For example, one of the things that truly surprised me is that the amount of e-commerce—as a percentage of overall retail market share—is still in the mid to high single-digit percentage points, according to surveys that I’ve seen. That totally does not match my personal experience of basically doing all my non-grocery shopping entirely online. But it shows that in the 20-25 years of the Internet Revolution, tremendous value has been created—the convenience of having all kinds of stuff at your doorstep with just a single click—and yet that has transformed only a single-digit percentage of the overall retail market. This was one of the most rapid uptakes in history of a new technology with groundbreaking value, by decoupling atoms and bits, and it’s been playing out over the past 20-25 years that all of us are observing.
So, I think while there is tremendous potential for machine learning and AI to drive another Industrial Revolution, we’re also in the middle of all these curves from other revolutions that are ongoing. We’ve had a mobile revolution that unshackled computers and gave everybody what used to be a supercomputer in their pocket; before that, an Internet revolution; before that, a client-server revolution and the computing revolution in its own right—all of these building on prior revolutions like electricity, or the internal combustion engine, or methods like the printing press. They certainly have a tendency to show accelerating technology cycles. But on the other hand, for something like e-commerce or even mobile, the actual adoption speed has been one that is none too frightening. So for all the tremendous potential that ML and AI bring, I would be hard-pressed to come up with a completely disruptive scenario here. I think we are seeing a technology with tremendous potential for rapid adoption. We’re seeing the potential both to create new value and do new things, and to automate existing activities, which continues past trends. Nobody has computer or printer as their job description today, and job descriptions like social-media influencer, or blogger, or web designer did not exist 25 years ago. This is Schumpeterian creative destruction going on all over, in every industry, in every geography, with every new technology curve that comes in.
I would say fears in this space are greatly overblown today. But fear is real the moment you feel it; therefore institutions—like the Partnership on Artificial Intelligence, with the leading technology companies, as well as the leading NGOs, think tanks, and research institutes—are coming together to discuss the implications of AI, the ethics of AI, and safety and guiding principles. All of these things are tremendously important to make sure that we can adopt this technology with confidence. Just remember that when cars were new, Great Britain had a law that a person with a red flag had to walk in front of the car in order to warn all pedestrians of the danger that was approaching. That was certainly an instance of fear about technology that, on the one hand, was real at that point in time, but that also went away with a better understanding of how it works and of the tremendous value to the economy.
What do you think of these efforts to require that when an artificial intelligence makes a ruling or a decision about you that you have a right to know why it made that decision? Is that a manifestation of the red flag in front of the car as well, and is that something that would, if that became the norm, actually constrain the development of artificial intelligence?
I think you’re referring to the implicit right to explanation that is part of the European Union’s new privacy regulation coming in 2018. Let me start by saying that this privacy regulation is a tremendous step forward, because the simple act of harmonizing the rules and creating one digital playing field across hundreds of millions of European citizens, countries, and nationalities is a tremendous step forward. We used to have a different data protection regime for each federal state in Germany, so anything that is unified and harmonized is a huge step forward. I also think that the quest for an explanation is something that is very human. At our core we continue to ask “why” and “how.” That is innate to us: when we apply for a job with a company and we get rejected, we want to know why. When we apply for a mortgage and are offered a rate that seems high to us, we want to understand why. It’s a natural question, a human question, and an information need that has to be served if we don’t want to end up in a Kafka-esque future where people don’t have a say about their destiny. So certainly, that is hugely important on the one hand.
On the other hand, we also need to be sure that we don’t hold ML and AI to a stricter standard than we hold humans to today, because that could become an inhibitor to innovation. If you ask a company, “Why didn’t I get accepted for that job?” they will probably say, “Dear Sir or Madam, thank you for your letter. Due to the unusually strong field of candidates for this particular posting, we regret to inform you that certain others were stronger, and we wish you all the best for your continued professional future.” That is what almost every rejection letter reads like today. Are we asking the same kind of explainability from an AI system that is delivering a recommendation as we apply to a system of humans and computers working together to create a letter like that? Or are we holding it to a much, much higher standard? If it is the first, that is absolutely essential. If it’s the second, we’ve got to watch whether we’re throwing out the baby with the bathwater. This is something where we need to work together to find the appropriate levels and standards for things like explainability in AI—to fill very abstract phrases like “right to an explanation” with life, in a way that can be implemented, that can be delivered, and that can provide satisfactory answers while not unduly inhibiting progress. With a lot of players focused on explainability today, we will certainly see significant advances going forward.
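To make the explainability discussion a bit more concrete, here is a minimal sketch of one common approach: attributing a model’s decision to its input features. The data, the feature names, and the linear-attribution method are all hypothetical illustrations, not anything described in the conversation; production systems typically use richer techniques such as SHAP values or surrogate models.

```python
# A minimal sketch (hypothetical, not any vendor's implementation) of
# "right to explanation": attribute a linear model's decision to features.
# For a linear model, contribution = coefficient * (value - mean) is a
# simple, well-understood attribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan-application features: income, debt ratio, years employed
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

def explain(applicant, feature_names):
    """Rank features by their contribution to this applicant's score."""
    contrib = model.coef_[0] * (applicant - X.mean(axis=0))
    order = np.argsort(-np.abs(contrib))
    return [(feature_names[i], round(float(contrib[i]), 3)) for i in order]

print(explain(X[0], ["income", "debt_ratio", "years_employed"]))
```

The printed list ranks which of the hypothetical features pushed this applicant’s score up or down, which is roughly the level of answer a rejection letter never gives.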
If you’re a business owner, and you read all of this stuff about artificial intelligence, and neural nets, and machine learning, and you say, “I want to apply some of this great technology in my company,” how do people spot problems in a business that might be good candidates for an AI solution?
I would turn that around and ask, “What’s keeping you awake at night? What are the three big things that make you worried? What are the things that make up the largest part of your uncertainty, or of your cost structure, or of the value that you’re trying to create?” Looking at end-to-end processes, it’s usually fairly straightforward to identify cases where AI and ML might be able to help and deliver tremendous value. The use-case identification tends to be the fairly easy part of the game. Where it gets tricky is in selecting and prioritizing these cases, figuring out the right things to build, and finding the data that you need in order to make the solution real, because unlike traditional software engineering, this is about learning from data. Without data you basically can’t start, or at least you have to build some very small simulators in order to create the data that you’re looking for.
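As a toy illustration of that “very small simulators” idea—bootstrapping a first model when historical data is missing—here is a minimal sketch that generates synthetic invoice records. The categories, amounts, and review rule are all invented for illustration; this is not a real data-generation tool.

```python
# A minimal sketch of a tiny simulator: generate synthetic process records
# to bootstrap a first model, then replace them with real data over time.
import csv
import random

random.seed(42)
CATEGORIES = ["office supplies", "IT hardware", "travel", "services"]  # hypothetical

def simulate_invoice():
    category = random.choice(CATEGORIES)
    amount = round(random.lognormvariate(6, 1), 2)   # skewed invoice amounts
    has_po = random.random() > 0.3                   # some spend has no purchase order
    needs_review = amount > 2000 or not has_po       # invented business rule
    return {"category": category, "amount": amount,
            "has_po": has_po, "needs_review": needs_review}

with open("synthetic_invoices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["category", "amount", "has_po", "needs_review"])
    writer.writeheader()
    writer.writerows(simulate_invoice() for _ in range(1000))
```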
You mentioned that that’s the beginning of the game, but what makes the news all the time is when AI beats a person at a game. In 1997 you had chess, then you had Ken Jennings in Jeopardy!, then you had AlphaGo and Lee Sedol, and then you had AI beating poker. Is it a valid approach to say, “Look around your business and look for things that look like games,” because games have constrained rules, and they have points, and winners, and losers? Is that a useful way to think about it? Or are the games more about publicity—a PR campaign for AI—and not really a useful metaphor for business problems?
I think these very publicized showcases are extremely important to raise awareness and to demonstrate stunning new capabilities. What we see in building business solutions is that you don’t necessarily have to beat the human world champion at something in order to deliver value, because a lot of business is about processes—about people following flowcharts together with software systems to deliver a repeatable process for things like customer service, or IT incident handling, or incoming invoice screening and matching, or other repetitive, recurring tasks in the enterprise. Already by addressing, say, 60-80% of these, we can create tremendous value for enterprises—by making processes run faster, by making people more productive, and by relieving them of the parts of their activities that they regard as repetitive, mind-numbing, and not particularly enjoyable.
The good thing is that in a modern enterprise today, people tend to have IT systems in place where all these activities leave a digital exhaust stream of data, and locking into that digital exhaust stream and learning from it is the key way to make ML solutions for the enterprise feasible today. This is one of the things where I’m really proud to be working for SAP, because 76% of all business transactions, as measured by value, anywhere on the globe, are on an SAP system today. So if you want to learn models on digital information that touches the enterprise, chances are it’s either in an SAP system or in a surrounding system already. Looking for these cases and taking the intersection between what’s attractive—because it serves core business processes with faster speed, greater agility, lower cost, more flexibility, or bigger value—and what’s feasible—“do I have the digital information that I can learn from to build business-relevant functionality today?”—is our overriding approach to identifying the things we build in order to make all our SAP enterprise applications intelligent.
Let’s talk about that for a minute. What sorts of things are you working on right now? What sorts of things have the organization’s attention in machine learning?
It’s really end-to-end digital intelligence on processes, and let me give you an example. Look at the finance space, which SAP is well known for, and its huge end-to-end processes—like record-to-report, or invoice-to-record—which deal end-to-end with what an enterprise needs to do in order to buy stuff, receive it, and pay for it, or to sell stuff and get paid for it. These are huge machines with dozens and dozens of process steps, and many individuals in shared-service environments who perform and deliver these services. A document like an invoice, for example, is just the tip of the iceberg for the complex orchestration needed to deal with it. We’re taking these end-to-end processes, and we’re making them intelligent every step of the way.
When an invoice hits the enterprise, the first question is: what’s in it? Today, most of the teams in shared-service environments extract the relevant information and enter it into SAP systems. The next question is, “Do I know this supplier?” If they have merged, or changed names, or opened a new branch, I might not have them in my database. That’s a fuzzy lookup. The next step might be, “Have I ordered something like this?”—and that’s a significant question, because in some industries up to one-third of spending doesn’t have a purchase order. Finding people who have ordered this stuff, or related stuff from this supplier or similar suppliers, in the past can be the key to figuring out whether we should approve it or not. Then there’s the question of, “Did we receive the goods and services that this invoice is for?” That’s about going through lists and lists of stuff, and figuring out whether the bill of lading for the truck that arrived really contains all the things that were on the truck and all the things that were on the invoice, but no other things. That’s about list matching and list comprehension, document matching, and recommendation and classification systems. It goes on and on like that until the point where we actually put through the payment and the supplier gets paid for the invoice.
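To illustrate the “fuzzy lookup” step described above, here is a minimal sketch that matches a supplier name on an incoming invoice against a master-data list using simple string similarity. The supplier names and the threshold are hypothetical; a real system would also use addresses, tax IDs, and learned matching models rather than string similarity alone.

```python
# A minimal sketch (illustrative only) of fuzzy supplier lookup:
# match an invoice's supplier name against master data even when the
# name has been merged, renamed, or abbreviated.
from difflib import SequenceMatcher

# Hypothetical supplier master data
SUPPLIER_MASTER = ["Acme Industrial Holdings GmbH",
                   "Northwind Traders Ltd.",
                   "Contoso Pharmaceuticals AG"]

def fuzzy_lookup(invoice_name, candidates, threshold=0.6):
    """Return the best-matching known supplier and its similarity score."""
    scored = [(SequenceMatcher(None, invoice_name.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, best = max(scored)
    return (best if score >= threshold else None, round(score, 2))

print(fuzzy_lookup("ACME Industrial GmbH", SUPPLIER_MASTER))   # likely matches Acme
print(fuzzy_lookup("Globex Corporation", SUPPLIER_MASTER))     # likely no confident match
```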
What you see is a digital process enabled by very sophisticated IT systems, with routine workflows between many human participants today. What we do is take the digital exhaust of all the process participants to learn what they’ve been doing, and then put the common, repetitive, mind-numbing parts of the process on autopilot—gaining speed, reducing cost, making people more satisfied with their work day because they can focus on the challenging, interesting, and stimulating cases, and increasing customer satisfaction—or in this case supplier satisfaction, because they get paid faster. This end-to-end approach is how we look at business processes, and when my ML and AI group does that, we see a recommender, an entity extractor, or some kind of translation mechanism at every step of the process. We work hard to turn these capabilities into scalable APIs on our cloud platform that integrate seamlessly with our standard applications, and that’s really our approach to problem-solving. It ties into the underlying data repository about how business operates and how processes flow.
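As a rough sketch of learning from that “digital exhaust,” the example below trains a tiny classifier on decisions clerks have already made, puts only high-confidence cases on autopilot, and escalates the rest to a human. The data, labels, and threshold are invented for illustration; this is not SAP’s implementation.

```python
# A minimal sketch: learn from decisions humans already made, auto-handle
# high-confidence routine cases, and route uncertain ones to a person.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical invoice line texts and assigned cost centers
history_text = ["toner cartridges black", "laptop docking station",
                "hotel two nights berlin", "consulting services march",
                "printer paper a4", "flight frankfurt to paris"]
history_label = ["OFFICE", "IT", "TRAVEL", "SERVICES", "OFFICE", "TRAVEL"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(history_text, history_label)

def route(line_text, autopilot_threshold=0.8):
    """Auto-post confident cases; escalate uncertain ones to a human clerk."""
    probs = model.predict_proba([line_text])[0]
    best = probs.argmax()
    decision = "auto" if probs[best] >= autopilot_threshold else "human_review"
    return decision, model.classes_[best], round(float(probs[best]), 2)

print(route("toner cartridges color"))
print(route("unusual customs brokerage fee"))
```

With such a tiny toy history, nearly everything will be escalated to human review, which is the safe default; in practice the model and threshold would be tuned on the much larger transaction history described above.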
Do you find that your customers are clear about how this technology can be used—coming to you and saying, “We want this kind of functionality, and we want to apply it this way,” very clear about their goals and objectives? Or are you finding that people are still finding their sea legs and figuring out ways to apply artificial intelligence in the business, and you’re more having to lead them and say, “Here’s a great thing you could do that you maybe didn’t know was possible”?
I think it’s like everywhere: you’ve got early adopters and innovation promoters who actively come to us with cases of their own. You have more conservative enterprises looking to see how things play out and what the results for early adopters are. You have others who have legitimate reasons to focus on the burning parts of their house right now, for whom this is not yet a priority. What I can say is that the amount of interest in ML and AI we’re seeing from customers and partners is tremendous and almost unprecedented, because they all see the potential to take business processes, and the way business executes, to a completely new level. The key challenge is working with customers early enough, and at the same time working with enough customers in a given setting, to make sure that what we build is not a highly specific one-off, and to make sure that we’re really rethinking the process with digital intelligence instead of simply automating the status quo. I think this is maybe the biggest risk. We have a tremendous opportunity to transform how business is done today if we truly see this through end-to-end. If we’re only trying to build isolated instances of faster horses, the value won’t be there. That is why we take such an active interest in the end-to-end and integration perspective.
Alright, well, I guess just two final questions. The first is: overall it sounds like you’re optimistic about the transformative power of artificial intelligence and what it can do—
Absolutely, Byron.
But I would put that question to you that you put to businesses. What keeps you awake at night? What are the three things that worry you? They don’t have to be big things, but what are the challenges right now that you’re facing or thinking about like, “Oh, I just wish I had better data or if we could just solve this one problem?”
I think the biggest thing keeping me awake right now is the luxury problem of being able to grow as fast as demand and the market want us to. That has all the aspects of organizational scaling and of scaling the product portfolio that we enable with intelligence. Fortunately, we’re not a small start-up with limited resources. We are the leading enterprise software company, and scaling inside such an environment is substantially easier than it would be on the outside. Still, we’ve been doubling every year, and we look set to continue in that vein. That’s certainly the biggest strain and the biggest worry that I face. It’s very old-fashioned things, like leadership development, that I tend to focus a lot of my time on. I wish I had more time to play with models and with the technology, and to actually build and ship a great product. But what keeps me awake are these more old-fashioned things, like leadership development, that matter the most for where we are right now.
You said at the very beginning that during the week you’re all about applying these technologies to businesses, and then on the weekend you think about some of these fun problems. I’m curious whether you consume science fiction—books, or movies, or TV—and if so, is there any view of the future, anything you’ve read or seen or experienced, that made you think, “Ah, I could see that happening,” or, “Wow, that really made me think”? Or do you not consume science fiction?
Byron, you caught me out here. The last thing I consumed was actually Valerian and the City of a Thousand Planets, just last night, in the movie theater in Karlsruhe that I went to all the time when I was a student. While not per se occupied with artificial intelligence, it was certainly stunning, and I do consume a lot of this stuff because it provides a view of plausible futures. Most of the things I tend to read are focused on space, oddly enough. Things like The Three-Body Problem, and the fantastic trilogy that it became, really aroused my interest and really made me think. There are others that offer very credible trajectories. I was a big fan of the book Accelerando, which paints a credible trajectory from today’s world of information technology to an upload culture of digital minds and humans colonizing the solar system and beyond. I think these escapes are critical to clear the head from day-to-day business and the pressures of delivering product under given budgets and deadlines. Indulging in them allows me to return relaxed, refreshed, and energized every Monday morning.
Alright, well, that’s a great place to leave it, Markus. I want to thank you so much for your time. It sounds like you’re doing fantastically interesting work, and I wish you the best.
Did I mention that we’re hiring? There’s a lot of fantastically interesting work here, and we would love to have more people engaging in it. Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]