Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger

[voices_in_ai_byline]

About this Episode

Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discussing the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us. Irving has a PhD in Physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor at Imperial College London, and a fellow of the Center for Global Enterprise.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor of the Imperial College of London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.
Irving Wladawsky-Berger: Byron it’s a pleasure to be here with you.
So, that’s a lot of things you do. What do you spend most of your time doing?
Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.
So, you have an M.S. and a Ph.D. in Physics from the University of Chicago. Tell me… how does artificial intelligence play into the stuff you do on a regular basis?
Well, first of all, I got my Ph.D. in Physics in Chicago in 1970. I then joined IBM research in Computer Science. I switched fields from Physics to Computer Science because as I was getting my degree in the ‘60s, I spent most of my time computing.
And then you spent 37 years at IBM, right?
Yeah, then I spent 37 years at IBM working full time, and another three and a half years as a consultant. So, I joined IBM research in 1970, and then about four years later my first management job was to organize an AI group. Now, Byron, AI in 1974 was very very very different from AI in 2018. I’m sure you’re familiar with the whole history of AI. If not, I can just briefly tell you about the evolution. I’ve seen it, having been involved with it in one way or another for all these years.
So, back then did you ever have occasion to meet [John] McCarthy or any of the people at the Dartmouth [Summer Research Project]?
Yeah, yeah.
So, tell me about that. Tell me about the early early days in AI, before we jump into today.
I knew people at the MIT AI lab… Marvin Minsky, McCarthy, and there were a number of other people. You know, what’s interesting is at the time the approach to AI was to try to program intelligence, writing it in Lisp, which John McCarthy invented as a special programming language; writing in rules-based languages; writing in Prolog. At the time – remember this was years ago – they all thought that you could get AI done that way and it was just a matter of time before computers got fast enough for this to work. Clearly that approach toward artificial intelligence didn’t work at all. You couldn’t program something like intelligence when we didn’t understand at all how it worked…
Well, to pause right there for just a second… The reason they believed that – and it was a reasonable assumption – the reason they believed it is because they looked at things like Isaac Newton coming up with three laws that covered planetary motion, and Maxwell and different physical systems that only were governed by two or three simple laws and they hoped intelligence was. Do you think there’s any aspect of intelligence that’s really simple and we just haven’t stumbled across it, that you just iterate something over and over again? Any aspect of intelligence that’s like that?
I don’t think so, and in fact my analogy… and I’m glad you brought up Isaac Newton. This goes back to physics, which is what I got my degrees in. This is like comparing classical mechanics, which is deterministic. You know, you can tell precisely, based on classical mechanics, the motion of planets. If you throw a baseball, where is it going to go, etc. And as we know, classical mechanics does not work at the atomic and subatomic level.
We have something called quantum mechanics, and in quantum mechanics, nothing is deterministic. You can only tell what things are going to do based on something called a wave function, which gives you probability. I really believe that AI is like that, that it is so complicated, so emergent, so chaotic; etc., that the way to deal with AI is in a more probabilistic way. That has worked extremely well, and the previous approach where we try to write things down in a sort of deterministic way like classical mechanics, that just didn’t work.
Byron, imagine if I asked you to write down specifically how you learned to ride a bicycle. I bet you won’t be able to do it. I mean, you can write a poem about it. But if I say, “No, no, I want a computer program that tells me precisely…” If I say, “Byron I know you know how to recognize a cat. Tell me how you do it.” I don’t think you’ll be able to tell me, and that’s why that approach didn’t work.
And then, lo and behold, in the ‘90s we discovered that there was a whole different approach to AI based on getting lots and lots of data in very fast computers, analyzing the data, and then something like intelligence starts coming out of all that. I don’t know if it’s intelligence, but it doesn’t matter.
I really think that to a lot of people the real point where that hit home is when, in the late ’90s, IBM’s Deep Blue supercomputer beat Garry Kasparov in a very famous [chess] match. I don’t know, Byron, if you remember that.
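To make concrete the contrast Irving draws between trying to program intelligence by hand and letting something like intelligence come out of data, here is a minimal Python sketch. It is an editorial illustration, not anything from the episode: a hand-written rule for “is this a cat?” next to a tiny nearest-centroid model learned from labelled examples. The feature names and numbers are invented purely for illustration.

```python
# A toy contrast between rule-based and data-driven approaches.
# The "images" are just invented (roundness, whisker_score) pairs.

import math

def rule_based_is_cat(roundness, whisker_score):
    """The 1970s-style approach: an expert writes down the rules.
    Brittle, because real examples rarely fit clean thresholds."""
    return roundness > 0.7 and whisker_score > 0.5

def train_centroids(examples):
    """The statistical approach: estimate a simple model (per-class
    centroids) from labelled data instead of hand-coding the decision."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the features."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

if __name__ == "__main__":
    data = [((0.9, 0.8), "cat"), ((0.6, 0.9), "cat"), ((0.8, 0.4), "cat"),
            ((0.3, 0.1), "dog"), ((0.5, 0.2), "dog"), ((0.2, 0.3), "dog")]
    model = train_centroids(data)
    print(predict(model, (0.65, 0.6)))    # learned model says: 'cat'
    print(rule_based_is_cat(0.65, 0.6))   # hand-written rule says: False
```

The particular classifier does not matter; the point is that the second approach estimates its decision from examples, while the first depends on an expert writing down rules that rarely survive contact with messy data.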
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 61: A Conversation with Dr. Louis Rosenberg

[voices_in_ai_byline]

About this Episode

Episode 61 of Voices in AI features host Byron Reese and Dr. Louis Rosenberg talking about AI and swarm intelligence. Dr. Rosenberg is the CEO of Unanimous AI. He also holds a B.S., M.S., and a PhD in Engineering from Stanford.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese and today I’m excited that our guest is Louis Rosenberg. He is the CEO at Unanimous A.I. He holds a B.S. in Engineering, an M.S. in Engineering and a PhD in Engineering all from Stanford. Welcome to the show, Louis.
Dr. Louis Rosenberg: Yeah, thanks for having me.
So tell me a little bit about why do you have a company? Why are you CEO of a company called Unanimous A.I.? What is the unanimous aspect of it?
Sure. So, what we do at Unanimous A.I. is we use artificial intelligence to amplify the intelligence of groups rather than using A.I. to replace people. And so instead of replacing human intelligence, we are amplifying human intelligence by connecting people together using A.I. algorithms. So in layman’s terms, you would say we build hive minds. In scientific terms, we would say we build artificial swarm intelligence by connecting people together into systems.
What is swarm intelligence?
So swarm intelligence is a biological phenomenon that people have been studying, or biologists have been studying, since the 1950s. And it is basically the reason why birds flock and fish school and bees swarm—they are smarter together than they would be on their own. And the way they become smarter together is not the way people do it. They don’t take polls, they don’t conduct surveys, there’s no SurveyMonkey in nature. The way that groups of organisms get smarter together is by forming systems, real-time systems with feedback loops so that they can essentially think together as an emergent intelligence that is smarter as a uniform system than the individual participants would be on their own. And so the way I like to think of an artificial swarm intelligence or a hive mind is as a brain of brains. And that’s essentially what we focus on at Unanimous A.I., is figuring out how to do that among people, even though nature has figured out how to do that among birds and bees and fish, and has demonstrated over millions and hundreds of millions of years how powerful it can be.
So before we talk about artificial swarm intelligence, let’s just spend a little time really trying to understand what it is that the animals are doing. So the thesis is, your average ant isn’t very smart and even the smartest ant isn’t very smart and yet collectively they exhibit behavior that’s quite intelligent. They can do all kinds of things and forage and do this and that, and build a home and protect themselves from a flood and all of that. So how does that happen?
Yeah, so it’s an amazing process and it’s worth taking one little step back and just asking ourselves, how do we define the term intelligence? And then we can talk about how we can build a swarm intelligence. And so, in my mind, the word intelligence could be defined as a system that takes in noisy input about the world and it processes that input and it uses it to make decisions, to have opinions, to solve problems and, ideally, it does it creatively and by learning over time. And so if that’s intelligence, then there’s lots of ways we can think about building an artificial intelligence, which I would say is basically creating a system that involves technology that does some or all of these things, takes in noisy input and uses it to make decisions, have opinions, solve problems, and does it creatively and learning over time.
Now, in nature, there’s really been two paths by which nature has figured out how to do these things, how to create intelligence. One path is the path we’re very, very familiar with, which is by building up systems of neurons. And so, over hundreds of millions and billions of years, nature figured out that if you build these systems of neurons, which we call brains, you can take in information about the world and you can use it to make decisions and have opinions and solve problems and do it creatively and learn over time. But what nature has also shown is that in many organisms—particularly social organisms—once they’ve built that brain and they have an individual organism that can do this on their own, many social organisms then evolve the ability to connect the brains together into systems. So if a brain is a network of neurons where intelligence emerges, a swarm in nature is a network of brains that are connected deeply enough that a superintelligence emerges. And by superintelligence, we mean that the brain of brains is smarter together than those individual brains would be on their own. And as you described, it happens in ants, it happens in bees, it happens in birds, and fish.
And let me talk about bees because that happens to be the type of swarm intelligence that’s been studied the longest in nature. And so, if you think about the evolution of bees, they first developed their individual brains, which allowed them to process information, but at some point their brains could not get any larger, presumably because they fly, and so bees fly around, their brains are very tiny to be able to allow them to do that. In fact, a honeybee has a brain that has less than a million neurons in it, and it’s smaller than a grain of sand. And I know a million neurons sounds like a lot, but a human has 85 billion neurons. So however smart you are, divide that by 85,000 and that’s a honeybee. So a single honeybee, very, very simple organism and yet they have very difficult problems that they need to solve, just like humans have difficult problems.
And so the type of problem that is actually studied the most in honeybees is picking a new home to move into. And by new home, I mean, you have a colony of 10,000 bees and every year they need to find a new home because they’ve outgrown their previous home and that home could be a hole in a hollow log, it could be a hole at the side of a building, it could be a hole—if you’re unlucky—in your garage, which happened to me. And so a swarm of bees is going to need to find a new home to move into. And, again, it sounds like a pretty simple decision, but actually it’s a life-or-death decision for honeybees. And so for the evolution of bees, the better decision that they can make when picking a new home, the better the survival of their species. And so, to solve this problem, what colonies of honeybees do is they form a hive mind or a swarm intelligence and the first step is that they need to collect information about their world. And so they send out hundreds of scout bees out into the world to search 30 square miles to find potential sites, candidate sites that they can move into. So that’s data collection. And so they’re out there sending hundreds of bees out into the world searching for different potential homes, then they bring that information back to the colony and now they have the difficult part of it: they need to make a decision, they need to pick the best possible site of dozens of possible sites that they have discovered. Now, again, this sounds simple but honeybees are very discriminating house-hunters. They need to find a new home that satisfies a whole bunch of competing constraints. That new home has to be large enough to store the honey they need for the winter. It needs to be ventilated well enough so they can keep it cool in the summer. It needs to be insulated well enough so it can stay warm in cold nights. It needs to be protected from the rain, but also near good sources of water. And also, of course, it needs to be well-located, near good sources of pollen.
And so it’s a complex multi-variable problem. This is a problem that a single honeybee with a brain smaller than a grain of sand could not possibly solve. In fact, a human that was looking at that data would find it very difficult to use a human brain to find the best possible solution to this multi-variable optimization problem. Or a human that is faced with a similar human challenge, like finding the perfect location for a new factory or the perfect features of a new product or the perfect location to put a new store, would be very difficult to find a perfect solution. And yet, rigorous studies by biologists have shown that honeybees pick the best solution from all the available options about 80% of the time. And when they don’t pick the best possible solution, they pick the next best possible solution. And so it’s remarkable. By working together as a swarm intelligence, they are enabling themselves to make a decision that is optimized in a way that a human brain, which is 85,000 times more powerful, would struggle to do.
And so how do they do this? Well, they form a real-time system where they can process the data together and converge together on the optimal solution. Now, they’re honeybees, so how do they process the data? Well, nature came up with an amazing way. They do it by vibrating their bodies. And so biologists call this a “waggle dance” because to humans, when people first started looking into hives, they saw these bees doing something that looked like they were dancing because they were vibrating their bodies. It looked like they were dancing but really they were generating these vibrations, these signals that represent their support for their various home sites that were under consideration. By having hundreds and hundreds of bees vibrating their bodies at the same time, they’re basically engaging in this multi-directional tug of war. They’re pushing and pulling on a decision, exploring all the different options until they converge together in real time on the one solution that they can best agree upon and it’s almost always the optimal solution. And when it’s not the optimal solution, it’s the next best solution. So basically they’re forming this real-time system, this brain of brains that can converge together on an optimal solution and can solve problems that they couldn’t do on their own. And so that’s the most well-known example of what a swarm intelligence is and we see it in honeybees, but we also see the same process happening in flocks of birds, in schools of fish, which allows them to be smarter together than alone.
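As a rough editorial illustration of the positive-feedback process Dr. Rosenberg describes (it is not Unanimous A.I.’s algorithm, and it is not drawn from the episode), here is a small Python sketch: scouts back candidate sites, better sites hold on to their backers longer, and recruitment is weighted by both popularity and a noisy quality estimate, so the group usually converges on the best option. All parameters and names are invented.

```python
import random

def swarm_choose(site_qualities, n_scouts=100, rounds=50, noise=0.2, seed=None):
    """Toy positive-feedback model of nest-site selection (invented, not the
    real biology). Each scout backs one candidate site. Scouts backing poor
    sites reconsider often; when they do, they are recruited toward a site
    with probability proportional to its current popularity times a noisy
    quality estimate. The feedback loop usually converges on the best site."""
    rng = random.Random(seed)
    n_sites = len(site_qualities)
    backing = [rng.randrange(n_sites) for _ in range(n_scouts)]
    for _ in range(rounds):
        counts = [backing.count(s) for s in range(n_sites)]
        for i in range(n_scouts):
            # Better sites retain their backers: reconsider with prob 1 - quality.
            if rng.random() < 1.0 - site_qualities[backing[i]]:
                weights = [
                    (counts[s] + 1) * max(site_qualities[s] + rng.gauss(0, noise), 0.01)
                    for s in range(n_sites)
                ]
                backing[i] = rng.choices(range(n_sites), weights=weights)[0]
    counts = [backing.count(s) for s in range(n_sites)]
    return counts.index(max(counts))

if __name__ == "__main__":
    qualities = [0.55, 0.60, 0.90, 0.40]   # site 2 is the best candidate
    wins = sum(swarm_choose(qualities, seed=k) == 2 for k in range(200))
    print(f"swarm settled on the best site in {wins} of 200 runs")
```

Run many times, a toy like this picks the best site most of the time and a near-best site otherwise, which qualitatively mirrors the behavior the honeybee studies describe.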
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 58: A Conversation with Chris Eliasmith

[voices_in_ai_byline]

About this Episode

Episode 58 of Voices in AI features host Byron Reese and Chris Eliasmith talking about the brain, the mind, and emergence. Dr. Chris Eliasmith is co-CEO of Applied Brain Research, Inc. and director of the Centre for Theoretical Neuroscience at the University of Waterloo. Professor Eliasmith uses engineering, mathematics and computer modelling to study brain processes that give rise to behaviour. His lab developed the world’s largest functional brain model, Spaun, whose 2.5 million simulated neurons provide insights into the complexities of thought and action. A Professor of Philosophy and Engineering, Dr. Eliasmith holds a Canada Research Chair in Theoretical Neuroscience. He has authored or coauthored two books and over 90 publications in philosophy, psychology, neuroscience, computer science, and engineering. In 2015, he won the prestigious NSERC Polanyi Award. He has also co-hosted a Discovery Channel television show on emerging technologies.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today our guest is Chris Eliasmith. He’s the Canadian Research Chair in Theoretical Neuroscience. He’s a professor with, get this, a joint appointment in Philosophy and Systems Design Engineering and, if that’s not enough, a cross-appointment to the Computer Science department at the University of Waterloo. He is the Director of the Centre for Theoretical Neuroscience, and he was awarded the NSERC Polanyi Award for his work developing a computer model of the human brain. Welcome to the show, Chris!
Chris Eliasmith: Thank you very much. It’s great to be here.
So, what is intelligence?
That’s a tricky question, but one that I know you always like to start with. I think intelligence—I’m teaching a course on it this term, so I’ve been thinking about it a lot recently. It strikes me as the deployment of a set of skills that allow us to accomplish goals in a very wide variety of circumstances. It’s one of these things I think definitely comes in degrees, but we can think of some very stereotypical examples of the kinds of skills that seem to be important for intelligence, and these include things like abstract reasoning, planning, working with symbolic structures, and, of course, learning. I also think it’s clear that we generally don’t consider things to be intelligent unless they’re highly robust and can deal with lots of uncertainty. Basically some interesting notions of creativity often pop up when we think about what counts as intelligent or not, and it definitely depends more on how we manipulate knowledge than the knowledge we happen to have at that particular point in time.
Well, you said I like to start with that, but you were actually the first person in 56 episodes I asked that question to. I asked everybody else what artificial intelligence is, but we really have to start with intelligence. In what you just said, it sounded like there was a functional definition, like it is skills, but it’s also creativity. It’s also dealing with uncertainty. Let’s start with the most primitive thing which would be a white blood cell that can detect and kill an invading germ. Is that intelligent? I mean it’s got that skill.
I think it’s interesting that you bring that example up, because people are actually now talking about bacterial intelligence and plant intelligence. They’re definitely attempting to use the word in ways that I’m not especially comfortable with, largely because I think what you’re pointing to in these instances are sort of complex and sophisticated interactions with the world. But at the same time, I think the notions of intelligence that we’re more comfortable with are ones that deal with more cognitive kinds of behaviors, generally more abstract kinds of behaviors. The sort of degree of complexity in that kind of dealing with the world is far beyond I think what you find in things like blood cells and bacteria. Nevertheless, we can always put these things on a continuum and decide to use words in whichever particular ways we find useful. I think I’d like to restrict it to these sort of higher order kinds of complex interactions we see with…
I’m with you on that. So let me ask a different question: How is human intelligence unique in the world, as far as we know? What is different about human intelligence?
There are a couple of standard answers, I think, but even though they’re standard, I think they still capture some sort of essential insights. One of the most unique things about human intelligence is our ability to use abstract representations. We create them all the time. The most ubiquitous examples, of course, are language, where we’re just making sounds, but we can use it to refer to things in the world. We can use it to refer to classes of things in the world. We can use it to refer to things that are not in the world. We can exploit these representations to coordinate very complex social behaviors, including things like technological development as well as political systems and so on. So that sort of level of complex behavior that’s coordinated by abstract symbols is something that you just do not find in any other species on the planet. I think that’s one standard answer which I like.
The other one is that the amount of mental flexibility that humans display seems to outpace most other kinds of creatures that we see around us. This is basically just our ability to learn. One reason that people are in every single climate on the planet and able to survive in all those climates is because we can learn and adapt to unexpected circumstances. Sometimes it’s not because of abstract social reasoning or social skills or abstract language, but rather just because of our ability to develop solutions to problems which could be requiring spatial reasoning or other kinds of reasoning which aren’t necessarily guided by language.
I read, the other day, a really interesting thing, which was the only animal that will look in the direction you point is a dog, which sounds to me—I don’t know, it may be meaningless—but it sounds to me like a) we probably selected for that, right? The dog that when you say, “Go get him!” and it actually looks over there, we’d say that’s a good dog. But is there anything abstract in that, in that I point at something and then the animal then turns and looks at it?
I don’t think there’s anything especially abstract. To me, that’s an interesting kind of social coordination. It’s not the kind of abstractness I was talking about with language, I don’t think.
Okay. Do you think Gallup’s red dot test, the one where the animal tries to wipe the dot off its forehead, is a test that shows intelligence, like the creature understands what a mirror is? “Ah, that is me in the mirror?” What do you think’s going on there?
I think that is definitely an interesting test. I’m not sure how directly it’s getting at intelligence. That seems to be something more related to self-representation. Self-representation is likely something that matters for, again, social coordination, so being able to distinguish yourself from others. I think, often, more intelligent animals tend to be more social animals, likely because social interactions are so incredibly sophisticated. So you see this kind of thing definitely happening in dolphins, which are one of the animals that can pass the red dot test. You also see animals like dogs we consider generally pretty intelligent, again, because they’re very social, and that might be why they’re good at reacting to things like pointing and so on.
But it’s difficult to say that recognition in a mirror or some simple task like that is really going to let us identify something as being intelligent or not intelligent. I think the notion of intelligence is generally just much broader, and it really has to do with the set of skills—I’ll go back to my definition—the set of skills that we can bring to bear and the wide variety of circumstances that we can use them in to successfully solve problems. So when we see dolphins doing this kind of thing – they take sponges and put them on their nose so they can protect their nose from spiky animals when they’re searching the seabed – that’s an interesting kind of intelligence because they use their understanding of their environment to solve a particular problem. They also have done things like killing spiny urchins to poke eels to get them out of crevices. They’ve done all these sorts of things, and given the variety of problems that they’ve solved and the interesting and creative ways they’ve done it, we want to call dolphins intelligent. I don’t think it’s merely seeing a dot in a mirror that lets us know, “Ah! They’ve got the intelligence part of the brain.” I think it’s really a more comprehensive set of skills.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 32: A Conversation with Alan Winfield

[voices_in_ai_byline]
In this episode, Byron and Alan talk about robot ethics, military robots, emergence, consciousness, and self-awareness.
[podcast_player name="Episode 32 – A Conversation with Alan Winfield" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-01-22-(01-05-50)-alan-winfield.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/09/voices-in-ai-cover.png"]

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Alan Winfield. Alan Winfield is a professor of robot ethics at the University of the West of England. He has so many credentials, I don’t even know where to start. He’s a member of the World Economic Forum Council on the Future of Technology, Values and Policy. He’s a member of the Ethics Advisory Board for the Human Brain Project, and a number more. He sits on multiple editorial boards, such as the Journal of Experimental and Theoretical Artificial Intelligence, and he’s the associate editor of Frontiers in Evolutionary Robotics. Welcome to the show, Alan.
Alan Winfield: Hello, Byron, great to be here.
So, I bet you get the same first question every interview you do: What is a robot ethicist?
Well, these days, I do, yes. I think the easiest, simplest way to sum it up is someone who worries about the ethical and societal implications or consequences of robotics and AI. So, I’ve become a kind of professional worrier.
I guess that could go one of three ways. Is it ethics of how we use robots, is it the ethics of how the robots behave, or is it the ethics of… Well, I’ll just go with those two. What do you think more about?
Well, it’s both of those.
Okay.
But, certainly, the biggest proportion of my work is the former. In other words, how humans—that’s human engineers, manufacturers and maintainers, repairers and so on, in other words, everyone concerned with AI and robotics—should behave responsibly and ethically to minimize the, as it were, unwanted ethical consequences, harms if you like, to society, to individual humans and to the planet, from AI and robotics.
The second one of those, how AI and robotics can itself behave ethically, that’s very much more a research problem. It doesn’t have the urgency of the first, and it really is a deeply interesting question. And part of my research is certainly working on how we can build ethical robots.
I mean, an ethical robot, is that the same as a robot that’s a moral agent itself?
Yes, kind of. But bearing in mind that, right now, the only full moral agents that exist are adult humans like you and I. So, not all humans of course, so adult humans of sound mind, as it were. And, of course, we simply cannot build a comparable artificial moral agent. So, the best we can do so far, is to build minimally ethical robots that can, in the very limited sense, choose their actions based on ethical rules. But, unlike you and I, cannot decide whether or not to behave ethically, and certainly cannot, as it were, justify their actions afterwards.
When you think about the future and about ethical agents, or even how we use them ethically, how do you wrap your head around the fact that there aren’t any two people that agree on all ethics? And if you look around the world, the range of beliefs on what is ethical behavior and what isn’t, varies widely. So, is it not the case you’re shooting for a target that’s ill-defined to begin with?
Sure. Of course, we certainly have that problem. As you say, there is no single, universal set of ethical norms, and even within a particular tradition, say, in the Western ethical tradition, there are multiple sets of ethics, as it were, whether they’re consequentialist ethics or deontic or virtue ethics, so it’s certainly complicated. But I would say that you can abstract out of all of that, if you like, some very simple principles that pretty much most people would agree, which is that, for instance, a robot should not harm people, should not cause people to come to harm.
That happens to be Asimov’s first rule of robotics, and I think it’s a pretty wise, as it were, starting point. Not that Asimov’s first rule of robotics is universal, but what I’m saying is that we probably can extract a very small number of ethics which, if not universal, will attract broad agreement, broad consensus.
And yet, to highlight just that one, there’s an enormous amount of money that goes into artificial intelligence for robots used in the military, for instance, specifically including robots that actually do, or are designed to, kill and do harm. So we can’t even start at something that, at first glance, seems pretty obvious.
Well, indeed, and the weaponization of AI, and any technology, is something that we all should be concerned about. I mean, you’re right that the real world has weapons. That doesn’t mean that we shouldn’t strive for a better world in which technology is not weaponized. So, yes, this is an idealistic viewpoint, but what do you expect a robot ethicist to be except an idealist?
Point taken. One more question along these lines. Isn’t the landmine a robot with artificial intelligence that is designed to kill? I mean, the AI says if the object weighs more than forty-five pounds, I run this program which blows it up. Is that a robot that makes the kill decision itself?
Well, in a minimal sense, I suppose you might say it’s certainly an automaton. It has a sensor, which is the device that senses a weight upon it, and an actuator, which is the thing that triggers the explosion. But the fact is, of course, that landmines are hideous weapons that should’ve been banned a long time ago, and mostly are banned, and of course the world is still clearing up landmines.
I would like to switch gears a little bit and talk about emergence. You study swarm behavior.
Yes, I spent many years studying swarm behavior. That’s right, yes.
You’ve no doubt seen the video of—and, again, you’re going to have to help me with the example here. It’s a wasp that when threatened, they make a spinning pinwheel, where they’re all kind of making their wings open and close in this tight unison where it gives the illusion there’s this giant spinning thing. And it’s like the wave in a stadium, which happens so quickly. They’re not, like, saying, “Oh, Bob just waved his wings, now it’s my turn.” Are you familiar with that phenomenon?
I’m not. That’s a new one on me, Byron.
Then let’s just talk about any other… How is it that anthills and beehives act in unison? Is that to achieve larger goals, like, cool the hive, or what not? Is that swarm?
Yeah. I mean, the thing I think that we need to try and do is to dismiss any notion of goals. It’s certainly true that a termite’s mound, for instance, is an emergent consequence. It’s an emergent property of hundreds of thousands of termites doing their thing. And all of the extraordinary sophistication we see—the air conditioning, the fungus farms and such—in the termite mounds, are all also emergent properties of the, as it were, the myriad microscopic interactions between the individuals, between each other and their environment, which is, if you like, the materials and structure of the termite nest.
But if people say to me, “How do they know what they’re doing and when they’ve finished?” the answer is, well, firstly, no individual knows what it’s doing in the termite mound, and secondly, there is no notion of finished. The work of building and maintaining the termite mound just carries on forever. And the reason the world isn’t full of termite mounds, it hasn’t been, as it were, completely colonized by termite mounds, is for all sorts of reasons: climate, environment conditions, the fact that if termite mounds get too big, they’ll collapse because of their own weight, larger animals of course will either deliberately break into the termite mounds to feed on termites, or will just blunder into them and knock them over, and there’s flooding and weather and all kinds of stuff.
So, the fact that when we see termite mounds, we imagine that this is some kind of goal-oriented activity, is unfortunately, simply applying a very human metaphor to a non-human process. There is simply no notion that any individual termite knows what it’s doing, or of the collective, as it were, finishing a task. There are no tasks in fact. There are simply interactions, microscopic actions and interactions.
Let’s talk about emergence for a minute. I’ll set my question up with a little background for any listener. Emergence is the phenomenon where we observe attributes of a system that are not present in any of the individual components. Is that a fair definition?
Yes. I mean, there are many definitions of emergence, but essentially, you’re looking for macroscopic structures or phenomena or properties that are not evident in the behavior of individuals.
We divide it into two halves, and one half, a good number of people don’t believe exists. So, the first one is weak emergence, as I understand it, where you could study hydrogen for a year and you could study oxygen for a year, and never in your wildest imagination would you have guessed that you put them together and they make water and it’s wet, it’s got this new wetness. And yet, in weak emergence, when you study it enough and you figure out what’s going on, you go, “Oh, yeah, I see how that worked,” and then you see it.
And then there’s strong emergence, which posits that there are characteristics that emerge for which you cannot take a reductionist view, you cannot in any way study the individual components and ever figure out how they produced that result. And this isn’t an appeal to mysticism; more so, it’s a notion that maybe strong emergence is a fundamental force of the universe or something like that. Did I capture that distinction?
Yeah, I think you’ve got it. I mean, I’m definitely not a strong emergentist. It’s certainly true that, and I’ve seen this a number of times in my own work, emergent properties can be surprising. They can be puzzling. It can sometimes take you quite a long time to figure out what on Earth is going on. In other words, to unpick the mechanisms of emergence. But there’s nothing mysterious.
There’s nothing in my view that is inexplicable about emergence. I mean, there are plenty of emergent properties in nature that we simply cannot explain mechanically, but that doesn’t mean that they are inexplicable. It just means that we’re not smart enough. We haven’t, as it were, figured out what’s going on.
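A standard textbook illustration of the weak emergence being discussed here is Conway’s Game of Life, sketched below in Python as an editorial aside rather than something raised in the episode. Each cell follows two local rules, yet a “glider” that travels across the grid emerges; and once you trace the updates you can see exactly how, which is Alan’s point that surprising is not the same as inexplicable.

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells.
    A live cell survives with 2 or 3 live neighbours; a dead cell
    becomes live with exactly 3."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

if __name__ == "__main__":
    # A glider: nothing in the two rules above mentions shapes or motion,
    # yet this five-cell pattern reassembles itself one cell down and to
    # the right every four generations.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(8):
        cells = life_step(cells)
    shifted = {(x + 2, y + 2) for (x, y) in glider}
    print(cells == shifted)   # True: the pattern has moved two cells diagonally
```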
So, when you were talking about the termite nest, you said the termite nest doesn’t know what it’s doing, it doesn’t have goals, it doesn’t have tasks that have a beginning and an end. If all of that is true, then the human mind must not be an emergent phenomenon, because we do have goals, we know exactly what we’re doing.
Well, I’m not entirely sure I agree with that. I mean, we think we know what we’re doing, that may well be an illusion, but carry on anyway.
No, that’s a great place to start. So, you’re alluding to the studies that suggest you do something instinctually, and then your brain kind of races to figure out why did I do that, and then it reverses the order of those two things and says, “I decided to do it. That’s why I did it.”
Well, I mean, yeah, that’s one aspect which may or may not be true. But what I really mean, Byron, is that when you’re talking about human behaviors, goals, motivations and so on, what you’re really looking at is the top, the very top layer of an extraordinary multi-layered process, which we barely understand, well, we really don’t understand at all. I mean, there’s an enormous gap, as it were, between the low-level processes—which also we barely understand—in other words, the interactions between individual neurons and, as it were, the emergence of mind, let alone, subjective experience, consciousness and so on.
There are so many layers there, and then the top layer, which is human behavior, is also mediated through language and culture, and we mustn’t forget that. You and I wouldn’t have been having this conversation half a million years ago. The point is that the things that we can think about and have a discourse over, we wouldn’t be able to have a discourse about if it were not for this extraordinary edifice of culture, which kind of sits on top of a large number of human minds.
We are social animals, and that’s another emergent property. You’ve got the emergent property of mind, and then consciousness, then you have the emergent property of society, and on top of that, another emergent property, which is culture. And somewhere in the middle of that, all mixed up, is language. So, I think it’s so difficult to unpick all of this, when you start to ask questions like, “Yes, but how can a system of emergence have goals, have tasks?” Well, it just so happens that modern humans within this particular culture do have what we, perhaps rather pretentiously, think of as goals and motivations, but who knows what they really are? And I suspect we probably don’t have to go back many tens, certainly hundreds of generations, to find that our goals and motivations were no different to most of the animals, which is to eat and survive, to live another day.
And so, let’s work up that ladder from the brain to the mind to consciousness. Perhaps half a million years ago, you’re right, but there are those who would maintain that when we became conscious, that’s the moment we, in essence, took control and we had goals and intentions and all of that subtext going on. So, I’ll ask you the unanswerable question, how do you think consciousness comes about?
Gosh, I wish I knew.
Is it quantum phenomenon? Is it just pure emergence?
I certainly think it’s an emergent property, but I think it’s such a good adaptation, that I doubt that it’s just an accident. In other words, I suspect that consciousness is not like a spandrel of San Marco, you know, that wonderful metaphor. I think that it’s a valuable adaptation, and therefore, when—at some point in our evolutionary history, probably quite recent evolutionary history—some humans started to enjoy this remarkable phenomenon of being a subject and the subjective experience of recognizing themselves and their own agency in the world, I suspect that they had such a big adaptive advantage over their fellow humans, hominids, who didn’t have that experience, that, rather quickly, I think it would have become a strongly self-selecting adaptation.
I think that the emergence of consciousness is deeply tied up with being sociable. I think that in order to be social animals, we have to have theory of mind. To be a successful social animal, you need to be able to navigate relationships and the complexity of social hierarchies, pecking orders and such like.
We know that chimpanzees are really quite sophisticated with what we call Machiavellian intelligence. In other words, the kind of social intelligence where you will, quite deliberately, manipulate your behaviors in order to achieve some social advantage. In other words, I’ll pretend to want to get to know you, not because I really want to get to know you, but because I know that you are friends with somebody else, and I really want to be friends with her. So that’s Machiavellian intelligence. And it seems that chimpanzees are really rather good at it, and probably just as good at it as we homo sapiens.
And in order to be able to have that kind of Machiavellian intelligence, you need to have theory of mind. Now, theory of mind means having a really quite sophisticated model of your conspecifics. Now, that, I think, in turn, arose out of the fact that we have complicated bodies, bodies that are difficult to control, and therefore, we, at some earlier point in our evolutionary history, started to have quite sophisticated body self-image. In other words, an internal simulation, or whatever you call it, an internal model of our own physical bodies.
But, of course, the beauty of having a model of yourself is that you then automatically have a model of your conspecifics. So, I think having a self-model bootstraps into having theory of mind. And then, I think, once you have theory of mind, and you can—and I don’t know at what point this might have come in, whether it would come after we have theory of mind, probably, I think—start to imitate each other; in other words, do social learning.
I think social learning was, again, another huge step forward in the evolution of the modern mind. I mean, social learning is unbelievably more powerful than individual learning. Suddenly you have the ability to pass on knowledge to your children, and from your ancestors, especially once you invent symbols and language, and eventually writing, which of course came much later. But I think that all of these things were necessary, but perhaps not sufficient in themselves, prerequisites for consciousness. I mean, it’s very interesting, I don’t know if you know the work of Julian Jaynes.
Of course, Bicameral Mind. That we weren’t even conscious until 500 BC, and that the Greek gods and the rise of oracles was just us realizing we had lost the voice that we used to hear in our heads.
I mean, it’s a radical hypothesis. Not many people buy that argument. But I think it’s extremely interesting, the idea that modern consciousness may be a very recent adaptation, as you say, within, as it were, recorded history, back to Homeric times. So, I think the story of how consciousness evolved, may never be known of course. It’s like a lot of natural history. We can only ever have Just So Stories. We can only have more or less plausible hypotheses.
I’m absolutely convinced that key prerequisites are internal models. Dan Dennett has this wonderful structure, this conceptual framework that he calls the “Tower of Generate-and-Test,” this set of conceptual creatures that each has a more sophisticated way of generating and testing hypotheses about what action to take next. And without going through the whole thing in detail, his Popperian creatures have this amazing innovation of being able to imagine the outcomes of actions before trying them out. And therefore, they can imagine a bad action, and decide not to try it out for real, which may well be extremely dangerous.
And then he suggests that a subset of Popperian creatures are what he calls Gregorian creatures, who’ve invented mind tools, like language, and therefore have this additional remarkable ability to learn socially from each other. And I think that social learning and theory of mind are profoundly, in my view, implicated in the emergence of consciousness. Certainly, I would stick my neck out and say that I think solitary animals cannot enjoy the kind of consciousness that you and I do.
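To make the “imagine the outcome before acting” idea concrete, here is a toy Python sketch in the spirit of Dennett’s Popperian creatures and of robots with internal models. It is an invented example, not Winfield’s actual consequence engine: the robot simulates each candidate move with a simple world model, discards any move whose predicted outcome looks harmful, and only then optimizes for its goal.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    robot_pos: int
    human_pos: int
    hole_pos: int

def simulate(state, action):
    """Internal model: predict the next state if the robot moves by
    `action` (-1, 0 or +1) along a one-dimensional corridor."""
    return WorldState(state.robot_pos + action, state.human_pos, state.hole_pos)

def predicted_harm(state):
    """Toy consequence evaluator: falling into the hole is bad, and ending
    up on the human's square is treated as a collision risk."""
    harm = 0.0
    if state.robot_pos == state.hole_pos:
        harm += 1.0
    if state.robot_pos == state.human_pos:
        harm += 0.8
    return harm

def goal_progress(state, goal_pos):
    return -abs(state.robot_pos - goal_pos)   # closer to the goal is better

def choose_action(state, goal_pos):
    """Generate-and-test: imagine each action with the internal model,
    discard any whose predicted consequences look harmful, then pick the
    safe action that makes the most progress toward the goal."""
    safe = []
    for action in (-1, 0, 1):
        predicted = simulate(state, action)
        if predicted_harm(predicted) == 0.0:
            safe.append((goal_progress(predicted, goal_pos), action))
    if not safe:
        return 0          # no safe move: stay put
    return max(safe)[1]

if __name__ == "__main__":
    state = WorldState(robot_pos=2, human_pos=3, hole_pos=4)
    # Moving toward the goal at position 5 would step onto the human,
    # so the imagined outcome is rejected and the robot holds position.
    print("chosen action:", choose_action(state, goal_pos=5))   # prints 0
```

The design point is that safety is judged on predicted futures rather than on the current state, which is what the internal model buys you.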
So, all of that to say, we don’t know how it came about, and you said we may never know. But it’s really far more intractable than that because we don’t really know, if you agree with this, that it’s not just how it came about, we don’t have any science that suggests how a cloud of hydrogen could come to name itself. We don’t have any science to say how is it that I can feel something? How is it that I can experience something as opposed to just sensing it? As I listen to you along this conversation, I just replace everything with zombie, you know, the analogy of a human without consciousness.
In any case, so what would you say to that? I’ve heard consciousness described as the most difficult problem, maybe the only problem left that we know neither how to ask it, nor what the answer would look like. So, what do you think the answer to the question of how is it that we have subjective experience looks like?
Well, again, I have no idea. I mean, I completely agree with you, Byron. It is an extraordinarily difficult problem. What I was suggesting earlier were just a very small number of prerequisites, not in any sense was I suggesting that those are the answer to what is consciousness. There are interesting theories of consciousness. I mean, I like the work very much of Thomas Metzinger, who I think has a very, well to me at least, a very attractive theory of consciousness because it’s based upon the idea of the self-model, which I’ve indicated I’m interested in models, and his notion of the phenomenal self-model.
Now, as you quite rightly say, there are vast gulfs in our misunderstanding, and we certainly don’t even know properly what questions to ask, let alone answer, but I think we’re slowly getting there. I think progress is being made in the study of consciousness. I mean, the work of Anil Seth I think is deeply interesting in this regard. So, I’m basically agreeing with you.
We don’t have a science to understand how something can experience. So, I hook a temperature sensor up to my computer and I write a program so that it screams if it gets over five hundred degrees, and then I hold a match to it and it screams. We don’t think the computer is feeling pain; even though the computer’s able to sense all that’s going on, we don’t think that there’s an agent that can feel anything.
In fact, we don’t even really have science to understand how something could feel. And, I’m the first to admit it just kicks the can down the street, but you came out against strong emergence at the get-go, you’re definitely not that, but couldn’t you say “Well, clearly our basic physical laws don’t account for how matter can experience things, and therefore there might be another law at play that comes from complexity or any number of other things, that it isn’t reductionist and we just don’t understand it.” But why is it that you reject strong emergence so unequivocally, but still kind of struggle with, “We don’t really know any scientific way, with physics, to answer that question of how something can experience?”
Well, no, I think they’re completely compatible positions. I’m not saying that consciousness, subjective experience—what it is to subjectively experience something—is unknowable, in other words, the process. I don’t believe the process by which subjective experience happens in some complex collections of matter is unknowable. I think it’s just very hard to figure out and will take us a long time, but I think we will figure it out.
A lot of times when people look at the human brain, they say, “Well, the reason we don’t understand it is because it’s got one hundred billion neurons.” And yet there’s been an effort underway for two decades to take the nematode worm’s 302 neurons and try to make it—
—Two of which, interestingly, are not connected to anything.
—And try to make a digital life, you know, model it. So, we can’t even understand how the brain works to the degree that we can reproduce a three hundred-neuron brain. And even more so, there are those who suggest that a single neuron may be as complicated as a supercomputer. So, what do you think of that? Why can’t we understand how the nematode brain works?
Well, understanding of course, is a many-layered thing. And at some level of abstraction, we can understand how the nervous system of C. elegans works. I mean, we can, that’s true. But, as with all of science, understanding or scientific model is an abstraction, at some degree of abstraction. It’s a model at some degree of abstraction. And if you want to go deeper down, increase the level of granularity of that understanding, that’s I think when you start to have difficulties.
Because as you say, when we build, as it were, a computer simulation of C. elegans, we simply cannot model each individual neuron with complete fidelity. Why not? Well, not just because it’s extraordinarily complex, but we simply don’t fully understand all the internal processes of a biological neuron. But that doesn’t mean that we can’t, at some useful, meaningful level of abstraction, figure out that a particular stimulus to a particular sensor in the worm will cause a certain chain reaction of activations and so on, which will eventually cause a muscle to twitch. So, we can certainly do that.
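Here is a minimal sketch of what “understanding at some useful level of abstraction” can mean in practice; it is invented for illustration and is not a real C. elegans model. Neurons are reduced to weighted threshold units wired sensor to interneuron to motor neuron, so a transient touch stimulus propagates into a muscle command, while everything a real neuron does internally is deliberately ignored.

```python
# Neurons reduced to weighted threshold units; synapses as (pre, post, weight).
SYNAPSES = [
    ("touch_sensor", "interneuron_A", 0.9),
    ("interneuron_A", "interneuron_B", 0.8),
    ("interneuron_B", "motor_neuron", 1.2),
]
NEURONS = ["touch_sensor", "interneuron_A", "interneuron_B", "motor_neuron"]
THRESHOLD = 0.5

def step(activity):
    """One synchronous update: each neuron sums weighted input from
    presynaptic neurons that are currently above threshold."""
    nxt = {name: 0.0 for name in NEURONS}
    for pre, post, weight in SYNAPSES:
        if activity[pre] >= THRESHOLD:
            nxt[post] += weight * activity[pre]
    return nxt

def run(stimulus, steps=4):
    """Apply a transient touch stimulus and report whether the motor
    neuron ever crosses threshold (the 'muscle twitch')."""
    activity = {name: 0.0 for name in NEURONS}
    activity["touch_sensor"] = stimulus
    twitched = False
    for _ in range(steps):
        activity = step(activity)
        twitched = twitched or activity["motor_neuron"] >= THRESHOLD
    return twitched

if __name__ == "__main__":
    print("twitch with touch:   ", run(1.0))   # True
    print("twitch without touch:", run(0.0))   # False
```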
You wrote a paper, “Robots with Internal Models: A Route to Self-Aware and Hence Safer Robots,” and you alluded to that a few moments ago, when you talked about an internal model. Let’s take three terms that are used frequently. So, one of them is self-awareness. You have Gallup’s red dot test, that says, “I am a ‘self.’ I can see something in the mirror that has a red dot, and I know that’s me and I try to wipe it off my forehead.” That would be a notion of self-awareness. Then you have sentience, and of course it’s often misused; sentience of course just means to be able to sense something, usually to feel pain. And then you have consciousness, which is this, “I experience it.” Does self-awareness imply sentience, and does sentience imply consciousness? Or can something be self-aware and neither sentient nor conscious?
I don’t think it’s all binary. In other words, I think there are degrees of all of those things. I mean, even simple animals have to have some limited self-awareness. And the simplest kind of self-awareness that I think pretty much all animals need to have is to be able to tell the difference between me and not me. If you can’t tell the difference between me and not me, you’re going to have difficulty getting by in the world.
Now, that I think is a very limited form, if you like, of self-awareness, even though I wouldn’t suggest for a moment that simple animals that can indeed tell the difference between me and not me, have sentience or consciousness. So, I think that these things exist on a spectrum.
Do you think humans are the only example of consciousness on the planet or would you suspect—?
No, no, no. I think, again, that there are degrees of consciousness. I think that perhaps there are undoubtedly some unique attributes of humans. We’re almost certainly the only animal on the planet that can think about thinking. So, this kind of reflective—or is reflexive, is that the right word here—ability to kind of ask ourselves questions, as it were.
But even though, for instance, a chimpanzee probably doesn’t think about thinking, I think it is conscious. I mean, it certainly has plenty of other attributes of consciousness. And not only chimpanzees, but other animals are capable, clearly, of obviously feeling pain, also feeling grief, and feeling sadness. When a member of the clan is killed or dies, these are, in my view, evidence of consciousness in other animals. And there are plenty of animals that we almost feel instinctively are conscious to a reasonably high degree. Dolphins are another such animal. One of the most puzzling ones, of course, is the octopus.
Right, because you said a moment ago, a non-sociable animal shouldn’t be able to be conscious.
Exactly. And that’s the kind of black swan of that particular argument, and I was well aware of that when I said it. I mean, clearly, there’s something else going on in the octopus, but we can nevertheless be sure that octopuses, collectively, don’t have traditions in the way that many other animals do. In other words, they don’t have localized, socially-agreed behaviors like birdsong, or, in chimpanzees, cracking nuts open a different way on one side of the mountain to the other side of the mountain. So, there’s clearly something very puzzling going on in the octopus, which seems to buck what otherwise I think is a pretty sound proposition, which is, in my view, the role of sociability in the emergence of consciousness.
And, I think, octopus only live about three years, so just imagine if they had a one hundred-year lifespan or something.
What about plants? Is it possible that plants are self-aware, sapient, sentient or conscious?
Good question. I mean, certainly, plants are intelligent. I’m more comfortable with the word intelligence there. But as for, well, maybe even a limited form of self-awareness, a very limited form of sentience, in the sense that plants clearly do sense their environments. Plants, trees, clearly do sense and respond to attacks from neighboring plants or pests, and appear even to be able to respond in a way that protects themselves and their neighboring, as it were, conspecifics.
So, there is extraordinary sophistication in plant behavior, plant intelligence, that’s really only beginning to be understood. I have a friend, a biologist at Tel Aviv University, Danny Chamovitz, and Danny’s written a terrific book on plant intelligence that really is well worth reading.
What about Gaia? What about the Earth? Is it possible the Earth has its own emergent awareness, its own consciousness, in the same sense that all the neurons in our brain come together in our mind to give us consciousness?
And I don’t think these are purely academic questions, because at some point we’re going to have to address, “Is this computer conscious, is this computer able to feel, is this robot able to feel?” If we can’t figure out if a tree can feel, how in the world would we figure it out for something that doesn’t share ninety percent of its DNA with us? So, what would you think about the Earth having its own will and consciousness and awareness, as an emergent behavior of all of the lifeforms that live on it?
Yeah, gosh. I think you’ve probably really stumped me there. I mean, I think this is, you’re right, it’s an interesting question. I’ve absolutely no idea. I mean, I’m a materialist. I kind of find it difficult to understand how that might be the case when the planet isn’t a homogeneous system, it isn’t a fully connected system in the sense that nervous systems are.
I mean, the processes going on in and on the planet are extraordinarily complex. There’s tons of emergence going on. There are all kinds of feedback loops. Those are all undoubtedly facts. But whether that is enough, in and of itself, to give rise to any kind of analogue of self-awareness, I have to say, I’m doubtful. I mean, it would be wonderful if it were so, but I’m doubtful.
You wouldn’t be able to look at a human brain under a microscope and say, “These things are conscious.” And so, I guess, Lovelock would look back over the—and I don’t know what his position on that question would be—but he would look at the fact that the Earth self regulates so many of its attributes within narrow ranges. I’ll ask you one more, then. What about the Internet? Is it possible that the Internet has achieved some kind of consciousness or self-awareness? I mean, it’s certainly got enough processors on it.
I mean, I think perhaps the answer to that question, and this has only just come to my mind, is no: I don't think the Internet is self-aware. And I think the reason, perhaps, is the same reason that I don't think the Earth, the planet, is self-aware, even though it is, as you quite rightly say, a fabulously self-regulating system. I think self-awareness and sentience, and in turn consciousness, need not just highly connected networks; they also need the right architecture.
The point I'm making here, and it's a simple observation, is that our brains, our nervous systems, are not randomly connected networks. They have architecture, and that is an evolved architecture. And it's not only evolved, of course, it's also socially conditioned. I mean, the point is that, as I keep going on about, the only reason you and I can have this conversation is that we share a culture, a cultural environment, which is itself highly evolved. So, I think that the emergence of consciousness, as I've hinted, comes as part and parcel of the emergence of communication, language, and ultimately culture.
I think the reason that the Internet, as it were, is unlikely to be self-aware is that it just doesn't have the right architecture, not that it doesn't have lots of processing and lots of connectivity. It clearly has those, but it's not connected with the architecture that I think is necessary, in the sense that the architectures of animal nervous systems are not random. That's clearly true, isn't it? If you just take one hundred billion neurons and connect them randomly, you will not have a human brain.
Right. I mean, I guess you could say there is an organic structure to the Internet in terms of the backbone and the nodes, but I take your point. So, I guess where I'm going with all of this is: if we make a machine, and let's not even talk about consciousness for a minute, if we make a machine that is self-aware and is sentient in the sense that it can feel the world, how would we know?
Well, I think that's a problem. I think it's very hard to know. And one of the ethical risks, if you like, of AI, and especially of brain emulation, which is in a sense a particular kind of AI, is that we might unknowingly build a machine that is actually experiencing, as it were, phenomenal subjectivity, and, even more worrying, pain. In other words, a thing that is experiencing suffering. And the worst part about it, as you rightly say, is that we may not even know that it is experiencing that suffering.
And then, of course, if it ever becomes self-aware, like if my Roomba all of a sudden is aware of itself, we also run the risk that we end up making an entire digital race of slaves, right? Of beings that feel and perceive the world, which we just build to do our bidding at our will.
Well, yeah. I mean, the ethical question of robots as slaves is a different question, but let's not confuse it or conflate it with the problem of artificial suffering. I'm much less ethically troubled by a whole bunch of zombie robots, in a sense, that are not sentient and conscious, because they have, I won't say zero, but a rather low claim on moral patiency. If they were at all sentient, or if we believed they were sentient, then we would have to treat them with a level of moral patiency that we absolutely do not extend to robots and AIs right now.
When robot ethics, or ethics in AI, comes up, and people look for a real, immediate example that we have to think about—aside from the use of these devices in war—the one that everybody knows is the self-driving car: do I drive off the cliff or run over the person? One automaker has come out and specifically said, “We protect the driver. That's what we do.” As a robot ethicist, how do you approach that problem, just that single, isolated, real-world problem?
Well, I think the problem with ethical dilemmas, particularly the trolley problem, is that they’re very, very rare. I mean, you have to ask yourself, how often have you and I, I guess you drive a car, and you may well have been driving a car for many years, how often have you faced a trolley problem? The answer is never.
Three times this week. [Laughs] No, you’re entirely right. Yes. But you do know that people get run over by cars.
Sure.
We have to wrestle with the question because it's going to come up in everything else, like medical diagnoses and which drugs you give to which people for which ailments, where rare reactions to medicines may or may not be lethal. It really permeates everything, this assessment of risk and who bears it. Is it fundamentally the programmer? Because one way to look at it is that robots don't actually make any decisions, it's all humans, and so you just follow the coding trail back to the person who decided to do it that way.
Well, what you've just said is true. It's not necessarily the programmer, but it's certainly humans. My view—and I take a very hard line on this—is that humans, not robots, are the responsible agents, and by robots I include AIs. So, however a driverless car is programmed, it cannot be held responsible. I think that is absolutely fundamental, I mean, right now. In several hundred years maybe we might be having a slightly different conversation, but, right now, I take a very simple view: robots and AIs cannot be responsible, only humans.
Now, as for what ethics we program into a driverless car, I think that has to be a societal question. It's certainly not down to the designer, the programmer, or even the manufacturer to decide. So, you're right that when we have driverless cars there will still be accidents, and, hopefully, there will be very few, but still, occasionally, very rarely we hope, people will be killed in car accidents where the driverless car, as it were, did the wrong thing.
Now, what we need is several things. I think we need to be able to find out why the driverless car went wrong, and that really means that driverless cars need to be fitted with the equivalent of an aircraft's flight data recorder, what I call an ethical black box. We're giving a paper on that in the next couple of weeks, called “The Case for an Ethical Black Box.” And we need regulatory structures that oblige manufacturers to fit these black boxes to driverless cars, and that give accident investigators the authority and the power to look at the data in those ethical black boxes and find out what went wrong. [See the illustrative log-record sketch after this answer.]
But, then, even when you have all of that structure in place, which I think we must have, there will still be occasional accidents, and the only way to resolve those is by having ethics in driverless cars, if indeed we decide to have ethics in them at all, which I think is itself not a given; it's a difficult question in itself. But if we did fit driverless cars with ethics, then those ethics need to be decided by the whole of society, so that we collectively take responsibility for the small number of cases where there is an accident and people are harmed.
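As a purely illustrative aside: here is a minimal sketch of what one log record for such an “ethical black box” might look like. The field names, the JSON-lines storage, and everything else in this snippet are assumptions made for illustration, not the design proposed in “The Case for an Ethical Black Box”; a real recorder would need tamper-evident, crash-survivable storage rather than a plain text file.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical log record for a driverless-car "ethical black box".
# Field names are illustrative assumptions, not a published specification.
@dataclass
class BlackBoxRecord:
    timestamp: float        # when the decision was taken (seconds since epoch)
    sensor_summary: dict    # condensed view of what the car sensed at that moment
    decision: str           # the action the control system chose
    rationale: str          # which rule or model output led to that action
    software_version: str   # so investigators can reproduce the behaviour

def append_record(path: str, record: BlackBoxRecord) -> None:
    # Append one record as a line of JSON; real hardware would use
    # tamper-evident, crash-survivable storage.
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

# Example: log a single braking decision.
append_record("blackbox.log", BlackBoxRecord(
    timestamp=time.time(),
    sensor_summary={"lidar_objects": 3, "speed_mps": 12.4},
    decision="emergency_brake",
    rationale="pedestrian detected within stopping distance",
    software_version="0.1-demo",
))
```

The point of such a record is the one made above: an investigator should be able to reconstruct, after the fact, what the car sensed, what it decided, and why.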
Fair enough. I have three final questions for you. The first is about Weizenbaum, who famously made ELIZA, which, for the benefit of the listener, was a computer program in the ‘60s that was a simple chatbot. You would type, “I have a problem,” and it would say, “What kind of problem do you have?” “I'm having trouble with my parents,” “What kind of trouble are you having with your parents?” and it goes on and on like that. [A toy sketch of this pattern-and-response technique follows this exchange.]
Weizenbaum wrote it, or had it written, and then noticed that people were developing emotional attachments to it, even though they knew it was just a simple program. And he kind of did a one-eighty and turned on it all. He distinguished between deciding and choosing, and he said, “Robots should only decide. It's a computational action. They should never choose. Choosing is for people to do.”
What do you think he got right and wrong, and what are your thoughts on that distinction? He thought it was fundamentally wrong for people to use robots in positions that require empathy, because it doesn’t elevate the machine, it debases the human.
Yeah, I mean, I certainly have a strong view that if we do use robots at all as personal assistants or chatbots or advisors or companions, whatever, I think it’s absolutely vital that that should be done within a very strict ethical framework. So, for instance, to ensure that nobody’s deceived and nobody is exploited. The deception I’m particularly thinking of is the deception of believing that you’re actually talking to a person, or, even if you realize you’re not talking to a person, believing that the system, the machine is caring for you, that the machine has feelings for you.
I certainly don’t take a hard line that we should never have companion systems, because I think there are situations where they’re undoubtedly valuable. I’m thinking, for instance here, of surrogate pets. There’s no doubt that when an elderly person, perhaps with dementia, goes into a care home, one of the biggest traumas they experience is leaving their pet behind. People I’ve spoken to who work in care homes for the elderly, elderly people with dementia, say that they would love for their residents to have surrogate pets.
Now, it’s likely that those elderly persons may recognize that the robot pet is not a real animal, but, nevertheless, still may come to feel that the robot, in some sense, cares for them. I think that’s okay because I think that the balance of benefit versus, as it were, the psychological harm of being deceived in that way, weighs more heavily in terms of the therapeutic benefit of the robot pet.
But really the point I’m making is, I think we need strong ethical frameworks, guidelines and regulations that would mean that vulnerable people, particularly children, disabled people, elderly people, perhaps with dementia, are not exploited perhaps by unscrupulous manufacturers or designers, for instance, with systems that appear to have feelings, appear to have empathy.
As Weizenbaum said, “When the machine says, ‘I understand,’ it’s a lie, there’s no I there.”
Indeed, yes, exactly right. And I think that rather like Toto in the Wizard of Oz, we should always be able to pull the curtain aside. The machine nature of the system should always be transparent. So, for instance, I think it’s very wrong for people to find themselves on the telephone and believe that they’re talking to a person, a human being, when in fact they’re talking to a machine.
I agree.
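For readers curious about the mechanics Byron describes above: ELIZA-style programs work by matching simple text patterns, reflecting first-person words back as second-person ones, and echoing part of the input as a question. The toy Python sketch below illustrates that general technique; the rules and names here are invented for illustration and are not Weizenbaum's original DOCTOR script.

```python
import re

# Toy, ELIZA-style chatbot: match simple patterns, reflect pronouns
# ("my" -> "your"), and echo part of the input back as a question.
# Illustrative only; not Weizenbaum's original program.

REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"i have a problem", re.I),
     "What kind of problem do you have?"),
    (re.compile(r"i(?:'m| am) having trouble with (.+)", re.I),
     "What kind of trouble are you having with {0}?"),
    (re.compile(r"i feel (.+)", re.I),
     "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    """Return the canned response for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."

print(respond("I have a problem"))                    # What kind of problem do you have?
print(respond("I'm having trouble with my parents"))  # What kind of trouble are you having with your parents?
```

The brittleness is the point: there is no model of the user or the world, only pattern matching, which is why Weizenbaum was so troubled when people formed attachments to it anyway.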
Second question, what about science fiction? Do you consume any in written or movie or TV form that you think, “Ah, that could happen. I could see that future unfolding”?
Oh, lots. Well, I mean, certainly I consume a lot of science fiction, though by no means all of it would I expect or like to see happen. Often the best sci-fi is dystopian, but that is okay, because good science fiction is like a thought experiment, though I like the utopian kind, too. And I rather like the kind of AI utopia in Iain M. Banks's Culture novels—a universe in which there are hugely intelligent, rather inscrutable, but nevertheless rather kindly and benevolent AIs, essentially looking after us poor humans. I kind of like that idea.
And, finally, you’re writing a lot. How can people keep up with you and follow you and get all of your latest thinking? Can you just go through the litany of resources?
Sure. Well, I don’t blog very often, because I’m generally very busy with other stuff, but I’d be delighted if people go to my blog, which is just: alanwinfield.blogspot.com, and also follow me on Twitter. And, again, I’m easy to find. I think it’s just @alan_winfield. And, similarly, there are quite a few videos of talks that I’ve given to be found on YouTube and online generally. And if people want to get in touch directly, again, it’s easy to find my contact details online.
Alright, well thank you. It has been an incredibly fascinating hour and I appreciate your time.
Thank you, Byron, likewise, very much enjoyed it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Beyond social: the rise of the emergent business

The increased speed, complexity, and uncertainty of the business environment today mean that businesses are operating in a world that is fundamentally different from that of only ten years ago.