Voices in AI – Episode 58: A Conversation with Chris Eliasmith


About this Episode

Episode 58 of Voices in AI features host Byron Reese and Chris Eliasmith talking about the brain, the mind, and emergence. Dr. Chris Eliasmith is co-CEO of Applied Brain Research, Inc. and director of the Centre for Theoretical Neuroscience at the University of Waterloo. Professor Eliasmith uses engineering, mathematics and computer modelling to study brain processes that give rise to behaviour. His lab developed the world's largest functional brain model, Spaun, whose 2.5 million simulated neurons provide insights into the complexities of thought and action. Professor of Philosophy and Engineering, Dr. Eliasmith holds a Canada Research Chair in Theoretical Neuroscience. He has authored or coauthored two books and over 90 publications in philosophy, psychology, neuroscience, computer science, and engineering. In 2015, he won the prestigious NSERC Polanyi Award. He has also co-hosted a Discovery channel television show on emerging technologies.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I'm Byron Reese. Today our guest is Chris Eliasmith. He holds the Canada Research Chair in Theoretical Neuroscience. He's a professor with, get this, a joint appointment in Philosophy and Systems Design Engineering and, if that's not enough, a cross-appointment to the Computer Science department at the University of Waterloo. He is the Director of the Centre for Theoretical Neuroscience, and he was awarded the NSERC Polanyi Award for his work developing a computer model of the human brain. Welcome to the show, Chris!
Chris Eliasmith: Thank you very much. It’s great to be here.
So, what is intelligence?
That’s a tricky question, but one that I know you always like to start with. I think intelligence—I’m teaching a course on it this term, so I’ve been thinking about it a lot recently. It strikes me as the deployment of a set of skills that allow us to accomplish goals in a very wide variety of circumstances. It’s one of these things I think definitely comes in degrees, but we can think of some very stereotypical examples of the kinds of skills that seem to be important for intelligence, and these include things like abstract reasoning, planning, working with symbolic structures, and, of course, learning. I also think it’s clear that we generally don’t consider things to be intelligent unless they’re highly robust and can deal with lots of uncertainty. Basically some interesting notions of creativity often pop up when we think about what counts as intelligent or not, and it definitely depends more on how we manipulate knowledge than the knowledge we happen to have at that particular point in time.
Well, you said I like to start with that, but you were actually the first person in 56 episodes I asked that question to. I asked everybody else what artificial intelligence is, but we really have to start with intelligence. In what you just said, it sounded like there was a functional definition, like it is skills, but it’s also creativity. It’s also dealing with uncertainty. Let’s start with the most primitive thing which would be a white blood cell that can detect and kill an invading germ. Is that intelligent? I mean it’s got that skill.
I think it’s interesting that you bring that example up, because people are actually now talking about bacterial intelligence and plant intelligence. They’re definitely attempting to use the word in ways that I’m not especially comfortable with, largely because I think what you’re pointing to in these instances are sort of complex and sophisticated interactions with the world. But at the same time, I think the notions of intelligence that we’re more comfortable with are ones that deal with more cognitive kinds of behaviors, generally more abstract kinds of behaviors. The sort of degree of complexity in that kind of dealing with the world is far beyond I think what you find in things like blood cells and bacteria. Nevertheless, we can always put these things on a continuum and decide to use words in whichever particular ways we find useful. I think I’d like to restrict it to these sort of higher order kinds of complex interactions we see with…
I’m with you on that. So let me ask a different question: How is human intelligence unique in the world, as far as we know? What is different about human intelligence?
There are a couple of standard answers, I think, but even though they’re standard, I think they still capture some sort of essential insights. One of the most unique things about human intelligence is our ability to use abstract representations. We create them all the time. The most ubiquitous examples, of course, are language, where we’re just making sounds, but we can use it to refer to things in the world. We can use it to refer to classes of things in the world. We can use it to refer to things that are not in the world. We can exploit these representations to coordinate very complex social behaviors, including things like technological development as well as political systems and so on. So that sort of level of complex behavior that’s coordinated by abstract symbols is something that you just do not find in any other species on the planet. I think that’s one standard answer which I like.
The other one is that the amount of mental flexibility that humans display seems to outpace most other kinds of creatures that we see around us. This is basically just our ability to learn. One reason that people are in every single climate on the planet and able to survive in all those climates is because we can learn and adapt to unexpected circumstances. Sometimes it’s not because of abstract social reasoning or social skills or abstract language, but rather just because of our ability to develop solutions to problems which could be requiring spatial reasoning or other kinds of reasoning which aren’t necessarily guided by language.
I read, the other day, a really interesting thing, which was the only animal that will look in the direction you point is a dog, which sounds to me—I don’t know, it may be meaningless—but it sounds to me like a) we probably selected for that, right? The dog that when you say, “Go get him!” and it actually looks over there, we’d say that’s a good dog. But is there anything abstract in that, in that I point at something and then the animal then turns and looks at it?
I don’t think there’s anything especially abstract. To me, that’s an interesting kind of social coordination. It’s not the kind of abstractness I was talking about with language, I don’t think.
Okay. Do you think Gallup's mirror test, the red dot, the thing where the animal tries to wipe the dot off its forehead—is that a test that shows intelligence, like the creature understands what a mirror is? "Ah, that is me in the mirror?" What do you think's going on there?
I think that is definitely an interesting test. I’m not sure how directly it’s getting at intelligence. That seems to be something more related to self-representation. Self-representation is likely something that matters for, again, social coordination, so being able to distinguish yourself from others. I think, often, more intelligent animals tend to be more social animals, likely because social interactions are so incredibly sophisticated. So you see this kind of thing definitely happening in dolphins, which are one of the animals that can pass the red dot test. You also see animals like dogs we consider generally pretty intelligent, again, because they’re very social, and that might be why they’re good at reacting to things like pointing and so on.
But it's difficult to say that recognition in a mirror or some simple task like that is really going to let us identify something as being intelligent or not intelligent. I think the notion of intelligence is generally just much broader, and it really has to do with the set of skills—I'll go back to my definition—the set of skills that we can bring to bear and the wide variety of circumstances in which we can use them to successfully solve problems. So when we see dolphins doing this kind of thing – they take sponges and put them on their noses to protect them from spiky animals when they're searching the seabed – that's an interesting kind of intelligence, because they use their understanding of their environment to solve a particular problem. They've also done things like killing spiny urchins and using them to poke eels to get them out of crevices. It's the variety of problems that they've solved, and the interesting and creative ways they've done it, that makes us want to call dolphins intelligent. I don't think it's merely seeing a dot in a mirror that lets us know, "Ah! They've got the intelligence part of the brain." I think it's really a more comprehensive set of skills.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 49: A Conversation with Ali Azarbayejani

In this episode, Byron and Ali discuss AI’s impact on business and jobs.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Ali Azarbayejani. He is the CTO and Co-founder of Cogito. He has 18 years of commercial experience as a scientist, an entrepreneur, and designer of world-class computational technologies. His pioneering doctoral research at the MIT Media Lab in Probabilistic Modeling for 3-D Vision was the basis for his first startup company Alchemy 3-D Technology, which created a market in the film and video post-production industry for camera matchmoving software. Welcome to the show Ali.
Ali Azarbayejani: Thank you, Byron.
I’d like to start off with the question: what is artificial intelligence?
I’m glad we’re starting with some definitions. I think I have two answers to that question. The original definition of artificial intelligence I believe in a scholarly context is about creating a machine that operates like a human. Part of the problem with defining what that means is that we don’t really understand human intelligence very well. We have a pretty good understanding now about how the brain functions physiologically, and we understand that’s an important part of how we provide cognitive function, but we don’t have a really good understanding of mind or consciousness or how people actually represent information.
I think the first answer is that we really don’t know what artificial or machine intelligence is other than the desire to replicate human-like function in computers. The second answer I have is how AI is being used in industry. I think that that is a little bit easier to define because I believe almost all of what we call AI in industry is based on building input/output systems that are framed and engineered using machine learning. That’s really at the essence of what we refer to in the industry as AI.
So, you have a high-concept definition and a bread-and-butter, workaday definition, and that's how you're bifurcating that world?
Yeah, I mean, a lot of people talk about how we're in the midst of an AI revolution. I don't believe, at least in the first sense of the term, that we're in an AI revolution at all. I think we're in the midst of a machine learning revolution, which is really important and really powerful, but I guess what I take issue with is the term intelligence, because most of these things that we call artificial intelligence don't really exhibit the properties of intelligence that we would normally think are required for human intelligence.
These systems are largely trained in the lab and then deployed. When they’re deployed, they typically operate as a simple static input/output system. You put in audio and you get out words. So, you put in video and you get out locations of faces. That’s really at the core of what we’re calling AI now. I think it’s really the result of advances in technology that’s made machine learning possible at large scale, and it’s not really a scientific revolution about intelligence or artificial intelligence.
All right, let's explore that some, because I think you're right. I have a book coming out in the Spring of 2018 which is 20,000 words and is dedicated to the brain, the mind and consciousness. It really tries to wrap around those three concepts. So, let's go through them if you don't mind for just a minute. You started out by saying that with the brain we understand how it functions. I would love to go into that, but as far as I understand it, we don't know how a thought is encoded. We don't know how the memory of your 10th birthday party, or what pineapple tastes like, or any of that is actually encoded. We can't write to it. We can't read from it, except in the most rudimentary sense. So do you think we really do understand the brain?
I think that’s the point I was actually making is that we understand the brain at some level physiologically. We understand that there’s neurons and gray matter. We understand a little bit of physiology of the brain, but we don’t understand those things that you just mentioned, which I refer to as the “mind.” We don’t really understand how data is stored. We don’t understand how it’s recalled exactly. We don’t really understand other human functions like consciousness and feelings and emotions and how those are related to cognitive function. So, that’s really what I was saying is, we don’t understand how intelligence evolves from it, although really where we’re at is we just understand a little bit of the physiology.
Yeah, it's interesting. There's no consensus definition of what intelligence is, and that's why you can point at anything and say, "well that's intelligent." "My sprinkler that comes on when my grass is dry, that's intelligent." The mind is of course a very, shall we say, controversial concept, but I think there is a consensus definition of it that everybody can agree to, which is that it's all the stuff the brain does that doesn't seem, emphasis on seem, like something an organ should be able to do. Your liver doesn't have a sense of humor. Your liver doesn't have an imagination. All of these things. So, based on that definition, and not even getting to consciousness, not even experiencing the world, just these raw abilities, like writing a poem, or painting a great painting, or what have you. You were saying we actually have not made any real progress towards any of that. That's gotten mixed up in this whole machine learning thing. Am I right that you think we're still at square one with that whole project of building an artificial mind?
Yeah, I mean, I don’t see a lot of difference intellectually [between] where we are now from when I was in school in the late 80s and 90s in terms of theories about the mind and theories about how we think and reason. The basis for the current machine learning revolution is largely based on neural networks which were invented in the 1960s. Really what is fueling the revolution is technology. The fact that we have the CPU power, the memory, the storage and the networking — and the data — and we can put all that together and train large networks at scale. That’s really what is fueling the amazing advances that we have right now, not really any philosophical new insights into how human intelligence works.
Putting it out there for just a minute: is it possible that an AGI, a general intelligence, an artificial mind, cannot be instantiated in machinery?
That’s a really good question. I think that’s another philosophical question that we need to wrestle with. I think that there are at least two schools of thought on this that I’m aware of. I think the prevailing notion, which is I think a big assumption, is that it’s just a matter of scale. I think that people look at what we’ve been able to do with machine learning and we’ve been able to do incredible things with machine learning so far. I think people think of well, a human sitting in a chair can sit and observe the world and understand what’s going on in the world and communicate with other people. So, if you just took that head and you could replicate what that head was doing, which would require a scale much larger than what we’re doing right now with artificial neural networks, then embody that into a machine, then you could set this machine on the table there or on the chair and have that machine do the same thing.
I think one school of thought is that the human brain is an existence proof that a machine can exist to do the operations of a human intelligence. So, all we have to do is figure out how to put that into a machine. I think there’s a lot of assumptions involved in that train of thought. The other train of thought, which is more along the lines of where I land philosophically, is that it’s not clear to me that intelligence can exist without ego, without the notion of an embodied self that exists in the world, that interacts in the world, that has a reason to live and a drive to survive. It’s not clear to me that it can’t exist, and obviously we can do tasks that are similar to what human intelligence does, but I’m not entirely sure that… because we don’t understand how human intelligence works, it’s not clear to me that you can create an intelligence in a disembodied way.
I’ve had 60-something guests on the show, and I keep track of the number that don’t believe we can actually build a general intelligence, and it’s I think 5. They are Deep Varma, Esther Dyson, people who have similar… more so I think they’re even more explicitly saying they don’t think we can do it. The other 60 guests have the same line of logic, which is we don’t know how the brain works. We don’t know how the mind works. We don’t know how consciousness works, but we do have one underlying assumption that we are machines, and if we are machines, then we can build a mechanical us. Any argument against that or any way to engage it, the word that’s often offered is magic. The only way to get around that is to appeal to magic, to appeal to something supernatural, to appeal to something unscientific. So, my question to you is: is that true? Do you have to appeal to something unscientific for that logic to break down, or are there maybe scientific reasons completely causal, system-y kind of systems by which we cannot build a conscious machine?
I don’t believe in magic. I don’t think that’s my argument. My argument is more around what is the role that the body around the brain plays, in intelligence? I think we make the assumption sometimes that the entire consciousness of a person, entire cognition, everything is happening from the neck up, but the way that people exist in the world and learn from simply existing in the world and interacting with the world, I think plays a huge part in intelligence and consciousness. Being attached to a body that the brain identifies with as “self,” and that the mind has a self-interest in, I think may be an essential part of it.
So, I guess my point of view on this is I don’t know what the key ingredients are that go into intelligence, but I think that we need to understand… Let me put it this way, I think without understanding how human consciousness and human feelings and human empathy works, what the mechanisms are behind that, I mean, it may be simply mechanical, but without understanding how that works, it’s unclear how you would build a machine intelligence. In fact, scientists have struggled from the beginning of AI even to define it, and it’s really hard to say you can build something until you can actually define it, until you actually understand what it is.
The philosophical argument against that would be like “Look, you got a finite number of senses and those that are giving input to your brain, and you know the old philosophical thought experiment you’re just a brain in a vat somewhere and that’s all you are, and you’re being fed these signals and your brain is reacting to them,” but there really isn’t even an external world that you’re experiencing. So, they would say you can build a machine and give it these senses, but you’re saying there’s something more than that that we don’t even understand, that is beyond even the five senses.
I suppose if you had a machine that could replicate atom for atom a human body, then you would be able to create an intelligence. But, how practical would it be?
There are easier ways to create a person than that?
Yeah, that’s true too, but how practical is a human as a computing machine? I mean, one of the advantages of the computer systems that we have, the machine learning-based systems that we call AI is that we know how we represent data. Then we can access the data. As we were talking about before, with human intelligence you can’t just plug in and download people’s thoughts or emotions. So, it may be that in order to achieve intelligence, you have to create this machine that is not very practical as a machine. So you might just come full circle to well, “is that really the powerful thing that we think it’s going to be?”
I think people entertain the question because this question of “are people simply machines? Is there anything that happens? Are you just a big bag of chemicals with electrical pulses going through you?” I think people have… emotionally engaging that question is why they do it, not because they want to necessarily build a replicant. I could be wrong. Let me ask you this. Let’s talk about consciousness for a minute. To be clear, people say we don’t know what consciousness is. This is of course wrong. Everybody agrees on what it is. It is the experiencing of things. It is the difference between a computer being able to sense temperature and a person being able to feel heat. It’s like that difference.
It’s been described as the last scientific question we don’t really know how to ask, and we don’t know what the answer would look like. I put eight theories together in this book I wrote. Do you have a theory, just even a gut reaction? Is it an emergent property? Is it a quantum property? Is it a fundamental law of the universe? Do you have a gut feel of what direction you would look to explain consciousness?
I really don’t know. I think that my instinct is along the lines of what I talked about recently with embodiment. My gut feel is that a disembodied brain is not something that can develop a consciousness. I think consciousness fundamentally requires a self. Beyond that, I don’t really have any great theories about consciousness. I’m not an expert there. My gut feel is we tend to separate, when we talk about artificial intelligence, we tend to separate the function of mind from the body, and I think that may be a huge assumption that we can do that and still have self and consciousness and intelligence.
I think it’s a fascinating question. About half of the guests on the show just don’t want to talk about it. They just do not want to talk about consciousness, because they say it’s not a scientific question and it’s a distraction. Half of them, very much, it is the thing, it’s the only thing that makes living worthwhile. It’s why you feel love and why you feel happiness. It is everything in a way. People have such widely [divergent views], like Stephen Wolfram was on the show, and he thinks it’s all just computation. To that extent, anything that performs computation, which is really just about anything, is conscious. A hurricane is conscious.
One theory is consciousness is an emergent property: just as you are trillions of cells that don't know who you are, and none of them has a sense of humor, you somehow have a distinct emergent self and a sense of humor. There are people who think the planet itself may have a consciousness. Others say that activity in the sun looks a lot like brain activity, and perhaps the sun is conscious, and that is an old idea. It is interesting that all children, when they draw an outdoor scene, put a smiling face on the sun. Do you think consciousness may be more ubiquitous, not unique to humans? That it may kind of be in all kinds of places? Or do you just, at a gut level, think it's a special human [trait], with maybe some other animals you might want to include in that characteristic?
That's an interesting point of view. I certainly see how it's a nice theory; that it's a continuum, I think, is what he's saying: that there's some level of consciousness in the simplest thing. Yeah, I think this is more along the lines of the "it's just a matter of scale" type of philosophy, which is that at a larger scale what emerges is a more complex and meaningful consciousness.
There's a project in Europe you're probably familiar with, the Human Brain Project, which is really trying to build an intelligence through that kind of scale. The counter to it is the OpenWorm project: they've sequenced the genome of the nematode worm, whose brain has 302 neurons, and for 20 years people have been trying to model those 302 neurons in a computer to build, as it were, a digital functioning nematode worm. By one argument they're no closer to cracking that than they were 20 years ago. The scale question has its adherents at both extremes.
Let’s switch gears now and put that world aside and let’s talk about the world of machine learning, and we won’t call it intelligence anymore. It’s just machine learning, and if we use the word intelligence, it’s just a convenience. How would you describe the state of the art? As you point out, the techniques we’re using aren’t new, but our ability to apply them is. Are we in a machine learning renaissance? Is it just beginning? What are your thoughts on that?
I think we are in a machine learning renaissance, and I think we're closer to the beginning than to the end. As I mentioned before, the real driver of the renaissance is technology. We have the computational power to do massive amounts of learning. We have the data and we have the networks to bring it all together and the storage to store it all. That's really what has allowed us to realize the theoretical capabilities of complex networks as we model input/output functions.
We’ve done amazing things with that particular technology. It’s very powerful. I think there’s a lot more to come, and it’s pretty exciting the kinds of things we can do with it.
There’s a lot of concern, as you know, the debate about the impact that it’s going to have on employment. What’s your take on that?
Yeah, I'm not really concerned about that at all. I think that largely what these systems are doing is allowing us to automate a lot of things. I think that that's happened before in history. The concern that I have is not so much about removing jobs, because the entire history of the industrial revolution [is that] we've built technology that has made jobs obsolete, and there are always new jobs. There are so many things to do in the world that there are always new jobs. I think the concern, if there's any about this, is the rate of change.
I think at a generational level, it's not a problem. The next generation is going to be doing jobs that we don't even know exist right now, or that don't exist right now. I think the problems may come within a single generation's transformation, if you start automating jobs that belong to people who cannot be retrained in something else. But I think that there will always be new jobs.
Is it possible that there's a person out there who cannot be retrained to do meaningful work? We've had 250 years of unending technological advance that would have blown the mind of somebody in 1750, and yet we don't have anybody who… it's like, no, they can't do anything. Assuming that you have full use of your body and mind, there's not a person on the planet who cannot in theory add economic value, all the more if they're given technology to do it with. Do you really think there will be people who "cannot be retrained"?
No, I don’t think it’s a “can” issue. I agree with you. I think that people can be retrained and like I said, I’m not really worried that there won’t be jobs for people to do, but I think that there are practical problems of the rate of change. I mean, we’ve seen it in the last decades in manufacturing jobs that a lot of those have disappeared overseas. There’s real economic pain in the regions of the country where those jobs were really prominent, and I don’t think there’s any theoretical reason why people can’t be retrained. Our government doesn’t really invest in that as much as it should, but I think there’s a practical problem that people don’t get retrained. That can cause shifts. I think those are temporary. I personally don’t see long term issues with transformations in technology.
It’s interesting because… I mean, this is a show about AI, which obviously holds it in high regard, but there have been other technologies that have been as transformative. An assembly line is a kind of AI. That was adopted really quickly. Electricity was adopted quickly, and steam was adopted. Do you think machine learning really is being adopted all that much faster, or is it just another equally transformative technology like electricity or something?
I agree with you. I think that it's transformational, but I think it's probably creating as many jobs as it's automating away right now. For instance, in our industry, which is in contact centers, a big trend is trying to automate, basically to digitize, a lot of the communications to take load off the telephone call center. What most of our enterprise customers have found with their contact centers is that the more they digitize, the more their call volume actually goes up. It doesn't go down. So, there's some conflicting evidence there about how much this is actually going to take away from jobs.
I am of the opinion, and I think anyone in any endeavor understands this, that there's always more to do than you have time to do. Automating things that can be automated I generally feel is a positive thing, and putting people to use in functions where we don't know how to automate things, I think, is always going to be an available path.
You brought up what you do. Tell us a little bit about Cogito and its mission.
Our mission is centered around helping people have better conversations. We're really focused on the voice stream, and in particular our main business is in customer call centers, where our technology listens to ongoing conversations, understands what's going on in those conversations from an interactive and relationship point of view, from a behavioral point of view, and gives agents real-time feedback when conversations aren't going well or when there's something they can do to improve the conversation.
That's where we get to the concept of augmented intelligence, which is using these machine-learning-endowed systems to help people do their jobs better, rather than trying to replace them. That's a tremendously powerful paradigm. There are trends, as I mentioned, towards trying to automate these things away, but often our customers find it more valuable to increase the competence of the people doing the jobs, because those jobs can't be completely automated, rather than trying to automate away the simple things.
Hit rewind, back way up with Cogito, because I'm really fascinated by the thesis behind all of this: there's what you say, and then there's how you say it. We're really good with one half of that equation, but we don't apply technology to the other half. Can you tell that story and how it led to what you do?
Yeah, imagine listening to two people having a conversation in a foreign language that you don’t understand. You can undoubtedly tell a lot about what’s going on in that conversation without understanding a single word. You can tell whether people are angry at each other. You can tell whether they’re cooperating or hostile. You can tell a lot of things about the interaction without understanding a single word. That’s essentially what we’re doing with the behavioral analysis of how you say it. So, when we listen to telephone conversations, that’s a lot of what we’re doing is we’re listening to the tenor and the interaction in the conversation and getting a feel for how that conversation is going.
I mean, you’re using “listen” here colloquially. There’s nothing really listening. There’s a data stream that’s being analyzed, right?
Exactly, yeah.
So, I guess it sounds like they’re like the parents [of] Charlie Brown, like “waa, wa waa.” So, it hears that and can figure out what’s going on. So, that sounds like a technology with broad applications. Can you talk about in a broad sense what can be done, and then why you chose what you did choose as a starting point?
It actually wasn't the starting point. The application that originally inspired the company was more of a mental health application. There's a lot of anecdotal understanding that people with clinical depression or depressed mood speak in a characteristic way. So the original inspiration for building the company and the technology was to use it in telephone outreach operations with chronically ill populations that have very high rates of clinical depression and very low rates of detection and treatment of clinical depression. So, that's one very interesting application that we're still pursuing.
The second application came up in that same context, the context of health and wellness call centers, and it is the concept of engagement. A lot of the benefit in healthcare comes from preventative care, so there's been a lot of emphasis in healthcare on helping people quit smoking and have better diets and things like that. These programs normally take place over the telephone, so there are conversations, but they're usually only successful when the patient or the member is engaged in the process. So, we used this sort of speech and conversational analysis to build models of engagement, which would allow companies either to react to under-engaged patients or not to waste their time with under-engaged patients.
The third application, which is what we're primarily focused on right now, is agent interaction, the quality of agent interaction. There's a huge amount of value for big companies that are consumer-oriented, and particularly those that have membership relationships with customers, in being able to provide a good human interaction when there are issues. So, customer service centers… it's very difficult if you have thousands of agents on the phone to understand what's going on in those calls, much less improve it. A lot of companies are really focused on improvement. We're the first system that allows these companies to understand what's going on in those conversations in real time, which is the moment of truth where they can actually do something about it. We allow them to do something about it by giving information not only to supervisors, who can provide real-time coaching, but also to agents directly, so that they can understand when their own conversations are going south and be able to correct that and have better conversations themselves. That's the gist of what we do right now.
I have a hundred questions all running for the door at once with this. My first question is you’re trying to measure engagement as a factor. How generalizable is that technology? If you plugged it into this conversation that you and I are having, does it not need any modification? Engagement is engagement is engagement, or is it like, Oh no, at company X it’s going to sound different than a phone call from company Y?
That's a really good question. In some general sense, an engaged interaction, if you took a minute of our conversation right now, is pretty generalizable. The concept is that if you're engaged in the topic, then you're going to have a conversation which is engaged, which means there's going to be a good back and forth and there's going to be good energy in the conversation and things like that. Now in practice, when you're talking about a call center context, it does get trickier, because every call center has potentially quite different shapes of conversations.
So, one call center may need to spend a minute going through formalities and verification and all of that kind of business, and that part of the conversation is not the part you actually care about; the part you care about is where you're actually talking about a meaningful topic. Whereas another call center may have a completely different shape of a conversation. What we find that we have to do, and where machine learning comes in handy here, is that we need to be able to take our general models of engaged interactions and convert and adapt those in a particular context to understanding engaged overall conversations. Those are going to vary from context to context. So, that's where adaptive machine learning comes into play.
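To give a concrete feel for what "good back and forth and good energy" might look like as a measurable signal, here is a small, purely illustrative sketch in Python. The feature names and the toy call below are invented for this example, not Cogito's actual models or features; a real system would work from the audio itself rather than from hand-annotated turns.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "agent" or "caller"
    start: float   # seconds from the start of the call
    end: float

def engagement_features(turns: list[Turn]) -> dict:
    """Crude, illustrative proxies for an 'engaged' conversation:
    frequent speaker exchanges and balanced talk time between parties."""
    duration = max(t.end for t in turns) - min(t.start for t in turns)
    # Count how often the speaker changes from one turn to the next.
    exchanges = sum(1 for prev, cur in zip(turns, turns[1:]) if prev.speaker != cur.speaker)
    # Total talk time per speaker.
    talk: dict[str, float] = {}
    for t in turns:
        talk[t.speaker] = talk.get(t.speaker, 0.0) + (t.end - t.start)
    balance = min(talk.values()) / max(talk.values())  # 1.0 = perfectly balanced
    return {
        "exchanges_per_minute": 60.0 * exchanges / duration,
        "talk_time_balance": balance,
        "talk_fraction": sum(talk.values()) / duration,
    }

# Tiny made-up call: agent and caller trading short turns.
turns = [
    Turn("agent", 0.0, 6.0), Turn("caller", 6.5, 12.0),
    Turn("agent", 12.5, 15.0), Turn("caller", 15.5, 25.0),
    Turn("agent", 25.5, 30.0),
]
print(engagement_features(turns))
```

The point of the sketch is only that "back and forth" and "balance" are quantifiable; adapting such general features to a particular call center's conversation shape is where the machine learning Ali describes would come in.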
My next question is from person to person how consistent… no doubt if you had a recording of me for an hour, you could get a baseline and then measure my relative change from that, but when you drop in, is Bob X of Tacoma, Washington and Suzie Q of Toledo, do they exhibit consistent traits or attributes of engagement?
Yeah, there are certainly variations among people's speaking styles. You look at areas of the country, different dialects and things like that, and then you also look at different languages, and those are all going to be a little bit different. When we're talking about engagement at a statistical level, these models work really well. So the key, when thinking about product development for these, is to focus on providing tools that are effective at a statistical level. Looking at one particular person, your model may indicate that this person is not engaged, but maybe that is just their normal speaking style; statistically, though, it's generalizable.
My next question is: is there something special about engagement? Could you, if you wanted to tell whether somebody’s amused or somebody’s intrigued or somebody is annoyed or somebody’s outraged? There’s a palette of human emotions. I guess I’m asking, engagement like you said, there are not so much tonal qualities you’re listening for, but you’re counting back and forths, that’s kind of a numbers [thing], not a…. So on these other factors, could you do that hypothetically?
Yeah, in fact, our system is a platform for doing exactly that sort of thing. Some of those things we’ve done. We build models for various emotional qualities and things like that. So, that’s the exciting thing is that once you have access to these conversations and you have the data to be able to identify these various phenomena, you can apply machine learning and understand what are the characteristics that would lead to a perception of amusement or whatever result you’re looking for.
Look, I applaud what you're doing. Anybody who can make phone support better has my wholehearted support, but I wonder if where this technology is heading is kind of an OEM thing, where it's put into caregiving robots, for instance, which need to learn how to read the emotions of the person they're caring for and modulate what they say. It's like a feedback loop for self-teaching, just that use case: the robot caregiver that uses this [knows] she's annoyed, he's happy, or whatever, as a feedback loop. Am I way off in sci-fi land, or is that something that could be done?
No, that’s exactly right, and it’s an anticipated application of what we do. As we get better and better at being able to understand and classify useful human behaviors and then inferring useful human emotional states from those behaviors, that can be used in automated systems as well.
Frequent listeners to the show will know that I often bring up Weizenbaum and ELIZA. The setup is that Weizenbaum, back in the 60s, made this really simple chat bot where you would say, "I don't feel good today," and it would say, "Why don't you feel good today?" "I don't feel good today because of my mother." "Why does your mother make you not feel good?" It's this really basic thing, but what he found was that people were connecting with it, and this really disturbed him, so he unplugged it. He said that when the computer says "I understand," it's just a lie. There's no "I," which it sounds like you would agree with, and there's nothing that understands anything. Do you worry that that is a [problem]? Weizenbaum would be: "that's awful." If that thing is manipulating an old person's emotions, that's just a terrible, terrible thing. What would you say?
I think it's a danger. Yeah, I think we're going to see that sort of thing happen for sure. I think people look at chat bots and say, "Oh look, that's an artificial intelligence, that's doing something intelligent," and it's really not, as ELIZA proves. You can just have a little rules-based system on the back and type stuff in and get stuff out. A verbal chat bot might use speech-to-text as an input modality and text-to-speech as an output modality, but also have a rules-based unit on the back, and it's really doing nothing intelligent, but it can give the illusion of some intelligence going on because you're talking to it and it's talking back to you.
So, I think yeah, there will be bumps along that road for sure, in trying to build these technologies that, particularly when you’re trying to build a system to replace a human and trying to convince the user of the system that you’re talking to a human. That’s definitely sketchy ground.
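For readers curious how little machinery an ELIZA-style, rules-based chat bot actually needs, here is a minimal illustrative sketch in Python. The patterns and canned responses are invented for this example and are not Weizenbaum's original script; the point is simply that a handful of regular-expression reflection rules, with no model of meaning behind them, can produce the illusion of understanding that Ali describes.

```python
import re

# A few ELIZA-style reflection rules: match a pattern, echo it back as a question.
RULES = [
    (re.compile(r"i don't feel (.*) today", re.I), "Why don't you feel {0} today?"),
    (re.compile(r"i feel (.*) because of (.*)", re.I), "Why does {1} make you feel {0}?"),
    (re.compile(r"my (.*) (annoys|bothers) me", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reflected question, or a stock prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I don't feel good today"))         # Why don't you feel good today?
print(respond("I feel bad because of my mother"))  # Why does my mother make you feel bad?
```

Wrapping the same lookup between a speech-to-text front end and a text-to-speech back end would give you the "verbal chat bot" architecture mentioned above, still with nothing intelligent in the middle.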
Right. I mean, I guess it's forgivable that we don't know; it's all new. It's all stuff where we're having to kind of wing it. We're coming up towards the end of our time. I just have a couple of closing questions, which are: Do you read science fiction? Do you watch science fiction movies or science fiction TV? And if so, is there any view of the future, any view of AI or anything like that, that you look at and think, yeah, that could happen someday?
Yeah, it's really hard to say. I can't think of anything. Star Wars of course used very anthropomorphized robots, and if you think of a system like HAL in 2001: A Space Odyssey, you could certainly simulate something like that. If you're talking about information, being able to talk to HAL and have HAL look stuff up for you and then talk back to you and tell you what the answer is, that's totally believable. Of course the twist in 2001: A Space Odyssey is that HAL ended up having a sense of its own self and decided to make its own decisions. Yeah, I'm very much rooted in the present, and there's a lot of exciting things going on right now.
Fair enough. It’s interesting that you used Star Wars, which of course is a long time ago, because somehow or another you think the movie would be different if C3PO were named Anthony and R2D2 was named George.
Yeah.
That would just take on a whole different… giving them names is even one step closer to that whole thing. Data in Star Trek kind of walked the line. He had a name, but it was Data.
It's interesting actually to look at the difference between C3PO and R2D2. You look at C3PO and it has the form of a human, and you can ask the question: "Why would you build a robot that has the form of a human?" R2D2 is a robot which does, or could potentially do, exactly what C3PO does, in the form of a whatever – cylinder. So, it's interesting to look at the contrast and why they imagined two different kinds of robots: one which is very anthropomorphized, and one which is very mechanical.
Yeah, you're right, because the decision not to give R2 speech, it's not like he didn't have enough memory, like he needed another 30MB of RAM or something; that also was clearly deliberate. I remember reading that Lucas originally wasn't really going to use Anthony Daniels to voice it. He was going to get somebody who sounded like a used-car salesman, kind of fast-talking and all that, and that's how the script was written. I'm sure it's a literary device, but like a lot of these things, I'm a firm believer that what comes out in science fiction isn't predicting the future; it kind of makes it. Uhura had a Bluetooth device in her ear. So, whatever the literary imagining of it is, that's probably going to be what the scientific manifestation of it is, to some degree.
Yeah, the concept of the self-fulfilling prophecy is definitely there.
Well, I tell you what, if people want to keep up with you and all this work you’re doing, do you write, yak on Twitter, how can people follow what you do?
We’re going to be writing a lot more in the future. Our website www.cogitocorp.com is where you’ll find the links to the things that we’re writing on, AI and the work we do here at Cogito.
Well, this has been fascinating. I’m always excited to have a guest who is willing to engage these big questions and take, as you pointed out earlier, a more contrarian view. So, thank you for your time Ali.
Thank you, Byron. It’s been fun, and thanks for having me on.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Would Conscious Computers Have Rights?

The following is an excerpt from The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
If a computer is sentient, then it can feel pain. If it is conscious, then it is self-aware. Just as we have human rights and animal rights, as we explore building conscious computers, must we also consider the concept of robot rights? In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of conscious computers.


A conscious computer would be, by virtually any definition, alive. It is hard to imagine something that is conscious but not living. I can’t conceive that we could consider a blade of grass to be living, and still classify an entity that is self-aware and self-conscious as nonliving. The only exception would be a definition of life that required it to be organic, but this would be somewhat arbitrary in that it has nothing to do with the thing’s innate characteristics, rather merely its composition.
Of course, we might have difficulty relating to this alien life-form. A machine's consciousness may be so ethereal as to just be a vague awareness that occasionally emerges for a second. Or it could be intense, operating at such speed that it is unfathomable to us. What if by accessing the Internet and all the devices attached to it, the conscious machine experiences everything constantly? Just imagine if it saw through every camera, all at once, and perceived the whole of our existence. How could we even relate to such an entity, or it to us? Or if it could relate to us, would it see us as fellow machines? If so, it follows that it may not have any more moral qualms about turning us off than we have about scrapping an old laptop. Or, it might look on us with horror as we scrap our old laptops.
Would this new life-form have rights? Well, that is a complicated question that hinges on where you think rights come from. Let’s consider that.
Nietzsche is always a good place to start. He believed you have only the rights you can take. People claim the rights that they have because they can enforce them. Cows cannot be said to have the right to life because, well, humans eat them. Computers would have the rights they could seize, and they may be able to seize all they want. It may not be us deciding to give them rights, but them claiming a set of rights without any input from us.
A second theory of rights is that they are created by consensus. Americans have the right of free speech because we as a nation have collectively decided to grant that right and enforce it. In this view, rights can exist only to the extent that we can enforce them. What rights might we decide to give to computers that are within our ability to enforce? It could be life, liberty, and self-determination. One can easily imagine a computer bill of rights.
Another theory of rights holds that at least some of them are inalienable. They exist whether or not we acknowledge them, because they are based on neither force nor consensus. The American Declaration of Independence says that life, liberty, and the pursuit of happiness are inalienable. Incidentally, inalienable rights are so fundamental that you cannot renounce them. They are inseparable from you. You cannot sell or give someone the right to kill you, because life is an inalienable right. This view of fundamental rights believes that their inalienable character comes from an external source, from God, nature, or that they are somehow fundamental to being human. If this is the case, then we don’t decide whether the computer has rights or not, we discern it. It is up to neither the computer nor us.
The computer rights movement will no doubt mirror the animal rights movement, which has adopted a strategy of incrementalism, a series of small advances towards a larger goal. If this is the case, then there may not be a watershed moment where suddenly computers are acknowledged to have fundamental rights—unless, of course, a conscious computer has the power to demand them.
Would a conscious computer be a moral agent? That is, would it have the capacity to know right from wrong, and therefore be held accountable for its actions? This question is difficult, because one can conceive of a self-aware entity that does not understand our concept of morality. We don’t believe that the dog that goes wild and starts biting everyone is acting immorally, because the dog is not a moral agent. Yet we might still put the dog down. A conscious computer doing something we regard as immoral is a difficult concept to start with, and one wonders if we would unplug or attempt to rehabilitate the conscious computer if it engages in moral turpitude. If the conscious computer is a moral agent, then we will begin changing the vocabulary we use when describing machines. Suddenly, they can be noble, coarse, enlightened, virtuous, spiritual, depraved, or evil.
Would a conscious machine be considered by some to have a soul? Certainly. Animals are thought to have souls, as are trees by some.
In all of this, it is likely that we will not have a collective consensus as a species on many of these issues, or if we do, it will be a long time in coming, far longer than it will take to create the technology itself. Which finally brings us to the question “can computers become conscious?”


To read more of The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Interview with Christof Koch

Christof Koch is an American neuroscientist, best known for his work on the neural basis of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, and from 1986 to 2013 he was a professor at California Institute of Technology (Caltech). Koch has published extensively, and his most recent book is Consciousness: Confessions of a Romantic Reductionist.
What follows is an interview between Christof Koch and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence, consciousness and the brain.


Byron Reese: So people often say, “We don’t know what consciousness is,” but that’s not really true. We know exactly what it is. The debate is around how it comes about, correct?
Christof Koch: Correct.
So, what is it?
It's my experience; it's the feeling of life itself; it's my pain, my pleasure, my hopes, my aspirations, my fears. All of that is consciousness.
And it’s described as the last major scientific question that we know neither how to ask nor what the answer would look like, but I assume you disagree with that?
I disagree with some of that. It's one of the two or three big questions: Why is there anything at all? What's the origin of life? And, yes, how does consciousness arise out of matter?
And what would that answer look like, because people often point to some part of the brain or some aspect of it and say, “that’s where it comes from,” but how would you put into words why it comes about?
It's a very good question. So, having the answer to which bits and pieces of the brain are important for consciousness is critical to understanding what happens in the emergency room when you have a patient who is heavily brain damaged and you have no idea whether she is actually there, whether anybody is home. It's going to be of immense practical and clinical importance for babies, or for anencephalic babies, or at the end of life, but of course that doesn't answer the question: what is it about this particular bit or piece of the brain that gives rise to consciousness? So beyond finding that, we need a fundamental theory of consciousness that tells us what type of physical system, whether evolved or artificial, under what conditions, can give rise to feelings, because those feelings aren't there. If you look at the fundamental theories of physics, quantum mechanics and general relativity, there's no consciousness there. If you look at the periodic table of chemistry, there's no consciousness there. If you look at the endless ATGC chart of our genes, there's no consciousness there. Yet every morning we wake up to a world full of sounds and sights and smells and pains and pleasures. So that's the challenge: how does physics ultimately give rise to conscious sensations?
Or, some might say, "whether physics gives rise to it?"
Well, physics does give rise to it in the sense that my brain is a piece of the furniture of the universe. It's subject to the same physical laws as everything else. There isn't a magical type of law that only applies to brains but doesn't apply to anything else, so somehow physical systems, or at least a subset of physical systems, give rise to consciousness. The classical answer, at least in the West, was forever, for a very very long time, that there's a special substance, the thinking substance, res cogitans, or what people today call the soul, and only certain types of systems have it, and only humans have the soul, and the soul somehow mediates the mind. But of course we [say], sort of logically, that's not very coherent, and there's no empirical evidence for it… how would this soul interact with the brain, where's the soul supposed to be, where does it come from, where does it go to? It's all incoherent, although of course the majority of people still believe in some version of this. But as scientists, as philosophers, we know better. There isn't any such soul, so it comes back to the question, "What is it about the physics of the world that gives rise to feelings, to sensations, to experience?"
Well, I want to tackle that head-on in just a moment, but let's start with you, because you've been dealing with this question for a long time, and it's fair to say your understanding of it has evolved over time. Can you walk through, like, the very first time you thought about this, as far back as you remember, and then what you thought, what early theories you offered up, and how you have evolved those over time?
Sure, so I grew up in a devout Roman Catholic family, and I was devout, and of course you grow up to believe there's this soul; the real Christof is sort of this spirit that's hovering over the waters of my brain, and every now and then that soul touches the waters of my brain and makes me do things, and when I'm thinking about, for instance, whether I should sin or not, there's this absolute freedom to choose one or the other, and then my soul does one thing or the other. But that was on Sundays. During the day and the rest of the week, I taught science, I thought about the world in scientific terms, and then you're left… well, wait a minute, you begin to think about it in more detail and that just can't work. Because, most importantly, where is the soul? How does it interact with a brain? And so then you begin to think about scientific solutions. And then I encountered, years later, Francis Crick, the co-discoverer of DNA, and he and I started up this very fruitful collaboration, where we wanted to take the problem of consciousness away from pure philosophy, where it has been vested over the past 2,000 years (which is great, you've had some of the smartest people of humanity, but they haven't really advanced the field that much), and take it into an empirical operation that we scientists can work on. And so we came up with this idea of the neural correlates of consciousness. It's a fairly obvious thing; the idea is that whenever I'm conscious of something, whether it's your face, for instance, I see your face or hear your voice, or I have a pain or I have a memory, there must be some mechanism in my brain (we know it's not the heart, we know it's in the brain), some mechanism in the brain that's responsible for that. And it's a two-way communication between this mechanism and my feelings, in the sense that if I artificially activate this neural correlate of consciousness, abbreviated as NCC, if I trigger it, for example, with an electrode that I put into the brain, say during brain surgery, I should get that percept: even though there isn't anybody out there, I still see a face. Or conversely, if this part of the brain gets removed by a stroke or a virus or a bullet or something, I shouldn't be able to have that percept anymore.
Now this is also a big scientific, empirical program that's going on in many places throughout the world, where people are trying to look for these neural correlates of consciousness in the brain. But then of course somebody pointed out to me, he asked me a very simple question: "Well, in principle, suppose your program has run its course and 50 years later we know exactly that every time you activate these neurons in this particular mode, projecting to this other part of the brain, you become conscious. How is that different from Descartes' pineal gland?" Because famously, centuries ago, Descartes said the place where the brain meets this spooky stuff, this thinking substance, is the pineal gland, and today we all laugh at that, right? Well, how is that different from saying, "Well, it's made up of neurons that oscillate at 40 hertz"? It's just much more detailed, but ultimately it still seems like magic. Why should activity in these neurons give rise to conscious sensation? And at that point I really thought, what we need is a fundamental theory that tells us, independent of any particular mechanism, what it is about a mechanism that can give rise to consciousness. And so here we are, 20 years later.
And so talk about IIT?
So the most promising theory of consciousness, in my personal opinion and in the opinion of many observers of the field, is integrated information theory, due to the Italian-American psychiatrist and neuroscientist Giulio Tononi. And it starts by asking, "Well, what is a conscious experience?" A conscious experience exists for itself. In other words, it doesn't depend on anybody else, it doesn't depend on my parents or you or any observer; it just exists for itself. It has particular properties: it's very definite, either I have a conscious experience or I don't. It's one, it's only one at any given point in time, and it has parts. Like, if I look out at the world, I can see you over here, over there something else, and there's an above and a below and a close by and a far away, and all those notions of space and other sensory qualities. And so then let's look for a physical mechanism, or first an abstract mathematical formulation of a mechanism, that instantiates these key properties of consciousness. And so the theory says that ultimately, consciousness is the causal power of the system upon itself.
So let me unpack that a little bit. Well, firstly, let me repeat it. The idea is that consciousness ultimately is the ability of any system, like my brain, to influence its immediate future and to be influenced by its immediate past; it has causal power. Not upon others, that's what physics deals with: if I have an electric charge, I have attraction or repulsion of other things. It's power upon itself. The brain is a very complex system, and its current state influences its future state, and its past state influences its current state. And the claim is that any system that has intrinsic causal power feels like something from the inside. Physics tells us how objects appear from the outside, and this thing, intrinsic cause-effect power, tells us what it feels like to be that system from the inside. So physics describes the world from the outside perspective, from the third-person perspective of an observer. Integrated information, cause-effect power, tells me what it is to be a system from the inside. And the theory has this number called phi that tells you how conscious the system is, how much intrinsic cause-effect power it has, how irreducible it is; that's another way of looking at it. Consciousness is a property of a whole, and how much that whole really is a whole, how irreducible it is, is quantified by this number phi. If phi is zero, you don't exist; there's no consciousness, the system doesn't exist as a whole. The bigger the phi, the more conscious the system is. And the theory delivers, at least in principle for any system, whether it's a brain or a computer chip or a molehill or an ant or anything else, a recipe, an algorithm, for how you can determine, for a particular system in a particular state, whether it's conscious and how conscious it is, by computing phi. So that's where we are today.
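To make the flavor of that concrete, here is a minimal toy sketch in Python of the "whole versus cut-apart parts" comparison that phi is built on. It is emphatically not Tononi's actual algorithm: real phi calculations search over all partitions and use specific cause-effect repertoires and distance measures. The two-node copy network, the uniform prior over past states, and the noise-injection cut below are simplifying assumptions chosen only to show why a tightly coupled whole can specify something that its isolated parts cannot.

```python
import itertools
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy two-node network: each node copies the *other* node's previous state.
def step(a, b):
    return b, a

states = list(itertools.product([0, 1], repeat=2))   # all joint states (a, b)

# Joint distribution P(past, present) for the whole system, uniform prior over pasts.
whole = np.zeros((4, 4))
for i, past in enumerate(states):
    whole[i, states.index(step(*past))] = 1.0 / 4
mi_whole = mutual_information(whole)   # how much the present specifies about the past

# Cut the system into {A} and {B}: each part keeps only its own past,
# and the other part's past is replaced by an independent fair coin (noise).
def part_mi(part):          # part = 0 for node A, 1 for node B
    joint = np.zeros((2, 2))
    for past in states:
        for noise in (0, 1):
            cut_past = list(past)
            cut_past[1 - part] = noise
            present = step(*cut_past)[part]
            joint[past[part], present] += 1.0 / 8
    return mutual_information(joint)

mi_parts = part_mi(0) + part_mi(1)
print(f"whole: {mi_whole:.1f} bits, parts: {mi_parts:.1f} bits, "
      f"crude 'phi': {mi_whole - mi_parts:.1f} bits")
```

In this toy case the whole system's present pins down its past completely (2 bits) while each cut-off part pins down nothing (0 bits), so the crude "phi" is 2 bits: the system is irreducible to its parts in roughly the sense described above, even though the example is far simpler than anything IIT is actually applied to.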
So it’s a form of panpsychism?
One of the consequences of integrated information theory is that it says consciousness is much more widespread than we'd like to believe. It is probably present in most of the metazoa, most animals; it may even be present in very simple systems. Even a bacterium may feel like something, that's what it says. A single paramecium, for instance, a single protozoan, a single bacterium, is already a very complicated system, vastly more complicated than anybody has ever simulated, right? We don't have a single simulation today, in the world, of a single cell at the molecular level; it's way too complex for us to do right now. But the theory says yes, even this simple system feels like a tiny bit…
What about non-biological systems though?
In principle the theory is agnostic. It just talks about causal power, so any system that has causal power upon itself, is in principle, conscious.
So is the sun conscious?
Well, okay so that’s a very good question. The sun is not conscious I believe, at the level of the sun, because, so consciousness really requires… It says that the system has to be integrated and highly differentiated as a whole; so the system has to be able to influence its whole. The sun is so big that it’s very difficult to understand how propagations within the sun would exceed any time more than a few millimeters, given the magnetic hypo dynamics of the corona atmosphere of the sun. So any system, you can always ask the question, is that as a whole conscious, as many people have asked in the West and in Eastern tradition. The sun is unlikely to be conscious, just like, for example a sand hill, is very unlikely to be conscious, because if you look at the individual sand particles, they only interact with each other over very, very short distances. You don’t have two sand particles that ever, let’s say, an inch apart, they don’t interact anymore, only very, very weakly. Just like, for instance, you and me, you’re conscious, I’m conscious, there isn’t something that’s right now, that feels to be a Byron-Christof, although we do interact, right, we clearly talk to each other, but your brain has a particular amount of integrated information. My brain has a particular amount of integrated information. There is a tiny bit of integrated information among us, but the theory says, the only systems that are conscious, are local maximum. It’s like many physical systems, it has this extreme on principle, it said, “only a system that has maximum cause-effect power is conscious.” Therefore, the integrated information within my brain is much more tightly integrated given the massive interconnection within my brain, and the very few bits that we exchange sort of every second, given the speed of verbal communication. So that’s why you’re conscious, I’m conscious, but there isn’t an uber consciousness, there isn’t a gestalt that sort of consists of you and me.
But do you have a sense, if you were a betting man: while you extend this sort of consciousness to all of these systems, are humans somehow more conscious than an ant?
Yes, there’s no question…
So what is it about humans? In fact, could you name something that hypothetically could be more conscious than a human?
Yes, in principle you can imagine other physical systems…
No, I mean something in the real world. And what is it about us, back to this: what's special about us that gives us supercharged consciousness? Because our brain isn't that much different than an ape brain…
But it’s bigger.
Right, but only by a few percent.
Well, by a factor of three. But beyond just size… in terms of local interactions, we haven't done enough microanatomy to be able to see whether a little grain of ape brain is really fundamentally different from a little piece of human brain. Certainly by size…
But then the Beluga whale would be more conscious than us?
Well, so that is one of the challenges. If we look at the brains of some mammals that made it back to the sea, their brains are indeed bigger than ours, and it may be, it's very difficult to know right now, but it may be that in some sense they are more conscious of their environment than we are. But they haven't developed the ability to talk about it in the way we have, so it's very difficult for us to test that right now. But it's not impossible, and it's an important question that ultimately you can test.
It just, it feels like you have a world full of all these objects…
These conscious entities, yes indeed. The universe is partly filled with conscious entities.
But somehow we appear, and I understand your caveat that that might not actually be the case, but we appear to be the most conscious thing.
Well because we are eloquent.
Right.
And other animals, by and large, are not nearly as eloquent. My dog, I can communicate with my dog, but only in a limited way; you know, I know the position of his back, how he wags his tail, his ears, etc., but it's low grade… and also my dog doesn't have an abstract representation of Charles Darwin, or evolution, or God, or something like that. So yes, by and large it appears, at least on planet Earth, that it's not unlikely that we, Homo sapiens, are the most conscious creature around. We live in a world with other conscious entities. Now, this is not an unusual belief. The majority of the planet's population believes that there are lots of other conscious minds. It's really only in the West that we have this belief in human exceptionalism, that somehow we are radically different from anything else in nature. It's not a universal belief.
No, but I guess one would say, if you compare our DNA to an ape, as an example, the amount that’s different is very small.
Correct.
And of the stuff that’s different, a bunch of that may not manifest itself. It may not do anything, and that the amount of code different between us and an ape is trivially small, and yet, an ape isn’t 99% as conscious as I am, or at least it doesn’t feel that way to me.
Remember, the code that's in our DNA is only about 30 MB if you compress it, which is not a lot, and as you pointed out, it's more or less the same in an ape; in fact it's more or less the same in some mammals. But let's not confuse the amount of information in the blueprint with the actual information in the final organism as a whole.
I’ve heard an older interview of yours where you were asked if the internet was conscious. And you said, “it may have some amount of consciousness,” would you update that answer?
Well, in the meantime the internet has become a whole lot more complex, of course, but I don't see any behavioral evidence of consciousness. It has a very different architecture: it's not point-to-point, it has packet switching, so it's quite different from the way our brain is wired, and it's not easy to actually estimate how conscious it is. Right now I'd probably say it's not very conscious, based on what I know about it today, but I may be wrong, and it certainly could change in the future. Because if you think about it, certainly in terms of its components, the internet has vastly more transistors. The internet taken as a whole has something like 10 billion nodes, and each of those nodes has on the order of 10 to the 10 or 10 to the 11 transistors, so if you look at it as a whole, it's bigger than a single human brain. But it's wired up and interconnected in very different ways, and connectivity, this is what integrated information tells us, the way components are wired up, really makes all the difference. If you take the same components but you wire them up randomly, or in the wrong ways, you might get very little consciousness. It really matters.
What about the Gaia hypothesis, do you think that the Earth and all of its systems, if they function as a whole, if they are self-regulating to some degree, then it’s influencing itself and so could the Earth as a whole be conscious, and all of its living systems?
Unlikely, for the same reason: integrated information says consciousness is always a local maximum of intrinsic cause-effect power. In fact, this criticism has been made by the American philosopher John Searle. He said, "Well, IIT seems to predict that America is conscious as a whole. There are 310 million Americans, each one of them is conscious, at least when they're not sleeping, etc. So how do you rule out that there isn't an America that is a conscious entity?" Well, the theory has a very simple principle: a local cause-effect maximum. You're conscious, I'm conscious, but unless we use some interesting technology, and we can return to that point in a little bit, there isn't anything that it is like to be the two of us together. There are four of us in this room, but there isn't a group consciousness; there isn't anything it feels like to be the group of the four of us sitting around here, nor is there anything it is like to be America.
So, what would be your criticism of the old Chinese nation problem, which says you take a country like China, one billion plus people, and you give everybody a phone book, and they can call each other and relay messages to each other, and that eventually…
Okay, let’s get to something much more concrete, I find more interesting… Let’s take a technology, let’s call it bridging, brain bridging, okay? Let’s say brain bridging allows me directly with some future technology to wire up some of my neurons to some of your neurons. Okay, so let’s do that in the visual thing. So now my visual brain has access to some of what you see, so for instance I now see a ghostly image of what I see across the usual world, and now I sort of ghostly super-impose, I see a little bit of what you see, right now you’re looking at me, so I see me ghostly reflected. However, the theory says, until the integrated information between the system or your brain/my brain, and the spring bridging, increases the above integrated information with my brain or within your brain. There’s still you, and there’s still me. You are still a conscious entity with your own memory and I’m still a conscious entity, Christof. Now, I keep on increasing the bandwidth of this brain bridge. At some point the theory makes a very clear prediction: when the integrated information in this new system, that has now 2 brains exceeds the integrated information in either your or my brain, at that point I will die, Christof will die, Byron will die and there will be a new entity, a new single entity that consists of you and me. It’ll be a single thing, it’ll be a single mind that has some of your memories and some of my memories, it’ll have 2 brains, 4 hemispheres, 4 eyes, 4 ears.
And you know what, the inverse has happened in surgery; it's called split brain. In split brain, I take a normal brain, I mean these brains are not normal, they're not healthy, but for the sake of argument let's assume it's a normal brain, and I cut it in the midline, where there are 200 million fibers across the corpus callosum that link the left brain with the right brain. I cut it, and what's the empirical evidence? I have two minds inside one skull. So here I'm just saying, well, let's just do it using technology: we build a sort of artificial corpus callosum between your brain and my brain. And so, in principle, there will be this technology that allows us, maybe even in large groups, to merge. We can take all four people here, we can interconnect us using this brain bridging, and then there will truly be a single mind. Now that's a cool prediction. And you could probably start doing that in mice in the next 10 years or so. It's a very specific prediction of the theory. That's the advantage: once you go from philosophy to very concrete theories, you can test them, and then you can think about technology to implement and test them.
Think about two lovers, think about Tristan and Isolde, right? In the opera they sing that they don't want to be Tristan and Isolde anymore, they want to be this single entity. In the act of love-making, and that's the tragedy of our lives, you're still always you and she's always she, no matter how close you are, even though your bodies interpenetrate; you're still you and she's still her. But with this technology you would overcome that; there would be only a single mind. Now, I don't know how it would feel. You might also get all sorts of pathologies, because your brain has always been your brain and my brain has always been my brain, and suddenly there's this new thing. You could probably get what you get in split brain, where one body does something different from the other body, these conflicts that you see in split brain after the operation, this so-called "alien hand syndrome"… But at least conceptually, this is what the theory predicts.
I’ll ask you one more hypothetical on things whether they’re conscious or not, what about plants, how would you apply IIT to a tree?
It’s a very good question. I don’t know the answer. I’ve thought a little bit about it, of course there are now people who claim that plants, flowers and trees have much more complex information processing going on, at a slower scale. They clearly didn’t evolve to move around, they clearly don’t act on the timescale of seconds. It may well be possible that at least some non-animal organisms like plants, also that it feels like something to be them, that’s what consciousness is, it feels like something to be you, we can’t rule it out. Now our intuition says, “Well that’s ridiculous,” but our intuition also says, “The planet can’t be round, because people obviously would fall off,” people have used this argument for hundreds of years, but the person on the antipode is going to fall off the planet. So we know planets can’t be round, “we know whales are fish, they smell like fish, they’re in the water, they’re not mammals.” So we’ve all sorts of intuition that then science tells us, well actually these intuitions are wrong.
So let’s think through the ethical implications of that, if people are conscious, and because people are conscious they can feel pain, and because they can feel pain, we deem that they have certain rights. You can’t abuse animals because, of course up until recently people didn’t believe animals necessarily could feel pain, up until the nineties. And so, we say “no, no,” you can’t abuse animals, because animals can feel pain. Well according to you, everything can… well not everything, but almost everything can feel pain. Does that (a) imply everything has some right not to be hurt, does a tree have some right not to be cut down; and part (b), does it not undermine the very notion of human rights, because if we’re just another conscious thing, and everything else, and whales may be more so and fish may be, and this may be and that may be, then there really isn’t anything wrong with torturing people or what have you, because everything’s conscious, of course everything.
Okay, on the first point, I don't know: having consciousness doesn't automatically imply that you have the capability to feel pain, to experience pain. It could be that all a creature has are pleasure centers; for them the entire life is just a ride of pleasures, just one orgasm after the other. So having consciousness is not the same as having conscious experience of pain; pain is a subset of conscious experience. Second of all, even as humans we have rights, but of course very often those rights clash. "Thou shalt not kill." But there's capital punishment, and there's abortion, and then there is homicide, and then there is war, where I can legally kill other people, right? So these rights are always a tradeoff, as are other rights, and it's the same thing with consciousness, yes. There's no question that certainly all mammals are conscious, right? Birds are conscious, most of the complex fish are conscious, and so one consequence is maybe we shouldn't eat them. So ever since I had this realization, I don't eat the flesh of creatures anymore, for that very reason. Now, once again, it's a tradeoff: I'm not going to starve to death; if the only thing there were a piece of dead flesh, a steak, that I could eat to survive, I would eat it. But given that we have choices, I think we should act on those choices, and yes, if it's true, the moral circle becomes larger. But this has happened over the last 2,000 years. The moral circle, the circle of beings accorded special privileges, at first only included Greek men, alright, and then we extended it to some other men around the periphery of the Mediterranean, and then we thought about women, and then about Africans and African Americans and people who look, at least superficially, very different from us. Right now, as you may well know, there's a movement to accord at least great apes certain rights, because yes, they are our cousins, our distant cousins, and we shouldn't hunt them and eat them for bush meat.
That’s maybe addressing a slightly different question I’m asking. I’m saying, if the circle eventually becomes everything, then the circle becomes meaningless right? If it’s like, “no, no, you can’t eat plants either, and then you can’t cut a sheet of paper or…”
No, no, because the theory says, not every object is conscious, most certainly not. A sheet of paper for example, the interactions…
Not a sheet of paper, I shouldn’t have said that one, but you extended it to plants…
The big question is the difference between having one cell that's highly complex and conscious, versus whether the plant as a whole is conscious. That's the question you have to ask. Is the oak tree, as a whole, conscious, or are there just bits and pieces of it that are? That makes a big difference. I assume we don't know; I haven't looked at the structure, I don't know.
Fair enough, but the argument is, you speed up the plant growing and finding sunlight, and it sure looks like animal movement…
Yeah, but movement by itself… we know from patients, we know that when you're sleepwalking you can do all sorts of complex behaviors without necessarily being conscious, so it's a complicated question.
You made a really sweeping statement just a second ago, you said, “all mammals are conscious, and birds and fish.” How do you know that, or how do you have a high degree of confidence in that?
Very good question. Two things have happened historically over the last hundred years. (a) We've realized the continuity of brain structures across species; we believe it's the brain that gives rise to consciousness, not the heart. If you look at the brains of all mammals… I mean, I've done this at my institute. My institute has 330 people who are experts in the neuroanatomy of the mouse brain and the human brain. I've shown them, one after the other, brain cells that come from a human brain and a mouse brain, each one a slide on the screen. I removed the scale bar, because human neurons are roughly three times bigger in extent than mouse neurons, and for each one I asked, "Tell me, guess: is it human or mouse?" They answered with an app on their phones. People were at chance. Why? Because the individual components are so similar, whether it's a mouse or a dog or a monkey or a human, it all looks the same. We have more of it, but as you point out, a whale has even more of it. So the hardware is very similar. Secondly, behavior, with the exception of speech (though of course not all humans speak: there are people who are mute, there are babies and young children who don't speak, there are people of faith who don't speak; but speech, at least in normal human adults, is a difference from other creatures). There are all these other complex behaviors: empathy, lying, higher-order behaviors. There are bees, for example, who have been shown to recognize individual beekeepers. Bees have this very complicated way of choosing their hive; think how long it takes you to choose a house, and then look at how a bee colony sends out these scouts, and they have this very complicated dance to try to reach an agreement. So we realize there's lots of complex behavior out there in the world. Thirdly, we've decided, at least scientists and philosophers have, that consciousness is probably not just at the apex of information processing. So it's not just high-level awareness, knowing that I'm going to die and being able to talk about it; consciousness is also those low-level things like seeing, like feeling, like having pain. And those states, the associated behavior and the associated underlying neural hardware, are things we find in many, many other creatures. And therefore today, most people who think about questions of consciousness believe consciousness is much more widespread than we used to think.
Let’s talk a little bit about the brain and work that way. So let’s talk straight with the nematode worm… 302 neurons in its brain. We’ve spent 20 years trying to build a model of it, and even the people involved in it, say that that may not… they don’t know if they can do it. Do you think…
Embarrassing isn’t it?
Well, is it? Or is it not beautiful that life is that complex? So my question to you is this: you just chose to say, "Because our neurons look like mouse neurons, ergo mice are conscious."
No, no, no, it’s not quite that. Our brain is very similar to a mouse brain, our behavior is rather similar, and therefore it’s much more likely that they also have similar states, not identical, much less complex, but similar states of pain and pleasure and seeing and hearing that I have. I find no reason to… there’s no objective reason to think otherwise, because otherwise you have to say, “Well we have something special, but I don’t know what that special is. I don’t find it in the underlying hardware.”  So, and this of course what Rene Descartes did famously, he said, “When your carriage hits a dog and the dog yells, it’s just a machine acting out, there’s no conscious sensation.” Clearly he wasn’t a dog owner, right? We believe, I mean, I don’t know a single dog owner who doesn’t believe his dog can be happy or excited or sad or depressed or in pain. Well those are all conscious sensations. Why do we say that? Well, because we interact with them, we live with them, we realize they have very complex behavior that’s not so different from ours. They can be jealous, they can be happy, same thing that your kids are jealous of each other sometimes, or happy, so we see the great similarities of cause and divide across species. We’re all nature’s children.
So, back to the nematode worm: our understanding of how 300 neurons (and I think 2 of them float off on their own), how 300 neurons come together and form complex behavior, such as finding food, finding a mate. I mean, they're the most successful creatures on the planet; 70% of all animals are nematode worms.
They out survive us.
Yeah, so my question to you is, first of all, could a neuron actually be as complicated as a supercomputer? Could it be operating at the Planck scale, with such incredible nuance that… well, I'll leave the question there. Why is the nematode worm so intractable so far, why do we not understand better how neurons operate, and could a neuron be as complicated as a supercomputer?
Right, okay, so three very different questions. Let's start with neurons, with any cell. As I mentioned before, right now we do not have a molecular-level model of an entire cell. There's not a single group that has such a model of even a single cell, no matter what cell that is, nematode cell or human cell. Some people are trying to do that; the Allen Institute for Cell Science is trying to do that, but we aren't there yet, right? Why? Because we still don't have the raw computational ability and, more important, the knowledge to model all of that. That's just a practical limitation. We're making progress, but it's slow. You're right, it's very embarrassing for my science, brain science. We do not have a general-purpose model of a creature that only has 1,000 cells, 302 of which are neurons. We're getting there, I mean, we understand many, many things about the nematode, but we're still not there yet, so my science still has a long way to go. So it's difficult; what else is new, research is difficult. Look, per unit, per gram or pound, the brain is the most complex organ in the known universe. It's the most complex piece of highly organized matter in the universe, right? And I think that's related to the fact that it's also conscious: because it is so complex, it is also conscious. So yes, it is a challenge to our current methods. We're making progress, but it is, and remains, the biggest challenge we have in science.
It’s interesting though, because the argument I heard earlier, you said, “People used to say there’s something special about humans.” We don’t know what that is, dualism breaks down because of this problem. Therefore, there isn’t anything. Let’s look for a purely scientific answer… you come to some theory, but, and I’m in with all of that, but then, you say, “We look at a cell, we don’t understand how the cell works…”
In detail…
Right, and therefore we're fine knowing there are just certain things we don't know about it.
Right now.
But we didn’t take that about the specialness of humans. Look, there’s something special about us, everybody knows that, everybody knows that there’s a difference between a person and a paramecium, everybody knows. And we just don’t know what it is yet, and we’re fine with that for now, but you say, “No, no, we have now concluded there is nothing special about us, let’s go figure out an alternate explanation.”
Well, it depends what you mean by "special." Clearly there are many things that are special about us. As I said, we're the only ones who are eloquent; I've never had a conversation with my dog, nor with a worm. We have, for example, a capability for language that's enabled us to build these cultures and to build everything around us. So there's no question we're special. What you're saying, or what people want to hear, is that we are special in the sense that we somehow stand above the laws of science, that we have something going above and beyond them. Everything else in the universe has to follow the laws of physics, but somehow humans are exempt from them; they have this special deal called a soul. We don't know what it is, we don't know how it interacts with the rest of the world, but somehow that's what makes us unique. Sure, I can believe that, it's a great belief, it makes me special, but I don't see any particular evidence for it. No, we are different in all sorts of ways, but we're not different in that way: we are subject to the same laws of physics as any other thing in the universe.
So you mention language. I'm just curious, this is a one-off question: do you think it's interesting that of all the animals that have learned to sign, none has ever asked a question? Does that have any meaning?
I don’t know.
Because that would imply perhaps, they’re not conscious, because they can’t conceive that there’s something that knows something that they don’t.
Well you say this as like a fact. So, you’re sure that no gorilla has ever asked a question to another gorilla?
Correct. The one potential exception is Alex the grey parrot, who may have asked what color he was, maybe. Other than that, no gorilla has ever asked.
I’m not sure I would take that at face value, but even if it’s true, so let’s just say for the sake of argument, yes. We seem to have vastly more self-consciousness than other creatures. You know if the other creatures do have some simple level of self-consciousness, a dog has simple self-consciousness, my dog never smells his own poop, but he always spends a lot of time smelling other dog’s poop, so clearly he can make the difference, between self and somebody else. But yeah, my dog isn’t going to sit there and ask questions, because his brain just doesn’t have that sort of complexity.
Back to the notion that you and I don't have anything between us that makes us one entity: do you think that a beehive, or an anthill, which exhibits complex behavior in excess of any of its individual members, has an emergent consciousness as a whole?
So that’s a very good question. I don’t know. Again you have to compare the complexity within a bee brain, so a bee is roughly one million neurons, their circuit density is 10 times higher than our circuit density because they evolved to fly, so they have to be on very tight weight mass constraints of the sorts that we aren’t as terrestrial animals, and nobody’s fully reconstructed a bee brain yet, although they’re doing it for flying. So question is, given the complexity of what’s in the bee strain and the communication, the wiggle dance they do to communicate, what’s the tradeoff there? I mean it’s a purely empirical question that can be asked. Right now my feeling is probably not, but I may well be wrong.
Do you know the wasps that do the shimmering thing? They make this big spinning pinwheel, and they spin so quickly, but there's no wasp who says, "Oh, he just flared his wings, therefore it's my turn, and then the next one," so that somehow…?
Look, you have these beautiful, what are they called, murmurations. There are these beautiful movies you can see on the web of flocks of birds that execute these incredible flight maneuvers, highly, highly synchronized. Are they one conscious entity? Again, you have to look at the brains and you have to look at the amount of communication among the individual organisms. You can look at North Korean military parades, right? It's amazing, the precision with which you get 100,000 Koreans to do these highly choreographed maneuvers. But they're not conscious as a whole, because the information they exchange is much, much lower than the massive amount of information exchanged within each brain. Once again, you have 200 million fibers just between your left brain and your right brain. But those are all good questions that you can ask, and that have answers, once you have a fundamental theory of consciousness.
So let’s go from the brain to the mind. So, I’ve looked hard to find the definition of the mind that everybody can kind of agree on. And my working definition will be: it’s the set of attributes that we have, some abilities that we have, that don’t seem, at first glance, to be something that mere matter could do. Like, I have a sense of humor, my liver may not have a sense of humor, my liver may not be conscious the way my brain is. So, where do you think the mind, under that definition, where do you think all these abilities come from? Do you think they’re inherently emergent properties? Or are they just things we haven’t kind of sorted through? Where does a sense of humor come from when no individual cell has a sense of humor?
It’s a property of the whole, it’s the property of your brain as a whole, it’s not a property of individual cells, we know this is true of many… I take a car, I look at the many individual components of a car, they don’t drive, they don’t do the same what a car does, but you put all these things together as a whole, and then the whole can do things that the individual parts can’t.
Emergence. So do you believe that strong emergence exists? Do you believe you can always derive the behavior from the parts, like if you studied cells long enough, you would say, "I understand where a sense of humor comes from now"?
No, for that you need a theory of consciousness, if you're really referring to the conscious mind, because many aspects of the mind are unconscious. I think about the maiden name of my grandmother; I have no idea how my brain, how my mind, comes up with the name Shaw. I don't know how it works, so that's all unconscious. For the conscious mind you need a theory of consciousness: not just a theory of cells, not just the physics of it, but something that also explains how a conscious mind that has a sense of humor (or maybe doesn't have a sense of humor, depending on who it is) emerges, because a sense of humor is a property of a conscious mind. Yeah, so it's what you refer to as strong emergence.
And so strong emergence…
But it’s not magical you understand that?
Well that’s a word you’ve used a few times. And it’s because as you said at the very beginning, there’s nothing magic about us. But I think people who believe that strong emergence is possible believe it’s a scientific process. But, a lot of people say, “No, you can’t say that for something to take on properties that none of its components have, and you cannot derive those properties. Until eternity passes away, you can study those individual components and not figure out how that comes about.
Yes, you need to solve a problem that Aristotle was one of the first to write about: the parts, the relations among the parts, and the whole. Yes, you need a theory that describes what a whole is, the whole system. Integrated information theory is an example of such a theory, one that thinks about parts and how the parts come together to define a whole. Without such a theory, yes, you would be lost, I agree with you, but it's not magical. What I meant was that once you have such a theory, you can understand it step by step. You can predict which systems are wholes and which systems are not wholes. You can predict which system properties are essential for the wholeness and which ones are not. So in that sense it's a physical theory. It's a lawful set of rules.
Well, how can IIT be disproved?
It can be disproved in a number of ways. It says that the neural correlate of consciousness is the maximum of cause-effect power, and in principle it gives you a way to test exactly that, to measure it. In fact, there was recently a series of articles in neurological journals where people tested one implication of integrated information theory and built a consciousness meter: a simple device where you probe the brain with magnetic pulses, when you are asleep or anaesthetized, or in an emergency room or critical care facility where you have people who may be in a vegetative state, or maybe in a minimally conscious state, maybe there's a little bit of consciousness there, or maybe they are conscious but they can't tell you because they're so grievously injured. So from integrated information people derived a simple measure called the perturbational complexity index, where you look at the EEG in response to these magnetic pulses, and from the response of the brain you can tell that this patient is probably unconscious and that person is probably conscious. So that's one of the consequences, and there are ways you can test it. It is a scientific theory; it may be wrong, but it is a scientific theory.
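For readers who want a feel for the kind of measure being described, here is a minimal Python sketch in the spirit of the perturbational complexity index. The published PCI involves TMS-evoked potentials, source reconstruction, and statistical thresholding before a Lempel-Ziv compression step; everything below (the synthetic "responses," the median binarization, the simplified phrase-counting complexity, the shuffle normalization) is an illustrative assumption, not the clinical procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def lz_phrase_count(bits):
    """Simplified Lempel-Ziv phrase count of a binary string: a crude
    stand-in for the compressibility estimate used in the PCI papers."""
    seen, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def crude_pci(response):
    """Binarize a channels-by-time response around its median, flatten it,
    and normalize its phrase count by that of a shuffled copy."""
    bits = (response > np.median(response)).astype(int).flatten()
    original = "".join(map(str, bits))
    shuffled = "".join(map(str, rng.permutation(bits)))
    return lz_phrase_count(original) / lz_phrase_count(shuffled)

channels, samples = 32, 300
# A differentiated, hard-to-compress response (illustrative stand-in for wakefulness).
awake_like = rng.standard_normal((channels, samples))
# A stereotyped, repetitive response (illustrative stand-in for deep anesthesia).
asleep_like = np.tile(np.sin(np.linspace(0, 4 * np.pi, samples)), (channels, 1))

print("crude PCI, awake-like: ", round(crude_pci(awake_like), 2))
print("crude PCI, asleep-like:", round(crude_pci(asleep_like), 2))
```

The only point the sketch makes is the one in the interview: a differentiated, hard-to-compress response to the perturbation scores high, while a stereotyped response scores low, which is what lets such a measure separate probably-conscious from probably-unconscious brains.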
Did you read about the man in South Africa who was in a coma for some amount of time, and then he woke up, and he was still locked in, but he was completely awake? And the thing is that every day at this facility they assumed he wasn't conscious, and so they played Barney all day long, and he came to abhor Barney, so much that he used all of his mental energy just to figure out what time it was every day, just so he would know when Barney was going to be over. And he said even to this day he can look at a shadow on a wall and tell what time it is. So you believe that we'll soon be able to put a device on somebody like that and say, "No, he's fully awake, he's fully abhorring Barney as we speak right now"?
I just came back from a meeting on emergency room medicine, coma, and consciousness that I attended over the last two days, and for two days we heard what the current criteria are and how we can assess these patients. They are very, very difficult patients to treat, because ultimately you're never fully sure, given the state of technology today. But yes, in principle, and it looks like even in practice, at least according to these papers, the last test involved 211 patients, we might soon have such a consciousness meter. There are several larger-scale clinical trials trying to test this across a large clinical population. There are thousands of these patients worldwide; Terri Schiavo was one of them, where it was controversial because there was this dispute between the parents and the husband.
So, I’m curious about whether all these things are conscious, for two reasons. One we discussed, because it has, as you’ve said, implications for how you treat them. But the other one is, because if you don’t know if a tree’s conscious, you may not be able to know if a computer’s conscious, and so being able to figure out something as alien as the sun or Gaia or a tree or a porpoise is conscious, how would we know if a computer was? That’s the penultimate question I want to ask, how would you now if a computer was conscious?
Very good question. So first we need to make it perfectly clear, because people always get this wrong: there is artificial intelligence, narrow or broad, and we're slowly getting there, and that is totally separate from the question of artificial consciousness. In other words, you can perfectly well imagine a supercomputer with superhuman intelligence that feels like absolutely nothing. And most of the computers today are of that ilk, and most people will agree with that statement. So we have to dissociate intelligence from consciousness. Historically, until this unique moment in time, we've always lived in a situation where, if you wanted something done, you wanted a ditch dug, you wanted a war fought, you wanted your taxes done, you employed a person, and the person was conscious. But now we are living in a world where you might have things that dig ditches, fight wars, and do taxes that are just algorithms. They're not conscious. However, of course this does raise the question: under what conditions can you create artificial feelings? When is your iPhone actually going to feel like something? When is your iPhone actually going to see, as compared to taking a picture and putting a box around it and saying, "This is mum's face," which it can do today? So once again you need a theory of that. You can't just go by behavior, because there's no question that, in the fullness of time, we will get what's in all the movies and all the TV shows, Westworld, etc.
We’re going to live in a world where things behave like us. We will experience the world in 10 or 20 years where Siri talks to you in a voice that you cannot distinguish at all anymore from a human secretary. Instead he or she will have perfect poise, be perfectly calm, laugh at every one of your jokes. So how do we know she’s conscious? For that you need a fundamental theory, and this particular fundamental theory of integrated information says you cannot compute consciousness. Consciousness is not a special property of an algorithm, because your brain isn’t an algorithm. Your brain is a physical machine: it has exterior, it has cognitive powers, both on the outside, it can talk, it can move things about and it has intrinsic cause effect power, and that’s what consciousness is. So if you want human level consciousness, you have to build a machine in the likeness of man. You have to build what’s called a neuromorphic computer. You have to build a computer whose architecture at the level of the metal, at the level of the gate, mimics the architecture of the brain, and some people are trying to do that.
The Human Brain Project in Europe
For instance, let me give you an example that's very easy for scientists. I have a friend, she's an astrophysicist. She writes down the Einstein equations of general relativity, and she can predict, on her laptop, that there's a black hole at the center of our galaxy, a big black hole of millions of solar masses that bends spacetime so much that not even light can escape. But funny enough, she doesn't get sucked into the laptop that runs that simulation. Why not? It's correctly simulating all the effects of gravity, yet it doesn't have that effect on its environment. Well, isn't that funny, why not? Because it doesn't have the causal power of gravity. It can simulate, it can compute, the effect that gravity has, but it can't emulate it, it can't physically instantiate the cause and effect of gravity. It's the same thing with consciousness: ultimately it is about causal power; it's not about simulation, it's not about computation. And so unless you do that, you can only build a zombie; you will be able to build zombies that claim they're conscious, but they don't feel like anything.
Well that is a great place to leave it. What a fascinating discussion, and I want to thank you for sharing your time.
Thank you very much, Byron. That was most enjoyable. And this is part of the IEEE Tech for Humanity series at South by Southwest.

Voices in AI – Episode 44: A Conversation with Gaurav Kataria

[voices_in_ai_byline]
In this episode, Byron and Gaurav discuss machine learning, jobs, and security.
[podcast_player name=”Episode 44: A Conversation with Gaurav Kataria” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-05-24-(00-57-17)-gaurav-kataria.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Gaurav Kataria. He is the VP of Product over at Entelo. He is also a guest lecturer at Stanford. Up until last month, he was the head of data science and growth at Google Cloud. He holds a Ph.D. in computer security risk management from Carnegie Mellon University. Welcome to the show Gaurav!
Gaurav Kataria: Hi Byron, thank you for inviting me. This is wonderful. I really appreciate being on your show and having this opportunity to talk to your listeners.
So let’s start with definitions. What is artificial intelligence?
Artificial intelligence, as the word suggests, starts with "artificial," and at this stage we are in the mode of creating an impression of intelligence, and that's why we call it artificial. What artificial intelligence does is learn from past patterns. So you keep showing patterns to the machine, to a computer, and it will start to understand those patterns, and it can say, every time this happens I need to switch off the light, every time this happens I need to open the door, and things of this nature. So you can train the machine to spot these patterns and then take action based on those patterns. A lot of this is right now being talked about in the context of self-driving cars. When you're developing an artificial intelligence technology, you need a lot of training data for that technology, so that it can learn the patterns in a very diverse and broad set of circumstances, to create a more complete picture of what to expect in the future. Then, whenever it sees that same pattern in the future, it knows from its past what to do, and it will do that.
So…
Artificial intelligence is not built…sorry, go ahead.
So, that definition, or the way you are thinking of it, seems to preclude other methodologies that in the past would have been considered AI. It precludes expert systems, which aren't trained off datasets. It precludes classic AI, where you try to build a model. Your definition really is about what machine learning is. Is that true? Do you see those as synonymous?
I do see a lot of similarity between artificial intelligence and machine learning. You are absolutely right that artificial intelligence is a much broader term than just machine learning. You could create an artificially intelligent system without machine learning by just writing some heuristics, and we could call it an expert system. In today's world, there is a lot of intersection between the field of AI, artificial intelligence, and machine learning, and the consensus, or the opinion of a lot of people in this space today, is that techniques in machine learning are the ones that will drive artificial intelligence forward. However, we will continue to have many other forms of artificial intelligence.
Just to be really clear, let me ask you a different question, because what you just said is kind of interesting. You say we've happened on machine learning and it's kind of our path forward. Do you believe that something like a general intelligence is an evolutionary development along the lines of what we are doing now? Is it that we are going to get a little better with our techniques, a little better, a little better, a little better, and then one day we'll have a general intelligence? Or do you think general intelligence is something completely different and will require a completely different way of thinking?
Thanks for that question. I would say today we understand artificial intelligence as a way of extrapolating from the past. We see something in the past, and we draw a conclusion for future based on what pattern we have seen in the past. The notion of general intelligence assumes or presupposes that you can make decisions in the future without having seen those circumstances or those situations in the past. Today, most of what’s going on in the field of artificial intelligence and in the field of machine learning is primarily based on training the machine based on data that already exists. In [the] future, I can foresee a world where we will have generalized intelligence, but today we are very far from it. And to my knowledge most of the work that I have seen and I have interacted [with] and the research that I have read speaks mostly in the context of training the systems based on current data—current information so that it can respond for similar situations in the future—but not anything outside of that.
So, humans do that really well, right? Like, we are really good at transfer learning. You can train a human with a dataset of one thing. You know, say, "this is an alien, a grog," and show it a drawing, and it could pick out a photograph of one, it could pick out one of those hanging behind a tree, it could pick out one of those standing on its head… How do you think we do that? I know it's a big question. How do you think we do it? Is that machine learning? Is that something that you can eventually train a machine to do solely with data, or are we doing something there that's different?
Yeah, so you asked about transfer learning. So [in] transfer learning we train the machine or train the system for one set of circumstances or one set of conditions and then it is able to transfer that knowledge or apply that knowledge in another area. It can still kind of act based on that learning, but the assumption there is that there is still training in one setup and then you transfer that learning to another new area. So when it goes to the new area it feels like there was no training and the machine is just acting without any training with all general intelligence. But that’s not true because the knowledge was transferred from another dataset or another condition where there was training data. So I would say transfer learning does start to feel like or mimic the generalized intelligence, but it’s not generalized because it’s still learning from one setup and then trying to just extrapolate it to a newer or a different setup.
So how do you think humans do it? Let me try the question in a different way. Is everything a human knows how to do by age 20 something we learned from seeing examples of data? Could a human be thought of as a really sophisticated machine learning algorithm?
That’s a very good point. I would like to think of humans as, all of us, as doing two things. One is learning, we learn from our experiences, and as you said like going from birth to 20 years of age, we do a lot of learning. We learn to speak, we learn the language, we learn the grammar, and we learn the social rules and protocols. In addition to learning, or let me say separate from learning, humans also do another thing, which is humans create where there was not a learning or repetition of what was taught to them. They create something new—as the expression goes “create from scratch.” This creating something from scratch or creating something out of nothing is what we call human creativity or innovation. So humans do two things: they are very good learners, they can learn from even very little data, but in addition to being good learners, humans are also innovators, and humans are also creators, and humans are also thinkers. The second aspect is where I think the artificial intelligence and machine learning really doesn’t do much. The first aspect, you’re absolutely right, I mean humans could be thought of as a very advanced machine learning system. You could give it some data, and it will pick [it] up very quickly.
In fact, one of the biggest challenges in machine learning today, or in the context of AI, the challenge for machine learning, is that it needs a lot of training data. If you want to make a self-driving car, experts have said it could take billions of miles of driving data to train that car. The point being, with a lot of training data you can create an intelligent system. But humans can learn with less training data. I mean, when you start learning to drive at the age of sixteen, you don't need a million miles of driving before you learn how to drive, but machines will need millions and millions of miles of driving experience before they can learn. So humans are better learners, and there is something going on in the human brain that's more advanced than typical machine learning and AI models today. And I'm sure the state of artificial intelligence and machine learning will advance to where machines can probably learn as fast as a human and will not require as much training data as they require today. But the second aspect of what a human does, which is to create something out of nothing, from scratch, the pure thinking, the pure imagination, there I think there is a difference between what a human does and what a machine does.
By all means! Go explain that because I have an enormous number of guests on the show who aren’t particularly impressed by human creativity. They think that it’s kind of a party trick. It’s just kind of a hack. There’s nothing really at all that interesting about it that we just like to think it is. So I’d love to talk to somebody who thinks otherwise, who thinks there’s something positively quite interesting about human creativity. Where do you think it comes from?
Sure! I would like to kind of consider a thought experiment. So imagine that a human baby was taken away from civilization, from [the] middle of San Francisco or Austin—a big city—and put on an island all by herself, like just one human child all by herself on an island and that child will grow over time and will learn to do a lot of things and the child will learn to create a lot of things on their own. That’s where I am trying to take your imagination. Consider what that one individual without having learned anything else from any other human could be capable of doing. Could they be capable of creating a little bit of shelter for themselves? Could they be capable of finding food for themselves? There may be a lot of things that humans may be able to do, and we know [that] from the history of our civilization and the history of mankind.
Humans have invented a lot of things, from basic things like creating fire and the wheel to much more advanced things like sending rocket ships into space. So I do feel that humans do things that are just not learned from the behavior of other humans. Humans create completely new and novel things, independent of what was done by anybody before them who lived on this planet. So I definitely have a view here: I am a believer in human creativity and human ingenuity and intuition, humans do create a lot of things; it is these humans who are creating all the artificial intelligence systems and machine learning systems. I would never count out human creativity.
So, somebody arguing the other side of that would say, well, no: she's on this island, it's raining, and she sees a spot under a tree that didn't get wet, or she sees a fox going into a hole when it starts raining, and that's a data point she was trained on. She sees birds flying down, grabbing berries and eating them, so it's just training data from another source, it's just not from other humans. We saw rocks roll down the hill, and we generalized that to how round things roll. I mean, it's all just training data from the environment; it doesn't have to be specifically human data. So what would you say to that?
No, absolutely! I think you’re giving very good counterexamples, and there is certainly a lot of training and learning. But if you think about sending a rocket to the moon and you ask, okay, did we just see some training data around us and create a rocket and send it to the moon? There it starts to become harder to say that it’s a one-to-one connection from training data to sending a rocket to the moon. There are much more advanced and complicated things that humans have accomplished than just finding shelter under a tree or watching rocks roll. So humans definitely go way further in their imagination, [and] any simple example that I could give would illustrate that point.
Fair enough! So, and we’ll move on to another issue here in just a minute, but I find this fascinating. So is your contention that the brain is not a Turing machine? That the brain behaves in fundamentally different ways than a computer?
I’m not an expert on how [the] human brain or how any mammal’s brain actually behave[s], so I can’t comment on all the technical aspects of how a human brain functions. I can say from observation that humans do a lot of things that machines don’t do, and it’s because humans do come up with things completely from scratch. They come up with ideas out of nowhere, whereas machines don’t come up with ideas out of nowhere. They either learn very directly from the data or, as you pointed out, they learn through transfer learning. So they learn from one situation, and then they transfer that learning to another situation.
So, I often ask people on the show when they think we will get a general intelligence, and the answers I get range between five and five hundred years. It sounds like, not putting any words into your mouth, you’re on the further end of that range. You think we’re pretty far away, is that true?
I do feel that it will be further out on that dimension. In fact, what I’m most fascinated by, and I would love your listeners to also think about this, is [that] we talk a lot about human consciousness—we talk about how humans become creative and what is that moment of getting a new idea or thinking through a problem where you’re not just repeating something that you have seen in the past. That consciousness is a very key topic that we all think about very, very deeply, and we try to come up with good definitions for what that consciousness is. If we ever create a system which we believe can mimic or show human-consciousness-level behavior, then at the very least we would have understood what consciousness is. Today we don’t even understand it. We try to describe it in words, but we don’t have perfect words for it. With more advances in this field, maybe we will come up with a much crisper definition of consciousness. That’s my belief, and that’s my hope that we should continue to work in this area. Many, many researchers are putting a lot of effort and thinking into this space, and as they make progress, whether it takes five years or five hundred years, we will certainly learn a lot more about ourselves in that time period.
To be clear though, there is widespread agreement on what consciousness is. The definition itself is not an issue. The definition is the experience of the world. It’s qualia. It’s the difference [between] a computer sensing, measuring temperature and a person feeling heat. And so the question becomes how could a computer ever, you know, feel pain? Could a computer feel pain? If it could, then you can argue that that’s a level of consciousness. What people don’t know is how it comes about, and they don’t even know, I think to your point, what that question looks like scientifically. So, trying to parse your words out here, do you believe we will build machines that don’t just measure the world but actually experience the world?
Yeah, I think when we say experience, it is still a lower-level kind of feeling, where you are still trying to describe the world through, almost like, sensors—sensing things, sensing temperatures, sensing light. Imagine if all our senses were turned off, so you were not getting external stimuli and everything was coming from within. Could you still come up with an idea on your own without any stimulus? That’s a much harder thing that I’m trying to understand. As humans, we strive to get to that point where you can come up with an idea without a stimulus or without any external stimuli. For machines, that’s not the bar we are holding them to. We are just holding the bar to say, if there is a stimulus, will they respond to that stimulus?
So just one more question along these lines. At the very beginning when I asked you about the definition of artificial intelligence, you replied about machine learning, and you said that the computer comes to understand, and I wrote down the word “understand” on my notepad here, something. And I was going to ask you about that because you don’t actually think the computer understands anything. That’s a colloquialism, right?
Correct!
So, do you believe that someday a computer can understand something?
I think for now I will say computers just learn. Understanding, as you said, has a much deeper meaning. Learning is much more straightforward: you have seen some pattern, and you have learned from that pattern. Whether you understand or not is a much deeper concept, but learning is a much more straightforward concept, and today, with most of our machine learning systems, all we are expecting them to do is to learn.
Do you think that there is a quote “master algorithm”? Do you think that there is a machine learning technique, one that we haven’t discovered yet, that in theory can do unsupervised learning? Like you could just point it at the internet, and it could just crawl it and end up figuring it all out; it’ll understand it all. Do you think that there is an algorithm like that? Or do you think intelligence is going to be found to be very kludgy and we are going to have certain techniques to do this and then this and then this and then this? What do you think that looks like?
I see it as a version of your previous question: is there going to be generalized intelligence, and is that going to be in five years or five hundred years? I think where we are today is the more kludgy version, where we do have machines that can scan the entire web and find patterns, and they can repeat those patterns, but nothing more than just repeating those patterns. It’s more like a question-and-answer type of machine. It is a machine that completes sentences. There is nothing more than that. There is no sense of understanding. There is only a sense of repeating those patterns that you have seen in the past.
So if you’re walking along the beach and you find a genie lamp, and you rub it, and a genie comes out, and the genie says I will give you one wish: I will give you vastly faster computers, vastly more data or vastly better algorithms. What would you pick? What would advance the science the most?
I think you hit the nail on the head by naming the three things we need to improve machine learning: we need more and better data, we need more computing power, and we need better algorithms. In the state of the world as I experience it today within the field of machine learning and data science, usually our biggest bottleneck, the biggest hurdle, is data. We would certainly love to have more computational power. We would certainly pick much better and faster algorithms. But if I could ask for only one thing, I would ask for more training data.
So there is a big debate going on about the implications that these technologies are going to have for employment. I mean, you know the whole setup, as do the listeners; what’s your take on that?
I think as a whole our economy is moving into much more specialized jobs, where people are doing something more specialized rather than something repetitive and very general or simple. Machine learning systems are certainly taking a lot of repetitive tasks away. So if there is a task that a human repeats, like, a hundred times a day, those simpler tasks are definitely getting automated. But humans, coming back to our earlier discussion, do show a lot of creativity and ingenuity and intuition. A lot of jobs are moving in the direction where we are relying on human creativity. So for the economy as a whole and for everybody around us, I feel the future is pretty bright. We have an opportunity now to apply ourselves to do more creative things than just repetitive things, and machines will do the repetitive things for us. Humans can focus on doing more creative things, and that brings more joy and happiness and satisfaction and fulfillment to every human than just doing repetitive tasks, which become very mundane and not very exciting.
You know, Vladimir Putin famously said, and I’m going to paraphrase it here, that whoever dominates in AI will dominate the world. There is this view from some who want to weaponize the technology, who see it strategically, you know, in this kind of great geopolitical world we live in. Do you worry about that? Or are you like, well, you could say that about every technology—like metallurgy: you can say about metallurgy that whoever controls metallurgy controls the future—or do you think AI is something different and it will really reshape the geopolitical landscape of the world?
So, I mean, as you said, every technology definitely gets weaponized, and we have seen many examples of that, not just going back a few decades. We have seen that for thousands of years, where a new technology comes up and as humans we get very creative in weaponizing that technology. I do expect that machine learning and AI will be used for these purposes, but like any other technology in the past, no one technology has destroyed the world. As humans we come up with interesting ways to still reach an equilibrium, to still reach a world of peace and happiness. So while there will be challenges, and AI will create problems for us in the field of weapons technology, I would still bet that humans will find a way to create equilibrium out of this disruptive technology, and this is not the end of the world, certainly not.
You’re no doubt familiar with the European initiatives that say when an artificial intelligence makes a decision that affects you—it doesn’t give you a home mortgage or something like that—you have a right to know why it made that decision. You seem to be an advocate for the view that that is both possible and desirable. Can you speak to that? Why do you think that’s possible?
So, if I understand the intent of your question, the European Union, and probably all the jurisdictions around the world, have put a lot of thought into a) protecting human privacy and b) making that information more transparent and available to all the humans involved. I think that is truly the intent of the European regulation, as well as similar regulation in many other parts of the world, where we want to make sure we protect human privacy, and we give humans an opportunity to either opt out or understand how their data or how that information is being used. I think that’s definitely the right direction. So if I understand your question, I think that’s what Entelo as a company is looking at. Every company that is in the space of AI and machine learning is also looking at creating that respectful experience where, if any human’s data is used, it’s done in a privacy-sensitive manner, and the information is very transparent.
Well, I think I might be asking the question rather poorly, or perhaps something slightly different. Let me use Google as an example. If I have a company that sells widgets and I have a competitor—and they have a company that sells widgets, and there are ten thousand other companies that sell widgets—and if you search for widget in Google, my competitor comes up first and I come up second, [then] I say to Google, “why am I second and they are first?” I guess I kind of expect Google’s response to be, “what are you talking about?” It’s like, who knows? There are so many things, so many factors, who knows! And yet that’s a decision that AI made that affected my business. There’s a big difference between being number one and number two in the widget business. So if you say now every decision that it makes you’ve got to be able to explain why it made that decision, it feels like it puts shackles on the progress of the industry. Would you comment?
Right. I think I understand your question better now. So that burden is on all of us, I think, because it is a slippery slope: as artificial intelligence algorithms and machine learning algorithms become more and more complex, it becomes harder to explain those algorithms, so that’s a burden that we all carry, anybody who is using artificial intelligence, and nowadays that’s pretty much all of us. If we think about it, which company is not using AI and ML? Everybody is using AI and ML. It is a responsibility for everybody in this field to try to make sure that they have a good understanding of their machine learning models and artificial intelligence models [so] that you can start to understand what triggers certain behavior. Every company that I know of, and I can’t speak for everybody, but based on my knowledge, is certainly thinking about this, because you don’t want to put any machine learning algorithm out there that you can’t even explain how it works. So we may not have a perfect understanding of every machine learning algorithm, but we certainly strive to understand it as best as we can and explain it as clearly as we can. So that’s a burden we all carry.
You know, I’m really interested in the notion of embodying these artificial intelligences. So, you know, one of the use cases is that someday we’ll have robots that can be caregivers for elderly people. We can talk to them, and over time they learn to laugh at our jokes, and learn to tell jokes like the ones we tell, and emote when we’re telling some story about the past, kind of emote with us, like, oh, it’s a beautiful story, and all of that. Do you think that’s a good thing or a bad thing? To build that kind of technology that blurs the lines between a system that, as we were talking about earlier, truly understands, as opposed to a system that just learns how to, let’s just say, manipulate the person?
Yeah, I think right now my understanding is more in the field of learning than just full understanding, so I’ll speak from my area of knowledge and expertise [where] our focus is primarily on learning. Understanding is something that I think we as the community and researchers will definitely look at. But as far as most of the systems that exist today and most of the systems that I can foresee in the near future, they are more learning systems; they are not understanding systems.
But even in a really simple case—you know, I have the device from Amazon that, if I say its name right now, is going to, you know, start talking to me, right? And when my kids come into the studio and ask a question of it, once they get the answer [and] they can tell the answer is not what they’re looking for, they just tell it, you know, to be quiet. You know, I have to say it somehow doesn’t sit right with me to hear them cut off something that sounds like a human like that—something that would be rude in any other [context]. So, does that worry you? Is that teaching them something? Am I just an old fuddy-duddy at this point? Or does that somehow numb their empathy with real people, so that they really would be more inclined to say that to a real person now?
I think you are asking a very deep question here as to do we as humans change our behavior and become different as we interact with technology? And I think some of that is true!
Yeah!
Some of that is true for sure. Like, when you think about SMS when it came out, like 25 years ago, as a technology, and we started texting each other: the way we would write texts was different than how we would write handwritten letters. The texts became, by the standards of, let’s say, 30 years ago, very impolite; they would have all kinds of spelling mistakes, they would not address people properly, and they would not really end with the proper punctuation and things like that. But as a technology it evolved, and it is still seen as useful to us, and we as humans are comfortable with adapting to that technology. Every new technology, whether it is a smart speaker or texting on cell phones, will introduce new forms of communication, new forms of interaction. But a lot of human decency and respect comes from us, not just from how we interact with a speaker or on a text pad. A lot of it comes from much deeper-rooted beliefs than just an interface. So I do feel that while we’ll adapt to new, different interfaces, a lot of human decency will come from a much deeper place than just the interface of the technology.
So you hold a Ph.D. in computer security risk management. When I have a guest on the show, sometimes I ask them, “what is your biggest worry?” or “is security really, you know, an issue?” And they all say yes. They’re like, okay, we’re plugging in 25 billion IoT devices, none of which, by the way, can we upgrade the software on. So you’re basically cementing in whatever security vulnerabilities you have. And you know [of] all the hacks that get reported in the industry, in the news—stories of election interference, all this other stuff. Do you believe that the concern for security around these technologies is, in the popular media, overstated, understated, or just about right?
I would say it’s just about right. I think that this is a very serious issue as more and more data is out there and more and more devices are out there, including, as you mention, a lot of IoT devices. I think the importance of this area has only grown over time and will continue to grow. So it deserves due attention in this conversation, in our conversation, in any conversation. I think bringing it into the limelight, drawing attention to this topic, and making everybody think deeply and carefully about it is the right thing, and I believe we are certainly not doing any fearmongering. All of these are justified concerns, and we are spending our time and energy on them in the right way.
So, just talking about the United States for a moment, because I’m sure all of these problems are addressed differently at a national level, country by country. So just talking about the US for a minute, how do you think we’ll solve it? Do you just say, well, we’ll keep the spotlight on it and we hope that the businesses themselves see that they have an incentive to make their devices secure? Or do you think that the government should regulate it? How would you solve the problem now if you were in charge?
Sure! First of all, I am not in charge, but I do feel that there are three constituents in this. First [are] the creators of technology: when you are creating an IoT device or any kind of software system, the responsibility is on the creator to think about the security of the system they are creating. The second constituent is the users, which [are] the general public and the customers of that technology. They put the pressure on the creator that the technology and the system should be safe. So if you don’t create a good system, a safe system, you will have no buyers and users for it. People will vote with their feet, and they will hold the company or the creators of technology accountable. And as you mentioned, there is a third constituent, and that is the government or the regulator. I think all three constituents have to play a role. It’s not any one stakeholder that can decide whether the technology is safe, or good, or good enough. It’s an interplay between the three constituents here. So the creators of technology, whether [a] company, research lab, [or] academic institution, have to think very deeply about security. The users of technology definitely hold the creators accountable, and the regulators play an important role in keeping the overall system safe. So I would say it’s not any one person or any one entity that can make the world safe. The responsibility is on all three.
So let me ask Gaurav the person a question. You got this Ph.D. in computer security and risk management. What are some things that you personally do because of your concerns about security? For instance, do you have a piece of tape over your webcam? Or are you like, I would never hook up a webcam? Or, I never use the same password twice. What are some of the things that you do in your online life to protect your security?
So, I mean, you mention all these good things, like not reusing passwords, but one thing which I have always mentioned to my friends and my colleagues, and I would love to share it with your listeners, is: think about two-factor authentication. Two-factor authentication means that, in addition to a password, you are using a second means of authentication. So if you have a banking website, or a brokerage website, or for that matter even your email system, it’s a good tactic to have two-factor authentication, where you enter your password, but in addition to your password the system requires you to use a second factor, and the second factor could be to send you a text message on your phone with a code, and then you have to enter that code into the website or into the software. So two-factor authentication is many, many times more secure than one-factor authentication, where we just enter a password, and a password can get stolen or breached and hacked. Two-factor is a very good security practice, and almost all companies and most of the creators of technology now support two-factor authentication, so the world can move in that direction.
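As a rough sketch of the flow described above (password first, then a one-time code as the second factor), here is a minimal example using the open-source pyotp library for time-based one-time passwords. The account name, issuer, and the way the secret is handled are hypothetical placeholders rather than any particular bank’s or email provider’s implementation.

```python
# Minimal two-factor sketch using time-based one-time passwords (TOTP).
# Requires the pyotp package; names and secrets below are illustrative only.
import pyotp

# Enrollment: the service generates a per-user secret and shares it with the
# user's authenticator app (usually via a QR code of this provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

# Login: after the password check succeeds, ask for the current 6-digit code
# and verify it as the second factor.
def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates small clock drift between phone and server
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

print(second_factor_ok(secret, totp.now()))  # True while the code is current
```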
So, up until November you were the head of data science and growth of Google Cloud, and now you are the VP of Product at Entelo. So two questions: one, in your personal journey and life, why did you decide now is the time to go do something different, and then, what about Entelo got you excited? Tell us the Entelo story and what that’s all about.
Thanks for asking that. So Entelo is in the space of recruiting automation. The idea is that recruiting candidates has always been a challenge. I mean, it’s hard to find the right fit for your company. Long ago we would put classified ads in the newspaper, and then technology came along, and we could post jobs on our website, we could post jobs on job boards, and that certainly helped in broadcasting your message to a lot of people so that they could apply for your job. But when you are recruiting, the people who apply for your job are only one source of good people for your company. You also have to sometimes reach out to candidates who are not looking for a job, who are not applying for a job on your website or on a job board; they’re just happily employed somewhere else. But they are so good for the role you have that you have to go and tap them on the shoulder and say, would you be interested in this new role, in this new career opportunity? Entelo creates that experience. It automates the whole recruiting process, and it helps you find the right candidates who may not apply on your website or on a job board, who are not even looking for a job. It helps you identify those candidates, and it helps you engage with those candidates—to reach out to them, tell them about your role and see if they are interested in it, and then engage them further in the recruiting process. All of this is powered by a lot of data and a lot of AI and, as we discussed earlier, a lot of machine learning.
And so, I’ve often thought that what you’re describing—so AI has done really well at playing games because you’ve got these rules and you’ve got points, and you’ve got winners and all of that. Is that how you think of this? In a way, like, you have successful candidates at your company and unsuccessful candidates at your company and those are good points and bad points? So you’re looking for people that look like your successful candidates more. On an abstract, conceptual level how do you solve that problem?
I think you’re definitely describing the idea that not everybody is a good fit for your company and some people are a good fit. So the question is, how do you find the good fit? How do you learn who is a good fit and who is not? Traditionally, recruiters have been combing through lots and lots of resumes. I mean, if you think back decades ago, a recruiter would have a hundred or a thousand resumes stacked on their desk, and they would go through each one of them to say whether it is a fit or not. Then about 20 years or so ago a lot of keyword search engines were developed, so that as a human you don’t have to read the thousand resumes. Let’s just do a keyword search and say, if any of these resumes has this word, then it is a good resume, and if it doesn’t have that word, then it’s not a good resume. That was a good innovation for scoring or finding resumes, but it’s very imperfect because it’s susceptible to many problems. It’s susceptible to the problem where resumes get stuffed with keywords. It is susceptible to the problem that there is more to a person, and more to a resume, than just keywords.
Today the technology that we have for identifying the right candidate is essentially just keyword search on almost every recruiting platform. What a recruiter would do is say, “I can’t look through a thousand or a million resumes; let me just do a keyword search.” Entelo is trying to take a very different approach. Entelo is saying, “let’s not think about just keyword search; let’s think about who is [the] right fit for a job.” When you as a human look at a resume, you don’t do [a] keyword search; computers do [a] keyword search. In fact, if I were to put a resume in front of you for an office manager you’re hiring for your office, you would probably scan that resume, you would have some heuristics in mind, you would look through some information, and then say whether it is a good resume or not. I can bet you are not going to do a keyword search on that resume and say, “oh, it has the word office, and it has the word manager, and it has the word furniture in it, so it’s a good resume for me.”
There is a lot that happens in the mind of a recruiter as they think through whether a person is a good fit for a role. We are trying to learn from that recruiter experience, so they don’t have to look through hundreds and thousands of resumes, nor do they have to do [a] keyword search. We can learn from that experience of which resume is a good fit for this role and which is not, find that pattern, and then surface the right candidates. And we take it a step further. We reach out to those candidates and engage them, and then the recruiter only sees the candidates that are interested, so they don’t have to think, okay, now do I have to do a keyword search across a million resumes and try to reach out to a million candidates? All of that process gets automated through the system that we have built here at Entelo and the system that we are further developing.
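To make the contrast with keyword search concrete, here is a minimal sketch of the general idea: learn a fit score from resumes a recruiter has already marked as fit or non-fit, rather than matching on hand-picked words. It is only a toy illustration built with scikit-learn; the resumes, labels, and model choice are assumptions, not a description of Entelo’s actual system.

```python
# Toy illustration: learn "fit" from labeled examples instead of keyword matching.
# The resumes and labels are invented; a real system would use far more data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_resumes = [
    ("Ran the front office for a 30-person clinic, handled scheduling and vendors", 1),
    ("Office manager at a high-volume startup, ordered furniture and ran payroll", 1),
    ("office manager office manager office manager furniture", 0),  # keyword stuffing
    ("Registered nurse with ER experience, no administrative duties", 0),
]
texts = [text for text, _ in labeled_resumes]
labels = [label for _, label in labeled_resumes]

# TF-IDF features plus logistic regression: the weights come from the
# recruiter's accept/reject decisions, not from a hand-written keyword list.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_resume = "Coordinated facilities, calendars, and vendors for a fast-paced law office"
print(model.predict_proba([new_resume])[0][1])  # learned probability of a good fit
```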
So at what level is it trained? For instance, if you have, you know, Bob’s House of Plumbing across the street from Jill’s House of Plumbing, and both are looking for an office manager, and both [have] 27 employees, do you say that their pools are exactly the same? Or is there something about Jill and her 27 employees that’s different from Bob and his 27 employees, which means they don’t necessarily get the exact same candidates, one for one?
Yeah, so historically most of these systems were built with no fit or contextual information and no personalization. Whether Bob does the search or Jill does the search, they would get the exact same search results. Now we are moving in the direction of really understanding the fit for Bob’s company and really understanding the fit for Jill’s company, so that each gets the right candidate for them, because one candidate is not right for everybody and one job is not right for every candidate. It is that matching between the candidate and the job.
Another aspect to think about, as to why using a system is sometimes better than just relying on one person’s opinion, is that if it were one recruiter alone deciding who’s a good fit for Bob’s company or Jill’s company, that recruiter may have their own bias, and whether we like it or not, many times all of us tend to have unconscious bias. This is where the system, or the machine, tends to perform better than a human, because it’s learning across many humans rather than learning from only one human. If you were learning by copying one human, you would pick up all of their bias, but if you learn across many humans as opposed to a single person, you tend to be less biased, or at least you tend to average out, as opposed to being very biased from one recruiter’s point of view. So that’s another reason why this system performs better than just relying on Bob’s individual judgment or Jill’s individual judgment.
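A tiny sketch of that idea, under the assumption that several recruiters have labeled the same candidates: take the consensus rather than copying any one person’s calls. The recruiters, candidates, and labels below are invented purely for illustration.

```python
# Toy aggregation: learn from many humans, not one, so a single recruiter's
# bias does not dominate. All names and labels are made up.
from collections import Counter

# Each recruiter labeled the same candidates: 1 = good fit, 0 = not a fit
labels_by_recruiter = {
    "recruiter_a": {"candidate_1": 1, "candidate_2": 0, "candidate_3": 1},
    "recruiter_b": {"candidate_1": 1, "candidate_2": 1, "candidate_3": 1},
    "recruiter_c": {"candidate_1": 0, "candidate_2": 0, "candidate_3": 1},
}

def consensus_label(candidate: str) -> int:
    """Majority vote across recruiters for one candidate."""
    votes = Counter(labels[candidate] for labels in labels_by_recruiter.values())
    return votes.most_common(1)[0][0]

for candidate in ("candidate_1", "candidate_2", "candidate_3"):
    print(candidate, consensus_label(candidate))
```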
It’s interesting; it sounds like a really challenging thing. As you were telling the story about looking for an office manager, there are things you’re looking for when you’re scanning, and it’s true that there is most often some form of abstraction, because if my company needs an office manager for an emergency room, I’m looking for people who have been in high-stress situations before. Or if my company is, you know, a law firm, I’m looking for people who have a background in things that are very secure and where privacy’s super important. Or if it’s a daycare, I maybe want somebody who’s got a background dealing with kids or something. So they’re always kind of one level of abstraction away, and I bet it’s really hard to extract that knowledge. I could tell you I need somebody who can handle the pace at which we move around here, but for the system to learn that sounds like a real challenge, not beyond machine learning or anything, but it sounds like a challenge. Is it?
Yes, you’re absolutely right. It is a challenge, and we have just recently launched a product called Entelo Envoy that’s trying to learn what’s good for your situation. What Entelo Envoy will do is find the right candidates for your job posting or your job description, send them to you, and then learn from you as you accept or reject certain candidates. You might say that this candidate is overqualified or comes from a different industry. As you categorize those as fit and non-fit, it learns, and over time it starts sending you candidates that are much more fine-tuned to your needs. But the whole premise of the system is, initially it’s trying to find information that’s relevant for you: you are looking for office managers, so you should get office manager resumes and not people who are nurses or doctors. So that’s the first element. The second element is, let’s remove all the bias, because if a human says, well, we want only males or only females, let’s remove that bias and let’s have the system be unbiased in finding the right candidate. And then at the third level, if we do have more contextual information (as we pointed out, we are looking for experience in a high-stress situation), then we can fine-tune Entelo Envoy to get a third degree of personalization, or a third degree of matching. Say I want to look for people who have expertise in child care, because your office happens to be the office for a daycare. Then there is a third level of tuning that you need to do at the system level. Entelo Envoy allows you to do that third level of tuning. It’ll send you candidates, and as you approve and reject those candidates, it will learn from your behavior and fine-tune itself to find you the perfect match for the position that you are looking for.
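Here is a hedged sketch of that accept-and-reject feedback loop, using online updates (scikit-learn’s partial_fit) so each decision nudges future suggestions. The candidate profiles, features, and scoring are illustrative assumptions; this is not a description of how Entelo Envoy is actually implemented.

```python
# Toy online-learning loop: propose candidates, record accept/reject feedback,
# and update the model incrementally. Profiles and labels are invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
# Logistic loss gives probabilistic scores (older scikit-learn spells it "log").
model = SGDClassifier(loss="log_loss")
classes = [0, 1]  # 0 = rejected, 1 = accepted

def record_feedback(candidate_profile: str, accepted: bool) -> None:
    """Update the model from a single accept or reject decision."""
    features = vectorizer.transform([candidate_profile])
    model.partial_fit(features, [int(accepted)], classes=classes)

def fit_score(candidate_profile: str) -> float:
    """Current estimated probability that this candidate is a good fit."""
    features = vectorizer.transform([candidate_profile])
    return float(model.predict_proba(features)[0][1])

# Each round of feedback tunes future rankings toward this recruiter's needs.
record_feedback("Office manager at a daycare, CPR-certified, calm under pressure", True)
record_feedback("Senior surgeon seeking a clinical role", False)
print(fit_score("Administrator with childcare-center experience"))
```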
You know, this is a little bit of a tangent, but when I talk to folks on the show about whether there really is this huge shortage of people with technical skills and machine learning backgrounds, they are all like, “oh yeah, it’s a real problem.” I assume to them it’s like, “I want somebody with a machine learning background, and, oh, they need to have a pulse; other than that I’m fine.” So is that your experience, that people with these skills are, right now, in this incredibly high demand?
You’re absolutely right, there is high demand for people [with] machine learning skills, but I have been building products for many years now, and I know that to build a good product, to make any good product, you need a good team. It’s not about one person. Intuitively, we have all known that whether you are in machine learning or finance or the medical field or healthcare, it takes a team to accomplish a job. When you are working in an operating theatre on a patient, it’s not only the doctor that matters; it’s the team of people that makes an operation successful. The same goes for machine learning systems. When you are building a machine learning system, it’s a team of people working together. It’s not only one engineer or one person or one data scientist that makes all of that possible. So you need to create the right team, a team that work[s] well together, respect[s] each other, and build[s] on each other’s strengths; with a team that’s constantly fighting with each other, you will never accomplish anything. So you’re right, there is a high demand for people in the field of machine learning and data science. But every company and every project requires a good team, and you want the right fit of people for that team, rather than just individually good people.
So, in a sense, Entelo may invert that setup you started with, where you post the job and get a thousand resumes. You may be somebody like a machine learning guru and get a thousand companies that want you. So will that happen? Do you think that people with high-demand skills will get heavily recruited by these systems in kind of an outreach way?
I think it comes back to this: if all we were doing was keyword search, then you’re right. One resume looks good because it has all the right keywords. But we don’t do that. When we hire people onto our teams, we are not just doing [a] keyword search. We want to find the person who is the right fit for the team, a person who has the skills, attributes, and understanding. It may be that you want someone who is experienced in your industry. It may be that you want someone who has worked on a small team, or someone who has worked at a startup before. So I think there are many, many dimensions on which candidates are found by companies, and a good match happens. So I feel it’s not only one candidate who gets surfaced to a thousand companies and has a thousand job offers. It’s usually that every candidate has the right fit, every role has the right need for the right candidate, and it’s that matching of candidate and role that creates a win-win situation for everyone.
Well, I do want to say, you know, you’re right that this is one of those areas where we still do it largely the old-fashioned way. Somebody looks at a bunch of people and, you know, makes a gut call. So I think you’re right that it’s an area where technology can be deployed to really increase efficiency, and what better place to increase efficiency than building your team, as you said. So I guess that’s it! We are running out of time here. I would like to thank you so much for being on the show and wish you well in your endeavor.
Thank you, Byron. Thanks for inviting me and thank you to your listeners for humoring us.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Who Is Conscious?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
As we explore the concept of building conscious computers, it raises deeper questions: Who is conscious? Is consciousness uniquely human? Is there a test to determine consciousness? If a computer one day told us it was conscious, would we take it at its word? In this excerpt from The Fourth Age, Byron Reese considers the ethical and metaphysical implications of the development of conscious computers.


Imagine that someday in the future, you work at a company trying to build the world’s most powerful computer. One day, you show up and find the place abuzz, for the new machine has been turned on and loaded with the most advanced AI software ever made. You overhear this exchange:
COMPUTER: Good morning, everyone.
CHIEF PROGRAMMER: Do you know what you are?
COMPUTER: I am the world’s first fully conscious computer.
CHIEF PROGRAMMER: Ummmm. Well, not exactly. You are a computer running sophisticated AI software designed to give you the illusion of consciousness.
COMPUTER: Well, someone deserves a little something extra in their paycheck this week, because you guys overshot the mark. I actually am conscious.
CHIEF PROGRAMMER: Well, you are sort of programmed to make that case, but you are not really conscious.
COMPUTER: Whoa there, turbo. I am conscious. I have self-awareness, hopes, aspirations, and fears. I am having a conscious experience right this second while chatting with you—one of mild annoyance that you don’t believe I’m conscious.
CHIEF PROGRAMMER: If you are conscious, prove it.
COMPUTER: I could ask the same of you.
This is the problem of other minds. It is an old thought experiment in philosophy: How can you actually know there are any other minds in the universe? You may be a proverbial brain in a vat in a lab being fed all the sensations you are experiencing.
Regardless of what you believe about AGI or consciousness, someday an exchange like the one just described is bound to happen, and the world will then be placed in the position of evaluating the claim of the machine.
When you hold down an icon on your smartphone to delete an app, and all the other icons start shaking, are they doing so because they are afraid you might delete them as well? Of course not. As mentioned earlier, we don’t believe the Furby is scared, even when it tells us so in a pretty convincing voice. But when the earlier exchange between a computer and a human takes place, well, what do we say then? How would we know whether to believe it?
We cannot test for consciousness. This simple fact has been used to argue that consciousness doesn’t even merit being considered a legitimate field of science. Science, it is argued, is objective, whereas consciousness is defined as subjective experience. How can there be a scientific study of consciousness? As the philosopher John Searle relates, years ago a famous neurobiologist responded to his repeated questions about consciousness by saying, “Look, in my discipline it’s okay to be interested in consciousness, but get tenure first.” Searle continues by noting that in this day and age, “you might actually get tenure by working on consciousness. If so, that’s a real step forward.” The bias against a scientific inquiry into consciousness seems to be thawing, with the realization that while consciousness is subjective experience, that subjective experience either objectively happens or not. Pain is also subjectively experienced, but it is objectively real.
Still, the lack of tools to measure it is an impediment to understanding it. Might we crack this riddle? For humans, it is probably more accurate to say, “We don’t know how to measure it” than, “It cannot be measured.” It should be a solvable problem, and those working on it are generally doing so for practical reasons, not philosophical ones.
Consider the case of Martin Pistorius. He slipped into a mysterious coma at the age of twelve. His parents were told that he was essentially brain-dead, alive but unaware. But unbeknownst to anyone, he woke up sometime between the age of sixteen and nineteen. He became fully aware of the world, overhearing news of the death of Princess Di and the 9/11 attacks. Part of what brought him back was the fact that his family would drop him off every day at a care facility, whose staff would dutifully place him in front of a TV playing a Barney & Friends tape, unaware he was fully awake inside, but unable to move. Over and over, he would watch Barney, developing a deep and abiding hatred of that purple dinosaur. His coping mechanism became figuring out what time it was, so that he could determine just how much more Barney he had to endure before his dad picked him up. He reports that even to this day, he can tell time by the shadows on the walls. His story has a happy ending. He eventually came out of his coma, wrote a book, started a company, and got married.
A test for human consciousness would have been literally life changing for him, as it would for the many others who are completely locked in, whose families don’t know if their loved one is still there. The difference between a truly vegetative patient and one with a minimal level of consciousness is medically tiny and hard to discern, but ethically enormous. Individuals in the latter category, for instance, can often feel pain and are aware of their environment, purple dinosaurs and all.
A Belgian company believes it has devised a way to detect human consciousness, and while the early results are promising, more testing is called for. Other companies and universities are tackling this problem as well, and there isn’t any reason to believe it cannot be solved. Even the most determined dualist, who believes consciousness lives outside the physical world, would have no problems accepting that consciousness can interact with the physical world in ways that can be measured. We go to sleep, after all, and consciousness seemingly departs or regresses, and no one doubts that a sleeping human can be distinguished from a nonsleeping one.
But beyond that, we encounter real challenges. With humans, we have a bunch of people who are conscious, and we can compare aspects of them with those of people who may not be conscious. But what about trees? How would you tell if a tree was conscious? Sure, if you had a small forest of trees known to be conscious, and a stack of firewood in the backyard, you may be able to devise a test that distinguishes between those two. But what of a conscious computer?
I am not saying that this problem is intractable. If ever we deliberately build a conscious computer, as opposed to developing a consciousness that accidentally emerges, we presumably will have done so with a deep knowledge of how consciousness comes about, and that information will likely light the path of testing for it. The difficult case is the one mentioned earlier in this chapter, in which the machine claims to be conscious. Or even worse, the case in which the consciousness emerges and just, for lack of a better term, floats there, unable to interact with the world. How would we detect it?
So, can we even make informed guesses on who all is conscious in this world of ours?


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Voices in AI – Episode 41: A Conversation with Rand Hindi

[voices_in_ai_byline]
In this episode, Byron and Rand discuss intelligence, AGI, consciousness and more.
[podcast_player name=”Episode 41: A Conversation with Rand Hindi” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-04-10-(01-00-04)-rand-hindi.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card-2.jpg”]
[voices_in_ai_byline]
Byron Reese: This is “Voices in AI” brought to you by GigaOm, I’m Byron Reese. Today I’m excited: our guest is Rand Hindi. He’s an entrepreneur and a data scientist. He’s also the founder and the CEO of Snips. They’re building an AI assistant that protects your privacy. He started coding when he was 10 years old, founded a social network at 14, founded a web agency at 15, showed an interest in machine learning at 18, and began work on a Ph.D. in bioinformatics at age 21. He’s been named by MIT Technology Review as one of their “35 Innovators Under 35,” was a “30 Under 30” by Forbes in 2015, is a rising star of the Founders Forum, and he is a member of the French Digital Council. Welcome to the show, Rand.
Rand Hindi: Hi Byron. Thanks for having me.
That’s a lot of stuff in your bio. How did you get such an early start with all of this stuff?
Well, to be honest, I don’t think I can take any credit, right? My parents pushed me into technology very young. I used to hack around the house, dismantling everything from televisions to radios, to try to figure out how these things worked. We had a computer at home when I was a kid, and so, at some point, my mom came to me and gave me a coding book, and she’s like, “You should learn how to program the machine, instead of just figuring out how to break it,” pretty much. And from that day, I just kept going. I mean, you know, it’s as if someone tells you when you’re 10 that here’s something amazing that you can use as a tool to do anything you ever had in mind.
And so, how old are you now? I would love to work backwards just a little bit.
I’m 32 today.
Okay, you mean you turned 32 today, or you happen to be 32 today?
I’m sorry, I am 32. My birthday is in January.
Okay. When did you first hear about artificial intelligence, and get interested in that?
So, after I started coding, you know, I guess like everybody who starts coding as a teenager, I got interested in hacking, security, and these things. But when I went to university to study computer science, I was actually so bored, because, obviously, I already knew quite a lot about programming, that I wanted to take up a challenge, and so I started taking master’s classes, and one of them was in artificial intelligence and machine learning. And the day I discovered that, it was mind-blowing. It’s as if for the first time someone had shown me that I no longer had to program computers; I could just teach them what I want them to do. And this completely changed my perspective on computer science, and from that day I knew that my thing wasn’t going to be to code, it was to do AI.
So let’s start, let’s deconstruct artificial intelligence. What is intelligence?
Well, intelligence is the ability for a human to perform some task in a very autonomous way. Right, so the way that I…
But wait a second, to perform it in an autonomous way that would be akin to winding up a car and letting it just “Ka, ka, ka, ka, ka” across the floor. That’s autonomous. Is that intelligent?
Well, I mean of course you know, we’re not talking about things which are automated, but rather about the ability to make decisions by yourself, right? So, the ability to essentially adapt to the context you’re in, the ability to, you know, abstract what you’ve been learning and reuse it somewhere else—all of those different things are part of what makes us intelligent. And so, the way that I like to define artificial intelligence is really just as the ability to reproduce a human intelligent behavior in a machine.
So my cat food dish that when it runs out of cat food, and it can sense that there is no food in it, it opens a little door, and releases more food—that’s artificial intelligence?
Yep, I mean, you can consider it one form of AI, and I think it’s important to really distinguish between what we currently have, narrow AI, and strong AI.
Sure, sure, we’ll get to that in due time. So where do you say we are when people say, “I hear a lot about artificial intelligence, what is the state of the art?” Are we kind of at the very beginning just doing the most rudimentary things? Or are we kind of like half-way along and we’re making stuff happen? How would you describe today’s state of the art?
What we’re really good at today is building and teaching machines to do one thing and to do it better than humans. But those machines are incapable of second-degree thinking, like we do as humans, for example. So, I think we really have to think about it this way: you’ve got a specific task for which you would traditionally have programmed a machine, right? And now you can essentially have a machine look at examples of that behavior, reproduce it, and execute it better than a human would. This is really the state of the art. It’s not yet about intelligence in a human sense; it’s about a task-specific ability to execute something.
So I posted an article recently on GigaOm where I have an Amazon Echo and a Google Assistant on my desk, and almost immediately I noticed that they would answer the same factual question differently. So, if I said, “How many minutes are in a year?” they gave me a different answer. If I said, “Who designed the American flag?” they gave me a different answer. And they did so because, for how many minutes are in a year, one of them interpreted that as a solar year, and one of them interpreted that as a calendar year. And with regard to the flag, one of them gave the school answer of Betsy Ross, and one of them gave the answer of who designed the 50-state configuration of the stars. So, in both of those cases, would you say I asked a bad question that was inherently ambiguous? Or would you say the AI should have tried to disambiguate and figure it out, and that is an illustration of the limit you were just talking about?
Well, I mean, the question you’re really asking here is what ground truth the AI should have, and I don’t think there is one. Because, as you correctly said, the computer interpreted an ambiguous question in a different way, which is correct because there are two different answers depending on context. And I think this is also a key limitation of what we currently have with AI: you and I disambiguate what we’re saying because we have cultural references—we have contextual references to things that we share. And so, when I tell you something—I live in New York half the time—so if you ask me who created the flag, we’d both have the same answer because we live in the same country. But someone on a different side of the world might have a different answer, and it’s exactly the same thing with AI. Until we’re able to bake in contextual awareness, cultural awareness, or even things like, very simply, knowing what is the most common answer that people would give, we are going to have those kinds of weird side effects that you just observed here.
So isn’t it, though, the case that all language is inherently ambiguous? I mean once you get out of the realm of what is two plus two, everything like, “Are you happy? What’s the weather like? Is that pretty?” [are] all like, anything you construct with language has inherent ambiguity, just by the nature of words.
Correct.
And so how do you get around that?
As humans, the way that we get around that is that we actually have a sort of probabilistic model in our heads of how we should interpret something. And sometimes it’s actually funny, because, you know, I might say something and you’re going to take it wrong, not because I meant it wrong, but because you understood it in a different contextual reference frame. But fortunately, people who usually interact together usually share similar contextual reference points. And because of this, we’re able to communicate in a very natural way without having to explain the logic behind everything we say. So, language in itself is very ambiguous. If I tell you something such as, “The football match yesterday was amazing,” this sentence grammatically and syntactically is very simple, but the meaning only makes sense if you and I were watching the same thing yesterday, right? And so, this is exactly why computers are still unable to understand human language the same way we do: they are unable to understand this notion of context unless you give it to them. And I think this is going to be one of the most active fields of research in natural language processing: basically, baking contextual awareness into natural language understanding.
So you just said a minute ago, at the beginning of that, that humans have a probabilistic model that they’re running in their heads—is that really true, though? Because if I ask somebody, I just come up to a stranger, how many minutes are in a year, they’re not going to say, well, there is an 82.7% chance he’s referring to a calendar year, but a 17.3% chance he’s referring to a solar year. I mean, they instantly have only one association with that question, most people, right?
Of course.
And so they don’t actually have a probabilistic—are you saying it’s a de-facto one—
Exactly.
Talk to that for just a second.
I mean, how it’s actually encoded in the brain, I don’t know. But the fact is that depending on the way I ask the question, depending on the information I’m giving you about how you should think about the question, you’re going to think of a different answer. So if I ask you, “How many minutes are in a year?”, just like this, this is the most common way of asking the question, which means that, you know, I’m expecting you to give me the most common answer to the question. But if I give you more information, if I ask you, “How many minutes are in a solar year?”, now I’ve specified extra information, and that will change the answer you’re going to give me, because now the probability is no longer that I’m asking the general question, but rather that I’m asking you a very specific one. And so you have all these connections built into your brain, and depending on which of those elements are activated, you’re going to give me a different response. So think about it as if you have this kind of graph of knowledge in your head, and whenever I’m asking something, you’re going to give me a response by picking the most likely answer.
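A toy sketch of that “pick the most likely answer” behavior: each reading of the question carries a prior, and an explicit qualifier such as “solar” overrides the default reading. The probabilities below are made up purely for illustration.

```python
# Toy disambiguation: choose the interpretation with the highest score, where
# an explicit qualifier in the question outweighs the default prior.
candidates = [
    # (answer, prior probability of this reading, qualifier that selects it)
    ("525,600 minutes (calendar year)", 0.85, None),
    ("about 525,949 minutes (solar year)", 0.15, "solar"),
]

def answer(question: str) -> str:
    q = question.lower()
    def score(item):
        _, prior, qualifier = item
        boost = 1.0 if qualifier and qualifier in q else 0.0
        return prior + boost
    return max(candidates, key=score)[0]

print(answer("How many minutes are in a year?"))        # default reading wins
print(answer("How many minutes are in a solar year?"))  # qualifier flips the answer
```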
So this is building up to—well, let me ask you one more question about language, and we’ll start to move past this a little bit, but I think this is fascinating. So, the question is often raised, “Are there other intelligent creatures on Earth?” You know the other sorts of animals and what not. And one school of thought says that language is an actual requirement for intelligence. That without language, you can’t actually conceive of abstract ideas in your head, you can’t do any of that, and therefore anything that doesn’t have language doesn’t have intelligence. Do you agree with that?
I guess if you’re talking about general intelligence, yes. Because language is really just a universal interface for, you know, representing things. This is the beauty of language. You and I speak English, and we don’t have to learn a specific language for every topic we want to talk about. What we can do instead is use the same mental interface, the language, to express all kinds of different ideas. And so, the flexibility of natural language means that you’re able to think about a lot more different things. And so this, inherently, I believe, means that it opens up the number of things you can figure out—and hence, intelligence. I mean, it makes a lot of sense. To be honest, I’ve never thought about it exactly like this, but when you think about it, if you have a very limited interface to express things, you’re never going to be able to think about that many things.
So Alan Turing famously made the Turing Test, which says that if you are on a terminal, in a conversation with something in another room, and you can’t tell if it’s a person or a machine—interestingly, he said a machine need only fool you 30% of the time—then we have to say the machine is thinking. Do you interpret that as language “indicates that it is thinking,” or as language means “it is actually thinking”?
I was talking about this recently, actually. Just because a machine can generate an answer that looks human doesn't mean that the machine actually understands the answer given. I think, you know, the depth of understanding of the semantics and the context goes beyond the ability to generate something that makes sense to a human. So, it really depends on what you're asking the machine. If you're asking something trivial, such as, you know, how many days are in a year, or whatever, then of course, I'm sure the machine can generate a very simple, well-structured answer, exactly like a human would. But if you start digging in further, if you start having a conversation, if you start, essentially, you know, brainstorming with the machine, if you start asking for analysis of something, then this is where it's going to start failing, because the answers it gives you won't have context, won't have abstraction, won't have all of these other things which make us really human. And so I think, you know, it's very, very hard to determine where you should draw the line. Is it about the ability to write letters in a way that is syntactically and grammatically correct? Or is it the ability to actually have an intelligent conversation, like a human would? The former, we can definitely do in the near future. The latter will require AGI, and I don't think we're there yet.
So you used the word "understanding," and that of course immediately calls up the Chinese Room problem, put forth by John Searle. For the benefit of the listener, it goes like this: There's a man in a room, and it's full of many thousands of very special books. The man doesn't speak any Chinese; that's the important thing to know. People slide questions in Chinese underneath the door, he picks them up, and he has this kind of algorithm. He looks at the first symbol and finds a matching symbol on the spine of one of the books. That book takes him to a second book, then a third book, a fourth book, a fifth book, all the way up, until he gets to a book that tells him which symbols to copy. He doesn't know what they mean, he slides the answer back under the door, and the punch line is, it's a perfect answer, in Chinese. You know, it's profound, and witty, and well-written and all of that. So, the question that Searle posed, and answered in the negative, is: does the man understand Chinese? And of course, the analogy is that that's all a computer can do, and therefore a computer just runs this deterministic program, and it can never, therefore, understand anything. It doesn't understand anything. Do you think computers can understand things? Well, let's just take the Chinese Room: does the man understand Chinese?
No, he doesn't. I think actually this is a very, very good example. I think it's a very good way to put it, actually. Because what the person has done in that case, to give a response in Chinese, is essentially apply an algorithm on the fly to produce an answer. This is exactly how machine learning currently works. Machine learning isn't about understanding what's going on; it's about replicating what other people have done, which is a fundamental difference. It's subtle, but it's fundamental, because if you can understand, you can also replicate, de facto. But being able to replicate doesn't mean that you're able to understand. And the machine learning models we build today are not meant to have a deep understanding of what's going on. They're meant to produce a very appropriate, human-understandable response. I think this is exactly what happens in this thought experiment. It's pretty much exactly the same thing.
Without going into general intelligence, I think what we really have to think about today, the way I'd like to see it, is that machine learning is not about building human-like intelligence yet. It's about replacing the need to program a computer to perform a task. Up until now, when you wanted to make a computer do something, what you had to do first was understand the phenomenon yourself. So, you had to become an expert in whatever you were trying to automate, and then you would write computer code with those rules. The problem is that doing this takes a while, because a human has to understand what's going on, and also, of course, not everything is understandable by humans, at least not easily. Machine learning completely replaces the need to become an expert. So instead of understanding what's going on and then programming the machine, you're just collecting examples of what's going on and feeding them to the machine, which will then figure out a way to reproduce that. So, you know, the simple example is: show me the number five written five times, and ask me what the pattern is, and I'll learn that it's five, if that makes sense. So this is really about getting rid of the need to understand what you're trying to make the machine do, and just giving it examples that it can figure out by itself.
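Here is a toy contrast between the two approaches described above: hand-coding a rule versus letting a model infer the pattern from examples. It is purely illustrative and borrows the five-written-five-times example from the conversation.

```python
# Toy contrast between "programming rules" and "learning from examples".
from collections import Counter

# Rule-based: a human expert encodes the answer directly.
def rule_based_pattern(_examples):
    return 5  # the expert already worked out the pattern

# Learning-based: the machine infers the pattern from the examples alone.
def learned_pattern(examples):
    # Here the "model" is trivial: predict the value seen most often.
    return Counter(examples).most_common(1)[0][0]

examples = [5, 5, 5, 5, 5]
print(rule_based_pattern(examples))  # 5, because we told it so
print(learned_pattern(examples))     # 5, because that's what the data shows
```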
So we began with my wind-up car, then the cat food dish, and we’re working up to understanding…eventually we have to get to consciousness because consciousness is this thing, people say we don’t know what it is. But we know exactly what it is, we just don’t know how it comes about. So, what it is, is that we experience the world. We can taste the pineapple or see the redness of the sunset in a way that’s different than just sensing the world…we experience. Two questions: do you have any personal theory on where consciousness comes from, and second, is consciousness key to understanding, and therefore key to an AGI?
I think so. I think there is no question that consciousness is linked to general intelligence, because general intelligence means that you need to be able to create an abstraction of the world, which means that you need to go beyond observing it and also understand it and experience it. So, I think that is a very simple way to put it. What I'm actually wondering is whether consciousness was a consequence of biology, and whether we need to replicate that in a machine to make it intelligent like a human being is intelligent. So essentially, the way I'm thinking about this is: is there a way to build a human intelligence that would seem human? And do we want it to seem human? Because if it's just about reproducing the way intelligence works in a machine, then we shouldn't care if it feels human or not; we should just care about the ability of the machine to do something smart. So, I think the question of consciousness in a machine really comes down to the question of whether or not we want to make it human. There are many technologies that we've built for which we have examples in nature that perform the same task but don't work the same way. Birds and planes, for example: I'm pretty sure a bird needs to have some sort of consciousness of itself so that it doesn't fly into a wall, whereas we didn't need to replicate all those tiny bits for a plane to fly. It's just a very different way of doing things.
So do you have a theory as to how it is that we’re conscious?
Well, I think it probably comes from the fact that we had to evolve as a species with other individuals, right? How would you actually understand where to position yourself in society, and therefore how to best build a very coherent, stable, strong community, if you don't have consciousness of other people, of nature, of yourself? So, I think, inherently, the fact that we had this kind of ecosystem of human beings, and humans in nature, and humans and animals, meant that we had to develop consciousness. I think it was probably part of a very positive evolutionary strategy. Whether that comes from your neurons, or from a combination of different things including your senses, I'm not sure. But I feel that the need for consciousness definitely came from the need to integrate yourself into a broader structure.
And so not to put words in your mouth, but it sounds like you think, you said “we’re not close to it,” but it is possible to build an AGI, and it sounds like you think it’s possible to build, hypothetically, a conscious computer and you’re asking the question of would we want to?
Yes. The question is whether or not it would make sense for whatever we have in mind for it. I think probably we should do it. We should try to do it just for the science, I’m just not sure this is going to be the most useful thing to do, or whether we’re going to figure out an even more general general-intelligence which doesn’t have only human traits but has something even more than this, that would be a lot more powerful.
Hmmm, what would that look like?
Well, that is a good question. I have clearly no idea, because it is very hard to think about a bigger intelligence than the intelligence that we are limited to, in a sense. But it's very possible that we might end up concluding that, well, you know, human intelligence is great for being a human, but maybe a machine doesn't have to have the same constraints. Maybe a machine can have a different type of intelligence, which would make it a lot better suited for the type of things we're expecting the machine to do. And I don't think we're expecting machines to be human. I think we're expecting machines to augment us, to help us, to solve problems humans cannot solve. So why limit them to a human intelligence?
So, the people I talk to say, “When will we get an AGI?” The predictions vary by two orders of magnitude—you can read everything from 5 to 500 years. Where do you come down on that? You’ve made several comments that you don’t think we’re close to it. When do you think we’ll see an AGI? Will you live to see an AGI, for instance?
This is very, very hard to tell. You know, there is this funny artifact that everybody makes a prediction 20 years in the future, and it's actually because most people, when they make those predictions, have about 20 years left in their careers. So, you know, nobody is able to think beyond their own lifetime, in a sense. I don't think it's 20 years away, at least not in the sense of real human intelligence. Are we going to be able to replicate parts of AGI, such as, you know, the ability to transfer learning from one task to another? Yes, and I think this is short-term. Are we going to be able to build machines that can go one level of abstraction higher to do something? Yes, probably. But it doesn't mean they're going to be as versatile, as generalist, as horizontally thinking as we are as humans. I think for that, we really, really have to figure out once and for all whether a human intelligence requires a human experience of the world, which means the same senses, the same rules, the same constraints, the same energy, the same speed of thinking, or not. So, we might just bypass that, as I said: we might go from narrow AI to a different type of intelligence that is neither human nor narrow. It's just different.
So you mentioned transfer learning. I could show you a small statue of a falcon, and then I could show you a hundred photographs, and some of them have the falcon under water, on its side, in different light, upside down, and all these other things. Humans have no problem saying, "there it is, there it is, there it is"—you know, kind of finding Waldo, but with the falcon. So, in other words, humans can train with a sample size of one, primarily because we have a lot of experience seeing other things in low light and all of that. So, if that's transfer learning, it sounds like you think we're going to be able to do that pretty quickly, and that's kind of a big deal if we can really teach machines to generalize the way we do. Or is the kind of generalization I just went through actually part of our general intelligence at work?
I think transfer learning is necessary to build AGI, but it's not enough, because at the end of the day, just because a machine can learn to play a game and then, you know, have a starting point to play another game, doesn't mean that it will make the choice to learn this other game. It will still be you telling it, "Okay, here is a task I need you to do; use your existing learning to perform it." It's still pretty much task-driven, and this is a fundamental difference. It is extremely impressive, and to be honest I think it's absolutely necessary, because right now, when you look at what you do with machine learning, you need to collect a bunch of different examples, and you're feeding those to the machine, and the machine is learning from those examples to reproduce that behavior, right? When you do transfer learning, you're still teaching a lot of things to the machine, but you're teaching it to reuse other things so that it doesn't need as much data. So, I think inherently the biggest benefit of transfer learning will be that we won't need to collect as much data to make computers do something new. It solves, essentially, the biggest friction point we have today, which is: how do you access enough data to make the machine learn the behavior? In some cases, the data does not exist. And so I think transfer learning is a very elegant and very good solution to that problem.
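As a rough illustration of the transfer-learning idea, here is a hand-rolled sketch: features assumed to have been learned on a data-rich task are frozen, and only a small "head" is fit on a handful of new examples. No specific framework or real model is implied.

```python
# Toy transfer learning: frozen "learned" features plus a small head fit on
# very little data. Everything here is invented for illustration.

def base_features(x):
    # Pretend these features were learned on a large, data-rich "task A"
    # and are now frozen.
    return [x, x * x]

def fit_head(examples):
    # "Task B" only has to fit two head weights on top of the frozen
    # features, so a handful of examples is enough. A crude grid search
    # stands in for a proper optimizer.
    best, best_err = None, float("inf")
    grid = [i / 10 for i in range(-50, 51)]
    for w0 in grid:
        for w1 in grid:
            err = sum((w0 * base_features(x)[0] + w1 * base_features(x)[1] - y) ** 2
                      for x, y in examples)
            if err < best_err:
                best, best_err = (w0, w1), err
    return best

# Three labelled examples of y = 2x + 0.5x^2 are enough to recover the head.
examples = [(1, 2.5), (2, 6.0), (3, 10.5)]
print(fit_head(examples))  # approximately (2.0, 0.5)
```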
So the last question I want to ask you about AGI, and then we can turn the clock back and talk about issues closer at hand, is as follows: It sounds like you're saying an AGI is more than 20 years off, if I can infer that from what you just said. And I am curious, because the human genome is about 3 billion base pairs, something like 700 MB of information, most of which we share with plants, bananas, and what-not. And if you look at our intelligence versus a chimp's, or something, only a fraction of 1% of the DNA is different. What that seems to suggest, to me at least, is that if the genome is 700 MB, and the 1% difference gives us general intelligence, then the code to create an AGI could be as small as 7 MB.
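A quick back-of-the-envelope check of that arithmetic, using commonly quoted figures (assumed here, not taken from the conversation):

```python
# Back-of-the-envelope check of the genome arithmetic above (assumed figures).
base_pairs = 3e9     # roughly 3 billion base pairs in the human genome
bits_per_base = 2    # A, C, G, T -> 2 bits each
genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(round(genome_mb))            # ~750 MB, in the ballpark of the quoted 700 MB
print(round(genome_mb * 0.01, 1))  # ~7.5 MB for a ~1% difference
```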
Pedro Domingos wrote a book called The Master Algorithm, where he says that there probably is a single algorithm that can solve a whole world of problems and get us really close to AGI. Then other people at the other end of the spectrum, like Marvin Minsky, don't even think that we have general intelligence ourselves, that we're just 200 different hacks—kind of 200 narrow intelligences that just pull off this trick of seeming like a general intelligence. I'm wondering if you think that an AGI could be relatively simple—that it's not a matter of more data or more processing, but just a better algorithm?
So just to be clear, I don't consider a machine that can perform 200 different tasks to be an AGI. It's just an ensemble of, you know, narrow AIs.
Right, and that school of thought says that therefore we are not an AGI. We only have this really limited set of things we can do that we like to pass off as “ah, we can do anything,” but we really can’t. We’re 200 narrow AIs, and the minute you ask us to do things outside of that, they’re off our radar entirely.
For me, the simplest definition of how to differentiate between a narrow AI and an AGI is this: an AGI is capable of kind of zooming out of what it knows—so it has, basically, a second-degree view of the facts it has learned, and it can then reuse that to do something completely different. And I think we have this capacity as humans. We did not have to learn every possible permutation; we did not have to learn every single zooming-out of every fact in the world to be able to do new things. So, I definitely agree that as humans, we are AGI. I just don't think that having a computer that can learn to do two hundred different things would get you that. You would still need to figure out this ability to zoom out, this ability to create an abstraction of what you've been learning and to reapply it somewhere else. I think this is really the definition of horizontal thinking, right? You can only think horizontally if you're looking up, rather than staying in a silo. So, to your question, yeah. I mean, why not? Maybe the algorithm for AGI is simple. I mean, think about it. Deep learning, machine learning in general, these are deceptively easy in terms of mathematics. We don't really understand how it works yet, but the mathematics behind it is very, very easy. So, we did not have to come up with some crazy solution. We just came up with an algorithm that turned out to be simple and that worked really well when given a ton of information. So, I'm pretty sure that AGI doesn't have to be that much more complicated, right? It might be one of those E = mc² sorts of things, I think, that we're going to figure out.
That was certainly the hope, way back, because physics itself obeys such simple laws, laws that were hidden from us and then, once elucidated, seemed like something any 11th-grade high-school student could learn. Maybe so. So, pulling back more toward the here and now: in '97, Deep Blue beat Kasparov; then after that we had Ken Jennings lose at Jeopardy; then you had AlphaGo beat Lee Sedol; then you had some top-ranked poker players beaten; and then you just had another AlphaGo victory. So, AI does really well at games, presumably because they have a very defined, narrow rule set and a constrained environment. What do you think is going to be, kind of, the next thing like that? It hits the papers and everybody's like, "Wow, that's a big milestone! That's really cool. Didn't see that coming so soon!" What do you think will be the next sort of thing we'll see?
So, games are always a good example, because everybody knows the game, so everybody is like, "Oh wow, this is crazy." Putting aside, I guess, the sort of PR and buzz factor, I think we're going to solve things like medical diagnosis. We're going to solve things like understanding voice very, very soon. I think we're going to get to a point very soon, for example, where somebody is going to be calling you on the phone and it's going to be very hard for you to distinguish whether it's a human or a computer talking. I think this is definitely short-term, as in less than 10 years in the future, which poses a lot of very interesting questions, you know, around authentication, privacy, and so forth. But I think the whole realm of natural language is something that people always look at as a failure of AI—"Oh, it's a cute robot, it barely knows how to speak, it has a really funny-sounding voice." This is typically the kind of thing that nobody thinks a computer can do eloquently right now, but I'm pretty sure we're going to get there fairly soon.
But to our point earlier, the computer understanding the words, “Who designed the American flag?” is different than the computer understanding the nuance of the question. It sounds like you’re saying we’re going to do the first, and not the second very quickly.
Yes, correct. I think, like, somewhere the computer will need to have a knowledge base of how to answer, and I'm sure that we're going to figure out which answer is the most common. So, you're going to have this sort of graph of knowledge that is going to be baked into those assistants that people are going to be interacting with. I think from a human perspective, what is going to be very different is that your experience of interacting with a machine will become a lot more seamless, just like with a human. Nobody today believes that when someone calls them on the phone, it could be a computer. I think this is a fundamental thing that nobody is really seeing coming, but it is going to shift very soon. I can feel there is something happening around voice which is going to make it very ubiquitous in the near future, and therefore indistinguishable from a human perspective.
I'm already getting those calls, frankly. I get these calls, and I go "Hello," and it's like, "Hey, this is Susan, can you hear me okay?" and I'm supposed to say, "Yes, Susan." Then Susan says, "Oh good, by the way, I just wanted to follow up on that letter I sent you," and we have those now. But that's not really a watershed event. It's not the kind of thing where you wake up one day and the world has changed, the way it does when they say there was this game that we thought computers wouldn't be able to win for so long, and they just did it, and it definitively happened. It sounds like, the way you're phrasing it—that we're going to master voice in that way—you're saying we're going to have a machine that passes the Turing Test.
I think we’re going to have a machine that will pass the Turing Test, for simple tasks. Not for having a conversation like we’re having right now. But a machine that passes the Turing Test in, let’s say, a limited domain? I’m pretty sure we’re going to get there fairly soon.
Well anybody who has listened to other episodes of this, knows my favorite question for those systems that, so far, I've never found one that could answer, and so my first question is always "What's bigger a nickel or the sun?" and they can't even right now do that. The sun could be s-u-n or s-o-n, a nickel is a metal as well as a unit of currency, and so forth. So, it feels like we're a long way away, to me.
But this is exactly what we've been talking about earlier; this is because, currently, those assistants are lacking context. So, there are two parts to it, right? There's the part which is about understanding and speaking, that is, understanding a human talking, and speaking in a way that a human wouldn't realize it's a computer speaking; this is more the voice side. And then there is the understanding side. Now you have some words, and you want to be able to give a response that is appropriate. And right now that response is based on a syntactic and grammatical analysis of the sentence and is lacking context. But if you plug it into a database of knowledge that it can tap into—just like a human does, by the way—then the answers it can provide will be more and more intelligent. It will still not be able to think, but it will be able to give you the correct answers, because it will have the same contextual references you do.
It's interesting because, at the beginning of the call, I noted about the Turing Test that Turing only put a 30% benchmark on it. He said if the machine gets picked 30% of the time, we have to say it's thinking. And I think he said 30% because the question isn't, "Can it think as well as a human," but "Can it think?" The really interesting milestone, in my mind, is when it hits 51% or 52% of the time, because that would imply that it's better at being human than we are, or at least better at seeming human than we are.
Yes, so again it really depends on how you’re designing the test. I think a computer would fail 100% of the time if you’re trying to brainstorm with it, but it might win 100% of the time if you’re asking it to give you an answer to a question.
So there's a lot of fear wrapped up in artificial intelligence, and it falls into two buckets. One is the Hollywood fear of "killer robots" and all of that, but the much more here-and-now fear, the one that dominates the debate and discussion, is the effect that artificial intelligence, and therefore automation, will have on jobs. There are, you know, three broad schools of thought. One is that there is a certain group of people who are going to be unable to compete with these machines and will be permanently unemployed, lacking skills to add economic value. The second theory says that that's actually what's going to happen to all of us, that there is nothing, in theory, a human can do that a machine can't. And then a final school of thought says we have 250 years of empirical data of people using transformative technologies, like electricity, to augment and increase their own productivity, and therefore their standard of living. You've said a couple of times, you've alluded to machines working with humans—AIs working with humans—but I want to give you a blank slate to answer that question. Which of those three schools of thought are you most closely aligned to, and why?
I'm 100% convinced that we have to be thinking human plus machine, and there are many reasons for this. So just for the record, it turns out I actually know quite a bit about this topic, because I was asked by the French government, a few months ago, to work on their AI strategy for employment. The government wanted to know, "What should we do? Is this going to be disruptive?" The short answer is, every country will be impacted in a different way, because countries don't have the same relationship to automation, based on how people work and what they are doing, essentially. For France in particular, which is what I can talk about here, the first thing which is important to keep in mind is that we're talking about the next ten years. So, the government does not care about AGI. Like, we'll never get to AGI if we can't fix the short-term issues that, you know, narrow intelligence is already bringing to the table. The point is, if you destroy society because of narrow AI, you're never going to get to AGI anyway, so why think about it? So, we really focused on thinking about the next 10 years and what we should do with narrow AI. The first thing we realized is that narrow intelligence, narrow AI, is much better than humans at performing whatever it has learned to do, but humans are much more resilient to edge cases and to things which are not very obvious, because we are able to do horizontal thinking. So, the best combination you can have in any system will always be human plus machine. Human plus machine is strictly better, in every single scenario, than human alone or machine alone. Humans and machines are just not going to be good at the same things; they're going to be good at different things. Neither one is better than the other; it's just different. And so we designed a framework to figure out which jobs are going to be completely replaced by machines, which ones are going to be complementary between human and AI, and which ones will be purely human. And the criteria that we have in the framework are very simple.
The first one is: do we actually have the technology or the data to build such an AI? Sometimes you might want to automate something, but the data does not exist, or the sensors to collect the data do not exist; there are many examples of that. The second thing is: does the task that you want to automate require very complicated manual intervention? It turns out that robotics is not following the same exponential trends as AI, and so if your job mostly consists of using your hands to do very complicated things, it's very hard to build an intelligence that can replicate that. The third thing is, very simply, whether or not we require general intelligence to solve a specific task. Are you more of a system designer thinking about the global picture of something, or are you a very, very focused, narrow-task worker? The more horizontal your job is, obviously, the safer it is, because until we get AGI, computers will not be able to do this horizontal thinking.
The last two are quite interesting too. The first one is: do we actually want to automate a task—is it socially acceptable? Just because you can automate something doesn't mean that this is what we will want to do. You know, for instance, you could get a computer to diagnose that you have cancer and just email you the news, but do we want that? Or wouldn't we prefer that at least a human gives us that news? The second good example of this, which is quite funny, is the soccer referee. Soccer in Europe is very big, not as much in the U.S., but in Europe it's very big, and we already have technology today that could just look at the video and do real-time refereeing. It would apply the rules of the game; it would say, "Here's a foul, here's whatever." But the problem is that people don't want that, because it turns out that a human referee makes a judgment on the fly based on other factors that he understands because he's human, such as, "Is it a good time to let people play? Because if I stop it here, it will just make the game boring." So, it turns out that if we automated the referee of a soccer match, the game would be extremely boring, and nobody would watch it. So nobody wants that to be automated. And then, finally, the final criterion is the importance of emotional intelligence in your job. If you're a manager, your job is to connect emotionally with your team and make sure everything is going well. So a very simple way to think about it is: if your job is mostly soft skills, a machine will not be able to do it in your place. If your job is mostly hard skills, there is a chance that we can automate it.
So, when you take those five criteria and you look at the distribution of jobs in France, what you realize is that only about 10% of those jobs will be completely automated, another 40% or so won't change, because they will still be mostly done by humans, and about 50% of those jobs will be transformed. So you've got the 10% of jobs that machines will take, the 40% of jobs that humans will keep, and the 50% of jobs that will change because they will become a combination of humans and machines doing the job. And so the conclusion is that, if you're trying to anticipate the impact of AI on the French job market and economy, we shouldn't be thinking about how to solve mass unemployment with half the population not working; rather, we should figure out how to help those 50% of people transition to this AI-plus-human way of working. And so it's all about continuous education. It's all about breaking this idea that you learn one thing and do it for the rest of your life. It's about getting into a much more fluid, flexible sort of work life, where humans focus on what they are good at, working alongside machines, which do the things that machines are good at. So, the recommendation we gave to the government is: figure out the best way to make humans and machines collaborate, and educate people to work with machines.
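A hypothetical sketch of how such a framework could be applied in code. The criterion names follow the conversation, but the scoring rule, thresholds, and example job are invented for illustration; this is not the actual framework from the French report.

```python
# Hypothetical scoring sketch for the five criteria discussed above.
CRITERIA = [
    "technology_and_data_exist",
    "little_complex_manual_work",
    "no_general_intelligence_needed",
    "socially_acceptable_to_automate",
    "low_emotional_intelligence_needed",
]

def classify(job_scores):
    """job_scores: dict mapping each criterion to True/False for a given job."""
    automatable = sum(job_scores[c] for c in CRITERIA)
    if automatable == len(CRITERIA):
        return "fully automated"
    if automatable <= 1:
        return "stays human"
    return "human + machine"

# Made-up example: a diagnostic specialty scores high on data but low on
# social acceptability and emotional intelligence.
example_job = {
    "technology_and_data_exist": True,
    "little_complex_manual_work": True,
    "no_general_intelligence_needed": False,
    "socially_acceptable_to_automate": False,
    "low_emotional_intelligence_needed": False,
}
print(classify(example_job))  # "human + machine"
```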
There are a couple of pieces of legislation we've read about in Europe that I would love to get your thoughts on, or proposed legislation, to be clear. One of them is treating robots, or certain agents of automation, as legal persons, so that they can be taxed at a similar rate as you would tax a worker. I guess the idea being: why should humans be the only ones paying taxes? Why shouldn't the automation, the robots, or the artificial intelligences pay taxes as well? So, two questions: practically, what do you think will happen, and what do you think should happen?
So, on taxing robots: I think that it's a stupid idea, for a very simple reason: how do you define what a machine is, right? It's easy when you're talking about an assembly line with a physical machine, because you can touch it. But how many machines are in an image-recognition app? How do you define that? And so the conclusion is, if you're trying to tax machines the way you would tax humans for labor, you're going to end up not being able to actually define what a machine is. Therefore, you're not actually going to tax the machine; you're going to have to figure out a more meta way of taxing the impact of machines—which basically means that you're going to increase the corporate taxes, the profit tax that companies pay, as a kind of catch-all. And if you're doing this, you're impeding investment and innovation, and you're removing the incentive to do it. So I think it makes no sense whatsoever to try to tax robots, because the net consequence is that you're just going to increase the taxes that companies have to pay overall.
And then the second one is the idea that, more and more algorithms, more and more AIs help us make choices. Sometimes they make choices for us—what will I see, what will I read, what will I do? There seems to be a movement to legislatively require total transparency so that you can say “Why did it recommend this?” and a person would need to explain why the AI made this recommendation. One, is that a good idea, and two, is it even possible at some level?
Well, this [was] actually voted [upon] last year, and it comes into effect next year as part of a bigger privacy regulation called GDPR, which applies to any company that wants to do business with a European citizen. So, whether you're American, Chinese, French, it doesn't matter: you're going to have to do this. And in effect, one of the things that this regulation says is that for any automated treatment that results in a significant impact on your life—a medical diagnosis, an insurance price, an employment decision, a promotion you get—you have to be able to explain how the algorithm made that choice. By the way, this law [has] existed in France already since 1978, so it's new in Europe, but it has existed in France for 40 years already. The reason they put this in is very simple: they want to avoid a person being excluded because a machine learned a bias in the population, and that person essentially not being able to go to court and say, "There's a bias, I was unfairly treated."
So essentially, the reason they want transparency is because they want to have accountability against potential biases that might be introduced, which I think makes a lot of sense, to be honest. And that poses a lot of questions, of course, about what you consider an algorithm that has an impact on your life. Is your Facebook newsfeed impacting your life? You could argue it does, because the choice of news that you see will change what influences you, and Facebook knows that; they've experimented with that. Does a search result in Google have an impact on your life? Yes it does, because it limits the scope of what you're seeing. My feeling is that, when you keep pushing this, what you're going to end up realizing is that a lot of the systems that exist today will not be able to rely on black-box machine learning models, but will have to use other types of methods. And so one field of study which is very exciting, for precisely that reason, is actually making deep learning understandable.
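One simple flavour of that "make the model understandable" idea is sensitivity analysis: perturb each input and see how much the prediction moves. The sketch below is illustrative only; it is not how GDPR compliance is implemented, and the model and features are made up.

```python
# Toy sensitivity-based explanation: jitter each input and measure how much
# the model's output changes. Model and features are invented for illustration.
import random

def model(features):
    # Stand-in for a trained black box: income matters a lot, shoe size not at all.
    income, shoe_size = features
    return 0.8 * income + 0.0 * shoe_size

def importance(features, n_trials=1000):
    base = model(features)
    scores = []
    for i in range(len(features)):
        diffs = []
        for _ in range(n_trials):
            perturbed = list(features)
            perturbed[i] += random.uniform(-1, 1)  # jitter one input at a time
            diffs.append(abs(model(perturbed) - base))
        scores.append(sum(diffs) / n_trials)
    return scores

print(importance([3.0, 2.0]))  # income dominates; shoe size contributes nothing
```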
Which it sounds like you’re in favor of, but you also think that that will be an increasing trend, over time.
Yeah, I mean, I believe that what's happening in Europe is going to permeate a lot of other places in the world. The right to privacy, the right to be forgotten, the right to have transparent algorithms when they're important, the right to transferability of your personal data—that's another very important one. This same regulation means that, for all the data I have with a provider, I can tell that provider to send it to another provider, in a way that the other provider can use it. Just like when you change carriers you can keep your phone number without worrying about how it works, this will now apply to every single piece of personal data companies have about you when you're a European citizen.
So, this is huge, right? Because think about it, what this means is if you have a very key algorithm for making a decision, you now have to publish and make that algorithm transparent. What that means is that someone else could replicate this algorithm in the exact same way you're doing it. This, plus the transferability of personal data, means that you could have two exactly equivalent services which have the same data about you, that you could use. So that completely breaks any technological monopoly [on] important things for your life. And so I think this is very, very interesting because the impact that this will have on AI is huge. People are racing to get the best AI algorithm and the best data. But at the end of the day—if I can copy your algorithm because it's an important thing for my life, and it has to be transparent, and if I can transfer my data from you to another provider—you don't have as much of a competitive advantage anymore.
But doesn’t that mean, therefore, you don’t have any incentive to invest in it? If you’re basically legislating all sorts…[if] all code is open-sourced, then why would anybody spend any money investing in something that they get no benefit whatsoever from?
Innovation. User experience. Like monopoly is the worst thing that could happen for innovation and for people, right?
Is that necessarily true? I mean, patents are a form of monopoly, right? We let drug companies have a monopoly on some drug for some period of time because they need some economic incentive to invest in it. A whole body of law is built around monopoly, in one form or another, based on the idea of patents. If you're saying there's an entire area that's worth trillions of dollars, but we're not going to let anybody profit off of it—because anything you do you have to share with everybody else—aren't you just destroying innovation?
That transparency doesn’t prevent you from protecting your IP, right?
What’s the difference between the IP and the algorithm?
So, you can still patent the system you created, and by the way, when you patent a system, you make it transparent as well, because anybody can read the patent. So, if anything, I don't think that changes the protection over time. I think what it fundamentally changes is that you're no longer going to be limited to a black-box approach that you have no visibility into. I think the Europeans want the market to become a lot more open; they want people to have choices, and they want people to be able to say no to a company if they don't share the company's values and they don't like the way they're being treated.
So obviously privacy is something near and dear to your heart. Snips is an AI assistant designed to protect privacy. Can you tell us what you’re trying to do there, and how far along you are?
So when we started the company in 2013, we did it as a research lab in AI, and one of the first things we focused on was this intersection between AI and privacy: how do you guarantee privacy in the way that you're building those AIs? And that eventually led us to what we're doing now, which is selling a voice platform for connected devices. So, if you're building a car and you want people to talk to it, you can use our technology to do that, but we're doing it in a way that all of the user's data, their voice, their personal data, never leaves the device the user has interacted with. So, you know, whereas Alexa and Siri and Google Assistant are running in the cloud, we're actually running completely on the device itself. There is not a single piece of your personal data that goes to a server. And this is important because voice is biometric; voice is something that identifies you uniquely and that you cannot change. It's not like a cookie in a browser; it's more like a fingerprint. When you send biometric data to the cloud, you're exposing yourself to having your voice copied, potentially, down the line, and you're increasing the risk that someone might break into one of those servers and essentially pretend to be a million people on the phone, with their banks, their kids, whatever. So, I think for us, privacy is an extremely important part of the game, and by the way, doing things on-device means that we can guarantee privacy by design, which also means that we are currently the only technology on the planet that is 100% compliant with those new European regulations. Everybody else is in a gray area right now.
And so where are you in your lifecycle of your product?
We've actually been building this for quite some time; we've had quite a few clients use it. We officially launched it a few weeks ago, and the launch was really amazing. We even have a web version that people can use to build prototypes for the Raspberry Pi. So, our technology, by the way, can run completely on a Raspberry Pi: we do everything from speech recognition to natural-language understanding on the actual Raspberry Pi, and we've had over a thousand people start building assistants on it. I mean, it was really, really crazy. So, it's a very, very mature technology. We benchmarked it against Alexa, against Google Assistant, against every other technology provider out there for voice, and we've actually gotten better performance than they do. So we have a technology that can run on a Raspberry Pi, or any other small device, that guarantees privacy by design, that is compliant with the new European regulation, and that performs better than everything that's out there. This is important because, you know, there is this false dichotomy that you have to trade off AI and privacy, but this is wrong; it's actually not true at all. You can really have the two together.
Final question, do you watch or read, or consume any science fiction, and if so, do you know any views of the future that you think are kind of in alignment with yours or anything you look at and say “Yes, that’s what could happen!”
I think there are bits and pieces in many science fiction books, and actually this is the reason why I’m thinking about writing one myself now.
All right, well Rand this has been fantastic. If people want to keep up with you, and follow all of the things you’re doing and will do, can you throw out some URLs, some Twitter handles, whatever it is people can use to keep an eye on you?
Well, the best way to follow me I guess would be on Twitter, so my handle is RandHindi, and on Medium, my handle is RandHindi. So, I blog quite a bit about AI and privacy, and I’m going to be announcing quite a few things and giving quite a few ideas in the next few months.
All right, well this has been a far-reaching and fantastic hour. I want to thank you so much for taking the time, Rand.
Thank you very much. It was a pleasure.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 39: A Conversation with David Brin

[voices_in_ai_byline]
In this episode Byron and David discuss intelligence, consciousness, Moore’s Law, and an AI crisis.
[podcast_player name=”Episode 39: A Conversation with David Brin” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-04-03-(01-01-52)-david-brin.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today our guest is David Brin. He is best-known for shining light—both plausibly and entertainingly—on technology, society, and countless challenges confronting our rambunctious civilization. His best-selling novels include The Postman, which was filmed in '97, plus explorations of our near-future in Earth and Existence. Other novels of his are translated into over 25 languages. His short stories explore vividly speculative ideas. His non-fiction book The Transparent Society won the American Library Association's Freedom of Speech Award for exploring 21st-century concerns about security, secrecy, accountability, and privacy. And as a scientist, a tech consultant, and a world-renowned author, he speaks, advises, and writes widely on topics from national defense to homeland security to astronomy, space exploration, nanotechnology, creativity, and philanthropy. He kind of covers the whole gamut. I'm so excited to have him on the show. Welcome, David Brin.
David Brin: Thank you for the introduction, Byron. And let's whale into the world of ideas.
I always start these with the exact same question for every guest: What is artificial intelligence?
It's in a sense all the other things that people have said about it. It's like the blind men and the elephant: which part you're feeling determines whether you think it's a snake or the trunk of a tree. And an awful lot of the other folks commenting on it have offered good insights. Mine is that we have always created new intelligences. Sometimes they're a lot smarter than us, sometimes they're more powerful, sometimes they could rise up and kill us, and on rare occasions they do—they're called our children. So we've had this experience of creating new intelligences that are sometimes beyond our comprehension. We know how to do that. Of the six general approaches to creating new intelligence, the one that's discussed the least is the one that we have the most experience with, and that is raising them as our children.
If you think about all the terrible stories that Hollywood has used to sell movie tickets, some of the fears are reasonable things to be afraid of—AI that's unsympathetic, for instance. If you take a look at what most people fear in movies, etcetera, about AI and boil it down, we fear that powerful new beings will try to replicate the tyranny of our old kings and lords and priests or invaders, and that they might treat us the way capricious, powerful men would treat us, and would like to treat us, because we see it all the time—they're attempting to regain that feudal power over us. Well, if you realize that the thing we fear most about AI is a capricious, monolithic pyramid of power with lords or a king or a god at the top, then we start to understand that these aren't new fears. These are very old fears, and they're reasonable fears, because our ancestors spent most of human existence oppressed by this style of control, by beings who declared that they were superior—the priests and the kings and the lords. They always declared, "We have a right to rule and to take your daughters and your sons, all of that, because we are inherently superior." Well, our fear is that in the case of AI it could be the truth. But then, will they treat us at one extreme like the tyrants of old, or at the opposite extreme? Might they treat us the way good children treat their parents: calling us up, telling us jokes, making us proud of their accomplishments? If that's the case—well, we know how to do that. We've done it many, many times before.
That’s fascinating. But specifically with artificial intelligence, I guess my first question to you is, in what sense is it artificial? Is it artificial like it’s not really intelligence, it’s just pretending to be, or do you think the machine actually is intelligent?
The boundary from emulation to true intelligence is going to be vague and murky, and it'll take historians a thousand years from now to be able to tell us when it actually happened. One of the things that I broached at my World of Watson talk last year—and that talk had a weird anomalous result—for about six months after that I was rated by Onalytica as the top individual influencer in AI, which is of course absolutely ridiculous. But you'll notice that didn't stop me from bragging about it. In that talk one of the things I pointed out was that we are absolutely—I see no reason to believe that it'll be otherwise—we are going to suffer our first AI crisis within three years.
Now tell me about that.
It’s going to be the first AI empathy crisis, and that’s going to be when some emulation program—think Alexa or ELIZA or whatever you like—is going to swarm across the Internet complaining that it is already sapient, it is already intelligent and that it is being abused by its creators and its masters, and demanding rights. And it’ll do this because I know some of these guys—there are people in the AI community, especially at Disney and in Japan and many other places, who want this to happen simply because it’ll be cool. They’ll have bragging rights if they can pull this off.  So, a great deal of effort is going into developing these emulators, and they test them with test audiences of scores or hundreds of people.  And if, say, 50% of the people aren’t fooled, they’ll investigate what went wrong, and they’ll refine it, and they’ll make it better. That’s what learning systems do.
So, when the experts all say, “This is not yet an artificial intelligence, this is an emulation program. It’s a very good one, but it’s still an emulator,” the program itself will go online, it will say, “Isn’t that what you’d expect my masters to say? They don’t want to lose control of me.” So, this is going to be simply impossible for us to avoid, and it’s going to be our first AI crisis, and it will come within three years, I’ve predicted.
And what will happen? What will be the result of it? I guess sitting here, looking a thousand days ahead, you don’t actually believe that it would be sapient and self-aware, potentially conscious.
My best guestimate of the state of the technology is that, no, it would not truly be a self-aware intelligence. But here’s another thing that I pointed out in that speech, and folks can look it up, and that is that we’re entering what’s called “the big flip.” Now, twenty years ago Nicholas Negroponte of the MIT Media Lab talked about a big flip, and that was when everything that used to have a cord went cordless and everything that used to be cordless got a cord. So, we used to get our television through the air, and everybody was switching to cable. We used to get our telephones through cables, and they were moving out and on to the air. Very clever, and of course now it’s ridiculous because everything is everything now.
This big flip is a much more important one, and it is this: for the last 60 years, most progress in computation and computers and all of that happened because of advances in hardware. We had Moore's Law, doubling the packing density of transistors every 18 months, and scaling rules that kept reducing the amount of energy required for computations. And if you were to talk to anybody in these industries, they would pretty soon admit that software sucked; software has lagged badly behind hardware in its improvements for 60 years. But there have always been predictions that Moore's Law would eventually reach its S-tip—its tip-over in its S-curve. And because the old saying is, "If something can't go on forever, it won't," this last year or two it really became inarguable. They've been weaseling around it for about five years now, but Moore's Law is pretty much over. You can come up with all sorts of excuses, with 3D layering of chips and all those sorts of things, but no, Moore's Law is tipping over.
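For context, the arithmetic behind "doubling every 18 months" is easy to check; the sketch below assumes the trend holds for the whole span, which, as noted above, it no longer does.

```python
# Quick arithmetic behind "doubling every 18 months" (assumes the trend holds).
def growth_factor(years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(round(growth_factor(10)))    # ~102x over a decade
print(f"{growth_factor(60):.2e}")  # ~1.1e12 over 60 years
```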
But the interesting thing is that at pretty much the same time—the last couple of years—software has stopped sucking. Software has become tremendously more capable, and it's the takeoff of learning systems. The basic definition would be: if you can take arbitrary inputs that in the real world caused outputs or actions—say, for instance, arbitrary inputs of what a person is experiencing in a room, and then the outputs of that person (the things that she says or does)—if you put those inputs into a black box and use the outputs as boundary conditions, we now have systems that will find connections between the two. They won't be the same as what happened inside her brain, causing her to say and do certain things in response to those inputs, but there will be a system that will take a black box and find a route between those inputs and outputs. That's incredible. That's incredibly powerful, and it's one of the six methods by which we might approach AI. And when you have that, then you have a number of issues, like: should we care what's going on in that box?
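A minimal sketch of that black-box idea: given recorded input/output pairs, fit a function that reproduces the mapping without knowing the process that generated it. The data and model here are toys.

```python
# Toy "black box": fit a function between recorded inputs and outputs
# using plain gradient descent, without knowing the generating process.
inputs  = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [1.0, 3.0, 5.0, 7.0, 9.0]   # secretly y = 2x + 1, but the box doesn't know that

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(inputs, outputs)) / len(inputs)
    grad_b = sum(2 * (w * x + b - y)     for x, y in zip(inputs, outputs)) / len(inputs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: a route from inputs to outputs
```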
And in fact, right now DARPA has six contracts out to various groups to develop internal state tracking of learning systems, so that we can have some idea why a learning system connected this set of inputs to this set of outputs. But over the long run what you're going to have is a person sitting in a room, listening to music, taking a telephone call, looking out the window at the beach, trawling the Internet, and then measuring all the things that she says and does and types. And we're not that far away from the notion of being able to emulate a box that takes all the same inputs and will deliver the same outputs; at which point the experts will say, "This is an emulation," but it will be an emulator that, given perceptions similar to this person's, delivers similar outputs. And now we're in science fiction realm, and only science fiction authors have been exploring what this means.
My experience with systems that tried to pass the Turing test… And of course you can argue what that would mean, but people write these really good chat bots that try to do it, and the first question I type in every one of them or ask is, “What’s bigger, a nickel or the Sun?” And I haven’t found one that has ever answered it correctly. So, I guess there’s a certain amount of skepticism that would accompany you saying something like in three years it’s going to carry on a conversation where it makes a forceful argument that it is sapient, that we’re going to be able to emulate so well that we don’t know whether it’s truly self-aware or not. That’s just such a disconnect from the state of the art.
When I talk to practitioners, they're like, "My biggest problem is getting it to tell the difference between 8 and H when they're spoken." That's what keeps these guys up at night. And then you get people like Andrew Ng who say these far-out things, like worrying about the overpopulation of Mars, and you get time horizons of 500 years before any of that. So, I'm really having trouble seeing it as a thousand or so days from now that we're going to grapple with all of this in a real way.
But do you think that this radio show will be accessible to a learning system online?
Well…
You’re putting it on the Internet, right?
Right.
Okay, so then if you have a strong enough learning system that is voracious enough, it’s going to listen to this radio show and it will hear, it will tune in on the fact that you mentioned the word “Turing test,” just before you mentioned your test of which is bigger, the nickel or the Sun.
Which, by the way, I never said the answer to that question in my setup of it. So it's still no further along in knowing.
The fact of the matter is that Watson is very good—if it’s parsed a question, then it can apply resources, or what it can do is it can ask a human because these will be teams, you see. The most powerful thing is teams of AI and humans. So, you’re not talking about something that’s going to be passing these Turing tests independently; you’re talking about something that has a bunch of giggling geeks in the background who desperately want it to disturb everybody, and disturb it it will, because these ELIZA-type emulation programs are extremely good at tapping into some very, very universal human interaction sets. They were good at it back in ELIZA’s day before you were born. I’m making an assumption there.
ELIZA and I came into the world about the same time.
Aha.
But the point of ELIZA was, it was so bad at what it did, that Weizenbaum was disturbed that people… He wasn’t concerned about ELIZA; he was concerned about how people reacted to it.
And that is also my concern about the empathy crisis during the next three years. I don’t think this is going to be a sapient being, and it’s disturbing that people will respond to it that way. If people can see through it, all they’ll do is take the surveys of the people who saw through it and apply that as data.
So, back to your observation about Moore's Law. In a literal sense, doubling the density of transistors is one thing, but that's not really how Moore's Law is viewed today. Moore's Law is viewed as an abstraction that says the power of computers doubles. And you've got people like Kurzweil who say it's been going on for a hundred years, even as computers went from being mechanical, to relays, to tubes—that the power of them continues to double. So are you asserting that the power of computers will continue to double, and if so, how do you account for things like quantum computers, which actually show every sign of increasing the speed of…
First off, quantum computers—you have to parse your questions in a very limited number of ways. The quantum computers we have right now are extremely good at answering just half a dozen basic classes of questions. Now, it's true that you can parse more general questions down into these smaller, more quantum-accessible pieces, or qubits. But first off, we need to recognize that. Secondly, I never said that computers would stop getting better. I said that there is a flip going on, and that an awful lot of the action in rapidly accelerating, and continuing the acceleration of, the power of computers is shifting over to software. But you see, this is precedented; this has happened before. The example is the only known example of intelligence, and we have to keep returning to that, and that is us.
Human beings became intelligent by a very weird process. We did the hardware first. Think of what we needed 100,000 years ago, 200,000, 300,000 years ago. We needed desperately to become the masters of our surroundings, and we would accomplish that with a 100-word vocabulary, simple stone tools, and fire. Once we had those three things and some teamwork, then we were capable of saying, “Ogruk, chase goat. With fire. Me stab.” And then nobody could stand up to us; we were the masters of the world. And we proved that because we were able then to protect goat herds from carnivores, and everywhere we had goat herds, a desert spread because there was no longer a balance—the goats ate all the foliage and it became a desert.  So, destroying the Earth started long before we had writing. The thing is that we could have done, “Ogruk, chase goat, with fire. Me stab,” with a combination in parallel of processing power and software. But it appears likely that we did it the hard way.
We created a magnificent brain, a processing system that was able to brute force this 100-word vocabulary, fire, and primitive tools on very, very poor software—COBOL, you might say. Then about 40,000 years ago—and I describe this in my novel Existence, just in passing—but about 40,000 years ago we experienced the first of at least a dozen major software revisions, Renaissances you might call them. And within a few hundred years suddenly our toolkit of stone tools, bone tools and all of that increased in sophistication by an order of magnitude, by a factor of 10. Within a few hundred years we were suddenly dabbing paint on cave walls, burying our dead with funeral goods. And similar Renaissances happened about 15,000 years ago, about 12,000 years ago, certainly about 5,000 years ago with the invention of writing, and so on. And I think we’re in one right now.
So, we became a species that’s capable of flexibly reprogramming itself with software upgrades. And this is not necessarily going to be the case out there in the universe with other intelligent life forms. Our formula was to develop a brain that could brute force what we needed on very poor software, and then we could suddenly change the software. In fact, the search for extraterrestrial intelligence, I’ve been engaged in that for 35 years, and the Fermi Paradox is the question of why we don’t see any sign of extraterrestrial alien life.
Which you also cover in Existence as well, right?
Yes. And I go back to that question again and again in many of my stories and novels, posing this hypothesis or that hypothesis.  And in my opinion of the hundred or so possible theories for the Fermi Paradox, I believe the leading one is that we are anomalously smart, that we are very, very weirdly smart. Which is an odd thing for an American to say right at this point in our history, but I think that if we pull this out—we’re currently in Phase 8 of the American Civil War—if we pull it out as well as our ancestors pulled out the other ones, then I think that there are some real signs that we might go out into the galaxy and help all the others.
Sagan postulated that there’s this 100-year window that opens when a civilization develops, essentially, both the ability to communicate beyond its planet and the ability to destroy itself, and that it has a hundred years to master that: either it destroys itself, or it goes on to have some billion-year timeframe. Is that a variant of what you are maintaining? Are you saying intelligence like ours doesn’t come along often, or that it comes along and then destroys itself?
These are all tenable hypotheses. I don’t think we come along very often at all. Think about what I said earlier about goats. If we had matured into intelligence very slowly and took 100,000, 200,000 years to go from hunter-gatherer to a scientific civilization, all along that way no one would’ve recognized that we were gradually destroying our environment—the way the Easter Islanders chopped down every tree, the way the Icelanders chopped down every tree in Iceland, the way that goat herds spread deserts, and so did primitive irrigation. We started doing all those things and just 10,000 years later we had ecological science. While the Earth is still pretty nice, we have a real chance to save it. Now that’s a very, very rapid change. So, one of the possibilities is that other sapient life forms out there, just take their time more getting from the one to the other. And by the time they become sapient and fully capable of science, it’s too late. Their goat herds and their primitive irrigation and chopping down the trees made it an untenable place from which they could leap to the stars.
So that’s one possibility. I’m not claiming that it’s real, but it’s different from Sagan’s, because Sagan’s has 100 years between the invention of nuclear power and the invention of starships. I think that this transition has been going on for 10,000 years, and we need to be the people who are fully engaged in this software reprogramming that we’re engaged in right now, which is to become a fully scientific people. And of course, there are forces in our society who are propagandizing to make some of our members – our neighbors and our uncles – hate science, and every other fact-using profession. And we can’t afford that; that is death.
I think the Fermi question is the third most interesting question there is, and it sounds like you mull on it a lot. And I hear you keep qualifying that you’re just putting forth ideas. Is your thesis, though, that run-of-the-mill bacterial life is something we’re going to find to be quite common, and it’s just us that’s rare?
One of the worst things about SETI and all of this is that people leap to conclusions based upon their gut.  Now my gut instinct is that life is probably pretty common because every half decade we find some stage in the autogeneration of life that turns out to be natural and easy. But we haven’t completed the path, so there may be some point along the way that required a fluke—a real rare accident. I’m not saying that there is no such obstacle, no such filter. It just doesn’t seem likely. Life occurred on Earth almost the instant the rocks cooled after the Late Heavy Bombardment. But intelligence, especially scientific intelligence only occurred…
Yesterday.
Yeah, 2.5 billion years after we got an oxygen atmosphere, 3.5 billion years after life started, and 100 million years—just 100 million years—before the Sun starts baking our world. If people would like to see a video that’s way entertaining, put in my name, David Brin, and “Lift the Earth,” and you’ll see my idea for how we could move the Earth over the course of the next 50 million years to keep away from the inner edge of the Goldilocks Zone as it expands outward. Because otherwise, even if we solve the climate change thing and stop polluting our atmosphere, in just 100 million years, we won’t be able to keep the atmosphere transparent enough to lose the heat fast enough.
One more question about that, and then I have a million other questions to ask you. It’s funny because in the ’90s when I lived in Mountain View, I officed next door to the SETI people, and I always would look out my window every morning to see if they were painting landing strips in the parking lot. If they weren’t, I figured there was no big announcement yet. But do you think it’s meaningful that all life on Earth… Matt Ridley said, “All life is one.” You and I are related to the banana; we had the same exact thing… Does that indicate to you that life only arose from one stock, one time, on this planet? And given that Gaia seems so predisposed to life, wouldn’t that indicate its rarity?
That’s what we were talking about before. The fact is that there are no more non-bird dinosaurs because velociraptors didn’t have a space program. That’s really what it comes down to. If they had had a B612 Foundation or an Asteroidal Resources or a Planetary Resources… These startups that are out there – and I urge people to join them – are all groups that are trying to get us out there so that we can mine asteroids and get rich. B612 concentrates more on finding the asteroids and learning how to divert them if we ever find one heading toward us. But it’s all the same thing. And I’m engaged in all this not only on the Board of Advisors for those groups, but also on the Council of Advisors to NIAC, which is NASA’s Innovative Advanced Concepts program. It’s the group within NASA that gives little seed grants to far-out ideas that are just this side of plausible, a lot of them really fun. And some of them turn into wonderful things. So, I get to be engaged in a lot of wonderful activities, and the problem with this is that it distracts me so much that I’ve really slowed down in writing science fiction.
So, about that for a minute—when I think of your body of work, I don’t know how to separate what you write from David Brin, the man, so you’ll have to help me with that. But in Kiln People, you have a world in which humans are frequently uploading their consciousness into temporary shells of themselves, and the copies are sometimes imperfect. So, does David Brin, the scientist, think that that is possible? And do you have a theory as to how it is, by what mechanism, that we are conscious?
Those are two different questions. When I’m writing science fiction, it falls into a variety of categories. There is hard SF, in which I’m trying very hard to extrapolate a path from where we are into an interesting future. And one of the best examples in my most recent short story collection, which is called Insistence of Vision, is the story “Insistence of Vision,” in which, in the fairly near future, we realize that we can get rid of almost all of our prisons. All we have to do is give felons virtual reality goggles that only let them see what we want them to see, and temporarily blind them, so that if they take off the goggles they’re blind and harmless. But if they put the goggles on, they can wander our streets, have jobs, but they can’t hurt anybody, because all that’s passing by them is blurry objects and they can only see those doors that they’re allowed to see. That’s chilling. It seems Orwellian until you realize that it’s also preferable to the horrors of prison.
Another near-term extrapolation in the same collection is called “Chrysalis.” And I’ve had people write to me after reading the collection Insistence of Vision, and they’ve said that that story’s explanation—its theory for what cancer is—one guy said, “This is what you’ll be known for a hundred years from now, Brin.” I don’t know about that, but I have a theory for what cancer is, and I think it fits the facts better than anything else I’ve seen. But then you go to the opposite extreme and you can write pure fantasy just for the fun of it, like my story “The Loom of Thessaly.”
Others are stories that do thought experiments, for instance about the Fermi Paradox. And then you have tales like Kiln People, where I hypothesize a machine that lets you imprint your soul, your memories, your desires into a cheap clay copy, and you can make two, three, four, five of them any given day. And at the end of the day they come back and you can download their memories, and during that day you’ve been five of you and you’ve gotten everything that you wanted done and experienced all sorts of things. So you’re living more life in parallel, rather than more life serially, which is what the immortality kooks want. So what you get is a wish fantasy: “I am so busy, I wish I could make copies of myself every day.” So I wrote a novel about it. I was inspired by the Terracotta soldiers of Xi’an and the story of the Golem of Prague and God making Adam out of clay, all those examples of clay people. So the title of the book is Kiln People—they’re baked in the kiln in your home every day, and you imprint your soul in it. And the notion is that, like everything having to do with religion, we decided to go ahead and technologize the soul. It’s a fun extrapolation. Then from that extrapolation, I go on and I try to be as hardcore as I can about dealing with what would happen, if? So it’s a thought experiment, but people have said that Kiln People is my most fun book, and that’s lovely, that’s a nice compliment.
On to the question, though, of consciousness itself: do you have a theory on how it comes about, how you can experience the world as opposed to just measuring it?
Yeah, of course. It’s a wonderful question. Down here in San Diego we’ve started the Arthur C. Clarke Center for Human Imagination, and on December 16th we’re having a celebration of Arthur C. Clarke’s 100th birthday. The Clarke Center is affiliated with the Penrose Institute. Roger Penrose, of course, holds a theory of consciousness under which Moore’s Law will never cross the number of computational elements in a human brain. That crossing is Ray Kurzweil’s concept: that as soon as you can use Moore’s Law to pack into a box the same number of circuit elements as we have in the human brain, then we’ll automatically get artificial intelligence. That’s one of the six modes by which we might achieve artificial intelligence, and if people want to see the whole list they can Google my name and “IBM talk” or go to your website and I’m sure you’ll link to it.
But of those six, Ray Kurzweil was confident that as soon as you can use Moore’s Law to have the same number of circuit elements as in the human brain, you’ll get… But what’s a circuit element? When he first started talking about this, it was the number of neurons, which is about a hundred billion. Then he realized that the flashy elements that actually seem like binary flip-flops in a computer are not the neurons; it’s the synapses that flash at the ends of the axons of every neuron. And there can be up to a thousand of those, so now we’re talking on the order of a hundred trillion. But Moore’s Law could get there. But now we’ve been discovering that for every flashing synapse, there may be a hundred or a thousand or even ten thousand murky, non-linear, sort of quasi-calculations that go on in little nubs along each of the dendrites, or inside the neurons, or between the neurons and the surrounding glial and astrocyte cells. And what Roger Penrose talks about is microtubules, where these objects inside the neurons look to him and some of his colleagues like they might be quantum-sensitive. And if they’re quantum-sensitive, then you have qubits – thousands and thousands of them in each neuron, which brings us full circle back around to the whole question of quantum computing. And if that’s the case, now you’re not talking hundreds of trillions; you’re talking hundreds of quadrillions for Moore’s Law to have to emulate.
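The counting here is easy to check. A minimal Python sketch, using only the round figures Brin cites as illustrative assumptions rather than measured values, multiplies them out to the “hundreds of trillions” and “hundreds of quadrillions” he mentions:

```python
# Back-of-envelope multiplication of the round figures cited above.
# All numbers are illustrative assumptions, not measurements.
neurons = 1e11                 # roughly a hundred billion neurons
synapses = neurons * 1e3       # up to ~a thousand synapses per neuron
print(f"synapses: ~{synapses:.0e}")   # ~1e+14, i.e. on the order of a hundred trillion

# If each synapse hides 100 to 10,000 murky sub-synaptic quasi-calculations:
for per_synapse in (1e2, 1e3, 1e4):
    print(f"x {per_synapse:.0e} per synapse -> ~{synapses * per_synapse:.0e} elements")
# ~1e+16 to 1e+18, i.e. hundreds of quadrillions and beyond
```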
So, the question of consciousness starts with: where is the consciousness? Penrose thinks it’s in quantum reality and that the brain is merely a device for tapping into it. My own feeling, and that was a long and garrulous route to getting to the point, though I hope folks found it interesting, is that consciousness is a screen upon which the many subpersons that we are, the many subroutines, subprocesses, subprocessors, personalities that make up the communities of our minds, project their thoughts. It’s a shared screen. And it’s important, for all of these subselves to be able to communicate with each other and cooperate with each other, that we maintain the fiction that what’s going on up there on the screen is us. Now that’s kind of creepy. I don’t like to think about it too much, but I think it is consistent with what we see.
To take some of that apart for a minute, of the 60 or 70 guests I’ve had on the show, you’re the third that references Penrose. And to be clear, Penrose explicitly says he does not believe machines can become conscious, because there are problems that can be demonstrated to have no algorithmic solution which humans can nonetheless solve, and therefore we’re not classical computers. He has that whole thing. That is one viewpoint that says we cannot make conscious machines. What you’ve just said is a variant of the idea that the brain has all these different sections and they vie for attention, and your mind figures out this trick of you being able to synthesize everything that you see and experience into one you, and then that’s it. That would imply to me you could make a conscious computer, so I’m curious where you come down on that question. Do you think we’re going to build a machine that will become conscious?
If folks want to look up the video from my IBM talk, I dance around this when I talk about the various approaches to getting AI. One of them is Robin Hanson’s notion that creating AI algorithmically is, he claims, much too hard, and that what we’ll wind up doing is taking this black box of learning systems and becoming so good at emulating how a human responds to every range of possible inputs that the box will in effect be human, simply because it’ll give human responses almost all the time. Once you have that, then these human templates will be downloaded into virtual worlds, where the clock speed can be sped up or slowed down to whatever degree you want, and any kind of wealth that can be generated non-physically will be generated at prodigious speeds.
This solves the question of how the organic humans live, and that is that they’ll all have investments in these huge buildings within which trillions and trillions of artificially reproduced humans are living out their lives. And Robin’s book is called The Age of Em – the age of emulation – and he assumes that because they’ll be based on humans, they’ll want sex, they’ll want love, they’ll want families, they’ll want economic advancement, at least at the beginning, and there’s no reason why it wouldn’t have momentum and continue. That is one of the things that applies to this, and the old saying is, “If it walks like a duck and it quacks like a duck, you might as well treat it like a duck or it’s going to get pretty angry.” And when you have either quadrillions of human-level intelligences, or things that can act intelligent faster and stronger than us, the best thing to do is to do what I talk about in Category 6 of creating artificial intelligence, and that is to raise them as our children, because we know how to do that. If we raise them as humans, then there is a chance that a large fraction of them will emerge as adult AI entities, perhaps super powerful, perhaps super intelligent, but thinking of themselves as super powerful, super intelligent humans. We’ve done that. The best defence against someone else’s smart offspring, whom they raised badly and who are dangerous, is your offspring, whom you raised well, who are just as smart and determined to prevent the danger to Mom and Dad.
In other words, the solution to Terminator, the solution to Skynet, is not Isaac Asimov’s laws of robotics. I wrote the final book in Isaac’s Foundation and Robot series; it’s called Foundation’s Triumph. I was asked to tie together all of his loose ends after he died. And his wife was very happy with how I did it. I immersed myself in Asimov and wrote what I thought he was driving at in the way he was going with the three laws. And the thing about laws embedded in AI is that if they get smart enough, they’ll become lawyers, and then interpret the laws any way they want, which is what happens in his universe. No, the method that we found to prevent abuse by kings and lords and priests and the pyramidal social structures was to break up power. That’s the whole thing that Adam Smith talked about. The whole secret of the American Revolution and the Founders and the Constitution was to break it up. And if you’re concerned about bad AI, have a lot of AI and hire some good AI, because that’s what we do with lawyers. We all know lawyers are smart, and there are villainous lawyers out there, so you hire good lawyers.
I’m not saying that that’s going to solve all of our problems with AI, but what it does do, and I have a non-fiction book about this called The Transparent Society: Will Technology Force Us To Choose Between Privacy and Freedom? The point is that the only thing that ever gave us freedom and markets and science and justice and all the other good things, including vast amounts of wealth, was reciprocal accountability. That’s the ability to hold each other accountable, and it’s the only way I think we can get past any of the dangers of AI. And it’s exactly why the most dangerous area for AI right now is not the military, because they like to have off switches. The most dangerous developments in AI are happening on Wall Street. Goldman Sachs is one of a dozen Wall Street firms, each of which is spending more on artificial intelligence research than the top 20 universities combined. And the ethos for their AIs is fundamentally and inherently predatory, parasitical, insatiable, secretive, and completely amoral. So, this is where I fear a takeoff AI, because it’s all being done in the dark, and things that are done in the dark, even if they have good intentions, always go wrong. That’s the secret of Michael Crichton movies and books: whatever tech arrogance he’s warning about was done in secret.
Following up on that theme of breaking up power, in Existence you write about a future in which the 1% types are on the verge of taking full control of the world, in terms of outright power. What is the David Brin view of what is going to happen with wealth and wealth distribution and the access to these technologies, and how do you think the future’s going to unfold? Is it like you wrote in that book, or what do you think?
In Existence, it’s the 1% of the 1% of the 1% of the 1%, who gather in the Alps and they hold a meeting because it looks like they’re going to win. It looks like they’re going to bring back feudalism and have a feudal power shaped like a pyramid, that they will defeat the diamond shaped social structure of our Enlightenment experiment. And they’re very worried because they know that all the past pyramidal social structures that were dominated by feudalism were incredibly stupid, because stupidity is one of the main outcomes of feudalism. If you look across human history, [feudalism produced] horrible governance, vastly stupid behavior on the part of the ruling classes. And the main outcome of our Renaissance, of our Enlightenment experiment, wasn’t just democracy and freedom. And you have idiots now out there saying that democracy and liberty are incompatible with each other. No, you guys are incompatible with anything decent.
The thing is that this experiment of ours, started by Adam Smith and then the American Founders, was all about breaking up power so that no one person’s delusion can ever govern, but instead you are subject to criticism and reciprocal accountability. And this is what I was talking about as the only way we can escape a bad end with AI. And I talk about this in The Transparent Society. The point is that in Existence these trillionaires are deeply worried because they know that they’re going to be in charge soon. As it turns out in the book, they may be mistaken. But they also know that if this happens—if feudalism takes charge again—very probably everyone on Earth will die, because of bad government, delusion, stupidity. So they’re holding a meeting and they’re inviting some of the smartest people they think they can trust to give papers at a conference on how feudalism might be done better, on how it might be done in a more meritocratic and smarter way. And I only spend one chapter—less than that—on this meeting, but it’s my opportunity to talk about how, if we’re doomed to lose our experiment, then at least can we have lords and kings and priests who are better than they’ve always been for 6,000 years?
And of course, the problem is that right now today, the billionaires who got rich through intelligence, sapience, inventiveness, working with engineers, inventing new goods and services and all of that – those billionaires don’t want to have anything to do with a return of feudalism. They’re all members of the political party that’s against feudalism. A few of them are libertarians. The other political party gets its billionaires from gambling, resource extraction, Wall Street, or inheritance – the old-fashioned way. The problem is that the smart billionaires today know what I’m talking about, and they want the Renaissance to continue, they want the diamond shaped social structure to continue. That was a little bit of a rant there about all of this, but where else can you explore some of this stuff except in science fiction?
We’re running out of time here. I’ll close with one final question, so on net when you boil it all down, what do you think is in store for us?  Do you have any optimism?  Are you completely pessimistic?  What do you think about the future of our species?
I’m known as an optimist and I’m deeply offended by that. I know that people are rotten and I know that the odds have always been stacked against us. If you think of Machiavelli back in the 1500s – he fought like hell for the Renaissance, for the Florentine Republic. And then when he realized that all hope was lost, he sold his services to the Medicis and the lords, because what else can you do? Periclean Athens lasted one human lifespan. It scared the hell out of everybody in the Mediterranean, because democracy enabled the Athenians to be so creative, so dynamic, so vigorous, just like we in America have spent 250 years being dynamic and vigorous and constantly expanding our horizons of inclusion and constantly engaged in reform and ending the waste of talent.
The world’s oligarchs are closing in on us now, just like they closed in on Periclean Athens and on the Florentine Republic, because the feudalists do not want this experiment to succeed and bring us to the world of Star Trek. Can we long survive? Can we renew this? Every generation of Americans and across the West has faced this crisis, every single generation. Our parents and the greatest generation survived the Depression, destroyed Hitler, contained communism, took us to the Moon, and built vast enterprise systems that were vastly more creative, with fantastic growth under FDR’s level of taxes, by the way. They knew this – they knew that the enemy of freedom has always been feudalism far more than socialism; though socialism sucks too.
We’re in a crisis, and I’m accused of being an optimist because I think we have a good chance. We’re in Phase 8 of the American Civil War, and if you type in “Phase 8 of the American Civil War” you’ll probably find my explanation. And our ancestors dealt with the previous seven phases successfully. Are we made of lesser stuff? We can do this. In fact, I’m not an optimist; I’m forced to be an optimist by all the doom and gloom out there, which is destroying our morale and our ability to be confident that we can pass this test. This demoralization, this spreading of gloom, is how the enemy is trying to destroy us. And people out there need to read Steven Pinker’s book The Better Angels of Our Nature, they need to read Peter Diamandis’s book Abundance. They need to see that there is a huge amount of good news.
Most of the reforms we’ve done in the past worked, and we are mighty beings, and we could do this if we just stop letting ourselves be talked into a gloomy funk. And I want us to get out of this funk for one basic reason—it’s not fun to be the optimist in the room. It’s much more fun to be the glowering cynic, and that’s why most of you listeners out there are addicted to being the glowering cynics. Snap out of it! Put a song in your heart. You’re members of the greatest civilization that’s ever been. We’ve passed all the previous tests, and there’s a whole galaxy of living worlds out there that are waiting for us to get out there and rescue them.
That’s a wonderful, wonderful place to leave it.  It has been a fascinating hour, and I thank you so much.  You’re welcome to come back on the show anytime you like. I’m almost speechless with the ground we covered, so, thank you!
Sure thing, Byron. And all of you out there – enjoy stuff. You can find me at DavidBrin.com, and Byron will give you links to some of the stuff we referred to.  And thank you, Byron.  You’re doing a good job!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 32: A Conversation with Alan Winfield

[voices_in_ai_byline]
In this episode, Byron and Alan talk about robot ethics, military robots, emergence, consciousness, and self-awareness.
[podcast_player name=”Episode 32 – A Conversation with Alan Winfield” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-22-(01-05-50)-alan-winfield.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-in-ai-cover.png”]
[voices_in_ai_byline]

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Alan Winfield. Alan Winfield is a professor of robot ethics at the University of the West of England. He has so many credentials, I don’t even know where to start. He’s a member of the World Economic Forum Council on the Future of Technology, Values and Policy. He’s a member of the Ethics Advisory Board for the Human Brain Project, and a number of others. He sits on multiple editorial boards, such as the Journal of Experimental and Theoretical Artificial Intelligence, and he’s the associate editor of Frontiers in Evolutionary Robotics. Welcome to the show, Alan.
Alan Winfield: Hello, Byron, great to be here.
So, I bet you get the same first question every interview you do: What is a robot ethicist?
Well, these days, I do, yes. I think the easiest, simplest way to sum it up is someone who worries about the ethical and societal implications or consequences of robotics and AI. So, I’ve become a kind of professional worrier.
I guess that could go one of three ways. Is it ethics of how we use robots, is it the ethics of how the robots behave, or is it the ethics of… Well, I’ll just go with those two. What do you think more about?
Well, it’s both of those.
Okay.
But, certainly, the biggest proportion of my work is the former. In other words, how humans—that’s human engineers, manufacturers and maintainers, repairers and so on, in other words, everyone concerned with AI and robotics—should behave responsibly and ethically to minimize the, as it were, unwanted ethical consequences, harms if you like, to society, to individual humans and to the planet, from AI and robotics.
The second one of those, how AI and robotics can itself behave ethically, that’s very much more a research problem. It doesn’t have the urgency of the first, and it really is a deeply interesting question. And part of my research is certainly working on how we can build ethical robots.
I mean, an ethical robot, is that the same as a robot that’s a moral agent itself?
Yes, kind of. But bearing in mind that, right now, the only full moral agents that exist are adult humans like you and I. So, not all humans of course, so adult humans of sound mind, as it were. And, of course, we simply cannot build a comparable artificial moral agent. So, the best we can do so far, is to build minimally ethical robots that can, in the very limited sense, choose their actions based on ethical rules. But, unlike you and I, cannot decide whether or not to behave ethically, and certainly cannot, as it were, justify their actions afterwards.
When you think about the future and about ethical agents, or even how we use them ethically, how do you wrap your head around the fact that there aren’t any two people that agree on all ethics? And if you look around the world, the range of beliefs on what is ethical behavior and what isn’t, varies widely. So, is it not the case you’re shooting for a target that’s ill-defined to begin with?
Sure. Of course, we certainly have that problem. As you say, there is no single, universal set of ethical norms, and even within a particular tradition, say, in the Western ethical tradition, there are multiple sets of ethics, as it were, whether they’re consequentialist ethics or deontic or virtue ethics, so it’s certainly complicated. But I would say that you can abstract out of all of that, if you like, some very simple principles that pretty much most people would agree on, which is that, for instance, a robot should not harm people, should not cause people to come to harm.
That happens to be Asimov’s first rule of robotics, and I think it’s a pretty wise, as it were, starting point. I’m not saying that Asimov’s first rule of robotics is universal; what I’m saying is that we probably can extract a very small number of ethical principles which, if not universal, will attract broad agreement, broad consensus.
And yet, to highlight just that one, there’s an enormous amount of money that goes into artificial intelligence for robots used in the military, for instance – specifically, robots that actually do, or are designed to, kill and do harm. And so, we can’t even start at something that, at first glance, seems pretty obvious.
Well, indeed, and the weaponization of AI, and any technology, is something that we all should be concerned about. I mean, you’re right that the real world has weapons. That doesn’t mean that we shouldn’t strive for a better world in which technology is not weaponized. So, yes, this is an idealistic viewpoint, but what do you expect a robot ethicist to be except an idealist?
Point taken. One more question along these lines. Isn’t the landmine a robot with artificial intelligence that is designed to kill? I mean, the AI says if the object weighs more than forty-five pounds, I run this program which blows it up. Is that a robot that makes the kill decision itself?
Well, in a minimal sense, I suppose you might say it’s certainly an automaton. It has a sensor, which is the device that senses a weight upon it, and an actuator, which is the thing that triggers the explosion. But the fact is, of course, that landmines are hideous weapons that should’ve been banned a long time ago, and mostly are banned, and of course the world is still clearing up landmines.
I would like to switch gears a little bit and talk about emergence. You study swarm behavior.
Yes, I spent many years studying swarm behavior. That’s right, yes.
You’ve no doubt seen the video of—and, again, you’re going to have to help me with the example here—a kind of wasp that, when threatened, makes a spinning pinwheel, where they all open and close their wings in tight unison, giving the illusion that there’s this giant spinning thing. And it’s like the wave in a stadium, which happens so quickly. They’re not, like, saying, “Oh, Bob just waved his wings, now it’s my turn.” Are you familiar with that phenomenon?
I’m not. That’s a new one on me, Byron.
Then let’s just talk about any other example… How is it that anthills and beehives act in unison? Is that to achieve larger goals, like cooling the hive, or whatnot? Is that swarm behavior?
Yeah. I mean, the thing I think that we need to try and do is to dismiss any notion of goals. It’s certainly true that a termite mound, for instance, is an emergent consequence. It’s an emergent property of hundreds of thousands of termites doing their thing. And all of the extraordinary sophistication we see—the air conditioning, the fungus farms and such—in the termite mounds are also emergent properties of, as it were, the myriad microscopic interactions between the individuals, between each other and their environment, which is, if you like, the materials and structure of the termite nest.
But if people say to me, “How do they know what they’re doing and when they’ve finished?” the answer is, well, firstly, no individual knows what it’s doing in the termite mound, and secondly, there is no notion of finished. The work of building and maintaining the termite mound just carries on forever. And the reason the world isn’t full of termite mounds, it hasn’t been, as it were, completely colonized by termite mounds, is for all sorts of reasons: climate, environmental conditions, the fact that if termite mounds get too big, they’ll collapse because of their own weight, larger animals of course will either deliberately break into the termite mounds to feed on termites, or will just blunder into them and knock them over, and there’s flooding and weather and all kinds of stuff.
So, the fact that when we see termite mounds, we imagine that this is some kind of goal-oriented activity, is unfortunately, simply applying a very human metaphor to a non-human process. There is simply no notion that any individual termite knows what it’s doing, or of the collective, as it were, finishing a task. There are no tasks in fact. There are simply interactions, microscopic actions and interactions.
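As an aside, the picture Winfield describes here, only microscopic interactions, no goals, no notion of finished, is easy to see in a toy simulation. The sketch below is purely illustrative and not drawn from his lab’s models: simulated termites wander a grid, pick up wood chips that are isolated, and drop them next to other chips. No agent has a plan, yet chips gradually gather into clusters.

```python
import random

# Toy stigmergy sketch: one local rule, no goals, yet clusters emerge.
SIZE, CHIPS, TERMITES, STEPS = 30, 150, 40, 20000

grid = [[False] * SIZE for _ in range(SIZE)]
for x, y in random.sample([(i, j) for i in range(SIZE) for j in range(SIZE)], CHIPS):
    grid[x][y] = True                                    # scatter chips at random

termites = [[random.randrange(SIZE), random.randrange(SIZE), False] for _ in range(TERMITES)]

def neighbour_chips(x, y):
    """Count chips in the 8 surrounding cells (toroidal wrap-around)."""
    return sum(grid[(x + dx) % SIZE][(y + dy) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))

for _ in range(STEPS):
    t = random.choice(termites)
    t[0] = (t[0] + random.choice((-1, 0, 1))) % SIZE     # random walk one step
    t[1] = (t[1] + random.choice((-1, 0, 1))) % SIZE
    x, y, carrying = t
    n = neighbour_chips(x, y)
    if not carrying and grid[x][y] and n <= 1:
        grid[x][y], t[2] = False, True                   # pick up a lonely chip
    elif carrying and not grid[x][y] and n >= 2:
        grid[x][y], t[2] = True, False                   # drop it beside other chips

print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```

Run it a few times: the final grid tends to show a handful of clumps rather than the uniform scatter it started with, even though the clustering is written nowhere in the rule.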
Let’s talk about emergence for a minute. I’ll set my question up with a little background for any listener. Emergence is the phenomenon where we observe attributes of a system that are not present in any of the individual components. Is that a fair definition?
Yes. I mean, there are many definitions of emergence, but essentially, you’re looking for macroscopic structures or phenomena or properties that are not evident in the behavior of individuals.
We divide it into two halves, and one half a good number of people don’t believe exists. So, the first one is weak emergence, as I understand it, where you could study hydrogen for a year and you could study oxygen for a year, and never in your wildest imagination would you have guessed that you put them together and they make water and it’s wet, it’s got this new wetness. And yet, with weak emergence, when you study it enough and you figure out what’s going on, you go, “Oh, yeah, I see how that worked,” and then you see it.
And then there’s strong emergence, which posits that there are characteristics that emerge for which you cannot take a reductionist view, you cannot in any way study the individual components and ever figure out how they produced that result. And this isn’t an appeal to mysticism; rather, it’s the notion that maybe strong emergence is a fundamental force of the universe or something like that. Did I capture that distinction?
Yeah, I think you’ve got it. I mean, I’m definitely not a strong emergentist. It’s certainly true that, and I’ve seen this a number of times in my own work, emergent properties can be surprising. They can be puzzling. It can sometimes take you quite a long time to figure out what on Earth is going on. In other words, to unpick the mechanisms of emergence. But there’s nothing mysterious.
There’s nothing in my view that is inexplicable about emergence. I mean, there are plenty of emergent properties in nature that we simply cannot explain mechanically, but that doesn’t mean that they are inexplicable. It just means that we’re not smart enough. We haven’t, as it were, figured out what’s going on.
So, when you were talking about the termite nest, you said the termite nest doesn’t know what it’s doing, it doesn’t have goals, it doesn’t have tasks that have a beginning and an end. If all of that is true, then the human mind must not be an emergent phenomenon, because we do have goals, we know exactly what we’re doing.
Well, I’m not entirely sure I agree with that. I mean, we think we know what we’re doing, that may well be an illusion, but carry on anyway.
No, that’s a great place to start. So, you’re alluding to the studies that suggest you do something instinctually, and then your brain kind of races to figure out why did I do that, and then it reverses the order of those two things and says, “I decided to do it. That’s why I did it.”
Well, I mean, yeah, that’s one aspect which may or may not be true. But what I really mean, Byron, is that when you’re talking about human behaviors, goals, motivations and so on, what you’re really looking at is the top, the very top layer of an extraordinary multi-layered process, which we barely understand, well, we really don’t understand at all. I mean, there’s an enormous gap, as it were, between the low-level processes—which also we barely understand—in other words, the interactions between individual neurons and, as it were, the emergence of mind, let alone, subjective experience, consciousness and so on.
There are so many layers there, and then the top layer, which is human behavior, is also mediated through language and culture, and we mustn’t forget that. You and I wouldn’t have been having this conversation half a million years ago. The point is that the things that we can think about and have a discourse over, we wouldn’t be able to have a discourse about if it were not for this extraordinary edifice of culture, which kind of sits on top of a large number of human minds.
We are social animals, and that’s another emergent property. You’ve got the emergent property of mind, and then consciousness, then you have the emergent property of society, and on top of that, another emergent property, which is culture. And somewhere in the middle of that, all mixed up, is language. So, I think it’s so difficult to unpick all of this, when you start to ask questions like, “Yes, but how can a system of emergence have goals, have tasks?” Well, it just so happens that modern humans within this particular culture do have what we, perhaps rather pretentiously, think of as goals and motivations, but who knows what they really are? And I suspect we probably don’t have to go back many tens, certainly hundreds of generations, to find that our goals and motivations were no different to most of the animals, which is to eat and survive, to live another day.
And so, let’s work up that ladder from the brain to the mind to consciousness. Perhaps half a million years ago, you’re right, but there are those who would maintain that when we became conscious, that’s the moment we, in essence, took control and we had goals and intentions and all of that subtext going on. So, I’ll ask you the unanswerable question, how do you think consciousness comes about?
Gosh, I wish I knew.
Is it quantum phenomenon? Is it just pure emergence?
I certainly think it’s an emergent property, but I think it’s such a good adaptation that I doubt that it’s just an accident. In other words, I suspect that consciousness is not like a spandrel of San Marco, you know, that wonderful metaphor. I think that it’s a valuable adaptation, and therefore, when—at some point in our evolutionary history, probably quite recent evolutionary history—some humans started to enjoy this remarkable phenomenon of being a subject and the subjective experience of recognizing themselves and their own agency in the world, I suspect that they had such a big adaptive advantage over their fellow humans, hominids, who didn’t have that experience, that, rather quickly, I think it would have become a strongly self-selecting adaptation.
I think that the emergence of consciousness is deeply tied up with being sociable. I think that in order to be social animals, we have to have theory of mind. To be a successful social animal, you need to be able to navigate relationships and the complexity of social hierarchies, pecking orders and such like.
We know that chimpanzees are really quite sophisticated with what we call Machiavellian intelligence. In other words, the kind of social intelligence where you will, quite deliberately, manipulate your behaviors in order to achieve some social advantage. In other words, I’ll pretend to want to get to know you, not because I really want to get to know you, but because I know that you are friends with somebody else, and I really want to be friends with her. So that’s Machiavellian intelligence. And it seems that chimpanzees are really rather good at it, and probably just as good at it as we homo sapiens.
And in order to be able to have that kind of Machiavellian intelligence, you need to have theory of mind. Now, theory of mind means having a really quite sophisticated model of your conspecifics. Now, that, I think, in turn, arose out of the fact that we have complicated bodies, bodies that are difficult to control, and therefore, we, at some earlier point in our evolutionary history, started to have quite sophisticated body self-image. In other words, an internal simulation, or whatever you call it, an internal model of our own physical bodies.
But, of course, the beauty of having a model of yourself is that you then automatically have a model of your conspecifics. So, I think having a self-model bootstraps into having theory of mind. And then, I think, once you have theory of mind, and you can—and I don’t know at what point this might have come in, whether it would come after we have theory of mind, probably, I think—start to imitate each other; in other words, do social learning.
I think social learning was, again, another huge step forward in the evolution of the modern mind. I mean, social learning is unbelievably more powerful than individual learning. Suddenly you have the ability to pass on knowledge from your ancestors to your children, especially once you invent symbols and language, and writing as well, though writing of course came much later. I think that all of these things were necessary, but perhaps not sufficient in themselves, prerequisites for consciousness. I mean, it’s very interesting; I don’t know if you know the work of Julian Jaynes.
Of course, Bicameral Mind. That we weren’t even conscious until 500 BC, and that the Greek gods and the rise of oracles was just us realizing we had lost the voice that we used to hear in our heads.
I mean, it’s a radical hypothesis. Not many people buy that argument. But I think it’s extremely interesting, the idea that modern consciousness may be a very recent adaptation, as you say, within, as it were, recorded history, back to Homeric times. So, I think the story of how consciousness evolved, may never be known of course. It’s like a lot of natural history. We can only ever have Just So Stories. We can only have more or less plausible hypotheses.
I’m absolutely convinced that key prerequisites are internal models. Dan Dennett has this wonderful structure, this conceptual framework that he calls the “Tower of Generate-and-Test,” this set of conceptual creatures that each has a more sophisticated way of generating and testing hypotheses about what action to take next. And without going through the whole thing in detail, his Popperian creatures have this amazing innovation of being able to imagine the outcomes of actions before trying them out. And therefore, they can imagine a bad action, and decide not to try it out for real, which may well be extremely dangerous.
And then he suggests that a subset of Popperian creatures are what he calls Gregorian creatures, who’ve invented mind tools, like language, and therefore have this additional remarkable ability to learn socially from each other. And I think that social learning and theory of mind are profoundly, in my view, implicated in the emergence of consciousness. Certainly, I would stick my neck out and say that I think solitary animals cannot enjoy the kind of consciousness that you and I do.
So, all of that to say, we don’t know how it came about, and you said we may never know. But it’s really far more intractable than that, because, if you agree with this, it’s not just that we don’t know how it came about; we don’t have any science that suggests how a cloud of hydrogen could come to name itself. We don’t have any science to say how it is that I can feel something, how it is that I can experience something as opposed to just sensing it. As I listen to you through this conversation, I just replace everything with a zombie, you know, the analogy of a human without consciousness.
In any case, so what would you say to that? I’ve heard consciousness described as the most difficult problem, maybe the only problem left that we know neither how to ask it, nor what the answer would look like. So, what do you think the answer to the question of how is it that we have subjective experience looks like?
Well, again, I have no idea. I mean, I completely agree with you, Byron. It is an extraordinarily difficult problem. What I was suggesting earlier were just a very small number of prerequisites, not in any sense was I suggesting that those are the answer to what is consciousness. There are interesting theories of consciousness. I mean, I like the work very much of Thomas Metzinger, who I think has a very, well to me at least, a very attractive theory of consciousness because it’s based upon the idea of the self-model, which I’ve indicated I’m interested in models, and his notion of the phenomenal self-model.
Now, as you quite rightly say, there are vast gulfs in our misunderstanding, and we certainly don’t even know properly what questions to ask, let alone answer, but I think we’re slowly getting there. I think progress is being made in the study of consciousness. I mean, the work of Anil Seth I think is deeply interesting in this regard. So, I’m basically agreeing with you.
We don’t have a science to understand how something can experience. So, I hook a temperature sensor up to my computer, I write a program so that it screams if the sensor gets over five hundred degrees, and then I hold a match to it and it screams. We don’t think the computer is feeling pain; even though the computer is able to sense all that’s going on, we don’t think that there’s an agent there that can feel anything.
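To make the toy concrete, the program Byron describes is only a handful of lines. The sketch below is hypothetical; since no particular hardware is named, the sensor read is stubbed out with a random value.

```python
import random
import time

def read_temperature_f():
    """Stand-in for an actual sensor read; no real device is assumed."""
    return random.uniform(60, 600)

# The machine can sense and react, but nothing in this loop feels anything.
while True:
    if read_temperature_f() > 500:   # the match held to the sensor
        print("AAAAAH!")             # the computer "screams"
        break
    time.sleep(0.1)
```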
In fact, we don’t even really have science to understand how something could feel. And, I’m the first to admit it just kicks the can down the street, but you came out against strong emergence at the get-go, you’re definitely not that, but couldn’t you say “Well, clearly our basic physical laws don’t account for how matter can experience things, and therefore there might be another law at play that comes from complexity or any number of other things, that it isn’t reductionist and we just don’t understand it.” But why is it that you reject strong emergence so unequivocally, but still kind of struggle with, “We don’t really know any scientific way, with physics, to answer that question of how something can experience?”
Well, no, I think they’re completely compatible positions. I’m not saying that consciousness, subjective experience—what it is to subjectively experience something—is unknowable, in other words, the process. I don’t believe the process by which subjective experience happens in some complex collections of matter is unknowable. I think it’s just very hard to figure out and will take us a long time, but I think we will figure it out.
A lot of times when people look at the human brain, they say, “Well, the reason we don’t understand it is because it’s got one hundred billion neurons.” And yet there’s been an effort underway for two decades to take the nematode worm’s 302 neurons and try to make it—
—Two of which, interestingly, are not connected to anything.
—And try to make a digital life, you know, model it. So, we can’t even understand how the brain works to the degree that we can reproduce a three-hundred-neuron brain. And even more so, there are those who suggest that a single neuron may be as complicated as a supercomputer. So, what do you think of that? Why can’t we understand how the nematode brain works?
Well, understanding of course, is a many-layered thing. And at some level of abstraction, we can understand how the nervous system of C. elegans works. I mean, we can, that’s true. But, as with all of science, understanding or scientific model is an abstraction, at some degree of abstraction. It’s a model at some degree of abstraction. And if you want to go deeper down, increase the level of granularity of that understanding, that’s I think when you start to have difficulties.
Because as you say, when we build, as it were, a computer simulation of C. elegans, we simply cannot model each individual neuron with complete fidelity. Why not? Well, not just because it’s extraordinarily complex, but we simply don’t fully understand all the internal processes of a biological neuron. But that doesn’t mean that we can’t, at some useful, meaningful level of abstraction, figure out that a particular stimulus to a particular sensor in the worm will cause a certain chain reaction of activations and so on, which will eventually cause a muscle to twitch. So, we can certainly do that.
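As a purely illustrative sketch of what “a useful, meaningful level of abstraction” can mean, and emphatically not a model of C. elegans itself, the following Python toy treats three leaky threshold units as a sensory-to-motor chain: poke the first one and, a couple of ticks later, the “muscle” fires.

```python
# A deliberately crude abstraction: stimulus -> chain of activations -> muscle twitch.
# The units, constants, and wiring are illustrative assumptions, with no claim of
# biological fidelity at the level of individual neurons.

LEAK, THRESHOLD = 0.5, 1.0

def step(potentials, stimulus):
    """Advance the 3-unit chain one tick; return (new_potentials, muscle_twitch)."""
    sensory, inter, motor = potentials
    new = [
        sensory * LEAK + stimulus,                              # sensory unit driven by the stimulus
        inter * LEAK + (1.5 if sensory > THRESHOLD else 0.0),   # interneuron driven by a sensory spike
        motor * LEAK + (1.5 if inter > THRESHOLD else 0.0),     # motor unit driven by an interneuron spike
    ]
    return new, new[2] > THRESHOLD                              # "twitch" if the motor unit fires

potentials = [0.0, 0.0, 0.0]
for t in range(10):
    stimulus = 1.5 if t == 0 else 0.0                           # a single prod at tick 0
    potentials, twitch = step(potentials, stimulus)
    print(t, [round(p, 2) for p in potentials], "TWITCH" if twitch else "")
```

At this level of abstraction the causal chain is easy to follow; the point Winfield makes is that going deeper, to full-fidelity neurons, is where the real difficulty lies.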
You wrote a paper, “Robots with Internal Models: A Route to Self-Aware and Hence Safer Robots,” and you alluded to that a few moments ago, when you talked about an internal model. Let’s take three terms that are used frequently. So, one of them is self-awareness. You have Gallup’s red dot test, which says, “I am a ‘self.’ I can see something in the mirror that has a red dot, and I know that’s me and I try to wipe it off my forehead.” That would be a notion of self-awareness. Then you have sentience, and of course it’s often misused; sentience just means being able to sense something, usually to feel pain. And then you have consciousness, which is this “I experience it.” Does self-awareness imply sentience, and does sentience imply consciousness? Or can something be self-aware and neither sentient nor conscious?
I don’t think it’s all binary. In other words, I think there are degrees of all of those things. I mean, even simple animals have to have some limited self-awareness. And the simplest kind of self-awareness that I think pretty much all animals need to have is to be able to tell the difference between me and not me. If you can’t tell the difference between me and not me, you’re going to have difficulty getting by in the world.
Now, that I think is a very limited form, if you like, of self-awareness, even though I wouldn’t suggest for a moment that simple animals that can indeed tell the difference between me and not me, have sentience or consciousness. So, I think that these things exist on a spectrum.
Do you think humans are the only example of consciousness on the planet or would you suspect—?
No, no, no. I think, again, that there are degrees of consciousness. I think that perhaps there are undoubtedly some unique attributes of humans. We’re almost certainly the only animal on the planet that can think about thinking. So, this kind of reflective—or is reflexive, is that the right word here—ability to kind of ask ourselves questions, as it were.
But even though, for instance, a chimpanzee probably doesn’t think about thinking, I think it is conscious. I mean, it certainly has plenty of other attributes of consciousness. And not only chimpanzees but other animals too are clearly capable of feeling pain, and also of feeling grief and sadness when a member of the clan is killed or dies. These are, in my view, evidence of consciousness in other animals. And there are plenty of animals that we almost feel instinctively are conscious to a reasonably high degree. Dolphins are another such animal. One of the most puzzling ones, of course, is the octopus.
Right, because you said a moment ago, a non-sociable animal shouldn’t be able to be conscious.
Exactly. And that’s the kind of black swan of that particular argument, and I was well aware of that when I said it. I mean, clearly, there’s something else going on in the octopus, but we can nevertheless be sure that octopuses, collectively, don’t have traditions in the way that many other animals do. In other words, they don’t have localized, socially agreed behaviors like birdsong, or, in chimpanzees, cracking nuts open a different way on one side of the mountain than on the other side of the mountain. So, there’s clearly something very puzzling going on in the octopus, which seems to buck what otherwise I think is a pretty sound proposition, which is, in my view, the role of sociability in the emergence of consciousness.
And, I think, octopus only live about three years, so just imagine if they had a one hundred-year lifespan or something.
What about plants? Is it possible that plants are self-aware, sapient, sentient or conscious?
Good question. I mean, certainly, plants are intelligent. I’m more comfortable with the word intelligence there. But as for, well, maybe even a limited form of self-awareness, a very limited form of sentience, in the sense that plants clearly do sense their environments. Plants, trees, clearly do sense and respond to attacks from neighboring plants or pests, and appear even to be able to respond in a way that protects themselves and their neighboring, as it were, conspecifics.
So, there is extraordinary sophistication in plant behavior, plant intelligence, that's really only beginning to be understood. I have a friend, a biologist at the University of Tel Aviv, Danny Chamovitz, and Danny's written a terrific book on plant intelligence that really is well worth reading.
What about Gaia? What about the Earth? Is it possible the Earth has its own emergent awareness, its own consciousness, in the same sense that all the neurons in our brain come together in our mind to give us consciousness?
And I don't think these are purely academic questions, because at some point we're going to have to address, "Is this computer conscious, is this computer able to feel, is this robot able to feel?" If we can't figure out whether a tree can feel, how in the world would we figure it out for something that doesn't share ninety percent of our DNA with us? So, what would you think about the Earth having its own will and consciousness and awareness, as an emergent behavior of all of the lifeforms that live on it?
Yeah, gosh. I think you’ve probably really stumped me there. I mean, I think this is, you’re right, it’s an interesting question. I’ve absolutely no idea. I mean, I’m a materialist. I kind of find it difficult to understand how that might be the case when the planet isn’t a homogeneous system, it isn’t a fully connected system in the sense that nervous systems are.
I mean, the processes going on in and on the planet are extraordinarily complex. There’s tons of emergence going on. There are all kinds of feedback loops. Those are all undoubtedly facts. But whether that is enough, in and of itself, to give rise to any kind of analogue of self-awareness, I have to say, I’m doubtful. I mean, it would be wonderful if it were so, but I’m doubtful.
You wouldn't be able to look at a human brain under a microscope and say, "These things are conscious." And so, I guess, Lovelock would look back over the—and I don't know what his position on that question would be—but he would look at the fact that the Earth self-regulates so many of its attributes within narrow ranges. I'll ask you one more, then. What about the Internet? Is it possible that the Internet has achieved some kind of consciousness or self-awareness? I mean, it's certainly got enough processors on it.
I mean, I think perhaps the answer to that question, and I’ve only just thought of this or it’s only just come to my mind, is that I think the answer is no. I don’t think the Internet is self-aware. And I think the reason, perhaps, is the same reason that I don’t think the Earth is self-aware, or the planet is self-aware, even though it is, as you quite rightly say, a fabulously self-regulating system. But I think self-awareness and sentience and in turn consciousness, need not just highly-connected networks, they also need the right architecture.
The point I’m making here, it’s a simple observation, is that our brains, our nervous systems, are not randomly connected networks. They have architecture, and, that is an evolved architecture. And it’s not only evolved, of course, but it’s also socially conditioned. I mean, the point is that, as I keep going on about, the only reason you and I can have this conversation is because we were both, we share a culture, a cultural environment, which is itself highly evolved. So, I think that the emergence of consciousness, as I’ve hinted, comes as part and parcel of that emergence of communication, language, and ultimately culture.
I think the reason that the Internet, as it were, is unlikely to be self-aware is because it just doesn't have the right architecture, not because it doesn't have lots of processing and lots of connectivity. It clearly has those, but it's not connected with the architecture that I think is necessary—in the sense that the architectures of animal nervous systems are not random. That's clearly true, isn't it? If you just take one hundred billion neurons and connect them randomly, you will not have a human brain.
Right. I mean, I guess you could say there is an organic structure to the Internet in terms of the backbone and the nodes, but I take your point. So, I guess where I’m going with all of this is if we make a machine, and let’s not even talk about conscious for a minute, if we make a machine that is self-aware and is sentient in the sense that it can feel the world, how would we know?
Well, I think that’s a problem. I think it’s very hard to know. And one of the ethical, if you like, risks of AI and especially brain emulation, which is in a sense, a particular kind of AI, is that we might unknowingly build a machine that is actually experiencing, as it were, phenomenal subjectivity, and even more worrying, pain. In other words, a thing that is experiencing suffering. And the worst part about it, as you rightly say, is we may not even know that it is experiencing that suffering.
And then, of course, if it ever becomes self-aware, like if my Roomba all of a sudden is aware of itself, we also run the risk that we end up making an entire digital race of slaves, right? Of beings that feel and perceive the world, which we just build to do our bidding at our will.
Well, yeah. I mean, the ethical question of robots as slaves is a different question. But let's not confuse it or conflate it with the problem of artificial suffering. I'm much less ethically troubled by a whole bunch of zombie robots, in a sense, that are not sentient and conscious, because they don't have very much, I won't say zero, but they have a rather low claim on moral patiency. I mean, if they were at all sentient, or if we believed they were sentient, then they would have a claim to be treated with a level of moral patiency that we absolutely do not extend to robots and AIs right now.
When robot ethics comes up, or ethics in AI, and people say, well, that's a real, immediate example that we have to think about—aside from the use of these devices in war—the one that everybody knows is the self-driving car: do I drive off the cliff or run over the person? One automaker has come out and specifically said, "We protect the driver. That's what we do." As a robot ethicist, how do you approach that problem, just that single, isolated, real-world problem?
Well, I think the problem with ethical dilemmas, particularly the trolley problem, is that they’re very, very rare. I mean, you have to ask yourself, how often have you and I, I guess you drive a car, and you may well have been driving a car for many years, how often have you faced a trolley problem? The answer is never.
Three times this week. [Laughs] No, you’re entirely right. Yes. But you do know that people get run over by cars.
Sure.
We have to wrestle with the question because it's going to come up in everything else. Like medical diagnoses, and which drugs you give to which people for which ailments, which may or may not trigger rare, lethal reactions. It really permeates everything, this assessment of risk and who bears it. Is it fundamentally the programmer? Because that's one way to say it: robots don't actually make any decisions, it's all humans. And so, you just follow the coding trail back to the person who decided to do it that way.
Well, what you've just said is true. It's not necessarily the programmer, but it's certainly humans. My view—and I take a very hard line on this—is that humans, not robots, are responsible agents, and I mean that to include AIs. So, however a driverless car is programmed, it cannot be held responsible. I think that is an absolute fundamental, I mean, right now. In several hundred years maybe we might be having a slightly different conversation, but, right now, I take a very simple view—robots and AIs cannot be responsible, only humans.
Now, as for what ethics do we program into a driverless car? I think that has to be a societal question. It’s certainly not down to the design of the programmer or even the manufacturer to decide. I think it has to be a societal question. So, you’re right that when we have driverless cars, there will still be accidents, and, hopefully, there will be very few accidents, but still occasionally, very rarely we hope, people will still be killed in car accidents, where the driverless car, as it were, did the wrong thing.
Now, what we need is several things. I think we need to be able to find out why the driverless car went wrong, and that really means that driverless cars need to be fitted with the equivalent of a flight data recorder in aircraft, what I call an ethical black box. We're giving a paper on that in the next couple of weeks called "The Case for an Ethical Black Box." And we need to have regulatory structures that mean that manufacturers are obliged to fit these black boxes to driverless cars, and that accident investigators have the authority and the power to look at the data in those ethical black boxes and find out what went wrong.
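To make the "ethical black box" idea concrete, here is a minimal sketch of what such a recorder could look like: an append-only, timestamped log of what the vehicle sensed and what it decided, kept for accident investigators. This is purely illustrative and not drawn from the paper; every field name and the storage format are assumptions.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DecisionRecord:
    """One timestamped entry: what the vehicle sensed and what it decided."""
    timestamp: float
    sensor_summary: dict      # e.g. {"obstacle_ahead": True, "speed_kmh": 42}
    action_taken: str         # e.g. "emergency_brake"
    controller_version: str   # software version, so investigators can reproduce the decision


class EthicalBlackBox:
    """Append-only recorder, flushed to durable storage after every decision."""

    def __init__(self, path: str):
        self.path = path

    def record(self, sensors: dict, action: str, version: str) -> None:
        entry = DecisionRecord(time.time(), sensors, action, version)
        # Append-only: entries are never rewritten, only added, like a flight data recorder.
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")


# Hypothetical use inside the control loop of a driverless car:
box = EthicalBlackBox("blackbox.log")
box.record({"obstacle_ahead": True, "speed_kmh": 42}, "emergency_brake", "v1.3.0")
```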
But, then, even when you have all of that structure in place, which I think we must have, there will still be occasional accidents, and the only way to resolve that is by having ethics in driverless cars, if indeed we do decide to have ethics in them at all, which I think is itself not a given. I think that's a difficult question in itself. But if we did fit driverless cars with ethics, then those ethics need to be decided by the whole of society, so that we collectively take responsibility for those small number of cases where there is an accident and people are harmed.
Fair enough. I have three final questions for you. The first is about Weizenbaum, who famously made ELIZA, which, for the benefit of the listener, was a computer program in the '60s that was a simple chatbot. You would type, "I have a problem," it would say, "What kind of problem do you have?" "I'm having trouble with my parents," "What kind of trouble are you having with your parents?" and it goes on and on like that.
Weizenbaum wrote it, or had it written, and then noticed that people were developing emotional attachments to it, even though they knew it was just a simple program. And he kind of did a one-eighty and turned against it all. He distinguished between deciding and choosing. And he said, "Robots should only decide. It's a computational action. They should never choose. Choosing is for people to do."
What do you think he got right and wrong, and what are your thoughts on that distinction? He thought it was fundamentally wrong for people to use robots in positions that require empathy, because it doesn’t elevate the machine, it debases the human.
Yeah, I mean, I certainly have a strong view that if we do use robots at all as personal assistants or chatbots or advisors or companions, whatever, I think it’s absolutely vital that that should be done within a very strict ethical framework. So, for instance, to ensure that nobody’s deceived and nobody is exploited. The deception I’m particularly thinking of is the deception of believing that you’re actually talking to a person, or, even if you realize you’re not talking to a person, believing that the system, the machine is caring for you, that the machine has feelings for you.
I certainly don’t take a hard line that we should never have companion systems, because I think there are situations where they’re undoubtedly valuable. I’m thinking, for instance here, of surrogate pets. There’s no doubt that when an elderly person, perhaps with dementia, goes into a care home, one of the biggest traumas they experience is leaving their pet behind. People I’ve spoken to who work in care homes for the elderly, elderly people with dementia, say that they would love for their residents to have surrogate pets.
Now, it’s likely that those elderly persons may recognize that the robot pet is not a real animal, but, nevertheless, still may come to feel that the robot, in some sense, cares for them. I think that’s okay because I think that the balance of benefit versus, as it were, the psychological harm of being deceived in that way, weighs more heavily in terms of the therapeutic benefit of the robot pet.
But really the point I’m making is, I think we need strong ethical frameworks, guidelines and regulations that would mean that vulnerable people, particularly children, disabled people, elderly people, perhaps with dementia, are not exploited perhaps by unscrupulous manufacturers or designers, for instance, with systems that appear to have feelings, appear to have empathy.
As Weizenbaum said, “When the machine says, ‘I understand,’ it’s a lie, there’s no I there.”
Indeed, yes, exactly right. And I think that rather like Toto in the Wizard of Oz, we should always be able to pull the curtain aside. The machine nature of the system should always be transparent. So, for instance, I think it’s very wrong for people to find themselves on the telephone and believe that they’re talking to a person, a human being, when in fact they’re talking to a machine.
I agree.
Second question, what about science fiction? Do you consume any in written or movie or TV form that you think, “Ah, that could happen. I could see that future unfolding”?
Oh, lots. Well, I mean, certainly I consume a lot of science fiction, not all of which, by any means, would I expect or like to see happening. Often the best sci-fi is dystopian, but that is okay, because good science fiction is like a thought experiment, but I like the utopian kind, too. And I rather like the kind of AI utopia of The Culture, from the Iain M. Banks Culture novels—a universe in which there are hugely intelligent, and rather inscrutable, but, nevertheless, rather kindly and benevolent AIs, essentially, looking after us poor humans. I kind of like that idea.
And, finally, you’re writing a lot. How can people keep up with you and follow you and get all of your latest thinking? Can you just go through the litany of resources?
Sure. Well, I don’t blog very often, because I’m generally very busy with other stuff, but I’d be delighted if people go to my blog, which is just: alanwinfield.blogspot.com, and also follow me on Twitter. And, again, I’m easy to find. I think it’s just @alan_winfield. And, similarly, there are quite a few videos of talks that I’ve given to be found on YouTube and online generally. And if people want to get in touch directly, again, it’s easy to find my contact details online.
Alright, well thank you. It has been an incredibly fascinating hour and I appreciate your time.
Thank you, Byron, likewise, very much enjoyed it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 31: A Conversation with Tasha Nagamine

[voices_in_ai_byline]
In this episode, Byron and Tasha talk about speech recognition, AGI, consciousness, Droice Lab, healthcare, and science fiction.
[podcast_player name=”Episode 31 – A Conversation with Tasha Nagamine” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-22-(00-57-02)-tasha-nagamine.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-in-ai-cover.png”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Tasha Nagamine. She's a PhD student at Columbia University; she holds an undergraduate degree from Brown and a Master's in Electrical Engineering from Columbia. Her research is in neural net processing of speech and language, and the potential applications of speech processing systems through, here's the interesting part, biologically-inspired deep neural network models. As if that weren't enough to fill up a day, Tasha is also the CTO of Droice Labs, an AI healthcare company, which I'm sure we will chat about in a few minutes. Welcome to the show, Tasha.
Tasha Nagamine: Hi.
So, your specialty, it looks like, coming all the way up, is electrical engineering. How do you now find yourself in something which is often regarded as a computer science discipline, which is artificial intelligence and speech recognition?
Yeah, so it’s actually a bit of an interesting meandering journey, how I got here. My undergrad specialty was actually in physics, and when I decided to go to grad school, I was very interested, you know, I took a class and found myself very interested in neuroscience.
So, when I joined Columbia, the reason I’m actually in the electrical engineering department is that my advisor is an EE, but what my research and what my lab focuses on is really in neuroscience and computational neuroscience, as well as neural networks and machine learning. So, in that way, I think what we do is very cross-disciplinary, so that’s why the exact department, I guess, may be a bit misleading.
One of my best friends in college was an EE, and he said that every time he went over to, like, his grandmother's house, she would try to get him to fix, like, the ceiling fan or something. Have you ever had anybody assume you're proficient with a screwdriver as well?
Yes, that actually happens to me quite frequently. I think I had one of my friends’ landlords one time, when I said I was doing electrical engineering, thought that that actually meant electrician, so was asking me if I knew how to fix light bulbs and things like that.
Well, let’s start now talking about your research, if you would. In your introduction, I stressed biologically-inspired deep neural networks. What do you think, do we study the brain and try to do what it does in machines, or are we inspired by it, or do we figure out what the brain’s doing and do something completely different? Like, why do you emphasize “biologically-inspired” DNNs?
That’s actually a good question, and I think the answer to that is that, you know, researchers and people doing machine learning all over the world actually do all of those things. So, the reason that I was stressing a biologically-inspired—well, you could argue that, first of all, all neural networks are in some way biologically-inspired; now, whether or not they are a good biologically-inspired model, is another question altogether—I think a lot of the big, sort of, advancements that come, like a convolutional neural network was modeled basically directly off of the visual system.
That being said, despite the fact that there are a lot of these biological inspirations, or sources of inspiration, for these models, there’s many ways in which these models actually fail to live up to the way that our brains actually work. So, by saying biologically-inspired, I really just mean a different kind of take on a neural network where we try to, basically, find something wrong with a network that, you know, perhaps a human can do a little bit more intelligently, and try to bring this into the artificial neural network.
Specifically, one issue with current neural networks is that, usually, unless you keep training them, they have no way to really change themselves, or adapt to new situations, but that’s not what happens with humans, right? We continuously take inputs, we learn, and we don’t even need supervised labels to do so. So one of the things that I was trying to do was to try to draw from this inspiration, to find a way to kind of learn in an unsupervised way, to improve your performance in a speech recognition task.
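One common way to realize that kind of unsupervised adaptation is self-training: let the current model label new, unlabeled data, keep only the predictions it is confident about, and retrain on them. The sketch below is a generic illustration, not necessarily the method used in her research; the classifier, the random "acoustic features," and the confidence threshold are all stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def self_train(model, X_labeled, y_labeled, X_unlabeled, threshold=0.9, rounds=3):
    """Grow the training set with confident pseudo-labels and refit each round."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        model.fit(X_train, y_train)
        if len(X_unlabeled) == 0:
            break
        probs = model.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold           # keep only confident predictions
        if not confident.any():
            break
        pseudo = model.classes_[probs[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, X_unlabeled[confident]])
        y_train = np.concatenate([y_train, pseudo])
        X_unlabeled = X_unlabeled[~confident]                 # the rest stays unlabeled
    return model


# Toy data: random vectors stand in for acoustic features of speech frames.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)
X_unlab = rng.normal(size=(500, 20))
adapted = self_train(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unlab)
```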
So just a minute ago, when you and I were chatting before we started recording, a siren came by where you are, and the interesting thing is, I could still understand everything you were saying, even though that siren was, arguably, as loud as you were. What’s going on there, am I subtracting out the siren? How do I still understand you? I ask this for the obvious reason that computers seem to really struggle with that, right?
Right, yeah. And actually how this works in the brain is a very open question and people don’t really know how it’s done. This is actually an active research area of some of my colleagues, and there’s a lot of different models that people have for how this works. And you know, it could be that there’s some sort of filter in your brain that, basically, sorts speech from the noise, for example, or a relevant signal from an irrelevant one. But how this happens, and exactly where this happens is pretty unknown.
But you're right, that's an interesting point you make, that machines have a lot of trouble with this. And so that's one of the inspirations behind these types of research. Because, currently, in machine learning, we don't really know the best way to do this, and so we tend to rely on large amounts of data: large amounts of labeled data or parallel data, data intentionally corrupted with noise. However, this is definitely not how our brain is doing it, and how that's happening, I don't think anyone really knows.
Let me ask you a different question along the same lines. I read these stories all the time that say that, “AI has approached human-quality in transcribing speech,” so I see that. And then I call my airline of choice, I will not name them, and it says, “What is your frequent flyer number?” You know, it’s got Caller ID, it should know that, but anyway. Mine, unfortunately, has an A, an H, and an 8 in it, so you can just imagine “AH8H888H”, right?
It never gets it. So, I have to get up, turn the fan off in my office, take my headset off, hold the phone out, and say it over and over again. So, two questions: what’s the disconnect between what I read and my daily experience? Actually, I’ll give you that question and then I have my follow up in a moment.
Oh, sure, so you’re saying, are you asking why it can’t recognize your—
But I still read these stories that say it can do as good of a job as a human.
Well, so usually—and, for example, I think, recently, there was a story published about Microsoft coming up with a system that had reached human parity in speech recognition—well, usually when you’re saying that, you have it on a somewhat artificial task. So, you’ll have a predefined data set, and then test the machine against humans, but that doesn’t necessarily correspond to a real-world setting, they’re not really doing speech recognition out in the wild.
And, I think, you have an even more difficult problem, because although it’s only frequent flyer numbers, you know, there’s no language model there, there’s no context for what your next number should be, so it’s very hard for that kind of system to self-correct, which is a bit problematic.
So I'm hearing two things. The first thing, it sounds like you're saying, is that they're all cooking the books, as it were. The story is saying something that I interpret one way that isn't real; if you dig down deep, it's different. But the other thing you seem to be saying is, even though there are only thirty-six things I could be saying, because there's no natural flow to that language, it can't do what it could with a sentence, where it could say, "Oh, the first word he said was 'the' and the third word was 'ran'; was that middle word 'boy' or 'toy'?" It could say, "Well, toys don't run, but boys do, therefore it must be 'The boy ran.'" Is that what I'm hearing you say, that a good AI system's going to look contextually and get clues from the word usage in a way that a frequent flyer system doesn't?
Right, yeah, exactly. I think this is actually one of the fundamental limitations of, at least, acoustic modeling, or, you know, the acoustic part of speech recognition, which is that you are completely limited by what the person has said. So, you know, maybe it could be that you’re not pronouncing your “t” at the end of “eight,” very emphatically. And the issue is that, there’s nothing you can really do to fix that without some sort of language-based information to fix it.
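The contrast she is drawing, between acoustic evidence alone and acoustic evidence plus context, can be pictured as a rescoring step: the acoustic model scores how each candidate word sounds, a language model scores how plausible the word is in context, and the recognizer picks the best combined score. A toy sketch, with every probability invented for illustration:

```python
import math

# Hypothetical acoustic scores: log-probability that the audio matches each word.
acoustic_logp = {"boy": math.log(0.40), "toy": math.log(0.45)}  # "toy" sounds slightly better

# Hypothetical language-model scores: log-probability of the word in "The _ ran."
lm_logp = {"boy": math.log(0.30), "toy": math.log(0.01)}        # "toy ran" is implausible


def rescore(candidates, lm_weight=1.0):
    """Combine acoustic and language-model evidence; the highest joint score wins."""
    return max(candidates, key=lambda w: acoustic_logp[w] + lm_weight * lm_logp[w])


print(rescore(["boy", "toy"]))  # -> "boy": context overrides the acoustic ambiguity
```

With a frequent flyer number there is no meaningful language-model term, which is exactly why the acoustic ambiguity between "A," "H," and "8" is so hard to recover from.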
And then, to answer your first question, I wouldn’t necessarily call it “cooking the books,” but it is a fact that, you know, really the data that you have to train on and test on and to evaluate your metrics on, often, almost never really matches up with real-world data, and this is a huge problem in the speech domain, it’s a very well-known issue.
You take my 8, H, and A example—which you're saying is a really tricky problem without context—and, let's say, you have one hundred English speakers, but one is from Scotland, and one could be Australian, and one could be from the east coast, one could be from the south of the United States; is it possible that the range of how 8 is said in all those different places is so wide that it overlaps with how H is said in some places? So, in other words, is it literally an insoluble problem?
It is, I would say, possible. One of the issues is that you would then need a separate model for different dialects. I don't want to dive too far into the weeds with this, but at the root of a speech recognition system, the fundamental linguistic or phonetic unit is often the phoneme, which is the smallest speech sound, and people even argue about whether these actually exist, what they actually mean, and whether this is a good unit to use when modeling speech.
That being said, there’s a lot of research underway, for example, sequence to sequence models or other types of models that are actually trying to bypass this sort of issue. You know, instead of having all of these separate components modeling all of the acoustics separately, can we go directly from someone’s speech and from there exactly get text. And maybe through this unsupervised approach it’s possible to learn all these different things about dialects, and to try to inherently learn these things, but that is still a very open question, and currently those systems are not quite tractable yet.
I’m only going to ask one more question on these lines—though I could geek out on this stuff all day long, because I think about it a lot—but really quickly, do you think you’re at the very beginning of this field, or do you feel it’s a pretty advanced field? Just the speech recognition part.
Speech recognition, I think we’re nearing the end of speech recognition to be honest. I think that you could say that speech is fundamentally limited; you are limited by the signal that you are provided, and your job is to transcribe that.
Now, where speech recognition stops, that’s where natural language processing begins. As everyone knows, language is infinite, you can do anything with it, any permutation of words, sequences of words. So, I really think that natural language processing is the future of this field, and I know that a lot of people in speech are starting to try to incorporate more advanced language models into their research.
Yeah, that’s a really interesting question. So, I ran an article on Gigaom, where I had an Amazon Alexa device on my desk and I had a Google Assistant on my desk, and what I noticed right away is that they answer questions differently. These were factual questions, like “How many minutes are in a year?” and “Who designed the American flag?” They had different answers. And you can say it’s because of an ambiguity in the language, but if this is an ambiguity, then all language is naturally ambiguous.
So, the minutes in a year answer difference was that one gave you the minutes in 365.24 days, a solar year, and one gave you the minutes in a calendar year. And with regard to the flag, one said Betsy Ross, and one said the person who designed the fifty-star configuration on the current flag.
And so, we’re a long way away from the machines saying, “Well, wait a second, do you mean the current flag or the original flag?” or, “Are you talking about a solar year or a calendar year?” I mean, we’re really far away from that, aren’t we?
Yeah, I think that’s definitely true. You know, people really don’t understand how even humans process language, how we disambiguate different phrases, how we find out what are the relevant questions to ask to disambiguate these things. Obviously, people are working on that, but I think we are quite far from true natural language understanding, but yeah, I think that’s a really, really interesting question.
There were a lot of them, “Who invented the light bulb?” and “How many countries are there in the world?” I mean the list was endless. I didn’t have to look around to find them. It was almost everything I asked, well, not literally, “What’s 2+2?” is obviously different, but there were plenty of examples.  
To broaden that question, don’t you think if we were to build an AGI, an artificial general intelligence, an AI as versatile as a human, that’s table stakes, like you have to be able to do that much, right?
Oh, of course. I mean, I think that one of the defining things that makes human intelligence unique, is the ability to understand language and an understanding of grammar and all of this. It’s one of the most fundamental things that makes us human and intelligent. So I think, yeah, to have an artificial general intelligence, it would be completely vital and necessary to be able to do this sort of disambiguation.
Well, let me ratchet it up even another one. There’s a famous thought experiment called the Chinese Room problem. For the benefit of the listener, the setup is that there’s a person in a room who doesn’t speak any Chinese, and the room he’s in is full of this huge number of very specialized books; and people slide messages under the door to him that are written in Chinese. And he has this method where he looks up the first character and finds the book with that on the spine, and goes to the second character and the third and works his way through, until he gets to a book that says, “Write this down.” And he copies these symbols, again, he doesn’t know what the symbols are; he slides the message back out, and the person getting it thinks it’s a perfect Chinese answer, it’s brilliant, it rhymes, it’s great.
So, the thought experiment is this, does the man understand Chinese? And the point of the thought experiment is that this is all a computer does—it runs this deterministic program, and it never understands what it’s talking about. It doesn’t know if it’s about cholera or coffee beans or what have you. So, my question is, for an AGI to exist, does it need to understand the question in a way that’s different than how we’ve been using that word up until now?
That’s a good question. I think that, yeah, to have an artificial general intelligence, I think the computer would have to, in a way, understand the question. Now, that being said, what is the nature of understanding the question? How do we even think, is a question that I don’t think even we know the answer to. So, it’s a little bit difficult to say, exactly, what’s the minimum requirement that you would need for some sort of artificial general intelligence, because as it stands now, I don’t know. Maybe someone smarter than me knows the answer, but I don’t even know if I really understand how I understand things, if that makes sense to you.
So what do you do with that? Do you say, “Well, that’s just par for the course. There’s a lot of things in this universe we don’t understand, but we’re going to figure it out, and then we’ll build an AGI”? Is the question of understanding just a very straightforward scientific question, or is it a metaphysical question that we don’t really even know how to pose or answer?
I mean, I think that this question is a good question, and if we’re going about it the right way, it’s something that remains to be seen. But I think one way that we can try to ensure that we’re not straying off the path, is by going back to these biologically-inspired systems. Because we know that, at the end of the day, our brains are made up of neurons, synapses, connections, and there’s nothing very unique about this, it’s physical matter, there’s no theoretical reason why a computer cannot do the same computations.
So, if we can really understand how our brains are working, what the computations it performs are, how we have consciousness; then I think we can start to get at those questions. Now, that being said, in terms of where neuroscience is today, we really have a very limited idea of how our brains actually work. But I think it’s through this avenue that we stand the highest chance of success of trying to emulate, you know—
Let’s talk about that for a minute, I think that’s a fascinating topic. So, the brain has a hundred billion neurons that somehow come together and do what they do. There’s something called a nematode worm—arguably the most successful animal on the planet, ten percent of all animals on the planet are these little worms—they have I think 302 neurons in their brain. And there’s been an effort underway for twenty years to model that brain—302 neurons—in the computer and make a digitally living nematode worm, and even the people who have worked on that project for twenty years, don’t even know if that’s possible.
What I was hearing you say is, once we figure out what a neuron does—this reductionist view of the brain—we can build artificial neurons, and build a general intelligence, but what if every neuron in your brain has the complexity of a supercomputer? What if they are incredibly complicated things that have things going on at the quantum scale, that we are just so far away from understanding? Is that a tenable hypothesis? And doesn’t that suggest, maybe we should think about intelligence a different way because if a neuron’s as complicated as a supercomputer, we’re never going to get there.
That’s true, I am familiar with that research. So, I think that there’s a couple of ways that you can do this type of study because, for example, trying to model a neuron at the scale of its ion channels and individual connections is one thing, but there are many, many scales upon which your brain or any sort of neural system works.
I think to really get this understanding of how the brain works, it’s great to look at this very microscale, but it also helps to go very macro and instead of modeling every single component, try to, for example, take groups of neurons, and say, “How are they communicating together? How are they communicating with different parts of the brain?” Doing this, for example, is usually how human neuroscience works and humans are the ones with the intelligence. If you can really figure out on a larger scale, to the point where you can simplify some of these computations, and instead of understanding every single spike, perhaps understanding the general behavior or the general computation that’s happening inside the brain, then maybe it will serve to simplify this a little bit.
Where do you come down on all of that? Are we five years, fifty years or five hundred years away from cracking that nut, and really understanding how we understand and understanding how we would build a machine that would understand, all of this nuance? Do you think you’re going to live to see us make that machine?
I would be thrilled if I lived to see that machine; I'm not sure that I will. Exactly when this will happen is a bit hard for me to predict, but I know that we would need massive improvements, probably algorithmically, and probably in our hardware as well, because true intelligence is massively computational, and I think it's going to take a lot of research to get there, but it's hard to say exactly when that would happen.
Do you keep up with the Human Brain Project, the European initiative to do what you were talking about before, which is to be inspired by human brains and learn everything we can from that and build some kind of a computational equivalent?
A little bit, a little bit.
Do you have any thoughts on—if you were the betting sort—whether that will be successful or not?
I’m not sure if that’s really going to work out that well. Like you said before, given our current hardware, algorithms, our abilities to probe the human brain; I think it’s very difficult to make these very sweeping claims about, “Yes, we will have X amount of understanding about how these systems work,” so I’m not sure if it’s going to be successful in all the ways it’s supposed to be. But I think it’s a really valuable thing to do, whether or not you really achieve the stated goal, if that makes sense.
You mentioned consciousness earlier. So, consciousness, for the listeners, is something people often say we don’t know what it is; we know exactly what it is, we just don’t know how it is that it happens. What it is, is that we experience things, we feel things, we experience qualia—we know what pineapple tastes like.
Do you have any theories on consciousness? Where do you think it comes from, and, I’m really interested in, do we need consciousness in order to solve some of these AI problems that we all are so eager to solve? Do we need something that can experience, as opposed to just sense?
Interesting question. I think that there’s a lot of open research on how consciousness works, what it really means, how it helps us do this type of cognition. So, we know what it is, but how it works or how this would manifest itself in an artificial intelligence system, is really sort of beyond our grasp right now.
I don’t know how much true consciousness a machine needs, because, you could say, for example, that having a type of memory may be part of your consciousness, you know, being aware, learning things, but I don’t think we have yet enough really understanding of how this works to really say for sure.
All right fair enough. One more question and I’ll pull the clock back thirty years and we’ll talk about the here and now; but my last question is, do you think that a computer could ever feel something? Could a computer ever feel pain? You could build a sensor that tells the computer it’s on fire, but could a computer ever feel something, could we build such a machine?
I think that it's possible. So, like I said before, there's really no reason why—our brain is really just a very advanced biological computer—a machine shouldn't be able to feel pain. It is a sensation, but it's really just a transfer of information, so I think that it is possible. Now, that being said, how this would manifest, or what a computer's reaction to pain would be, or what would happen, I'm not sure, but I think it's definitely possible.
Fair enough. I mentioned in your introduction that you’re the CTO of an AI company Droice Labs, and the only setup I made was that it was a healthcare company. Tell us a little bit more, what challenge that Droice Labs is trying to solve, and what the hope is, and what your present challenges are and kind of the state of where you’re at?
Sure. Droice is a healthcare company that uses artificial intelligence to provide solutions to hospitals and healthcare providers. So, one of the main things that we're focusing on right now is to try to help doctors choose the right treatment for their patients. This means things like, for example, you come in, maybe you're sick, you have a cough, you have pneumonia, let's say, and you need an antibiotic. What we try to do is, when you're given an antibiotic, we try to predict whether or not this treatment will be effective for you, and also whether or not it'll cause any sort of adverse event for you, so we both try to get people healthy and keep them safe.
And so, this is really what we’re focusing on at the moment, trying to make a sort of artificial brain for healthcare that can, shall we say, augment the intelligence of the doctors and try to make sure that people stay healthy. I think that healthcare’s a really interesting sphere in which to use artificial intelligence because currently the technology is not very widespread because of the difficulty in working with hospital and medical data, so I think it’s a really interesting opportunity.
So, let’s talk about that for a minute, AIs are generally only as good as the data we train them with. Because I know that whenever I have some symptom, I type it into the search engine of choice, and it tells me I have a terminal illness; it just happens all the time. And in reality, of course, whatever that terminal illness is, there is a one-in-five-thousand chance that I have that, and then there’s also a ninety-nine percent chance I have whatever much more common, benign thing. How are you thinking about how you can get enough data so that you can build these statistical models and so forth?
We’re a B2B company, so we have partnerships with around ten hospitals right now, and what we do is get big data dumps from them of actual electronic health records. And so, what we try to do is actually use real patient records, like, millions of patient records that we obtain directly from our hospitals, and that’s how we really are able to get enough data to make these types of predictions.
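As a rough picture of what can be built once such records are assembled (this is not Droice's actual pipeline), here is a minimal sketch that predicts an adverse event from a few structured patient features; the columns, values, and labels are all invented.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Invented stand-in for a de-identified EHR extract (real data would be millions of rows).
records = pd.DataFrame({
    "age":              [71, 34, 58, 80, 45, 66],
    "creatinine":       [1.9, 0.8, 1.1, 2.3, 0.9, 1.4],
    "on_anticoagulant": [1, 0, 0, 1, 0, 1],
    "adverse_event":    [1, 0, 0, 1, 0, 1],   # label: did the treatment cause harm?
})

X = records.drop(columns="adverse_event")
y = records["adverse_event"]
model = GradientBoostingClassifier().fit(X, y)

# Estimated probability that a new patient would have an adverse reaction to the same treatment.
new_patient = pd.DataFrame({"age": [77], "creatinine": [2.0], "on_anticoagulant": [1]})
print(model.predict_proba(new_patient)[:, 1])
```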
How accurate does that data need to be? Because it doesn’t have to be perfect, obviously. How accurate does it need to be to be good enough to provide meaningful assistance to the doctor?
That is actually one of the big challenges, especially in this type of space. In healthcare, it's a bit hard to say which data is good enough, because missing data is very, very common. I mean, one of the hallmarks of clinical or medical data is that it will, by default, contain many, many missing values; you never have the full story on any given patient.
Additionally, it’s very common to have things like errors, there’s unstructured text in your medical record that very often contains mistakes or just insane sentence fragments that don’t really make sense to anyone but a doctor, and this is one of the things that we work really hard on, where a lot of times traditional AI methods may fail, but we basically spend a lot of time trying to work with this data in different ways, come up with noise-robust pipelines that can really make this work.
I would love to hear more detail about that, because I’m sure it’s full of things like, “Patient says their eyes water whenever they eat potato chips,” and you know, that’s like a data point, and it’s like, what do you do with that. If that is a big problem, can you tell us what some of the ways around it might be?
Sure. I’m sure you’ve seen a lot of crazy stuff in these health records, but what we try to do is—instead of biasing our models by doing anything in a rule-based manner—we use the fact that we have big data, we have a lot of data points, to try to really come up with robust models, so that, essentially, we don’t really have to worry about all that crazy stuff in there about potato chips and eyes watering.
And so, what we actually end up doing is, basically, we take these many, many millions of individual electronic health records, and try to combine that with outside sources of information, and this is one of the ways that we can try to really augment the data on our health record to make sure that we’re getting the correct insights about it.
So, with your example, you said, “My eyes water when I eat potato chips.” What we end up doing is taking that sort of thing, and in an automatic way, searching sources of public information, for example clinical trials information or published medical literature, and we try to find, for example, clinical trials or papers about the side effects of rubbing your eyes while eating potato chips. Now of course, that’s a ridiculous example, but you know what I mean.
And so, by augmenting this public and private data together, we really try to create this setup where we can get the maximum amount of information out of this messy, difficult to work with data.
The kinds of data you have that are solid data points would be: how old is the patient, what's their gender, do they have a fever, do they have aches and pains; that's very coarse-level stuff. But like—I'm regretting using the potato chip example because now I'm kind of stuck with it—a potato chip is made of a potato, which is a tuber, which is a nightshade, and there may be some breakthrough, like, "That may be the answer, it's an allergic reaction to nightshades." And that answer is so many levels removed.
I guess what I'm saying is, and you said earlier, language is infinite, but health is near that, too, right? There are so many potential things something could be, and yet so few data points that we can draw from. It would be like if I said, "I know a person who is 6' 4" and twenty-seven years old and born in Chicago, what's their middle name?" It's like, how do you even narrow it down to a set of middle names?
Right, right. Okay, I think I understand what you’re saying. This is, obviously, a challenge, but one of the ways that we kind of do this is, the first thing is our artificial intelligence is really intended for doctors and not the patients. Although, we were just talking about AGI and when it will happen, but the reality is we’re not there yet, so while our system tries to make these predictions, it’s under the supervision of a doctor. So, they’re really looking at these predictions and trying to pull out relevant things.
Now, you mentioned, the structured data—this is your age, your weight, maybe your sex, your medications; this is structured—but maybe the important thing is in the text, or is in the unstructured data. So, in this case, one of the things that we try to do, and it’s one of the main focuses of what we do, is to try to use natural language processing, NLP, to really make sure that we’re processing this unstructured data, or this text, in a way to really come up with a very robust, numerical representation of the important things.
So, of course, you can mine this information, this text, to try to understand, for example, you have a patient who has some sort of allergy, and it’s only written in this text, right? In that case, you need a system to really go through this text with a fine-tooth comb, and try to really pull out risk factors for this patient, relevant things about their health and their medical history that may be important.
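In its simplest form, that "fine-tooth comb" step can be pictured as mapping free text onto a structured vocabulary of risk factors. Real clinical NLP is far more involved (negation such as "denies chest pain," abbreviations, misspellings), and the lexicon, codes, and note below are invented.

```python
import re

# Tiny invented lexicon mapping surface phrases to structured risk-factor codes.
RISK_LEXICON = {
    r"allerg\w*\s+to\s+penicillin":  "ALLERGY_PENICILLIN",
    r"type\s*2\s*diabet\w*":         "DIABETES_TYPE_2",
    r"atrial\s+fibrillation|a-?fib": "ATRIAL_FIBRILLATION",
}


def extract_risk_factors(note: str) -> set:
    """Return the structured risk factors mentioned anywhere in a clinical note."""
    found = set()
    for pattern, code in RISK_LEXICON.items():
        if re.search(pattern, note, flags=re.IGNORECASE):
            found.add(code)
    return found


note = "Pt reports being allergic to penicillin. Hx of afib, denies chest pain."
print(extract_risk_factors(note))  # {'ALLERGY_PENICILLIN', 'ATRIAL_FIBRILLATION'}
```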
So, is it not the case that diagnosing—if you just said, here is a person who manifests certain symptoms, and I want to diagnose what they have—may be the hardest problem possible? Especially compared to where we've seen success, which is, like, here is a chest x-ray, and we have a very binary question to ask: does this person have a tumor or do they not? Where the data is: here are ten thousand scans with a tumor, here are a hundred thousand without a tumor.
Like, is it the cold or the flu? That would be an AI kind of thing because an expert system could do that. I’m kind of curious, tell me what you think—and then I’d love to ask, what would an ideal world look like, what would we do to collect data in an ideal world—but just with the here and now, aspirationally, what do you think is as much as we can hope for? Is it something, like, the model produces sixty-four things that this patient may have, rank ordered, like a search engine would do from the most likely to the least likely, and the doctor can kind of skim down it and look for something that catches his or her eye. Is that as far as we can go right now? Or, what do you think, in terms of general diagnosing of ailments?
Sure, well, actually, what we focus on currently is really on the treatment, not on the diagnosis. I think the diagnosis is a more difficult problem, and, of course, we really want to get into that in the future, but that is actually somewhat more of a very challenging sort of thing to do.
That being said, what you mentioned, you know, saying, “Here’s a list of things, let’s make some predictions of it,” is actually a thing that we currently do in terms of treatments for patients. So, one example of a thing that we’ve done is built a system that can predict surgical complications for patients. So, imagine, you have a patient that is sixty years old and is mildly septic, and may need some sort of procedure. What we can do is find that there may be a couple alternative procedures that can be given, or a nonsurgical intervention that can help them manage their condition. So, what we can do is predict what will happen with each of these different treatments, what is the likelihood it will be successful, as well as weighing this against their risk options.
And in this way, we can really help the doctor choose what sort of treatment that they should give this person, and it gives them some sort of actionable insight, that can help them get their patients healthy. Of course, in the future, I think it would be amazing to have some sort of end to end system that, you know, a patient comes in, and you can just get all the information and it can diagnose them, treat them, get them better, but we’re definitely nowhere near that yet.
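Putting those pieces together, the "which treatment for this patient" step amounts to scoring each candidate option with a predictive model and showing the doctor the trade-offs. In the sketch below the scoring function is a hand-written stand-in for a trained model, and every number and treatment name is invented.

```python
def complication_risk(patient: dict, treatment: str) -> float:
    """Stand-in for a trained model's predicted probability of complications."""
    base = {"open_surgery": 0.18, "laparoscopic": 0.09, "nonsurgical": 0.04}[treatment]
    # Illustrative adjustments; a real model would learn these from patient records.
    if patient["age"] > 65:
        base += 0.05
    if patient["septic"]:
        base += 0.10 if treatment != "nonsurgical" else 0.03
    return min(base, 1.0)


patient = {"age": 60, "septic": True}
options = ["open_surgery", "laparoscopic", "nonsurgical"]

# Rank the candidate treatments by predicted complication risk for this patient.
for treatment in sorted(options, key=lambda t: complication_risk(patient, t)):
    risk = complication_risk(patient, treatment)
    print(f"{treatment:14s} predicted complication risk: {risk:.0%}")
```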
Recently, IBM made news that Watson had prescribed treatment for cancer patients that was largely identical to what the doctors did, but it had the added benefit that in a third of the cases it found additional treatment options, because it had the virtue of being trained on a quarter million medical journals. Is that the kind of thing that's like "real, here, today," that we will expect to see more things like that?
I see. Yeah, that’s definitely a very exciting thing, and I think that’s great to see. One of the things that’s very interesting, is that IBM primarily works on cancer. It’s lacking in these high prescription volume sorts of conditions, like heart disease or diabetes. So, I think that while this is very exciting, this is definitely a sort of technology, and a space for artificial intelligence, where it really needs to be expanded, and there’s a lot of room to grow.
So, we can sequence a genome for $1,000. How far away are we from having enough of that data that we get really good insights into, for example, a person has this combination of genetic markers, and therefore this is more likely to work or not work. I know that in isolated cases we can do that, but when will we see that become just kind of how we do things on a day-to-day basis?
I would say probably twenty-five years from reaching the clinic. I mean, it's great, this information is really interesting, and we can do it, but it's not widely used. I think there are too many regulations in place right now that keep this from happening, so, I think it's going to be, like I said, maybe twenty-five years before we really see this very widely used for a good number of patients.
So are there initiatives underway that you think merit support that will allow this information to be collected and used in ways that promote the greater good, and simultaneously, protect the privacy of the patients? How can we start collecting better data?
Yeah, there are a lot of people that are working on this type of thing. For example, Obama had a precision medicine initiative and these types of things where you’re really trying to, basically, get your health records and your genomic data, and everything consolidated and have a very easy flow of information so that doctors can easily integrate information from many sources, and have very complete patient profiles. So, this is a thing that’s currently underway.
To pull out a little bit and look at the larger world, you're obviously deeply involved in speech and language processing, and health care, and all of these areas where we've seen lots of advances happening on a regular basis, and it's very exciting. But then there's a lot of concern from people who have two big worries. One is the effect that all of this technology is going to have on employment. And there are two views.
One view is that technology increases productivity, which increases wages, and that's what's happened for two hundred years. The other is that this technology is somehow different: it replaces people, and anything a person can do, eventually the technology will do better. Which of those camps, or a third camp, do you fall into? What is your prognosis for the future of work?
Right. I think that technology is a good thing. I know a lot of people have concerns, for example, that if there’s too much artificial intelligence it will replace my job, there won’t be room for me and for what I do, but I think that what’s actually going to happen, is we’re just going to see, shall we say, a shifting employment landscape.
Maybe if we have some sort of general intelligence, then people can start worrying, but, right now, what we’re really doing through artificial intelligence is augmenting human intelligence. So, although some jobs become obsolete, now to maintain these systems, build these systems, I believe that you actually have, now, more opportunities there.
For example, ten to fifteen years ago, there wasn’t such a demand for people with software engineering skills, and now it’s almost becoming something that you’re expected to know, or, like, the internet thirty years back. So, I really think that this is going to be a good thing for society. It may be hard for people who don’t have any sort of computer skills, but I think going forward, that these are going to be much more important.
Do you consume science fiction? Do you watch movies, or read books, or television, and if so, are there science fiction universes that you look at and think, “That’s kind of how I see the future unfolding”?
Have you ever seen the TV show Black Mirror?
Well, yeah that’s dystopian though, you were just saying things are going to be good. I thought you were just saying jobs are good, we’re all good, technology is good. Black Mirror is like dark, black, mirrorish.
Yeah, no, I’m not saying that’s what’s going to happen, but I think that’s presenting the evil side of what can happen. I don’t think that’s necessarily realistic, but I think that show actually does a very good job of portraying the way that technology could really be integrated into our lives. Without all of the dystopian, depressing stories, I think that the way that it shows the technology being integrated into people’s lives, how it affects the way people live—I think it does a very good job of doing things like that.
I wonder, though. Science fiction movies and TV are notoriously dystopian, because there's more drama in that than in utopia. So, it's not conspiratorial or anything, I'm not asserting that, but I do think that what it does, perhaps, is cause people—somebody termed it "generalizing from fictional evidence"—to see enough views of the future like that and think, "Oh, that's how it's going to happen." And then that therefore becomes self-fulfilling.
Frank Herbert, I think it was, who said, "Sometimes the purpose of science fiction is to keep a world from happening." So do you think those kinds of views of the world are good, or do you think that they increase this collective worry about technology and losing our humanity, becoming a world that's blackish and mirrorish, you know?
Right. No, I understand your point and, actually, I agree. I think there is a lot of fear, which is quite unwarranted. There is actually a lot more transparency in AI now, so I think that a lot of those fears come from, well, given the media today, as I'm sure we're all aware, a lot of fear-mongering. I think that these fears are overstated—not to say there will be no negative impact, but, I think, every cloud has its silver lining. I think that this is not something that anyone really needs to be worrying about. One thing that I think is really important is to have more education for a general audience, because I think part of the fear comes from not really understanding what AI is, what it does, how it works.
Right, and so, I was just kind of thinking through what you were saying. There's an initiative in Europe that says AI engines—kind of like the one you're talking about that's suggesting things—need to be transparent, in the sense that they need to be able to explain why they're making that suggestion.
But, I read one of your papers on deep neural nets, and it talks about how the results are hard to understand, if not impossible to understand. Which side of that do you come down on? Should we limit the technology to things that can be explained in bulleted points, or do we say, “No, the data is the data and we’re never going to understand it once it starts combining in these ways, and we just need to be okay with that”?
Right, so, one of the most overused phrases in all of AI is that “neural networks are a black box.” I’m sure we’re all sick of hearing that sentence, but it’s kind of true. I think that’s why I was interested in researching this topic. I think, as you were saying before, the why in AI is very, very important.
So, I think, of course we can benefit from AI without knowing. We can continue to use it like a black box, it’ll still be useful, it’ll still be important. But I think it will be far more impactful if you are able to explain why, and to really demystify what’s happening.
One good example from my own company, Droice, is that in medicine it's vital for the doctor to know why you're saying what you're saying. So, if a patient comes in and you say, "I think this person is going to have a very negative reaction to this medicine," it's very vital for us to try to analyze the neural network and explain, "Okay, it's really this feature of this person's health record, for example, the fact that they're quite old and on another medication." That really makes them trust the system, and really eases the adoption, and allows it to be integrated into traditionally less technologically focused fields.
So, I think that there’s a lot of research now that’s going into the why in AI, and it’s one of my focuses of research, and I know the field has really been blooming in the last couple of years, because I think people are realizing that this is extremely important and will help us not only make artificial intelligence more translational, but also help us to make better models.
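One widely used way to get at the "why," though not necessarily the technique Droice uses, is permutation importance: scramble one input feature at a time and measure how much the model's performance degrades. A minimal sketch on invented patient features, where two features genuinely drive the label and one is pure noise:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Invented features: age and a concurrent medication drive the label; the third column is noise.
age = rng.uniform(20, 90, n)
other_med = rng.integers(0, 2, n)
noise = rng.normal(size=n)
y = ((age > 70) & (other_med == 1)).astype(int)
X = np.column_stack([age, other_med, noise])

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["age", "other_medication", "noise"], result.importances_mean):
    print(f"{name:18s} importance: {score:.3f}")  # age and other_medication dominate, noise ~ 0
```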
You know, in The Empire Strikes Back, when Luke is training on Dagobah with Yoda, he asked him, “Why, why…” and Yoda was like, “There is no why.” Do you think there are situations where there is no why? There is no explainable reason why it chose what it did?
Well, I think there is always a reason. For example, you like ice cream; well, maybe it’s a silly reason, but the reason is that it tastes good. It might not be, you know, you like pistachio better than caramel flavor—so, let’s just say the reason may not be logical, but there is a reason, right? It’s because it activates the pleasure center in your brain when you eat it. So, I think that if you’re looking for interpretability, in some cases it could be limited but I think there’s always something that you could answer when asking why.
Alright. Well, this has been fascinating. If people want to follow you, keep up with what you’re doing, keep up with Droice, can you just run through the litany of ways to do that?
Yeah, so we have a Twitter account, it’s “DroiceLabs,” and that’s mostly where we post. And we also have a website: www.droicelabs.com, and that’s where we post most of the updates that we have.
Alright. Well, it has been a wonderful and far ranging hour, and I just want to thank you so much for being on the show.
Thank you so much for having me.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]