Interview with Jay Iorio


Jay Iorio is a technology strategist for the IEEE Standards Association, specializing in the emerging technologies of virtual worlds and 3D interfaces. In addition to being a machinimatographer, Iorio manages IEEE Island in Second Life and has done extensive building and environment creation in Second Life and OpenSimulator.
What follows is an interview between Jay Iorio and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence and virtual and augmented reality.


Byron Reese: Synthetic reality, is that a term that you use internally and is that something we’re going to hear more about as a class or concept? Or is that just useful in your line of work?
Jay Iorio: That’s sort of a term that I use internally in my own mind, it doesn’t really come from anywhere. I’m trying to think of a term that includes all of the illusory technologies: virtual reality, augmented reality, everything along the Milgram spectrum and the technologies that also contribute to that. So that it doesn’t just become a playback mechanism; that in fact it becomes a part of the interaction with the physical space and with other people and so forth.
So I would say that specifically what I'm talking about is AR (augmented reality) in the context of a sensor network, in the context of what we're calling the internet of things (IoT), so that the street becomes aware, it becomes aware that you're there. It knows your history, knows what you bought. It knows, because of biometric devices for example, it knows your blood sugar. It's monitoring your gait, it's inferring a lot from the data that it's picking up from you in real time. Integrating that with the physical world, so that the augmented reality becomes the display for this highly intelligent system, this adaptive system. You and I could walk down the same street in Austin for example and see very different things. Not even getting into… "I don't like that style of architecture," so it's going to occlude that from my vision or change it into mid-century modern or something. But content, the traditional streams that we're used to now, the electronic streams and so forth could be integrated into the built environment. So that in a sense it looks like your personalized desktop, it still looks like Fourth Street, but it's your Fourth Street, and this would be a fairly powerful AI system that was continually feeding you information that it thought you wanted, correcting for it and so forth. It could dim street signage; it could change things. It could do the hospital thing of follow blue for "To Obstetrics." You know it could give you guidance, or the more conventional uses for AR, if you can call them conventional.
But I think where it really comes alive is that it starts to anticipate, like a lot of online systems are starting to do today. But I think we're seeing just the foothills of a mountain range. They're trying to predict your commercial behavior. They're trying to predict what you like. They're trying to learn more about you, and that can… everybody focuses on the possible negatives of that and the invasiveness, but there are also enormous positives to it, and there are ways that, you know, we can guide that development. I think the street becomes in a sense a personal valet. The city becomes responsive instead of an inert collection of buildings. It becomes a part of your body, in a sense an extension of your body. If it knows your blood sugar is a certain way, it will dim the lights for the doughnut shop, or, well, you know, you could take it to an extreme where in a sense it becomes an illusion that's based on reality. But it's such an enhanced illusion that in a sense it's almost approaching virtual reality.
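Iorio's examples here, the dimmed doughnut-shop sign, the restyled facades, the "follow blue" wayfinding, amount to a personalization layer sitting between the sensor network and the AR display. As a purely hypothetical illustration of that idea, not anything Iorio or the IEEE has built, a decision function for such a layer might look like the sketch below; every name, threshold, and rule in it is invented for illustration.

```python
# A toy sketch of a personalization layer for an "aware street":
# sensor readings plus preferences in, overlay adjustments out.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PedestrianContext:
    blood_glucose_mg_dl: float                 # from a hypothetical biometric wearable
    destination: Optional[str]                 # e.g. "obstetrics"
    disliked_styles: set = field(default_factory=set)

@dataclass
class OverlayDecision:
    dim_signs: list
    restyle_facades: dict
    wayfinding_color: Optional[str]

def personalize_street(ctx: PedestrianContext) -> OverlayDecision:
    decision = OverlayDecision(dim_signs=[], restyle_facades={}, wayfinding_color=None)
    # "If it knows your blood sugar is a certain way, it will dim the lights
    # for the doughnut shop" -- threshold chosen arbitrarily for illustration.
    if ctx.blood_glucose_mg_dl > 180:
        decision.dim_signs.append("doughnut_shop")
    # Swap out architecture the viewer dislikes for a preferred style.
    for style in ctx.disliked_styles:
        decision.restyle_facades[style] = "mid_century_modern"
    # The hospital-style "follow blue" guidance toward a known destination.
    if ctx.destination == "obstetrics":
        decision.wayfinding_color = "blue"
    return decision

print(personalize_street(PedestrianContext(
    blood_glucose_mg_dl=200,
    destination="obstetrics",
    disliked_styles={"brutalist"},
)))
```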
So that’s all like kind of science fiction sounding stuff right from where we are today. Where I call my airline and say my frequent flyer number and it doesn’t get it right. What time frame are you talking about to have that experience of the world?
Well, I mean we know it isn’t going to happen on one Monday morning. So we’re already seeing pieces of it the way…
No. But to get that fulfilled vision, where the environment I am in, everything I see and touch and feel all around me, is somehow enlivened by this technology?
I think the first step is going to be the mainstreaming of full vision AR.
Let’s start with that step, what does that mean? Full vision AR?
I would say a big step forward from the existing ones. You take the Meta visor, for example, or the HoloLens, something like that. I think that's probably the latest we've got right now and it's not bad. But I think there are discoveries in the pipeline that are really reducing it to this, and it could be contact lenses, ultimately it could be implants.
When you say this, you mean your glasses?
My glasses, yes I’m sorry.
That’s all right and you’re talking about the new ones that are coming out from… Are you referring to any specific product?
Well, I know that, I think Intel.
Do you think that’s going to be projected on your eye? You’re going to see it as in the lens or -?
That I don’t know. I’m going to leave that to the engineers you know that I think that it could well be…
So someday you get a pair of glasses or contacts that convey that information to you through a means we don't have down yet, and you think that that's going to be the first step, that you'll have the blank slate as it were?
The first step I think will be to take what we currently do on our smartphones and extend it to that realm. So basically the selling point is that it’s hands-free. It’s full-time, it’s always there. It will get rid of this 2018 gesture. I think that will… the phone is sort of an interim step. It wasn’t intended as an interim step. It wasn’t intended to be used the way it’s being used now. You know this is everybody’s computer at this point and I don’t think anybody thought that 10 years ago.
I wonder if people still do that thing with their thumb and pinky when they’re you know when they’re doing the phone thing because it like doesn’t make any sense. Like when will the banana be displaced as the comedic substitute for the telephone? I guess it would become something, anyway keep going.
It’s true. It’s like the fact that you can’t hang up on it.
I know, I remember in the second Spider-Man movie with Tobey Maguire, there's a scene where the villain's talking to somebody and hangs up. And he hears a dial tone, and immediately it was just jarring to me, like you know you don't hear when somebody hangs up their cellphone. There is no dial tone.
That’s right yeah.
It’s like they had to have some audio indicator that there was no longer a person on the other end because otherwise you’re like “hello, hello?”
The drama has been removed from the phone.
I know, so keep going. The first step is it takes over what our phones do.
I think so, and I know people who work in AR and a few artists who actually do big public spectacles with AR. And you know the problem is you have to hold your phone up, but the real problem is discovery; you have to know that it's there in the first place. And I think that AR explodes when you no longer have to discover it, when it's just there, and then the further step of when it's feeding you. It isn't giving us all the same stuff. It knows that you like modern art and so it's feeding you that; the public art becomes much more harmonious with what you like and so forth.
Do you have shared experiences then anymore?
It’s a good question.
And, is that not an isolating technology? When we go for a walk down the street and I see Art Deco and you see something else?
It is ironic that this ultra-connectivity technology, this web of technologies could… It's most easily used to do exactly what you're saying, which is to give us exactly what we want. And that is the ethical issue that I'm most focused on, which is the needs of people: we want what we want. We want to be comfortable. We want certain things; we want to get what we want. The commercial marketplace wants to give it to us and cater only to those impulses. I think we'll end up with what you're talking about, which is a lot of sort of gated communities, a more insular way of looking at life, so that you're getting everything you like, but you don't really understand other people. You're not experiencing the real city as you walk down the street. You're experiencing an illusion that's coming largely from your own mind and behavior. So one of the issues I'd really like to address, not necessarily today, it won't be solved today, but over the long term what I'm looking at is: how you introduce randomness, serendipity, happy accidents, the kinds of things that, in a very structured world like the one I'm describing, tend to either be filtered out or predictable.
Presumably the algorithms would be good enough that it says: “I’m going to find what we both have in common and then we can have a shared experience that we both [like], and it may not be your favorite or my favorite, but the music, the song that’s playing is at least something we both like.”
That's right. I mean what I'm afraid of losing in that environment is, I live in Los Angeles. I used to live in New York City, I like big cities. I like the craziness of them, I like the fact that every day you're going to experience something that you never would have predicted and you might not have wanted. I'll hear musicians playing a genre that, if anybody had asked me the day before… it's "I really don't like that stuff, it's not for me." And then I find myself stopping and listening, and then as a musician, I find myself being influenced by a genre I never… This to me is the beauty of the cities, that you ride in a subway, as unpleasant as it can be, and you're constantly confronting the full [range of] humanity, and I think there's something very humanizing about that. It makes you more open-minded, makes you realize that not everybody believes the way you do. And you know, you even see a primitive version of it on Facebook, for example, where the illusion is created, to an extent, that the world is much more like you than it really is. You're confronting the whole fake news idea. But basically the idea that you're being presented with content that makes you feel good about what you already believe. And I'm disturbed by that. I think that's very destructive.
I have a policy that I don’t read any book I agree with. I’m serious because it’s like I spend that time and then I get to the end I’m like yeah, that’s just what I thought. So I literally only read things that…so I’m an optimist about the future, so I only read pessimistic views and so forth. So, let me ask you a question. Let’s say we get some form of AI that is… we won’t even say whether it’s an AGI or whether it’s conscious or anything like that. But, it gets Siri or some equivalent technology. It’s so good that it laughs at your jokes and tells you things and you converse with it and all of that, and you regard it as a friend. Maybe it manifests in a robot that’s vaguely humanoid, I don’t know. And let’s say that those become your best friends, and then you know then you find one that’s your spouse and then you just deal with those all day and you never deal with another person. Because those people never let you down and always like… why is that bad? I mean at human level you say doesn’t sound… but why is that bad? Why not just live that life around people that make you feel good about yourself and tell jokes you like? And you had all the stuff in common, why deal with other people?
Well, we do that to an extent already, and even before any of these electronic tools we found communities and you always want to hang out with people that you know you have a similar worldview, where you get each other’s jokes and so forth. So you’re not constantly arguing about basic assumptions. But there’s a difference I think between in the analog world, knowing that I’m living and hanging out with a community of people, like-minded people. You know, we’re all in the ballpark but being aware that right across that highway are people who don’t share any of our assumptions. And we really look at the world quite differently. So is it a good thing or a bad thing to be aware of them and to have to interact with them? I think, with no evidence, but I think it’s a good thing to interact with people you disagree with.
Well that’s people’s gut reaction, but try, and I heard your caveat with no evidence, but try to justify it.
If you don't encounter things you don't like, it's like a muscle that doesn't encounter resistance. It never develops. It requires friction, I think, for humans, because I think that's the way we've evolved: we evolved in a very complex, diverse society and we have to find our way through that, and our identity I think is constructed. We construct our identity based largely on how we see ourselves in the midst of that. So it might not be bad, it might just lead to humans who are less able to handle diverse opinions, new ideas, and inventions. They might be less tolerant of eccentricity, of artists, of inventors, of people who by nature break the mold. If you're so accustomed to the world being exactly as you like, it might be very difficult for you to accept a revolutionary concept or a work of art that's startling and maybe offensive at first. But you grow by accepting those things and incorporating them into your identity. So I would say that it's good to throw a lot of stuff at people and let them sort it out.
Say here, you get these two robots to choose from: you know this one is exactly like what you want; this one however has body odor and tells offensive jokes that just really offend you at every level, and you really should pick that one.
Well, it's sort of the movie Her, for example: this is your perfect companion, and because she was intelligent, she evolved and grew with him and so forth, just like a human would. I'm not saying necessarily surround yourself with the obnoxious or what you find uncomfortable. On the other hand, don't surround yourself necessarily with everybody who agrees with you all the time. It leads to an intellectual inflexibility and a cultural inflexibility.
Do you think human evolution has ended now because the strong don’t necessarily survive any better than the weak, and the intelligent don’t necessarily reproduce more or have higher survival rates than the less [intelligent]? Is human evolution over and the only betterment we’re going to have now is through machines?
I don't think so. I think that humans as organisms continue to evolve. I think that the strongest is not the physically strongest, because any tiger could knock a weightlifter out. I mean compared to other species, we're very weak. I would say if you interpret strength for humans as having the characteristics of harmonizing society, cooperativeness, collaboration and so forth, I would see those as the human strengths, and I would see those as very refined evolutions of our temperaments. Human strength is not individual, despite our mythology. Yes, inventors come up with ideas, yes, artists come up with ideas and so forth. And those tend to happen individually, but the real changes tend to happen with a lot of people collaborating, some of whom don't even know they're collaborating, but they're participating in a movement. So I would say that the highest point of human evolution is something like empathy, understanding of people who are very different and so forth; that's human strength. And I would say that is what allows us to survive, not our physical strength. We don't really have any physical strength to speak of.
So your contention is that ethical, I mean that empathetic, people with empathy will reproduce more than people without it over the long run?
I don't know, though; there are too many issues with reproduction. I don't think that will be the case, but the numbers don't necessarily dictate the influence that has on society.
So let’s get back to our narrative. We have our cellphone [that] has migrated to a hands-free device that we can effortlessly interact with, and you assume that people want to do that based on how they’re willing… it is true that taking the elevator up here I noticed everybody whipped out their phone. It’s like “What am I going to do for the next thirty-four stories of elevator time? I’ve got to pass this time some way.” And so your contention is that there is a latent desire for that because people want to have it on 24/7?
I think so. I think that if I had to come up with one gut justification for this, it would be, and I know this is not visual, but I'm making the gesture of playing with your phone with two thumbs. It's the fact that, and I think it's an obsession with me, I go into a crowd in an airport or a hotel and I count the people who are using phones and the ones who aren't, and it's always over 50% of people who are like this. Especially if you count the laptops. So there's a need, it could be an obsession, it could be… who knows where it's coming from. But there is definitely a need to look at this thing all day, and who wouldn't rather strap it to their head and have it be full fidelity and high definition, with overlays that don't look cartoonish, that actually look like they're fixed and integrated with the environment and so forth. And be able to do all the things you can do on your phone. You get your mail, your messages, you take photographs and whatever.
I have been to North Korea several times and there is no internet. There is no cell phone reception, there is nothing. And I find that the most isolating aspect of it all… like you know I cuddle up to like the warmth of this thing that’s… it’s almost like, I don’t know, I feel untethered and adrift when I don’t have it. And I wonder did it awaken something in me because I wouldn’t have felt that way when I was younger, because I didn’t have the device? Or did it change me, did it somehow weaken me, that now I need it? Or did it awaken this latent desire to want to be connected to a world of information? What do you think?
I think we might have a lot of latent desires that technology hasn't given us an avenue for, and this is one of them. When I was a kid, there was no such thing as email, so being without it… so what? You wouldn't even have been able to explain to me what this phone does. You know, I mean you'd have to explain the internet. You have to explain all the protocols; it's an amazing amount of history that we've got in our pockets. So, you know, in the 15th century, would people have been doing this? Yeah. I think they would have. I think it's human. I think that you've got a little device here that is magical. It carries… it's your portal to the world. It's a computer that you can carry on you. It makes me wonder what other technologies could evolve that show that we have other desires that aren't being met, or that we could become addicted to. I mean, that's not the right word, but habituated to; it becomes essential.
Why would you make that distinction between habituation versus addiction?
Well, because I think of addiction as a drug, but it’s really the same thing yeah it is…
Because I have withdrawal symptoms if I’m cut off from it.
That's true, and in fact we've seen in the last year that some of Facebook's original designers have started to come clean and talk about how it is deliberately designed to be addictive. That's not surprising in a way; I mean from Facebook's standpoint you want to keep people using it, and that's where the information about people comes from and so forth. So it's understandable, but you know we have become addicted to something that is actually very useful. I guess that's my reluctance to use the word addiction. I think of addiction as to something bad, but you could be addicted to something good too, I suppose.
So we have our device and now we transport into the future, and you said the street is aware, and I assume you mean that colloquially, not literally; the street is not conscious.
The street couldn’t really be conscious but the sense, the sensors and the interaction between the sensors and the databases and that there’s a whole web of intelligence I guess you could call it, that will create the illusion that in a sense, I think that the city is responding. That building changed because of something I bought. My health changed and so that facade looks different, the artwork looks different. It’s something now to make me feel more relaxed because it knows I’m very nervous and it knows that I have a heart condition or the opposite, or what have you. The city could become your doctor for most things. It’s constantly diagnosing you. It’s looking at your heart rate continuously, you know an automated vehicle could show up on the sidewalk when you think you’re having indigestion, and it realizes that you’re having a heart attack, so it takes you immediately to the hospital. And starts treating you as soon as it comes in contact with you. I mean the healthcare benefits are just staggering over the next generation.
So you’re an ethicist and you think about the ethics of all of this stuff?
I’m an amateur ethicist.
Fair enough. I don’t know how you go pro… Regardless, tell me some ethical considerations that we may not have thought about, or we had that you want to weigh in on, so what sorts of questions are outstanding?
I’m going to avoid AI by itself because that becomes, well in a way I can’t avoid AI because this whole thing is basically run on machine learning. I would say that the biggest ethical concern I have at this point is that this amazing collection of technologies not be used to de-nature the human experience. Not to make it seem as though life is simpler than it is. There are no people I dislike. There are no people with political views I disagree with. There are no genres of music or movies that I don’t like. I’m not exposed to any of that and it makes me happy. That I find to be a very dangerous thing. It leads to the fabric coming apart I think. So that’s one of my concerns. The commercial motivation of a lot of the AI, the Facebook and Google and so forth, is potentially problematic because there are other values in society that are more conducive to holding the fabric together, appreciating other people’s experiences and points of view and so forth. You know that are not…
Fair enough. So let's take the first one of those two, that somehow this bubbling… goes to a whole new dimension where it isn't just "Here are suggested stories for you," but people and all experiences contrary to your current preferences are off limits, and you say that pulls the fabric apart because it dissolves community. I don't have any reason at all to empathize with you because you have absolutely nothing in common with me. Is that how you're seeing it?
Something like that. Everybody I know disagrees with you, so how could you possibly be right? You know? As opposed to: there are lots of people with a range of points of view and they very idiosyncratically… and sometimes they’re full of contradictions and so forth. And I think to become a full member of the community, you have to sort of appreciate the messiness of people. And a lot of these technologies are naturally inclined, I think, to shave off the messiness and to make it seem like it’s a lot more, you know…
So run both scenarios. Run the worst case and then tell me why that’s not going to happen?
The worst case would be, I think, if a system like this were used in a society where there was no tradition of democratic values. I think that's very dangerous, because then your primary motivation becomes efficiency, and that's not a very good way to organize society, I don't think. Society is inherently inefficient, and the freer people are, the less efficient it is. Efficiency is never really the goal of a democratic republic. But an authoritarian state with these technologies could create an extremely obedient population that would govern itself in a sense. They would not need to be censored, they would not need to be told that this was inappropriate, because they would know better. They would know that. They would behave. And that might lead to industrial efficiency, but it doesn't lead to human freedom or any kind of society that I think any of us would feel comfortable living in. I think that's a natural tendency, especially in certain countries where it's basically a way to enhance authority. That's one scenario, and that could happen here. It's a very portable model that doesn't necessarily apply only to China or Gulf states or other states that might be thinking of it. It could apply to Western Europe, it could apply to North America. The temptation is going to be high to assert authority through a system like this, I think.
On the other hand, it can be incredibly liberating for people, first from a health care standpoint. It basically puts you in your doctor's hands all the time. You're constantly being watched, assuming that it does this in a secure fashion that people are comfortable with. Then recreation, entertainment, being exposed to different locations in a physically utterly believable way: travel, education, just one field after another. There's hardly a field that isn't revolutionized by this kind of thing. And very positively: it really takes the resources and expands them very openly to people. Everybody becomes empowered in a certain way, but I think that takes guidance in the development of these systems. And those are the kinds of questions I'm trying to raise with software developers, for example, with people working on these technologies. Think of how you can push towards the second scenario instead of the first scenario, and it's a difficult thing, and it might actually go contrary to some of the, you know, the commercial needs of developing AI and mixed reality and so forth. So it's not easy and there's no obvious answer. It could go in a lot of different directions.
It's interesting because as I sit here I think about it: there's a whole different mindset that says, "The great thing about these technologies is they let you find your tribe. You are not alone. There are people like you, and these technologies will let you find and have community with those like you, whether they're spread all over the world. They may be older and they may be this and they may be that, and you will find your place." But you are describing tribalism in a really kind of dystopian sense, like where would you…?
That's a really good point; it's one of the paradoxes of these technologies, that they're very liberatory but they're potentially restrictive. And the tribal mentality, I mean, that's a fantastic thing about… well, the Internet itself is the ability basically to form communities without respect to geography, as you say, age, any demographic considerations, and that's fantastic, that's unprecedented. It's a matter of degree, I think. You know, I'm heavily involved with people who are interested in the various things I'm interested in and so forth. But just as you try to read books that you disagree with, I try to find people that I disagree with. I try to emphasize that these tribes are not the world for me, even if I want to make them that. There is an incredibly diverse population out there, and once you wrap your head around that, I think you end up actually dealing with your tribe in a more intelligent way. You know what I mean there? The more you see of human diversity the better it is, even when you're in a group that's heavily circumscribed by interest or one factor or another. So there are tribal utopias and tribal dystopias. I think it's almost a sliding scale. But I think what keeps the utopia from becoming a dystopia is that you realize that this isn't the sum total. You don't become satisfied by living in a world that's just like you, as tempting as that is.
I wonder though if there is such a world. If I'm really into banks shaped like pigs, and I find the Banks Shaped Like Pigs society and I connect with 19 other people, they're not going to agree with me about anything else. And so aren't all bubbles just one- or two-dimensional, and people so rich and multi-dimensional that there's really no way to completely… I mean, you can isolate yourself from people who have vastly different economic situations than you, who live in abject poverty in another part of the world, but that already happens. So how is this any different from: I live in a neighborhood and everybody in my neighborhood is, to your point, in some way very similar to me. They've all chosen to live there and can afford a house of that kind and so forth. But on the other hand, not at all like me. And so how are you saying technology says, "Oh no, you're finding your own clones. And when you find your own clones you'll completely cut off the rest of the world"?
That's a good point. I think in the physical world we actually do that; you know, the people in your neighborhood, for example. You have a lot in common, as you say, you also have a lot that you disagree with, but if you're digitally creating communities it might be one of those things where you're focusing on the similarities to the point where you really want a homogeneous community. It gives you more tools to eliminate the pieces you don't want. I'm not saying that that's necessarily going to happen, but you look at Facebook, which is very primitive compared to what we're talking about. It's still on a screen. It's still basically text-based. You know, we think of it as current, but when you're talking about this stuff, it's not really. It's an old-fashioned system in a way, and even that, even with text, which is very abstract, it still manages to convince people to focus strictly on the things they have in common. It pulls you away, I mean you know the effects that it has on public discussion of politics, for example; people are looking for it. Again, what you said about reading books that you don't agree with: you're looking to confirm, and when you confirm, suddenly you're right. It isn't just my opinion, and it becomes more difficult to compromise with people. So you know we see it happening in that world, and yes, within groups on Facebook or in digital groups, you'll find differences. But they tend to get very narrowcast. You know, this is a worldview, kind of, that is shared by the group. So it makes it easier to craft a group, but that same impulse is going to be there, and maybe one of the solutions is to belong to a lot of different groups so that they overlap and don't narrowcast your identity in a sense. Don't think that, "well I'm this and this and therefore these [are] the only people I deal with." Because believe me, with this and this, you're going to find a lot of people that disagree with you; it's just that people are complicated. So anything that we can do to encourage that, what would be the word, "heterogenization" I guess. That sort of throwing surprises in there. Surprises I think are good for people, especially intellectual surprises.
One in every 10 of your friends on Facebook should be randomly assigned to you.
You know I’ve never heard that but some, something like that or something. I mean often we get that with the relatives.
Yeah, that crazy Uncle Eddie who comes to the cook-out… So let's talk about your second concern, the commercial factors. You've alluded to your concern that the incentives, with Facebook, are to make the technology sticky. But I think you probably mean something much more philosophical or broader, or maybe not. Tell me the dystopian narrative of how the forces of free enterprise make a dystopia using these technologies?
Well, it's another one of those paradoxes: the marketplace that exists is, to a large extent, responsible for these technologies that are being developed. At the same time, the motivation of the individual companies, I mean take Google and Facebook for example, their motivation is to gather data about us and sell it to advertisers. There are other models that would be possible, but that's the one that the marketplace naturally leads to. I mean, if I were running Google I'd be doing the same thing; it's almost unavoidable. So it's useful to know what information is being gathered for what purposes, how it's integrated with other information for what purposes and so on. I think that the commercial motivation is to give people what they want, and it's very hard to sell castor oil to people. You know, if you're going to say, "well, you should buy this product, this app, you're not going to like it but it's good for you," nobody's going to buy that. So there has to be, you know, some built-in incentive to. I think really what we have to do is replicate the real world more fully. So thirty years from now, when a virtual environment becomes indistinguishable from a physical world, a lot of these problems might disappear, because you kind of embrace the values of a diverse civilization and you imprint that. I don't think that's what the companies are doing right now. I think they're saying, "well, we need to gather data because the accumulation of data is really our business model." So that's a fundamental conflict, I think, in a utopian vision of these technologies. I would argue to the corporations that are doing this that ultimately there's greater profitability and greater adoption and less pushback if you do the right thing, leaving that undefined for the moment. If you don't necessarily… without exploitation, you get a lot more buying into this system. You get people who really throw themselves into it, with more security for example, less hackability. So yeah, there is that, and I'm not picking on a market system, because any governmental system, any economic system is going to bring its own slant to how they do things.
Do you think that life can, because you just said something, I'm still back at "when these systems become indistinguishable from reality." And it seems implicit in that, that machine learning does a very simple thing: it studies the past, assumes a static world where the future is going to be like the past, and it looks for patterns in the past and projects those into the future. Do you think everything about our existence came [from] human creativity? You know, I look at a Banksy piece of graffiti and I think, "Could a machine learning system have studied anything in the past and produced that?" So if not everything can be learned that way, can a world be built that is therefore indistinguishable from this world?
I think large parts of it can be made indistinguishable. I mean, certainly this environment, this conference downstairs, you know, South by Southwest, could be made virtual and it could be just as immersive as it is now. The problem comes with invention, with people, with artisans, inventors, creators, people who don't do what was done yesterday, people who break the pattern. And I'm wondering about a future form of AI that is able to do that. I don't know how it would. I think a lot of that is biologically rooted. I think there is an urge in a person to create that's very hard [to replicate], and creation involves doing something that hasn't been done before, not completely divorced from reality. It has to be familiar, but it has to break certain rules of the past. Major changes, all of these inventions, really involve a deviation from what happened last week. So that's a piece, the creative piece, that is still I think in the realm of humans.
Let me pose another question to you. This is something I'm mulling about as we speak, and I would love to get your thoughts on it. So I often have a narrative that goes like this: If you want to teach a computer to tell the difference between a dog and a cat, you need X-million images labeled dog and X-million labeled cat and it does it all. And then I say, you know the interesting thing is, people can be trained on a sample size of one. So if I take that stuffed animal, which you've never seen before, and I say, okay, find it in these twenty photos. And sometimes it's upside-down. Sometimes it's covered in peanut butter. Sometimes it's underwater or sometimes it's frozen in a block of ice. You're like, "it's there, it's there," and we call that transfer learning. We don't know how we do it. We don't know how to teach computers to do it. But then people say "aha," and here's the part I want your thoughts on: "You have a lifetime of experience of seeing things that are smeared with substances and perhaps frozen in glass and all of that." And that seems to be the answer, and then I say, "Ha-ha, you don't have to show a five-year-old a million cats. You can show a five-year-old three cats and they can pick cats out, and they don't have a lifetime of experiencing things like cats." But then they see the Manx, it doesn't have a tail, and they say "oh, it's a cat without a tail," like they know that. And that's a little kid who hasn't lived a life of absorbing all of this. So, two-part question, as they say: part one is, how do you think that child gets trained on such a small amount of data, and second, could the answer be it's the same way that birds in isolation know how to build a nest? Somehow that is encoded in us in a way that we don't even understand how that would happen?
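As a rough illustration of the contrast being drawn here, the sketch below (not from the interview) shows the two regimes side by side: a network pretrained on millions of labeled images supplies general visual features, and a single reference photo is then enough to rank candidate scenes, which is the flavor of transfer learning Reese describes. It assumes PyTorch and torchvision are installed; the file names and model choice are purely hypothetical.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# The "millions of labeled images" path: a backbone pretrained on a large
# labeled dataset, used here only as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier head, keep embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    # Turn one image file into a unit-length feature vector.
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(img), dim=1)

# The "sample size of one" path: a single reference photo of the stuffed
# animal, then rank twenty candidate photos by similarity to that one example.
reference = embed("stuffed_animal.jpg")                 # hypothetical files
candidates = [f"scene_{i:02d}.jpg" for i in range(20)]
scores = {c: float(reference @ embed(c).T) for c in candidates}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}  similarity={score:.3f}")            # top scores ~ "it's there"
```

In this framing the pretrained backbone plays the role of the "lifetime of experience"; it does not answer Reese's harder question of how a child gets there from three cats.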
My answer to both of them is, I don’t know. And that’s a real interesting speculation on that, the birds. The part B, why do children, why can children do that? I don’t know. There are certain things that the human brain, the human mind does that I don’t know how you would code.
Are you saying I don’t know how you would code it or I don’t know if that can be coded?
I don’t know if it can be coded.
Interesting, so you might be one of those people who says general intelligence may not be possible.
I go both ways on that one. I think there are certain things that we do, metaphor, analogy, seeing relationships, intuition, certain very human ways of thinking, I don’t know how much of that can be systematized.
So the counter-argument, the one I hear all the time is you are a machine, your brain is a machine, your brain is subject to the laws of physics that can therefore be modeled in a machine and therefore it can do everything human can do. I mean that’s the logic is that…
Yeah, I have trouble with that. I understand the point of it, but I think it's reductive. I think that a machine is something that humans create, and we didn't create this; a machine we understand, this we didn't. This grew. This evolved. This is full of mysteries and un-examinable pieces. We don't know why we come up with what we come up with. What motivates an inventor to come up with something? Well, okay, he has an idea, but there's more than that. Is he proving that high school teacher wrong? Is he showing his dad, "yes, I can do this"? There are all kinds of personal things that they might not even know they're motivated by, that require being alive. If there's no sexuality, if there's no desire, if there's no irrationality, how can you be fully human? And if you want general intelligence on that level, do you have to program a simulation of that in there? Does it have to believe that it's alive? Does it have to believe that it's mortal? Does human life have the same… if we live to 200, how valuable would human life be? Isn't the preciousness of it that it's finite? That it is all too short, that it follows an arc? Does a machine have to have that same physiological basis? How much of this is rooted in our existence as creatures? Does it have to think it is? Does it have to be really human and alive in order to do the kinds of things that we think of as quintessentially human, like write great music or invent smartphones or build cities?
It isn't just that you know you can do it and you know how to do it; you have to want to do it, and it has to consume your life. Are you willing to do that? Well, why would a machine do that? Where's this motivation coming from? I only have five years to live… you know what I mean, how can a machine know that? I want to attract a certain person to me. Does a machine want to do that? It has no need for that, no understanding…? So a lot of this stuff is very squishy human stuff that has evolved. And I think that if you're going to get general intelligence you might have to grow it. Because if you have something that's alive, it has a sense of self in a way. It has a sense of survival. It knows it's going to die in a certain way.
Well, interestingly, life is an incredibly low bar, and I think the only reason you can say computer viruses aren't alive is because… and it's interesting because life doesn't have a consensus definition. Death doesn't have one. Intelligence doesn't have one. Creativity doesn't have one, which either means to me that we don't know what they are, or the term itself is meaningless. I don't know which of those. But life is a really low bar because… the reason we don't say computer viruses are alive is simply because they're non-biological, and right now most definitions require biology. But a virus we generally regard to be alive, a bacterium we do, and yet those don't have any of those. You're talking about something more than being alive, right? You're talking about consciousness?
Consciousness, although well, consciousness let’s say in silicon as opposed to consciousness in some wet petri dish that’s actually grown tissue, for example. Let’s say you have the same kind of general intelligence imbued in both of those. I think the one that’s alive is going to get you closer to a replication of the physical world that we know.
Do you think humans are unique in our level of consciousness?
On this planet? I think that's impossible to know. I can't put myself in the head of a macaque. You know, I don't know. I suspect that every living creature has a sense of itself, in the sense that…
A tree?
Yeah, a tree can’t move but it will turn to face the Sun, it will respond to the environment. An animal definitely will avoid threat, fire.
We derive the notion of human rights and enact laws against animal abuse because we feel that they are entities, that they can feel, that they have a self. If you say a tree has that, have you not undermined the basis by which you say humans have human rights?
No, I would say that a plant, and I know this is going to sound arbitrary, a plant is probably in a different category. In fact I would say a lizard is probably in a different category, you know. I hate to be species-ist, but you know, I think that we're talking about higher mammals pretty much. And as inferred from their behavior: complex social structures and so forth. Trees don't do that.
Isn’t it fascinating that up until the ‘90s the conventional wisdom among veterinarians was that animals don’t feel pain?
Really?
Sure, and they performed open-heart surgery on babies in the '90s without anesthesia, because they said they can't feel pain either. And the theory goes that if you take a paramecium and you poke it with something, it moves away, and you don't infer it has a nervous system and it felt that. And so they say that's all the dog that gets cut has, and up until the 1990s, that was a standard belief, that animals didn't feel pain.
You could, I mean, if you were willing to accept that logic, you could also accept human surgery without [anesthesia], you know, I mean there's no clear line there.
No, I’m not advocating that position…
I know you’re not.
It’s interesting to think that the problem… I think it was a position argued in part from convenience by people who use animals or raise animals and so forth. Because if they can’t feel pain then they don’t, you know…
Yeah, then who cares. We know dogs feel pain. Can they create sophisticated societies? No.
I use this very example in a book I have coming out shortly, about the time my dog was running and jumped over this water faucet and tore her leg open. And she yelped and yelped, and I wrote that nobody could convince me my dog did not feel pain. But notice the way I described it: that she seemed to feel pain, because I have no way of knowing. That's the oldest philosophical question on the books: you don't know what anybody else feels, or whether they exist, or anything. It's intractable, and the reason it interests me is because I'm deeply interested in whether computers can become conscious, and more interested in how we would know if they were. So I would like that to be my last question for you. How would you know if a computer was conscious?
If I had to pin it down to one thing?
Well no. The computer says, “I am the world’s first conscious computer.” What do you say to it?
I would say “make me laugh.” You know let’s say do something that’s human and irrational.
Yeah, then it plays a recording of flatulence…
Okay, but that’s…
But you did it, you did it. It made you laugh.
Well, the description of the machine doing that made me laugh. But if the machine actually did that I'd say, "that's not funny." It has to do something. Write a song. Do something that hasn't been done before. If you're just basing it on what happened last week, then I can be tricked into believing that.
So you know they had these programs that write beatnik poetry. You know the dog sat on the step, bark, bark, eleven is an odd number indeed. You know write stuff like that, and they would say “well nobody’s ever written that poem before” and you’re like well there is a reason for that. They feed Bach into it and use machine learning to make Bach-ish. I think you can’t trick a musician, but the musicians are like “that’s kind of like Bach.” And so neither of those come anywhere near close to passing your bar I assume, and yet…
They didn’t invent it. That would be if the robot came up and played like Jimi Hendrix. I’d say that’s pretty good, but if he came up with that in 1967, that’s a whole different thing.
You know it's interesting, because we are recording this on the anniversary of the match between AlphaGo and Lee Sedol. And there was a move, move 37 in game two, that people say was a creative move. It was a move that no human would have seen to make. Even AlphaGo said… Lee described it as… people started talking about AlphaGo's creativity on that day, and subsequent to that they have systems that train themselves. So there's no training on human games, and there was one that trained itself to play chess, and what it's doing are things no chess player would do. In one game it won, it sacrificed a queen and then a bishop in two consecutive moves to secure a position. It hid a queen way back in one corner, and people describe it as alien chess, because it's the first thing that wasn't trained on this huge corpus of chess games we have. So is that getting near it?
It’s getting near it, that’s doing what a really creative person does which is to take the basic elements and not impose any of the preconceptions on top of it, sort of look at it fresh.
The question I ask, is that creativity? Or is that something that looks like creativity, or is there a difference between those two statements? That would be my last question for you.
That’s a hard one to say. You can imitate creativity by creating Bach-like music. Chess I’m not sure falls into the same category or a sophisticated game, Go or something. Because there is a certain set of possibilities, whereas in the arts, for example, or in invention there really isn’t. I mean, there are physical restrictions, but aside from that, it can go anywhere and although it seems like I’m splitting hairs basically…
These are hard. The challenge with language is that we've never had to… we've always been able to have a kind of colloquial understanding of all these concepts, because we never had to say, "well, how would you know if a computer could think?" How would you know that? The words just aren't equipped for it, and the language therefore, I think, limits our ability to imagine it. But what a fascinating hour it has been. I could go on for another hour, but I won't subject you to that. Thank you so much for this.
It’s my pleasure.

Interview with Christof Koch

Christof Koch is an American neuroscientist, best known for his work on the neural basis of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, and from 1986 to 2013 he was a professor at California Institute of Technology (Caltech). Koch has published extensively, and his most recent book is Consciousness: Confessions of a Romantic Reductionist.
What follows is an interview between Christof Koch and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence, consciousness and the brain.


Byron Reese: So people often say, “We don’t know what consciousness is,” but that’s not really true. We know exactly what it is. The debate is around how it comes about, correct?
Christof Koch: Correct.
So, what is it?
It's my experience, it's the feeling of life itself, it's my pain, my pleasure, my hopes, my aspirations, my fears; all of that is consciousness.
And it’s described as the last major scientific question that we know neither how to ask nor what the answer would look like, but I assume you disagree with that?
I disagree with some of that. It's one of the two or three big questions, being: Why is there anything at all? What's the origin of life? And yes, how does consciousness arise out of matter?
And what would that answer look like, because people often point to some part of the brain or some aspect of it and say, “that’s where it comes from,” but how would you put into words why it comes about?
It's a very good question. So, knowing which bits and pieces of the brain are important for consciousness is critical to understanding what happens in the emergency room when you have a patient who is heavily brain damaged but you have no idea whether she is actually there, whether anybody is home. It's going to be of immense practical importance and clinical importance for babies or for anencephalic babies, or at the end of life, but of course that doesn't answer the question: what is it about this particular bit, this piece of the brain, that gives rise to consciousness? And so what we need is a fundamental theory of consciousness that tells us what type of physical system, whether evolved or artificial, under what conditions, can give rise to feelings, because those feelings aren't there. If you look at the fundamental theories of physics, quantum mechanics and general relativity, there's no consciousness there. If you look at the periodic table of chemistry, there's no consciousness there. If you look at the endless ATGC chart of our genes, there's no consciousness there. Yet every morning we wake up to a world full of sounds and sights and smells and pains and pleasures. So that's the challenge: how physics ultimately gives rise to conscious sensations.
Or, some might say, "whether physics gives rise to it?"
Well, physics does give rise to it in the sense that my brain is a piece of furniture of the universe. It's subject to the same physical laws as everything else. There isn't a magical type of law that only applies to brains but doesn't apply to anything else, so somehow physical systems, or at least a subset of physical systems, give rise to consciousness. The classical answer, at least in the West, was forever, for a very very long time, that there's a special substance, the thinking substance, res cogitans, or what people today call the soul, and only certain types of systems have it, and only humans have the soul, and the soul somehow mediates the mind. But of course we [say], sort of logically, that's not very coherent; there's no empirical evidence for it… how would this soul interact with the brain, where's the soul supposed to be, where does it come from, where is it going to? It's all incoherent, although of course the majority of people still believe in some version of this. But as scientists, as philosophers, we know better. There isn't any such soul, so it comes back to the question, "What is it about the physics of the world that gives rise to feelings, to sensations, to experience?"
Well, I want to tackle that head on in just a moment, but let's start with you, because you've been dealing with this question for a long time, and it's fair to say your understanding of it has evolved over time. Can you walk through, like, the very first time you thought about this, as far back as you remember, and then what you thought, what early theories you offered up, and how you have evolved those over time?
Sure, so I grew up in a devout Roman Catholic family, and I was devout, and of course you grow up to believe there's this soul; the real Christof is sort of this spirit that's hovering over the waters of my brain, and every now and then that soul touches the waters of my brain and makes me do things, and when I'm thinking about, for instance, whether I should sin or not, there's this absolute freedom to choose one or the other, and then my soul does one thing or the other. But then, this was on Sundays. During the day and the rest of the week, I taught science, I thought about the world in scientific terms, and then you're left… well, wait a minute, you begin to think about it in more detail and that just can't work. Because, most importantly, where is the soul? How does it interact with a brain? And so then you begin to think about scientific solutions. And then I encountered, years later, Francis Crick, the co-discoverer of DNA, and he and I started up this very fruitful collaboration, where we wanted to take the problem of consciousness away from pure philosophy, where it has been vested over the past 2,000 years, which is great, you've had some of the smartest people of humanity, but they haven't really advanced the field that much, and take it into an empirical operation that we scientists can work on. And so we came up with this idea of the neural correlates of consciousness. It's a fairly obvious thing; the idea is that whenever I'm conscious of something, whether it's your face for instance, I see your face or hear your voice, or I have a pain or I have a memory, there must be some mechanism in my brain, we know it's not the heart, we know it's in the brain, there must be some mechanism in the brain that's responsible for that. And it's a two-way communication between this mechanism and my feelings, in the sense that if I artificially activate this neural correlate of consciousness, abbreviated as NCC, if I trigger it, for example, by an electrode that I put into the brain, say during brain surgery, I should get that percept; even though there isn't anybody out there, I still see a face. Or conversely, if this part of the brain gets removed by a stroke or a virus or a bullet or something, I shouldn't be able to have this percept anymore.
Now this is also a big scientific empirical program that's going on in many places throughout the world, where people are trying to look for these neural correlates of consciousness in the brain. But then of course somebody pointed out to me, he asked me a very simple question, he said, "Well, in principle, if your program has run its course, and then 50 years later we know exactly that every time you activate these neurons in this particular mode, projecting to this other part of the brain, you become conscious. How's that different from Descartes' pineal gland?" Because famously, Descartes said the place where the brain needs to have this spooky stuff, this cognitive stuff, is the pineal gland, and today we all laugh at it, right? Well, how is that different from saying, "well, it's made up of five neurons that oscillate at 40 hertz"? It's just much more detailed, and ultimately it still seems like magic. Why should activity in these neurons give rise to conscious sensation? And at that point, I really thought, well, what we need is a fundamental theory that tells us, independent of which mechanism, what it is about any one mechanism that can give rise to consciousness. And so here we are, 20 years later.
And so talk about IIT?
So the most promising theory of consciousness, in my personal opinion and in the opinion of many observers of the field, is integrated information theory, due to the Italian-American psychiatrist and neuroscientist Giulio Tononi. And it starts by asking, "Well, what is a conscious experience?" A conscious experience exists for itself; in other words, it doesn't depend on anybody else, doesn't depend on my parents or you or any observer, it just exists for itself. It has particular properties. It's very definite: either I have a conscious experience or I don't have a conscious experience. It's one, it's only one at any given point in time, and it has parts; like, if I look out at the world, I can see you over here, over there something else, and there's an above and below and a close by and a far away, and all those notions of space and other sensory qualities. And so then let's look for a physical mechanism, or first an abstract mathematical formulation of such a mechanism, that instantiates these key properties of consciousness. And so the theory says that ultimately, consciousness is the causal power of the system upon itself.
So let me unpack that a little bit. Well, firstly, let me repeat it. The idea is that consciousness ultimately is the ability of any system, like my brain, to influence its immediate future and to be influenced by its immediate past; it has causal power. Not upon others, that’s what physics describes: if I have an electric charge, I have attraction or repulsion of other things. It’s power upon itself. The brain is a very complex system, and its current state influences its next state, and its past state influences its current state. And the claim is that any system that has internal causal power feels like something from the inside. Physics tells us how objects appear from the outside, and this thing, intrinsic cause-effect power, tells us what it feels like to be that system from the inside. So physics describes the world from the outside perspective, from the third-person perspective of an observer. Integrated information, cause-effect power, tells me what it is to be a system from the inside. And the theory has this number called phi, that tells you how conscious the system is, how much intrinsic cause-effect power it has, how irreducible it is, that’s another way of looking at it. Consciousness is a property of a whole, and how much that whole is a whole, how irreducible it is, that’s quantified by this number phi. If phi is zero, you don’t exist, there’s no consciousness, the system doesn’t exist as a whole. The bigger the phi, the more conscious the system is. And the theory delivers, at least in principle, for any system, whether it’s a brain or a computer chip or a molehill, an ant, anything else; it gives a recipe, an algorithm, for how you can determine, for a particular system in a particular state, whether it’s conscious and how conscious it is, by computing its phi. So that’s where we are today.
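[Editor’s note: to make the “whole versus parts” idea concrete, here is a minimal, self-contained sketch in Python. It is not the phi of the full theory, which works with cause-effect repertoires and searches over all mechanisms and partitions; it is only a toy measure of how much a tiny invented network predicts its own next state beyond what its parts predict on their own. The three-node update rules below are made up purely for illustration.]

from itertools import product
from math import log2
from collections import Counter

# Hypothetical 3-node Boolean network, invented for illustration:
# node 0 <- OR(node 1, node 2); node 1 <- AND(node 0, node 2); node 2 <- XOR(node 0, node 1)
def step(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

STATES = list(product((0, 1), repeat=3))   # uniform perturbation of the past state

def mutual_information(past_nodes, future_nodes):
    """I(past state of past_nodes ; next state of future_nodes), with the whole
    system's past state distributed uniformly."""
    joint, n = Counter(), len(STATES)
    for s in STATES:
        nxt = step(s)
        joint[(tuple(s[i] for i in past_nodes),
               tuple(nxt[j] for j in future_nodes))] += 1
    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# How much does the whole predict its own next state beyond what the two parts
# of each bipartition predict about themselves? Report the weakest link.
whole = mutual_information((0, 1, 2), (0, 1, 2))
bipartitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
phi_toy = min(whole - (mutual_information(a, a) + mutual_information(b, b))
              for a, b in bipartitions)
print(f"whole: {whole:.3f} bits, toy integration: {phi_toy:.3f} bits")

[For this made-up network the toy integration comes out positive, around 1.7 bits: the whole constrains its own future in a way its parts, taken separately, cannot. For highly redundant systems this simple difference can even go negative, which is one reason the real theory uses a far more careful formalism.]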
So it’s a form of panpsychism?
One of the consequences of integrated information theory is that it says consciousness is much more widespread than we’d like to believe. It is probably present in most of the metazoa, most animals; it may be present even in very simple systems. A bacterium may feel like something, that’s what it says. A single paramecium, for instance, a single protozoan, a single bacterium is already a very complicated system, vastly more complicated than anybody’s ever simulated, right? We don’t have a single simulation today, in the world, of a single cell at the molecular level; it’s way too complex for us to do right now. But the theory says yes, even this simple system feels like a tiny bit…
What about non-biological systems though?
In principle the theory is agnostic. It just talks about causal power, so any system that has causal power upon itself, is in principle, conscious.
So is the sun conscious?
Well, okay so that’s a very good question. The sun is not conscious I believe, at the level of the sun, because, so consciousness really requires… It says that the system has to be integrated and highly differentiated as a whole; so the system has to be able to influence its whole. The sun is so big that it’s very difficult to understand how propagations within the sun would exceed any time more than a few millimeters, given the magnetic hypo dynamics of the corona atmosphere of the sun. So any system, you can always ask the question, is that as a whole conscious, as many people have asked in the West and in Eastern tradition. The sun is unlikely to be conscious, just like, for example a sand hill, is very unlikely to be conscious, because if you look at the individual sand particles, they only interact with each other over very, very short distances. You don’t have two sand particles that ever, let’s say, an inch apart, they don’t interact anymore, only very, very weakly. Just like, for instance, you and me, you’re conscious, I’m conscious, there isn’t something that’s right now, that feels to be a Byron-Christof, although we do interact, right, we clearly talk to each other, but your brain has a particular amount of integrated information. My brain has a particular amount of integrated information. There is a tiny bit of integrated information among us, but the theory says, the only systems that are conscious, are local maximum. It’s like many physical systems, it has this extreme on principle, it said, “only a system that has maximum cause-effect power is conscious.” Therefore, the integrated information within my brain is much more tightly integrated given the massive interconnection within my brain, and the very few bits that we exchange sort of every second, given the speed of verbal communication. So that’s why you’re conscious, I’m conscious, but there isn’t an uber consciousness, there isn’t a gestalt that sort of consists of you and me.
But do you have a sense, if you were a betting man, that while you extend this sort of consciousness to all of these systems, humans are somehow more conscious than an ant?
Yes, there’s no question…
So what is it about humans, in fact, could you name something that hypothetically could be more conscious than a human?
Yes, in principle you can imagine other physical systems…
No, I mean something in the real world. And what is it about us, back to this: what’s special about us that gives us supercharged consciousness? Because our brain isn’t that much different from an ape brain…
But it’s bigger.
Right, but only by percent.
Well, by a factor of three. But just size… in terms of local interactions, we haven’t done enough microanatomy to be able to see: is a little grain of ape brain really fundamentally different from a little piece of human brain? Certainly by size…
But then the Beluga whale would be more conscious than us?
Well, so that is one of the challenges. We look at the brains of some mammals that made it back to the sea; their brains are indeed bigger than ours, and it may be, it’s very difficult to know right now, but it may be that in some sense they are more conscious of their environment than us. But they haven’t developed the ability to talk about it the way we have, so it’s very difficult for us to test that right now. But it’s not impossible. It’s an important question that ultimately you can test.
It just, it feels like you have a world full of all these objects…
These conscious entities, yes indeed. The universe is partly filled with conscious entities.
But somehow we appear, and I understand your caveat that that might not actually be the case, but we appear to be the most conscious thing.
Well because we are eloquent.
Right.
And other animals, by and large, are not nearly as eloquent. My dog, I can communicate with my dog, but only in a limited way; you know, I know the position of his back, how he wags his tail, his ears, etc., but it’s low grade… and also my dog doesn’t have an abstract representation of Charles Darwin, or of evolution or god or something like that. So yes, by and large it appears, at least on planet Earth, that it’s not unlikely that we, Homo sapiens, are the most conscious creature around. We live in a world with other conscious entities. Now, this is not an unusual belief. The majority of the planet’s population believes that there are lots of other conscious minds. It’s only really in the West that we have this belief in human exceptionalism, that somehow we are radically different from anything else in nature. It’s not a universal belief.
No, but I guess one would say, if you compare our DNA to an ape, as an example, the amount that’s different is very small.
Correct.
And of the stuff that’s different, a bunch of that may not manifest itself. It may not do anything, and that the amount of code different between us and an ape is trivially small, and yet, an ape isn’t 99% as conscious as I am, or at least it doesn’t feel that way to me.
Remember, the code that’s in our DNA is only about 30 MB if you compress it, not a lot, and as you pointed out, it’s more or less the same in an ape; in fact it’s more or less the same in some mammals. But let’s not confuse the amount of information in the blueprint with the actual information in the final organism as a whole.
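[Editor’s note: a quick back-of-the-envelope check of that order of magnitude, using rough assumed numbers: about 3 billion base pairs at two bits per base gives a raw blueprint of several hundred megabytes, and the tens-of-megabytes figure assumes aggressive compression of the genome’s many repeats.]

# Rough arithmetic only; both numbers are approximate assumptions.
base_pairs = 3.1e9            # approximate length of the human genome
bits_per_base = 2             # four possible bases = 2 bits each
raw_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"uncompressed blueprint: ~{raw_megabytes:.0f} MB")   # roughly 775 MB
# Compressing the genome's long repeats can shrink this by an order of
# magnitude or more, which is where figures in the tens of megabytes come from.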
I’ve heard an older interview of yours where you were asked if the internet was conscious. And you said, “it may have some amount of consciousness,” would you update that answer?
Well, in the meantime the internet has of course become a whole lot more complex. I don’t see any behavioral evidence of consciousness. It has a very different architecture; it’s not point to point, it has packet switching, so it’s quite different from the way our brain is, and it’s not easy to actually estimate how conscious it is. Right now I’d probably say it’s not very conscious, based on what I know about it today, but I may be wrong, and it certainly could change in the future. Because if you think about it, certainly in terms of its components, the internet has vastly more transistors: the internet taken as a whole has 10 billion nodes, and each of those nodes has 10 to the 11 transistors, so if you look at it as a whole, it’s bigger than a single human brain. But it’s wired up and interconnected in very different ways, and connectivity, this is what integrated information tells us, the way components are wired up, really makes all the difference. If you take the same components but wire them up randomly, or in the wrong way, you might get very little consciousness. It really matters.
What about the Gaia hypothesis, do you think that the Earth and all of its systems, if they function as a whole, if they are self-regulating to some degree, then it’s influencing itself and so could the Earth as a whole be conscious, and all of its living systems?
Unlikely, for the same reason: integrated information says consciousness is always a local maximum of intrinsic cause-effect power. In fact, this criticism has been made by the American philosopher John Searle. He said, “Well, IIT seems to predict that America is conscious as a whole. There are 310 million Americans, each one of them is conscious, at least when they’re not sleeping, etc. So how do you rule out that there isn’t an America that is a conscious entity?” Well, the theory has a very simple principle, the local cause-effect maximum: you’re conscious, I’m conscious, but unless I use some interesting technology, and we can return to that point in a little bit, there isn’t anything it is like to be the two of us together. Right now there isn’t… there are four of us in this room, but there isn’t a group consciousness, there isn’t anything it feels like to be the group of the four of us sitting around here, nor is there anything it is like to be America.
So, what would be your criticism of the old Chinese nation problem, which says, “you take a country like China, one billion plus people, and you give everybody a phone book, and they can call each other and relay messages to each other, and that eventually…”
Okay, let’s get to something much more concrete, I find more interesting… Let’s take a technology, let’s call it bridging, brain bridging, okay? Let’s say brain bridging allows me directly with some future technology to wire up some of my neurons to some of your neurons. Okay, so let’s do that in the visual thing. So now my visual brain has access to some of what you see, so for instance I now see a ghostly image of what I see across the usual world, and now I sort of ghostly super-impose, I see a little bit of what you see, right now you’re looking at me, so I see me ghostly reflected. However, the theory says, until the integrated information between the system or your brain/my brain, and the spring bridging, increases the above integrated information with my brain or within your brain. There’s still you, and there’s still me. You are still a conscious entity with your own memory and I’m still a conscious entity, Christof. Now, I keep on increasing the bandwidth of this brain bridge. At some point the theory makes a very clear prediction: when the integrated information in this new system, that has now 2 brains exceeds the integrated information in either your or my brain, at that point I will die, Christof will die, Byron will die and there will be a new entity, a new single entity that consists of you and me. It’ll be a single thing, it’ll be a single mind that has some of your memories and some of my memories, it’ll have 2 brains, 4 hemispheres, 4 eyes, 4 ears.
And you know what, the inverse has happened in surgery; it’s called split brain. In split brain, what I do, I take a normal brain, I mean they’re not normal, they’re not healthy, but for the sake of argument let’s assume it’s a normal brain, and I cut it in the midline, where there are 200 million fibers, the corpus callosum, that link the left brain with the right brain. I cut it, and what’s the empirical evidence? I have two minds inside one skull. So here I’m just saying, well, let’s do the opposite using technology: we build a sort of artificial corpus callosum between your brain and my brain. And so, in principle, there could be this technology that allows us, maybe even in large groups, to merge. We could take all four people here, interconnect us using this brain bridging, and then there would truly be a single mind. Now that’s a cool prediction. And you could probably start doing that in mice in the next 10 years or so. It’s a very specific prediction of the theory. That’s the advantage: once you go from philosophy to very concrete theories, you can test them, and then you can think about technology to implement and test them.
Think about two lovers, think about Tristan and Isolde, right? In the opera they sing that they don’t want to be Tristan and Isolde anymore, they want to be a single entity. In the act of love-making, you’re still, and that’s the tragedy of our lives, you’re still always you and she’s always she; no matter how close you are, even though your bodies interpenetrate, you’re still you and she’s still her. But with this technology you would overcome that: there would be only a single mind. Now, I don’t know how it would feel. You might also get all sorts of pathologies, because your brain has always been your brain and my brain has always been my brain, and suddenly there’s this new thing. You could probably get what you get in split brain, where one body does something different from the other body, these conflicts that you see after the operation, this so-called “alien hand syndrome”… But at least conceptually, this is what the theory predicts.
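[Editor’s note: the prediction in the preceding turns, that two systems remain two minds until a bridge makes the whole carry more integrated information than either part, can be illustrated with a deliberately tiny toy. The sketch below invents two two-node “brains” and compares a whole-versus-parts information measure with and without a cross-connection; it shows only the direction of the effect, not the theory’s actual phi calculation.]

from itertools import product
from math import log2
from collections import Counter

# Hypothetical four-node toy: nodes 0,1 are "brain A", nodes 2,3 are "brain B".
# Without a bridge each brain is a two-node copy loop; with a bridge, the first
# node of each brain also XORs in a signal from the other brain.
def step_unbridged(s):
    a0, a1, b0, b1 = s
    return (a1, a0, b1, b0)

def step_bridged(s):
    a0, a1, b0, b1 = s
    return (a1 ^ b0, a0, b1 ^ a0, b0)

STATES = list(product((0, 1), repeat=4))

def mi(step, past_nodes, future_nodes):
    """I(past state of past_nodes ; next state of future_nodes), with the whole
    four-node past state perturbed uniformly."""
    joint, n = Counter(), len(STATES)
    for s in STATES:
        nxt = step(s)
        joint[(tuple(s[i] for i in past_nodes),
               tuple(nxt[j] for j in future_nodes))] += 1
    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def integration_across_brains(step):
    whole = mi(step, (0, 1, 2, 3), (0, 1, 2, 3))
    parts = mi(step, (0, 1), (0, 1)) + mi(step, (2, 3), (2, 3))
    return whole - parts

print("no bridge:  ", integration_across_brains(step_unbridged), "bits")  # 0.0
print("with bridge:", integration_across_brains(step_bridged), "bits")    # 2.0

[With no bridge, the whole adds nothing beyond its two halves, so on this toy measure each “brain” remains its own local maximum; with the bridge, the combined system carries information its halves do not, which is the regime where, on the theory’s reading, a single merged mind would replace the two.]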
I’ll ask you one more hypothetical on things whether they’re conscious or not, what about plants, how would you apply IIT to a tree?
It’s a very good question. I don’t know the answer. I’ve thought a little bit about it, of course there are now people who claim that plants, flowers and trees have much more complex information processing going on, at a slower scale. They clearly didn’t evolve to move around, they clearly don’t act on the timescale of seconds. It may well be possible that at least some non-animal organisms like plants, also that it feels like something to be them, that’s what consciousness is, it feels like something to be you, we can’t rule it out. Now our intuition says, “Well that’s ridiculous,” but our intuition also says, “The planet can’t be round, because people obviously would fall off,” people have used this argument for hundreds of years, but the person on the antipode is going to fall off the planet. So we know planets can’t be round, “we know whales are fish, they smell like fish, they’re in the water, they’re not mammals.” So we’ve all sorts of intuition that then science tells us, well actually these intuitions are wrong.
So let’s think through the ethical implications of that, if people are conscious, and because people are conscious they can feel pain, and because they can feel pain, we deem that they have certain rights. You can’t abuse animals because, of course up until recently people didn’t believe animals necessarily could feel pain, up until the nineties. And so, we say “no, no,” you can’t abuse animals, because animals can feel pain. Well according to you, everything can… well not everything, but almost everything can feel pain. Does that (a) imply everything has some right not to be hurt, does a tree have some right not to be cut down; and part (b), does it not undermine the very notion of human rights, because if we’re just another conscious thing, and everything else, and whales may be more so and fish may be, and this may be and that may be, then there really isn’t anything wrong with torturing people or what have you, because everything’s conscious, of course everything.
Okay, the first point: I don’t know, having consciousness doesn’t automatically imply that you have the capability to feel pain, to experience pain. Maybe all a creature has are pleasure centers; for it, the entire life is just a ride of pleasures, one orgasm after the other. So having consciousness is not the same as having the conscious experience of pain. Pain is a subset of conscious experience. Second of all, even as humans we have rights, but then of course very often those rights clash. “Thou shalt not kill.” But there’s capital punishment, and there’s abortion, and then there is homicide, and then there is war, where I can legally kill other people, right? So these rights are always a tradeoff, as are other rights, and the same thing with consciousness, yes. There’s no question that certainly all mammals are conscious, right? Birds are conscious, most of the complex fish are conscious, and so one consequence is maybe we shouldn’t eat them. So ever since I had this realization, I don’t eat the flesh of creatures anymore, for that very reason. Now once again, it’s a tradeoff, okay: I’m not going to starve to death if the only thing there is to eat is a piece of dead flesh, a steak, that I could eat to survive. It is a trade off. But given that we have choices, I think we should act on those choices, and yes, if it’s true, the moral circle becomes larger. But this has happened over the last 2,000 years. The moral circle, the set of people accorded special privileges, at first only included Greek men, alright, and then we extended it to some other men around the periphery of the Mediterranean, and then we thought about women, and then we thought about African Americans, and Africans, and people who look, at least superficially, very different from us. Right now, as you may well know, there’s a movement to accord at least the great apes certain rights, because yes, they are our cousins, our distant cousins. And yes, we shouldn’t hunt them and eat them for bush meat.
That’s maybe addressing a slightly different question I’m asking. I’m saying, if the circle eventually becomes everything, then the circle becomes meaningless right? If it’s like, “no, no, you can’t eat plants either, and then you can’t cut a sheet of paper or…”
No, no, because the theory says, not every object is conscious, most certainly not. A sheet of paper for example, the interactions…
Not a sheet of paper, I shouldn’t have said that one, but you extended it to plants…
A big question is the difference between having one cell that’s highly complex and conscious, versus whether the plant is conscious as a whole. That’s a question you have to ask. Is the tree, the oak tree, conscious as a whole, or only bits and pieces of it? That makes a big difference. I assume we don’t know; I haven’t looked at the structure, I don’t know.
Fair enough, but the argument is, you speed up the plant growing and finding sunlight, and it sure looks like animal movement…
Yeah, but movement by itself… we know from patients, we know that when you’re sleepwalking you can carry out all sorts of complex behavior without necessarily being conscious, so it’s a complicated question.
You made a really sweeping statement just a second ago, you said, “all mammals are conscious, and birds and fish.” How do you know that, or how do you have a high degree of confidence in that?
Very good question. So, two things have happened historically over the last hundred years. (a) We’ve realized the continuity of all brain structures; we believe it’s the brain that gives rise to consciousness, not the heart. If you look at the brains of all mammals, I mean, I’ve done this at my institute; my institute has 330 people who are experts in the neuroanatomy of the mouse brain and the human brain. I’ve shown them, one after the other, cells, brain cells, that came from a human brain and a mouse brain, each one a slide on the screen. And I removed the scale bar, because the human cells are roughly three times bigger in width than the mouse cells, and for each one I asked, “Tell me, guess.” They had this app on their phone: “is it human or mouse?” People were at chance. Why? Because the individual components are so similar, whether it’s a mouse, a dog, a monkey or a human, it all looks the same; we just have more of it, and as you point out, a whale has even more of it. So the hardware is very similar. [Secondly,] behavior, with the exception of speech (though of course not all humans speak: there are people who are mute, there are babies and young children who don’t speak, there are people with aphasia who don’t speak; but speech, at least in normal human adults, is a difference from other creatures). There are all these other complex behaviors: empathy, lying, higher-order theories of mind. There are bees, for example, who have been shown to recognize individual beekeepers. Bees have this very complicated way of choosing their hive: think how long it takes you to choose a house, and then look at how a bee colony sends out these scouts and uses this very complicated dance to reach an agreement. So we realize there’s lots of complex behavior out there in the world. Thirdly, we’ve decided, at least scientists and philosophers have, that consciousness is probably not just the apex of information processing. So it’s not just what it used to be taken to be, the high-level awareness that I know I’m going to die and can talk about it; consciousness is also the low-level things like seeing, like feeling, like having pain. And those states, the associated behavior and the associated underlying neural hardware, we find in many, many other creatures. And therefore today, most people who think about questions of consciousness believe consciousness is much more widespread than we used to think.
Let’s talk a little bit about the brain and work that way. So let’s talk straight with the nematode worm… 302 neurons in its brain. We’ve spent 20 years trying to build a model of it, and even the people involved in it, say that that may not… they don’t know if they can do it. Do you think…
Embarrassing isn’t it?
Well, is it? Or is it not beautiful, that life… So my question to you is this: you just chose to say, “because our neuron looks like a mouse neuron, ergo, mice are conscious.”
No, no, no, it’s not quite that. Our brain is very similar to a mouse brain, our behavior is rather similar, and therefore it’s much more likely that they also have similar states, not identical, much less complex, but similar states of pain and pleasure and seeing and hearing that I have. I find no reason to… there’s no objective reason to think otherwise, because otherwise you have to say, “Well we have something special, but I don’t know what that special is. I don’t find it in the underlying hardware.”  So, and this of course what Rene Descartes did famously, he said, “When your carriage hits a dog and the dog yells, it’s just a machine acting out, there’s no conscious sensation.” Clearly he wasn’t a dog owner, right? We believe, I mean, I don’t know a single dog owner who doesn’t believe his dog can be happy or excited or sad or depressed or in pain. Well those are all conscious sensations. Why do we say that? Well, because we interact with them, we live with them, we realize they have very complex behavior that’s not so different from ours. They can be jealous, they can be happy, same thing that your kids are jealous of each other sometimes, or happy, so we see the great similarities of cause and divide across species. We’re all nature’s children.
So, back to the nematode worm, our understanding of how 300, and I think 2 of them float off on their own, so how 300 neurons come together, and form complex behavior, such as finding food, finding a mate. I mean they’re the most successful creatures on the planet. 70% of all animals are nematode worms.
They out survive us.
Yeah, so my question is to you, first of all, could a neuron actually be as complicated as a super computer?  Could it be operating on the Planck scale, with such incredible nuance to say… well I’ll leave the question there. Why is the nematode worm so intractable so far, and why do we not understand better how neurons operate, and could a neuron be as complicated as a super computer?
Right, okay, so three very different questions. Let’s start with neurons, with any cell. As I mentioned before, right now we do not have a molecular-level model of an entire cell. There’s not a single group that has such a model of even a single cell, no matter what cell it is, a nematode cell, a human cell. Some people are trying to do that; the Allen Institute for Cell Science is trying to do that, but we aren’t there yet, right? Why? Because we still don’t have the raw computational ability, and more importantly, the knowledge, to model all of that. That’s just a practical limitation. We’re making progress, but it’s slow. You’re right, it’s very embarrassing for my science, brain science. We do not have a general-purpose model of a creature that has only 1,000 cells, 302 of which are neurons. We’re getting there, I mean, we understand many, many things about the nematode, but we’re still not there yet. So my science still has a long way to go. It’s difficult; what else is new, research is difficult. Look, per unit, per gram or pound, the brain is the most complex organ in the known universe. It’s the most complex piece of highly organized matter in the universe, right? And I think that’s related to the fact that it’s also conscious: because it is so complex, it is also conscious. So yes, it is a challenge to our current methods. We’re making progress, but it is, and remains, the biggest challenge we have in science.
It’s interesting though, because the argument I heard earlier, you said, “People used to say there’s something special about humans.” We don’t know what that is, dualism breaks down because of this problem. Therefore, there isn’t anything. Let’s look for a purely scientific answer… you come to some theory, but, and I’m in with all of that, but then, you say, “We look at a cell, we don’t understand how the cell works…”
In detail…
Right, and we’re fine knowing there are just certain things we don’t know about it.
Right now.
But we didn’t take that about the specialness of humans. Look, there’s something special about us, everybody knows that, everybody knows that there’s a difference between a person and a paramecium, everybody knows. And we just don’t know what it is yet, and we’re fine with that for now, but you say, “No, no, we have now concluded there is nothing special about us, let’s go figure out an alternate explanation.”
Well, it depends what you mean by “special” about us. Clearly there are many things that are special about us. As I said, we’re the only ones who are eloquent. I’ve never had a conversation with my dog, nor with a worm. We have, for example, the capability of language, and that’s enabled us to build these cultures and to build everything around us. So there’s no question we’re special. What you’re saying, or what people want to hear, is that we are special in the sense that we somehow avoid the laws of science, that we have something going above and beyond them. Everything else in the universe has to follow the laws of physics, but somehow humans are exempt; they have this special deal called a soul. We don’t know what it is, we don’t know how it interacts with the rest of the world, but somehow that’s what makes us unique. Sure, I can believe that, it’s a great belief, it makes me special, but I don’t see any particular evidence for it. No, we are different in all sorts of ways, but we’re not different in that way: we are subject to the same laws of physics as anything else in the universe.
So you mentioned language. I’m just curious, this is a one-off question: do you think it’s interesting that, of all the animals that have learned to sign, none has ever asked a question? Does that have any meaning?
I don’t know.
Because that would imply perhaps, they’re not conscious, because they can’t conceive that there’s something that knows something that they don’t.
Well you say this as like a fact. So, you’re sure that no gorilla has ever asked a question to another gorilla?
Correct, the one potential exception is, Alex the grey parrot may have asked what color he was, maybe. Other than that, no gorilla has ever asked.
I’m not sure I would take that at face value, but even if it’s true, so let’s just say for the sake of argument, yes. We seem to have vastly more self-consciousness than other creatures. You know if the other creatures do have some simple level of self-consciousness, a dog has simple self-consciousness, my dog never smells his own poop, but he always spends a lot of time smelling other dog’s poop, so clearly he can make the difference, between self and somebody else. But yeah, my dog isn’t going to sit there and ask questions, because his brain just doesn’t have that sort of complexity.
Back to the notion that you and I don’t have anything between us that makes us one entity: do you think that a beehive, or an anthill, which exhibits complex behavior in excess of any of its individual members, has an emergent consciousness as a whole?
So that’s a very good question. I don’t know. Again you have to compare the complexity within a bee brain, so a bee is roughly one million neurons, their circuit density is 10 times higher than our circuit density because they evolved to fly, so they have to be on very tight weight mass constraints of the sorts that we aren’t as terrestrial animals, and nobody’s fully reconstructed a bee brain yet, although they’re doing it for flying. So question is, given the complexity of what’s in the bee strain and the communication, the wiggle dance they do to communicate, what’s the tradeoff there? I mean it’s a purely empirical question that can be asked. Right now my feeling is probably not, but I may well be wrong.
Do you know the wasps that do the shimmering thing? They make this big spinning pinwheel, and they spin so quickly that there’s no wasp who says, “oh, he just flared his wings, therefore it’s my turn, and then the next one,” that somehow…?
Look, you have these beautiful, what are they called, murmurations; you can see them on the web, these movies of flocks of birds that execute these incredible flight maneuvers, highly, highly synchronized. Are they one conscious entity? Again, you have to look at the brains, and you have to look at the amount of communication among the individual organisms. You can look at North Korean military parades, right? It’s amazing, the precision with which you get 100,000 Koreans to do these highly choreographed [maneuvers]. But they’re not conscious as a whole, because the information they exchange is much, much lower than the information integrated within each individual brain. Once again, you have 200 million fibers just between your left brain and your right brain. But those are all good questions that you can ask, and they have answers once you have a fundamental theory of consciousness.
So let’s go from the brain to the mind. So, I’ve looked hard to find the definition of the mind that everybody can kind of agree on. And my working definition will be: it’s the set of attributes that we have, some abilities that we have, that don’t seem, at first glance, to be something that mere matter could do. Like, I have a sense of humor, my liver may not have a sense of humor, my liver may not be conscious the way my brain is. So, where do you think the mind, under that definition, where do you think all these abilities come from? Do you think they’re inherently emergent properties? Or are they just things we haven’t kind of sorted through? Where does a sense of humor come from when no individual cell has a sense of humor?
It’s a property of the whole, it’s the property of your brain as a whole, it’s not a property of individual cells, we know this is true of many… I take a car, I look at the many individual components of a car, they don’t drive, they don’t do the same what a car does, but you put all these things together as a whole, and then the whole can do things that the individual parts can’t.
Emergence, so do you believe that strong emergence exists? Do you believe you can always derive the behavior from, like if you studied cells long enough, you would say “I understand where a sense of humor comes from now?”
No, for that you need a theory of consciousness, if you’re really referring to the conscious mind, because many aspects of the mind are unconscious. I think about the maiden name of my grandmother. I have no idea how my brain, how my mind, comes up with the name Shaw. I don’t know how it works; that’s all unconscious. For the conscious mind you need a theory of consciousness: not just a theory of cells, not just the physics of it, but also an explanation of how a conscious mind that has a sense of humor, because that’s a property of a conscious mind (or maybe doesn’t have a sense of humor, depending on who it is), emerges. Yeah, so it’s what you refer to as strong emergence.
And so strong emergence…
But it’s not magical you understand that?
Well that’s a word you’ve used a few times. And it’s because as you said at the very beginning, there’s nothing magic about us. But I think people who believe that strong emergence is possible believe it’s a scientific process. But, a lot of people say, “No, you can’t say that for something to take on properties that none of its components have, and you cannot derive those properties. Until eternity passes away, you can study those individual components and not figure out how that comes about.
Yes, you need to solve a problem that Aristotle was one of the first to write about: the parts, the relations among the parts, and the whole. Yes, you need a theory that describes what a whole is, the whole system. And integrated information theory is an example of such a theory that thinks about parts and how the parts come together to define a whole. Without such a theory, yes, you would be lost, I agree with you, but it’s not magical. What I meant was that once you have such a theory, then you can understand step by step. You can predict which systems are wholes and which systems are not wholes. You can predict which system properties are essential for the wholeness and which ones are not. So in that sense, it’s a physical theory. It’s a lawful set of rules.
Well, how can IIT be disproved?
It can be disproved in a number of ways. It says that the neural correlate of consciousness is the maximum of cause-effect power, and in principle it tells you exactly how to test that, how to measure it. In fact there was this recent series of articles in neurological journals where people tested one implication of integrated information theory and built a consciousness meter, a simple device where you probe the brain with magnetic pulses when you are asleep or anaesthetized, or when you go to an emergency room or a critical care facility where you have people who may be in a vegetative state, or maybe in a minimally conscious state, maybe there’s a little bit of consciousness there, or maybe they are conscious but they can’t tell you because they’re so grievously injured. So from integrated information they derived a simple measure called the perturbational complexity index, where you look at the EEG response to these magnetic pulses, and you can tell that this patient is probably unconscious, based on the response of his brain, and that this person is probably likely to be conscious. So that’s one of the consequences. There are ways you can test it. It is a scientific theory; it may be wrong, but it is a scientific theory.
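[Editor’s note: a minimal sketch of the idea behind such a measure, not the published clinical index. The actual perturbational complexity index is computed from TMS-evoked, source-reconstructed EEG with careful statistical binarization and normalization; the toy below simply takes a made-up binary response matrix (1 = a significant post-perturbation deviation on a channel at a time point), measures its compressibility with a Lempel-Ziv-style phrase count, and normalizes against a shuffled surrogate. All data here are hypothetical.]

import random

def lz78_phrase_count(bits: str) -> int:
    """Count the phrases in an LZ78-style incremental parse of a bit string;
    fewer phrases means the string is more compressible (less complex)."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def pci_like(response_matrix) -> float:
    """response_matrix: list of per-channel lists of 0/1 values
    (1 = significant post-stimulus deviation at that channel and time point)."""
    bits = "".join(str(b) for row in response_matrix for b in row)
    observed = lz78_phrase_count(bits)
    surrogate = list(bits)
    random.shuffle(surrogate)            # same number of 1s, temporal order destroyed
    baseline = lz78_phrase_count("".join(surrogate))
    return observed / baseline if baseline else 0.0

# Hypothetical toy data: 8 channels x 64 time points.
random.seed(0)
channels, timepoints = 8, 64
differentiated = [[random.randint(0, 1) for _ in range(timepoints)] for _ in range(channels)]
stereotyped = [([1] * (timepoints // 2) + [0] * (timepoints // 2)) for _ in range(channels)]
print("differentiated:", round(pci_like(differentiated), 2))
print("stereotyped:   ", round(pci_like(stereotyped), 2))

[The stereotyped, repetitive response should compress much better relative to its shuffled surrogate than the differentiated one, and that relative compressibility of the brain’s response to a perturbation is the intuition the published index builds on.]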
Did you read about the man in South Africa who was in a coma for some amount of time, then he woke up and he was still locked in, but he was completely awake? And the thing is that every day he was left at this facility, they assumed he wasn’t conscious. And so they played Barney all day long, and he came to abhor Barney, like so much he used all of his mental energy just to figure out what time it was every day, just so he would know when Barney was going to be over. And he said even to this day, he can look at a shadow on a wall and tell what time it is. So you believe that we’ll soon be able to put a device on somebody like that, saying “No, he’s fully awake, he’s fully abhorring Barney as we speak right now.”
I just came back from a meeting on emergency room medicine, coma and consciousness, which I attended for the last two days, and for two days we heard what the current criteria are and how we can judge these patients. They are very, very difficult patients to treat, because ultimately you’re never fully sure, given the state of technology today. But yes, in principle, and it looks like even in practice, at least according to these papers, the last test was 211 patients, we might soon have such a consciousness meter. There are several larger-scale clinical trials trying to test this across a large clinical population. There are thousands of these patients worldwide; Terri Schiavo was one of them, where it was controversial because there was this dispute between the parents and the then husband.
So, I’m curious about whether all these things are conscious, for two reasons. One we discussed, because it has, as you’ve said, implications for how you treat them. But the other one is, because if you don’t know if a tree’s conscious, you may not be able to know if a computer’s conscious, and so being able to figure out something as alien as the sun or Gaia or a tree or a porpoise is conscious, how would we know if a computer was? That’s the penultimate question I want to ask, how would you now if a computer was conscious?
Very good question. So first we need to make it perfectly clear, because people always get this wrong: there is artificial intelligence, narrow or broad, and we’re slowly getting there, and that is totally separate from the question of artificial consciousness. In other words, you can perfectly well imagine a super computer with superhuman intelligence that absolutely feels like nothing. Most of the computers today are of that ilk, and most people will agree with that statement. So we have to dissociate intelligence from consciousness. Historically, until this unique moment in time, we’ve always lived in a situation where if you wanted something done, a ditch dug, a war fought, your taxes done, you employed a person, and the person was conscious. But now we are living in a world where you might have things that dig ditches, fight wars and do taxes that are just algorithms. They’re not conscious. However, this does raise the question: under what conditions can you create artificial feelings? When is your iPhone actually going to feel like something? When is your iPhone actually going to see, as compared to taking a picture and putting a box around it and saying, “This is mum’s face,” which it can do today? So once again you need a theory for that. You can’t just go by the behavior, because there’s no question that, in the fullness of time, we will get all the movies and all the TV shows, Westworld, etc.
We’re going to live in a world where things behave like us. We will experience the world in 10 or 20 years where Siri talks to you in a voice that you cannot distinguish at all anymore from a human secretary. Instead he or she will have perfect poise, be perfectly calm, laugh at every one of your jokes. So how do we know she’s conscious? For that you need a fundamental theory, and this particular fundamental theory of integrated information says you cannot compute consciousness. Consciousness is not a special property of an algorithm, because your brain isn’t an algorithm. Your brain is a physical machine: it has exterior, it has cognitive powers, both on the outside, it can talk, it can move things about and it has intrinsic cause effect power, and that’s what consciousness is. So if you want human level consciousness, you have to build a machine in the likeness of man. You have to build what’s called a neuromorphic computer. You have to build a computer whose architecture at the level of the metal, at the level of the gate, mimics the architecture of the brain, and some people are trying to do that.
The Human Brain Project in Europe
For instance, let me give you an example that’s very easy for scientists. I have a friend who’s an astrophysicist. She writes down the Einstein equations of general relativity, and she can predict on her laptop that there’s a black hole at the center of our galaxy, a big black hole of millions of solar masses that bends gravity so much that not even light can escape. But funny enough, she doesn’t get sucked into the laptop that runs that simulation. Why not? It’s simulating all the effects of gravity correctly, yet it doesn’t have any effect on its environment. Isn’t that funny? Why not? Because it doesn’t have the causal power of gravity. It can simulate, it can compute the effects that gravity has, but it can’t emulate it, can’t physically instantiate the cause and effect of gravity, and with consciousness it’s the same thing. Consciousness ultimately is about causal power; it’s not about simulation, it’s not about computation. And so unless you build that in, you can only build a zombie; you will be able to build zombies that claim they’re conscious, but they won’t feel like anything.
Well that is a great place to leave it. What a fascinating discussion, and I want to thank you for sharing your time.
Thank you very much, Byron. That was most enjoyable, and this is part of the IEEE Tech for Humanity series at South by Southwest.

Interview with Dean Kamen

Dean Kamen is an engineer, inventor, and businessman. He holds hundreds of U.S. and foreign patents, many of them for medical devices, including the iBOT™ mobility device, the first wearable infusion pump, and the first wearable insulin pump for diabetics. He is perhaps best known for his invention of the Segway® Human Transporter.
He founded DEKA Research & Development Corporation as well as FIRST (For Inspiration and Recognition of Science and Technology), a global organization dedicated to helping young people understand and enjoy science and technology.
Kamen has received many awards including the National Medal of Technology in 2000, the Lemelson-MIT Prize in 2002, and he was inducted into the National Inventors Hall of Fame in May 2005.
What follows is an interview between Dean Kamen and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence and future technology, and their impact on jobs, education, and innovation.


Byron Reese: I want to start off by saying this show is about artificial intelligence. Let’s just start with that piece of technology. When people say, “Well what is it?” or “How should I feel about it?” What would you say?
Dean Kamen: I think the first thing people should do is not be afraid of putting those words together. The word ‘artificial’ to some people sounds bad. You don’t want to have artificial stuff in your food. You don’t want artificial stuff elsewhere. And the word ‘intelligence’ means very different things to very different people. So, putting those two words together makes a mess of language and thought.
But I would say, to maybe make it a more constructive conversation, that people, for the first few thousand years after we climbed out of the primordial ooze, would try to build things with their hands, which would get pretty tough, I guess. And then somebody invented the first tool: a hammer or a shovel. Okay, those were artificial muscles. Then, through the industrial revolution, we created things substantially more capable than the shovel, like a bulldozer. And it eliminated all those jobs of those ditch diggers, because one bulldozer could do the job of a thousand ditch diggers. I’d call a bulldozer artificial muscle. And it probably gave you a thousand-to-one, or more, leverage over what you could do with your back-breaking work. But it didn’t eliminate jobs, because when you didn’t have a bulldozer, you might spend a whole life digging a hole big enough to make a house. Once you had a bulldozer, you didn’t eliminate the careers of a thousand people that might dig a hole. You created the plausible possibility of paving, for instance, North America with a superhighway system.
So as we developed more artificial muscle, we built more and more things, because we could, from roads to skyscrapers. And for those people who somehow believe that, as we use computers or computing technologies to eliminate the work people used to do, the drudgery or the boring work, we’re going to eliminate jobs: that idea is as silly as saying the bulldozer eliminated careers. No, it eliminated the horrible work that nobody wanted to do and gave people time to do way more magnificent things.
So I would call out artificial intelligence as the ability of the engineering community to amplify not what your muscles used to do, which it did by building the industrial revolution, but what your thinking and problem-solving capabilities are, because you’ve now added the equivalent of an electronic bulldozer: the bulldozer did something to the shovel, and the computer did something to the adding machine. In each case it opens up new opportunities to do more and more great stuff, for more and more young people, especially if they develop the skill sets to use these tools. And I hope they do that through my FIRST Robotics Competition. I hope that more and more young people see ‘artificial intelligence’ as nothing more than saying: I don’t do back-breaking physical work thanks to the industrial revolution, and I won’t be doing boring mental work thanks to artificial intelligence. I’ll leverage these technologies to build a better future and a more capable career for myself. And that’s what it is.
Anybody who listens to the Voices in AI show knows I agree with every word you’ve just said. But there are arguments that are sometimes brought against that viewpoint, and believe me, I hear them a lot. The first one is that the change that is coming is going to come so fast that it’s going to be severely disruptive. Do you think that there’s any merit to that?
I think every major change that has been wrought by technology permeates two things through the culture that’s experiencing it. To the pessimist, it brings fear, because people don’t like change. To the optimists, it brings excitement.
I’m sure when those first machines were made that could knit so quickly that all the people that knew how to make cloth by hand were terrified by the textile industry. I am again sure that the industrial revolution and steam engines and locomotives terrified all the people that were doing things that existed without the leverage of those technologies. And I’m sure equally that other people saw those technologies as opportunities to do more and better things for themselves and their community in the world. So, I think you are right.
There are certainly lots of people that are appropriately concerned that the skill sets they now have are going to be displaced by more efficient, more cost effective, maybe more accurate, more reliable, more stable systems, all lumped into something we call artificial intelligence. But those very same people ought to say, if I learn about these systems, I learn how to use them, design them, develop them, deploy them, it will give me opportunities to advance my career and my sights about the future, and be part of something that’s bigger and better than the past. That’s called innovation, that’s called progress.
I think it comes down, in every generation, to the fact that as the world changes, you have a choice. You can be on that bus, or you can be off that bus, and if you embrace technology, you have the opportunity to be on a bus that’s moving further and moving faster. Now having said that, by the way, you get onto a bus that’s bigger, heavier and faster-moving than the one you used to have, and the scale of the impact of the accidents that can be caused is terrifying. But, again, this is not a new problem.
I’m sure the first tools that we made, using a rock as a hammer, can help you build something. Using the first hammer, you could break your thumb. Figuring out how to control fire, gave us the capability to stay warm and cook our food, and have light at night. It also gave us the capability to burn down our houses.
So, there is no technology that has only upside. In fact, I’d argue that the more potential upside that any new technology has, the more it can amplify what we do, by definition, it is just an amplifier. It’s not good, it’s not bad, it’s not immoral, it’s amoral. It’s an amplifier. It can help us do more good better; it can help people with nefarious goals do more bad, better.
And we need to deal with that. I think one way we deal with that is when we start teaching kids at an ever-earlier age about the power of technology, they also have to be taught not to use it as a weapon but to use it as a tool and not to simply do what you can do with technology, but to focus our efforts and do what we should with technology. And to kids that are lucky enough, that have the privilege to have access to advanced technologies, they should understand that with every privilege comes a responsibility.
I know in this country everybody runs around, saying “I’ve got my rights because of the Bill of Rights.” Well, maybe the founding fathers should have put right next to the Bill of Rights, the bill of responsibilities. Those people with capability need to use it wisely and prudently and help the rest.
So to those people that now claim the next big evolution in technology, which isn’t about amplifying muscles, it’s amplifying brains, it is going to be terrifying. I’m sure they were terrified by the locomotive, by the sewing machine. I think smart people have to recognize that there’s always risk when things change, and we need to continue to make sure that the changes net us all out to be in a better place, and we use those changes responsibly.
And then the second concern people have, and I think it’s one you share as well, is, do people have the education to do the jobs of the future?  Talk a little bit about that, and what you’re doing in that regard.
So, now you hit one where I think there is a problem, but again, in any rapidly changing environment where you displace the status quo, some people fall out of the bottom. I think the changes now happening affect not just what was typically the worker base in most industries; these are changes that affect everything. And what are called white-collar jobs, or professional jobs, are going to be potentially hugely impacted, and for some people in a negative way, by ‘artificial intelligence,’ because it was those white-collar or professional jobs that required a lot of sophisticated thinking that now might be displaced by programs that can get to better results more quickly than the manual process of thinking was capable of doing, even a decade ago.
So I started this program called FIRST, For Inspiration and Recognition of Science and Technology, a few decades ago, recognizing that the jobs of the future are going to need kids who have a much more sophisticated skill set as they get through even their junior high school and high school years, to be ready for these career options. And the rate of change in the skill sets that will be necessary to have really interesting, exciting career opportunities over the next decade or two is going to require a major change in our education system. Keeping kids lined up in rows, having them memorize facts that used to be important because if you didn’t know them, where were you going to find them, [is an obsolete approach].
Now, every kid in the country is carrying around on his or her belt every fact known to man, in a very well-organized way to find those facts. So, education should no longer be giving you the disciplines and the toolsets that you used to need to go become a factory worker, learn how to follow instructions, learn how to do the same repetitive thing over and over again.
Education has to now be a much more sophisticated process of giving kids the toolsets and the understanding of how, for instance, to use ‘artificial intelligence,’ how to leverage the fact that information is now virtually free and what they need is to learn how to be systems innovators that add innovative ways for taking all this information and creating new opportunities to solve old problems.  And that’s what I tried to do in the FIRST community and that’s what I think schools need to quickly embrace so that school, as we knew it, can remain relevant to kids and it could be worth them spending so much of their life in these locations.
Well put a little flesh on the bones. How would you do schools?
Well, one of the things I’ve urged every school in this country to do, is incorporate a FIRST program. I mean, we’ve known now for decades that kids will sit in class and for 45 minutes once a week do phonics or spelling, but then every day during the season, every day after school, for three hours they practice [sports], whether it’s the football season, or the baseball season, or the basketball season, or soccer.
Kids, in a free country, you get the best of what you celebrate, and we have great programs that turn kids into great athletes, because they put more time and effort and passion into that than they do in ‘academic’ stuff. We justify all of that, by the way, putting so much into our physical school environment, whether it’s the parquet floors on the basketball court or the side lawns for the football and baseball. We justify sports even though kids run the risk of being physically hurt. We justify it, almost exclusively by saying it’s critical that kids at an early age learn teamwork, and learn how to work together, compete in a positive way. Well, really, if teamwork is all that important, why when they do it in the classroom, do we still call it cheating?
So I said, look, we have a model that works, that gets kids inspired. It’s called sports. What if we could take that kind of model, that kind of interactive, project-based program, like building a sports team, but make the content not bouncing a ball or kicking a ball or throwing? What if we could make the content developing the muscle hanging between their ears? What if we could create an opportunity within the school environment where kids could all participate in something where, unlike in other sports, every kid on a FIRST team could turn pro?
There simply aren’t millions of jobs in the NBA, the NFL or Hollywood. There are millions of not just jobs, but there are millions of career opportunities to create whole new industries that you and I haven’t even conceived of yet, that will be created by, and available to the next generation of kids that understand technology, that understand how to work together, that understand how to stand on the shoulders of the giants that have delivered, e.g., microprocessors that have essentially now made computation free and memory is essentially free, and the speed and the power of these devices have now turned them all into commodities.
We need kids that know how to leverage those commodities to solve the world’s problems and create the new industries, and I think schools should be giving kids the toolset and the environment to do that. Rather than lining them up with twenty- or thirty-year-old textbooks where science to them is putting pins in a frog, I think FIRST has the real potential to change the environment and the culture in schools, to turn them into places where kids are excited to participate and come away with opportunities to create careers that they wouldn’t have imagined without FIRST.
So, take a step back just for the readers who may not be familiar with it. Describe what FIRST is.
FIRST, well, the name stands for For Inspiration and Recognition of Science and Technology. Notice the word 'education' isn't in there, the same way the word 'education' isn't in Little League Baseball. I said, look, let's create an institution that we can offer to schools, one that will give kids kits of parts and cutting-edge technologies, almost exclusively donated by the massive, fantastic corporate supporters we have across the country and across every industry, to give kids access to the most cutting-edge technologies and software development tools. Let them take those kits of parts into their school, and in a very exciting, competitive, short, intense season, like any other sporting season, the schools will have these FIRST kits working between the kids, the teachers, the parents, and, the magic ingredient, the outside mentors from our 3,700 corporate sponsors. Pretty much every high-tech company in this country, and in the world now, embraces us, because they need these kids more than these kids need them. So, it's a win, win, win for everybody.
The kids win, the teachers win, the parents win, the companies win. Basically, FIRST is a program that brings together all of these different entities and says, we're not going to give you quizzes and tests; we're going to give you this aspirational, extracurricular activity during which you learn how to do programming, how to do electrical engineering, mechanical engineering, systems, controls, teamwork. Build your company, build your little team, get it out there, go compete in these tournaments.
And again, you could say it's all about teaching kids how to build robots, but I've been saying for years that the goal of FIRST was never to use robots as the output product that we measure. What we've now shortened it to is: hey, everybody, we are not using kids to build robots. That would be slave labor. We are not using kids to build robots; we are using robots to build kids. We are using these robots as an example to kids of what happens when you give people a sophisticated technical challenge and an open-ended set of tools and inspire them to just try to do it. There are no answers in the back of the book; there's no one right way to do it. We're not asking you to recall what we told you yesterday, that you took notes on while you were sitting in your classroom.
We're saying, there is this problem. Every school is going to get the same kit of parts, and you're all going to have six or eight weeks to turn that kit of parts into an operating system. You're going to put it out on that playing field, and our playing fields are smaller than a basketball court, and you're going to go head-to-head in battles, in a double-elimination tournament, against other kids and other schools that had other ideas about the best way to accomplish this goal. And in the end, again, there are no quizzes, no tests. It's bring the school band, bring the mascots, bring the cheerleaders, let's go celebrate what we've all accomplished. Let's go celebrate the creativity we've demonstrated, let's go celebrate what we've learned about science and technology and engineering and problem solving.
It went from twenty-three teams the first year we did it to, this year, over 61,000 schools from 83 countries that will be represented back here in Texas at the end of next month at the World Championship. And then, because it got so big we couldn't fit everybody here, a couple of days later we will do the second half of our World Championship in Detroit, with thousands of these teams.
The passion that you see in these kids lets you know that despite all the crummy news we're always hearing about technology, that we're running out of water and food, the polar caps are melting, the environment is being... You know, news typically loves to make a spotlight around big problems. But these kids just beam when you talk to them about these problems. They show self-confidence in their ability to innovate and to deal with these issues in a positive way, and it renews your confidence that, while the world has gotten better and better at being negative and pessimistic, if you give kids the right tools and the right mentorship, and first, if you inspire them to recognize science and technology, you can walk away from our events believing that the future is going to be better than the past.
And how much of your time does all of that take?
Well, if I'm awake, I'm working, so I probably work a normal forty or fifty or sixty-hour week in my day job. I have 500 engineers, and in our day job we design lots of critical systems for medical needs.
Let's talk about DEKA. Let's do that. You started it in 1982, it's in New Hampshire, and it's focused on R&D.
Yeah, DEKA mostly does the front-end development of what we'd like to do, which is take advanced technologies as they get developed and come to scale, and then figure out how to apply them in a world where most people don't apply them. As processors got better and better, and faster and faster, you had ever more realistic violence in video games. Well, that's great, because we can do it, it's easy.
But once these big processors, and faster processors, and lower-power processors got developed, we said, could we use those to make a better dialysis machine, one that might actually be so capable, through monitoring itself, through artificial intelligence, that patients could do life support at home in their own bedroom? It's more comfortable, there's more dignity, it's lower cost, and the outcomes are better. So, we didn't invent microprocessors, or sensors, or lithium batteries, or solid-state gyros, but we used them to build things like iBots and Segways.
We are a company of systems integrators. We're always looking at the world and saying, what new technologies have been developed, because some industry sees a need for huge amounts of them, whether it's the gaming industry, or the defense industry, or the automotive industry? And we look at those technologies and say, now that they exist, could we integrate them into a better, simpler, smaller wearable drug delivery system, so somebody isn't tied to a hospital? Or can we make an iBot, so that a paraplegic or a quadriplegic who hasn't been able to look somebody in the eye, or go up a curb or a flight of stairs since their accident, has mobility again? Can we give them back that capability by bringing these technologies to the field of human health? I now have 500 engineers working on various projects to do that. Just a couple of weeks ago, we put a pair of prosthetic arms on a guy who had lost both of his arms. This guy was bilaterally without arms.
Is this the Luke?
This is the Luke Arm.
You can hold it above your head?
Yes, you sure can. And each one of the Luke Arms gives somebody substantially more capability than they had, frankly, with a plastic stick with a hook on the end of it, which they'd been using for decades. But as I said, a couple of weeks ago, with a lot of support from the Veterans Administration and DARPA and the military, we ended up putting a pair of our prosthetic arms onto a guy for the first time ever. And within a very short time, this guy stands up in front of, frankly, our Senator, because they had to see it in New Hampshire, spreads both of his arms, looks at her, and says, I'm ambidextrous now. It was a great moment. So, I have a lot of engineers, as I said, about 500 technical people in Manchester. We work mostly on systems to improve health care.
So, how does that work? In a way, you’ve done a fantastic job of systematizing the productization of technology through innovation. How do you go all the way from “we know how to do this really cool thing,” to “and there’s a business here?” How do you do that, because usually things get handed off multiple times and different people have different skill sets, but you kind of do both ends of it, and how have you managed to do that?
So, technically, I'm really not an engineer. I studied physics; I love mathematics and logic. I have, I think, some of the best engineers in the world in each of the disciplines you need for these very multi-disciplinary projects that we do. We need mechanical engineers and electrical engineers, and systems engineers and controls engineers, but to your point, in most companies they're very vertical, and then they hand it off to a manufacturing group, and then they hand it off to sales and marketing. At DEKA, we're a little bit different from that. We look at the whole problem from end to end and say, "Look, let's be good systems integrators. The core technologies really did take, in many cases, decades to develop, but now that they're here and they're ready, let's figure out how we can, across all the engineering disciplines necessary to do it, create a new class of solution to an old problem."
The most exciting one we're doing right now, and hopefully we're going to get a lot of support from IEEE for this one, is this: at the end of the last administration, we were given $80 million by the Department of Defense with a specific goal. Let's take all those miracles happening in med schools and research labs, called regenerative medicine, let's take those Petri dishes and roller bottles which have these little miracles in them, and bring them to scale, so that the 400,000 people who are right now waiting for a liver or a kidney or a lung are going to get one before, frankly, they die waiting. The researchers have literally broken down the problem of understanding life: how does a liver be a liver, how does a kidney do what it does, how does a pancreas make insulin? They know the answers to these questions, but they're doing it in laboratory-scale environments. And we said, I am as unlikely to wake up tomorrow and suddenly understand all the cellular biology as they are to wake up tomorrow and say, oh, verification, validation, process control, regulatory standards.
These things aren't going to jump out of these roller bottles and Petri dishes and suddenly become an industry by themselves. We need standards. We need the expertise and scale of companies that know how to take a prototype and make lots and lots of them, and if what we're making lots and lots of are human organs and human tissue, man, you'd better get it right, because you're putting it in somebody.
So we said, look, as systems integrators, we think we can bring together probably dozens of companies. We ended up bringing in more than 80: engineering companies and manufacturing companies. In fact, there's only one giant company I know of that has the word 'automation' in its name, Rockwell Automation.
Well, we went to Rockwell, and their Chairman, Blake Moret, said, "Dean, I'll not only support you, I'll join the board" of this new entity, called the Advanced Regenerative Manufacturing Institute (ARMI). He basically sat there and said: we, like anybody else, don't know a whole lot about manufacturing whole human organs, because nobody has ever done it. But, Dean, if you can bring to the table the medical community and these researchers, these guys who have won or will win the Nobel Prize for their contributions to medicine; and if you can bring your systems integrators and the rest of the engineering industry and people who understand artificial intelligence and robotics; and if we can bring, for instance, IEEE and ASME and NIST, the National Institute of Standards and Technology, all together, so that when we show up at the Food and Drug Administration (FDA) and say, trust us, this is a real organ, it meets a standard, we know what the quality is, we can make these things safely, in volume, and affordably, then that's what ARMI is going to do.
And if it succeeds, we will create a new industry, and that industry will be able to start supplying spare parts to humans and assure them of a higher quality of life than we can now offer people who find out their kidneys have failed and we put them on dialysis. I built a lot of dialysis equipment. We are very proud that we're helping to keep these people alive while they wait for a cure, but you wouldn't want to be on dialysis, trust me.
We make lots of stuff to keep people going, but how much better would it be if, instead of keeping them alive with chronic treatment, we could cure their condition? Somebody suffering from macular degeneration is sitting there saying, I see less and less, and soon I'm not going to see at all. And maybe it's your Mom or your Dad. What would it be like to say, oh, well, go to this place and they'll give you a new eye?
What would it be like to see that little kid who has to take insulin three times a day and say, oh, we're going to give you a new pancreas? Imagine a world where you can safely and reliably replace organs that have stopped working in people and give them a new start. That's what ARMI is going to do.
You say "if" it works, but it isn't going to be a binary outcome, right? Some things will work, and some things won't.
Fair enough. When I said, “if it works,” I didn’t mean that there is any chance whatsoever that we won’t eventually do it.
Right.
I should have said, "We're doing this now, going down this path that we've created, and what I've promised a whole lot of people is that within five years there will be at least some evidence that some of these things have gotten far enough that they're actually meeting realistic clinical needs." Certainly, we're not going to wake up one day and be able to double-click and say, send me a liver, send me a kidney. That's not going to happen digitally, instantly.
We'll start with simple things. Maybe it won't be whole organs. It'll be cells, it'll be pieces of tissue, it'll be cartilage, it'll be skin, it'll be bone, and then it will grow, no pun intended, into full organs. And in some cases, for those full organs, we will integrate a process, again, not one that we invented, but one we know very well now: in a laboratory environment, we can take cells from an individual and, through a very, very elegant process, turn them into what are called iPSCs, induced pluripotent stem cells. We could take a cell from your body, because all the cells in your body have the same information, [but] then why did one of them become an eyeball and one of them become a toenail? Well, they got differentiated.
But what if we could get a cell out of you and say, we're going to put it back to what it looked like when you were an embryo? And what if we could take that cell from you and put it into a structure, manufactured on a 3D printer, that wants to be a kidney or a liver? And what if I could make these iPSCs from you, and that cell essentially can grow up to be whatever it wants? It could be a liver cell, it could be a hepatocyte, it could be an islet cell that makes insulin.
What if we could, through these engineering capabilities, grow the physical structure and at the same time develop a scalable process to take your cells and, at just the right moment coming down a 'manufacturing line,' put them into this organ that we just grew, have those cells become a fully operating organ of that type, a liver, a lung, or a kidney, keep it in a sterile environment, and deliver it essentially to the surgical suite, where the first time a human touches this manufactured product, it gets taken and assembled back into you, replacing the defective one? The same way you take your car to the dealership and they take off the old, noisy, cracked muffler and put on a new muffler, or a new starter motor, or a new generator.
You've got this whole beautiful car, but if those spark plugs don't work, the car is useless. If that starter motor won't crank, the car is useless. Well, you've got this whole beautiful car, and we figured out a long time ago how to replace the one or two parts that needed to be replaced to make the whole car work again.
What if we could do that for you? And if we did it by putting an organ back in you that was built from your own cells, it won't be rejected. You won't need to spend the rest of your life taking immunosuppressive drugs to prevent it from being rejected, which gives you a higher quality of life and a lower-cost medical system. Everybody wins.
So, my "if" was not whether that could be done. My "if" was whether our plan and the support we're getting from government, from researchers, from the engineering community, from the standards community, from NIST, from IEEE, from the FDA... if all of those things come together to create what might be the most sophisticated, vertically integrated manufacturing process the world has ever seen, we could transform medicine in two ways. We could give people a way better quality of life, and we would dramatically lower the costs that right now look like they're going to bankrupt this country unless we come up with some great innovations.
Another thing I guess DEKA's been working on that we're interested in is computer vision.
Yes.
Can you talk a little bit about that?
Sure. As an example, there are two places where we're in desperate need of better computer vision. In this ARMI advanced regenerative manufacturing work, we're trying to manufacture organs, but those organs have subcomponents that are literally cells, and we've got to be able to see where they are, how they're moving, how they're duplicating, where they're putting themselves.
So, what if we could create an environment in which some of the systems we're developing to bring these things out of the laboratory don't need a postdoc sitting glued to a microscope, manually doing things? What if we could automate, through vision systems, some of the process of monitoring and controlling the manufacture of things whose components are literally smaller than human cells? That would be a huge win for us, and we're working on that.
And the other place we want great vision systems: I'm excited to say we just got our next generation of iBot approved by the Food and Drug Administration, and we've proven that it's a very safe, reliable system to keep a person standing up. But just like anybody who stands up, you and I, if we're not paying attention, can trip, can slip, can step into that pothole, can miss that curb, and then we fall down. Well, what if we could add vision systems to our iBot that allow us to do local mapping, to make sure the device is even safer when the person using it isn't paying attention?
What if we could help prevent some of those slips and trips? Computer vision has now gotten to where the actual cameras are so small and so inexpensive and require so little power. I mean, everybody is walking around with a smartphone camera that has super high resolution, can operate in very low light, is very small, and is very low powered. What if we now integrate that with some of the very sophisticated software that can look at the images coming through these cameras, help us map the environment, and make sure that what we're doing is safe?
So, I think vision systems, because the hardware has gotten so much better and smaller and cheaper, and because the algorithms that take the data coming out of these cameras have gotten so much better, whether it's a simple camera or a lidar system or an ultrasonic system, or some combination of all sorts of others, are going to hugely improve the capability of machines to interact with the outside world they're in, gather data in real time about the physical locations of things, and do the things we'd like them to do. Hence, the self-driving car.
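What follows is a heavily simplified, hypothetical sketch of the local-mapping-for-safety idea described above: scan a strip of ground-height readings ahead of a device and flag abrupt steps that could indicate a curb or pothole. A real system would fuse camera, lidar, or ultrasonic data; the threshold and the sample profile here are invented purely for illustration.

```python
import numpy as np


def detect_dropoffs(ground_heights_m: np.ndarray, max_step_m: float = 0.05) -> np.ndarray:
    """Return indices where the ground height changes by more than max_step_m
    between adjacent readings, i.e. candidate trip or drop-off hazards."""
    steps = np.abs(np.diff(ground_heights_m))
    return np.where(steps > max_step_m)[0]


if __name__ == "__main__":
    # Flat sidewalk, then a 12 cm drop at a curb, then flat road (all values invented).
    profile = np.array([0.00, 0.00, 0.01, 0.00, -0.12, -0.12, -0.11])
    print(detect_dropoffs(profile))  # flags the jump between readings 3 and 4
```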
And last question. You’re clearly an optimistic person about the future in a world that is full of people who aren’t. How are you seeing things differently than people who are more down on the future?
Well, I just think the world works like this: you never solve a problem until you can identify and define it. So naturally as we’ve gotten better and better with technology, we’re able to define more and more problems. That could be depressing to people. To me, it’s just a transition stage. Now that we’ve identified that problem, let’s go about solving it.
So there are those people who look at the world, and now, ironically through technology, through those satellite images, we can see that, wow, we're impacting the environment, we can prove to ourselves the polar caps are melting. We can take tests now that say, "Wow, you're predisposed to Parkinson's disease" or "You will have Alzheimer's," and we develop more and more tools to help define problems. That's always the first step. So people see those tools, and they get obsessed with... oh my gosh, I'm going to have this disease or that disease. But an optimist says, oh, now that I've defined that problem, I'll put some grey matter to it; I'm going to solve that problem.
I'm sure sixty or seventy or eighty years ago, when they figured out, oh my gosh, polio, this virus, has gone through the country and it's just wreaking havoc. My grandparents were terrified that this disease would come, and what if their kids got polio? And then somebody figured out how to invent the iron lung, so that if the polio extended past your legs and got up to your lungs, which normally would have killed you, you could be kept alive. Then people, those pessimists, those guys who know how to extrapolate into the future, probably the same guys who are telling us today that our health care system is going to bankrupt the country, probably sat there and said, wait a minute, what if a few million kids next year get polio and it gets so bad they need an iron lung?
Well, it used to be, as tragic as it was, that they would just die. But now, I'm sure, they would have extrapolated that within the next ten years half the population of America was going to be kept alive lying in iron lungs and the other half of the population was going to be stuck taking care of them, and we'd all be bankrupt. Because they looked at the current state of data, the current state of what we knew, the current state of the problem. But they didn't say, oh, don't worry, in about ten years this visionary guy named Jonas Salk is going to come along, and he's going to realize that if you take the virus, kill it, and then scratch it under somebody's skin, they not only will not need an iron lung, they won't even need the little things on the... they'll never get polio. And kids today are not only not afraid of polio, they don't know polio. It's gone, it's over. And it didn't bankrupt the country. It's about the cheapest thing you can do. Kids are born; they get a bunch of vaccines: smallpox, polio.
So I think the people today who are similar to the ones who must have been worried back then about that problem will always see the cost of everything and the value of nothing. They will always see this insurmountable set of problems. I heard, as a very young person, that phrase, well, every problem represents an opportunity. I think we are surrounded by insurmountable opportunity, and I think smart people will dice up those problems, those opportunities, into different pieces. And if we can create an army, a large enough army of young kids... back to FIRST, if we can create millions of kids who can spread themselves across all these opportunities, then the generation of people alive today, the pessimists about health care, global warming, you name it, is going to watch the next generation of smart, passionate kids, one by one, say: oh, well, we've eliminated this problem with global warming, we've eliminated this health care problem, we've now just created a vaccine against this cancer, or Alzheimer's, or that.
Or, we've just figured out a way to make new organs for this, so you won't need dialysis. I think the smart, optimistic kids with the right toolsets will always stay one step ahead, and in this constant race that we have in our society between the fear of catastrophe and the opportunity of success, the kids who are focused, who are optimistic, who work hard, who embrace technology, will be the kids who make sure that we succeed.
And we will succeed if we invest in these kids and if we have policies that allow us to embrace innovation, and I’m hoping that our government leaders, our industry leaders, our parents, our schools, and most of all our kids will embrace innovation, will embrace hard work, will take reasonable risks and create a better world in the future, as has happened in every generation since we climbed out of the primordial ooze.
Well what a fantastic message and thank you for sharing it with our audience and thank you for taking the time.
You’re very welcome.

Voices in AI – Episode 22: A Conversation with Rudina Seseri

In this episode, Byron and Rudina talk about the AI talent pool, cyber security, the future of learning, and privacy.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today, our guest is Rudina Seseri. She is the founding and managing partner at Glasswing Ventures. She's also an entrepreneur in residence at Harvard Business School, and she holds an MBA from that same institution. Welcome to the show, Rudina.
Rudina Seseri: Hello Byron. Thank you for having me.
You wrote a really good piece for Gigaom, as a matter of fact; it was your advice to startups—don't say you're doing AI just to have the buzzwords on the side; you'd better be able to say what you're really doing.
What is your operational definition of artificial intelligence, and can you expand on that theme? Because I think it’s really good advice.
Sure, happy to. AI—and I think of it as the wave of disruption—has become such a popular term, and I think there are definitional challenges in the market. From my perspective, and at the very highest level, AI is technology, largely computers and software, that possesses or has some level of intelligence that mirrors that of humans. It’s as basic as one would imagine it to be by the very name artificial intelligence.
Where I think we are in the AI maturity curve, if one wants to express it in such a form, is really the early days of AI and the impact it is having and will have going forward. It’s really, what I would call, “narrow AI” in that we’re not at a point where machines, in general, can operate at the same level of diversity and complexity as the human mind. But for narrow purposes, or in a narrow function—for a number of areas across enterprise and consumer businesses—AI can be really transformational, even narrow AI.
Expressed differently, we think of AI as anything—such as visual recognition, social cognition, speech recognition—underpinned with a level of machine learning, with a particular interest around deep learning. I hope that helps.
That's wonderful. You're an investor, so you get pitches all the time, and you're bound to see ones where the term AI is used and it's really just in there to play "buzzword bingo" and all of that... Because your definition, that it's "doing things humans would normally do," kind of takes me back to my cat food bowl that fills itself up when it's empty. It's weighing and measuring it so that I don't have to. I used to do it, and now a computer does it. Surely, if you saw that in a business case, like, "We have an AI cat food bowl," that really isn't AI, or is it? And then you've got things like the Nest, which is a learning system. It learns as you do it, and yours is eventually going to be different than mine—I think that is clearly in the AI camp. What would be a case of something that you would see in a business case and just roll your eyes?
To address your examples and give you a few illustrations, I think in your example of the cat food plate or whatnot, I think you’re describing automation much more than AI. And you can automate it because it’s very prescriptive—if A takes place, then do B; if C takes place, then do D. I think that’s very different than AI.
I think when technologies and products are leveraging artificial intelligence, you are really looking for a learning capability. Although, to be perfectly honest, even within the world of artificial intelligence, researchers don’t agree on whether learning, in and of its own, qualifies as AI. But, coming back to everyday applications, I think, much like the human mind learns, in artificial intelligence, whatever facet of it, we are looking for some level of learning. For sure, there’s a differentiator.
To then address your question head on, my goodness, we’re seeing AI disrupt all facets—from cyber security and martech to IT and HR to new robotics platforms—it’s running the whole gamut. Why don’t I give you a perfect example, that’s a real example, and I can give you the name of a portfolio company so we make it even more practical and less hypothetical?
One of my recent investments is a company called Talla. Talla is taking advantage of natural language processing capabilities for the HR and IT organizations in particular, where they’re automating lower level tickets, Q&A for issues that an employee may have—maybe an outage of email or some other question around an HR benefit—and instead of having a human address the question, it is actually the bot that’s addressing the question. The bot is initially augmenting, so if the question is too complex and the bot can only take the answer so far and can’t fully address the particular question, then the human becomes involved. But the bot is learning, so when a second person has a similar question, the bot can actually address it fully.
In that instance, you have both natural language processing and a lot of learning, because no two humans ask the very same question. And even if we are asking the same question, we do not ask it in the same manner. That’s the beauty of our species. So, there’s a lot of learning that goes on in that regard. And, of course, it’s also the case that it’s driving productivity and augmentation. Does that address your question, Byron?
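As a rough, hypothetical illustration of the pattern Rudina describes, and not of Talla's actual implementation, the sketch below answers a ticket itself when its match confidence is high enough, escalates to a human otherwise, and stores the human's answer so a later, similarly phrased question can be handled end to end. The matching method, the threshold, and the sample tickets are all assumptions.

```python
from difflib import SequenceMatcher


class TicketBot:
    """Toy helpdesk bot: answer if confident, otherwise defer to a human and learn."""

    def __init__(self, confidence_threshold: float = 0.6) -> None:
        self.threshold = confidence_threshold
        self.knowledge = {}  # known question -> answer

    def _best_match(self, question: str):
        best_q, best_score = None, 0.0
        for known in self.knowledge:
            score = SequenceMatcher(None, question.lower(), known.lower()).ratio()
            if score > best_score:
                best_q, best_score = known, score
        return best_q, best_score

    def handle(self, question: str, human_answer: str = None) -> str:
        match, score = self._best_match(question)
        if score >= self.threshold:
            return self.knowledge[match]             # the bot resolves the ticket itself
        if human_answer is not None:                 # augmentation: a human steps in,
            self.knowledge[question] = human_answer  # and the bot learns from the answer
            return human_answer
        return "Escalated to a human agent."


if __name__ == "__main__":
    bot = TicketBot()
    # First employee asks; the bot cannot answer yet, so a human does and the bot learns.
    print(bot.handle("My email is down, what do I do?",
                     human_answer="Restart Outlook, then file ticket IT-101."))
    # A second employee phrases the same issue differently; the bot now handles it alone.
    print(bot.handle("My email seems to be down, what should I do?"))
```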
Absolutely. That’s Rob May’s company, isn’t it?
Yes, it is.
I know Rob; he’s a brilliant guy.
Phenomenal.
Specifically, with that concept, as we are able to automate more things at a human level, like customer service inquiries, how important do you think it is that the end user knows that they're talking to a bot of some kind, as opposed to a person?
When you say “know,” are you trying to get at the societal norm of what… Is this a normative question?
Exactly. If I ask where is your FAQ and “Julia”—in air quotes—says, “Here. Our FAQs are located here,” and there was no human involved, how important is it that I, as an end user, know that it’s called “Julia Bot” not “Julia”?
I think disclosure is always best. There’s nothing to be hidden, there’s nothing that’s occurring that’s untoward. In that regard, I would personally advocate for erring on the side of disclosure rather than not, especially if there is learning involved, which means observing, on the part of the bot. I think it would be important. I also think that we’re in the early days of this type of technology being adopted and becoming pervasive that the best practices and norms have yet to be established.
Where I suspect you will see both, is what I call the “New York Times risk”—where we’ll have a lot more discussion around what’s an acceptable norm and what’s right and wrong in this emerging paradigm—when we read a story where something went the wrong way. Then we will all weigh in, and the bodies will come together and establish norms. But, I think, fundamentally, erring on the side of disclosure serves a company well at all times.
You’re an investor. You see all kinds of businesses coming along. Do you have an investment thesis like, “I am really interested in artificial intelligence applied to enterprises”? What is your thesis?
We refer to our thesis as—not only do we have a thesis, but I think we have a good name to capture it—“Intelligent, Connect and Protect,” wherein our firm strategy is to invest in startups that are really disrupting, in a positive manner, and revolutionizing the enterprise—from sales tech and martech, to pure IT and data; around platforms, be those software platforms or robotics and the like; as well as cyber security and infrastructure.
So that first part, around enterprise and platforms, is the “Connect” world and then the cyber security and the infrastructure is the protection of that ecosystem. The reason why we don’t just call it “Connect and Protect” is because with every single startup that we invest in, core to our strategy is the utilization, or taking advantage, of artificial intelligence, so that is the “Intelligent” part in describing, or in capturing our thesis.
Said differently, we fundamentally believe that if a technology startup, in this day and age, is not leveraging some form of machine learning, some facet of AI, it’s putting itself at a disadvantage from day one. Put more directly, it becomes legacy from the get-go, because from a performance point of view those legacy products, or products without any kind of learning and AI, just won’t be able to keep up and outperform their peers that do.
You’re based in Boston. Are you doing most of your investing on the East Coast?
For the most part, correct. Yes. East Coast, and in other pockets of opportunity where our strategy holds. There are some interesting things in areas like Atlanta with security, even in certain parts of Europe like London, Berlin, Munich, etcetera, but yes.
Are AI being used for different things on the East Coast than what we think of in Silicon Valley? Can you go into that a little more? Where do you see pockets that are doing different things?
I think AI is a massive wave, and I think we would be in our own bubble if we thought that it was divided by certain coasts. Where I think it manifests itself differently, however—and I think it's having an impact at a global level, to be honest, rather than in our own microcosms—is where you see a difference in the concentration of the talent pool around AI, and especially deep learning. Because, keep in mind, the notion of specializing in machine learning or visual cognition, and deep learning is the best example, didn't really exist before 2012. We talk a lot about data scientists, but true data scientists and machine learning experts are very, very hard to come by, because the field is, in many ways, driven by the explosion in data, and by the maturity that deep learning is only now achieving, for it to be commercializable and for the techniques to be used in real products. It's all very new, only existing in the last five to, if you want to be generous, ten years.
From that perspective, where talent is concentrated makes a difference. To come back to how, maybe, the East Coast compares, I think we will see AI companies across the board. I’m very optimistic, in that I think we have quite a bit of concentration of AI through the universities on the east coast. I think of MIT, Carnegie Mellon, and Cornell; and what we’re seeing come out of Harvard and BU on the NLP side.
Across the universities, there are very, very deep pockets of talent, and I think that manifests itself both in the number and high quality of AI-enabled products and startups that we're seeing get launched, and also in what one would call the "incumbents," such as Facebook, Amazon, Google, Uber, and the list goes on. If you look closely at where their AI teams are—even though almost all the companies I just mentioned are headquartered in the Valley and, in the case of Amazon, in Seattle—their AI talent is concentrated on the East Coast; probably most notably, Facebook's AI group is headquartered in New York. So, combine that talent concentration with the market that we, in particular, focus our strategy around, the enterprise, where the East Coast has always had, and continues to have, an advantage, and I think it's an interesting moment in time.
I assume with the concentration of government on the East Coast and finance on the East Coast, that you see more technologies like security and those sorts of things. Specifically, with security, there’s been this game that’s gone back and forth for thousands of years between people who make codes, and people who break them. And nobody’s ever really come to an agreement about who has the harder job. Can you make an unbreakable code, and can it be broken? Do you think AI helps those who want to violate security, or those who want to defend against those violations, right now?
I think AI will play an important role in defending and securing the ecosystem. The reason I say that is because, in this day and age, the exploding number of devices and pervasive connectivity everywhere—translated into cybersecurity lingo, an increase in the number of endpoints and areas of vulnerability, whether at the network and device level or at the data and identity level—have made us a lot more vulnerable, which is the paradigm we live in.
Where I think AI and machine learning can be true differentiators is that not only can they be leveraged, again, for the various software solutions to continuously learn, but on the predictive side they can also point out where an attack is likely before it actually takes place. There are certain patterns that help the enterprise home in on the vulnerability—from assessment, to time of attack, at or during the attack, and then post-attack. I do think that AI is a really meaningful differentiator for cybersecurity.
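One common way this predictive idea is put into practice, shown here only as a hedged sketch with invented feature names and synthetic data, is to train an anomaly detector on baseline telemetry and flag new events that deviate from the learned pattern before an incident fully unfolds.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline traffic: [requests per minute, outbound KB, failed logins]
normal = np.column_stack([
    rng.normal(60, 10, 500),   # typical request rate
    rng.normal(200, 40, 500),  # typical outbound volume
    rng.poisson(1, 500),       # occasional failed login
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one ordinary, one resembling exfiltration plus brute force.
new_events = np.array([
    [62, 210, 0],
    [400, 5000, 35],
])
print(model.predict(new_events))  # 1 = consistent with baseline, -1 = flagged as anomalous
```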
You alluded, just a moment ago, to the lack of talent; there just aren’t enough people who are well-versed in a lot of these topics. How does that shake out? Do you think that we will use artificial intelligence to make up for shortage of people with the skills? Or, do you think that universities are going to produce a surge of new talent coming in? How do we solve that? Because you look out your window, and almost everything you see, you could figure out how we could use data to study that and make it better. It’s kind of a blue ocean. What do you think is going to happen in the talent marketplace to solve for that?
AI eventually will be a layer, you’re absolutely right. From that perspective, I cannot come up with an area where AI will not play a role, broadly put, for the foreseeable future and for a long time in the future.
In terms of the talent challenge, let me address your question in two parts. The talent shortage we have right now stems from the fact that it's a relatively new field, or a resurgence of the field, and the ability to actually deploy it in the real world and commercialize it is what's driving the demand. Demand has spiked, and for supply to adjust requires talent, and it's not there yet. It's a bit of a matter of market timing. For sure, we will see many more students enter the field, and many more students specialize and get trained in machine learning.
Then the real question becomes will part of their functions be automated? Will we need fewer humans to perform the same functions, which I think was the second part of your question if I understood it correctly?
Yes.
I think we're in a phase of augmentation. And we've seen this in the past. Think about this, Byron: how did developers code ten or fifteen years ago? In different languages, but largely from the ground up. How do they code today? I don't know of any developer who doesn't use the tools available to spin up quickly and ramp up quickly.
AI and machine learning are no different. Not every company is going to build their own neural net. Quite the opposite. A lot of them will use what’s open source and available out there in the market, or what’s commercialized for their needs. They might do some customization on top, and then they will focus on the product they’re building.
The fact that you will see part of the machine learning function that’s being performed by the data scientists be somewhat automated should come as no surprise, and that has nothing to do with AI. That has to do with driving efficiencies and getting tools and having access to open source support, if you will.
I think down the road—where AI plays a role both in augmentation and in automation—we will see definitional changes to what it means to be in a certain profession. For example, I think a medical doctor of the future might look, from a day-to-day activity point of view, very different from what we perceive a doctor's role to be, from how they interact to what they're trained in. The fact that a machine learning expert and a data scientist—which, by the way, are not the same thing, but for the sake of argument I'm using them interchangeably—are going to use tools, and not start from scratch but leverage some level of automation and AI learning, is par for the course.
When I give talks on these topics, especially on artificial intelligence, I always get asked the question, “What should I, or what should my children, study to remain employable in the future?”—and we’ll talk about that in a minute, about how AI kind of shakes up all of that.
There are two kind of extreme ends on this. One school of thought says everyone in school should learn how to code, everyone. It’s just like one of the three R’s, but it starts with a C. Everyone should learn to code. And then Mark Cuban, at South by Southwest here in Austin, said that the first trillionaires are going to be from AI companies because it offers the ability to make better decisions, right? And he said if he were coming up today, he would study philosophy, because it’s going to be that kind of thinking that allows you to use these technologies, and to understand how to apply them and whatnot.
On that spectrum of everyone should code, or no, we might just be making a glut of people to code, when what we really need are people to think about how to use these technologies and so forth, what would you say to that?
I have a 4-year-old daughter, so you better believe that I think about this topic quite a bit. My view is that AI is an enabler. It’s a tool for us as a society to augment and automate the mundane, and give us more ability and more room for creativity and different thinking. I would hope to God that the students of the future will study philosophy, they will study math, they will study the arts, they will study all the sciences that we know, and then some. Creativity of thinking, and diversity of thinking will remain the most precious asset we have, in my view.
I do think that, much like children today study the core hard sciences of math and chemistry and biology as well as literature, part of the core curriculum in the future will probably be some form of advanced data statistics, or introductory machine learning, or some level of computer science. We will see some technology training that becomes core, but I think that is a very, very, very different discussion than "Everybody should study computer science" or, looking forward, "Everybody should be a roboticist or machine learning expert or AI expert." We need all the differentiation in thinking that we can get. Philosophy does matter, because what we do today shapes the present and the society of the future.
Back to the talent question, to your point about someone who is well-versed in machine learning—which is different than data science, as you were saying—do you think those jobs are very difficult, and we’re always going to have a shortage of them because they’re just really hard? Or, do you think it’s just a case that we haven’t really taught them that much and they’re not any harder than coding in C or something? Which of those two things do you think it is?
I think it's a bit more the latter than the former; it's a relatively new field. Yes, math and quantitative skills matter in this area, but it's a new field. It will take talent that has a certain predisposition toward, like I said, math and quantitative work, yes, for sure. But I do think that the shortage we're experiencing has a lot more to do with the newness of the field rather than a lack of interest, a lack of qualified talent, or a lack of aptitude.
One thing, when people say, "How can I spot a place to use artificial intelligence in my enterprise?" I say: find things that look like games. Because AI keeps winning at these, it won at chess, it beat Ken Jennings in Jeopardy and Lee Sedol in Go, and games are really neat because they are these very constrained universes with definable rules and clear objectives.
So, for example, you mentioned HR in your list of all the things it was going to affect, so I’ll use that one. When you have a bunch of resumes, and you’ve hired some people that get great performance reviews, and some people that don’t, and you can think of them as points, or whatever—and you can then look at it as a big game, and you can then try to predict, you know? You can go into each part of the enterprise and say, “What looks like a game here?” Do you have a rule like that or just a guiding metaphor in your own mind? Because, you see all these business plans, right? Is there something like that, that you’re looking for?
There were several questions embedded in this, so let me see if I can decouple a couple of them. I think in any area that is data-driven, any facet of the enterprise where there is data or information, you can leverage learning and narrow AI for prediction, to use some of your keywords. Are there opportunities for optimization? Are there areas where analytics are involved, where you can move away from basic statistical models and start leveraging AI? Where there is room for efficiency and automation, you can leverage it. It's hard not to find an area where you can leverage it. The question is, where can you create the most value?
For example, if you are at the forefront of an enterprise on the sales side, can you leverage AI? Of course you can—not all prospective customers are created equal, there are better funnels, you can leverage predictive models; the more and better data you have, the better the outcomes. At the end of the day, your neural net will perform only as well as the data you put in: junk in, junk out. That's one facet.
If you're looking at the marketing and technology side, think about how one can leverage machine learning and predictive models around advertising, particularly on the programmatic side, so that you're personalizing your engagement, in whichever capacity, with your consumer or your buyer. We can go down the list, Byron. I think the better question is, which are the lower-hanging fruits where I can start taking advantage of AI right away, and which ones will I wait on, rather than, do I have any areas at all? If a particular manager or business person can't find any areas, I think they're missing the big picture and the day-to-day execution.
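As a small, hypothetical example of the sales-side use mentioned above, the sketch below fits a simple lead-scoring model on made-up historical outcomes and ranks new prospects by their estimated probability of converting. The features, the data, and the choice of model are assumptions made only for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical leads: [web visits, demo requested (0/1), company size (hundreds)]
X = np.array([
    [2, 0, 1], [15, 1, 5], [3, 0, 2], [22, 1, 8],
    [1, 0, 1], [18, 1, 6], [5, 0, 3], [25, 1, 9],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = the lead eventually converted

model = LogisticRegression().fit(X, y)

# Score two new prospects so the sales team can prioritize the warmer one.
prospects = np.array([[4, 0, 2], [20, 1, 7]])
for features, score in zip(prospects, model.predict_proba(prospects)[:, 1]):
    print(features, f"estimated probability of converting: {score:.2f}")
```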
I remember in the ‘90s when the consumer web became a big thing, and companies had a web department and they had a web strategy, and now that’s not really a thing, because the internet is part of your business. Do you think we’re like that with artificial intelligence, where it’s siloed now, but eventually, we won’t talk about it the way we’re talking about it now?
I do think so. I often get asked the very same question, “How do I think AI will shape up?” and I think AI will be a layer much like the internet has become a layer. I absolutely do. I think we will see tools and capabilities that will be ever pervasive.
Since AIs are only as good as the data you train them on, does it seem monopolistic to you that certain companies are in a place where they can constantly get more and more and more data, which they can therefore use to make their businesses stronger and stronger and stronger, and it’s hard for new entrants to come in because they don’t have access to the data? Do you think that data monopolies will become kind of a thing, and we’ll have to think about how to regulate them or how to make them available, or is that not likely?
I think the possession of data is, for sure, a barrier to entry in the market, and I do think that the current incumbents, probably more than we’ve ever seen before, have built this barrier to entry by amalgamating the data. How it will shake out… First of all, two thoughts: one, even though they have amassed huge amounts of data with this whole pervasive connectivity, and devices that stay connected all the time, even the large incumbents are only scratching the surface of the data we are generating, and the growth that we’ll continue to see on the data side. So, even though it feels oligarchy-like, maybe—not quite monopolistic—that the big players have so much data, I think we’re generating even more data going forward. So that’s sort of at the highest level.
I do think that, particularly on the consumer side, something needs to be done around customers taking control of their data. I think brands and advertisers have been squatting on consumer data with very little in return for us. I think, again, one can leverage AI in predictives, in that regard, to compensate—whether it’s through an experience or in some other form—consumers for their personal private data being used. And, we probably need some form of regulation, and I don’t know if it’s at the industry standard level, or with more regulatory bodies involved.
Not sure if you follow Sir Timothy Berners-Lee, who invented the web, but he talks a lot about data decentralization. I think there is something quite substantive in his statements around re-decentralizing the web and the data and giving consumers a say. I think we're seeing a bit of a groundswell in that regard. How will it manifest itself? I'm not quite sure, but I do think that the discussion around data will remain very relevant and become even more important as the amount of data increases and as data becomes a critical barrier to entry for future businesses.
With regard to privacy in AI, do you think that we are just in a post-privacy world? Because so much of what you do is recorded one way or the other that data just exists and we’ll eventually get used to that. Or do you think people are always going to insist on the protections that you’re talking about, and ways to guarantee their anonymity; and that the technology will actually be used to help promote privacy, not to wear it down?
I think we haven’t given up on privacy. I think the definition of privacy might have changed, especially with the millennials and the social norms that they have been driving, and, largely, the rest of the population has adopted. I’d say we have a redefinition of privacy, but for sure, we haven’t given up on it; even the younger generations who often get accused of doing so. And you don’t need to take my word on it, look at what happened with Snap. Basically, in the early days, it was really almost tweens but let’s say it was teenagers who were on Snapchat and what they were doing was “borderline misbehavior” because it was going to go away, it wouldn’t leave a footprint. The value prop being that it disappears, so your privacy, your behavior, does not become exposed to the broader world. It mattered, and, in my view, it was a critical factor in the growth that the company saw.
I think you’d be hard pressed to find people, I’m sure they exist but I think they are in the minority, that would say, “Oh, I don’t care. Put all of my data, 24/7, let the world know what I’m up to.” Even on the exhibitionist side, I think there’s a limit to that. We care about privacy. How we define it today, I suspect, is very different than how we defined it in the past and that is something that’s still a bit more nebulous.
I completely agree with that. My experience with young people is they are onto it, they understand it better and they are all about it. Anyway, I completely agree with all of that.
So, what about European efforts with regard to the “right to know why”? If an artificial intelligence makes a decision that impacts your life—like gives you a loan or doesn’t—you have the right to know how that conclusion was made. How does that work in a world of neural nets where there may not be a why that’s understandable, kind of, in plain English? Do you think that that is going to hold up the development of black box systems, or that that’s a passing fad? What are your thoughts on that?
I think Europe has always been on the side of protecting consumers. We were just talking about privacy, and look at what they are doing with GDPR and what's coming to market from the data point of view on the topic we were just wrapping up. I think, as we gain a better understanding of AI and as the field matures, if we hide behind "We don't quite know how the decision was made" (and we may not fully comprehend it), or behind "Oh, it's hard to explain and people can't understand it," at some point that becomes a cop-out. I don't think we need to educate everyone on how neural nets and deep learning work, but I think you can talk about the fundamentals: what are the drivers, and how are they interacting with each other? At a minimum, you can give the consumer some basic level of understanding as to where they probably outperformed or underperformed.
It reminds me of how, in tech, we used to use acronyms when talking to each other, making everybody else feel like they were less intelligent than the rest of the world. I don't think we need to go into the science of artificial intelligence and machine learning to help consumers understand how decisions were made. Because guess what? If we can't explain it to the consumer, the person on the other side who's managing the relationship won't understand it themselves.
I think you’re right, but, if you ask Google, “Why did this page come number one for this search?” the answer, “We don’t know,” is perfectly understandable. It’s six hundred different algorithms that go into how they rank pages—or whatever the number is, it’s big. So, how can they know why this is page number one and that is page number two?
They may not know fully, or it may take some effort to drill in specifically as to why, but at some level they can tell you what some of the underlying drivers behind the ranking were, or how the ranking algorithms work, etcetera, etcetera. I think, Byron, what you and I are going back and forth on is, in my view, a question of granularity, rather than whether they can or cannot. It's not a yes or a no; it's a granularity question.
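To make the granularity point concrete, here is a minimal sketch, under assumed feature names and invented data, of one way to surface the drivers behind a model's decision: for a linear model, each standardized feature value times its coefficient shows how much it pushed the outcome up or down. Deep networks need other tooling, but the idea of reporting drivers at some level of granularity is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "late_payments"]  # hypothetical loan features

# Invented historical applicants and whether they repaid (1) or defaulted (0).
X = np.array([[80, 0.20, 0], [30, 0.60, 4], [60, 0.30, 1], [25, 0.70, 5],
              [90, 0.10, 0], [40, 0.50, 3], [70, 0.25, 1], [35, 0.65, 4]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

applicant = np.array([[45, 0.55, 2]], dtype=float)
contributions = model.coef_[0] * scaler.transform(applicant)[0]

print("approval probability:", round(model.predict_proba(scaler.transform(applicant))[0, 1], 2))
for name, value in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:>14}: {value:+.2f}")  # positive pushed toward approval, negative against
```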
There’s a lot of fear in the world around the effect that artificial intelligence is going to have on people, and one of the fear areas is the effect on jobs. As you know, there kind of are three narratives. One narrative is that there are some people who don’t have a lot of training in things that machines can’t do, and the machines are eventually going to take their jobs, and that we’ll have some portion of the population that’s permanently unemployed, like a permanent Great Depression.
Then there’s a school of thought that says, “No, no, no. Everybody’s replaceable by a machine, that eventually, they’re going to get to a point where they can learn something new faster than a human, and then we’re all out of work.”
And then there’s a third group that says, “No, no, no, we’re not going to have any unemployment because we’ve had disruptive technologies: electricity, replacing animals with machines, and steam; all these really disruptive technologies, and unemployment never spiked because of those. All that happens is people learned to use those tools to increase their own productivity.”
My question to you is, which of those three narratives, or is there a fourth one, do you identify with?
I would say I identify only in part with the last narrative. I do think we will see job displacement. I do think we will see job displacement in categories of workers that we would normally have considered highly skilled. In my view, what's different about this paradigm versus, let's say, the Industrial Revolution, is that it is not only the lowest-trained workers or the highly specialized workers—think of artisanal-type workers back in the day—who get displaced out of their roles and, through automation, replaced, by machines in the Industrial Revolution, or here by technology and the AI paradigm.
I think what's tricky about the current paradigm is that the middle class and the upper middle class get impacted as much as the less-trained, low-skilled workers. There will be medical doctors, there will be attorneys, there will be highly-educated parts of the workforce whose jobs—some may be done away with—will in large part be redefined. And, very analogous to the discussion we were just having about the shortage of machine learning experts, we'll see older generations who are still seeking to be active members of the workforce be put out of the labor market, or be no longer qualified and require new training, and it will be a challenge for them to gain the training to be as high a performer as someone who has been learning that particular skill, say medicine in an AI paradigm, from the get-go.
I think we'll see a shift in job definitions, and a displacement of meaningful chunks of the highly-trained workforce, and that will have significant societal consequences as well as economic consequences. Which is why I think a form of guaranteed basic income is a worthy discussion, at least until that generation of workers gets settled and a new labor force that's highly trained in an AI-type paradigm comes into play.
I also think there will be many, many, many new jobs and professions created as a result that we have yet to think about or even imagine. I do not think that AI is a net negative in terms of overall employment. I think—McKinsey and many, many others have done studies on this—in the long term, we'll probably see more employment created than lost as a result of AI. But, at any point in time, as we look at AI disruption and adoption over the next few decades, I think we will see moments of pain, and meaningful pain.
That's really interesting because, in the United States, as an example, since the Industrial Revolution, unemployment has been between five and nine percent, without fail, five and nine percent, except during the Great Depression, which nobody said was caused by technology. If you think about an assembly line, an assembly line is AI. If you were making cars one at a time in a garage, and then all of a sudden Henry Ford shows up and makes them a hundred at a time and sells them for a tenth the price, and they're better, that has got to be like, "Oh my gosh, this technology just displaced an enormous number of people," and yet you never see unemployment go above nine percent in this country.
I will leave the predictions of the magnitude of the impact to the macroeconomists; I will focus on startups. But let me stick with that example: you have artisanal shops and sewing by hand, and then the machine and the factory line come along, and now it's all automated, and you and others are displaced. So, for every ten of you who were working, one is now on the factory line and nine find themselves out of a position. That was the paradigm I was describing a minute ago with doctors and lawyers and other professions, that a lot of their function will become automated or replaced by AI. But then, it's also the case that now their children or their grandchildren are studying outer space, or going into astronomy and other fields that we might have thought about at a folklore level but never expected we'd get to; so, new fields emerge.
The pain will be felt, though. What do you do with the nine out of ten who are, right there and then, out of a position? In the long term, in an AI paradigm, we’ll see many, many more professions get created. It’s just about where you get caught in the cycle.
It’s true. In ’95, you never would have thought, “If you just connect a bunch of computers together with a common protocol and make the web, you’re going to have Google and eBay and Etsy.”
Let’s talk about startups for a minute. You see a lot of proposals, and then you make investments, and then you help companies along. What would you say are the most common mistakes that you’re seeing startups make, and do you have general advice for portfolio companies?
Well, my portfolio companies get the advice in real time, but I think, especially for AI companies—to go back to how you opened this discussion, which was referencing a byline I had done for Gigaom—if a company truly does have artificial intelligence, show it. And it's pretty easy to show. You show how your product leverages various learning techniques, you show who the people on your team are that are focusing on machine learning, but also how you, the founder, whether you are a technical founder or not, understand the underpinnings of AI and of machine learning. I think that's critical.
So many companies are calling themselves something-something-dot-AI, and it's very, very similar and analogous to what we saw with big data. If you remember, seven to ten years ago, every company was big data. Every company is now AI, because it's the hot buzzword. So, rising above the noise while taking advantage of the wave is important, but do it because it's meaningfully valuable to your business, and because, from the get-go, you're taking advantage of machine learning and AI, not because it's the buzzword of the day that you think might get you money. The fact of the matter is, for those of us who live and breathe AI and startups, we'll cut through the noise fairly quickly, and our pattern recognition and the number of deals we see in any given week are such that the true AI capabilities will stand out. That's one piece.
I do think, also, that for the companies and founders that truly are leveraging neural nets, truly are getting the software or hardware—whatever their product might be—to outperform, the dynamics within the companies have changed. Because we don't just have the technology team consisting of the developers with the link to the product people; we now have this third leg, the machine learning or data science people. So, how is the product roadmap being driven? Is it the product people driving it, or is the machine learning talent coming up with models to help support it, or are they driving it, and product is turning it into a roadmap, and technology, the developers, are implementing it? It's a whole new dynamic among these various groups.
There's a school of thought, in fact, that says, "Machine learning experts, who's that? It's the developers who will have machine learning expertise; they will be the same people." I don't share that view. I think developers will have some level of fluency in machine learning and AI, but I think we will have distinct talent around it. So, getting the culture right amongst those groups makes a very, very big difference to the outcome. I think it's still in the making, to be honest.
This may be an unanswerable question, because it’s too vague.
Lucky me.
I know.
Go ahead.
Two business plans come across your desk, and one of them is a company that says, “We have access to data that nobody else has, and we can use this data to learn how to do something really well,” and the other one says, “We have algorithms that are so awesome that they can do stuff that nobody else knows how to do.” Which of those do you pick up and read first?
Let's merge them. Ideally, you'd like to have both the algorithms, or the neural nets, and the data. If you really force me to pick one, I'll pick the data. I think there are enough tools out there, enough TensorFlows or whatnot in the market and in open source, that you could probably work with those and build on top of them. Data becomes the big differentiator.
I think of data today, Byron, as we used to think of patents back in the day. The role of patents is an interesting topic because, compared with execution, they've taken a second or third seat as a barrier to entry. But, back ten, fifteen years ago, patents mattered a lot more. I think data can give you that kind of barrier to entry, and even more so. So, I pick data. It is an answerable question; I'll pick the data.
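As a rough illustration of the "tools are commodity, data is the moat" point, here is a hypothetical sketch in which every piece of modeling machinery is off-the-shelf open source (scikit-learn in this case), and the only thing a competitor could not simply download is the dataset behind the made-up load_proprietary_data() helper, which here returns a synthetic stand-in so the sketch runs.

```python
# Hypothetical sketch: the modeling code below is all commodity open source;
# in a real company, the proprietary dataset would be the differentiator.
import random
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_proprietary_data(n=200):
    """Stand-in for the hard-to-get dataset; values are made up for illustration."""
    random.seed(0)
    X, y = [], []
    for _ in range(n):
        usage, tickets = random.uniform(0, 100), random.randint(0, 10)
        X.append([usage, tickets])
        y.append(1 if tickets > 5 and usage < 40 else 0)  # synthetic "churn" label
    return X, y

X, y = load_proprietary_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()   # generic, freely available algorithm
model.fit(X_train, y_train)    # the value comes from what it is fit on
print("held-out accuracy:", model.score(X_test, y_test))
```

The algorithm and the library are the same ones everyone else can use; whoever holds the better training data gets the better model out of identical code.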
Actually, my very next question was about the role of patents in this world. Because the world changes so quickly, plus you have to disclose so much. Would you advise people to keep things as trade secrets instead? Or, just, how do you think companies who develop a technology should protect and utilize it?
I think the answer depends a bit on what facet of technology we are talking about. In the life sciences, they still matter quite a bit, which is an area that I don't know as much about, for sure. I think, in technology, their role has diminished, although it is still relevant. I cannot think of a company that became big and a market leader because it had patents. I think they are an important facet, but not the be-all and end-all in terms of must-haves. In my view, they are a nice-to-have.
I think where one pauses is if their immediate competitor has a healthy body of patents; then you think a bit more about that. As far as the tradeoff between patents and trade secrets, I think there is a moment in time when one files a patent, especially if secrecy matters. At the end of the day, though—and this may be ironic given that we're talking about artificial intelligence startups—much like any other facet of our lives, what matters is excellence of execution, and people. People can make or break you.
So, when you ask me about the various startups that I see, and talk about the business plans, I never think of them as "the business plan." I always think of them in the context of, "Who are the founders? Who are the team members, the management team?" So, team first. Then, market timing for what they are going after, because you could have the right execution or the right product, but the wrong market timing. And then, of course, the question of what problem they are solving, and how they are taking advantage of AI. But, people matter. To come back to your question, patents are one more area where a startup can build defensibility, but they are not the be-all and end-all by any stretch, and they have a diminished role, in fact.
How do you think startups have changed in the last five or ten years? Are they able to do more early? Or, are they demographically different—are they younger or older? How do you think the ecosystem evolves in a world where we have all these amazing platforms that you can access for free?
I think we've seen a shift. Earlier, you referenced the web, and with the emergence of the web, back in 1989, we saw digital and e-commerce and martech, and entire new markets get created. In that world—what I'll call not just pure technology businesses, but tech-enabled businesses—we saw a shift both toward younger demographics, with startups founded by younger entrepreneurs, and toward more diversity in terms of gender and background as well, in that not everybody needed to have a computer science degree or an engineering degree to be able to launch a tech or a tech-enabled company.
I think that became even more prevalent and emphasized in the more recent wave that we're just on the completion side of, with social-mobile. I mean, the apps, that universe and ecosystem, it's two twenty-year-olds, right? It's not the gray-headed, three-time entrepreneur. So, we absolutely saw a demographic shift. In this AI paradigm, I think we'll see a healthy mixture. We'll see the researcher and the true machine learning expert, who's not twenty anymore but not quite forty either, so, a bit more maturity. And then we'll see the very young cofounder or the very experienced cofounder. I think we'll see a mix of demographics and age groups, which is the best. Again, we're in a business of diversity of thought and creativity. We're looking for that person who's taking advantage of the tools and innovation and what's out there to reimagine the world and deliver a new experience or product.
I was thinking it’s a great time to be a university professor in these topics because, all of a sudden, they are finding themselves courted right and left because they have long-term deep knowledge in what everyone is trying to catch up on.
I would agree, but keep in mind that there is quite a bit of a chasm between teaching a topic and actually commercializing it. So I think the professors who are able to cross the chasm—not to sound too Geoffrey Moore-ish—are the ones who, yes, are in the right field at the right moment in time. Otherwise, it's their students, the talent that is knowledgeable enough, those PhDs who don't go into academia but actually go into commercialization, execution, and implementation; that's the talent that's in high demand.
My last question is, kind of, how big can this be? If you're a salesperson, and you have a bunch of leads, you can just use your gut, and pick one, and work that one, or you have data that informs you and makes you better. If you're an HR person, you hire people more suited to the job than you would have before. If you're a CEO, you make better decisions about something. If you're a driver, you can get to the place quicker. I mean, when you add all of that up across an entire world of inefficiency… So, you kind of imagine this world where, on one end of the spectrum, we all just kind of stumble through life like drunken sailors on shore leave, randomly making decisions based on how we feel; and then you think of this other world where we have all of this data, and it's all informed, and we make the best decisions all the time. Where do you think we are? Are we way over at the wandering-around end, and this is going to get us over to the other side? How big of an impact is this? Could artificial intelligence double GNP in the United States? How would you say how big can it be?
Fortunately, or unfortunately, I don’t know, but I don’t think we live in a binary world. I think, like everything else, it’s going to be a matter of shades. I think we’ve driven productivity and efficiency, historically, to entirely new levels, but I don’t think we have any more free time, because we find other ways to occupy ourselves even in our roles. We have mobile phones now, we have—from a legacy perspective—laptops, computers, and whatnot; yet, somehow, I don’t find myself vacationing on the beach. Quite the contrary, I’m more swamped than ever.
I think we have to be careful about—if I understood your question correctly—transplanting technology into, “Oh, it will take care of everything and we’ll just kind of float around a bit dumber, a bit freer, and whatnot.” I think we’ll find different ways to reshape societal norms, not in a bad way, but in a, “What constitutes work?” way, and possibly explore new areas that we didn’t think were possible before.
I think it's not necessarily about gaining efficiency, but I think we will use that time, not in an unproductive or leisurely way, but to explore other markets, other facets of life that we may or may not have imagined. I'm sorry for giving you such a high-level answer, and not making it more concrete. I think productivity from technology has been something that's been, as you well know, very hard to measure. We know, anecdotally, that it's had an impact on measured activity, but there are entire groups of macroeconomists who not only can't measure it, but don't believe it has improved productivity.
It will have a fundamental transformative impact; whether we're able to measure it—I know you defined it as GNP, but I'm defining it from a productivity point of view—or not remains to be seen. Some would argue that it's not productive, but I would throw the thought out there that traditional methodologies of measuring productivity do not account for technological impact. Maybe we need to look at how we're defining productivity. I don't know if I answered your question.
That's good. The idea that technology hasn't increased our standard of living, I don't think… I live a much more leisurely life than my great-grandparents, not because I work any harder than they did, but because I have technology in my life, and because I use that technology to make me more productive. I know the stuff you're referring to, where it's like, "We've got all these computers in the office and worker productivity doesn't seem to just be shooting through the roof." I don't know. Let's leave it there.
Actually, I do have a final question. You said you have a four-year-old daughter, are you optimistic overall about the world she’s going to grow up in with these technologies?
My gosh! We’re going into a shrink session.
No, I mean are you an optimist or a pessimist about the future?
Apparently, I've just learned—in the spirit of sharing information with you and all your listeners—that my age group falls into something called the Xennials, where we are very cynical like Generation X, but also optimists like the Millennials. I'm not sure what to make of that. I would call it an interesting hybrid.
I am very optimistic about my daughter’s future, though. I think of it as, today’s twentysomethings are digital natives, and today’s ten-year-olds and later are mobile natives. My daughter is going to be an AI native, and what an amazing moment in time for her to be living in this world. The opportunities she will have and the world she will explore on this planet and beyond, I think, will be fascinating. I do hope that somewhere in the process, we manage to find a bit more peace, and not destroy each other. But, short of that, I think I’m quite optimistic about the future that lies ahead.
Alrighty, well let’s leave it at that. I want to thank you for an absolutely fascinating hour. We touched on so many things and I just thank you for taking the time.
My pleasure. Thanks again for having me.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 21: A Conversation with Nikola Danaylov

In this episode, Byron and Nikola talk about singularity, consciousness, transhumanism, AGI and more.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Nikola Danaylov. Nikola started the Singularity Weblog, and hosts the wildly popular singularity.fm podcast. He has been called the "Larry King of the singularity." He writes under the name Socrates, or, to the Bill & Ted fans out there, So-crates. Welcome to the show, Nikola.
Nikola Danaylov: Thanks for having me, Byron, it’s my pleasure.
So let’s begin with, what is the singularity?
Well, there are probably as many definitions and flavors as there are people or experts in the field out there. But for me, personally, the singularity is the moment when machines first catch up and eventually surpass humans in terms of intelligence.
What does that mean exactly, “surpass humans in intelligence”?
Well, what happens to you when your toothbrush is smarter than you?
Well, right now it’s much smarter than me on how long I should brush my teeth.
Yes, and that’s true for most of us—how long you should brush, how much pressure you should exert, and things like that.
It gives very bad relationship advice, though, so I guess you can’t say it’s smarter than me yet, right?
Right, not about relationships, anyway. But about the duration of brush time, it is. And that’s the whole idea of the singularity, that, basically, we’re going to expand the intelligence of most things around us.
So now we have watches, but they’re becoming smart watches. We have cars, but they’re becoming smart cars. And we have smart thermostats, and smart appliances, and smart buildings, and smart everything. And that means that the intelligence of the previously dumb things is going to continue expanding, while unfortunately our own personal intelligence, or our intelligence as a species, is not.
In what sense is it a “singularity”?
Let me talk about the roots of the word. The term singularity comes from mathematics, where it basically denotes a problem with an undefined answer, like five divided by zero, for example. Or from physics, where it signifies a black hole. That's to say, a place where there is a rupture in the fabric of space-time, and the laws of the universe as we know them don't hold true.
In the technological sense, we're borrowing the term to signify the moment when humanity stops being the smartest species on our planet, and machines surpass us. And therefore, beyond that moment, we're going to be looking into a black hole of our future, because our current models fail to provide sufficient predictions as to what happens next.
So everything that we have already is kind of going to have to change, and we don’t know which way things are going to go, which is why we’re calling it a black hole. Because you cannot see beyond the event horizon of a black hole.
Well if you can’t see beyond it, give us some flavor of what you think is going to happen on this side of the singularity. What are we going to see gradually, or rapidly, happen in the world before it happens?
One thing is the “smartification” of everything around us. So right now, we’re still living in a pretty dumb universe. But as things come to have more and more intelligence, including our toothbrushes, our cars—everything around us—our fridges, our TVs, our computers, our tables, everything. Then that’s one thing that’s going to keep happening, until we have the last stage where, according to Ray Kurzweil, quote, “the universe wakes up,” and everything becomes smart, and we end up with different things like smart dust.
Another thing will be the merger between man and machine. So, if you look at the younger generation, for example, they’re already inseparable from their smartphones. It used to be the case that a computer was the size of a building—and by the way, those computers were even weaker in terms of processing power than our smartphones are today. Even the Apollo program used a much less powerful machine to send astronauts to the moon than what we have today in our pockets.
However, that change is not going to stop there. The next step is that those machines are going to actually move inside of our bodies. So they used to be inside of buildings, then they went on our body, in our pockets, and are now becoming what’s called “wearable technology.” But tomorrow it will not be wearable anymore, because it will be embedded.
It will be embedded inside of our gut, for example, to monitor our microbiome and to monitor how our health is progressing; it will be embedded into our brains even. Basically, there may be a point where it becomes inseparable from us. That in turn will change the very meaning of the definition of being human. Not only at the sort of collective level as a species, but also at the personal level, because we are possibly, or very likely, going to have a much bigger diversification of the understanding of what it means to be a human than we have right now.
So when you talk about computers becoming smarter than us, you’re talking about an AGI, artificial general intelligence, right?
Not necessarily. The toothbrush example is artificial narrow intelligence, but as it gets to be smarter and smarter there may be a point where it becomes artificial general intelligence, which is unlikely, but it’s not impossible. And the distinction between the two is that artificial general intelligence is equal or better than human intelligence at everything, not only that one thing.
For example, a calculator today is better than us in calculations. You can have other examples, like, let’s say a smart car may be better than us at driving, but it’s not better than us at Jeopardy, or speaking, or relationship advice, as you pointed out.
We would reach artificial general intelligence at the moment when a single machine will be able to be better at everything than us.
And why do you say that an AGI is unlikely?
Oh no, I was saying that an AGI may be unlikely in a toothbrush format, because the toothbrush requires only so many particular skills or capabilities, only so many kinds of knowledge.
So we would require the AGI for the singularity to occur, is that correct?
Yeah, well that’s a good question, and there’s a debate about it. But basically the idea is that anything you can think of which humans do today, that machine would be equal or better at it. So, it could be Jeopardy, it could be playing Go. It could be playing cards. It could be playing chess. It could be driving a car. It could be giving relationship advice. It could be diagnosing a medical disease. It could be doing accounting for your company. It could be shooting a video. It could be writing a paper. It could be playing music or composing music. It could be painting an impressionistic or other kind of piece of art. It could be taking pictures equal or better than Henri Cartier-Bresson, etc. Everything that we’re proud of, it would be equal or better at.
And when do you believe we will see an AGI, and when would we see the singularity?
That's a good question. I kind of fluctuate a little bit on that, depending on whether we have some kind of global-scale disaster—it could be nuclear war, for example; right now the situation is getting pretty tense with North Korea—or some kind of extreme climate-related event, or a catastrophe caused by an asteroid impact. Short of any of those huge things that can basically change the face of the Earth, I would say probably 2045 to 2050 would be a good estimate.
So, for an AGI or for the singularity? Or are you, kind of, putting them both in the same bucket?
For the singularity. Now, we can reach human-level intelligence probably by the late 2020’s.
So you think we’ll have an AGI in twelve years?
Probably, yeah. But you know, the timeline, to me, is not particularly crucial. I’m a philosopher, so the timeline is interesting, but the more important issues are always the philosophical ones, and they’re generally related to the question of, “So what?” Right? What are the implications? What happens next?
It doesn’t matter so much whether it’s twelve years or sixteen years or twenty years. I mean, it can matter in the sense that it can help us be more prepared, rather than not, so that’s good. But the question is, so what? What happens next? That’s the important issue.
For example, let me give you another crucial technology that we're working on, which is life extension technology, trying to make humanity "amortal." Which is to say we're not going to be immortal—we can still die if we get run over by a truck or something like that—but we would not be likely to die from the general causes of death that we see today, which are usually old-age related.
As an individual, I’m hoping that I will be there when we develop that technology. I’m not sure I will still be alive when we have it, but as a philosopher what’s more important to me is, “So what? What happens next?” So yeah, I’m hoping I’ll be there, but even if I’m not there it is still a valid and important question to start considering and investigating right now—before we are at that point—so that we are as intellectually and otherwise prepared for events like this as possible.
I think the best guesses are we would live to about 6,750. That's how long it would take, actuarially speaking, for some, you know, Wile E. Coyote, piano-falling-out-of-the-top-floor-of-a-building-and-landing-on-you kind of thing to happen to you.
So let’s jump into philosophy. You’re, of course, familiar with Searle’s Chinese Room question. Let me set that up for the listeners, and then I’ll ask you to comment on it.
So it goes like this: There’s a man, we’ll call him the librarian. And he’s in this giant room that’s full of all of these very special books. And the important part, the man does not speak any Chinese, absolutely no Chinese. But people slide him questions under the door that are written in Chinese.
He takes their question and he finds the book which has the first symbol on the spine, and he finds that book and he pulls it down and he looks up the second symbol. And when he finds the second symbol and it says go to book 24,601, and so he goes to book 24,601 and looks up the third symbol and the fourth and the fifth—all the way to the end.
And when he gets to the end, the final book says copy this down. He copies these lines, and he doesn’t understand what they are, slides it under the door back to the Chinese speaker posing the question. The Chinese speaker picks it up and reads it and it’s just brilliant. I mean, it’s absolutely over-the-top. You know, it’s a haiku and it rhymes and all this other stuff.
So the philosophical question is, does that man understand Chinese? Now a traditional computer answer might be “yes.” I mean, the room, after all, passes the Turing test. Somebody outside sliding questions under the door would assume that there’s a Chinese speaker on the other end, because the answers are so perfect.
But at a gut level, the idea that this person understands Chinese—when they don’t know whether they’re talking about cholera or coffee beans or what have you—seems a bit of a stretch. And of course, the punchline of the thing is, that’s all a computer can do.
All a computer can do is manipulate ones and zeros and memory. It can just go book to book and look stuff up, but it doesn’t understand anything. And with no understanding, how can you have any AGI?
So, let me ask you this: how do you know that that's not exactly what's happening right now in my head? How do you know that me speaking English to you right now is not the exact process you described?
I don’t know, but the point of the setup is: If you are just that, then you don’t actually understand what we’re actually talking about. You’re just cleverly answering things, you know, it is all deterministic, but there’s, quote, “nobody home.” So, if that is the case, it doesn’t invalidate any of your answers, but it certainly limits what you’re able to do.
Well, you see, that’s a question that relates very much with consciousness. It relates to consciousness, and, “Are you aware of what you’re doing,” and things like that. And what is consciousness in the first place?
Let’s divide that up. Strictly speaking, consciousness is subjective experience. “I had an experience of doing X,” which is a completely different thing than “I have an intellectual understanding of X.” So, just the AGI part, the simple part of: does the man in the room understand what’s going on, or not?
Let’s be careful here. Because, what do you mean by “understand”? Because you can say that I’m playing chess against a computer. Do I understand the playing of chess better than a computer? I mean what do you mean by understand? Is it not understanding that the computer can play equal or better chess than me?
The computer does not understand chess in the meaningful sense that we have to get at. You know, one of the things we humans do very well is we generalize from experience, and we do that because we find things are similar to other things. We understand that, “Aha, this is similar to that,” and so forth. A computer doesn’t really understand how to play chess. It’s arguable that the computer is even playing chess, but putting that word aside, the computer does not understand it.
The computer, that program, is never going to figure out baccarat any more than it can figure out how many coffee beans Colombia should export next year. It just doesn’t have any awareness at all. It’s like a clock. You wind a clock, and tick-tock, tick-tock, it tells you the time. We progressively add additional gears to the clockwork again and again. And the thesis of what you seem to be saying is that, eventually, you add enough gears so that when you wind this thing up, it’s smarter than us and it can do absolutely anything we can do. I find that to be, at least, an unproven assumption, let alone perhaps a fantastic one.
I agree with you on the part that it's unproven. And I agree with you that it may or may not be an issue. But it depends on what you're going for here, and it depends on the computer you're referring to, because we have the new software, AlphaGo, that was built to play Go. And that actually learned to play the game based on previous games—that's to say, on the previous experience of other players. And then that same kind of approach of learning from the past, and coming up with new creative solutions for the future, was then implemented in a bunch of other fields, including bioengineering, including medicine, and so on.
So when you say the computer will never be able to calculate how many beans that country needs for next season, actually it can. That’s why it’s getting more and more generalized intelligence.
Well, let me ask that question a slightly different way. So I have, hypothetically, a cat food dish that measures out cat food for my cat. And it learns, based on the weight of the food in it, the right amount to put out. If the cat eats a lot, it puts more out. If the cat eats less, it puts less out. That is a learning algorithm, that is an artificial intelligence. It’s a learning one, and it’s really no different than AlphaGo, right? So what do you think happens from the cat dish—
—I would take issue with you saying it’s really no different from AlphaGo.
Hold on, let me finish the question; I’m eager to hear what you have to say. What happens, between the cat food AI and AlphaGo and an AGI? At what point does something different happen? Where does that break, and it’s not just a series of similar technologies?
So, let me answer your question this way… When you have a baby born, it's totally dumb, blind, and deaf. It's unable to differentiate between itself and its environment, and it lacks complete self-awareness for probably the first, arguably, year-and-a-half to two years. And there are a number of psychological tests that can be administered as the child develops. Usually girls, by the way, do about three to six months better; they develop personal awareness faster and earlier than boys, on average. But let's say the average age is about a year-and-a-half to two years—and that's a very crude estimation, by the way. The development of AI would not be exactly the same, but there will be parallels.
The question you’re raising is a very good question. I don’t have a good answer because, you know, that can only happen with direct observational data—which we don’t have right now to answer your question, right? So, let’s say tomorrow we develop artificial general intelligence. How would we know that? How can we test for that, right? We don’t know.
We’re not even sure how we can evaluate that, right? Because just as you suggested, it could be just a dumb algorithm, processing just like your algorithm is processing how much cat food to provide to your cat. It can lack complete self-awareness, while claiming that it has self-awareness. So, how do we check for that? The answer is, it’s very hard. Right now, we can’t. You don’t know that I even have self-awareness, right?
But, again, those are two different things, right? Self-awareness is one thing, but an AGI is easy to test for, right? You give a program a list of tasks that a human can do. You say, “Here’s what I want you to do. I want you to figure out the best way to make espresso. I want you to find the Waffle House…” I mean, it’s a series of tasks. There’s nothing subjective about it, it’s completely objective.
Yes.
So what has happened between the cat food example, to the AlphaGo, to the AGI—along that spectrum, what changed? Was there some emergent property? Was there something that happened? Because you said the AlphaGo is different than my cat food dish, but in a philosophical sense, how?
It’s different in the sense that it can learn. That’s the key difference.
So does my cat food thing: it gives the cat more food some days, and if the cat's eating less, it cuts the cat food back.
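For the contrast being drawn here, this is roughly the kind of trivial adaptive rule the cat-food dish would need: a single number nudged up or down by feedback, with no model of anything beyond portion size. It is a hypothetical sketch, not something either speaker built, and the quantities are made up.

```python
# Hypothetical sketch of the cat-food dish's "learning": a single quantity
# adjusted by simple feedback, nothing like a general game-playing system.
portion_grams = 50.0

def update_portion(portion: float, grams_eaten: float, step: float = 2.0) -> float:
    """Nudge the portion toward what the cat actually ate."""
    if grams_eaten >= portion:        # bowl emptied: offer a bit more next time
        return portion + step
    return max(10.0, portion - step)  # food left over: offer a bit less, with a floor

for eaten in [50, 52, 40, 38, 45]:    # made-up daily observations
    portion_grams = update_portion(portion_grams, eaten)
    print(round(portion_grams, 1))
```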
Right, but you’re talking just about cat food, but that’s what children do, too. Children know nothing when they come into this world, and slowly they start learning more and more. They start reacting better, and start improving, and eventually start self-identifying, and eventually they become conscious. Eventually they develop awareness of the things not only within themselves, but around themselves, etc. And that’s my point, is that it is a similar process; I don’t have the exact mechanism to break down to you.
I see. So, let me ask you a different question. Nobody knows how the brain works, right? We don’t even know how thoughts are encoded. We just use this ubiquitous term, “brain activity,” but we don’t know how… You know, when I ask you, “What was the color of your first bicycle?” and you can answer that immediately, even though you’ve probably never thought about it, nor do you have some part of your brain where you store first bicycles or something like that.
So, given that we don't know that, and therefore don't really know how it is that we happen to be intelligent, on what basis do you say, "Oh, we're going to build a machine that can do something that we don't even know how we do," and even put a timeline on it, to say, "And it's going to happen in twelve years"?
So there are a number of ways to answer your question. One is, we don't necessarily need to know. We don't know how we create intelligence when we have babies, either, but we do it. How did it happen? It happened through evolution; so, likewise, we have what are called "evolutionary algorithms," which are basically algorithms that learn to learn. And the key point, as Dr. Stephen Wolfram showed years ago in his seminal work A New Kind of Science, is that from very simple things, very complex patterns can emerge. Look at our universe; it emerged from tiny little, very simple things.
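As a minimal illustration of the evolutionary-algorithm idea mentioned above, here is a hypothetical toy sketch that evolves a bit string toward an arbitrary target using only random mutation and selection of the fittest. The target, population size, and mutation rate are made up for illustration.

```python
import random

# Toy evolutionary algorithm: evolve a bit string toward a target using
# only random mutation and "survival of the fittest" selection.
random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(individual):
    """Count how many bits already match the target."""
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in individual]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    survivors = population[:10]                      # keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(generation, population[0])  # generations taken and the best individual found
```

No individual rule here "knows" anything about the target; order emerges from repeated variation and selection, which is the point being made about simple processes producing complex results.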
Actually I’m interviewing Lawrence Krauss next week, he says it emerged from nothing. So from nothing, you have the universe, which has everything, according to him at least. And we don’t know how we create intelligence in the baby’s case, we just do it. Just like you don’t know how you grow your nails, or you don’t know how you grow your hair, but you do it. So, likewise, just one of the many different paths that we can take to get to that level of intelligence is through evolutionary algorithms.
By the way, this is what’s sometimes referred to as the black box problem, and AlphaGo is a bit of an example of that. There are certain things we know, and there are certain things we don’t know that are happening. Just like when I interviewed David Ferrucci, who was the team leader behind Watson, we were talking about, “How does Watson get this answer right and that answer wrong?” His answer is, “I don’t really know, exactly.” Because there are so many complicated things coming together to produce an answer, that after a certain level of complexity, it becomes very tricky to follow the causal chain of events.
So yes, it is possible to develop intelligence, and the best example for that is us. Unless you believe in that sort of first-mover, God-is-the-creator kind of thing, that somebody created us—you can say that we kind of came out of nothing. We evolved to have both consciousness and intelligence.
So likewise, why not have the same process, only on a different stratum? Right now we're biologically based; basically it's DNA code replicating itself. We have A, C, T, and G. Alternatively, is it inconceivable that we could have this with a binary code? Or, even if not binary, some other kind of mathematical code, so you can have intelligence evolve—be it silicon-based, be it photon-based, be it organic-processor-based, be it quantum-computer-based… what have you. Right?
So are you saying that there could be no other stratum, and no other way, that could ever hold intelligence other than us? Then my question to you would be, well, what's the evidence for that claim? Because I would say that we have evidence that it's happened once. We could therefore presume that it's not necessarily limited to only once. We're not that special, you know. It could possibly happen again, and more than once.
Right, I mean it's certainly a tenable hypothesis. The Singularitarians, for the most part, don't treat it as a hypothesis; they treat it as a matter of faith.
That’s why I’m not such a good Singularitarian.
They say, "We have consciousness and a general intelligence ourselves. Therefore, we must be able to build one." You don't generally apply that logic to anything else in life, right? There is a solar system, therefore we must be able to build one. There is a third dimension, we must be able to build one.
With almost nothing else in life do you do that, and yet for people who talk about the singularity, and are willing to put a date on it, by the way, there's nothing up for debate. Even though how we achieved all the things that are required for it is completely unknown.
Let me give you Daniel Dennett's take on things, for example. He says that consciousness doesn't exist, that it is a self-delusion. He actually makes a very, very good argument about it. I've been trying to get him on my podcast for a while. But he says it's total self-fabrication, self-delusion. It doesn't exist. It's beside the point, right?
But he doesn’t deny that we’re intelligent though. He just says that what we call “consciousness” is just brain activity. But he doesn’t say, “Oh, we don’t really have a general intelligence, either.” Obviously, we’re intelligent.
Exactly. But that’s kind of what you’re trying to imply with the machines, because they will be intelligent in the sense that they will be able to problem-solve anything that we’re able to problem-solve, as we pointed out—whether it’s chess, whether it’s cat food, whether it’s playing or composing the tenth symphony. That’s the point.
Okay, well that’s at least unquestionably the theory.
Sure.
So let’s go from there. Talk to me about Transhumanism. You write a lot about that. What do you think we’ll be able to do? And if you’re willing to say, when do you think we’ll be able to do it? And, I mean, a man with a pacemaker is a Transhuman, right? He can’t live without it.
I would say all of us are already cyborgs, depending on your definition. If you say that the cyborg is an organism consisting of, let’s say, organic and inorganic parts working together in a single unit, then I would answer that if you have been vaccinated, you’re already a cyborg.
If you're wearing glasses, or contact lenses, you're already a cyborg. If you're wearing clothes and you can't survive without them, or shoes, you're already a cyborg, right? Because, let's say for me, I am severely short-sighted, like -7.25 or something crazy like that. I'm almost kind of blind without my contacts. Almost nobody knows that, unless people listen to these interviews, because I wear contacts, and for all intents and purposes I am as eye-capable as anybody else. But take off my contacts and I'll be blind. Therefore you have one single unit between me and that inorganic material, which basically I cannot survive without.
I mean, two hundred years ago, or five hundred years ago, I’d probably be dead by now, because I wouldn’t be able to get food. I wouldn’t be able to survive in the world with that kind of severe shortsightedness.
The same with vaccinations, by the way. We know that the vast majority of the population, at least in the developed world, has at least one, and in most cases a number of different vaccines—already by the time you’re two years old. Viruses, basically, are the carriers for the vaccines. And viruses straddle that line, that gray area between living and nonliving things—the hard-to-classify things. They become a part of you, basically. You carry those vaccine antibodies, in most cases, for the rest of your life. So I could say, according to that definition, we are all cyborgs already.
That’s splitting a hair in a very real sense though. It seems from your writing you think we’re going to be doing much more radical things than that; things which, as you said earlier, call into question whether or not we’re even human anymore. What are those things, and why does that affect our definition of “human”?
Let me give you another example. I don't know if you or your audience have seen in the news, a couple of months ago the Chinese tried to modify human embryos with CRISPR gene-editing technology. So think about the stage we are at right now, you know… It's been almost 40 years since we had the first in vitro babies. At the time, what in vitro basically meant was that you do the fertilization outside of the womb, in a petri dish or something like that. And then you watch the division process begin, and then you select—simply by visual inspection—what looks to be the best fertilized egg. And that's the egg that you would implant.
Today, we don't just observe; we actually can preselect. And not only that, we can actually go in and start changing things. So it's just like when you're first born: you start learning the alphabet, then you start reading full words, then you start reading full sentences, and then you start writing yourself.
We're doing exactly that with genetics currently. We were just starting to identify the letters of the alphabet thirty, forty, fifty years ago. Then we started reading slowly; we read the human genome about fifteen years ago. And now we're slowly starting to learn to write. And so the implication is this: how does what it means to be human change when you can change your sex, color, race, age, and physical attributes?
Because that’s the bottom line. When we can go and make changes at the DNA level of an organism, you can change all those parameters. It’s just like programming. In computer science it’s 0 and 1. In genetics it’s ATCG, four letters, but it’s the same principle. In one case, you’re programming a software program for a computer; in the other case, you’re programming living organisms.
But in that example, though, everybody—no matter what race you are—you’re still a human; no matter what gender you are, you’re still a human.
It depends how you qualify "human," right? Let's be more specific. Right now, when you say "humans," what you actually mean is Homo sapiens, right? But Homo sapiens has a number of very specific physical attributes. When you start changing the DNA structure, you can actually change those attributes to the point where the result doesn't carry them anymore. So are you then still Homo sapiens?
From a biological point of view, the answer will most likely depend on how far you've gone. There's no breakpoint, though, and different people will have a different red line to cross. For some, it's just a little bit. So let's say you and your wife or partner want to have a baby. And both of you happen to be carriers of a certain kind of genetic disease that you want to avoid. You want to make sure, before you conceive that baby, that the fertilized egg doesn't carry that genetic material.
And that’s all you care about, that’s fine. But someone else will say, that’s your red line, whereas my red line is that I want to give that baby the good looks of Brad Pitt, I want to give it the brain of Stephen Hawking, and I want to give it the strength of a weightlifter, for example. Each person who is making that choice would go for different things, and would have different attributes that they would choose to accept or not to accept.
Therefore, you would start having that diversification that I talked about in the beginning. And that's even before you start bringing in things like neural cognitive implants, etc.—which would basically be the merger between man and machine, right? Which means you have two parallel developments: on the one hand, biotech and genetics, our biological evolution and development, accelerated; and on the other hand, the merger of that with the acceleration, evolution, and improvement of computer technology and neurotech. When you put those two things together, you end up with a final entity which is nothing like what we are today, and it definitely would not fit the definition of being human.
Do you worry, at some level, that it’s taken us five thousand years of human civilization to come up with this idea that there are things called human rights? That there are these things you don’t do to a person no matter what. That you’re born with them, and because you are human, you have these rights.
Do you worry that, for better or worse, what you're talking about will erode that? That we will lose this sense of human rights, because we lose some demarcation of what a human is.
That's a very complicated question. I would suggest people read Yuval Harari's book Homo Deus on that topic, and the previous one, called Sapiens. Those two are probably the best books that I've read in the last ten years. But basically, the idea of human rights is an idea that was born just a couple hundred years ago. It came to exist with humanism, and especially liberal humanism. Right now, if you see how it's playing out, humanism is kind of taking over what religion used to do, in the sense that religion used to put God at the center of everything—and then, since we were his creation, everything else was created for us, to serve us.
For example the animal world, etc., and we used to have the Ptolemaic idea of the universe, where the earth was the center, and all of those things. Now, what humanism is doing is putting the human in the center of the universe, and saying humanity has this primacy above everything else, just because of our very nature. Just because you are human, you have human rights.
I would say that’s an interesting story, but if we care about that story we need to push it even further.
In our present context, how is that working out for everyone other than humanity? Well, the moment we created humanism and invented human rights, we basically made humanity divine. We took the divinity from God and gave it to humanity, but we downgraded everybody else. Back in the day—let's say in hunter-gatherer societies—we considered ourselves to be equal and on par with the animals.
Because you see, one day I would kill you and eat you, next day maybe a tiger would eat me. That's how the world was. But now, we've downgraded all the animals to machines—they don't have consciousness, they don't have any feelings, they lack self-awareness—and therefore we can enslave and kill them any way we wish and like.
So as a result, we pride ourselves on our human rights and things like that, and yet we enslave and kill seventy to seventy-five billion animals every year, and 1.3 trillion sea organisms like fish, annually. So the question then is, if we care so much about rights, why should they be limited only to humans? Are we saying that other living organisms are incapable of suffering? I'm a dog owner; I have a seventeen-and-a-half-year-old dog. She's on her last legs. She actually had a stroke last weekend.
I can tell you that she has taught me that she possesses pretty much the full spectrum of happiness and suffering that I do. Even things like jealousy, and so on, she has demonstrated to me multiple times, right? Yet, today we use that idea of humanism and human rights to defend ourselves and enslave everybody else.
I would suggest it's time to expand that and say, first, to our fellow animals, that we need to include them, that they have their own rights. And second, that rights should possibly not be limited to organic organisms, and should not be called human or animal rights, but should be called intelligence rights, or even go beyond intelligence—to any kind of organism that can exhibit things like suffering and happiness, pleasure and pain.
Because obviously, there is a different level of intelligence between me and my dog—we would hope—but she’s able to suffer as much as I am, and I’ve seen it. And that’s true especially more for whales and great apes and stuff like that, which we have brought to the brink of extinction right now. We want to be special, that’s what religion does to us. That’s what humanism did with human rights.
Religion taught us that we’re special because God created us in his own image. Then humanism said there is no God, we are the God, so we took the place of God—we took his throne and said, “We’re above everybody else.” That’s a good story, but it’s nothing more than a story. It’s a myth.
You’re a vegan, correct?
Yes.
How far down would you extend these rights? I mean, you have consciousness, and then below that you have sentience, which is of course a misused word. People use “sentience” to mean intelligence, but sentience is the ability to feel something. In your world, you would extend rights at some level all the way down to anything that can feel?
Yeah, and look: I've been a vegan for just over a year, a year and a couple of months, let's say fourteen months. So, just like any other human being, I have been, and still am, very imperfect. Now, I don't know exactly how far we should extend that, but I would say we should stop immediately at the level where we can easily observe that we're causing suffering.
If you go to a butcher shop, especially an industrialized farming butcher shop, where they kill something like ten thousand animals per day—it’s so mechanized, right? If you see that stuff in front of your eyes, it’s impossible not to admit that those animals are suffering, to me. So that’s at least the first step. I don’t know how far we should go, but we should start at the first steps, which are very visible.
What do you think about consciousness? Do you believe consciousness exists, unlike Dan Dennett, and if so where do you think it comes from?
Now you’re putting me on the spot. I have no idea where it comes from, first of all. You know, I am atheist, but if there’s one religion that I have very strong sympathies towards, that would be Buddhism. I particularly value the practice of meditation. So the question is, when I meditate—and it only happens rarely that I can get into some kind of deep meditation—is that consciousness mine, or am I part of it?
I don’t know. So I have no idea where it comes from. I think there is something like consciousness. I don’t know how it works, and I honestly don’t know if we’re part of it, or if it is a part of us.
Is it at least a tenable hypothesis that a machine would need to be conscious, to be an AGI?
I would say yes, of course, but then the next question, immediately, is how do we know if that machine has consciousness or not? That's what I'm struggling with, because one of the implications is that the moment you accept, or commit to, that kind of definition, that we're only going to have AGI if it has consciousness, then the question is, how do we know if and when it has consciousness? An AGI that's programmed to say, "I have consciousness," well, how do you know if it's telling the truth, and if it's really conscious or not? So that's what I'm struggling with, to be more precise in my answers.
And mind you, I have the luxury of being a philosopher, and that’s also kind of the negative too—I’m not an engineer, or a neuroscientist, so…
But you can say consciousness is required for an AGI, without having to worry about, well how do we measure it, or not.
Yes.
That’s a completely different thing. And if consciousness is required for an AGI, and we don’t know where human consciousness comes from, that at least should give us an enormous amount of pause when we start talking about the month and the day when we’re going to hit the singularity.
Right, and I agree with you entirely, which is why I’m not so crazy about the timelines, and I’m staying away from it. And I’m generally on the skeptical end of things. By the way, for the last seven years of my journey I have been becoming more and more skeptical. Because there are other reasons or ways that the singularity…
First of all, the future never unfolds the way we think it will, in my opinion. There’s always those black swan events that change everything. And there are issues when you extrapolate, which is why I always stay away from extrapolation. Let me give you two examples.
The easy example is what you could call negative extrapolation. We have people such as Lord Kelvin—he was the president of the Royal Society, one of the smartest people—who wrote a book in the 1890s about how heavier-than-air aircraft are impossible to build.
The great H.G. Wells wrote, as late as 1902, that heavier-than-air aircraft are totally impossible to build, and he’s a science fiction writer. And yet, a year later the Wright brothers, two bicycle makers, who probably never read Lord Kelvin’s book, and maybe didn’t even read any of H.G. Wells’ science fiction novels, proved them both wrong.
So people were extrapolating negatively from the past, saying, “Look, we’ve tried to fly since the time of Icarus, and the myth of Icarus is a warning to us all: we’re never going to be able to fly.” But we did fly. We didn’t fly for thousands of years, until one day we flew. That’s one kind of extrapolation that went wrong, and that’s the easy one to see.
The harder one is the opposite, which is called positive extrapolation. From 1903 to, let’s say, the late 1960s, we went from the Wright brothers to the moon. People—amazing people, like Arthur C. Clarke—said, well, if we made it from 1903 to the moon by the late 1960s, then by 2002 we will be beyond Mars; we will be outside of our solar system.
That’s positive extrapolation. Based on very good data for, let’s say, sixty-five years from 1903 to 1968—very good data—you saw tremendous progress in aerospace technology. We went to the moon several times, in fact, and so on and so on. So it was logical to extrapolate that we would be by Mars and beyond, today. But actually, the opposite happened. Not only did we not reach Mars by today, we are actually unable to get back to the moon, even. As Peter Thiel says in his book, we were promised flying cars and jetpacks, but all we got was 140 characters.
In other words, beware of extrapolations, because they’re true until they’re not true. You don’t know when they are going to stop being true, and that’s the nature of black swan sorts of things. That’s the nature of the future. To me, it’s inherently unknowable. It’s always good to have extrapolations, and to have ideas, and to have a diversity of scenarios, right?
That’s another thing which I agree with you on: Singularians tend to embrace a single view of the future, or a single path to the future. I have a problem with that myself. I think that there’s a cone of possible futures. There are certainly limitations, but there is a cone of possibilities, and we are aware of only a fraction of it. We can extrapolate only in a fraction of it, because we have unknown unknowns, and we have black swan phenomena, which can change everything dramatically. I’ve even listed three disaster scenarios—like asteroids, ecological collapse, or nuclear weapons—which can also change things dramatically. There are many things that we don’t know, that we can’t control, and that we’re not even aware of that can and probably will change the actual future from the future we think will happen today.
Last philosophical question, and then I’d like to chat about what you’re working on. Do you believe humans have free will?
Yes. So I am a philosopher, and again—just like with the future—there are limitations, right? So all the possible futures stem from the cone of future possibilities derived from our present. Likewise, our ability to choose, to make decisions, to take action has very strict limitations; yet, there is a realm of possibilities that’s entirely up to us. At least that’s what I’m inclined to think. Even though most scientists that I meet and interview on my podcast are actually, to one degree or another, determinists.
Would an AGI need to have free will in order to exist?
Yes, of course.
Where do you think human free will comes from? If every effect had a cause, and every decision had a cause—presumably in the brain—whether it’s electrical or chemical or what have you… Where do you think it comes from?
Yeah, it could come from quantum mechanics, for example.
That only gets you randomness. That doesn’t get you somehow escaping the laws of physics, does it?
Yes, but randomness can be sort of a living-cat and dead-cat outcome, at least metaphorically speaking. You don’t know which one it will be until that moment arrives. The other thing is, let’s say, you have fluid dynamics, and with the laws of physics, we can predict how a particular system of gas will behave within the laws of fluid dynamics. But it’s impossible to predict how a single molecule or atom will behave within that system. In other words, if the laws of the universe and the laws of physics set the realm of possibilities, then within that realm, you can still have free will. So, we are such tiny, minuscule little parts of the system, as individuals, that we are more akin to atoms, if not smaller particles than that.
Therefore, we can still be unpredictable.
Just like it’s unpredictable, by the way, with quantum mechanics, to say, “Where is the electron located?” and if you try to observe it, then you are already affecting the outcome. You’re predetermining it, actually, when you try to observe it, because you become a part of the system. But if you’re not observing it, you can create a realm of possibilities where it’s likely to be, but you don’t know exactly where it is. Within that realm, you get your free will.
Final question: Tell us what you’re working on, what’s exciting to you, what you’re reading about… I see you write a lot about movies. Are there any science fiction movies that you think are good ones to inform people on this topic? Just talk about that for a moment.
Right. So, let me answer backwards. In terms of movies—it’s been a while since I’ve watched it, but I actually even wrote a review of it—one of the movies that I really enjoyed watching is by the Wachowskis, and it’s called “Cloud Atlas.” I don’t think that movie was very successful at all, to be honest with you.
I’m not even sure if they managed to recover the money they invested in it, but in my opinion it was one of the top ten best movies I’ve ever seen in my life. Because it’s a sextet—it had six plots progressing in a parallel fashion: six things happening in six different locations, in six different epochs, in six different timelines, with tremendous actors. And it touched on a lot of those future technologies, and even the meaning of being human—what separates us from the others, and so on.
I would suggest people check out “Cloud Atlas.” One of my favorite movies. The previous question you asked was, what am I working on?
Mm-hmm.
Well, to be honest, I just finished my first book three months ago or something. I launched it on January 23rd I think. So I’ve been basically promoting my book, traveling, giving speeches, trying to raise awareness about the issues, and the fact that, in my view, we are very unprepared—as a civilization, as a society, as individuals, as businesses, and as governments.
We are going to witness a tremendous amount of change in the next several decades, and I think we’re grossly unprepared. And I think, depending on how we handle those changes, with genetics, with robotics, with nanotech, with artificial intelligence—even if we never reach the level of artificial general intelligence, by the way, that’s beside the point to me—just the changes we’re going to witness as a result of the biotech revolution can actually put our whole civilization at risk. They’re not only going to change the meaning of what it is to be human; they could put everything at risk. All of those things converging together, in the narrow span of several decades, basically, I think, create this crunch point, which could be what some people have called a “pre-singularity future,” which is one possible answer to the Fermi Paradox.
Enrico Fermi was this very famous Italian physicist who, a few decades ago, basically observed that there are two hundred billion galaxies just in the observable realm of the universe. And each of those two hundred billion galaxies has two hundred billion stars. In other words, there’s an almost endless number of exoplanets like ours—located in the Goldilocks zone, where it’s not too hot or too cold—which can potentially give birth to life. The question then is, if there are so many planets and so many stars and so many places where we can have life, where is everybody? Where are all the aliens? There’s a diversity of answers to that question. But at least one of those possible scenarios, to explain this paradox, is what’s referred to as the pre-singularity future. Which is to say, in each civilization, there comes a moment where its technological prowess surpasses its capacity to control it. Then, possibly, it self-destructs.
So in other words, what I’m saying is that it may be an occurrence which happens on a regular basis in the universe. It’s one way to explain the Fermi Paradox, and it’s possibly the moment that we’re approaching right now. So it may be a moment where we go extinct like dinosaurs; or, if we actually get it right—which right now, to be honest with you, I’m getting kind of concerned about—then we can actually populate the universe. We can spread throughout the universe, and as Konstantin Tsiolkovsky said, “Earth is the cradle of humanity, but sooner or later, we have to leave the cradle.” So, hopefully, in this century we’ll be able to leave the cradle.
But right now, we are not prepared—neither intellectually, nor technologically, nor philosophically, nor ethically, not in any way possible, I think. That’s why it’s so important to get it right.
The name of your book is?
Conversations with the Future: 21 Visions for the 21st Century.
All right, Nikola, it’s been fascinating. I’ve really enjoyed our conversation, and I thank you so much for taking the time.
My pleasure, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 20: A Conversation with Marie des Jardins

[voices_in_ai_byline]
In this episode, Byron and Marie talk about the Turing test, Watson, autonomous vehicles, and language processing.
[podcast_player name=”Episode 20: A Conversation with Marie des Jardins” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(01-03-03)-marie-de-jardin.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-2.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited that our guest is Marie des Jardins. She is an Associate Dean for Engineering and Information Technology as well as a professor of Computer Science at the University of Maryland, Baltimore County. She got her undergrad degree from Harvard, and a Ph.D. in computer science from Berkeley, and she’s been involved in the National Conference of the Association for the Advancement of Artificial Intelligence for over 12 years. Welcome to the show, Marie.
Marie des Jardins: Hi, it’s nice to be here.
I often open the show with “What is artificial intelligence?” because, interestingly, there’s no consensus definition of it, and I get a different kind of view of it from everybody. So I’ll start with that. What is artificial intelligence?
Sure. I’ve always thought about artificial intelligence as just a very broad term referring to trying to get computers to do things that we would consider intelligent if people did them. What’s interesting about that definition is it’s a moving target, because we change our opinions over time about what’s intelligent. As computers get better at doing things, they no longer seem that intelligent to us.
We use the word “intelligent,” too, and I’m not going to dwell on definitions, but what do you think intelligence is at its core?
So, it’s definitely hard to pin down, but I think of it as activities that human beings carry out, that we don’t know of lower order animals doing, other than some of the higher primates who can do things that seem intelligent to us. So intelligence involves intentionality, which means setting goals and making active plans to carry them out, and it involves learning over time and being able to react to situations differently based on experiences and knowledge that we’ve gained over time. The third part, I would argue, is that intelligence includes communication, so the ability to communicate with other beings, other intelligent agents, about your activities and goals.
Well, that’s really useful and specific. Let’s look at some of those things in detail a little bit. You mentioned intentionality. Do you think that intentionality is driven by consciousness? I mean, can you have intentionality without consciousness? Is consciousness therefore a requisite for intelligence?
I think that’s a really interesting question. I would decline to answer it mainly because I don’t think we ever can really know what consciousness is. We all have a sense of being conscious inside our own brains—at least I believe that. But of course, I’m only able to say anything meaningful about my own sense of consciousness. We just don’t have any way to measure consciousness or even really define what it is. So, there does seem to be this idea of self-awareness that we see in various kinds of animals—including humans—and that seems to be a precursor to what we call consciousness. But I think it’s awfully hard to define that term, and so I would be hesitant to put that as a prerequisite on intentionality.
Well, I think people agree what it is in a sense. Consciousness is the experience of things. It’s having a subjective experience of something. Isn’t the debate more like where does that come from? How does that arise? Why do we have it? But in terms of the simple definition, we do know that, don’t we?
Well, I don’t know. I mean, where does it come from, how does it arise, and do different people even have the same experience of consciousness as each other? I think when you start to dig down into it, we don’t have any way to tell whether another being is conscious or self-aware other than to ask them.
Let’s look at that for a minute, because self-awareness is a little different. Are you familiar with the mirror test that Professor Gallup does, where they take a sleeping animal, and paint a little red spot on its forehead, and then wait until it walks by a mirror, and if it stops and rubs its own forehead, then, according to the theory, it has a sense of self and therefore it is self-aware. And the only reason all of this matters is if you really want to build an intelligent machine, you have to start with what goes into that. So do you think that is a measure of self-awareness, and would a computer need to pass the mirror test, as it were?
That’s where I think we start to run into problems, right? Because it’s an interesting experiment, and it maybe tells us something about, let’s say, a type of self-awareness. If an animal’s blind, it can’t pass that test. So, passing the test therefore can’t be a prerequisite for intelligence.
Well, I guess the question would be if you had the cognitive ability and a fully functional set of senses that most of your species have, are you able to look at something else and determine that, “I am a ‘me’” and “That’s a reflection of me,” and “That actually is me, but I can touch my own forehead.”
I’m thinking, sorry. I’m being nonresponsive because I’m thinking about it, and I guess what I’m trying to say is that a test that’s designed for animals that have evolved in the wild is not necessarily a meaningful test for intelligent agents that we’ve engineered, because I could design a robot that can pass that test, that nobody would think was self-aware in any interesting and meaningful sense. In other words, for any given test you design, I can game and redesign my system to pass that test. But the problem is that the test measures something that we think is true in the wild, but as soon as we say, “This is the test,” we can build the thing that passes that test that doesn’t do what we meant for the agent to be able to do, to be self-aware.
Right. And it should be pointed out that there are those who look at the mirror test and say, “Well, if you put a spot on an animal’s hand, and just because they kind of wipe their hand…” That it’s really more a test of whether they have the mental capability to understand what a mirror does. And it has nothing to do with…
Right. Exactly. It’s measuring something about the mirror and so forth.
Let’s talk about another thing in your intelligence definition, because I’m fascinated by what you just kind of outlined. You said that some amount of communication, therefore some language, is necessary. So do you think—at least before we get to applying it to machines—that language is a requisite in the animal kingdom for intelligence?
Well, I don’t think it has to be language in the sense of the English language or our human natural language, but there are different ways to communicate. You can communicate through gestures. You can communicate through physical interaction. So it doesn’t necessarily have to be spoken language, but I do think the ability to convey information to another being that can then receive the information that was conveyed is part of what we mean by intelligence. Languages for artificial systems could be very limited and constrained, so I don’t think that we necessarily have to solve the natural language problem in order to develop what we would call intelligent systems. But I think when you talk about strong AI, which is referring to sort of human level intelligence, at that point, I don’t think you can really demonstrate human level intelligence without being able to communicate in some kind of natural language.
So, just to be clear, are you saying language indicates intelligence or language is required for intelligence?
Language is required for intelligence.
There are actually a number of examples in the plant kingdom where the plants are able to communicate signals to other plants. Would you say that qualifies? If you’re familiar with any of those examples, do those qualify as language in a meaningful sense, or is that just like, “Well, you can call it language if you’re trying to do clever thought riddles, but it’s not really a language.”
Yeah, I guess I’d say, as with most interesting things, there’s sort of a spectrum. But one of the characteristics of intelligent language, I think, is the ability to learn the language and to adapt the language to new situations. So, you know, ants communicate with each other by laying down pheromones, but ants can’t develop new ways to communicate with each other. If you put them into a new environment, they’re biologically hardwired to use that one form of communication.
There’s an interesting philosophical argument that the species is intelligent, or evolution is intelligent at some level. I think those are interesting philosophical discussions. I don’t know that they’re particularly helpful in understanding intelligence in individual beings.
Well, I definitely want to get to computers here in a minute and apply all of this as best we can, but… By our best guess, humans acquired speech a hundred thousand years ago, roughly the same time we got fire. The theory is that fire allowed us to cook food, which allowed us to break down the proteins in it and make it more digestible, and that that allowed us to increase our caloric consumption, and we went all in on the brain, and that gave us language. Would your statement that language is a requirement for intelligence imply that a hundred and one thousand years ago, we were not intelligent?
I would guess that human beings were communicating with each other a hundred and one thousand years ago and probably two hundred thousand years ago. And again, I think intelligence is a spectrum. I think chimpanzees are intelligent and dolphins are intelligent, at some level. I don’t know about pigs and dogs. I don’t have strong evidence.
Interestingly, of all things, dogs don’t pass the red paint mirror test. Yet they are the only animal on the whole face of the earth—and by all means, any listener out there who knows otherwise, please email me—that, if you point at an object, will look at the object.
Really?
Yeah, even chimpanzees don’t do it. So it’s thought that they co-evolved with us as we domesticated them. That was something we selected for, not overtly but passively, because that’s useful. It’s like, “Go get that thing,” and then the dog looks over there at it.
Right.
It’s funny, there’s an old Far Side cartoon—you can’t get those things out of your head—where the dolphins are in the tank, and they’re writing down all the dolphins’ noises, and they’re saying things like, “Se habla español,” and “Sprechen Sie Deutsch,” and the scientists are like, “Yeah, we can’t make any sense of it.”
So let’s get back to language, because I’m really fascinated by this and particularly the cognitive aspects of it. So, what do you think is meaningful, if anything, about the Turing test—which of course you know, but for the benefit of our listeners, is: Alan Turing put this out that if you’re on a computer terminal, and you’re chatting with somebody, typing, and you can’t tell if it’s a person or a machine, then you have to say that machine is intelligent.
Right, and of course, Alan Turing’s original version of that test was a little bit different and more gendered, if you’re familiar with it.
He based it on the gendered test, right. You’re entirely right. Yes.
There are a lot of objections to the Turing test. In fact, when I teach the Introductory AI class at UMBC, I have the students read some of Alan Turing’s work and then John Searle’s arguments against the Turing test.
Chinese Room, right?
The Chinese Room and so forth, and I have them talk about all of that. And, again, I think these are, sort of, interesting philosophical discussions that, luckily, we don’t actually need to resolve in order to keep making progress towards intelligence, because I don’t think this is one that will ever be resolved.
Here’s something I think is really interesting: when that test was proposed, and in the early years of AI, the way it was envisioned was based on the communication of the time. Today’s Turing tests are based in an environment in which we communicate very differently—we communicate very differently online than we do in person—than Alan Turing ever imagined we would. And so the kind of chat bots that do well at these Turing tests really probably wouldn’t have looked intelligent to an AI researcher in the 1960s, but I don’t think that most social media posts would have looked very intelligent, either. And so we’ve kind of adapted ourselves to this sort of cryptic, darting, illogical, jumping-around-in-different-topics way of conversing with each other online, where lapses in rationality and continuity are forgiven really easily. And when I see some of the transcripts of modern Turing tests, I think, well, this kind of reminds me a little bit of PARRY. I don’t know if you’re familiar with ELIZA and PARRY.
Weizenbaum’s 1960s Q&A, his kind of psychologist helper, right?
Right. So ELIZA was a pattern-recognition-based online psychologist that would use this, I guess, Freudian way of interrogating a patient, to ask them about their feelings and so forth. And when this was created, people were very taken in by it, because, you know, they would spill out their deepest, darkest secrets to what turned out to be, essentially, one of the earliest chat bots. There was a version of that that was created later. I can’t remember the researcher who created it, but it was studying paranoid schizophrenia and the speech patterns of paranoid schizophrenics, and that version of ELIZA was called PARRY.
If you read any transcripts from PARRY, it’s very disjointed, and it can get away with not having a deep semantic model, because if it doesn’t really understand anything, and if it can’t match anything, it just changes the topic. And that’s what the modern Turing tests look like to me, mostly. I think if we were going to really use the Turing test as some measure of intelligence, maybe we need to put some rules on critical thinking and rationality. What is it that we’re chatting about? And what is the nature of this communication with the agent in the black box? Because, right now, it’s just degenerated into, again, this kind of gaming the system. Well, let’s just see if we can trick a human into thinking that we’re a person, but we get to take advantage of the fact that online communication is this kind of dance that we play that’s not necessarily logical and rational and rule-following.
I want to come back to that, because I want to go down that path with you, but beforehand, it should be pointed out, and correct me if I’m wrong because you know this a lot better than I do, that the people who interacted with ELIZA all knew it was a computer and that there was “nobody at home.” And that, in the end, is what freaked Weizenbaum out, and had him turn on artificial intelligence, because I think he said something to the effect that when the computer says, “I understand,” it’s a lie. It’s a lie because there is no “I,” and there’s nothing to understand. Was that the same case with PARRY, that they knew full well they were talking to a machine, but they still engaged with it as if it was another person?
Well, that was being used to try to model the behavior of a paranoid schizophrenic, and so my understanding is that they ran some experiments where they had psychologists, in a blind setting, interact with an actual paranoid schizophrenic or this model, and do a Turing test to try to determine whether this was a convincing model of paranoid schizophrenic interaction style. I think it was a scientific experiment that was being run.
So, you used the phrase, when you were talking about PARRY just now, “It doesn’t understand anything.” That’s obviously Searle’s whole question with the Chinese Room, that the non-Chinese speaker who can use these books to answer questions in Chinese doesn’t understand anything. Do you think even today a computer understands anything, and will a computer ever understand anything?
That’s an interesting question. So when we talk about this with my class, with my students, I use the analogy of learning a new language. I don’t know if you speak any foreign languages to any degree of fluency.
I’m still working on English.
Right. So, I speak a little bit of French and a little bit of German and a little bit of Italian, so I’m very conscious of the language learning process. When I was first learning Italian, anything I said in Italian was laboriously translated in my mind by essentially looking up rules. I don’t remember any Italian, so I can’t use Italian as an example anymore. I want to say, “I am twenty years old” in French, and so in order to do that, I don’t just say, “J’ai vingt ans”; I say to myself, “How do I say, ‘I am twenty years old’? Oh, I remember, they don’t say, ‘I am twenty years old.’ They say, ‘I have twenty years.’ OK. ‘I have’ is ‘J’ai,’ ‘twenty’ is ‘vingt’…” And I’m doing this kind of pattern-based lookup in my mind. But doing that inside my head, I can communicate a little bit in French. So do I understand French?
Well, the answer to that question would be “no,” but what you understand is that process you just talked about, “OK, I need to deconstruct the sentence. I need to figure out what the subject is. I need to line that up with the verb.” So yes, you have a higher order understanding that allows you to do that. You understand what you’re doing, unquestionably.
Right.
And so the question is, at that meta-meta-meta-meta-meta level, will a computer ever understand what it’s doing?
And I think this actually kind of gets back to the question of consciousness. Is understanding—in the sense that Searle wants it to be, or Weizenbaum wanted it to be—tied up in our self-awareness of the processes that we’re carrying out, to reason about things in the world?
So, I only have one more Turing test question to ask, then I would love to change the subject to the state of the art today, and then I would love to talk about when you think we’re going to have certain advances, and then maybe we can talk about the impact of all this technology on jobs. So, with that looking forward, one last question, which is: when you were talking about maybe rethinking the Turing test, that we would have a different standard, maybe, today than Turing did. And by the way, the contests that they have where they really are trying to pass it, they are highly restricted and constrained, I think. Is that the case?
I am not that familiar with them, although I did read The Most Human Human, which is a very interesting book if you are looking for some light summer reading.
All right.
Are you familiar with the book? It’s by somebody who served as a human in the Loebner Prize Turing test, and sort of his experience of what it’s like to be the human.
No. I don’t know that. That’s funny. So, the interesting thing was that—and anybody who’s heard the show before will know I use this example—I always start everyone with the same question. I always ask the same question to every system, and nobody ever gets it right, even close. And because of that, I know within three seconds that I’m not talking to a human. And the question is: “What’s larger? The sun or a nickel?” And no matter how “schizophrenic” or “disjointed” the person is, to use your phrases, or what have you, they answer, “The sun,” or “Duh,” or “Hmm.” But no machine can.
So, two questions: Is that question indicative of the state of the art, that we really are like in stone knives and bear skins with natural language? And second, do you think that we’re going to make strides forward that maybe someday you’ll have to wonder if I’m actually not a sophisticated artificial intelligence chatting with you or not?
Actually, I guess I’m surprised to hear you say that computers can’t answer that question, because I would think Watson, or a system like that, that has a big backend knowledge base that it’s drawing on would pretty easily be able to find that. I can Google “How big is the sun?” and “How big is a nickel?” and apply a pretty simple rule.
Well, you’re right. In all fairness, there’s not a global chat bot of Watson that I have found. I mean, the trick is that nickel is both a metal and a coin, and “sun” is a homophone that could be a person’s “son.” But a person, a human, makes that connection. These are both round, and so they kind of look alike, and whatnot. When I say it, what I mean is you go to Cleverbot, or you go to the different chat bots that are entered in the Turing competitions and whatnot. You ask Google, you type that into Google, you don’t get the answer. So, you’re right, there are probably systems that can nail it. I just never bump into them.
And, you know, there’s probably context that you could provide in which the answer to that question would be the nickel. Right? So like I’ve got a drawing that we’ve just been talking about, and it’s got the sun in it, and it has a nickel in it, and the nickel is really big in the picture, and the sun is really small because it’s far away. And I say, “Which is bigger?” There might actually be a context in which the obvious answer isn’t actually the right answer, and I think that kind of trickiness is what makes people, you know, that’s the signal of intelligence, that we can kind of contextualize our reasoning. I think the question as a basic question, it’s such a factual question, that that’s the kind of thing that I think computers are actually really good at. What do you love more: A rose or a daisy? That’s a harder question.
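A minimal sketch of the simple lookup-and-compare rule Marie describes for the sun-versus-nickel question might look like the following; the hard-coded size table and the function name are illustrative assumptions, not part of Watson, Google, or any system mentioned in this conversation.

```python
# Toy illustration of "look up two sizes and apply a pretty simple rule."
# The values are approximate diameters in meters; a real system would pull
# them from a knowledge base or a search backend rather than a dictionary.
SIZES_IN_METERS = {
    "the sun": 1.39e9,   # roughly 1.39 million km across
    "a nickel": 0.0212,  # roughly 21.2 mm across
}

def which_is_larger(a: str, b: str) -> str:
    """Return whichever named object has the larger looked-up size."""
    return a if SIZES_IN_METERS[a] >= SIZES_IN_METERS[b] else b

print(which_is_larger("the sun", "a nickel"))  # prints "the sun"
```

As Marie points out, context can flip the obvious answer, which is exactly what a bare lookup rule like this cannot handle.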
Right.
You know, or what’s your mother’s favorite flower? Now there’s a tricky question.
Right. I have a book coming out on this topic at the end of the year, and I try to think up the hardest question, like what’s the last one. I’m sure listeners will have better ideas than I have. But one I came up with was: Dr. Smith is eating at her favorite restaurant when she receives a phone call. She rushes out, neglecting to pay her bill. Is management likely to prosecute? So we need to know: She’s probably a medical doctor. She probably got an emergency call. It’s her favorite restaurant, so she’s probably known there. She dashes out. Are they really going to go to all the effort to prosecute, not just get her to pay next time she’s in and whatnot? That is the kind of thing that has so many layers of experience that it would be hard for a machine to do.
Yeah, but I would argue that I think, eventually, we will have intelligent agents that are embedded in the world and interact with people and build up knowledge bases of that kind of common sense knowledge, and could answer that question. Or a similar type of question that was posed based on experience in the world and knowledge of interpersonal interactions. To me, that’s kind of the exciting future of AI. Being able to look up facts really fast, like Watson… Watson was exciting because it won Jeopardy, but let’s face it: looking up a lot of facts and being able to click on a buzzer really fast are not really the things that are the most exciting about the idea of an intelligent, human-like agent. They’re awfully cool, don’t get me wrong.
I think when we talk about commercial potential and replacing jobs, which you mentioned, I think those kinds of abilities to retrieve information really quickly, in a flexible way, that is something that can really lead to systems that are incredibly useful for human beings. Whether they are “strong AI” or not doesn’t matter. The philosophical stuff is fun to talk about, but there’s this other kind of practical, “What are we really going to build and what are we going to do with it?”
Right.
And it doesn’t require answering those questions.
Fair enough. In closing on all of that other part, I heard Ken Jennings speak at South by Southwest about it, and I will preface this by saying he’s incredibly gracious. He doesn’t say, “Well, it was rigged.” He did describe, though, that the buzzer situation was different, because that’s the one part that’s really hard to map. Because the buzzer’s the trick on Jeopardy, not the answers.
That’s right.
And that was all changed up a bit.
Ken is clearly the best human at the buzzer. He’s super smart, and he knows a ton of stuff, don’t get me wrong, I couldn’t win on Jeopardy. But I think it’s that buzzer that’s the difference. And so I think it would be really interesting to have a sort of Jeopardy contest in which the buzzer doesn’t matter, right? So, you just buzz in, and there’s some reasonable window in which to buzz in, and then it’s random who gets to answer the question, or maybe everybody gets to answer the question independently. A Jeopardy-like thing where that timed buzzing in isn’t part of it; it’s really the knowledge that’s the key. I suspect Watson would still do pretty well, and Ken would still do pretty well, but I’m not sure who would win in that case. It would depend a lot on the questions, I think.
So, you gave us a great segue just a minute ago when you said, “Is all of this talk about consciousness and awareness and self and Turing test and all that—does it matter?” And it sounded like you were saying, whether it does or doesn’t, there is plenty of exciting things that are coming down the pipe. So let’s talk about that. I would love to hear your thoughts on the state of the art. AI’s passed a bunch of milestones, like you said, there was chess, then Jeopardy, then AlphaGo, and then recently poker. What are some things, you think—without going to AGI which we’ll get to in a minute—we should look for? What’s the state of the art, and what are some things you think we’re going to see in a year, or two years, three years, that will dominate the headlines?
I think the most obvious thing is self-driving cars and autonomous vehicles, right? Which we already have out there on the roads doing a great deal. I drive a Volvo that can do lane following and can pretty much drive itself in many conditions. And that is really cool and really exciting. Is it intelligence? Well, no, not by the definitions we’ve just been talking about, but the technology to be able to do all of that very much came out of AI research and research directions.
But I guess there won’t be a watershed with that, like, in the way that one day we woke up and Lee Sedol had lost. I mean, won’t it be that in three years, the number one Justin Bieber song will have been written by an AI or something like that, where it’s like, “Wow, something just happened”?
Yeah, I guess I think it’s a little bit more like cell phones. Right? I mean, what was the moment for cell phones? I’m not sure there was one single moment.
Fair enough. That’s right.
It’s more of like a tipping point, and you can look back at it and say, “Oh, there’s this inflection point.” And I don’t know what it was for cell phones. I expect there was an inflection point when either cell phone technology became cheap enough, or cell tower coverage became prevalent enough that it made sense for people to have cell phones and start using them. And when that happened, it did happen very fast. I think it will be the same with self-driving cars.
It was very fast that cars started coming out with adaptive cruise control. We’ve had cruise control for a long time, where your car just keeps going at the same speed forever. But adaptive cruise control, where your car detects when there’s something in front of it and slows down or speeds up based on the conditions of the road, that happened really fast. It just came out and now lots of cars have that, and people are kind of used to it. GPS technology—I was just driving along the other day, and I was like, “Oh yeah, I’ve got a map in my car all the time.” And anytime I want to, I can say, “Hey, I’d like to go to this place,” and it will show me how to get to that place. We didn’t have that, and then within a pretty short span of time, we have that, and that’s an AI derivative also.
Right. I think that those are all incredibly good points. I would say with cell phones—I can remember in the mid ‘90s, the RAZR coming out, which was smaller, and it was like, “Wow.” You didn’t know you had it in your pocket. And then, of course, the iPhone was kind of a watershed thing.
Right. A smartphone.
Right. But you’re right, it’s a form of gradualism punctuated by a series of step functions up.
Definitely. Self-driving car technology, in particular, is like that, because it’s really a big ask to expect people to trust self-driving cars on the road. So there’s this process by which that will happen and is already happening, where individual bits of autonomous technology are being incorporated into human-driven cars. And meanwhile, there’s a lot of experimentation with self-driving cars under relatively controlled conditions. And at some point, there will be a tipping point, and I will buy a car, and I will be sitting in my car and it will take me to New York, and I won’t have to be in control.
Of course, one impediment to that is that whole thing where a vast majority of the people believe the statistical impossibility that they are above-average drivers.
That’s right.
I, on the other hand, believe I’m a below-average driver. So I’m going to be the first person—I’m a menace on the road. You want me off as soon as you can, and the technology probably is already good enough for that. I know prognostication is hard, and I guess cars are different, because I can’t get a free self-driving car with a two-year contract at $39.95 a month, right? So it’s a big capital shift, but do you have a sense—because I’m sure you’re up on all of this—of when you think the first fully autonomous car will happen? And then the most interesting thing: when will it be illegal not to drive a fully autonomous car?
I’m not quite sure how it will roll out. It may be that it’s in particular locations or particular regions first, but I think that ordinary people will be able to drive a self-driving car within, I would say, ten years.
I noticed you slipped that, “I don’t know when it’s going to roll out” pun in there.
Pun not intended. You see, if my AI could recognize that as a pun… Humor is another thing that intelligent agents are not very good at, and I think that’ll be a long time coming.
Right. So you have just confirmed that I’m a human.
So, next question, you’ve mentioned strong AI, also called an artificial general intelligence, that is an intelligence as smart as a human. So, back to your earlier question of does it matter, we’re going to be able to do things like self-driving cars and all this really cool stuff, without answering these philosophical questions; but I think the big question is can we make an AGI? 
Because if you look at what humans are good at doing, we’re good at transfer learning, where we learn something in one domain and map it to another one effortlessly. We are really good at taking one data point, like, you could show a human one data point of something, and then a hundred photos, and no matter how you change the lighting or the angle, a person will go, “There, there, there, and there.” So, do you think that an AGI is the sum total of a series of weak AIs bolted together? Or is there some, I’m going to use a loaded word, “magic,” and obviously I don’t mean magic, but is there some hitherto unknown magic that we’re going to need to discover or invent?
I think hitherto unknown magic, you know, using the word “magic” cautiously. I think there are individual technologies that are really exciting and are letting us do a lot of things. So right now, deep learning is the big buzz word, and it is kind of cool. We’ve taken old neural net technology, and we’ve updated it with qualitatively different ways of thinking about, essentially, neural network learning that we couldn’t really think about before, because we didn’t have the hardware to be able to do it at the scale or with the kind of complexity that deep learning networks have now. So, deep learning is exciting. But deep learning, I think, is just fundamentally not suited to do this single point generalization that you’re talking about.
Big data is a buzz word, but, personally, I’ve always been more interested in tiny data. Or maybe it’s big data in the service of tiny data: I experience lots and lots and lots of things, and by having all of that background knowledge at my disposal, I can do one-shot learning, because I can take that single instance and interpret it and understand what is relevant about that one single instance that I need to use to generalize to the next thing. One-shot learning works because we have vast experience, but that doesn’t mean that throwing vast experience at that one thing is, by itself, going to let us generalize from that single thing. I think we still really haven’t developed the cognitive reasoning frameworks that will let us take the power of deep learning and big data, and apply it in these new contexts in creative ways, using different levels of reasoning and abstraction. But I think that’s where we’re headed, and I think a lot of people are thinking about that.
So I’m very hopeful that the broad AI community, in its lushest, many-flowers-blooming way of exploring different approaches, is developing a lot of ideas that eventually are going to come together into a big intelligent reasoning framework, that will let us take all of the different kinds of technologies that we’ve built for special purpose algorithms, and put them together—not just bolt it together, but really integrate it into a more coherent, broad framework for AGI.
If you look at the human genome, it’s, in computer terms, 720MB, give or take. But a vast amount of that is useless, and then a vast amount of that we share with banana trees. And if you look at the part that’s uniquely human, which gives us our unique intelligence, it may be 4MB or 8MB; it’s really a small number. Yet in that little program are the instructions to make something that becomes an AGI. So do you take that to mean that there’s a secret, a trick—and again, I’m using words that I mean metaphorically—there’s something very simple we’re missing. Something you could write in a few lines of code. Maybe a short program that could make something that’s an AGI?
Yeah, we had a few hundred million years to evolve that. So, the length of something doesn’t necessarily mean that it’s simple. And I think I don’t know enough about genomics to talk really intelligently about this, but I do think that 4MB to 8MB that’s uniquely human interacts with everything else, with the rest of the genome, possibly with the parts that we think don’t do anything. Because there were parts of the genome that we thought didn’t do anything, but it turns out some of it does do something. It’s the dark matter of the genome. Just because we don’t know what it’s doing, I don’t know that that means that it’s not doing anything.
Well, that’s a really interesting point—the 4MB to 8MB may be highly compressed, to use the computer metaphor, and it may be decompressing to something that’s using all the rest. But let’s even say it takes 720MB, you’re still talking about something that will fit on an old CD-ROM, something smaller than most operating systems today.  
And I one hundred percent hear what you’re saying, which is nature has had a hundred million years to compress that, to make that really tight code. But, I guess the larger question I’m trying to ask is, do you think that an AGI may… The hope in AI had always been that, just like in the physical universe, there are just a few laws that explain everything. Or is it that it’s like, no, we’re incredibly complicated, and it’s going to be this immense system that becomes a general intelligence, and it’s going to be of a complexity we can’t wrap our heads around yet?
Gosh, I don’t know. I feel like I just can’t prognosticate that. I think if and when we have an AGI that we really think is intelligent, it probably will have an awful lot of components. The core that drives all of it may be, relatively speaking, fairly simple. But, if you think about how human intelligence works, we have lots and lots of modules. Right?
There’s this sort of core mechanism by which the brain processes information, that plays out in a lot of different ways, in different parts of the brain. We have the motor cortex, and we have the language cortex, and they’re all specialized. We have these specialized regions and specialized abilities. But they all use a common substrate or mechanism. And so when I think of the ultimate AI, I think of there being some sort of architecture that binds together a lot of different components that are doing different things. And it’s that architecture, that glue, that we haven’t really figured out how to think about yet.
There are cognitive architectures. There are people who work on designing cognitive architectures, and I think those are the precursors of what will ultimately become the architecture for intelligence. But I’m not sure we’re really working on that hard enough, or that we’ve made enough progress on that part of it. And it may be that the way that we get artificial intelligence ultimately is by building a really, really, really big deep learning neural network, which I would find maybe a little bit disappointing, because I feel like if that’s how we get there, we’re not really going to know what’s going on inside of it. Part of what brought me into the field of AI was really an interest in cognitive psychology, and trying to understand how the human brain works. So, maybe we can create another human-like intelligence by just kind of replicating the human brain. But I, personally, just from my own research perspective, wouldn’t find that especially satisfying, because it’s really hard to understand what’s going on in the human brain. And it’s hard to understand what’s going on even in any single deep learning network that can do visual processing or anything like that.
I think that in order for us to really adopt these intelligence systems and embrace them and trust them and be willing to use them, we’ll have to find ways for them to be more explainable and more understandable to human beings. Even if we go about replicating human intelligence in that way, I still think we need to be thinking about understandability and how it really works and how we extract meaning.
That’s really fascinating. So you’re saying if we made this big system that was huge and studied data, it’s kind of just brute force; there’s nothing elegant about that. It doesn’t tell us anything about ourselves.
Yeah.
So my last theoretical question, and then I’d love to talk about jobs. You said at the very beginning that consciousness may be beyond our grasp, that somehow we’re too close to it, or it may be something we can’t agree on, we can’t measure, we can’t tell in others, and all of that. Is it possible that the same is true of a general intelligence? That in the end, this hope of yours that you said brought you into the field, that it’s going to give us deep insights into ourselves, actually isn’t possible?
Well, I mean, maybe. I don’t know. I think that we’ve already gained a lot of insight into ourselves, and because we’re humans, we’re curious. So if we build intelligent agents without fully understanding how they work or what they do, then maybe we’ll work side by side with them to understand each other. I don’t think we’re ever going to stop asking those questions, whether we get to some level of intelligent agents before then or after then. Questions about the universe are always going to be with us.
Onto the question that most people in their day-to-day lives worry about. They don’t worry as much about killer robots as they do about job-killing robots. What do you think will be the effect? So, you know the setup. You know both sides of this. Is artificial intelligence something brand new that replaces people, and it’s going to get to this critical velocity where it can learn things faster than us and eventually just surpass us in all fields? Or is it like other disruptive technologies—arguably as disruptive as the mechanization of industry, the harnessing of steam power, or electricity—that came and went and never, ever budged unemployment even one iota, because people learned, almost instantly, how to use these new technologies to increase their own productivity? Which of those two, or a third choice, do you think is most likely?
I’m not a believer in the singularity. I don’t see that happening—that these intelligent agents are going to surpass us and make us completely superfluous, or let us upload our brains into cyberspace or turn us into The Matrix. It could happen. I don’t rule it out, but that’s not what I think is most likely. What I really think is that this is like other technologies. It’s like the invention of the car or the television or the assembly line. If we use it correctly, it enhances human productivity, and it lets us create value at less human cost.
The question is not a scientific question or a technological question. The question is really a political question of how are we, as a society, going to decide to use that extra productivity? And unfortunately, in the past, we’ve often allowed that extra productivity to be channeled into the hands of a very few people, so that we just increased wealth disparity, and the people at the bottom of the economic pile have their jobs taken away. So they’re out of work, but more importantly, the benefit that’s being created by these new technologies isn’t benefiting them. And I think that we can choose to think differently about how we distribute the value that we get out of these new technologies.
The other thing is I think that as you automate various kinds of activities, the economy transforms itself. And we don’t know exactly how that is going to happen, and it would have been hard to predict before any historical technological disruption, right? You invent cars. Well, what happens to all the people who took care of the horses before? Something happened to them. That’s a big industry that’s gone. When we automate truck driving, this is going to be extremely disruptive, because truck driver is one of the most common jobs, in most of our country at least. So, what happens to the people who were truck drivers? It turns out that you’re automating some parts of that job, but not all of it. Because a truck driver doesn’t just sit at the wheel of a car and drive it down the road. The truck driver also loads and offloads and interacts with people at either end. So, maybe the truck driver job becomes more of a sales job, you know, there’s fewer of them, but they’re doing different things. Or maybe it’s supplanted by different kinds of service roles.
I think we’re becoming more and more of a service economy, and that’s partly because of automation. We always need more productivity. There’s always things that human society wants. And if we get some of those things with less human effort, that should let us create more of other things. I think we could use this productivity and support more art. That would be an amazing, transformational, twenty-first century kind of thing to do. I look at our current politics and our current society, and I’m not sure that enough people are thinking that way, that we can think about how to use these wonderful technologies to benefit everybody. I’m not sure that’s where we’re headed right now.
Let’s look at that. So there’s a wide range of options, and everybody’s going to be familiar with them all. On the one hand, you could say, you know, Facebook and Google made twelve billionaires between them. Why don’t we just take their money and give it to other people? All the way to the other extreme that says, look, all those truck drivers, or their corollaries, in the past, nobody in a top-down, heavy handed way reassigned them to different jobs. What happened was the market did a really good job of allocating technology, creating jobs, and recruiting them. So those would be two incredibly extreme positions. And then there’s this whole road in between where you’d say, well, we need more education. We need to help make it easier for people to become productive again. Where on that spectrum do you land? What do you think? What specific meat would you put on those bones?
I think taxes are not an inherently bad thing. Taxes are how we run our society, and our society is what protects people and enables people to invent things like Google. If we didn’t have taxes, and we didn’t have any government services, it would be extremely difficult for human society to invent things like Google, because to invent things like that requires collaboration, it requires infrastructure; it requires the support of people around you to make that happen. You couldn’t have Google if you didn’t have the Internet. And the Internet exists because the government invested in the Internet, and the government could invest in the Internet because we pay taxes to the government to create collective infrastructure. I think there’s always going to be a tension between how high should taxes be and how much should you tax the wealthy—how regressive, how progressive? Estate taxes; should you be able to build up a dynasty and pass along all of your wealth to your children? I have opinions about some of that, but there’s no right answer. It changes over time. But I do think that the reason that we come together as human beings to create governments and create societies is because we want to have some ability to have a protected place where we can pursue our individual goals. I want to be able to drive to and from my job on roads that are good, and have this interview with you through an Internet connection that’s maintained, and not to have marauding hordes steal my car while I’m in here. You know, we want safety and security and shared infrastructure. And I think the technology that we’re creating should let us do a better job at having that shared infrastructure and basic ability for people to live happy and productive lives.
So I don’t think that just taking money from rich people and giving it to poor people is the right way to do that, but I do think investing in a better society makes a lot of sense. We have horribly decaying infrastructure in much of the country. So, doesn’t it make sense to take some of the capital that’s created by technology advances and use it to improve the infrastructure in the country and improve health care for people?
Right. And of course the countervailing factor is doing all of the above without diminishing people's incentives to work hard and found companies like the ones they created; that's the historical tension. Well, I would like to close with one question for you, which is: are you optimistic about the future, or pessimistic, or how would you answer that?
I’m incredibly optimistic. I mean, you know, I’m pessimistic about individual things on individual days, but I think, collectively, we have made incredible strides in technology, and in making people’s quality of life better.
I think we could do a better job. There’s places where people don’t have the education or don’t have the infrastructure or don’t have access to jobs or technology. I think we have real issues with diversity in technology, both in creating technology and in benefiting from technology. I’m very, very concerned about the continuing under-representation of women and minority groups in computing and technology. And the reason for that is partly because I think it’s just socially unjust to not have everybody equally benefiting from good jobs, from the benefits of technology. But it’s also because the technology solutions that we create are influenced by the people who are creating them. When we have a very limited subset of the population creating technology, there’s a lot of evidence that shows that the technology is not as robust, and doesn’t serve as broad a population of users as technology that’s created by diverse teams of engineers. I’d love to see more women coming into computer science. I’d love to see more African Americans and Hispanics coming into computer science. That’s something I work on a lot. It’s something I think matters a lot to our future. But, I think we’re doing the right things in those areas, and people care about these things, and we’re pushing forward.
There’s a lot of really exciting stuff happening in the AI world right now, and it’s a great time to be an AI scientist because people talk about AI. I walk down the street, or I sit at Panera, and I hear people talking about the latest AI solution for this thing or that—it’s become a common term. Sometimes, I think it’s a little overused, because we sort of use it for anything that seems kind of cool, but that’s OK. I think we can use AI for anything that seems pretty cool, and I don’t think that hurts anything.
All right. Well, that’s a great place to end it. I want to thank you so much for covering this incredibly wide range of topics. This was great fun and very informative. Thank you for your time.
Yeah, thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 19: A Conversation with Manoj Saxena

[voices_in_ai_byline]
In this episode, Byron and Manoj discuss cognitive computing, consciousness, data, DARPA, explainability, and superconvergence.
[podcast_player name=”Episode 19: A Conversation with Manoj Saxena” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(01-02-35)-manoj-saxena.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-1.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. Today my guest is Manoj Saxena. He is the Executive Chairman of CognitiveScale. Before that, he was the General Manager of IBM Watson, the first General Manager, in fact. He’s also a successful entrepreneur who founded and sold two venture-backed companies within five years. He’s the Founding Managing Director of the Entrepreneur’s Fund IV, a 100-million-dollar seed fund focused exclusively on cognitive computing. He holds an MBA from Michigan State University and a Master’s in Management Sciences from the Birla Institute of Technology and Science in Pilani, India. Welcome to the show, Manoj.
Manoj Saxena: Thank you.
You’re well-known for eschewing the term “artificial intelligence” in favor of “cognitive computing”; even your bio says cognitive computing. Why is that?
AI, to me, is the science of making intelligent systems and intelligent machines. Most of AI is focused on replacing the human mind and creating systems that do the jobs of human beings. I think the biggest opportunity, and it has been proven out in multiple research reports, is augmenting human beings. So, AI for me is not artificial intelligence; AI for me is augmented intelligence. It's how you could use machines to augment and extend the capabilities of human beings. And cognitive computing uses artificial intelligence technologies and others to pair man and machine in a way that augments human decision-making and augments human experience.
I look at cognitive computing as the application of artificial intelligence and other technologies to create—I call it the Iron Man J.A.R.V.I.S. suit, that makes every human being a superhuman being. That’s what cognitive computing is, and that was frankly the category that we started off when I was running IBM Watson as, what we believed, was the next big thing to happen in IT and in enterprise.
When AI was first conceived, and they met at Dartmouth and all that, they thought they could kind of knock it out in the summer. And I think the thesis was, as Minsky later said, that just like physics had just a few laws, and electricity had just a few laws, they thought there were just a couple of laws. And then AI had a few false starts, expert systems and so forth, but, right now, there's an enormous amount of optimism about it, about what we're going to be able to do. What's changed in the last, say, decade?
I think a couple of dimensions in that, one is, when AI initially got going the whole intention was, “AI to model the world.” Then it shifted to, “AI to model the human mind.” And now, where I believe the most potential is, is, “AI to model human and business experiences.” Because each of those are gigantic. The first ones, “AI to model the world” and “AI to model the mind,” are massive exercises. In many cases, we don’t even know how the mind works, so how do you model something that you don’t understand? The world is too complex and too dynamic to be able to model something that large.
I believe the more pragmatic way is to use AI to model micro-experiences, whether it's an Uber app or a Waze, or to model a business process, whether it's a claim settlement, or underwriting, or management of diabetes. I think that's where the third age of AI will be focused: not on modeling the world or modeling the mind, but on modeling the human experience and a business process.
So is that saying we’ve lowered our expectations of it?
I think we have specialized in it. If you look at the human mind, again, you don’t go from being a child to a genius overnight. Let alone a genius that understands all sciences and all languages and all countries. I think we have gotten more pragmatic and more outcome driven, rather than more research and science-driven on how and where to apply AI.
I notice you’ve twice used the word “the mind,” and not “the brain.” Is that deliberate, and if so, where do you think “the mind” comes from?
I think there is a lot of hype, and there is a lot of misperception about AI right now. I like saying that, “AI today is both: AI equals ‘artificially inflated,’ and AI equals ‘amazing innovations.’” And I think in the realm of “AI equals artificially inflated,” there are five myths. One of the first myths is that AI equals replacement of the human mind. And I separate the human brain from the human mind, and from human consciousness. So, at best, what we’re trying to do is emulate functions of a human brain in certain parts of AI, let alone human mind or human consciousness.
We talked about this last time, we don’t even know what consciousness is, other than a doctor saying whether the patient is dead or alive. There is no consciousness detector. And a human mind, there is a saying that you probably need a quantum machine to really figure out how a human mind works—it’s not a Boolean machine or von Neumann machine; it’s a different kind of a processor. But a human brain, I think, can be broken down and can be augmented through AI to create exceptional outcomes. And we’ve seen that happen in radiology, at Wall Street, the quants, and other areas. I think that’s much more exciting, to apply AI pragmatically into these niches.
You know, it’s really interesting because there’s been a twenty-year effort called OpenWorm Project, to take the nematode worm’s brain, which is 302 neurons, and to model it. And even after twenty years, people in the project say it may not be possible. And so, if you can’t do a nematode… One thing is certain, you’re not going to do a human before you do a nematode worm.
Exactly. You know the way I see that, Byron, is that I’m more interested in “richer,” and not “smarter.” We need to get smarter but also we need to equally get richer. By “richer,” I don’t mean just making money, by “richer,” I mean: how do we use AI to improve our society, and our businesses, and our way of life? That’s where I think coming at it in the way of “outcome in,” rather than “science out,” is a more pragmatic way to apply AI.
So, you’ve mentioned five misconceptions, that was one of them. What are some of the other ones?
The first misconception was that AI equals replacing the human mind. The second misconception is that AI is the same as natural language processing, which is far from the truth—NLP is just a technique within AI. It's like saying, "My ability to understand and read a book is the same as my brain." That's the second misconception.
The third is, AI is the same as big data and analytics. Big data and analytics are tools that are used to capture more input for an AI to work on. Saying that big data is the same as AI is saying, “Just because I can sense more, I can be smarter.” All big data is giving you is more input; it’s giving you more senses. It’s not making you smarter, or more intelligent. That’s the third myth.
The fourth myth is that AI is something that is better implemented horizontally versus vertically. I believe true AI, and successful AI—particularly in the business world—will have to be verticalized AI. Because it's one thing to say, "I've got an AI." It's another thing to say, "I have an AI that understands underwriting," versus an AI that understands diabetes, versus an AI that understands Super Bowl ads. Each of these requires a domain-specific optimization of data and models and algorithms and experience. And that's the fourth one.
The fifth one is that AI is all about technology. At best, AI is only half about technology. The other half of the equation has to do with skills, has to do with new processes, and methods, and governance on how to manage AI responsibly in the enterprise. Just like when the Internet came about, you didn’t have the methods and processes to create a web page, to build a website, to manage the website from getting hacked, to manage updates of the website. Similarly, there is a whole AI lifecycle management, and that’s what CognitiveScale focuses on: how do you create, deploy, and manage AI responsibly and at scale?
Because, unlike traditional IT systems—which are mostly rules-based, and rules-based systems don't learn—AI-based systems are pattern-based, and they learn from patterns. AI systems have an ability to self-learn and geometrically improve themselves. If you can't get visibility and control over these AI systems, you could have a massive problem of "rogue AI," as CognitiveScale calls it, where it's irresponsible AI. You know that character Chucky from the horror movie; it's like having a bunch of Chuckys running around in your enterprise, opening up your systems. What is needed is a comprehensive end-to-end view of managing AI from design, to deployment, to production, and governance of it at scale. That requires a lot more than technology; it requires skills, methods, and processes.
When we were chatting earlier, you mentioned that some people were having difficulty scaling the projects they began in their enterprise, making them kind of enterprise-ready. Talk about that for a moment. Why is that, and what's the solution?
Yes. I’ve talked to over six hundred customers just in the last five years—everything from IT level to board level and CEO level. There are three big things that are going on that they’re struggling with getting value of AI. Number one is, AI is seen as something that can be done by data scientists and analytics people. AI is far too important to be left to just data scientists. AI has to be done as a business strategy. AI has to be done top-down to drive business outcomes, not bottom-up as a way of finding data patterns. That’s the first part. I see a lot of science projects that are happening. One of the customers called it darts versus bubbles. He says, “There are lots of darts of projects that are going on, but where do I know where the big bubbles are, which really move the needle for a multibillion-dollar business that I have?” There is a lot of, I call it, bottom-up engineering experiments that are going on, that are not moving the needle. That’s one thing.
Number two is, the data scientists and application developers are struggling with taking these projects into production, because they are not able to provide fundamental capabilities to AI that you need in an enterprise, such as explainability. I believe 99.9% of the AI companies today that are funded will not make it in the next three years, because they lack some fundamental capability, like explainability. It’s one thing to find pictures of cats on the internet using a deep learning network, it’s another thing to explain to a chief risk officer why a particular claim was denied, and the patient died, and now they have a hundred-million-dollar lawsuit. The AI has to be responsible, trustworthy, and explainable; able to say why was that decision made at that time. Because of lack of these kinds of capabilities—and there are five such capabilities that we call enterprise-grade AI—most of these projects are not able to move into production, because they’re not able to meet the requirements from a security and performance perspective.
And then last but not least, these skills are very scarce. There are very few people with them. Someone told me there are only seven thousand people in this world who have the skills to understand and run AI models and networks like deep learning and others. Imagine that, seven thousand. I know of a bank that has twenty-two thousand developers, one bank alone. There is a tremendous gap between the way AI is being practiced today and the skills that are available for trying to get this production-ready.
That’s another thing that CognitiveScale is doing, we have created this platform to democratize AI. How do you take application developers and data scientists and machine learning people, and get them to collaborate, and deploy AI in 90-day increments? We have this method called “10-10-10,” where, in 10 hours we select a use case, and in 10 days we build the reference application using their data, and in 10 weeks we take them into production. We do that by helping these groups of people collaborate on a new platform called Cortex, that lets you take AI safely and securely into production, at scale.
Backing that up a little bit, there are European efforts to require that if the AI makes a decision about you, you have a right to know why—why it denied you a loan. So, you're saying that that is something that isn't happening now, but it is something that's possible.
Actually, there are some efforts going on right now. DARPA has got some initiatives around this notion of XAI, explainable AI. And I know other companies are exploring this, but it's still a very low-level technology effort. It is not coming up—explainable AI—at a business process level, and at an industry level, because explainability requirements of an AI vary from process to process, and from industry to industry. The explainability requirements for a throat cancer specialist talking about why he recommended a treatment are different from the explainability requirements for an investment advice manager in wealth management, who says, "Here's the portfolio I recommended to you with our systems of AI." So, explainability exists at two levels. It exists at a horizontal level as a technology, and it exists at an industry-optimized level, and that's why I believe AI has to be verticalized and industry-optimized for it to really take off.
You think that’s a valid request to ask of an AI system.
I think it’s a requirement.
But if you ask a Google person, “I rank number three for this search. Somebody else ranks number four. Why am I three and they’re four?” They’d be like, “I don’t know. There are six thousand different things going on.”
Exactly. Yeah.
So wouldn’t an explainability requirement impede the development of the technology?
Or, it can create a new class of leaders who know how to crack that nut. That’s the basis on which we have founded CognitiveScale. It’s one of the six requirements, that we’ve talked about, in creating enterprise-grade AI. One of the big things—and I learned this while we were doing Watson—was how do you build AI systems you can trust, as a human being? Explainability is one of them. Another one is, recommendations with reasons. When your AI gives you an insight, can it also give you evidence to support, “Why I’m suggesting this as the best course of action for you”? That builds trust in the AI, and that’s when the human being can take action. Evidence and explainability are two of those dimensions that are requirements of enterprise-grade AI and for AI to be successful at large.
There’s seven thousand people who understand that. Assuming it’s true, is that a function of how difficult it is, or how new it is?
I think it’s a function of how different a skill set it is that we’re trying to bring into the enterprise. It is also how difficult it is. It’s like the Web; I keep going back to Internet. We are like where the Internet was in 1997. There were probably, at that time, only a few thousand people who knew how to develop HTML-based applications or web pages. AI today is where the Internet was in 1996 and 1997, where people were building a web page by hand. It’s far different from building a web application, which is connecting a series of these web pages, and orchestrating them to a business process to drive an outcome. That’s far different from optimizing that process to an industry, and managing it at the requirement of explainability, governance, and scalability. There is a lot of innovation around enterprise AI that is yet to come about, and we have not even scratched the surface yet.
When the Web came out in ’97, people rushed to have a web department in their company. Are we there, are we making AI departments and is that, like, not the way to do it?
Absolutely. I won’t say it’s not the way to do it. I’ll say it’s a required first step; to really understand and learn. Not only just AI, even blockchain—CognitiveScale calls it “blockchain with a brain.” I think that’s the big transformation, which has yet to happen, that’s on the horizon in the next three to four years—where you start building self-learning and self-assuring processes. Coming back to the Web analogy, that was the first step of three or four, in making a business become an e-business. Twenty-five years ago when the Web came about, everyone became in e-business, every process became “webified.” Now, with AI, everyone will become an i-business, or a c-business—a cognitive business—and everyone is going to get “cognitized.” Every process is going to get cognitized. Every process will learn from new data, and new interactions.
The steps they will go through are not unlike what they went through with the Web. Initially, they had a group of people building web apps, and the CEO said after a while, 1998, “I’ve spent half a million dollars, all I have is an intelligent digital brochure on the website. What has it done for my business?” That is exactly the stage we are at. Then, someone else came up and said, “Hey, I can connect a shopping cart to this particular set of web pages. I can put a payment system around it. I can create an e-commerce system out of it. And I have this open-source thing called JBoss, that you can build off of.” That’s kind of similar to what Google TensorFlow is doing today for AI. Then, there are next-generation companies like Siebel and Salesforce that came in and said, “I can build for you a commercial, web-based CRM system.” Similarly, that’s what CognitiveScale does. We are building the next-generation intelligent CRM system, or intelligent HRM system, that lets you get value out of these systems in a reliable and scalable manner. So it’s sort of the same progression that they’re going to go through with AI, like we went through with the Web. And there’s still a tremendous amount of innovation and new market leadership. I believe there will be a new hundred-billion-dollar AI company and that will get formed in the next seven to ten years.
What’s the timescale on AI going to be, is it going to be faster or slower?
I think it’ll be faster. I think it’ll be faster for multiple reasons. We have, and I gave a little TED Talk on this, around this notion of a superconvergence of technologies. When the Web came about, we were shifting from just one technology to another—we moved from client-server to Web. Right now, you’ve got these super six technologies that are converging that will make AI adoption much faster—they are cloud, mobile, social, big data, blockchain, and analytics. All of these are coming together at a rate and pace that is enabling compute and access, at a scale that was never possible before, and you combine that with an ability for a business to get disrupted dramatically.
One of the biggest reasons that AI is different than the Web is that those web systems are rules-based. They did not geometrically learn and improve. The concern and the worry that the CEOs and boards have this time around is—unlike a web-based system—an AI-based system improves with time, and learns with time, so either I’m going to dramatically get ahead of the competition, or I’m going to be dramatically left behind. What some people call “the Uber-ification” of businesses. There is this threat, and an opportunity to use AI as a great transformation and accelerator for their business model. That’s where this becomes an incredibly exciting technology, riding on the back of the superconvergence that we have.
If a CEO is listening, and they hear that, and they say, “That sounds plausible. What is my first step?”
I think there are three steps. The first step is to educate yourself, and your leadership team, on the business possibilities of AI—AI-powered business transformation, not technology possibilities of AI. So, one step is just education; educate yourself. Second is, to start experimenting. Experiment by deploying 90-day projects that cost a few hundred thousand dollars, not a two-year project with multiple million dollars put into it, so you can really start understanding the possibilities. Also you can start cutting through the vendor hype about what is product and what is PowerPoint. The narrative for AI, unfortunately, today, is being written by either Hollywood, or by glue-sniffing marketers from large companies, so the 90-day projects will help you cut through it. So, first is to educate, second is experiment, and third is enable. Enable your workforce to really start having the skill sets and the governance and the processes and enable an ecosystem, to really build out the right set of partners—with technology, data, and skills—to start cognitizing your business.
You know AI has always kind of been benchmarked against games, and what games it can beat people at. And that’s, I assume, because games are these closed environments with fixed rules. Is that the way an enterprise should go about looking for candidate projects, look for things that look like games? I have a stack of resumes, I have a bunch of employees who got great performance reviews, I have a bunch of employees that didn’t. Which ones match?
I think that’s the wrong metaphor to use. I think the way to have a business think about AI, is in the context of three things: their customers, their employees, and their business processes. They have to think about, “How can I use AI in a way that my customer experience is transformed? That every customer feels very individualized, and personalized, in terms of how I’m engaging them?” So, that’s one, the customer experiences that are highly personalized and highly contextualized. Second is employee expertise. “How do I augment my experience and expertise of my employees such that every employee becomes my smartest employee?” This is the Iron Man J.A.R.V.I.S. suit. It’s, “How do I upskill my employees to be the smartest at making decisions, to be the smartest in handling exceptions?” The third thing is my business processes. “How do I implement business processes that are constantly learning on their own, from new data and from new customer interaction?” I think if I were a CEO of a business, I would look at it from those three vectors and then implement projects in 90-day increments to learn about what’s possible across those three dimensions.
Talk a minute about CognitiveScale. How does it fit into that mix?
CognitiveScale was founded by a series of executives who were part of IBM Watson, so it was me and the guy who ran Watson Labs. We ran it for the first three years, and one thing we immediately realized was how powerful and transformative this technology is. We came away with three things: first, we realized that for AI to be really successful, it has to be verticalized and it has to really be optimized to an industry. Number two is that the power of AI is not in a human being asking the question of an AI, but it’s the AI telling the human being what questions to ask and what information to look for. We call it the “known unknowns” versus “unknown unknowns.” Today, why is it that I have to ask an Alexa? Why doesn’t Alexa tell me when I wake up, “Hey, while you were sleeping, Brexit happened. And—” if I’m an investment adviser, “—here are the seventeen customers you should call today and take them through the implications, because they’re probably panicking.” It’s using a system which is the opposite of a BI. A BI is a known-unknown—I know I don’t know something, therefore I run a query. An AI is an unknown unknown, which means it’s tapping me on the shoulder and saying, “You ought to know this,” or, “You ought to do this.” So, that was the second thesis. One is verticalize, second is unknown unknowns, and the third is quick value in 90-day increments—this is delivered using the method we call “10-10-10,” where we can stand up little AIs in 90-day increments.
The company got started about three-and-a-half years ago, and the mission is to create exponential business outcomes in healthcare, financial services, telecom, and media. The company has done incredibly well; we have investments from Microsoft, Intel, IBM, Norwest—raised over $50 million. There are offices in Austin, New York, London, and India. And it's a who's who: there are over thirty customers who are deploying this, and now scaling this as an enterprise-wide initiative, and it's, again, built on this whole hypothesis of driving exponential business outcomes, not driving science projects with AI.
CognitiveScale is an Austin-based company, Gigaom is an Austin-based company, and there’s a lot of AI activity in Austin. How did that come about, and is Austin an AI hub?
Absolutely, that’s one of the exciting things I’m working on. One of my roles is Executive Chairman of CognitiveScale. Another of my roles is that I have a hundred-million-dollar seed fund that focuses on investing in vertical AI companies. And for my third thing, we just announced last year, is an initiative called AI Global—out of Austin—whose focus is on fostering the deployment of responsible AI.
I believe East Coast and West Coast will have their own technology innovations in AI. AI will be bigger than the Internet was. AI will be at the scale of what electricity was. Everything we know around us—from our chairs to our lightbulbs and our glasses—is going to have elements of AI woven into it over the next ten years. And, I believe one of the opportunities that Austin has—and that’s why we founded AI Global in Austin—is to help businesses implement AI in a responsible way so that it creates good for the business in an ethical and a responsible manner.
Part of the ethical and responsible use of AI involves bringing a community of people together in Austin, and having Austin be known as the place to go for designing responsible AI systems. We have the UT Law school working with us, the UT Design school, the UT Business school, the UT IT school—all of them are working together as one. We have the mayor's office and the city working together extensively. We also have some local companies like USAA, which is coming in as a founding member of this. What we are doing now is helping companies that come to us get a prescription on how to design, deploy, and manage responsible AI systems. And I think there are tremendous opportunities, like you and I have talked about, for Gigaom and AI Global to start doing things together to foster the implementation of responsible AI systems.
You may have heard that IBM Watson beat Ken Jennings at Jeopardy. Well, he gave a TED Talk about that, and he said that there was a graph that showed Watson's progress as it got better, and every week they would send him an update and Watson's line would be closer to his. He said he would look at it with dread. He said, "That's really what AI is, it's not the Terminator coming for you. It's the one thing you do great, and it just gets better and better and better and better at it." And you talked about Hollywood driving the narrative of AI, but one of the narratives is AI's effect on jobs, and there's a lot of disagreement about it. Some people believe it's going to eat a bunch of low-skill work, and we will have permanent unemployment and it will be like the Depression, and all of that, while some think that it's actually going to create a bunch of jobs: that, just like any other transformative technology, it's going to raise productivity, which is how we raise wages. So which of those narratives, or a different one, do you follow?
And there’s a third group that says that AI could be our last big innovation, and it’s going to wipe us out as a species. I think the first two, in fact, all three are true, elements of them.
So it will wipe us out as a civilization?
If you don’t make the right decisions. I’m hearing things like autonomous warfare which scares the daylights out of me.
Let’s take all three. In terms of AI dislocating jobs, I think every major technology—from the steam engine to the tractor to semiconductors—has always dislocated jobs; and AI will be no different. There’s a projection that by the year 2020 eighteen million jobs will be dislocated by AI. These are tasks that are routine tasks that can be automated by a machine.
Hold on just a second, that’s twenty-seven months from now.
Yeah, eighteen million jobs.
Who would say that?
It’s a report that was done by, I believe it was World Economic Forum, but here’s the thing, I think that’s quite true. But I don’t worry about that as much as I focus on the 1.3 billion jobs that AI will uplift the roles on. That’s why I look at augmentation as a bigger opportunity than replacement of human beings. Yes, AI is going to remove and kill some jobs but there is a much, much larger opportunity by using AI to augment and skill your employees, just like the Web did. The Web gave you reach and access and connection, at a scale that was never possible before—just like the telephone did before that, and the telegraph did before that. And I think AI is going to give us a tremendous amount of opportunities for creating—someone called it the “new collar jobs,” I think it was IBM—not just blue collar or white collar, but “new collar” jobs. I do believe in that; I do believe there is an entire range of jobs that AI will bring about. That’s one.
The second narrative was around AI being the last big innovation that we will make. And I think that is absolutely the possibility. If you even look at the Internet when it came about, the top two applications in the early days of the Internet were gambling and pornography. Then we started putting the Internet to work for the betterment of businesses and people, and we made choices that made us use the Internet for greater good. I think the same thing is going to happen with AI. Today, AI is being used for everything from parking tickets being contested, to Starbucks using it for coffee, to concert tickets being scalped. But I think there are going to be decisions as a society that we have to make, on how we use AI responsibly. I’ve heard the whole Elon Musk and Zuckerberg argument; I believe both of them are right. I think it all comes down to the choices we make as a society, and the way we scale our workforce on using AI as the next competitive advantage.
Now, the big unknown in all of this is what a bad actor, or nation states, can do using AI. The part that I still don’t have a full answer to, but it worries the hell out of me, is this notion of autonomous warfare. Where people think that by using AI they can actually restrict the damage, and they can start taking out targets in a very finite way. But the problem is, there’s so much that is unknown about an AI. An AI today is not trustworthy. You put that into things that can be weapons of mass destruction, and if something goes wrong—because the technology is still maturing—you’re talking about creating massive destruction at a scale that we’ve never seen before. So, I would say all three elements of the narrative: removing jobs, creating new jobs, creating an existential threat to us as a race—all of those elements are a possibility going forward. The one I’m the most excited about is how it’s going to extend and enhance our jobs.
Let’s come back to jobs in just a minute, but you brought up warfare. First of all, there appear to be eighteen countries working to make AI-based systems. And their arguments are twofold. One argument is, “There’s seventeen other people working to develop it, if I don’t…”
Someone else will. 
And second, right now, the military drops a bomb and it blows up everything… Let’s look at a landmine. A landmine isn’t AI. It will blow up anything over forty pounds. And so if somebody came and said, “I can make an AI landmine that sniffs for gunpowder, and it will only blow up somebody who’s carrying a weapon.” Then somebody else says, “I can make one that actually scans the person and looks for drab.” And so forth. If you take warfare as something that is a reality of life, why wouldn’t you want systems that were more discriminative?
That’s a great question, and I believe that will absolutely happen, and probably needs to happen, but over a period of time—maybe that’s five or ten years away. We are in the most dangerous time right now, where the hype about AI has far exceeded the reality of AI. These AIs are extremely unstable systems today. Like I said before, they are not evidence-based, there is no kill-switch in an AI, there is no explainability; there is no performance that you can really figure out. Take your example of something that can sniff gunpowder and will explode. What if I store that mine in a gun depot, in the middle of a city, and it sniffs the gunpowder from the other weapons there, and it blows itself up. Today, we don’t have the visibility and control at a fine-grain level with AI to warrant an application of it in that scale.
My view is that every nation-state will see it as a prerogative to get on it—you saw Putin talk about it, saying, "He who controls AI will control the future world." There is no putting the genie back in the bottle. And just like we did with the rules of war, and just like we did with nuclear warfare, there will be new Geneva Convention-like rules that we will have to come up with as a society on how and where these responsible AI systems have to be deployed, and managed, and measured. So, just like we have done for chemical warfare, I think there will be new rules that come up for AI-based warfare.
But the trick with it is… A nuclear event is a binary thing; it either happened or it didn’t. A chemical weapon, there is a list of chemicals, that’s a binary thing. AI isn’t though. You can say your dog-food dish that refills automatically when it’s empty, that’s AI. How would you even phrase the law, assuming people followed it, how would you phrase it in just plain English?
In a very simple way. You’ve heard Isaac Asimov’s three rules in I, Robot. I think as a society we will have to—in fact, I’m doing a conference on this next year north of London around how to use AI and drones in warfare in a responsible way—come up with a collective mindset and will from the nations to propose something like this. And I think the first event has not happened yet, though you could argue that the “fake news” event was one of the big AI events that’s happened, that, potentially, altered the direction of a presidential race. People are worried about hacking; I’m more worried about attacks that you can’t trace the source of. And I think that’s work to be done, going forward.
There was a weapons system that did make autonomous kill decisions, and the militaries that were evaluating it said, “We need it to have a human in the middle.” So they added that, but of course you can turn that off. It’s almost intractable to define it in such a way.
It sounds like you’re in favor of AI weapons, as long as they’re not buggy.
I’m not in favor of AI weapons. In general, as a person, I’m anti-war. But it’s one of those human frailties and human limitations that war is a necessary—as ugly as it is—part of our lives. I think people and countries will adopt AI and they will start using it for warfare. What is needed, I think, is a new set of agreements and a new set of principles on how they go about using it, much like they do with chemical weapons and nuclear warfare. I don’t think it’s something we can control. What we can do is regulate and manage and enforce it.
So, moving past warfare, do you believe Putin’s statement that he who controls AI in the future will control the world?
Absolutely. I think that’s a given.
Back to jobs for a moment. Working through the examples you gave, it is true that steam and electricity and mechanization destroyed jobs, but, what they didn’t do is cause unemployment. Unemployment in this country, in the US, at least, has been between five and ten percent for two hundred years, other than the Depression, which wasn’t technology’s fault. So, what has happened is, yes, we put all of the elevator operators out of business when we invented the button and you no longer had to have a person, but we never saw a spike in unemployment. Is that what’s going to happen? Because if we really lost eighteen million jobs in the next twenty-seven months, that would just be… That’s massive.
No, but here’s the thing, that eighteen million number is a global number.
Okay, that’s a lot better then. Fair enough, then.
And you have to put this number in the context of the total workforce. So today, there are somewhere between seven hundred million and 1.3 billion workers employed globally, and eighteen million is a fraction of that. That's number one. Number two, I believe there is a much bigger potential in using AI as a muse, and AI as a partner, to create a whole new class of jobs, rather than being afraid of the machine replacing the job. Machines have always replaced jobs, and they will continue to do that. But I believe—and this is where I get worried about our education system; one of the first things we did with Watson was start a university program to begin skilling people with the next-generation skill sets that are needed to deploy and manage AI systems—that over the next decade or, for that matter, over the next five decades, there is a whole new class of human creativity and human potential that can and will be unleashed through AI by creating whole new types of jobs.
If you look at CognitiveScale, we’re somewhere around one hundred and sixty people today. Half of those jobs did not exist four years ago. And many of the people who would have never even considered a job in a tech company are employed by CognitiveScale today. We have linguists who are joining a software company because we have made their job into computational linguistics, where they’re taking what they knew of linguistics, combining it with a machine, and creating a whole new class of applications and systems. We have people who are creating a whole new type of testing mechanisms for AI. These testers never existed before. We have people who are now designing and composing intelligent agents using AI, with skills that they are blending from data science to application development, to machine learning. These are new skills that have come about. Not to mention salespeople, and business strategists, who are coming up with new applications of this. I tend to believe that this is one of the most exciting times—from the point of view of economic growth and jobs—that we, and every country in this world, has in front of them. It all depends on how we commercialize it. One of the great things we have going for the US is a very rich and vibrant venture investment community and a very rich and vibrant stock market that values innovation, not just revenues and profits. As long as we have those, and as long as we have patent coverage and good enforcement of law, I see a very good future for this country.
At the dawn of the Industrial Revolution, there was a debate in this country, in the United States, about the value of post-literacy education. Think about that. Why would most people, who are just going to be farmers, need to go to school after they learn how to read? And then along came some people who said that the jobs of the future, i.e. Industrial Revolution jobs, will require more education. So the US was the first country in the world to guarantee every single person could go to high school, all the way through. So, Mark Cuban said, if he were coming up now, he would study philosophy. He’s the one who said, “The first trillionaires are going to be AI people.” So he’s bullish on this, he said, “I would study philosophy because that’s what you need to know.” If you were to advise young people, what should they study today to be relevant and employable in the future?
I think that’s a great question. I would say, I would study three different things. One, I would study linguistics, literature—soft sciences—things around how decisions are made and how the human mind works, cognitive sciences, things like that. That’s one area. The second thing I would study is business models and how businesses are built and designed and scaled. And the third thing I would study is technology to really understand the art of the possible with these systems. It’s at the intersection of these three things, the creative aspects of design and literature and philosophy around how the human mind works, to the commercial aspect of what to make, and how to build a successful business model, to the technological underpinnings of how to power these business models. I would be focusing on the intersection of those three skills; all embraced under the umbrella of entrepreneurship. I’m very passionate about entrepreneurship. They are the ones who will really lead these country forward, entrepreneurs, both in big companies, and small.
You and I have spoken on the topic of an artificial general intelligence, and you said it was forty or fifty years away, that’s just a number, and that it might require quantum computers. You mentioned Elon and his fear of the existential threat. He believes, evidently, that we’re very close to an AGI and that’s where the fear is. That’s what he’s concerned about. That’s what Hawking is concerned about. You said, “I agree with the concern, if we screw up, it’s an existential threat.” How do you reconcile that with, “I don’t think we’ll have an AGI for forty years”?
Because I think you don’t need an AGI to create an existential threat. There are two different dimensions. You can create an existential threat by just building a highly unreliable autonomous weapons system that doesn’t know anything about general intelligence. It only knows how to seek out and kill. And that, in the wrong hands, could really be the existential threat. You could create a virus on the Internet that could bring down all public utilities and emergency systems, without it having to know anything about general intelligence. If that somehow is released without proper testing or controls, you could bring down economies and societies. You could have devastation, unfortunately, at the scale of what Puerto Rico is now going through without a hurricane going through it; it could be an AI-powered disaster like that. I think these are the kinds of outcomes we have to be aware of. These are the kinds of outcomes we have to start putting rules and guidelines and enforcements around. And that’s an area, that and skills, are the two that I think we are lagging behind significantly today.
The OpenAI initiative is an effort to make AI so that no single player develops it alone—in that case an AGI, but all along the way. Do you think that is a good initiative?
Yeah, absolutely. I think, beyond OpenAI, we probably need a hundred other initiatives like that, that focus on different aspects of AI. Like what we're doing at AI Austin and AI Global. We are focusing on the ethical use of AI. It's one thing to have a self-driving car; it's another thing to have a self-driving missile. How do you take a self-driving car that ran over four people, and cross-examine that in a witness box? How is that AI explainable? Who's responsible for it? So there is a whole new set of ethics and laws that have to be considered when putting this into intelligent products. Almost like an Underwriters Laboratories equivalent for AI that needs to be woven into every product and every process. Those are the things that our governments need to become aware of, and our regulators need to get savvy about, and start implementing.
There is one theory that says that if it's going to rely on government, we are all in bad shape, because the science will develop faster than the legislative ability to respond to it. Do you have a solution for that?
I think there’s a lot of truth to that, particularly with what we’re seeing recently in the government around technology, there’s a lot of merit to that. I believe, again, the results of what we become and what we use AI for, will be determined by what we do as private citizens, what we do as business leaders, and what we do as philanthropists. One of the beautiful things about America is what philanthropists like Gates and Buffett and all are doing—they’ve got more assets than many countries now, and they’re putting it to work responsibly; like what Cuban’s talking about. So, I do have hope in the great American “heart,” if you may, about innovation, but also responsible application. And I do believe that all of us who are in a position to educate and manage these things, it’s our duty to be able to spread the word, and to be able to lean in, and start helping, and steering this AI towards responsible applications.
Let’s go through your “What AI Isn’t” list, your five things. One of them you said, “An AI is not natural language processing” and obviously, that is true. Do you think, though, the Turing test has any value? If we make a machine that can pass it, is that a benchmark? We have done something extraordinary in that case?
When I was running Watson, I used to believe it had value, but I don’t believe that as much anymore. I think it has limited value in applicability, because of two things. One is, in certain processes where you’re replacing the human brain with a machine, you absolutely need to have some sort of a test to prove or not prove. The more exciting part is not replacement of automated or repetitive human functions, the more exciting part is things that the human brain hasn’t thought of, or hasn’t done. I’ll give you an example: we are working at CognitiveScale with a very large media company, and we were analyzing Super Bowl TV ads, by letting an AI read the video ad, to find out exactly what kinds of creative—is it kids or puppies or celebrities—and at what time, would have the most impact on creating the best TV ad. And what was fascinating was that we just let the AI run at it; we didn’t tell it what to look for. There was no Turing test to say, “This is good or bad.” And the stuff the AI came back with were things that were ten or twelve levels deep in terms of connections it found, things that a human brain normally would have never thought about. And we still can’t describe why there is a connection to it.
It’s stuff like that—the absolute reference is not the human brain, this is the “unknown unknown” part I talked about—that with AI, you can emulate human cognition but, as importantly, with AI you can extend human cognition. The extension part of coming up with patterns or insights and decisions that the human brain may not have used, I think that’s the exciting part of AI. We find when we do projects with customers that there are patterns that we can’t explain, as a human being, why it is, but there’s a strong correlation; it’s eighteen levels deep and it’s buried in there, but it’s a strong correlator. So, I kind of put this into two buckets: first is low-level repetitive tasks that AI can replace; and second is a whole new class of learning that extends human cognition where—this is the unsupervised learning bit—where you start putting a human in the loop to really figure out and learn new ways of doing business. And I think they are both aspects that we need to be cognizant of, and not just try to emulate the current human brain which has, in many cases, proven to be very inefficient in making good decisions.
You have an enormous amount of optimism about it. You're probably the most optimistic person that I've spoken to about how far we can get without a general intelligence. But, of course, you keep using words like "existential threat," you keep identifying concepts like a virus that takes down the electrical grid, warfare, and all of that; you even used "rogue AI" in the context of a business. In that latter case, how would a rogue AI destroy a business? And you can't legislate your way around that, right? So, give me an example of a rogue AI in an enterprise scenario.
There are so many of them. One of them actually happened when we recently met with a large financial institution. We were sitting and having a meeting, and suddenly we found out that that particular company was going through a massive disruption of business operations because all of their x-number of data centers were shutting down, every 20 minutes or so, and rebooting themselves; all over the world, their data centers were shutting down and rebooting. They were panicking because this was in the middle of a business day, there were billions of dollars being transacted, and they had no idea why these data centers were doing what they were. A few hours into it, they found out that someone had written a security bot the month before and launched it into their cloud system, and for some reason, that agent—that AI—felt it was a good idea to start shutting down these systems every 20 minutes and rebooting them. It's a simple example: they finally found it, but there was no visibility into or governance of that particular AI that was introduced. That's one of the reasons we talked about the ability to have a framework for managing visibility and control of these AIs.
The other one could be—and this has not happened yet, but it is one of the threats—underwriting. An insurance company uses technology a lot today to underwrite risks. And if, for whatever reason, you have an AI system that sees correlations and patterns but has not been trained well enough to really understand risk, you could pretty much have the entire business wiped out by having the AI—if you depend on it too much without explainability and trust—suggest you take on risks that put your business at existential risk.
I can go on and on, and I can use examples around cancer, around diabetes, around anything to do with commerce where AI is going to be put to use. I believe as we move forward with AI, the two phrases that are going to become incredibly important for enterprises are “lifecycle management of an AI,” and “responsible AI.” And I think that’s where there’s a tremendous amount of opportunity. That’s why I’m excited about what we’re doing at CognitiveScale to enable those systems.
Two final questions. So, with those scenarios, give me the other side: give us some success stories you've personally seen. They can be CognitiveScale or other ones that you've seen have a really positive impact on a business.
I think there are many of them. I'll pick an area as simple as retail, where through an AI we were able to demonstrate the difference from a rules-based system. This particular large retailer used to have a mobile app where they presented to you a shirt, and trousers, and some accessories, and it was like a Tinder or "hot or not" type of game—and the rules-based system, on average, was getting less than ten percent conversion on what people said they liked. Those were all systems that are not learning. Then we put an AI behind it, and that AI could understand that a particular dress was an off-shoulder dress, and it was a teal color, and it pairs with an open-toe shoe in a shiny leather. As the customers started engaging with it, the AI started personalizing the output, and we demonstrated a twenty-four percent conversion compared to single-digit conversions, in a matter of seven months. And here's the beautiful part: every month the AI is getting smarter and smarter, and every percentage point of conversion equals tens of millions of dollars in top-line growth. So that's one example of a digital brain, a cognitive digital brain, driving shopper engagement and shopper conversion.
The other thing we saw was in the case of pediatric asthma: how an AI can help nurses do a much better job of preventing children from having an asthma attack, because the AI is able to read a tweet from pollen.com that says there will be a ragweed outbreak on Thursday morning. The AI understands the zip code it's talking about, and that Thursday is four days out, and that there are seventeen children with a risk of ragweed or similar allergies; and it starts tapping the nurse on the shoulder and saying, "There is an 'unknown unknown' going on here, which is that four days from now there will be a ragweed outbreak; you'd better get proactive about it and start addressing the kids." So, there's an example in healthcare.
There are examples in wealth management and financial services around compliance, and how we're using AI to improve compliance. There are examples of how we are changing the dynamics of trading, foreign exchange trading, and how a trader does equities and derivatives trading, with the AI listening in on a chat session and guiding them as to what to do. The examples are many, and most of them are written up in case studies, but this is just the beginning. I think this is going to be one of the most exciting innovations that will transform the landscape of businesses over the next five to seven years.
You’re totally right about the product recommendation. I was on Amazon and I bought something, it was a book or something, and it said, “Do you want these salt-and-pepper-shaker robots that you wind up and they walk across the table?” And I was like, “Yes, I do!” But it had nothing to do with the thing that I was buying.
Final question, you’ve talked about Hollywood setting the narrative for AI. You’ve mentioned I, Robot in passing. Are you a consumer of science fiction, and, if so, what vision of the future—book or whatever—do you think, “Aha, that’s really cool, that could happen,” or what have you?
Well, I think probably the closest vision I would have is to Gene Roddenberry and Star Trek. I think that’s pretty much a great example of a character like Data helping a human being make a better decision—a flight deck, a holodeck, that is helping you steer. It’s still the human, being augmented. It’s still the human making the decisions around empathy, courage, and ethics. And I think that’s the world that AI is going to take us to: the world of augmented intelligence, where we are enabled to do much bigger and greater things, and not just a world of artificial intelligence where all our jobs are removed and we are nothing but plastic blobs sitting in a chair.
Roddenberry said that in the twenty-third century there will be no hunger, and there will be no greed, and all the children will know how to read. Do you believe that?
If I had a chance to live to be twice or three times my age, that would be what I’d come in to do. After CognitiveScale, that is going to be my mission through my foundation. Most of my money I’ve donated to my foundation, and it will be focused on AI for good; around addressing problems of education, around addressing problems of environment, and around addressing problems of conflict.
I do believe that’s the most exciting frontier where AI will be applied. And there will be a lot of mishaps along the way, but I do believe, as a race and as a humanity, if we make the right decisions, that is the endpoint that we will reach. I don’t know if it’s 2300, but, certainly, it’s something that I think we will get to.
Thank you for a fascinating hour.
Thank you very much.
It was really extraordinary and I appreciate the time.
Thanks, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 18: A Conversation with Roman Yampolskiy

[voices_in_ai_byline]
In this episode, Byron and Roman discuss the future of jobs, Roman’s new field of study “Intellectology,” consciousness, and more.
[podcast_player name="Episode 18: A Conversation with Roman Yampolskiy" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-11-20-(00-45-56)-roman-yampolsky.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom, and I’m Byron Reese. Today, our guest is Roman Yampolskiy, a Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab, and an author of many books, including Artificial Superintelligence: A Futuristic Approach.
His main areas of interest are AI safety and cyber security. He is the author of over one hundred publications, and his research has been cited by over a thousand scientists around the world.
Welcome to the show.
Roman Yampolskiy: Thank you so much. Thanks for inviting me.
Let’s just jump right in. You’re in the camp that we have to be cautious with artificial intelligence, because it could actually be a threat to our survival. Can you just start right there? What’s your thinking? How do you see the world?
It’s not very different than any other technology. Any advanced technology can be used for good or for evil purposes. The main difference with AI is that it’s not just a tool, it’s actually an independent agent, making its own decisions. So, if you look at the safety situation with other independent agents—take for example, animals—we’re not very good at making sure that there are no accidents with pit bulls, for example.
We have some approaches to doing that. We can put them on a leash, put them in a cage, but at the end of the day, if the animal decides to attack, it decides to attack. The situation is very similar with advanced AI. We try to make it safe, beneficial, but since we don’t control every aspect of its decision-making, it could decide to harm us in multiple ways.
The way you describe it, you’re using language that implies that the AI has volition, it has intentionality, it has wants. Are you suggesting this intelligence is going to be conscious and self-aware?
Consciousness and self-awareness are meaningless concepts in science. They are nothing we can detect or measure. Let’s not talk about those things. I’m saying specific threats will come from the following: One is mistakes in design. Just like with any software, you have computer bugs; you have values misaligned with human values. Two is purposeful design of malevolent AI. There are people who want to hurt others—hackers, doomsday cults, crazies. They will, on purpose, design intelligent systems to destroy, to kill. The military is a great example; they fund lots of research in developing killer robots. That’s what they do. So, those are some simple examples.
Will AI decide to do something evil, for the sake of doing evil? No. Will it decide to do something which has a side effect of hurting humanity? Quite possible.
As you know, the range on when we might build an artificial general intelligence varies widely. Why do you think that is, and do you care to kind of throw your hat in that lottery, or that pool?
Predicting the future is notoriously difficult. I don’t see myself as someone who has an advantage in that field, so I defer to others—people like Ray Kurzweil, who have spent their lives building those prediction curves, exponential curves. With him being Director of Engineering at Google, I think he has pretty good inside access to the technology, and if he says something like 2045 is a reasonable estimate, I’ll go with that.
The reason people have different estimates is the same reason we have different betting patterns in the stock market, or horses, or anything else. Different experts give different weights to different variables.
You have advocated research into, quote, “boxing” artificial intelligence. What does that mean, and how would you do it?
In plain English, it means putting it in prison, putting it in a controlled environment. We already do it with computer viruses. When you study a computer virus, you put it in an isolated system which has no access to internet, so you can study its behavior in a safe environment. You control the environment, you control inputs, outputs, and you can figure out how it works, what it does, how dangerous it is.
The same makes sense for intelligent software. You don’t want to just run a test by releasing it on the internet and seeing what happens. You want to control the training data going in. That’s very important. We saw some terrible fiascos with the recent Microsoft chatbot being released without any controls, and users feeding it really bad data. You want to prevent that, so for that reason, I advocate having protocols, environments in which AI researchers can safely test their software. It makes a lot of sense.
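As a rough illustration of what such a protocol could look like, here is a minimal, hypothetical sketch of a “boxed” test harness: the agent only ever sees curated inputs, and everything it produces is quarantined in a log for review rather than released. This is a toy built on assumptions for illustration, not a real AI-containment design.

```python
class BoxedEnvironment:
    """Toy 'AI boxing' harness: the agent only sees curated inputs,
    and everything it tries to emit is logged for human review
    instead of being released to the outside world."""

    def __init__(self, agent, approved_inputs):
        self.agent = agent
        self.approved_inputs = list(approved_inputs)  # controlled training/test data
        self.output_log = []                          # outputs are quarantined here

    def run(self):
        for item in self.approved_inputs:
            response = self.agent(item)   # the agent never gets raw internet access
            self.output_log.append((item, response))
        return self.output_log

# Example with a trivial stand-in "agent":
echo_agent = lambda text: text.upper()
box = BoxedEnvironment(echo_agent, ["curated sample 1", "curated sample 2"])
for query, reply in box.run():
    print(query, "->", reply)
```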
When you think about the great range of intellectual ability, from the smallest and simplest creatures, to us, is there even an appropriate analogy for how smart a superintelligence could be? Is there any way for us to even think about that?
Like, when my cat leaves a mouse on the back porch, everything that cat knows says that I’m going to like that dead mouse, right? Its entire view of the world is that I’m going to want that. It doesn’t have, even remotely, the mental capability to understand why I might not.
Is an AI, do you think, going to be that far advanced, where we can’t even communicate in the same sort of language, because it’s just a whole different thing?
Eventually, yes. Initially, of course, we’ll start with sub-human AI, and slowly it will get to human levels, and very quickly it will start growing almost exponentially, until it’s so much more intelligent. At that point, as you said, it may not be possible for us to understand what it does, how it does it, or even meaningfully communicate with it.
You have launched a new field of study, called Intellectology. Can you talk about what that is, and why you did that? Why you thought there was kind of a missing area in the science?
Sure. There seems to be a lot of different sub-fields of science, all of them looking at different aspects of intelligence: how we can measure intelligence, build intelligence, human intelligence versus non-human intelligence, animals, aliens, communicating across different species. Forensic science tells us that we can look at an artifact, and try to deduce the engineering behind it. What is the minimum intelligence necessary to make this archeological artifact?
It seems to make sense to bring all of those different areas together, under a single umbrella, a single set of terms and tools, so they can be re-used, and benefit each field individually. For example, I look a lot at artificial intelligence, of course. And studying this type of intelligence is not the same as studying human intelligence. That’s where a lot of mistakes come from, assuming that human drives, wants and needs will be transferred.
This idea of a universe of different possible minds is part of this field. We need to understand that, just like our planet is not the middle of the universe, our intelligence is not the middle of that universe of possible minds. We’re just one possible data point, and it’s important to generalize outside of human values.
So it’s called Intellectology. We don’t actually have a consensus definition on what intelligence is. Do you begin there, with “this is what intelligence is”? And if so, what is intelligence?
Sure. There is a very good paper published by one of the co-founders of DeepMind, which surveys, maybe, I don’t know, a hundred different definitions of intelligence, and tries to combine them. The combination sounds something like “intelligence is the ability to optimize for your goals, across multiple environments.” You can say it’s the ability to win in any situation, and that’s pretty general.
It doesn’t matter if you are a human at a college, trying to get a good grade, an alien on another planet trying to survive, it doesn’t matter. The point is if I throw a mind into that situation, eventually it learns to do really well, across all those domains.
We see AIs, for example, capable of learning multiple video games and performing really well. So, that’s kind of the beginning of that general intelligence, at least in artificial systems. They’re obviously not at the human level yet, but they are starting to be general enough that they can quickly pick up what to do in all of those situations. That’s, I think, a very good and useful definition of what intelligence is, one we can work with.
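Taken literally, that definition suggests a crude way to score generality: run the same agent across several different environments and average how well it does. The sketch below is a hypothetical harness in that spirit; the environment interface, the coin-flip tasks, and the random “agent” are invented for illustration and are not the formal measure from the paper he mentions.

```python
import random

# Hypothetical minimal interface: an environment hands out rewards for actions.
# "Intelligence" here is just average reward across many environments.

class CoinFlipEnv:
    """Guess the outcome of a biased coin; reward 1 for a correct guess."""
    def __init__(self, bias):
        self.bias = bias
    def step(self, action):
        outcome = "heads" if random.random() < self.bias else "tails"
        return 1.0 if action == outcome else 0.0

def random_agent(_history):
    return random.choice(["heads", "tails"])

def generality_score(agent, environments, episodes=1000):
    """Average reward across multiple environments, per the
    'achieve goals across many environments' definition."""
    scores = []
    for env in environments:
        total, history = 0.0, []
        for _ in range(episodes):
            action = agent(history)
            reward = env.step(action)
            history.append((action, reward))
            total += reward
        scores.append(total / episodes)
    return sum(scores) / len(scores)

envs = [CoinFlipEnv(0.9), CoinFlipEnv(0.2), CoinFlipEnv(0.5)]
print(generality_score(random_agent, envs))  # ~0.5: no learning, no generality
```

An agent that learned from its history which guess pays off in each environment would score near the maximum; the point of the definition is that the score rewards doing well everywhere, not in one niche.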
One thing you mentioned in your book, Artificial Superintelligence, is the notion of convincing robots to worship humans as gods. How would you do that, and why that? Where did that idea come from?
I don’t mention it as a good idea, or my idea. I kind of survey what people have proposed, and it’s one of the proposals. I think it comes from the field of theology, and I think it’s quite useless, but I mention it for the sake of listing all of the ideas people have suggested. A colleague and I published a survey of possible solutions for dealing with super-intelligent systems, and we reviewed some three hundred papers. I think that was one of them.
I understand. Alright. What is AI Completeness Theory?
We think that there are certain problems which are fundamental problems. If you can do one of those problems, you can do any problem. Basically, you are as smart as a human being. It’s useful to study those problems, to understand what the progress in AI is, and whether we’ve gotten to that level of performance. So, in one of my publications, I talk about the Turing Test as being the fundamental first AI-complete problem. If you can pass the Turing Test, supposedly, you’re as intelligent as a human.
The unrestricted test, obviously not the five-minute version of that, or whatever is being done today. If that’s possible, then you can do all of the other problems. You can do computer vision, you can do translation, maybe you can even do computer programming.
You also write about machine ethics and robot rights. Can you explore that, for just a minute?
With regards to machine ethics, the literature seems to be, basically, everyone trying to propose that a certain ethical theory is the right one, and we should implement it, without considering how it impacts everyone who disagrees with the theory. Philosophers have been trying to come up with a common ethical framework for millennia. We are not going to succeed in the next few years, for sure.
So, my argument was that we should not even try to pick one correct ethical theory. That’s not a solution which will make all of us happy. And for each one of those ethical theories, there are actually problems, well-known problems, which if a system with that type of power is to implement that ethical framework, that’s going to create a lot of problems, a lot of damage.
With regards to rights for robots, I was advocating against giving them equal rights, human rights, voting rights. The reasoning is quite simple. It’s not because I hate robots. It’s because they can reproduce almost infinitely. You can have a trillion copies of any software, almost instantaneously, and if each of them has voting rights, that essentially means that humanity has no rights. We give away human rights. So, anyone who proposes giving that type of civil rights to robots is essentially against human rights.
That’s a really bold statement. Let’s underline that, because I want to come back to it. But in order to do that, I want to return to the first thing I asked you, or one of the earlier things, about consciousness and self-awareness. You said these aren’t really scientific questions, so let’s not talk about them. But at least with self-awareness, that isn’t the case, is it?
I mean, there’s the red dot test—the mirror test—where purportedly, you can put a spot on an animal’s forehead while it’s asleep, and if it gets up and sees that in a mirror, and tries to wipe it off, it therefore knows that that thing in the mirror is it, and it has a notion of self. It’s a hard test to pass, but it is a scientific test. So, self-awareness is a scientific idea, and would an artificial intelligence have that?
We have a paper, still undergoing the review process, which surveys every known test for consciousness, and I guess you include self-awareness with that. All of them measure different correlates of consciousness. The example you give, yes, animals can recognize that it’s them in the mirror, and so we assume that also means they have similar consciousness to ours.
But it’s not the same for a robot. I can program a robot to recognize a red dot, and assume that it’s on its own forehead, in five minutes. That is not, in any way, a guarantee that it has any consciousness or self-awareness properties. It’s basically proving that we can detect red dots.
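His point is that the robot version of the mirror test collapses into ordinary image processing. A minimal sketch of such a “red dot detector” makes that concrete; the thresholds, array shapes, and synthetic frame below are assumptions for illustration only. Nothing in it models a self; it just counts pixels.

```python
import numpy as np

def has_red_dot(image, red_threshold=180, other_threshold=100, min_pixels=20):
    """Return True if the image contains a cluster of strongly red pixels.

    `image` is an H x W x 3 uint8 RGB array (e.g., a frame of the robot
    'looking in the mirror'). This is pure pixel arithmetic, not self-awareness.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    red_mask = (r > red_threshold) & (g < other_threshold) & (b < other_threshold)
    return int(red_mask.sum()) >= min_pixels

# A synthetic 'mirror frame' with a red dot painted in the forehead region:
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[10:20, 45:55, 0] = 255   # a 10x10 patch of pure red
print(has_red_dot(frame))      # True, yet nothing here was 'self-aware'
```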
But all you are saying is we need a different test for AI self-awareness, not that AI self-awareness is a ridiculous question to begin with.
I don’t know what the definition of self-awareness is. If you’re talking about some non-material spiritual self-consciousness thing, I’m not sure what it does, or why it’s useful for us to talk about it.
Let’s ask a different question, then. Sentience is a word which is commonly misused. It’s often used to mean intelligent, but it simply means “able to sense something,” usually pain. So, the question of “is a thing sentient” is really important. Up until the 1990s, in the United States, veterinarians were taught not to anesthetize animals when they operated on them, because they couldn’t feel pain—despite their cries and writhing in apparent agony.
Similarly, until twenty or so years ago, human babies weren’t anesthetized for open-heart surgery, because again, the theory was that they couldn’t feel pain. Their brains just weren’t well-developed. The notion of sentience, we put it right near rights, because we say, “If something can feel pain, it has a right not to be tortured.”
Wouldn’t there be an equivalent with artificial intelligence? Shouldn’t we ask, “Can it feel pain?” And if it can, you don’t have to say, “Oh yeah, it should be able to vote for the leaders.” But you can’t torture it. That would be just a reasonable thing, a moral thing, an ethical thing to say. If it can feel, then you don’t torture it.
I can easily agree with that. We should not torture anyone, including any reinforcement learners, or anything like that. To the best of my knowledge, there are two papers published on the subject of computer pain, good papers, and both say it’s impossible to do right now.
It’s impossible to measure, or it’s impossible for a computer to feel pain right now?
It’s impossible for us to program a computer to feel pain. Nobody knows how to do it, how to even start. It’s not like with, let’s say pattern recognition, we know how to start, we have some results, we get ten percent accuracy so we work on it and get to fifteen percent, forty percent, ninety percent. With artificial pain, nobody knows how to even start. What’s the first line of code you write for that? There is no clue.
With humans, we assume that other humans feel pain because we feel pain, and we’ve got similar hardware. But there is not a test you can do to measure how much pain someone is in. That’s why we show patients those ten pictures of different screaming faces, and ask, “Well, how close are you to this picture, or that one?” This is all a very kind of non-scientific measurement.
With humans, yes, obviously we know, because we feel it, so similar designs must also experience that. With machines, we have no way of knowing what they feel, and no one, as far as I know, is able to say, “Okay, I programmed it so it feels pain, because this is the design we used.” There are just no ideas for how something like that can be implemented.
Let’s assume that’s true, for a moment. The way, in a sense, that you get to human rights is that you start by saying that humans are self-aware, which, as you say, we can all self-report. Being self-aware implies we have a self, and having a self means that that self can feel, and that’s when you get sentience. And then, you get up to sapience, which is intelligence. So, we have a self, that self can feel, and therefore, because that self can suffer, that self is entitled to some kind of rights.
And you’re saying we don’t know what that would look like in a computer, and so forth. Granting all of that, for just a moment, there are those who say that human intelligence, anything remotely like human intelligence, has to have those building blocks, because from self-awareness you get consciousness, which is a different thing.
And consciousness, in part, embodies our ability to change focus, to be able to do one thing, and then, for whatever reason, do a different thing. It’s the way we switch, and we go from task to task. And further, it’s probably the way we draw analogies, and so forth.
So, there is a notion that, even to get to intelligence, to get to superintelligence, there is no way to kind of cut all of that other stuff out, and just go to intelligence. There are those who say you cannot do that, that all of those other things are components of intelligence. But it sounds like you would disagree with that. If so, why would that be?
I disagree, because we have many examples of humans who are not neurotypical. People, for example, who don’t experience pain. They are human beings, they are intelligent, they certainly have full rights, but they never feel any pain. So that example—that you must feel pain in order to reach those levels of intelligence—is simply not true. There are many variations on human beings, for example, not having visual thinking patterns. They think in words, not in images, like most of us. So, even that goes away.
We don’t seem to have a guaranteed set of properties that a human being must have to be considered human. There are human beings who have very low intelligence, maybe severe mental retardation. They are still human beings. So, there are very different standards for, a) getting human rights, and, b) having all those properties.
Right. Okay. You advocate—to use your words from earlier in this talk—putting the artificial intelligence in a prison. Is that view—we need to lock it up before we even make it—really, in your mind, the best approach?
I wouldn’t be doing it if I didn’t think it was. We definitely need safety mechanisms in place. There are some good ideas we have, for how to make those systems safer, but all of them require testing. Software requires testing. Before you run it, before you release it, you need a test environment. This is not controversial.
What do you think of the OpenAI initiative, which is the idea that as we’re building this we ought to share and make it open source, so that there’s total transparency, so that one bad actor doesn’t get an AGI, and so forth? What are your thoughts on that?
This helps to distribute power amongst humans, so not a single person gets all the power, but a lot of people have access. But at the same time, it increases danger, because all the crazies, all the psychopaths, now get access to the cutting-edge AI, and they can use it for whatever purposes they want. So, it’s not clear cut whether it’s very beneficial or very harmful. People disagree strongly on OpenAI, specifically.
You don’t think that the prospects for humans to remain the dominant species on this planet are good. I remember seeing an Elon Musk quote, he said, “The only reason we are at the top is because we’re the smartest, and if we’re not the smartest anymore, we’re no longer going to be on top.” It sounds like you think something similar to that.
Absolutely, yes. To paraphrase, or quote directly, from Bill Joy, “The future may not need us.”
What do you do about that?
That’s pretty much all of my research. I’m trying to figure out if the problem of AI control, controlling intelligent agents, is actually solvable. A lot of people are working on it, but we never have actually established that it’s possible to do. I have some theoretical results of mine, and from other disciplines, which show certain limitations to what can be done. It seems that intelligence, and how controllable something is, are inversely related. The more intelligent a system becomes, the less control we have over it.
Things like babies have very low intelligence, and we have almost complete control over them. As they grow up, as they become teenagers, they get smarter, but we lose more and more control. With super-intelligent systems, obviously, you have almost no control left.
Let’s back up now, and look at the here and now, and the implications. There’s a lot of debate about AI, and not even talking about an AGI, just all the stuff that’s wrapped up in it, about automation, and it’s going to replace humans, and you’re going to have an unemployable group of people, and social unrest. You know all of that. What are your thoughts on that? What do you see for the immediate future of humanity?
Right. We’re definitely going to have a lot of people lose their jobs. I’m giving a talk for a conference of accountants soon, and I have the bad news to share with them, that something like ninety-four percent of them will lose their jobs in the next twenty years. It’s the reality of it. Hopefully, the smart people will find much better jobs, other jobs.
But for many, many people, who don’t have education, or maybe don’t have cognitive capacity; they will no longer be competitive in this economy, and we’ll have to look at things like unconditional basic income, unconditional basic assets, to, kind of prevent revolutions from happening.
AI is going to advance much faster than robots, which have all these physical constraints, and can’t just double over the course of eighteen months. Would you be of the mind that mental tasks, mental jobs, are more at risk than physical jobs, as a general group?
It’s more about how repetitive your job is. If you’re doing something the same, whether it’s physical or mental, it’s trivial to automate. If you’re always doing something somewhat novel, now that’s getting closer to AI completeness. Not quite, but in that direction, so it’s much harder.
In two hundred and fifty years, this country, the West, has had economic progress; we’ve had technological revolutions which could, arguably, be on the same level as the artificial intelligence revolution. We had mechanization, the replacement of human power with animal power, the electrification of industry, the adoption of steam, and all of these appeared to be very disruptive technologies.
And yet, through all of that, unemployment, except for the Great Depression, never has bumped out of four to nine percent. You would assume, if technology was able to rapidly displace people, that it would be more erratic than that. You would have these massive transforming industries, and then you would have some period of high unemployment, and then that would settle back down.
So, the theory around that would be that, no, the minute we build a new tool, humans just grab that thing, and use it to increase their own productivity, and that’s why you never have anything outside of four to nine percent unemployment. What’s wrong with that logic, in your mind?
You are confusing tools and agents. AI is not a tool. AI is an independent agent, which can possibly use humans as tools, but not the other way around. So, the examples of saying we had previous singularities, whether it’s cultural or industrial, they are just wrong. You are comparing apples and potatoes. Nothing in common.
So, help me understand that a little better. Unquestionably, technology has come along, and, you know, I haven’t met a telephone switchboard operator in a long time, or a travel agent, or a stockbroker, or typewriter repairman. These were all jobs that were replaced by technology, and whatever word you put on the technology doesn’t really change that simple fact. Technology came out, and it upset the applecart in the employment world, and yet, unemployment never goes up. Help me understand why AI is different again, and forgive me if I’m slow here.
Sure. Let’s say you have a job, you nail roofs to houses, or something like that. So, we give you a new tool, and now you can have a nail gun. You’re using this tool, you become ten times more efficient, so nine of your buddies lose jobs. You’re using a tool. The nail gun will never decide to start a construction company, and go into business on its own, and fire you.
The technology we’re developing now is fundamentally different. It’s an agent. It’s capable—and I’m talking about the future of AI, not the AI of today—it’s capable of self-improvement. It’s capable of cross-domain learning. It’s as smart as, or smarter than, any human. So, it’s capable of replacing you. You become a bottleneck in that hybrid system. You no longer hold the gun. You have nothing to contribute to the system.
So, it’s very easy to see that all jobs will be fully automated. The logic always was, the job which is automated is gone, but now we have this new job which we don’t know how to automate, so you can get a new, maybe better, job doing this advanced technology control. But if every job is automated, I mean, by definition, you have one hundred percent unemployment. There are still jobs, kind of prestige jobs, because it’s a human desire to get human-made art, or maybe handmade items, expensive and luxury items, but they are a tiny part of the market.
If AI can do better in any domain, humans are not competitive, so all of us are going to lose our jobs. Some sooner, some later, but I don’t see any job which cannot be automated, if you have human level intelligence, by definition.
So, your thesis is that, in the future, once the AI’s pass our abilities, even a relatively small amount, every new job that comes along, they’ll just learn quicker than we will and, therefore, it’s kind of like you never find any way to use it. You’re always just superfluous to the system.
Right. And the new jobs will not be initially designed for a human operator. They’ll basically be streamlined for machines in the first place, so we won’t have any competitive advantage. Right now, for example, our cars are designed for humans. If you want to add a self-driving component, you have to work with the wheel and the brake pedals and all that to make the switch.
Whereas, if from the beginning, you’re designing it to work with machines; you have smart roads, smart signs, humans are not competitive at any point. There is never an entry point where a human has a better answer.
Let me do a sanity check at this point, if I could. So, humans have a brain that has a hundred billion neurons, and countless connections between them, and it’s something we don’t really understand very well. And it perhaps has emergent properties which give us a mind, that give us creativity, and so forth, but it’s just simple emergence.
We have this thing called consciousness. I know you say it’s not scientific, but if you believe that you’re conscious, then you have to grapple with the fact that whatever that is, is a requisite for you being intelligent.
So, we have a brain we don’t understand, an emergent mind we don’t understand, a phenomenon of “consciousness” which is the single fact we are most aware of in our own life, and all of that makes us this. Meanwhile, I have simple pieces of hardware that I’m mightily delighted when they work correctly.
What you’re saying is… It seems you have one core assumption, which is that in the end, the human brain is a machine, and we can make a copy of that machine, and it’s going to do everything a human machine can do, and even better. That, some might argue, is the non-scientific leap. You take something we don’t understand, that has emergent properties we don’t understand, that has consciousness, which we don’t understand, and you say, “oh yes, it’s one hundred percent certain we’re going to be able to exceed our own intelligence.”
Kevin Kelly calls that a Cargo Cult. It’s like this idea that, oh well, if we just build something just like it, it’s going to be smarter than us. It smacks to some of being completely unscientific. What would you say to that?
One, it’s already smarter than us, in pretty much all domains. Whatever you’re talking about, playing games, investing in the stock market… You take a single domain where we know what we’re doing, and it seems like machines are either already at a human level, or quickly surpassing it, so it’s not crazy to think that this trend will continue. It’s been going on for many years.
I don’t need to fully understand the system to do better than a system. I don’t know how to make a bird. I have no idea how the cells in a bird work. It seems to be very complex. But, I take airplanes to go to Europe, not birds.
Can you explain that sentence that you just said, “Domains where we know what we are doing”? Isn’t that kind of the whole point, is that there’s this big area of things where we don’t know what we’re doing, and where we don’t understand how humans have the capabilities? How are they able to solve non-algorithmic problems? How are humans able to do the kind of transferred learning we do, where we know one thing, in one domain, and we’re really good at applying it in others?
We don’t know how children learn, how a two-year-old gets to be a full AGI. So, granted, in the domains where we know what we are doing, all six of them… I mean look, let’s be real: just to beat humans at one game, chess, took a multi-billion-dollar company spending untold millions of dollars, all of the mental effort of many, many people, working for years. And then you finally—and it’s one tiny game—get a computer that can do better than a human.
And you say, “Oh, well. That’s it, then. We’re done. They can do anything, now.” That seems to extrapolate beyond what the data would suggest.
Right. I’m not saying it’s happening now. I’m not saying computers today are capable of those things. I’m saying there is not a reason for why it will not be true in the maybe-distant future. As I said, I don’t make predictions about the date. I’m just pointing out that if you can pick a specific domain of human activity, and you can explain what they do in that domain—it’s not some random psychedelic performance, but actually “this is what they do”—then you have to explain why a computer will never be able to do that.
Fair enough. Assuming all of that is going to happen, that gradually, one thing by one thing by one thing, computers will shoot ahead of us, and obsolete us, and I understand you’re not picking dates, but presumably, we can stack-rank the order of things to some very coarse degree… The most common question I get from people is, “Well, what should I study? What should my kids study, in order to be relevant, to have jobs in the future?”
You’re bound to get that question, and what would you say to it?
That goes directly to my paper on AI completeness. Basically, what is the last job to be automated? It’s the person doing AI research. Someone who is advancing machine learning. The moment machines can automate that, there are no other jobs left. But that’s the last job to go.
So, definitely study computer science, study machine learning, study artificial intelligence. Anything which helps you in those fields—mathematics, physics—will be good for you. Don’t major in areas, in domains, which we already know will be automated by the time you graduate. As part of my job I advise students, and I would never advise someone to become a cab driver.
It’s funny; Mark Cuban, who’s not necessarily in the field but has really interesting thoughts about it, said that if he were starting over, he would be a philosophy major, and not pursue a technical job, because the technical jobs are actually probably the easiest things for machines to do. That’s kind of in their own backyard. But the more abstract it is, in a sense, the longer it would take a computer to be able to do it. What would you say to that?
I agree. It’s an awesome job, and if you can get one of those hundred jobs in the world, I say go for it. But the market is pretty small and competitive, whereas for machine learning, it’s growing exponentially. It’s paying well, and you can actually get in.
You mentioned the consciousness paper you’re working on. When will that come out?
That’s a finished draft, and it’s just a survey paper of different methods people propose to detect or measure consciousness. It’s under review right now. We’re working on some revisions. But basically, we reviewed everything we could find in the last ten to fifteen years, and all of them measure some side effect of what people or animals do. They never actually try to measure consciousness itself.
There are some variants which deal with quantum physics, and collapse of wave functions, and Copenhagen interpretations, and things like that; but even that is not well-defined. It’s more of a philosophical kind of an argument. So, it seems like there is this concept, but nobody can tell me what it does, why it’s useful, and how to detect it or measure it.
So, it seems to be somewhat unscientific. Saying that, “Okay, but you feel it in you,” is not an argument. I know people who say, “I hear the voice of Jesus speaking to me.” Should I take that as a scientific theory, and study it? Just because someone is experiencing it doesn’t make it a scientific concept.
Tantalize us a little bit with some of the other things you’re working on, or some of the exciting things that you might be publishing soon.
As I said, I’m looking at, kind of, limitations of what we can do in the AI safety field. One problem I’m looking at is this idea of verifiability. What can be verified scientifically, specifically in mathematical proofs and computer software? Trying to write very good software, with no bugs, is kind of a fundamental holy grail of computer science, computer security, cyber security. There is a lot of very good work on it, but it seems there are limitations on how good we can get. We can remove most bugs, but usually not all bugs.
If you have a system which makes a billion decisions a second, and there is a one in a billion chance that it’s getting something wrong, those mistakes quickly accumulate. Also, there is almost no successful work on how to do software verification for AI in novel domains, systems capable of learning. All of the verification work we know about is for kind of deterministic software, and specific domains.
We can do airplane autopilot software, things like that, and verify it very well, but not something with this ability to learn and self-improve. That’s a very hard, open area of research.
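To put numbers on that accumulation argument: at a billion decisions per second with a one-in-a-billion chance of error per decision, you expect roughly one error every second, and the odds of an error-free hour are effectively zero. A quick back-of-the-envelope check, using only the rates he quotes:

```python
decisions_per_second = 1_000_000_000
error_probability = 1e-9                 # one-in-a-billion chance per decision

# Expected errors = decisions * probability of error per decision
errors_per_second = decisions_per_second * error_probability   # = 1.0
errors_per_day = errors_per_second * 86_400                    # = 86,400

# Probability of a completely error-free hour:
decisions_per_hour = decisions_per_second * 3_600
p_clean_hour = (1 - error_probability) ** decisions_per_hour   # ~e^-3600, effectively zero

print(errors_per_second, errors_per_day, p_clean_hour)
```

Verification that would be acceptable for ordinary software is nowhere near acceptable at that decision rate, which is part of why he calls the problem hard.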
Two final questions, if I can. The first one is—I’m sure you think through all of these different kinds of scenarios; this could happen or that could happen—what would happen, in your view, if a single actor, be it a company or a government, or what have you; a single actor invented a super-intelligent system? What would you see the ripple effects of that being?
That’s basically what singularity is, right? We get to that point where machines are the ones inventing and discovering, and we can no longer keep up with what’s going on. So, making a prediction about that is, by definition, impossible.
The most important point I’d like to stress—if they just happen to do it somehow, by some miracle, without any knowledge or understanding of safety and control, just created a random very smart system, in that space of possible minds—there is almost a guarantee that it’s a very dangerous system, which will lead to horrible consequences for all of us.
You mentioned that the first AGI is priceless, right? It’s worth countless trillions of dollars.
Right. It’s basically free labor of every kind—physical, cognitive—it is a huge economic benefit, but if in the process of creating that benefit, it destroys humanity, I’m not sure money is that valuable to you in that scenario.
The final question: You have a lot of scenarios. It seems your job is to figure out, how do we get into this future without blowing ourselves up? Can you give me the optimistic scenario; the one possible way we can get through all of this? What would that look like to you? Let’s end on the optimistic note, if we can.
I’m not sure I have something very good to report. It seems like long-term, everything looks pretty bleak for us. Either we’re going to merge with machines, and eventually become a bottleneck which will be removed, or machines will simply take over, and we’ll become quite dependent on them deciding what to do with us.
It could be a reasonably okay existence, with machines treating us well, or it could be something much worse. But short of some external catastrophic change preventing development of this technology, I don’t see a very good scenario, where we are in charge of those god-like machines and getting to live in paradise. It just doesn’t seem very likely.
So, when you hear about, you know, some solar flare that just missed the Earth by six hours of orbit or something, are you sitting there thinking, “Ah! I wish it had hit us, and just fried all of these things. It would buy humanity another forty years to recover.” Is that the best scenario, that there’s a button you could push that would send a giant electromagnetic pulse and just destroy all electronics? Would you push the button?
I don’t advocate any terrorist acts, natural or human-caused, but it seems like it would be a good idea if people smart enough to develop this technology, were also smart enough to understand possible consequences, and acted accordingly.
Well, this has been fascinating, and I want to thank you for taking the time to be on the show.
Thank you so much for inviting me. I loved it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 17: A Conversation with James Barrat

[voices_in_ai_byline]
In this episode, Byron and James talk about jobs, human vs. artificial intelligence, and more.
[podcast_player name="Episode 17: A Conversation with James Barrat" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-30-(00-54-11)-james-barrat.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-3-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I am Byron Reese. Today I am so excited that our guest is James Barrat. He wrote a book called Our Final Invention, subtitled Artificial Intelligence and the End of the Human Era. James Barrat is also a renowned documentary filmmaker, as well as an author. Welcome to the show, James.
James Barrat: Hello.
So, let’s start off with, what is artificial intelligence?
Very good question. Basically, artificial intelligence is when machines perform tasks that are normally ascribed to human intelligence. I have a very simple definition of intelligence that I like. Because ‘artificial intelligence’—the definition just throws the ideas back to humans, and [to] human intelligence, which is the intelligence we know the most about.
The definition I like is: intelligence is the ability to achieve goals in a variety of novel environments, and to learn. And that’s a simple definition, but a lot is packed into it. Your intelligence has to achieve goals, it has to do something—whether that’s play Go, or drive a car, or solve proofs, or navigate, or identify objects. And if it doesn’t have some goal that it achieves, it’s not very useful intelligence.
If it can achieve goals in a variety of environments, if it can do object recognition and do navigation and do car-driving like our intelligence can, then it’s better intelligence. So, it’s goal-achieving in a bunch of novel environments, and then it learns. And that’s probably the most important part. Intelligence learns and it builds on its learning.
And you wrote a well-received book, Our Final Invention. Can you explain to the audience your overall thesis, and the main ideas of the book?
Sure. Our Final Invention is basically making the argument that AI is a dual-use technology. A dual-use technology is one that can be used for great good, or great harm. Right now we’re in a real honeymoon phase of AI, where we’re seeing a lot of nifty tools come out of it, and a lot more are on the horizon. AI, right now, can find cancer clusters in x-rays better than humans. It can do business analytics better than humans. AI is doing what first-year legal associates do: it’s doing legal discovery.
So we are finding a lot of really useful applications. It’s going to make us all better drivers, because we won’t be driving anymore. But it’s a dual-use technology because, for one thing, it’s going to be taking a lot of jobs. You know, there are five million professional drivers in the United States, seven million back-office accountants—those jobs are going to go away. And a lot of others.
So the thesis of my book is that we need to look under the hood of AI, look at its applications, look who’s controlling it, and then in a longer term, look at whether or not we can control it at all.
Let’s start with that point and work backwards. That’s an ominous statement. Can we record it at all? What are you thinking there?
Can we control it at all.
I’m sorry, yes. Control it at all.
Well, let me start, I prefer to start the other way. Stephen Hawking said that the trouble with AI is, in the short term, who controls it, and in the long term, can we control it at all? And in the short term, we’ve already suffered some from AI. You know, the NSA recently was accessing your phone data and mine, and getting your phone book and mine. And it was, basically, seizing our phone records, and that used to be illegal.
Used to be that if I wanted to seize, to get your phone records, I needed to go to a court, and get a court order. And that was to avoid abridging the Fourth Amendment, which prevents illegal search and seizure of property. Your phone messages are your property. The NSA went around that, and grabbed our phone messages and our phone data, and they are able to sift through this ocean of data because of AI, because of advanced data mining software.
One other example—and there are many—one other example of, in the short term, who controls the AI, is, right now there are a lot of countries developing battlefield robots and drones that will be autonomous. And these are robots and drones that kill people without a human in the loop.  And these are AI issues. There are fifty-six nations developing battlefield robots.
The most sought after will be autonomous battlefield robots. There was an article just a couple of days ago about how the Marines have a robot that shoots a machine gun on a battlefield. They control it with a tablet, but their goal, as stated there, is to make it autonomous, to work on its own.
In the longer term, I’ll put it the way that Arthur C. Clarke put it to me when I interviewed him. Arthur C. Clarke was a mathematician and a physicist before he was a science fiction writer. And he created the HAL 9000 from 2001: A Space Odyssey, probably the most famous homicidal AI. When I asked him about the control problem of artificial intelligence, he said something like this: “We humans steer the future not because we are the fastest or the strongest creatures, but because we are the most intelligent. And when we share the planet with something that’s more intelligent than we are, it will steer the future.”
So the problem we’re facing, the problem we’re on the cusp of, I can simplify it with a concept called ‘the intelligence explosion’. The intelligence explosion was an idea created by a statistician named I. J. Good in the 1960s. He said, “Once we create machines that do everything as well or better than humans, one of the things they’ll do is create smart machines.”
And we’ve seen artificial intelligence systems slowly begin to do things better than we do, and it’s not a stretch to think about a time to come when artificial intelligence systems do advanced AI research and development better than humans. And I. J. Good said, “Then, when that happens, we humans will no longer set the pace of intelligence advancement; it will be machines that set the pace of advancement.”
The trouble of that is, we know nothing about how to control a machine, or a cognitive architecture, that’s a thousand or million times more intelligent than we are. We have no experience with anything like that. We can look around us for analogies in the animal world.
How do we treat things that we’re a thousand times more intelligent than? Well, we treat all animals in a very negligent way. And the smart ones are either endangered, or they’re in zoos, or we eat them. That’s a very human-centric analogy, but I think it’s probably appropriate.
Let’s push on this just a little bit.  So do you…
Sure.
Do you believe… Some people say ‘AI’ is kind of this specter of a term now, that, it isn’t really anything different than any other computer programs we’ve ever run, right? It’s better and faster and all of that, but it isn’t qualitatively anything different than what we’ve had for decades.
And so why do you think that? And when you say that AIs are going to be smarter than us, a million times smarter than us, ‘smarter’ is also a really nebulous term.
I mean, they may be able to do some incredibly narrow thing better than us. I may not be able to drive a car as well as an AI, but that doesn’t mean that same AI is going to beat me at Parcheesi. So what do you think is different? Why isn’t this just incrementally… Because so far, we haven’t had any trouble.
What do you think is going to be the catalyst, or what is qualitatively different about what we are dealing with now?
Sure. Well, there’s a lot of interesting questions packed into what you just said. And one thing you said—which I think is important to draw out—is that there are many kinds of intelligence. There’s emotional intelligence, there’s rational intelligence, there’s instinctive and animal intelligence.
And so, when I say something will be much more intelligent than we are, I’m using a shorthand for: It will be better at our definition of intelligence, it will be better at solving problems in a variety of novel environments, it will be better at learning.
And to put what you asked in another way, you’re saying that there is an irreducible promise and peril to every technology, including computers. All technologies, back to fire, have some good points and some bad points. AI I find qualitatively different. And I’ll argue by analogy, for a second. AI to me is like nuclear fission. Nuclear fission is a dual-use technology capable of great good and great harm.
Nuclear fission is the power behind atom bombs and behind nuclear reactors. When we were developing it in the ‘20s and ‘30s, we thought that nuclear fission was a way to get free energy by splitting the atom. Then it was quickly weaponized. And then we used it to incinerate cities. And then we as a species held a gun at our own heads for fifty years with the arms race. We threatened to make ourselves extinct. And that almost succeeded a number of times, and that struggle isn’t over.
To me, AI is a lot more like that. You said it hasn’t been used for nefarious reasons, and I totally disagree. I gave you an example with the NSA. A couple of weeks ago, Facebook was caught out because they were targeting emotionally challenged and despairing children for advertising.
To me, that’s extremely exploitative. It’s a rather soulless and exploitative commercial application of artificial intelligence. So I think these pitfalls are around us. They’re already taking place. So I think the qualitative difference with artificial intelligence is that intelligence is our superpower, the human superpower.
It’s the ability to be creative, the ability to invent technology. That was one thing Stephen Hawking brought up when he was asked about, “What are the pitfalls of artificial intelligence?”
He said, “Well, for one thing, they’ll be able to develop weapons we don’t even understand.” So, I think the qualitative difference is that AI is the invention that creates inventions. And we’re on the cusp, this is happening now, and we’re on the cusp of an AI revolution, it’s going to bring us great profit and also great vulnerability.
You’re no doubt familiar with Searle’s “Chinese Room” kind of question, but all of the readers, all of the listeners might not be… So let me set that up, and then get your thought on it. It goes like this:
There’s a person in a room, a giant room full of very special books. And he doesn’t—we’ll call him the librarian—and the librarian doesn’t speak a word of Chinese. He’s absolutely unfamiliar with the language.
And people slide him questions under the door which are written in Chinese, and what he does—what he’s learned to do—is to look at the first character in that message, and he finds the book, of the tens of thousands that he has, that has that on the spine. And in that book he looks up the second character. And the book then says, “Okay, go pull this book.”
And in that book he looks up the third, and the fourth, and the fifth, all the way until he gets to the end. And when he gets to the end, it says “Copy this down.” And so he copies these characters again that he doesn’t understand, doesn’t have any clue whatsoever what they are.
He copies them down very carefully, very faithfully, slides it back under the door… Somebody’s outside who picks it up, a Chinese speaker. They read it, and it’s just brilliant! It’s just absolutely brilliant! It rhymes, it’s Haiku, I mean it’s just awesome!
Now, the question, the kind of ta-da question at the end is: Does the man, does the librarian understand Chinese? Does he understand Chinese?
Now, many people in the computer world would say yes. I mean, Alan Turing would say yes, right?  The Chinese room passes the Turing Test. The Chinese speakers outside, as far as they know, they are conversing with a Chinese speaker.
So do you think the man understands Chinese? And do you think… And if he doesn’t understand Chinese… Because obviously, the analogy of it is: that’s all that computer does. A computer doesn’t understand anything. It doesn’t know if it’s talking about cholera or coffee beans or anything whatsoever. It runs this program, and it has no idea what it’s doing.
And therefore it has no volition, and therefore it has no consciousness; therefore it has nothing that even remotely looks like human intelligence. So what would you just say to that?
The Chinese Room problem is fascinating, and you could write books about it, because it’s about the nature of consciousness. And what we don’t know about consciousness, you could fill many books with. And I used to think I wanted to explore consciousness, but it made exploring AI look easy.
I don’t know if it matters whether the machine thinks as we do or not. I think the point is that it will be able to solve problems. We don’t know about the volition question. Let me give you another analogy. When Ferrucci was the head of Team Watson, he was asked a very provocative question: “Was Watson thinking when it beat all those masters at Jeopardy?” And his answer was, “Does a submarine swim?”
And what he meant—and this is the twist on the Chinese Room problem—was that when they created submarines, they learned principles of swimming from fish. But then they created something that swims farther and faster and carries a huge payload, so it’s really much more powerful than fish.
It doesn’t reproduce and it doesn’t do some of the miraculous things fish do, but as far as swimming, it does it.  Does an airplane fly? Well, the aviation pioneers used principles of flight from birds, but quickly went beyond that, to create things that fly farther and faster and carry a huge payload.
So, two answers to your question. One is, I don’t think it matters. And two, I don’t think it’s possible that a machine will think qualitatively as we do. So, I think it will think farther and faster and carry a huge payload. I think it’s possible for a machine to be generally intelligent in a variety of domains.
We can see intelligence growing in a bunch of domains. If you think of them as rippling pools, ripples in a pool, like different circles of expertise ultimately joining, you can see how general intelligence is sort of demonstrably on its way.
Whether or not it thinks like a human, I think it won’t. And I think that’s a danger, because I think it won’t have our mammalian sense of empathy. It’ll also be good, because it won’t have a lot of sentimentality, and a lot of cognitive biases that our brains are labored with. But you said it won’t have volition. And I don’t think we can bet on that.
In my book, Our Final Invention, I interviewed at length Steve Omohundro, an AI maker and physicist who has taken it upon himself to create more or less a science for understanding super-intelligent machines, or machines that are more intelligent than we are.
And among the things that he argues for, using rational agent and economic theory—and I won’t go into that whole thing, but it’s in Our Final Invention, and it’s also on Steve Omohundro’s many websites—is that machines that are self-aware and are self-programming will, he thinks, develop basic drives that are not unlike our own.
And they include things like self-protection, creativity, efficiency with resources, and other drives that will make them very challenging to control—unless we get ahead of the game and create this science for understanding them, as he’s doing.
Right now, computers are not generally intelligent, they are not conscious. All the limitations of the Chinese Room, they have. But I think it’s unrealistic to think that we are frozen in development. I think it’s very realistic to think that we’ll create machines whose cognitive abilities match and then outstrip our own.
But, just kind of going a little deeper on the question. So we have this idea of intelligence, which there is no consensus definition on it. Then within that, you have human intelligence—which, again, is something we certainly don’t understand. Human intelligence comes from our brain, which is—people say—‘the most complicated object in the galaxy’.
We don’t understand how it works. We don’t know how thoughts are encoded. We know incredibly little, in the grand scheme of things, about how the brain works. But we do know that humans have these amazing abilities, like consciousness, and the ability to generalize intelligence very effortlessly. We have something that certainly feels like free will, we certainly have something that feels like… and all of that.
Then on the other hand, you think back to a clockwork, right? You wind up a clock back in the olden days and it just ran a bunch of gears. And while it may be true that the computers of today add more gears and have more things, all we’re doing is winding it up and letting it go.
And, isn’t it, like… not only a stretch, not only a supposition, not only just sensationalistic, to say, “Oh no, no. Someday we’ll add enough gears that, you wind that thing up, and it’s actually going to be a lot smarter than you.”
Isn’t that, I mean at least it’s fair to say there’s absolutely nothing we understand about human intelligence, and human consciousness, and human will… that even remotely implies that something that’s a hundred percent mechanical, a hundred percent deterministic… You just wind it up and let it go. But…
Well, you’re wrong about it being a hundred percent deterministic, and it’s not really a hundred percent mechanical. When you talk about things like will—will is such an anthropomorphic term, I’m not sure we can really attribute it to computers.
Well, I’m specifically saying we have something that feels and seems like will, that we don’t understand.
If you look at artificial neural nets, there’s a great deal about them we don’t understand. We know what the inputs are, and we know what the outputs are; and when we want a better output—like a better translation—we know how to adjust the inputs. But we don’t know what’s going on inside a multilayered neural net system, not in any high-resolution way. That’s why they’re called black-box systems. The same goes for evolutionary algorithms.
In evolutionary algorithms, we have a sense of how they work. We have a sense of how they combine pieces of algorithms, how we introduce mutations. But often, we don’t understand the output, and we certainly don’t understand how it got there, so that’s not completely deterministic. There’s a bunch of stuff we can’t really determine in there.
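[Editor’s note: a minimal sketch of the evolutionary-algorithm loop described above—combine pieces of candidate solutions, introduce mutations, keep the fittest, repeat. This is an editorial illustration only; the toy fitness function and all parameters are invented for the example and are not from the interview.]

```python
import random

def fitness(bits):                        # toy goal: maximize the number of 1s
    return sum(bits)

def evolve(pop_size=30, length=20, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)            # crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                     # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(fitness(evolve()))                  # usually close to the maximum of 20
```

Even in a toy like this, the path from a random population to a good answer is not something you read off the code; you see the inputs, the operators, and what comes out.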
And I think we’ve got a lot of unexplained behavior in computers that, at this stage, we simply attribute to our lack of understanding. But I think in the longer term, we’ll see that computers are doing things on their own. I’m talking about a lot of the algorithms on Wall Street, a lot of the flash crashes we’ve seen, a lot of the cognitive architectures. There’s not one person who can describe the whole system—not even the ‘quants,’ as they call them, the people programming Wall Street’s algorithms.
They’ve already gone, in complexity, beyond any individual’s ability to really strip them down.
So, we’re surrounded by systems of immense power. Gartner and company think that in the AI space—because of the exponential nature of the investment, which I think has doubled every year since 2009—by 2025, that space will be worth twenty-five trillion dollars. So to me, that says a couple of things.
That anticipates enormous growth, and enormous growth in power in what these systems will do. We’re in an era now that’s different from other eras. But it is like other Industrial Revolutions. We’re in an era now where everything that’s electrified—to paraphrase Kevin Kelly, the futurist—everything that’s electrified is being cognitized.
We can’t pretend that it will always be like a clock. Even now it’s not like a clock. A clock you can take apart, and you can understand every piece of it.
The cognitive architectures we’re creating now are different. When Ferrucci was watching Watson play, he would ask, “Why did it answer like that?”—and there was nobody on his team who knew the answer. When it made mistakes… It did really, really well; it beat the humans. But comparing that to a clock—I think that’s the wrong metaphor.
Well, let’s just poke at it just one more minute, and then we can move on to something else. Is that really fair to say, that because humans don’t understand how it works, it must be somehow working differently than other machines?
Put another way, it is fair to say that we’ve added enough gears now that nobody could keep them all straight. I mean, nobody understands why the Google algorithm—even at Google—turns up what it does when you search. But nobody’s suggesting that anything nondeterministic, anything emergent, is happening.
I mean, our computers are completely deterministic, are they not?
I don’t think that they are. I think if they were completely deterministic, then enough brains put together could figure out a multi-tiered neural net, and I don’t think there’s any evidence that we can right now.
Well, that’s exciting.  
I’m not saying that it’s coming up with brilliant new ideas… But it’s a system so sophisticated that it defeats Go, and teaches grandmasters new ideas about Go—which is what the grandmaster it defeated three out of four times said: “I have new insights about this game.” Nobody could explain what it was doing, but it was thinking creatively in a way that we don’t understand.
Go is not like chess. On a chess board, I don’t know how many possible positions there are, but it’s calculable. On a Go board, it’s incalculable. I’ve heard it said—and I don’t really understand it very well—that there are more possible positions on a Go board than there are atoms in the universe.
So when it’s beating Go masters… Playing the game requires a great deal of intuition. It’s not just pattern-matching—“I’ve played a million games of Go”—which is sort of what chess is.
You know, the grandmasters are people who have seen every board you could possibly come up with. They’ve probably seen it before, and they know what to do. Go’s not like that. It requires a lot more undefinable intuition.
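[Editor’s note: the order of magnitude here is easy to check. A rough sketch: each of the 19×19 = 361 points on a Go board can be empty, black, or white, so 3^361 is an upper bound on board configurations; the count of legal positions is known to be about 2.1 × 10^170, and either figure dwarfs the roughly 10^80 atoms usually estimated for the observable universe.]

```python
# Upper bound on Go board configurations: 3 choices for each of 361 points.
upper_bound = 3 ** 361
print(f"3^361 is roughly 10^{len(str(upper_bound)) - 1}")   # about 10^172
# Atoms in the observable universe are usually estimated at ~10^80.
```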
And so we’re moving rapidly into that territory. The program that beat the Go masters is called AlphaGo. It comes out of DeepMind, which was bought four years ago by Google, and it goes deep into reinforcement learning and artificial neural nets. I think your argument would be apt if we were talking about some of the old languages—Fortran, Basic, Pascal—where you could look at every line of code and figure out what was going on.
That’s no longer possible, and you’ve got Go grandmasters saying “I learned new insights.” So we’re in a brave new world here.
So you had a great part of the book, where you do a really smart kind of roll-up of when we may have an AGI. Where you went into different ideas behind it. And the question I’m really curious about is this: On the one hand, you have Elon Musk saying we can have it much sooner than you think. You have Stephen Hawking, who you quoted. You have Bill Gates saying he’s worried about it.
So you have all of these people who say it’s soon, it’s real, and it’s potentially scary. We need to watch what we do. Then on the other camp, you have people who are equally immersed in the technology, equally smart, equally, equally, equally all these other things… like Andrew Ng, who up until recently headed up AI at Baidu, who says worrying about AGI is like worrying about overpopulation on Mars. You have other people saying the soonest it could possibly happen is five hundred years from now.
So I’m curious about this. Why do you think, among these big brains, super smart people, why do they have… What is it that they believe or know or think, or whatever, that gives them such radically different views about this technology? How do you get your head around why they differ?
Excellent question. I first heard that Mars analogy from, I think it was Sebastian Thrun, who said we don’t know how to get to Mars. We don’t know how to live on Mars. But we know how to get a rocket to the moon, and gradually and slowly, little by little—no, it was Peter Norvig, who co-wrote the sort of standard text on artificial intelligence, Artificial Intelligence: A Modern Approach.
He said, you know, “We can’t live on Mars yet, but we’re putting the rockets together. Some companies are putting in some money. We’re eventually going to get to Mars, and there’ll be people living on Mars, and then people will be setting another horizon.” We haven’t left our solar system yet.
It’s a very interesting question, and very timely, about when will we achieve human-level intelligence in a machine, if ever. I did a poll about it. It was kind of a biased poll; it was of people who were at a conference about AGI, about artificial general intelligence. And then I’ve seen a lot of polls, and there’s two points to this.
One is the polls go all over the place. Ray Kurzweil says 2029. Ray Kurzweil’s been very good at anticipating the progress of technology, and he says 2029. Ray Kurzweil’s working for Google right now—this is parenthetical—and he’s said he wants to create a machine that makes three hundred trillion calculations per second, and to share that with a billion people online. So what’s that? That’s basically reverse-engineering a brain.
Making three hundred trillion calculations per second, which is sort of a rough estimate of what a brain does. And then sharing it with a billion people online, which is making superintelligence a service, which would be incredibly useful. You could do pharmacological research. You could do really advanced weather modeling, and climate modeling. You could do weapons research, you could develop incredible weapons. He says 2029.
Some people said one hundred years from now. The mean date that I got was about 2045 for human-level intelligence in a machine. And then my book, Our Final Invention, got reviewed by Gary Marcus in the New Yorker, and he said something that stuck with me. He said whether or not it’s ten years or one hundred years, the more important question is: What happens next?
Will it be integrated into our lives? Or will it suddenly appear? How are we positioned for our own safety and security when it appears, whether it’s in fifty years or one hundred? So I think about it as… Nobody thought Go was going to be beaten for another ten years.
And here’s another way… So those are the two ways to think about it: one is, there are a lot of guesses; and two, what matters more than the date is what happens next. But the third part of that is this, and I write about it in Our Final Invention: If we don’t achieve it in one hundred years, do you think we’re just going to stop? Or do you think we’re going to keep beating at this problem until we solve it?
And as I said before, I don’t think we’re going to create exactly human-like intelligence in a machine. I think we’re going to create something extremely smart and extremely useful, to some extent, but something we, in a very deep way, don’t understand. So I don’t think it’ll be like human intelligence… it will be like an alien intelligence.
So that’s kind of where I am on that. I think it could happen in a variety of timelines. It doesn’t really matter when, and we’re not going to stop until we get there. So ultimately, we’re going to be confronted with machines that are a thousand or a million times more intelligent than we are.
And what are we going to do?
Well, I guess the underlying assumption is… it speaks to the credibility of the forecast, right? Like, if there’s a lab, and they’re working on inventing the lightbulb, like: “We’re trying to build the incandescent light bulb.” And you go in there and you say, “When will you have the incandescent light bulb?” and they say “Three or four weeks, five weeks. Five weeks tops, we’re going to have it.”  
Or if they say, “Uh, a hundred years. It may be five hundred, I don’t know.” I mean in those things you take a completely different view of, do we understand the problem? Do we know what we’re building? Do we know how to build an AGI? Do we even have a clue?
Do you believe… or here, let me ask it this way: Do you think an AGI is just an evolutionary… Like, we have AlphaGo, we have Watson, and we’re making them better every day. And eventually, that kind of becomes—gradually—this AGI. Or do you think there’s some “A-ha” thing we don’t know how to do, and at some point we’re like “Oh, here’s how you do it! And this is how you get a synapse to work.”
So, do you think we are nineteen revolutionary breakthroughs away, or “No, no, no, we’re on the path. We’re going to be there in three to five years.”?
Ben Goertzel, who is definitely in the race to make AGI—I interviewed him in my book—said we need some sort of breakthrough. And then we got artificial neural nets and deep learning, and deep learning combined with reinforcement learning, which is an older technique, and that was kind of a breakthrough. And then people started to win these games: when IBM’s Deep Blue beat chess, it really was just looking up tables of positions.
But to beat Go, as we’ve discussed, was something different.
I think we’ve just had a big breakthrough. I don’t know how many revolutions we are away from a breakthrough that makes intelligence general. But let me give you this… the way I think about it.
There’s long been talk in the AI community about an algorithm… I don’t know exactly what they call it. But it’s basically an open-domain problem-solver that asks something simple like, what’s the next best move? What’s the next best thing to do? Best being based on some goals that you’ve got. What’s the next best thing to do?
Well, that’s sort of how DeepMind took on all the Atari games. They could drop the algorithm into a game, and it didn’t even know the rules. It just noticed when it was scoring or not scoring, and so it was figuring out what’s the next best thing to do.
Well if you can drop it into every Atari game, and then you drop it into something that’s many orders of magnitude above it, like Go, then why are we so far from dropping that into a robot and setting it out into the environment, and having it learn the environment and learn common sense about the environment—like, “Things go under, and things go over; and I can’t jump into the tree; I can climb the tree.”
It seems to me that general intelligence might be as simple as a program that says “What’s the next best thing to do?” And then it learns the environment, and then it solves problems in the environment.
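[Editor’s note: the “what’s the next best thing to do?” loop described above can be sketched as tabular Q-learning, a simple form of reinforcement learning. This is an editorial illustration, not DeepMind’s code; the `env` object—with its `reset()`, `step()`, and `actions` members—is a hypothetical stand-in for any game that only reports observations and a score.]

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)                    # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:     # occasionally explore at random
                action = random.choice(env.actions)
            else:                             # otherwise take the next best action
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q
```

The agent never sees the rules; it only notices when it is scoring or not scoring, which is the behavior described above.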
So some people are going about that by training algorithms—artificial neural net systems—and defeating games. Some people are really trying to reverse-engineer a brain, one neuron at a time. In a nutshell—to vastly overgeneralize—those are called the bottom-up and top-down approaches to creating AGI.
So are we a certain number of revolutions away, or are we going to be surprised? I’m surprised a little too frequently for my own comfort about how fast things are moving. Faster than when I was writing the book. I’m wondering what the next milestone is. I think the Turing Test has not been achieved, or even close. I think that’s a good milestone.
It wouldn’t surprise me if IBM, which is great at issuing itself grand challenges and then beating them… But what’s great about IBM is, they’re upfront. They take on a big challenge… You know, they were beaten—Deep Blue was beaten several times before it won. When they took on Jeopardy, they weren’t sure they were going to win, but they had the chutzpah to get out there and say, “We’re gonna try.” And then they won.
I bet IBM will say, “You know what, in 2020, we’re going to take on the Turing Test. And we’re going to have a machine that you can’t tell that it’s a machine. You can’t tell the difference between a machine and a human.”
So, I’m surprised all the time. I don’t know how far or how close we are, but I’d say I come at it from a position of caution. So I would say, the window in which we have to create safe AI is closing.
Yes, no… I’m with you; I was just taking that in. I’ll insert some ominous “Dun, dun, dun…” Take that a little further.
Everybody has a role to play in this conversation, and mine happens to be canary in a coal mine. Despite the title of my book, I really like AI. I like its potential. Medical potential. I don’t like its war potential… If we see autonomous battlefield robots on the battlefield, you know what’s going to happen. Like every other piece of used military equipment, it’s going to come home.
Well, the thing is, about the military… and the thing about technology is… If you told my dad that he would invite into his home a representative of Google, and that representative would sit in a chair in the corner of the house and take down everything we said, and would sell that data to our insurance company, so our insurance rates might go up; and sell it to mortgage bankers, so they might cut off our ability to get a mortgage—because dad talks about going bankrupt, or dad talks about his heart condition—and he can’t get insurance anymore… he would never have allowed it.
But if we hire a corporate guy, and we pay for it, and put him in our living room… Well, that’s exactly what we’re doing with Amazon Echo, with all the digital assistants. All this data is being gathered all the time, and it’s being sold… Buying and selling data is a four billion dollar-a-year industry. So we’re doing really foolish things with this technology. Things that are bad for our own interests.
So let me ask you an open-ended question… prognostication over shorter time frames is always easier. Tell me what you think is in store for the world, I don’t know, between now and 2030, the next thirteen years. Talk to me about unemployment, talk to me about economics, all of that. Tell me the next thirteen years.
Well, brace yourself for some futurism, which is a giant gamble and often wrong. To paraphrase Kevin Kelly again, everything that’s electrical will be cognitized. Our economy will be dramatically shaped by the ubiquity of artificial intelligence. With the Internet of Things, with the intelligence of everything around us—our phones, our cars…
I can already talk to my car. I’m inside my car, I can ask for directions, I can do some other basic stuff. That’s just going to get smarter, until my car drives itself. MIT did a study, quoting a Cambridge study, that said forty-five percent of our jobs will be able to be replaced within twenty years. I think they downgraded that to something like ten years.
Not that they will be replaced, but that they will be able to be replaced. And when AI is worth twenty-five trillion dollars in 2025, anybody will be able to replace any employee doing anything that’s remotely repetitive—and this includes doctors and lawyers. We’ll be able to replace them with AI.
And this cuts deep into the middle class. This isn’t just people working in factories or driving cars. This is all accountants, this is a lot of the doctors, this is a lot of the lawyers. So we’re going to see giant dislocation, or giant disruption, in the economy. And giant money being made by fewer and fewer people.
And the trouble with that is that we’ve got to figure out a way to keep a huge part of our population from starving, from going without a wage. People have proposed a basic minimum income, but to do that we would need tax revenue. And the big companies—Amazon, Google, Facebook—pay taxes in places like Ireland, where there’s very low corporate tax. They don’t pay taxes where they get their wealth. So they don’t contribute to your roads.
Google is not contributing to your road system. Amazon is not contributing to your water supply, or to making your country safe. So there’s a giant inequity there. So we have to confront that inequity and, unfortunately, that is going to require political solutions, and our politicians are about the most technologically-backward people in our culture.
So, what I see is, a lot of unemployment. I see a lot of nifty things coming out of AI, and I am willing to be surprised by job creation in AI, and robotics, and automation. And I’d like to be surprised by that. But the general trend is… When you replace the biggest contract manufacturer in the world… Foxconn just replaced thirty-thousand people in Asia with thirty-thousand robots.
And all those people can’t be retrained, because if you’re doing something that’s that repetitive, and that mechanical… what can you be retrained to do? Well, maybe one out of every hundred could be a floor manager in a robot factory, but what about all the others? Disruption is going to come from all the people that don’t have jobs, and there’s nothing to be retrained to.
Because our robots are made in factories where robots make the robots. Our cars are made in factories where robots make the cars.
Isn’t that the same argument they used during the Industrial Revolution, when they said, “You got ninety percent of people out there who are farmers, and we’re going to lose all these farm jobs… And you don’t expect those farmers are going to, like, come work in a factory, where they have to learn completely new things.”
Well, what really happened in the different technology revolutions, from the cotton gin onward, is that they hit a small sector at a time… The Industrial Revolution didn’t suddenly put farms out of business. A hundred years ago, ninety percent of people worked on farms; now it’s ten percent.
But what happened with the Industrial Revolution is, sector by sector, it took away jobs, but then those people could retrain, and could go to other sectors, because there were still giant sectors that weren’t replaced by industrialization. There was still a lot of manual labor to do. And some of them could be trained upwards, into management and other things.
This is what Martin Ford wrote about in Rise of the Robots—and there’s also a great book called The Fourth Industrial Age. As they both argue, what’s different about this revolution is that AI works in every industry. So it’s not like the old revolutions, where one sector was replaced at a time, and there was time to absorb that change, time to reabsorb those workers and retrain them in some fashion.
But everybody is going to be… My point is, all sectors of the economy are going to be hit at once. The ubiquity of AI is going to impact a lot of the economy, all at the same time, and there is going to be a giant dislocation all at the same time. And it’s very unclear, unlike in the old days, how those people can be retrained and retargeted for jobs. So, I think it’s very different from other Industrial Revolutions, or rather technology revolutions.
Take the adoption of coal—it went from generating five percent to eighty percent of all of our power in twenty years. The electrification of industry happened incredibly fast. Mechanization, the replacement of animal power with mechanical power, happened incredibly fast. And yet, unemployment has remained between four and nine percent in this country.
Other than the Depression, the economy never even hiccupped—no matter what disruption, no matter what speed you threw at it, it always managed to use that technology to create more jobs. And isn’t it maybe a lack of imagination to say, “Well, no, now we’re out. There are no more jobs to create—or not ones that these people who’ve been displaced can do”?
I mean, isn’t that what people would’ve said for two hundred years?
Yes, that’s a somewhat persuasive argument. I think you’ve got a point that the economy was able to absorb those jobs, and the unemployment remained steady. I do think this is different. I think it’s a kind of a puzzle, and we’ll have to see what happens. But I can’t imagine… Where do professional drivers… they’re not unskilled, but they’re right next to it. And it’s the job of choice for people who don’t have a lot of education.
What do you retrain professional drivers to do once their jobs are taken? It’s not going to be factory work, it’s not going to be simple accounting. It’s not going to be anything repetitive, because that’s going to be the job of automation and AI.
So I anticipate problems, but I’d love to be pleasantly surprised. If it worked like the old days, then all those people cut off the farm would go to work in the factories, and make Ford automobiles, and make enough money to buy one. I don’t see all those displaced drivers going off to factories to make cars, or to manufacture anything.
A case in point of what’s happening: Rethink Robotics, which is Rodney Brooks’ company, built something called Baxter; Baxter is a generation old now, and I can’t think of what replaced it. But it costs about twenty-two thousand dollars to get one of these robots—basically what a minimum-wage worker makes in a year. And they work 24/7, so they cover three shifts; they really are replacing three people.
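[Editor’s note: the payback arithmetic behind that claim is straightforward. A rough sketch, assuming the US federal minimum wage of $7.25 an hour and ignoring benefits, maintenance, and downtime—figures chosen purely for illustration.]

```python
robot_cost = 22_000                       # one-time price quoted for the robot
hourly_wage = 7.25                        # US federal minimum wage, for illustration
hours_per_year = 2_080                    # 40 hours/week * 52 weeks
worker_cost_per_year = hourly_wage * hours_per_year
three_shifts_per_year = 3 * worker_cost_per_year     # the robot runs 24/7
print(f"One worker-year of wages: ${worker_cost_per_year:,.0f}")    # ~$15,000
print(f"Three shifts of wages:    ${three_shifts_per_year:,.0f}")   # ~$45,000
print(f"Robot pays for itself in about "
      f"{robot_cost / three_shifts_per_year:.1f} years")            # ~0.5
```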
Where do those people go? Do they go to shops that make Baxter? Or maybe you’re right, maybe it’s a failure of imagination to not be able to anticipate the jobs that would be created by Baxter and by autonomous cars. Right now, it’s failing a lot of people’s imagination. And there are not ready answers.
I mean, if it were 1995 and you were just hearing about the Internet, just getting online… And somebody said, “You know what? There are going to be a lot of companies that come out and make hundreds of billions of dollars, one after the other, all because we’ve learned how to connect computers and use this hypertext protocol to communicate.” I mean, that would not have seemed like a reasonable surmise.
No, and that’s a great example. If you were told that trillions of dollars of value are going to come out of this invention, who would’ve thought? And maybe I personally, just can’t imagine the next wave that is going to create that much value. I can see how AI and automation will create a lot of value, I only see it going into a few pockets though. I don’t see it being distributed in any way that the Silicon Valley startups, at least initially, were.
So let’s talk about you for a moment. Your background is in documentary filmmaking. Do you see yourself returning to that world? What are you working on, another book? What kind of thing is keeping you busy by day right now?
Well, I like making documentary films. I just had one on PBS last year… If you Google “Spillover” and “PBS” you can see it is streaming online. It was about spillover diseases—Ebola, Zika and others—and it was about the Ebola crisis, and how viruses spread. And then now I’m working on a film about paleontology, about a recent discovery that’s kind of secret, that I can’t talk about… from sixty-six million years ago.
And I am starting to work on another book that I can’t talk about. So I am keeping an eye on AI, because this issue is… Despite everything I talk about, I really like the technology; I think it’s pretty amazing.
Well, let’s close with, give me a scenario that you think is plausible, that things work out. That we have something that looks like full employment, and…
Good, Byron. That’s a great way to go out. I see people getting individually educated about the promise and peril of AI, so that we as a culture are ready for the revolution that’s coming. And that forces businesses to be responsible, and politicians to be savvy, about developments in artificial intelligence. Then they invest some money to make artificial intelligence advancement transparent and safe.
And therefore, when we get to machines that are as smart as humans, that [they] are actually our allies, and never our competitors. And that somehow on top of this giant wedding cake I’m imagining, we also manage to keep full employment, or nearly-full employment. Because we’re aware, and because we’re working all the time to make sure that the future is kind to humans.
Alright, well, that is a great place to leave it. I am going to thank you very much.
Well, thank you. Great questions. I really enjoyed the back-and-forth.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 16: A Conversation with Robert J. Sawyer

In this episode, Byron and Robert talk about human life extension, conscious computers, the future of jobs and more.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Our guest today is Robert Sawyer. Robert is a science fiction author, and both a Hugo and a Nebula winner. He’s the author of twenty-three books, many of which explore themes we talk about on this show. Robert, welcome to the show.
Tell me a little bit about your past, how you got into science fiction, and how you choose the themes that you write about?
Robert Sawyer: Well, I think apropos of this particular podcast, the most salient thing to mention is that when I was eight years old, 2001: A Space Odyssey was in theaters, and my father took me to see that film.
I happen to have been born in 1960, so the math was easy. I was obviously eight in ’68, but I would be 41 in 2001, and my dad, when he took me to see the film, was already older than that… which meant that before I was my dad’s age, talking computers [and] intelligent machines would be a part of my life. This was promised. It was in the title, 2001, and that really caught my imagination.
I had already been exposed to science fiction through Star Trek, which obviously premiered two years earlier, [in] ’66. But I was a little young to really absorb it. Heck, I may be a little young right now, at 57, to really absorb all that in 2001: A Space Odyssey. But it was definitely the visual world of science fiction, as opposed to the books… I came to them later.
But again, apropos of this podcast, the first real science fiction books I read… My dad packed me off to summer camp, and he got me two: one was just a space adventure, and the other was a collection of Isaac Asimov’s Robot Stories. Actually the second one [was] The Rest of the Robots, as it was titled in Britain, and I didn’t understand that title at all.
I thought it was about exhausted mechanical men having a nap—the rest of the robots—because I didn’t know there was an earlier volume when I first read it. But right from the very beginning, one of the things that fascinated me most was artificial intelligence, and my first novel, Golden Fleece, is very much my response to 2001… after having mulled it over from the time I was eight years old until the time my first novel came out.
I started writing it when I was twenty-eight, and it came out when I was thirty. So twenty years of mulling over, “What’s the psychology behind an artificial intelligence, HAL, actually deciding to commit murder?” So psychology of non-human beings, whether it’s aliens or AIs—and certainly the whole theme of artificial intelligence—has been right core in my work from the very beginning, and 2001 was definitely what sparked that.
Although many of your books are set in Canada, they are not all in the same fictional universe, correct?
That’s right, and I actually think… You know, I mentioned Isaac Asimov’s [writing] as one of my first exposures to science fiction, and of course he’s still a man I enormously admire. I was lucky enough to meet him during his lifetime. But I think it was a fool’s errand that he spent a great deal of his creative energies, near the later part of his life, trying to fuse his Foundation universe with his robot universe to come up with one master plan.
I think, a) it’s just ridiculous, it constrains you as writer; and b) it takes away the power of science fiction. Science fiction is a test bed for new ideas. It’s not about trying to predict the future. It’s about predicting a smorgasbord of possible futures. And if you get constrained into, “every work I did has to be coherent and consistent,” when it’s something I did ten, twenty, thirty, forty—in Asimov’s case, fifty or sixty years—in my past, that’s ridiculous. You’re not expanding the range of possibilities you’re exploring. You’re narrowing down instead of opening up.
So yeah, I have a trilogy about artificial intelligence: Wake, Watch, and Wonder. I have two other trilogies that are on different topics, but out of my twenty-three novels, the bulk of them are standalone, and in no way are meant to be thought of as being in a coherent, same universe. Each one is a fresh—that phrase I like—fresh test bed for a new idea.
That’s Robert Sawyer the author. What do you, Robert Sawyer the person, think the future is going to be like?
I don’t think there’s a distinction, in terms of my outlook. I’m an optimist. I’m known as an optimistic person, a techno-optimist, in that I do think, despite all the obvious downsides of technology—human-caused global climate change didn’t happen because of cow farts, it happened because of coal-burning machines, and so forth—despite that, I’m optimistic, very optimistic, generally as a person, and certainly most of my fiction…
Although my most recent book, my twenty-third, Quantum Night, is almost a deliberate step back, because there had been those that had said I’m almost Pollyanna-ish in my optimism, some have even said possibly naïve. And I don’t think I am. I think I rigorously interrogate the ideas in my fiction, and also in politics and day-to-day life. I’m a skeptic by nature, and I’m not easily swayed to think, “Oh, somebody solved all of our problems.”
Nonetheless, the arrow of progress, through both my personal history and the history of the planet, seems definitely to be pointing in a positive direction.
I’m an optimist as well, and the kind of arguments I get against that viewpoint, the first one invariably is, “Did you not read the paper this morning?”
Yeah.
People look around them, and they see that technology increases our ability to destroy faster than it increases our ability to create. That asymmetry is on the rise, meaning fewer and fewer people can cause more and more havoc; and the magnitude of the kinds of things that can happen due to technology—like genetically-engineered superbugs and whatnot—is both accessible and real. When people present you with that sort of view, what do you say?
Well you know, it’s funny that you should say that… I had to present those views just yesterday. I happen to be involved with developing a TV show here in Canada. I’m the head writer, and I was having a production meeting, and the producer was actually saying, “Well, you know, I don’t think that there is any way that we have to really worry about the planet being destroyed by a rogue operator.”
I said, “No, no, no, man, you have no idea the amount of destructive power that the arrow of history is clearly showing is devolving down into smaller and smaller hands.”
A thousand years ago, the best one person could do was probably kill one or two other people. A hundred years ago they could kill several people. Once we added machine guns, they could kill a whole bunch of people in a shopping mall. Then we got atomic bombs, and so forth, but it was only nations we had to worry about—big nations.
And we saw clearly in the Cuban missile crisis, when it comes to big, essentially responsible nations—the USSR and the United States, responsible to their populations and also to their role on the world stage—they weren’t going to do it. It came so close, but Khrushchev and Kennedy backed away. Okay, we don’t have to worry about it.
Well, now rogue states, much smaller states, like North Korea, are pursuing atomic weapons. And before you know it, it’s going to be terrorist groups like the Taliban that will have atomic weapons, and it’s actually a terrifying thought.
If there’s a second theme that permeates my writing, besides my interest in artificial intelligence, it’s my interest in SETI, the search for extra-terrestrial intelligence. And one of the big conundrums… My friends who work at the SETI Institute, Seth Shostak and others, of course are also optimists. And they honestly think, in defiance of any evidence whatsoever, that the universe actually is teeming with aliens, and that they will respond, or at least be sending out—proactively and altruistically—messages for others to pick up.
Enrico Fermi asked, way back in the days of the Manhattan Project—ironically: “Well, if the universe is supposed to be teeming with aliens, where are they?” And the most likely response, given the plethora of exoplanets and the banality of the biology of life and so forth, is: “Well, extra-terrestrial civilizations probably emerge at a steady pace, and then they reach a point where they develop atomic weapons. They invent radio about fifty years before that—for us it was 1895 for radio and 1945 for atomic weapons. That’s half a century during which they can broadcast before they have the ability to destroy themselves.”
Do they survive five-hundred years, five-thousand years, you know, five-hundred-thousand years? All of that is the blink of an eye in terms of the fourteen-billion-year age of the universe. The chances of any two advanced civilizations that haven’t yet destroyed themselves with their own technology existing simultaneously, whatever that means in a relativistic universe, becomes almost nil. That’s a very good possible answer to Fermi, and bodes not well at all for our technological future.
Sagan said something like that. He said that his guess was civilizations had a hundred years after they got radio, to either destroy themselves, or overcome that tendency and go on to live on a timescale of billions of years.
Right, and, you know, when you talk about round numbers—and of course, based on our particular orbit, the year is the orbital duration of the Earth—yeah, he’s probably right. It’s on the right order of magnitude. Clearly, we didn’t solve the problem by 1995. But by 2095, which is the same order of magnitude—a century, plus or minus—I think he’s right. If we don’t solve the problem by 2095, the bicentennial of radio, we’re doomed.
We have to deal with it, because it is within that range of time, a century or two after you develop radio, that you either have to find a way to make sure you’re never going to destroy yourself, or you’re destroyed. So, in that sense he’s right. And then it will be: Will we survive for billions… ‘Billions’ is an awfully long time, but hundreds of millions, you know… We’re quibbling about an order of magnitude on the high-end, there. But basically, yes, I believe in [terms of] round numbers and proximate orders of magnitude, he is absolutely right.
The window is very small to avoid the existential threats that come with radio. The line through the engineering and the physics from radio, and understanding how radio waves work, and so forth, leads directly to atomic power, leads directly to atomic weapons, blah, blah, blah, and leads conceivably directly to the destruction of the planet.
The artificial intelligence pioneer Marvin Minsky said, “Lately, I’ve been inspired by ideas from Robert Sawyer.” What was he talking about, and what ideas in particular, do you think?
Well, Marvin is a wonderful guy, and after he wrote that I had the lovely opportunity to meet him. And, actually ironically, my most significant work about artificial intelligence, Wake, Watch, and Wonder came out after Marvin said that. I went to visit Marvin, who was now professor emeritus by the time I went to visit him at the AI Lab at MIT, when I was researching that trilogy.
So he was talking mostly about my book Mindscan, which was about whether or not we would eventually be able to copy and duplicate human consciousness—or a good simulacrum thereof—in an artificial substrate. He was certainly intrigued by my work, which was—what a flattering thing. I mean, oh my God, you know, Minsky is one of those names science fiction writers conjure with, you named another, Carl Sagan.
These are the people who we voraciously read—science fiction writers, science fiction fans—and to know that you turned around, and they were inspired to some degree… that there was a reciprocity—that they were inspired by what we science fiction writers were doing—is in general a wonderful concept. And the specificity of that, that Marvin Minsky had read and been excited and energized intellectually by things I was writing was, you know, pretty much the biggest compliment I’ve ever had in my life.
What are your thoughts on artificial intelligence? Do you think we’re going to build an AGI, and when? Will it be good for us, and all of that? What’s your view on that?
So, you used the word ‘build’, which is a proactive verb, and honestly I don’t think… Well first, of course, we have a muddying of terms. We all knew what artificial intelligence meant in the 1960s—it meant HAL 9000. Or in the 1980s, it meant Data on Star Trek: The Next Generation. It meant, as HAL said, any self-aware entity could ever hope to be. It meant self-awareness, what we meant by artificial intelligence.
We weren’t really talking about intelligence in terms of the ability to rapidly play chess, although that is something that HAL did in 2001: A Space Odyssey. We weren’t talking about the ability to recognize faces, although that is something HAL did, in fact. In the film, he manages to recognize specific faces based on an amateur sketch artist’s sketch, right? “Oh, that’s Dr. Hunter, isn’t it?”—in a sketch that one of the astronauts has done.
We didn’t mean that. We didn’t mean any of these algorithmic things; we meant the other part of HAL’s name, the heuristic part of HAL: heuristically-programmed algorithmic computer, HAL. We meant something even beyond that; we meant consciousness, self-awareness… And that term has disappeared.
When you ask an AI guy, somebody pounding away at a keyboard in Lisp, “When is it going to say, ‘Cogito ergo sum’?” he looks at you like you’re a moron. So we’ve dulled the term, and I don’t think anybody anywhere has come even remotely close to simulating or generating self-awareness in a computer.
Garry Kasparov was rightly miffed, and possibly humiliated, when he was beaten at the thing he devoted his life to, grandmaster-level chess, by Deep Blue. Deep Blue did not even know that it was playing chess. Watson had no idea that it was playing Jeopardy. It had no inner life, no inner satisfaction, that it had beat Ken Jennings—the best human player at this game. It just crunched numbers, the way my old Texas Instruments 35 calculator from the 1970s crunched numbers.
So in that sense, I don’t think we’ve made any progress at all. Does that mean that I don’t think AI is just around the corner? Not at all; I think it actually is. But I think it’s going to be an emergent property from sufficiently complex systems. The existing proof of that is our own consciousness and self-awareness, which clearly emerged from no design—there’s no teleology to evolution, no divine intervention, if that’s your worldview.
And I don’t mean you personally, as we talk here, but the listener—if that’s your worldview, we have no common ground to base this conversation on. It emerged because, at some point, there was sufficient synaptic complexity within our brains, and sufficient interpersonal complexity within our social structures, to require self-reflection. I suspect—and in fact I posit in Wake, Watch, and Wonder—that we will get that eventually from the most complex thing we’ve ever built, which is the interconnectivity of the Internet. So many synapse analogues in links—which are both hyperlinks, and links that are physical cable, or fiber-optic, or microwave links—that at some point the same thing will happen… that intelligence and consciousness, true consciousness, [and] self-awareness, are an emergent property of sufficient complexity.
Let’s talk about that for a minute: There are two kinds of emergence… There is what is [known as] ‘weak emergence’, which is, “Hey, I did this thing and something came out of it, and man I wasn’t expecting that to happen.” So, you might study hydrogen, and you might study oxygen, and you put them together and there’s water, and you’re like, “Whoa!”…
And the water is wet, right? You could not possibly have predicted that… There’s nothing in the chemistry of hydrogen or oxygen that would predict the quality a human perceives as wetness. It’s an emergent property. Absolutely.
But upon reflection you can say, “Okay, I see how that happened.” And then there is ‘strong emergence’, which many people say doesn’t exist; and if it does exist, there may only be one example of it, which is consciousness itself. And strong emergence is… Now, you did all the stuff… Let’s take a human, you know—you’re made of a trillion cells who don’t know you or anything.
None of those cells has a sense of humor, and yet you have a sense of humor. So a strong emergent would be something where, when you look at what comes out, it can’t actually be derived from the ingredients. What do you think consciousness is? Is it a ‘weak emergent’?
So I am lucky enough to be good friends with Stuart Hameroff, and a friendly acquaintance with Hameroff’s partner, Roger Penrose—who is a physicist, of course, who collaborates with Stephen Hawking on black holes. They both think that consciousness is a strong emergent property; that it is not something that, in retrospect, we in fact—at least in terms of classical physics—can say, “Okay, I get what happened”; you know, the way we do about water and wetness, right?
I am quite a proponent of their orchestrated objective reduction model of consciousness. Penrose’s position, first put forward in The Emperor’s New Mind, and later—after he had actually met Hameroff—expounded upon at more length in Shadows of the Mind… so, twenty-year-old ideas now—that human consciousness must be quantum-mechanical in nature.
And I freely admit that a lot of the mathematics that Hameroff and Penrose argue is over my head. But the fundamental notion is that the system itself transcends the ability of classical mathematics and classical physics to fully describe it. They have some truly recondite arguments for why that would be the case. The most compelling seems to come from Gödel’s incompleteness theorem: that there’s simply no way, in classical physics and classical mathematics, to derive a system that will be self-reflective.
But from quantum physics, and superposition, perhaps you actually can come up with an explanation for consciousness.
Now, that said, my job as a science fiction writer is not to pick the most likely explanation for any given phenomenon that I turn my auctorial gaze on. Rather, it is to pick the most entertaining or most provocative or most intriguing one that can’t easily be gainsaid by what we already know. So is consciousness, in that sense, an emergent quantum-mechanical property? That’s a fascinating question; we can’t easily gainsay it because we don’t know.
We certainly don’t have a classical model that gives rise to that non-strong, that trivial emergence that we talked about in terms of hydrogen and oxygen. We don’t have any classical model that actually gives rise to an inner life. We have people who want to… you know, the famous book, Consciousness Explained (Dennett), which many of its critics would say is consciousness explained away.
We have Crick’s The Astonishing Hypothesis, which is really, again, explaining away… You think you have consciousness in a sophisticated way—well, you don’t really. That clearly flies as much in the face of our own personal experience as somebody saying, “‘Cogito ergo sum’—nah, you’re actually not thinking, you’re not self-aware.” I can’t buy that.
So in that sense, I do think that consciousness is emergent, but it is not necessarily emergent from classical physics, and therefore not necessarily emergent on any platform that anybody is building at Google at the moment.
Penrose concluded, in the end, that you cannot build a conscious computer. Would you go all that far, or do you have an opinion on that?
You cannot build a conscious classical computer. Absolutely; I think Penrose is probably right. Given the amount of effort we have been trying, and that Moore’s Law gives us a boost to our effort every eighteen months or whatever figure you want to plug into it these days, and that we haven’t attained it yet, I think he’s probably right. A quantum computer is a whole different kettle of fish. I was lucky enough to visit D-Wave computing on my last book tour, a year ago, where it was very gratifying.
You mentioned the lovely thing that Marvin Minsky said… When I went to D-Wave, which is the only commercial company shipping quantum computers—Google has bought from them, NASA has bought from them… When I went there, they asked me to come and give a talk as well, [and] I said, “Well that’s lovely, how come?” And they said, “Everybody at D-Wave reads Robert J. Sawyer.”
I thought, “Oh my God, wow, what a great compliment.” But because I’m a proponent—and they’re certainly intrigued by the notion—that quantum physics may be what underlies the self-reflective ability, which is what we define consciousness as, I do think that if there is going to be a conscious AI, it is going to be a quantum computer, quantumly entangled, that gives rise to anything we would actually look at and say, “Yep, that’s as conscious as we are.”
So, when I started off asking you about an AGI, you kind of looped consciousness in. To be clear, those are two very different things, right? An AGI is something that is intelligent, and can do the list of tasks a human could do. A consciousness may not be intelligent at all, but it has a feeling… an inner feeling.
But see, this is again a conflation of terms, right? Until Garry Kasparov was beaten at chess, ‘intelligence’ was not just the ability to really rapidly crunch numbers, which is all… I’m sorry—no matter what algorithm you put into a computer, a computer is still a Turing machine. It can read a symbol, it can write a symbol. It can move left, it can move right—there’s no computer that isn’t a Turing machine.
The general applicability of a Turing machine to simulating a thing that we call intelligence, isn’t, in fact, what the man on the street or the woman on the street means by intelligence. So we say, “Well, we’ve got an artificially-intelligent algorithm for picking stocks.”
“Oh, well, if it picks stocks, which tie should I wear today?”
Any intelligent person would tell you: don’t wear the brown tie with the blue suit, [but] the stock-picking algorithm has no way to crunch that. It is not intelligent, it’s just math. And so when we take a word like ‘intelligence’… because it gets us a better stock option, right, we say, “Our company’s going public, and we’re in AI”—not in rapid number crunching—and our stock market valuation is way higher… It isn’t intelligence as you and I understand it at all, full stop. Not one whit.
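[Editor’s note: the point that every digital computer reduces to reading symbols, writing symbols, and moving left or right can be made concrete with a toy simulator. This is an editorial sketch, not anything discussed in the interview; the machine below increments a binary number written on its tape.]

```python
def run_turing_machine(tape, rules, state="start", head=0, halt="halt"):
    tape = dict(enumerate(tape))                 # sparse tape; blank cells are "_"
    while state != halt:
        symbol = tape.get(head, "_")             # read a symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write                       # write a symbol
        head += 1 if move == "R" else -1         # move left or right
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules for binary increment: walk right to the end, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))   # 11 in binary, incremented -> "1100"
```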
Where did you come down on the uploading-your-consciousness possibility?
So, I actually have a degree in broadcasting… And I can, with absolutely perfect fidelity, go find your favorite symphony orchestra performing Beethoven’s Fifth, let’s say, and give you an absolutely perfect copy of that, without me personally being able to hold a tune—I’m tone deaf—without me personally having the single slightest insight into musical genius.
Nonetheless, technically, I can reproduce musical genius to whatever bitrate of fidelity you require, if it’s a digital recording, or in perfect analog recording, if you give me the proper equipment—equipment that already is well available.
Given that analogy, we don’t have to understand consciousness; all we have to do is vacuum up everything that is between our ears, and find analog or digital ways to reproduce it on another substrate. I think fundamentally there is no barrier to doing that. Whether we’re anywhere near that level of fidelity in recording the data—or the patterns, or whatever it is—that is the domain of consciousness, within our own biological substrate… We may be years away from that, but we’re not centuries away from that.
It’s something we will have the ability to record and simulate and duplicate this century, absolutely. So in terms of uploading ‘consciousness’—again, we play a slippery slope word with language… In terms of making an exact duplicate of my consciousness on another substrate… Absolutely, it’ll be done; it’ll be done this century, no question in my mind.
Is it the same person? That’s where we play these games with words. Uploading consciousness… Well, you know what—I’ve never once really uploaded a picture of myself to Facebook, never once. The picture is still on my hard drive; I’ve copied it and sent a duplicate to Facebook’s servers. There’s another version of that picture. And you know what? You upload a high-resolution picture to Facebook, put it up as your profile photo… Facebook compresses it, and reduces the resolution for their purposes at their end.
So, did they really get it? They don’t have the original; it’s not the same picture. But at first blush, it looks like I uploaded something to the vast hive that is Facebook… I have done nothing of the sort. I have duplicated data at a different location.
One of the themes that you write about is human life extension. What do you think of the possibilities there? Is mortality a problem that we can solve, and what not?
This is very interesting… Again, I’m working on this TV project, and this is one of our themes… And yes, I think, absolutely. I do not think that there’s any biological determinism that says all life forms have to die at a certain point. It seems an eminently-tractable problem. Remember, it was only [in] the 1950s that we figured out the double-helix nature of DNA. Rosalind Franklin, Francis Crick and James Watson figured it out, and we have it now.
That’s a blip, right? We’ve had a basic understanding of the structure of the genetic molecule and the genetic code for only that long, and we’re only beginning to understand it… Every time we think we’ve solved it—“Oh, we’ve got it. We now understand the code for that particular amino acid…”—we find we’d forgotten about something like epigenetics. We thought, in our hubris and arrogance, “Oh, it’s all junk DNA”—when in fact those are regulatory elements that turn genes on and off, as required.
So we’re still quite some significant distance away from totally solving why it is we age… arresting that first, and then conceivably reversing it. But is it an intractable problem? Is it unsolvable by its nature? Absolutely not. We will have, again this century, radical life prolongation—effective practical immortality, barring grotesque bodily accident. Absolutely, without question.
I don’t think it is coming as fast as my friend Aubrey De Grey thinks it’s coming. You know, Aubrey… I just sent him a birthday wish on Facebook; turns out, he’s younger than me… He looks a fair bit older. His partner smokes, and she says, “I don’t worry about it, because we’re going to solve that before the cancers can become an issue.”
I lost my younger brother to lung cancer, and my whole life, people have been saying, “Cancer, we’ll have that solved in twenty years,” and it’s always been twenty years down the road. So I don’t think… I honestly think I’m… you and I, probably, are about the same age I imagine— [we] are at a juncture here. We’re either part of the last generation to live a normal, kind of biblical—threescore and ten, plus or minus a decade or two—lifespan; or we’re the first generation that’s going to live a radically-prolonged lifespan. Who knows which side of that divide you and I happen to be on. I think there are people alive already, the children born in the early—certainly in the second decade, and possibly the first—part of the century who absolutely will live to see not just the next century—twenty-second—but some will live to see beyond that, Kirk’s twenty-third century.
Putting all that together, are you worried about, as our computers get better—get better at crunching numbers, as you say—are you among the camp that worries that automation is going to create an epic-sized social problem in the US, or in the world, because it eliminates too many jobs too quickly?
Yes. You know, everybody is the crucible of their upbringing, and I think it’s always important to interrogate where you came from. I mentioned [that] my father took me to 2001. Well, he took a day off, or had some time off, from his job—he was a professor of economics at the University of Toronto—so that we could go to a movie. So I come from a background… My mother was a statistician, my father an economist…
I come from a background of understanding the science of scarcity, and understanding labor in the marketplace, and capitalism. It’s in my DNA, and it’s in the environment I grew up [in]. I had to do a pie chart to get my allowance as a kid. “Here’s your scarce resources, your $0.75… You want a raise to a dollar? Show me a pie chart of where you’re spending your money now, and how you might usefully spend the additional amount.” That’s the economy of scarcity. That’s the economy of jobs and careers.
My father set out to get his career. He did his PhD at the University of Chicago, and you go through assistant professor, associate professor, professor, now professor emeritus at ninety-two years old—there’s a path. All of that has been disrupted by automation. There’s absolutely no question it’s already upon us in huge parts of the environment, the ecosystem that we live in. And not just in terms of automotive line workers—which, of course, were the first big industrial robots, on the automobile assembly lines…
But, you know, I have friends who are librarians, who are trying to justify why their job should still exist, in a world where they’ve been disintermediated… where the whole world’s knowledge—way more than any physical library ever contained—is at my fingertips the moment I sit down in front of my computer. They’re being automated out of a job, and [although] not replaced by a robot worker, they’re certainly being replaced by the bounty that computers have made possible.
So yeah, absolutely. We’re going to face a seismic shift, and whether we survive it or not is a very interesting sociological question, and one I’m hugely interested in… both as an engaged human being, and definitely as a science fiction writer.
What do you mean survive it?
Survive it recognizably, with the culture and society and individual nation-states that have defined, let’s say, the post-World War II peaceful world order. You know, you look back at why Great Britain has chosen to step out of the European Union.
[The] European Union—one can argue all kinds of things about it… but one of the things it basically said was, “Man, that was really dumb, World War I. World War II, that was even worse. All of us guys who live within spitting distance of each other fighting, and now we’ve got atomic weapons. Let’s not do that anymore. In fact, let’s knock down the borders and let’s just get along.”
And then, one of the things that happened to Great Britain… And you see the far-right party saying, “Well, immigration is stealing our jobs.” Well, no. You know, immigration is a fact of life in an open world where people travel. And I happen to be—in fact, just parenthetically—I’m a member of the Order of Canada, Canada’s highest civilian honor. One of the perks that comes with that is I’m empowered to, and take great pride in, administering the oath of Canadian citizenship at Canadian citizenship ceremonies.
I’m very much pro-immigration. Immigrants are not what’s causing jobs to disappear, but it’s way easier to point to that guy who looks a bit different, or talks a bit different than you do, and say that he’s the cause, and not that the whole economic sector you used to work in is being obviated out of existence. Whether it was factory workers, or whether it was stock market traders, the fact is that the AI, and all of that AGI we’ve been talking about here, is disappearing those jobs. It’s making those jobs cease to exist, and we’re looking around now, and seeing a great deal of social unrest, trying to find another person to blame for that.
I guess implicit in what you’re saying is, yes, technology is going to dislocate people from employment. But what about the corollary, that it will or won’t create new jobs at essentially the same rate?
So, clearly it has not created jobs at essentially the same rate, and clearly the sad truth is that not everybody can do the new jobs. We used to have pretty full employment no matter where you fell, you know… as Mr. Spock famously said, “as with all living things, each according to his gifts.” Now it’s a reality that there is a whole bunch of people who did blue-collar labor, because that was all that was available to them…
And of course, as you know, Neil deGrasse Tyson and others have famously said, “I’m not particularly fascinated by Einstein’s brain per se… I’m mortified by the fact that there were a million ‘Einsteins’ in Africa, or the poorest parts of the United States, or wherever, who never got to give the world the benefits of their great brains, because the economic circumstances didn’t exist for them to do so.”
What jobs are going to appear that aren’t going to be obviated out of existence?
I was actually talking at a pub last night with a gentleman who was an archaeologist, and I’d read an article quite recently about the top ten jobs that aren’t soon going to be automated out of existence… and archaeologist was one of them. Why?
One, there’s no particular economic incentive… In fact, archaeologists these days tend to be an impediment to economic growth. That is, they’re the guys who show up when ground has been broken for new skyscrapers and say, “Hang on a minute… indigenous Canadian or Native American remains here… you’ve got to slow down until we collect this stuff,” right?
So no business says, “Oh my God, if only archaeologists were even better at finding things, that would stop us from our economic expansion.” And the job requires such a broadly-based skill set. You have to be able to identify completely unique potsherds, each one different from another… not something that fits a predictable pattern, like a defective shoe going down an assembly line: “Oh, not the right number of eyelets on that shoe, reject it.”
So will we come up with job after job after job, that Moore’s Law, hopscotching ahead of us, isn’t going to obviate out of existence ad infinitum? No, we’re not going to do it, even for the next twenty years. There will be massive, massive, massive unemployment… That’s a game changer, a societal shift.
You know, the reality is… Why is it I mentioned World War I? Why do all these countries habitually—and going right back to tribal culture—habitually make war on a routine basis? Because unoccupied young men—and it’s mostly men that are the problem—have always been a detriment to society. And so we ship them off to war to get rid of the surplus.
In the United States, they just lowered the bar on drug-possession rules, in effect creating the largest incarcerated population, made up of people who otherwise might just [have] been up to general mischief—not any seismic threat, just general mischief. And societies have always had a problem dealing with surplus young men. Now we have surplus young men, surplus young women, surplus old men, surplus old women, surplus everybody.
And there’s no way in hell—and you must know this, if you just stop and think about it—no way in hell that we’re going to generate satisfactory jobs, for the panoply that is humanity, out of ever-accelerating automation. It can’t possibly be true.
Let’s take a minute and go a little deeper into that. You say it with such finality, and such conviction, but you have to start off by acknowledging that there isn’t universal consensus on the question among people in that world.
Well, for sure. My job is not to have to say, “Here’s what the consensus is.” My job, as a prognosticator, is to say, “Look. Here is, after decades of thinking about it…”—and, you know, there was Marvin Minsky saying, “Look. This guy is worth listening to”… So no, there isn’t universal consensus. When you ask the guy who’s like, “I had a factory job. I don’t have that anymore, but I drive for Uber.”
Yeah, well, five years from now Uber will have no drivers. Uber is at the cutting edge of automating cars. So after you’ve lost your factory job, and then, “Okay, well I could drive a car.” What’s the next one? It’s going to be some high-level diagnostician of arcane psychiatric disorders? That ain’t the career path.
The jobs that are automated out of existence are going to be automated out of existence in a serial fashion… If your skill set was fairly low—a factory worker—then you could hopscotch into [another] fairly low one—driving a car. You tell me what the next fairly low-skillset job is that’s magically going to appear, one that it’s going to be cheaper and easier for corporations to give to human beings.
It ain’t counter help at McDonald’s; that’s disappearing. It ain’t cash registers at grocery stores; those are disappearing. It ain’t bank teller; that already disappeared. It ain’t teaching fundamental primary school. So you give me an example of why there’s a consensus… Here, you show me. Don’t tell me that people disagree with me… Tell me how their plot and plan for this actually makes any sense, that bears any scrutiny.
Let’s do that. My only observation was not that you had an opinion, but that [it] was bereft of the word ‘maybe’. Like you just said ATMs, bank tellers… but the fact of the matter is—the fact on the ground is—that the number of bank tellers we have today is higher than in pre-ATM days.
And the economics that happened, actually, were that by making ATMs, you lowered the cost of building new bank branches. So what banks did was they just put lots more branches everywhere, and each of those needed some number of tellers.
So here’s an interesting question for you… Walk into your bank… I did this recently. And the person I was with was astonished, because every single bank teller was a man, and he hadn’t been into a bank for a while, and they used to all be women. Now, there’s no fundamental difference between the skill set of men and women; but there is a reality in the glass ceiling of the finance sector.
And you cannot dispute that it exists… that the higher-level jobs were always held by men, and lower-level jobs were held by women. And the reality is… What you call a bank teller is now a guy who doesn’t count out tens and twenties; he is a guy who provides much higher-level financial services… And it’s not that we upgraded the skill set of the displaced.
We didn’t turn all of those counter help people at McDonald’s into Cordon Bleu chefs, either. We simply obviated them out of existence. And the niches, the interstices, in the economy that do exist, that supplement or replace the automation, are not comparably low-level jobs. You do not fill a bank with tellers who are doing routine counting out of money, taking a check and moving it over to the vault. That is not the function.
And they don’t even call them tellers anymore; they call them personal financial advisors or whatever. So, again, your example simply doesn’t bear scrutiny. It doesn’t bear scrutiny that we are taking low-level jobs… And guess what now we have… Show me the automotive plant that has thousands and thousands more people working on the assembly line, because that particular job over there—spraying the final coat of paint—was done to finer tolerance by a machine… But oh my God, well, let’s move them…
No, that’s not happening. It’s obfuscation to say that we now have many more people involved in bank telling. This is the whole problem that we’ve been talking about here… Let’s take terminology and redefine it, as we go along to avoid facing the harsh reality. We have automated telling machines because we don’t have human telling individuals anymore.
So the challenge with your argument, though, is it is kind of the old one that has been used for centuries. And each time it’s used, it’s due to a lack of imagination.
My business, bucko, is imagination. I have no lack thereof, believe me. Seriously. And it hasn’t been used for generations. Name a single Industrial Revolution argument that invoked Moore’s Law. Name one. Name one that said the invention of the loom will outpace the invention of…
Human inventiveness was always the constraint, and we now do not have human inventiveness as the constraint. Artificial intelligence, whether you define it your way or my way, is something that was not invoked for centuries. We wouldn’t be having this conversation via Skype, or Zoom, which we’re using here… these are game changers that were not predicted by anybody but science fiction writers.
You can go back and look at Jules Verne, and his novel that he couldn’t get published in his lifetime, Paris in the Twentieth Century, which is incredibly prophetic about television and so forth, and nobody believed it. My colleagues and I are the ones who give rise to this—not just me, but again, that example: “Lately, I’m inspired by Robert J. Sawyer,” says Marvin Minsky. And I’m saying, belatedly, much of the business world is finally looking at science fiction and taking it seriously.
I go and give talks worldwide—at Garanti Bank, the second largest bank in Istanbul, a while ago—about inculcating the science fiction extrapolative and imaginative mindset in business thinkers… Because no, this argument, as we’re framing it today, has not been invoked for centuries.
And to pretend that the advent of the loom, or the printing press, somehow gave rise to people saying, “The seismic shift that’s coming from artificial intelligence, we dealt with that centuries ago, and blah, blah, blah… it’s the same old thing” is to have absolute blinders on, my friend. And you know it well. You wouldn’t be doing a podcast about artificial intelligence if you thought, “Here we are at podcast ‘Loom version 45.2’; we’re gonna have the Loom argument about weaving again for the umpteenth time.” You know the landscape is fundamentally, qualitatively different today.
So that’s a little disingenuous. I never mentioned the loom.
Let’s not be disingenuous. What is your specific example, from centuries ago, of a debate over workers being automated out of existence without being replaced in jobs of comparable skill? I won’t put an example [in] your mouth; you put one on the table.
I will put two on the table. The first is the electrification of industry. It happened with lightning speed, it was pervasive, it eliminated enormous numbers of jobs… And people at that point said, “What are we going to do with these people?”
I’ll give you a second one… It took twenty years for the US to go from generating five percent of its power with coal, to eighty percent. So in [the span of] twenty years, we started artificially generating our power.
The third one I’d like to give you is the mechanization of industry… [which] happened so fast, and replaced all of the millions and millions and millions of draft animals that had been used in the past.
So let’s take that one. Where are the draft animals in our economy today? Historically, the only life forms in our economy were the draft animals and the humans who operated them, and the draft animals could be replaced by machines. Now we’ve eliminated the draft animals, so the only biology left in our economy is Homo sapiens. We are eliminating the Homo sapiens. You’re not gonna find new jobs for the Homo sapiens anymore, except maybe at 1600 Pennsylvania Ave, where we found a job for a jackass.
Are you going to find a place to put a draft animal on the payroll today? And you’re not going to find a place to put Homo sapiens, the last biology in the equation, on the payroll tomorrow, except in a vanishing few economic niches.
So the challenge with that view is that in the history of this country, unemployment has been between four and nine percent the entire time, with the exception of the Depression…  [between] four and nine percent.
Now, ‘this country’ meaning United States of America, which is not where I am.
Oh, I’m sorry. Yes, in the United States of America, four to nine percent, with [the] exception of the Depression. During that time, of course, you had incredible economic upheaval… But, say from 1790 to 1910, or something like that, [unemployment] never moved outside of four to nine percent.
Meanwhile, after World War II, [we] started adding a million new people, out of the blue, to the workforce. You had a million women a year come into the workforce, year after year, for forty years. So you had forty million new people, between 1945 and 1985, come into the workplace… and unemployment never bumped. 
So what it suggests is, that jobs are not these things that kind of magically appear as we go through time, saying “Oh, there’s a job. Oh, that’s an unskilled one, great. Or that’s a job…” It just doesn’t happen that way—
—When women entered into the workforce—and I speak as the son of a woman who was a child prodigy, and was the only woman in her economics class at the University of California at Berkeley, who taught at a prestigious university—I have no doubt that there were occasional high-level jobs. But the jobs that were created, that women came in and filled—and are still butting against the glass ceiling of—were low-level jobs.
It wasn’t, “Oh my God, we suddenly need thousands of new computer programmers.” No, we’ve created a niche that no longer exists for keypunch operators, as an example; or for telephone operators, as an example; or for bank tellers, as an example. And those jobs, to a person, have been obviated out of existence, or will be in the next decade or two.
Yes, I mean, you make an interesting point about the unemployment numbers. But remember, too, that the unemployment figure is a slippery number, when it’s defined as ‘the number of people actively looking for employment’. Now you ask, how many people have just given up any hope of being employed meaningfully, respectfully, in dignity? Again, that number has gone up, in a straight line, as automation has evolved.
No, I disagree with that. If you’re talking about workforce utilization, or the percentage of people who have gainful employment, there has been a dip over the last… It’s been repeatedly dissected by any number of people, and it seems there are three things going on with it.
One is, Baby Boomers are retiring, and they’re this lump that passes through the economy, so you get that. Then there’s seasonality baked into that number. So they think the number of people who have ‘given up’ is about one quarter of one percent, so sure… one in four hundred.
So what I would say is that in ’95, ’96, ’97 we have the Internet come along, right? And if you look at the twenty years from ’97 onward, at the literal trillions—not an exaggeration, trillions upon trillions upon trillions of dollars of wealth that it created—you get your Googles, you get your eBays, you get your Amazons… these trillions and trillions and trillions of dollars of wealth. Nobody would have seen that [coming] in 1997.
Nobody said, “Oh yes, the connecting of various computers together, through TCP/IP, and allowing them to communicate with hypertext, is going to create trillions upon trillions upon trillions of dollars’ worth of value, and therefore jobs.” And yet it did. Unemployment is still between four and nine percent; it never budges.
So it suggests that jobs are not magically coming out of the air; what happens is that any person, at any skill level, can take something, apply some amount of work and some amount of intellectual property to it, and make it worth more. The value they added is known as a wage, and whatever they meaningfully add, well, that just created a job.
It doesn’t matter if it is a low-skill person, or a high-skill person, or what have you; and that’s why people maintain the unemployment rate never moves, because there’s an infinite number of jobs, they exist kind of in the air… Just go outside tomorrow, knock on somebody’s door, and offer to do something for money, and you just made a job.
So go outside and offer to do what, and make a job? Because there are tons of things that I used to pay people to do that I don’t anymore. My Roomba cleans my floor instead of a cleaning lady, to give a hypothetical example.
But don’t you spend that money now on something else, which ergo creates a job?
Spending money is a job?
Well spending money definitely, yes, creates employment.
Well, that’s a very interesting discussion that we could have here, because certainly I used to have to spend money if I wanted something. Now if I want to watch—as we saw this past week, as we record this—the new season of Orange Is the New Black, before the intellectual property creators want to deploy it to me and collect money for it… Oh, guess what? It’s been pirated, and it’s online for free.
If I want to read, or somebody listening to this wants to read any of my twenty-three novels, they could go and buy the ebook edition, it’s true; but there’s an enormous amount of people who are also pirating them, and also the audio books. So this notion that somehow technology has made sure that we buy things with our money, I think a lot of people would take economic exception with that.
In fact, technology has made sure that there are now ways in which you can steal without… And it comes back to the discussion we’re having about AI: “Oh, I now have a copy, you still have the original. In what possible, meaningful way have I stolen from you?” So we actually have game-changers that I think you’re glossing over. But setting that aside, okay, obviously we disagree on this point. Fine, let’s touch base in fifty years, if either of us still has a job, and discuss it.
The easy thing is to always say there’s no consensus. Then you don’t have to go out on a limb, and nobody can ever come back to you and say you’re wrong. What we do in politics these days is, we don’t want people to change their ideas. Sadly, we say, “Forty years ago, you said so-and-so, you must still hold that view.” No, a science fiction writer is like a scientist; we are open, all the time, to new information and new data. And we’re constantly revising our worldview.
Look at the treatment of artificial intelligence, our subject matter today, from my first novel in 1990 to the most recent one in which I treated the topic, which would be Wonder, which came out in 2011—and there’s definitely an evolution of thought there. But it’s a mug’s game to say, “There’s no consensus, I’m not gonna make a prediction.”
And it actually is a job, my friend, and one that turns out to be fairly lucrative—at least in my case—to make a prediction, to look at the data, and say… You know, I synthesize it, look at it this way, and here’s where I think it’s going. And if you want to obviate that job out of existence, by saying yeah but other people disagree with you, I suppose that’s your privilege in this particular economic paradigm.
Well, thank you very much. Tell us what you’re working on in closing?
It’s interesting, because we’ve talked a lot about AI here… But AI, and the relationship between us and AI, is a subset, in some ways, of transhumanism. You can look at artificial intelligence as a separate thing, but the reality is that we’re going to find way more effective ways to merge ourselves with artificial intelligence than looking in through the five-inch glass window on your smartphone, right?
So what I’m working on is actually developing a TV series on a transhumanist theme, and one of the key things we’re looking at is really that fundamental question of how much of your biology—one of the things we’ve talked about here—you can give up and still retain your fundamental humanity. And I don’t want to get into too many specifics about that, but I think it really comes thematically right back to what we’ve been talking about here, and what Alan Turing was getting at with the Turing test.
A hundred years from now, have I uploaded my consciousness? Have I so infused my body with nanotechnology, am I so constantly plugged into a greater electronic global brain… am I still Homo sapiens sapiens? I don’t know, but I hope I’ll have that double dose of wisdom that goes with sapiens sapiens by that time. What I’m working on is really exploring the human-machine proportionality that still results in individuality and human dignity. And I’m doing it in a science fiction television project that I currently have a development contract for.
Awesome. Alright, well thank you so much.
It was a spirited discussion, and I hope you enjoyed it as much as I did, because one of the things that will never be obviated out of existence, I hope, my friend, is spirited and polite disagreement between human beings. I think that is—if there’s something we’ve come nowhere close to emulating on an artificial platform, it’s that. And if there’s any reason that AI’s will keep us around, I’ve often said, it’s because of our unpredictability, our spontaneity, our creativity, and our good sense of humor.
Absolutely. Thank you very much.
My pleasure, take care.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.