Voices in AI – Episode 21: A Conversation with Nikola Danaylov

In this episode, Byron and Nikola talk about singularity, consciousness, transhumanism, AGI and more.
[podcast_player name="Episode 21: A Conversation with Nikola Danaylov" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-11-20-(01-05-27)-nikola-danaylov.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-3.jpg"]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Nikola Danaylov. Nikola started the Singularity Weblog, and hosts the wildly popular singularity.fm podcast. He has been called the “Larry King of the singularity.” He writes under the name Socrates, or, to the Bill & Ted fans out there, “So-crates.” Welcome to the show, Nikola.
Nikola Danaylov: Thanks for having me, Byron, it’s my pleasure.
So let’s begin with, what is the singularity?
Well, there are probably as many definitions and flavors as there are people or experts in the field out there. But for me, personally, the singularity is the moment when machines first catch up and eventually surpass humans in terms of intelligence.
What does that mean exactly, “surpass humans in intelligence”?
Well, what happens to you when your toothbrush is smarter than you?
Well, right now it’s much smarter than me on how long I should brush my teeth.
Yes, and that’s true for most of us—how long you should brush, how much pressure you should exert, and things like that.
It gives very bad relationship advice, though, so I guess you can’t say it’s smarter than me yet, right?
Right, not about relationships, anyway. But about the duration of brush time, it is. And that’s the whole idea of the singularity, that, basically, we’re going to expand the intelligence of most things around us.
So now we have watches, but they’re becoming smart watches. We have cars, but they’re becoming smart cars. And we have smart thermostats, and smart appliances, and smart buildings, and smart everything. And that means that the intelligence of the previously dumb things is going to continue expanding, while unfortunately our own personal intelligence, or our intelligence as a species, is not.
In what sense is it a “singularity”?
Let me talk about the roots of the word. The origin of the word singularity comes from mathematics, where it basically denotes a problem with an undefined answer, like five divided by zero, for example. Or from physics, where it signifies a black hole: that’s to say, a place where there is a rupture in the fabric of space-time, and the laws of the universe as we know them don’t hold true.
In the technological sense, we’re borrowing the term to signify the moment when humanity stops being the smartest species on our planet, and machines surpass us. Beyond that moment, we’re going to be looking into a black hole of our future, because our current models fail to provide sufficient predictions as to what happens next.
So everything that we have already is kind of going to have to change, and we don’t know which way things are going to go, which is why we’re calling it a black hole. Because you cannot see beyond the event horizon of a black hole.
Well if you can’t see beyond it, give us some flavor of what you think is going to happen on this side of the singularity. What are we going to see gradually, or rapidly, happen in the world before it happens?
One thing is the “smartification” of everything around us. Right now, we’re still living in a pretty dumb universe. But things are coming to have more and more intelligence, including our toothbrushes, our cars, our fridges, our TVs, our computers, our tables, everything around us. That’s going to keep happening, until we reach the last stage where, according to Ray Kurzweil, quote, “the universe wakes up,” everything becomes smart, and we end up with things like smart dust.
Another thing will be the merger between man and machine. So, if you look at the younger generation, for example, they’re already inseparable from their smartphones. It used to be the case that a computer was the size of a building—and by the way, those computers were even weaker in terms of processing power than our smartphones are today. Even the Apollo program used a much less powerful machine to send astronauts to the moon than what we have today in our pockets.
However, that change is not going to stop there. The next step is that those machines are going to actually move inside of our bodies. So they used to be inside of buildings, then they went on our body, in our pockets, and are now becoming what’s called “wearable technology.” But tomorrow it will not be wearable anymore, because it will be embedded.
It will be embedded inside our gut, for example, to monitor our microbiome and how our health is progressing; it will even be embedded into our brains. Basically, there may be a point where it becomes inseparable from us. That in turn will change the very definition of being human, not only at the collective level, as a species, but also at the personal level, because we are possibly, or very likely, going to have a much bigger diversification of what it means to be a human than we have right now.
So when you talk about computers becoming smarter than us, you’re talking about an AGI, artificial general intelligence, right?
Not necessarily. The toothbrush example is artificial narrow intelligence, but as it gets to be smarter and smarter there may be a point where it becomes artificial general intelligence, which is unlikely, but it’s not impossible. And the distinction between the two is that artificial general intelligence is equal or better than human intelligence at everything, not only that one thing.
For example, a calculator today is better than us in calculations. You can have other examples, like, let’s say a smart car may be better than us at driving, but it’s not better than us at Jeopardy, or speaking, or relationship advice, as you pointed out.
We would reach artificial general intelligence at the moment when a single machine will be able to be better at everything than us.
And why do you say that an AGI is unlikely?
Oh no, I was saying that an AGI may be unlikely in a toothbrush format, because the toothbrush requires only so many particular skills or capabilities, only so many kinds of knowledge.
So we would require the AGI for the singularity to occur, is that correct?
Yeah, well that’s a good question, and there’s a debate about it. But basically the idea is that anything you can think of which humans do today, that machine would be equal or better at it. So, it could be Jeopardy, it could be playing Go. It could be playing cards. It could be playing chess. It could be driving a car. It could be giving relationship advice. It could be diagnosing a medical disease. It could be doing accounting for your company. It could be shooting a video. It could be writing a paper. It could be playing music or composing music. It could be painting an impressionistic or other kind of piece of art. It could be taking pictures equal or better than Henri Cartier-Bresson, etc. Everything that we’re proud of, it would be equal or better at.
And when do you believe we will see an AGI, and when would we see the singularity?
That’s a good question. I kind of fluctuate a little bit on that, depending on whether we have some kind of global-scale disaster, like a nuclear war, for example—right now the situation is getting pretty tense with North Korea—or some kind of extreme climate-related event, or a catastrophe caused by an asteroid impact. Barring any of those huge things that can basically change the face of the Earth, I would say probably 2045 to 2050 would be a good estimate.
So, for an AGI or for the singularity? Or are you, kind of, putting them both in the same bucket?
For the singularity. Now, we can reach human-level intelligence probably by the late 2020s.
So you think we’ll have an AGI in twelve years?
Probably, yeah. But you know, the timeline, to me, is not particularly crucial. I’m a philosopher, so the timeline is interesting, but the more important issues are always the philosophical ones, and they’re generally related to the question of, “So what?” Right? What are the implications? What happens next?
It doesn’t matter so much whether it’s twelve years or sixteen years or twenty years. I mean, it can matter in the sense that it can help us be more prepared, rather than not, so that’s good. But the question is, so what? What happens next? That’s the important issue.
For example, let me give you another crucial technology that we’re working on, which is life-extension technology: trying to make humanity “amortal.” Which is to say, we’re not going to be immortal—we can still die if we get run over by a truck or something like that—but we would not be likely to die from the general causes of death that we see today, which are usually old-age related.
As an individual, I’m hoping that I will be there when we develop that technology. I’m not sure I will still be alive when we have it, but as a philosopher what’s more important to me is, “So what? What happens next?” So yeah, I’m hoping I’ll be there, but even if I’m not there it is still a valid and important question to start considering and investigating right now—before we are at that point—so that we are as intellectually and otherwise prepared for events like this as possible.
I think the best guesses are, we would live to about 6,750. That’s how long it would take, actuarially speaking, for some, you know, Wile E. Coyote, piano-falling-off-the-top-of-a-building-and-landing-on-you kind of thing to happen to you.
So let’s jump into philosophy. You’re, of course, familiar with Searle’s Chinese Room question. Let me set that up for the listeners, and then I’ll ask you to comment on it.
So it goes like this: There’s a man, we’ll call him the librarian. And he’s in this giant room that’s full of all of these very special books. And the important part, the man does not speak any Chinese, absolutely no Chinese. But people slide him questions under the door that are written in Chinese.
He takes their question and finds the book with the first symbol on its spine, pulls it down, and looks up the second symbol. The entry for the second symbol says go to book 24,601, so he goes to book 24,601 and looks up the third symbol, and the fourth, and the fifth—all the way to the end.
And when he gets to the end, the final book says copy this down. He copies these lines, and he doesn’t understand what they are, slides it under the door back to the Chinese speaker posing the question. The Chinese speaker picks it up and reads it and it’s just brilliant. I mean, it’s absolutely over-the-top. You know, it’s a haiku and it rhymes and all this other stuff.
So the philosophical question is, does that man understand Chinese? Now a traditional computer answer might be “yes.” I mean, the room, after all, passes the Turing test. Somebody outside sliding questions under the door would assume that there’s a Chinese speaker on the other end, because the answers are so perfect.
But at a gut level, the idea that this person understands Chinese—when they don’t know whether they’re talking about cholera or coffee beans or what have you—seems a bit of a stretch. And of course, the punchline of the thing is, that’s all a computer can do.
All a computer can do is manipulate ones and zeros and memory. It can just go book to book and look stuff up, but it doesn’t understand anything. And with no understanding, how can you have any AGI?
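The room’s mechanics can be caricatured as pure table lookup, in which nothing in the program models meaning. This is a toy sketch; the phrases are hypothetical stand-ins for the room’s rule books:

```python
# A caricature of Searle's Chinese Room: the "librarian" produces
# replies by pure symbol lookup, with no representation of meaning.
# The rule book below is a hypothetical stand-in for the room's books.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def librarian(question: str) -> str:
    # The librarian "understands" nothing; he only follows the books.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(librarian("你好吗？"))
```

From outside the door, the replies look fluent; inside, there is only lookup, which is exactly the intuition the thought experiment trades on.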
So, let me ask you this: How do you know that that’s not exactly what’s happening right now in my head? How do you know that me speaking English to you right now is not the exact process you described?
I don’t know, but the point of the setup is: If you are just that, then you don’t actually understand what we’re actually talking about. You’re just cleverly answering things, you know, it is all deterministic, but there’s, quote, “nobody home.” So, if that is the case, it doesn’t invalidate any of your answers, but it certainly limits what you’re able to do.
Well, you see, that’s a question that relates very much to consciousness: “Are you aware of what you’re doing?” and things like that. And what is consciousness in the first place?
Let’s divide that up. Strictly speaking, consciousness is subjective experience. “I had an experience of doing X,” which is a completely different thing than “I have an intellectual understanding of X.” So, just the AGI part, the simple part of: does the man in the room understand what’s going on, or not?
Let’s be careful here. Because, what do you mean by “understand”? Because you can say that I’m playing chess against a computer. Do I understand the playing of chess better than a computer? I mean what do you mean by understand? Is it not understanding that the computer can play equal or better chess than me?
The computer does not understand chess in any meaningful sense, and that’s what we have to get at. You know, one of the things we humans do very well is generalize from experience, and we do that because we find things are similar to other things. We understand that, “Aha, this is similar to that,” and so forth. A computer doesn’t really understand how to play chess. It’s arguable whether the computer is even playing chess; but putting that word aside, the computer does not understand it.
The computer, that program, is never going to figure out baccarat any more than it can figure out how many coffee beans Colombia should export next year. It just doesn’t have any awareness at all. It’s like a clock. You wind a clock, and tick-tock, tick-tock, it tells you the time. We progressively add additional gears to the clockwork again and again. And the thesis of what you seem to be saying is that, eventually, you add enough gears so that when you wind this thing up, it’s smarter than us and it can do absolutely anything we can do. I find that to be an unproven assumption at best, and perhaps a fantastic one.
I agree with you on the part that it’s unproven. And I agree with you that it may or may not be an issue. But it depends on what you’re going for here, and it depends on the computer you’re referring to, because we now have AlphaGo, the software that was developed to play Go. And it actually learned to play based on previous games—that’s to say, on the previous experience of other players. And then that same kind of approach, learning from the past and coming up with new, creative solutions for the future, was then applied in a bunch of other fields, including bioengineering, including medicine, and so on.
So when you say the computer will never be able to calculate how many beans that country needs for next season, actually it can. That’s why it’s getting more and more generalized intelligence.
Well, let me ask that question a slightly different way. So I have, hypothetically, a cat food dish that measures out cat food for my cat. And it learns, based on the weight of the food in it, the right amount to put out. If the cat eats a lot, it puts more out. If the cat eats less, it puts less out. That is a learning algorithm, that is an artificial intelligence. It’s a learning one, and it’s really no different than AlphaGo, right? So what do you think happens from the cat dish—
—I would take issue with you saying it’s really no different from AlphaGo.
Hold on, let me finish the question; I’m eager to hear what you have to say. What happens, between the cat food AI and AlphaGo and an AGI? At what point does something different happen? Where does that break, and it’s not just a series of similar technologies?
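The feeder Byron describes is a minimal feedback learner. Here is a sketch of that rule, with a hypothetical device and arbitrary parameters:

```python
class AdaptiveFeeder:
    """Hypothetical cat-food dispenser that 'learns' by simple feedback.

    It nudges tomorrow's portion toward how much of today's portion
    was actually eaten: a minimal learning rule, far simpler than
    AlphaGo, but structurally still "adjust behavior from experience".
    """

    def __init__(self, portion_grams=50.0, rate=0.2):
        self.portion = portion_grams  # current daily portion (grams)
        self.rate = rate              # how aggressively to adapt

    def update(self, grams_eaten):
        """Move the portion a fraction of the way toward consumption."""
        error = grams_eaten - self.portion
        self.portion += self.rate * error
        return self.portion

feeder = AdaptiveFeeder()
for eaten in [50, 40, 40, 40]:  # the cat starts eating less
    feeder.update(eaten)
print(round(feeder.portion, 1))  # portion drifts down toward 40
```

The adaptation here is a single parameter nudged by an error signal; the question in the dialogue is whether scaling that shape up ever amounts to understanding.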
So, let me answer your question this way… When a baby is born, it’s totally dumb, blind, and deaf. It’s unable to differentiate between itself and its environment, and it lacks self-awareness completely for, arguably, the first year-and-a-half to two years. And there are a number of psychological tests that can be administered as the child develops. Girls, by the way, develop personal awareness about three to six months faster and earlier than boys, on average. But let’s say the average age is about a year-and-a-half to two years—and that’s a very crude estimate, by the way. The development of AI would not be exactly the same, but there will be parallels.
The question you’re raising is a very good question. I don’t have a good answer because, you know, that can only happen with direct observational data—which we don’t have right now to answer your question, right? So, let’s say tomorrow we develop artificial general intelligence. How would we know that? How can we test for that, right? We don’t know.
We’re not even sure how we can evaluate that, right? Because just as you suggested, it could be just a dumb algorithm, processing just like your algorithm is processing how much cat food to provide to your cat. It can lack complete self-awareness, while claiming that it has self-awareness. So, how do we check for that? The answer is, it’s very hard. Right now, we can’t. You don’t know that I even have self-awareness, right?
But, again, those are two different things, right? Self-awareness is one thing, but an AGI is easy to test for, right? You give a program a list of tasks that a human can do. You say, “Here’s what I want you to do. I want you to figure out the best way to make espresso. I want you to find the Waffle House…” I mean, it’s a series of tasks. There’s nothing subjective about it, it’s completely objective.
So what has happened between the cat food example, to the AlphaGo, to the AGI—along that spectrum, what changed? Was there some emergent property? Was there something that happened? Because you said the AlphaGo is different than my cat food dish, but in a philosophical sense, how?
It’s different in the sense that it can learn. That’s the key difference.
So does my cat food thing, it gives the cat more food some days, and if the cat’s eating less, it cuts the cat food back.
Right, but you’re talking just about cat food, but that’s what children do, too. Children know nothing when they come into this world, and slowly they start learning more and more. They start reacting better, and start improving, and eventually start self-identifying, and eventually they become conscious. Eventually they develop awareness of the things not only within themselves, but around themselves, etc. And that’s my point, is that it is a similar process; I don’t have the exact mechanism to break down to you.
I see. So, let me ask you a different question. Nobody knows how the brain works, right? We don’t even know how thoughts are encoded. We just use this ubiquitous term, “brain activity,” but we don’t know how… You know, when I ask you, “What was the color of your first bicycle?” and you can answer that immediately, even though you’ve probably never thought about it, nor do you have some part of your brain where you store first bicycles or something like that.
So, given that we don’t know that, and therefore don’t really know how it is that we happen to be intelligent, on what basis do you say, “Oh, we’re going to build a machine that can do something that we don’t even know how we do,” and even put a timeline on it, to say, “And it’s going to happen in twelve years”?
So there are a number of ways to answer your question. One is, we don’t necessarily need to know. We don’t know how we create intelligence when we have babies, either, but we do it. How did it happen? It happened through evolution; so, likewise, we have what are called “evolutionary algorithms,” which are basically algorithms that learn to learn. And the key point, as Stephen Wolfram argued in his seminal work A New Kind of Science, is that from very simple things, very complex patterns can emerge. Look at our universe; it emerged from tiny, very simple things.
Actually, I’m interviewing Lawrence Krauss next week; he says it emerged from nothing. So from nothing, you have the universe, which has everything, according to him at least. And we don’t know how we create intelligence in the baby’s case; we just do it. Just like you don’t know how you grow your nails or your hair, but you do it. So, likewise, evolutionary algorithms are just one of many different paths we can take to get to that level of intelligence.
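The evolutionary-algorithm idea, blind variation plus selection producing structure nobody explicitly programmed, can be illustrated with a toy sketch (the target, mutation rate, and population size are arbitrary choices for the example):

```python
import random

random.seed(0)

TARGET = [1] * 20  # arbitrary "environment": fitness is closeness to all-ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, p=0.05):
    # Blind variation: flip each bit with a small probability.
    return [1 - g if random.random() < p else g for g in genome]

# Random initial population; nobody tells the algorithm how to solve the task.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = max(population, key=fitness)

for generation in range(200):
    if fitness(best) == len(TARGET):
        break
    parents = sorted(population, key=fitness, reverse=True)[:10]  # selection
    population = [mutate(random.choice(parents)) for _ in range(30)]
    best = max(population + [best], key=fitness)  # keep the best seen so far

print(fitness(best))
```

Nothing in the code encodes a solution; the combination of variation and selection finds one anyway, which is the point being made about evolved intelligence.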
By the way, this is what’s sometimes referred to as the black box problem, and AlphaGo is a bit of an example of that. There are certain things we know, and there are certain things we don’t know that are happening. Just like when I interviewed David Ferrucci, who was the team leader behind Watson, we were talking about, “How does Watson get this answer right and that answer wrong?” His answer is, “I don’t really know, exactly.” Because there are so many complicated things coming together to produce an answer, that after a certain level of complexity, it becomes very tricky to follow the causal chain of events.
So yes, it is possible to develop intelligence, and the best example for that is us. Unless you believe in that sort of first-mover, God-is-the-creator kind of thing, that somebody created us—you can say that we kind of came out of nothing. We evolved to have both consciousness and intelligence.
So likewise, why not have the same process, only on a different stratum? Right now we’re biologically based; basically it’s DNA code replicating itself—we have A, C, T, and G. Is it inconceivable that we could have this with a binary code? Or even if not binary, some other kind of mathematical code, so you can have intelligence evolve—be it silicon-based, be it photon-based, be it organic-processor-based, be it quantum-computer-based… what have you. Right?
So are you saying that there could be no other stratum, and no other way that could ever hold intelligence other than us? Then my question to you will be, well what’s the evidence of that claim? Because I would say that we have the evidence that it’s happened once. We could therefore presume that it could not be necessarily limited to only once. We’re not that special, you know. It could possibly happen again, and more than once.
Right, I mean it’s certainly a tenable hypothesis. The Singularitarians, for the most part, don’t treat it as a hypothesis; they treat it as a matter of faith.
That’s why I’m not such a good Singularitarian.
They say, “We have achieved consciousness and a general intelligence, therefore we must be able to build one.” You don’t generally apply that logic to anything else in life, right? There is a solar system, therefore we must be able to build one. There is a third dimension, therefore we must be able to build one.
With almost nothing else in life do you do that. And yet people who talk about the singularity—and are willing to put a date on it, by the way—act as though there’s nothing up for debate, even though how we achieved all the things required for it is completely unknown.
Let me give you Daniel Dennett’s take on things, for example. He says that consciousness doesn’t exist; that it’s self-delusion. He actually makes a very, very good argument for it. I’ve been trying to get him on my podcast for a while. But he says it’s total self-fabrication, self-delusion. It doesn’t exist. It’s beside the point, right?
But he doesn’t deny that we’re intelligent though. He just says that what we call “consciousness” is just brain activity. But he doesn’t say, “Oh, we don’t really have a general intelligence, either.” Obviously, we’re intelligent.
Exactly. But that’s kind of what you’re trying to imply with the machines, because they will be intelligent in the sense that they will be able to problem-solve anything that we’re able to problem-solve, as we pointed out—whether it’s chess, whether it’s cat food, whether it’s playing or composing the tenth symphony. That’s the point.
Okay, well that’s at least unquestionably the theory.
So let’s go from there. Talk to me about Transhumanism. You write a lot about that. What do you think we’ll be able to do? And if you’re willing to say, when do you think we’ll be able to do it? And, I mean, a man with a pacemaker is a Transhuman, right? He can’t live without it.
I would say all of us are already cyborgs, depending on your definition. If you say that the cyborg is an organism consisting of, let’s say, organic and inorganic parts working together in a single unit, then I would answer that if you have been vaccinated, you’re already a cyborg.
If you’re wearing glasses, or contact lenses, you’re already a cyborg. If you’re wearing clothes and you can’t survive without them, or shoes, you’re already a cyborg, right? Take me: I am severely short-sighted, like -7.25 or something crazy like that. I’m almost blind without my contacts. Almost nobody knows that, unless they listen to these interviews, because I wear contacts, and for all intents and purposes I am as eye-capable as anybody else. But take off my contacts and I’ll be blind. Therefore you have one single unit between me and that inorganic material, which I basically cannot survive without.
I mean, two hundred years ago, or five hundred years ago, I’d probably be dead by now, because I wouldn’t be able to get food. I wouldn’t be able to survive in the world with that kind of severe shortsightedness.
The same with vaccinations, by the way. We know that the vast majority of the population, at least in the developed world, has at least one, and in most cases a number of different vaccines—already by the time you’re two years old. Viruses, basically, are the carriers for the vaccines. And viruses straddle that line, that gray area between living and nonliving things—the hard-to-classify things. They become a part of you, basically. You carry those vaccine antibodies, in most cases, for the rest of your life. So I could say, according to that definition, we are all cyborgs already.
That’s splitting hairs in a very real sense, though. It seems from your writing that you think we’re going to be doing much more radical things than that; things which, as you said earlier, call into question whether or not we’re even human anymore. What are those things, and why do they affect our definition of “human”?
Let me give you another example. I don’t know if you or your audience have seen it in the news: a couple of months ago, Chinese researchers tried to modify human embryos with CRISPR gene-editing technology. Consider where we are now. It’s been almost 40 years since we had the first in vitro babies. At the time, in vitro basically meant that you do the fertilization outside of the womb, in a petri dish or something like that. Then you watch the division process begin, and you select, by simple visual inspection, what looks to be the best fertilized egg. And that’s the egg that you would implant.
Today, we don’t just observe; we can actually preselect. And not only that, we can actually go in and start changing things. It’s just like when you’re first born: you start learning the alphabet, then you start reading full words, then full sentences, and then you start writing yourself.
We’re doing exactly that with genetics right now. Thirty, forty, fifty years ago, we were just starting to identify the letters of the alphabet. Then we started reading slowly; we read the human genome about fifteen years ago. And now we’re slowly learning to write. So the implication is this: what does it mean to be human, when you can change your sex, color, race, age, and physical attributes?
Because that’s the bottom line. When we can go and make changes at the DNA level of an organism, you can change all those parameters. It’s just like programming. In computer science it’s 0 and 1. In genetics it’s ATCG, four letters, but it’s the same principle. In one case, you’re programming a software program for a computer; in the other case, you’re programming living organisms.
But in that example, though, everybody—no matter what race you are—you’re still a human; no matter what gender you are, you’re still a human.
It depends how you define “human,” right? Let’s be more specific. Right now, when you say “humans,” what you actually mean is Homo sapiens. But Homo sapiens has a number of very specific physical attributes. When you start changing the DNA structure, you can change those attributes to the point where the result doesn’t carry them anymore. Are you still Homo sapiens then?
From a biological point of view, the answer will most likely depend on how far you’ve gone. There’s no clear breakpoint, though, and different people will have a different red line to cross. For some, it’s just a bit. Let’s say you and your wife or partner want to have a baby, and both of you happen to be carriers of a certain genetic disease that you want to avoid. You want to make sure, before you conceive that baby, that the fertilized egg doesn’t carry that genetic material.
And that’s all you care about, that’s fine. But someone else will say, that’s your red line, whereas my red line is that I want to give that baby the good looks of Brad Pitt, I want to give it the brain of Stephen Hawking, and I want to give it the strength of a weightlifter, for example. Each person who is making that choice would go for different things, and would have different attributes that they would choose to accept or not to accept.
Therefore, you would start having that diversification that I talked about in the beginning. And that’s even before you start bringing in things like neural cognitive implants, etc.—which would basically be the merger between man and machine, right? Which means you can have two parallel developments: on the one hand, our biological evolution and development, accelerated through biotech and genetics; and on the other hand, the merger of that with the accelerating evolution and improvement of computer technology and neurotech. When you put those two things together, you end up with a final entity which is nothing like what we are today, and it definitely would not fit the definition of being human.
Do you worry, at some level, that it’s taken us five thousand years of human civilization to come up with this idea that there are things called human rights? That there are these things you don’t do to a person no matter what. That you’re born with them, and because you are human, you have these rights.
Do you worry that, for better or worse, what you’re talking about will erode that? That we will lose this sense of human rights, because we lose some demarcation of what a human is?
That’s a very complicated question. I would suggest people read Yuval Harari’s book Homo Deus on that topic; the previous one was called Sapiens. Those are probably the best two books that I’ve read in the last ten years. But basically, the idea of human rights was born just a couple hundred years ago. It came to exist with humanism, and especially liberal humanism. If you look at how it’s playing out, humanism is kind of taking over what religion used to do, in the sense that religion used to put God at the center of everything—and then, since we were his creation, everything else was created for us, to serve us.
For example the animal world, etc., and we used to have the Ptolemaic idea of the universe, where the earth was the center, and all of those things. Now, what humanism is doing is putting the human in the center of the universe, and saying humanity has this primacy above everything else, just because of our very nature. Just because you are human, you have human rights.
I would say that’s an interesting story, but if we care about that story we need to push it even further.
In our present context, how is that working out for everyone else other than humanity? Well the moment we created humanism and invented human rights, we basically made humanity divine. We took the divinity from God, and gave it to humanity, but we downgraded everybody else. So animals which, back in the day—let’s say the hunter-gatherer society—we considered ourselves to be equal and on par with the animals.
Because you see, one day I would kill you and eat you; the next day maybe a tiger would eat me. That’s how the world was. But now, we have downgraded all the animals to machines—they don’t have consciousness, they don’t have any feelings, they lack self-awareness—and therefore we can enslave and kill them any way we wish.
So as a result, we pride ourselves on our human rights and things like that, and yet we enslave and kill seventy to seventy-five billion animals every year, and 1.3 trillion sea organisms like fish, annually. So the question then is, if we care so much about rights, why should they be limited only to human rights? Are we saying that other living organisms are incapable of suffering? I’m a dog owner; I have a seventeen-and-a-half-year-old dog. She’s on her last legs. She actually had a stroke last weekend.
I can tell you that she has taught me that she possesses the full spectrum of happiness and suffering that I do, pretty much. Even things like jealousy, and so on, she demonstrated to me multiple times, right? Yet, we today use that idea of humanism and human rights to defend ourselves and enslave everybody else.
I would suggest it’s time to expand that and say, first, to our fellow animals, that we need to include them, that they have their own rights, first of all. Second of all, that possibly rights should not be limited to organic organisms, and should not be called human or animal rights, but they should be called intelligence rights, or even beyond intelligence—any kind of organism that can exhibit things like suffering and happiness and pleasure and pain.
Because obviously, there is a different level of intelligence between me and my dog—we would hope—but she’s able to suffer as much as I am, and I’ve seen it. And that’s true especially more for whales and great apes and stuff like that, which we have brought to the brink of extinction right now. We want to be special, that’s what religion does to us. That’s what humanism did with human rights.
Religion taught us that we’re special because God created us in his own image. Then humanism said there is no God, we are the God, so we took the place of God—we took his throne and said, “We’re above everybody else.” That’s a good story, but it’s nothing more than a story. It’s a myth.
You’re a vegan, correct?
How far down would you extend these rights? I mean, you have consciousness, and then below that you have sentience, which is of course a misused word. People use “sentience” to mean intelligence, but sentience is the ability to feel something. In your world, you would extend rights at some level all the way down to anything that can feel?
Yeah, and look: I’ve been a vegan for just over a year and a couple of months, let’s say fourteen months. So, just like any other human being, I have been, and still am, very imperfect. Now, I don’t know exactly how far we should expand that, but I would say we should stop immediately at the level we can easily observe that we’re causing suffering.
If you go to a butcher shop, especially an industrialized farming butcher shop, where they kill something like ten thousand animals per day—it’s so mechanized, right? If you see that stuff in front of your eyes, it’s impossible not to admit that those animals are suffering, to me. So that’s at least the first step. I don’t know how far we should go, but we should start at the first steps, which are very visible.
What do you think about consciousness? Do you believe consciousness exists, unlike Dan Dennett, and if so where do you think it comes from?
Now you’re putting me on the spot. I have no idea where it comes from, first of all. You know, I am atheist, but if there’s one religion that I have very strong sympathies towards, that would be Buddhism. I particularly value the practice of meditation. So the question is, when I meditate—and it only happens rarely that I can get into some kind of deep meditation—is that consciousness mine, or am I part of it?
I don’t know. So I have no idea where it comes from. I think there is something like consciousness. I don’t know how it works, and I honestly don’t know if we’re part of it, or if it is a part of us.
Is it at least a tenable hypothesis that a machine would need to be conscious, to be an AGI?
I would say yes, of course, but the next step, immediately, is: how do we know if that machine has consciousness or not? That’s what I’m struggling with, because one of the implications is that the moment you accept, or commit to, that kind of definition, that we’re only going to have AGI if it has consciousness, then the question is, how do we know if and when it has consciousness? An AGI that’s programmed to say, “I have consciousness,” well, how do you know if it’s telling the truth, and if it’s really conscious or not? So that’s what I’m struggling with, to be more precise in my answer.
And mind you, I have the luxury of being a philosopher, and that’s also kind of the negative too—I’m not an engineer, or a neuroscientist, so…
But you can say consciousness is required for an AGI, without having to worry about, well how do we measure it, or not.
That’s a completely different thing. And if consciousness is required for an AGI, and we don’t know where human consciousness comes from, that at least should give us an enormous amount of pause when we start talking about the month and the day when we’re going to hit the singularity.
Right, and I agree with you entirely, which is why I’m not so crazy about the timelines, and I’m staying away from it. And I’m generally on the skeptical end of things. By the way, for the last seven years of my journey I have been becoming more and more skeptical. Because there are other reasons or ways that the singularity…
First of all, the future never unfolds the way we think it will, in my opinion. There’s always those black swan events that change everything. And there are issues when you extrapolate, which is why I always stay away from extrapolation. Let me give you two examples.
The easy example is when you have negative extrapolation. We have people such as Lord Kelvin, who was the president of the Royal Society, one of the smartest people of his time, who wrote a book in the 1890s about how heavier-than-air aircraft are impossible to build.
The great H.G. Wells wrote, just in 1902, that heavier-than-air aircraft are totally impossible to build, and he’s a science fiction writer. And yet, a year later the Wright brothers, two bicycle makers, who probably never read Lord Kelvin’s book, and maybe didn’t even read any of H.G. Wells’ science fiction novels, proved them both wrong.
So people were extrapolating negatively from the past. Saying, “Look, we’ve tried to fly since the time of Icarus, and the myth of Icarus is a warning to us all: we’re never going to be able to fly.” But we did fly. So we didn’t fly for thousands of years, until one day we flew. That’s one kind of extrapolation that went wrong, and that’s the easy one to see.
The harder one is the opposite, which is called positive extrapolation. From 1903 to, let’s say, the late 1960s, we went from the Wright brothers to the moon. People, amazing people like Arthur C. Clarke, said: well, if we made it from 1903 to the moon by the late 1960s, then by 2002 we will be beyond Mars; we will be outside of our solar system.
That’s positive extrapolation. Based on very good data for, let’s say, sixty-five years from 1903 to 1968—very good data—you saw tremendous progress in aerospace technology. We went to the moon several times, in fact, and so on and so on. So it was logical to extrapolate that we would be by Mars and beyond, today. But actually, the opposite happened. Not only did we not reach Mars by today, we are actually unable to get back to the moon, even. As Peter Thiel says in his book, we were promised flying cars and jetpacks, but all we got was 140 characters.
In other words, beware of extrapolations, because they’re true until they’re not true. You don’t know when they are going to stop being true, and that’s the nature of black swan sorts of things. That’s the nature of the future. To me, it’s inherently unknowable. It’s always good to have extrapolations, and to have ideas, and to have a diversity of scenarios, right?
That’s another thing which I agree with you on: Singularians tend to embrace a single view of the future, or a single path to the future. I have a problem with that myself. I think that there’s a cone of possible futures. There are certainly limitations, but there is a cone of possibilities, and we are aware of only a fraction of it. We can extrapolate only in a fraction of it, because we have unknown unknowns, and we have black swan phenomena, which can change everything dramatically. I’ve even listed three disaster scenarios—like asteroids, ecological collapse, or nuclear weapons—which can also change things dramatically. There are many things that we don’t know, that we can’t control, and that we’re not even aware of that can and probably will change the actual future from the future we think will happen today.
Last philosophical question, and then I’d like to chat about what you’re working on. Do you believe humans have free will?
Yes. So I am a philosopher, and again, just like with the future, there are limitations, right? All the possible futures stem from the cone of possibilities derived from our present. Likewise, our ability to choose, to make decisions, to take action, has very strict limitations; yet there is a realm of possibilities that’s entirely up to us. At least that’s what I’m inclined to think, even though most scientists that I meet and interview on my podcast are actually, to one degree or another, determinists.
Would an AGI need to have free will in order to exist?
Yes, of course.
Where do you think human free will comes from? If every effect had a cause, and every decision had a cause—presumably in the brain—whether it’s electrical or chemical or what have you… Where do you think it comes from?
Yeah, it could come from quantum mechanics, for example.
That only gets you randomness. That doesn’t get you somehow escaping the laws of physics, does it?
Yes, but randomness can be sort of a living-cat and dead-cat outcome, at least metaphorically speaking. You don’t know which one it will be until that moment is there. The other thing is, let’s say you have fluid dynamics: with the laws of physics, we can predict how a particular system of gas will behave within the laws of fluid dynamics, but it’s impossible to predict how a single molecule or atom will behave within that system. In other words, if the laws of the universe and the laws of physics set the realm of possibilities, then within that realm, you can still have free will. So we are such tiny, minuscule little parts of the system, as individuals, that we are more akin to atoms, if not smaller particles than that.
Therefore, we can still be unpredictable.
Just like it’s unpredictable, by the way, with quantum mechanics, to say, “Where is the electron located?” and if you try to observe it, then you are already impacting on the outcome. You’re predetermining it, actually, when you try to observe it, because you become a part of the system. But if you’re not observing it, you can create a realm of possibilities where it’s likely to be, but you don’t know exactly where it is. Within that realm, you get your free will.
Final question: Tell us what you’re working on, what’s exciting to you, what you’re reading about… I see you write a lot about movies. Are there any science fiction movies that you think are good ones to inform people on this topic? Just talk about that for a moment.
Right. So, let me answer backwards. In terms of movies: it’s been a while since I’ve watched it, but I actually even wrote a review on it. One of the movies that I really enjoyed watching is by the Wachowskis, and it’s called “Cloud Atlas.” I don’t think that movie was very successful at all, to be honest with you.
I’m not even sure if they managed to recover the money they invested in it, but in my opinion it was one of the top ten best movies I’ve ever seen in my life. Because it’s a sextet: it had six plots progressing in parallel, six things happening in six different locations, in six different epochs, in six different timelines, with tremendous actors. And it touched on a lot of those future technologies, and even the meaning of being human: what separates us from the others, and so on.
I would suggest people check out “Cloud Atlas.” One of my favorite movies. The previous question you asked was, what am I working on?
Well, to be honest, I just finished my first book three months ago or something. I launched it on January 23rd I think. So I’ve been basically promoting my book, traveling, giving speeches, trying to raise awareness about the issues, and the fact that, in my view, we are very unprepared—as a civilization, as a society, as individuals, as businesses, and as governments.
We are going to witness a tremendous amount of change in the next several decades, and I think we’re grossly unprepared. And I think, depending on how we handle those changes, with genetics, with robotics, with nanotech, with artificial intelligence—even if we never reach the level of artificial general intelligence, by the way, that’s beside the point to me—just the changes we’re going to witness as a result of the biotech revolution can actually put our whole civilization at risk. They’re not just only going to change the meaning of what it is to be human, they would put everything at risk. All of those things converging together, in the narrow span of several decades basically, I think, create this crunch point which could be what some people have called a “pre-singularity future,” which is one possible answer to the Fermi Paradox.
Enrico Fermi was this very famous Italian physicist who, decades ago, basically observed that there are two-hundred billion galaxies just in the observable realm of the universe. And each of those two-hundred billion galaxies has two-hundred billion stars. In other words, there’s almost an endless number of exoplanets like ours—which are located in the Goldilocks zone, where it’s not too hot or too cold—which can potentially give birth to life. The question then is, if there are so many planets and so many stars and so many places where we can have life, where is everybody? Where are all the aliens? There’s a diversity of answers to that question. But at least one of those possible scenarios, to explain this paradox, is what’s referred to as the pre-singularity future. Which is to say, in each civilization, there comes a moment where its technological prowess surpasses its capacity to control it. Then, possibly, it self-destructs.
So in other words, what I’m saying is that it may be an occurrence which happens on a regular basis in the universe. It’s one way to explain the Fermi Paradox, and it’s possibly the moment that we’re approaching right now. So it may be a moment where we go extinct like dinosaurs; or, if we actually get it right—which right now, to be honest with you, I’m getting kind of concerned about—then we can actually populate the universe. We can spread throughout the universe, and as Konstantin Tsiolkovsky said, “Earth is the cradle of humanity, but sooner or later, we have to leave the cradle.” So, hopefully, in this century we’ll be able to leave the cradle.
But right now, we are not prepared—neither intellectually, nor technologically, nor philosophically, nor ethically, not in any way possible, I think. That’s why it’s so important to get it right.
The name of your book is?
Conversations with the Future: 21 Visions for the 21st Century.
All right, Nikola, it’s been fascinating. I’ve really enjoyed our conversation, and I thank you so much for taking the time.
My pleasure, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 7: A Conversation with Jared Ficklin

In this episode, Byron and Jared talk about rights for machines, empathy, ethics, singularity, designing AI experiences, transparency, and a return to the Victorian era.
[podcast_player name=”Episode 7: A Conversation with Jared Ficklin” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(01-04-09)-jared-ficklin.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-6.jpg”]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Jared Ficklin. He is a partner and Lead Creative Technologist at argodesign.
In addition, he has a wide range of other interests. He gave a well-received mainstage talk at TED about how to visualize music with fire. He co-created a mass transit system called The Wire. He co-designed and created a skate park. For a long while, he designed the highly-interactive, famous South by Southwest (SXSW) opening parties which hosted thousands and thousands of people each year.
Welcome to the show, Jared.
Jared Ficklin: Thank you for having me.
I’ve got to start off with my first and favorite question: What is artificial intelligence?
Well, I think of it in the very mechanical way: that it is a machine intelligence that has reached a point of sentience. But I think it is also a broad umbrella that we kind of apply to any case where a computer is attempting to solve problems with human-like thoughts or strategies.
Well, let’s split that into two halves, because there was an aspirational half of sentience, and then there was a practical half. Let’s start with the practical half. When it tries to solve problems that a person can solve, would you include a sprinkler that comes on when your lawn is dry as being an artificial intelligence? Because I don’t have to keep track of when my lawn is dry; the sprinkler system does.
First of all, this is my favorite half. I like this half of the procedural side more than the sentience side, although it’s fun to think about.
But, when you think of this sprinkler that you just talked about, there’s a couple of ways to arrive at this. One, it can be very procedural and not intelligent at all. I can have a sensor. The sensor can throw off voltage when it sees soil is of a certain dryness. That can connect on an electrical circuit which throws off a solenoid, and water begins spraying everywhere.
Now, you have the magic, and a person who doesn’t know what’s going on might look at that and say, “Holy cow! It’s intelligent! It has watered the lawn.” But it’s not. That is not machine intelligence and that is not AI. It’s just a simple procedural game.
There would be another way of doing that. And that’s to use a whole bunch of computations to study, and bring in a lot of factors of the weather coming in, the same sensor telling what soil dryness is—run it through a whole lot of algorithms and make a decision based on the probability and the threshold of whether to turn on that sprinkler or not, and that would be a form of machine learning.
Now, if you look at the two, they seem the same on the face of it, but they’re very different—not just in how they happen, but in the outcome. One of them is going to turn on the sprinkler even though there are seven inches of rain coming tomorrow, and the other is not going to turn on the sprinkler, because it’s aware that seven inches of rain are coming tomorrow. That little bit of added judgment, or intelligence as we call it, is the key difference. That’s what makes all the difference here, multiplied a million times over. To me.
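The contrast Jared draws can be sketched in a few lines of Python. This is a hedged illustration only, not anything from the episode: the function names, the 0.3 dryness threshold, and the one-inch rain cutoff are all invented for the example.

```python
# Two sprinkler strategies: a purely procedural trigger versus one with
# a little added "judgment" that accounts for tomorrow's forecast.
# All thresholds here are hypothetical, chosen for illustration.

DRYNESS_THRESHOLD = 0.3  # soil moisture fraction below which soil counts as "dry"

def procedural_sprinkler(soil_moisture: float) -> bool:
    """Sensor trips a threshold, the solenoid fires. No judgment involved."""
    return soil_moisture < DRYNESS_THRESHOLD

def judged_sprinkler(soil_moisture: float, forecast_rain_inches: float) -> bool:
    """Same sensor, but skip watering if heavy rain is coming anyway."""
    if forecast_rain_inches >= 1.0:  # tomorrow's rain will water the lawn for us
        return False
    return soil_moisture < DRYNESS_THRESHOLD

# Same dry soil, with seven inches of rain forecast for tomorrow:
print(procedural_sprinkler(0.2))   # True  -> waters pointlessly
print(judged_sprinkler(0.2, 7.0))  # False -> waits for the rain
```

On the face of it the two functions look alike, but only the second produces the different outcome Jared describes when seven inches of rain are on the way.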
Just to be clear, you specifically invoked machine learning. Are you saying there is no AI without machine learning?
No, I’m not saying that. That was just the strategy that applied in this situation.
Is the difference between those two extremes, in your mind, evolutionary? It’s not a black-and-white difference?
Yeah, there’s going to be scales and gradients. There’s also different strategies and algorithms that breed this outcome. One had a certain presumption of foresight, and a certain algorithmic processing. In some ways, it’s much smarter than a person.
There’s a great analogy. Matthew Santone, a co-worker here, was the first one to introduce me to it. I don’t know who came up with it, but it’s the ten thousand squirrels analogy about artificial intelligence and its state today.
On the face of it, you would think humans are much smarter than squirrels, and in many ways we are, but a squirrel has this particular capability of hiding ten thousand nuts in a field and being able to find them the next spring. When it comes to hiding nuts, a squirrel is much more intelligent than we are.
That’s another one of the key attributes of this procedural side of artificial intelligence, I think. It’s that these algorithms and intelligence become so focused on one specific task that they actually become much more capable and greater at it than humans.
Where do you think we are? Needless to say, the enthusiasm around AI is at a fevered pitch. What do you think brought that about, and do you think it’s warranted?
Well, it’s science fiction, I think, that has brought it about—everything from The Matrix in film, to books by John Varley or even Isaac Asimov—have given us a fascination about machines and artificial intelligence and what they can produce.
Then, right now, the business world is just talking all about it, because, I think, we’re at the level of the ten thousand squirrels. They can see a lot of value of putting those squirrels together to monitor something—you know, find those nuts in a way better than a human can. When you combine the two, it’s just on everyone’s lips and everywhere.
It doesn’t hurt that some of the bigwig thinkers of our time are out there talking about how dangerous it could possibly be, and that captures everyone’s attention as well.
What do you think of that? Why do you think that there are people who think we’re going to have an artificial general intelligence in a few years—five years is the earliest—and it’s something we should be concerned about? And then, there are people who say it’s not going to come for hundreds of years, and it’s not something we should be worried about. What is different in how they’re viewing the world?
It might be a reflection of the world that they live in, as well. For me, I really see two scales of danger. One is that we, as humans, put a lot of faith in machines—particularly our generation, Generation X. When I go to drive across town—and I’ve lived in my hometown of Austin, Texas, for seventeen years—I know a really good short route right through downtown. Every time I try to take it, my significant other will tell me that Google says there is a better route. We trust technology more than other humans.
The problem comes when, it’s like, if you have these ten thousand squirrels and they’re a toddler-level AI, you could turn over control far too early and end up in a very bad place. A mistake could happen, it could shut down the grid, a lot of people could die. That’s a form of danger I think some people are talking about, and they’re talking about it on the five-year scale because that’s where it’s at. You could get into that situation not because it’s more intelligent than us, but just because you put more reliance on something that isn’t actually very intelligent. That’s one possible danger that we’re facing.
The hundred-year danger is that I think people are afraid of the Hollywood scenario, the Skynet scenario, which I’m less afraid of—although I have one particular view on that that does give me some concern. I do get up every morning and tell Alexa, “Alexa, tell the robots I am on your side,” because I know how they’re programming the AI. If I write that line of code ten thousand times, maybe I can get it in the algorithm.
There are more than a few efforts underway, by one count, twenty-two different governments who are trying to figure out how to weaponize artificial intelligence. Does that concern you or is that just how things are?
Well, I’m always concerned about weaponization, but I’m not completely concerned. I think militaries think in a different way than creative technologists. They can do great damage, but they think in terms of failsafe, and they always have. They’re going to start from the position of failsafe. I’m more worried about marketing and a lot of areas where they work quick and dirty, and they don’t think about failsafe.
If you’re going to build a little bit of a neural net or a machine learning system, it’s open-sourced, it’s up on the cloud, a lot of people are using it, and you’re using it to give recommendations. And then at the end of the recommendations you’re not satisfied with it, and you say, “I know that you have recommended this mortgage from Bank A but the client is Bank B, so how can we get you to recommend Bank B?”
Essentially, teaching the machines that it’s okay to lie to humans. That is not operating from a position of failsafe. So it might just be marketing—clever terms like ‘programmatic’ and what not—that generates Skynet, and not necessarily the military industrial complex, which really believes in kill switches.
More kind of real world day-to-day worries about the technology—and we’re going to get to all the opportunities and all the benefits and all of that in just a moment.
Start with the fear.
Well, I think the fear tells us more, in a way, about the technology because it’s fun to think about. As far back as storytelling, we’ve talked about technologies that have run amok. And it seems to be this thing, that whenever we build something, we worry about it. Like, they put electricity in the White House, but then the president would never touch it and wouldn’t let his family touch it. When they put radios in cars, they said, “Oh, distracted driving, people are going to crash all the time.”
Airbags are going to kill you.
Right. Frankenstein, right? The word “robot” comes to us from a Czech play—
You just hit a part of the psyche that I think people are letting in, too, when you said Frankenstein. It’s personification that often is the dangerous thing.
Think of people who dance with poisonous snakes. Sometimes it’s done as a dare, but sometimes it’s done because there’s a personification put on the animal that gives it greater importance than what it actually is, and that can be quite dangerous. I think we risk that here, too, just putting too much personification, human tendencies, on the technology.
For instance, there is actually a group of people who are advocating rights for industrial robots today, as if they are human, when they are not. They are very much just industrial machines. That kind of psyche is what I think some people are trying to inoculate against now, because it walks us down this path where you’re thinking you can’t turn that thing off, because it’s been given this personification of sentience before it has actually achieved it. It’s been given this notion of rights before it actually has them.
And so even if it’s dangerous and we should hit the kill switch, there are going to be people reacting against that, saying, “You can’t kill this thing off”—even though it is quite dangerous to the species. That, to me, is a very interesting thing because a lot of people are looking at it as if, if it becomes intelligent, it will be a human intelligence.
I think that’s what a lot of the big thinkers think about, too. They think this thing is not going to be human intelligence, at which point you have to make a species-level judgment on its rights, and its ability to be sentient and put out there.
Let’s go back to the beginning of that conversation with ELIZA and Weizenbaum.
This man in the ’60s, Weizenbaum, made this program called ELIZA, and it was a really simple chatbot. You would say, “I am having a bad day.” And it would say, “Why are you having a bad day?” And then, you would say, “I’m having a bad day because of my mom.” “What did your mom do to make you have a bad day?” That’s it, very simple.
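The pattern-reflection trick Byron describes can be sketched in a toy Python function. This is a minimal illustration in the spirit of Weizenbaum’s ELIZA, not his original script; the regular expression and the reflection word list are invented for the example.

```python
import re

# A toy ELIZA-style responder: swap first-person words for second-person
# ones and turn the statement back into a question. There is no "I" here
# that understands anything -- it is pure pattern matching.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(statement: str) -> str:
    """Echo a statement back as a question, with no understanding at all."""
    statement = statement.strip().rstrip(".!")
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    return f"Tell me more about why {reflect(statement)}."

print(eliza_reply("I am having a bad day"))
# -> Why are you having a bad day?
```

The exchange quoted in the conversation falls out of exactly this kind of mechanical substitution, which is what made Weizenbaum so uneasy when people poured their hearts out to it.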
But Weizenbaum saw that people were pouring their hearts out to it, even knowing that it was a machine. And he turned on it. He was like, “This is terrible.” He said, “When a machine says, ‘I understand,’ the machine is telling a lie. There is no ‘I’ there. There is nothing that understands anything.”
Is your comment about personification a neutral one? To say, “I am observing this,” or are you saying personification is a bad thing or a good thing? If you notice, Alexa got a name, Siri got a name, Cortana got a name, but Google Assistant didn’t get a name.
Start there—what are your thoughts on personification in terms of good, bad, or we don’t know yet?
In the way I was just talking about it, personification, I do think it is a bad thing, and I do see it happening. In the way you just talked about it, it becomes a design tool. And as a design tool, it’s very useful. I name all my cars, but that’s the end of the personification.
You were using it to say they actually impute human characteristics on these beyond just the name?
Yes, when someone is fighting for the human rights or the labor rights of an industrial machine, they have put a deep personification on that machine. They’re feeling empathy for it, and they’re feeling it should be defended. They’re seeing it as another human or as an animal; they’re not seeing it as an industrial machine. That’s weird, and dangerous.
But, you as a designer, think, “Oh, no, it’s good to name Alexa, but I don’t want people to start thinking of Alexa as a thing.”
But you’re a part of that then, right?
Yeah, we are.
You’re naming it and putting a face on it.
You’ve circled right back to what I said—Skynet is going to come from product design and marketing.
From you.
Well, I did not name Alexa.
And just for the record, we’re not impugning Alexa here.
Yeah, we are not. I love Alexa. I have it, and like I said, I tell her every morning.
But, personification is this design tool, and how far is it fair for us to lean into it to make it convenient? In the same way that people name their favorite outfit, or their cars, or give their house a name—just as a convenience in their own mind—versus actually believing this thing is human and feeling empathy for it.
When I call out to Alexa in the morning, I don’t feel empathy for Alexa. I do wonder if my six-year-old son feels empathy for Alexa, and if by having that stuff in the homes—
—Do you know the story about the Japanese kids in the mall and the robot?
There was this robot that was put in this Japanese mall. They were basically just trying to figure out how to make sure that the robot can get around people. The robot was programmed to ask politely for you to step aside, and if you didn’t, it would go around you.
And some kids started stepping in front of it when it tried to go around them. And then, they started bullying it, calling it names, hitting it with things. The programmers had to re-circle and say, “We need to rewrite the program so that, if there are small people, kids, and there’s more than a few, and there’s not big people around; we’ve got to program the robot to run away towards an adult.” And so, they do this.
Now, you might say, “Well, that’s just kids being kids.” But here’s the interesting thing: when they later asked those kids, “Did you feel that the robot was human-like or machine-like?” eighty percent said it was human-like. And then, they asked, “Do you feel like you caused it distress?” Seventy-five percent of them said yes. So these kids were willing to do that even though they regarded it as human-like and capable of feeling emotion.
They treated it like another kid.
Right. So, what do you read in the tea leaves of that story?
Well, more of the same, I’m afraid, in that we’re raising a generation—funny enough, Japan really did start this—where there needs to be familiarity with robotics. And it’s hard to separate robotics and AI, by the way. Robotics seems like the corpus of AI, and so much of what the public’s imagination places on AI is really robotics, and has nothing to do with AI.
That is a fascinating thing to break apart, and they are starting to converge now. But back when they were doing that research, there was also the research like Wendy Ju does with the trash can going around the public square: it’s just a trash can on wheels, but it actually evokes very emotional responses from people. People personify it almost immediately, even though it’s a trash can. One of the things the kids do in this case is try to attract it with trash, saying, “Come over here, come over here,” because they view it as this dog that eats trash, and they think they can play with it. Empathy arrives as well. Altruism arrives. There’s a great scene where the trash can falls over, and a whole bunch of people go, “Aww…” and run over and pick it up.
We’ve got to find a way to reset our natural tendencies. Technology has been our servant all this time, and a dumb servant. And although we’re aware of it having positive and negative consequences, we’ve always thought of it as improving our experience, and we may need to adjust our thinking. Social media might be doing that with the younger generations, because they are now seeing the great social harm that can come, and it’s like: do they put that on each other, or do they put it on the platform?
But, I think some people who are very smart are painting with these broad brushes, and they’re talking about the one-hundred-year danger or the danger five years out, just because they’re struggling with how we change the way we think about technology as a companion. Because it’s getting cheaper, it’s getting more capable, and it’s invading the area of intelligence.
I remember reading about a film—I think this was in the ’40s or ’50s—and they just showed these college kids circles that would bounce or roll around together, or a line would come in. And they said, “What’s going on in these?” And they would personify those, they’d say, “Oh, that circle and that circle like each other.”
It’s like, if we have a tendency to do that to a circle in a film, you can only imagine that, when these robots can read your face, read your emotions—and I’m not even talking about a general intelligence—I mean something that, you know, is robotic and can read your face and it can laugh at your jokes and what not. It’s hard to see how people will be able to keep their emotions from being wrapped up in it.
Yeah, and not be tempted to explore those areas and put them into the body of capability and intelligence.
I was just reading two days ago—and I’m so bad at attribution—but a clever researcher, I think at MIT, created this program for scanning people’s social profiles and looking at their profile photos; and after enough learning, building their little neural net, it would just look at a photograph and guess the person’s sexual preference, and they nailed it pretty well.
I’m like, “Great, we’re teaching AI to be as shallow and presumptive as other humans, who would just make a snap judgment based on what you look like, and maybe it’s even better than us at doing it.”
I really think we need to develop machine ethics separately from human ethics, and not teach the machine human ethics, even if that’s a feature on the other side. And that’s more important than privacy.
Slow that down a second. When you do develop a difference between human ethics and machine ethics, I understand that; and then, don’t teach the machine human ethics. What does that mean?
We don’t need more capable, faster human ethics out there. It could be quite damaging.
How did you see that coming about?
Like I said, it comes about through, “I’m going to create a recommendation engine.”
No, I’m sorry—the solution coming about.
Separating machine and human ethics.
We have this jokey thought experiment called “Death by 4.7 Stars,” where you assume that there is a Skynet that has come to intelligence, and it has invaded the recommendation engines. And when you ask it, “What should I have for lunch?”, it suggests you have a big fatty hamburger, a pack of Lucky Strikes, and a big can of caffeinated soda. And so you die of a heart attack young.
Just by handing out this horrible advice, and you trusting it implicitly, and it not caring that it’s lying to you, you just extinguish all of humanity. And then Skynet is sitting there going, “That was easy. I thought we were going to have a war between humans and machines and have to build the Matrix. Well, we didn’t have to do that.” Then, one of the AIs will be like, “Well, we did have to tell that lady to turn left on her GPS into a quarry.” And then, the AI is like, “Well, technically, that wasn’t that hard. This was a very easy war.”
So, that’s why we need to figure out a way to put a machine ethic in there. I know it seems old-fashioned, but I’m a big fan of Isaac Asimov; I think he did some really good work here. And there are other groups now advancing that work and asking, “How can we put a structure in place where we don’t just give these robots a code of ethics?”
And then, the way you actually build these systems is important, too. AI should always come to the right conclusion. You should not then tell it, “No, come to this conclusion.” You should just screen out conclusions. You should just put a control layer in that filters out the conclusions you don’t want for your business purposes, but don’t build a feedback loop back into the machine that says, “Hey, I need you to think like my business,” because your business might need a certain amount of misdirection and non-truths to it.
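That separation, letting the model conclude freely and screening its outputs downstream, can be sketched in a few lines; the items, scores, and function names below are illustrative, not any particular product:

```python
# Sketch of a post-hoc "control layer": business rules filter the model's
# conclusions after the fact, with no feedback loop into the model itself.
# All items and scores are made up for illustration.

def model_conclusions():
    # Stand-in for an AI's unconstrained ranked output: (item, score).
    return [("burger", 0.91), ("cigarettes", 0.87), ("salad", 0.55)]

def control_layer(conclusions, disallowed):
    # Screen out conclusions the business won't serve; the model's own
    # reasoning is never adjusted to match.
    return [(item, s) for item, s in conclusions if item not in disallowed]

served = control_layer(model_conclusions(), disallowed={"cigarettes"})
print(served)  # prints [('burger', 0.91), ('salad', 0.55)]
```

The design choice is that the filter lives outside the model, so business constraints never become training signal.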
And you don’t, maybe, understand the consequences, because there’s a certain human filter between that stuff—what we call “white lies” and such—that allows us to work. Whereas, if you amplify it across a million circuits, and probabilities that cascade down hundreds of thousands of links, you don’t really know what race condition that small amount of mistruth is going to produce.
And then, good governance and controls that say that little adjusted algorithm, which is very hard to ferret out—almost like the scene from Tron where they’re picking out the little golden strands—doesn’t move into other things.
This is the kind of carefulness that we need to put into it as we deploy it, if we’re going to be careful as these magic features come along. And we want the features. There’s a whole digital lifestyle predicated on the ability for AI to establish context, that’s going to be really luxurious and awesome; and that’s one reason why I even approach things like the singularity, or “only you can prevent Skynet,” or even get preachy about it at all—because I want this stuff.
I just got back from Burning Man, and you know, Kathryn Myronuk says it’s a “dress rehearsal for a post-scarcity society.” What’s going to give us post-scarcity is artificial intelligence—in large part, the ability to stand up enough machines to supply our needs, wants, and desires, and to sweep away the lower levels of Maslow’s hierarchy of needs. And then we can live in a much more awesome society.
Even before that, there’s just a whole bunch of cool features coming down the pipeline. So, I think that’s why it’s important to have this discussion now, so we can set it up in a way that it continues to be productive, trustful, and it doesn’t put the entire species in danger somehow, if we’re to believe Stephen Hawking or Elon Musk.
Another area that people are concerned about, obviously, are jobs—automation of jobs. There are three narratives, just to set them up for the listener:
The first is that AI is going to take a certain class of jobs that are “low-skill” jobs, and that the people who have those jobs will be unemployed, and there’ll be ever more of them competing for ever fewer low-skill jobs, and we’ll have a permanent Great Depression.
There’s a second narrative that says, “Oh, no, you don’t understand: it’s going to take everybody’s job—your job, my job, the president’s job, the speechwriter’s job, the artist’s job, everybody’s—because once the machines can learn something new faster than we can, it’s game over.”
And then, there’s a third narrative that says both of these are wrong. Every time we have a new technology, no matter how disruptive it is to human activity—like electricity or engines or anything like that—people just take that technology and they use it to magnify their own productivity. And they raise their wages and everybody uses the technology to become more productive, and that’s the story of the last two hundred and fifty years.
Which of those three scenarios, or a fourth one, do you identify with?
A fourth one, where the burden of productivity being the guide of work is released, or lessened, or slackened. And then, the people whose jobs are at the most danger are the people who hate their jobs. Their jobs are at the most danger. Those are the ones that AI is going to take over first and fastest.
Why is that not my first setup, which is there are some jobs that it’s going to take over, putting those people out of work?
Because there will be one guy who really loves driving people around in his car and is very passionate about it, and he’ll still drive his car and we’ll still get into it. We’ll call it “the human car.” He won’t be forced out of his job, because he likes it. But the other hundred guys who hated driving a car for a living, their jobs will be gone, because they weren’t passionate enough to protect them, find a new way to do them, or enjoy doing them anymore. That’s the slight difference, I think, between what I said and what you said.
You say those hundred people won’t use the technology to find new employment?
I think an entire economy of a different kind of employment that works around passion will ultimately evolve. I’m not going to put a timescale on this, but let’s take the example of “ecopoesis,” which I’m a big fan of, which comes out of Kim Stanley Robinson’s Mars books—probably before that, but that was one of the first times I encountered it.
Ecopoesis is a combination of “ecology” and “poesis”: ecopoesis. If you practice it, you’re an ecopoet. This is how it would work in the real world, right? We would take Bill Gates’s proposal, and we would tax robots. Then we would take that money, place an ad on Craigslist, and say, “I need approximately sixty thousand people whom I can pay $60,000 a year to go into the Lincoln National Forest, and we want you to garden the thing. We want you to remove the right amount of deadfall. We want you to remove invasive species. We want you to create glades. We want the elk to reproduce. We want you to do this on the millions of hectares that is the Lincoln National Forest. In the end, we want it to look like Muir Woods. We want it to be the most gorgeous piece of garden property possible.”
How many people who are driving cars today or working as landscapers wouldn’t just look at that Craigslist ad and immediately apply for the opportunity to spend the next twenty years of their life gardening this one piece of forest, or this one piece of land, because they’re following their passion into it and all of society benefits from it, right? That’s just one example of what I mean.
I think you can begin a thought experiment where you can see whole new categories of jobs crop up, but also people who are so passionate in what they’re doing now that they simply don’t let the AI do it.
I was on a cooking show once—I live a weird life—and while we were on it we were talking about robots taking jobs, just like you and I are. We were talking about what jobs will robots take. Robots could take the job of a chef. The sous chef walks out of the back and he says, “No, it won’t.” We’re like, “Oh, you’re with nerds discussing this. What do you mean, ‘No, it won’t’?” He’s like, “Because I’ll put a knife in its head, and I will keep cooking.”
That’s a guy who’s passionate about his job. He’s going to defend it against the robots and AI. People will follow that passion and see value in it and pursue it.
I think there’s a fourth one that’s somewhere between one and three, that is what comes out of this. Not that there won’t be short-term disruption or pain but, ultimately, I think what will happen is humanity will self-actualize here, and people will find jobs they want to do.
Just to break it down a bit more, that sounds like the WPA during the Depression.
It says, “Let’s have people paint murals, build bridges, plant saplings.”
There was a lot of that that went on, yeah.
And so, you advocate for that?
I think that is a great bridge for that in-between point, between now and post-singularity—or an abundance society, post-scarcity. Even before that, in the very near term, a lot of jobs are going to be created by the deployment of AI. It actually takes a whole lot of work to deploy, and it doesn’t necessarily reverberate into removing a bunch of jobs. Often, it adds a very minute amount of productivity to a job, and it has an amplifying effect.
The industry of QA is going to explode. Radiologists, their jobs are not going to be stolen; they’re going to be shifted to the activity of QA to make sure that this stuff is identifying correctly in the short term. Over the next twenty to fifty years, there’s going to be a whole lot of that going on. And then, there’s going to be just a whole lot of robotics fleet maintenance and such, that’s going to be going on. And some people are going to enjoy doing this work and they’ll gravitate to it.
And then, we’re going to go through this transition where, ultimately, when the robots start taking care of something really lower-level, people are going to follow their passions into higher-level, more interesting work.
You would pay for this by taxing the robots?
Well, that was Bill Gates’s idea, and I think there’s a point in history where that will function. But ultimately, the optimistic concept is that this revolution will bring about so much abundance that the way an economy works itself will change quite a bit. Thus, you pay for it out of just doing it.
If we get to the point where I can stick out my hand, and a drone drops a hammer when I need a hammer to build something, how do you pay for that transaction? If that’s backed by a tokamak reactor—we’ve created fusion, and energy is effectively free—how do you pay for that? It’s such a minuscule thing that there just might not be a way to pay for it; paying for things will just completely change altogether.
You are a designer.
I’m a product designer, yes. That’s what I do by trade.
So, how do you take all of that? And how does that affect your job today, or tomorrow, or what you’re doing now? What are the kinds of projects you’re doing now that you have to apply all of this to?
This is how young it actually is: I am currently just involved in what the tooling looks like to actually deploy this at any kind of scale. And when I say “deploy,” I don’t mean sentience or anything close to it; just something that can identify typos better than the current spellcheck system, or identify typos in a very narrow sphere of jargon that only certain people know. Those are the problems being worked on right now. We’re scraping pennies outside of dollars, and it just needs a whole lot of tooling right now.
And so, the way I get to apply this, quite fundamentally, is to help influence what are the controls, governance, and transparency going to look like, at least in the narrow sphere where I’m working with people. After that, it’s all futurism, my friend.
But, on a day-to-day basis at argo, where do you see designing for this AI world? Is it all just down to the tooling area?
No, that’s just one that’s very tactical. We are actually doing that, and so it’s absorbing a lot of my day.
We have had a few clients come in and ask, “How do I integrate AI?” And you find out it’s a very ticklish problem: “Is your business model ready for it? Is your data stream ready for it? Do you have the costing ability to put it all together?” It’s very easy to sit back and imagine the possibilities. But when you get down to the brass tacks of integration and implementation, you start realizing it needs more people working on it.
Other than putting out visions that might influence the future, and perhaps enter into the zeitgeist our opinion on how this could transpire, we’re really down in the weeds on it, to be honest.
In terms of far out, you’ve referred to the singularity a number of times, do you believe in Kurzweil’s vision of the singularity?
I actually have something that I call “the other singularity.” It’s not as antagonistic as it sounds; it’s meant like the other cousin, right? While the singularity is happening—his grand vision, which is very lofty—there’s this other singularity going on, this one made of the cast-offs of the exponential technology curve. So, as computational power gets less expensive, yesterday’s computer—the quad-core computer that I first had for $3,000—is now like a $40 gum stick, and pretty soon it’s going to be a forty-cent MCU, a computer on a chip. At that point, you can apply computational power to really mundane and ordinary things. We’re seeing that happen at a huge pace.
There’s something I like to call the “single-function computer,” and the new sub-$1,000 computer. In the ’90s, when computers were out there—they had been out there for, really, forty or fifty years before mass adoption hit—it was said, from a marketing perspective, that until the price of a multifunction computer came below $1,000, they wouldn’t reach adoption. As soon as it did, they spread widely.
We still buy these sub-$1,000 computers. Some of us pay slightly more in order to get an Apple logo on the front and such, but the next sub-$1,000 question is how to get a hundred computers into the home for under $1,000, and that’s being worked on now.
What they’re going to do is take the function of these single-function computers, which have a massive amount of computational power, and dedicate them to one thing. The Nest would be my first example, the one people are most familiar with. It has the same processing power as the original PowerBook G4 laptop, and all that processing power is just put to algorithmically keeping your home comfortable, in a very exquisite out-of-the-box experience.
We’re seeing more and more of these experiences erupt. But they’re not happening along this elegant, singularity, intelligence-fed path. They just do what they do procedurally, or with a small amount of intelligence, and they do it extremely well. It’s this big messy mess, and it’s entirely possible that we reach a form of the singularity without sentient artificial intelligence guiding it.
An author that I really love who works in this space a lot is Cory Doctorow. He has a lot of books that propose this vision where machines are somehow taking care of the lower levels of Maslow’s hierarchy of needs, and creating a post-scarcity society, but they are not artificial intelligence. They have no sentience. They’re just very, very capable at what they do, and there’s a profusion of them doing a lot of things.
That’s the other singularity, and that’s quite possibly how it may happen; especially if we decide that sentience is so dangerous that we don’t need it. But I find it really encouraging and optimistic that there is this path to the future that doesn’t quite require it, but could still give us a lot of what we see in these singularity-type visions of the future—the kind of abundance, and the ability not to be toiling each day for survival. I love that.
I think Kurzweil thinks that the singularity comes about because of emergence.
Because, at some point, you just bolt enough of this stuff together and it starts glowing with some emergent behavior, rather than it being a conscious decision where we say, “Let’s build it.”
Yeah, the exponential technology curve predicts the point at which a computer can perform the same number of computations as we have neurons, right? At which point, I agree with you, it kind of implies that sentience will just burst forth.
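The arithmetic behind that curve can be sketched quickly; every constant below is an assumed ballpark figure (real brain-throughput estimates vary by orders of magnitude), not a number from the conversation:

```python
# Back-of-envelope for "compute parity with the brain" under assumed,
# ballpark constants; real estimates vary by orders of magnitude.
import math

brain_ops = 1e16      # assumed brain "operations" per second
machine_ops = 1e12    # assumed current machine ops per second
doubling_years = 1.5  # assumed doubling period for compute per dollar

doublings = math.log2(brain_ops / machine_ops)  # doublings still needed
years = doublings * doubling_years
print(f"~{years:.0f} years to parity")  # prints ~20 years to parity
```

The exercise only shows why exponential doubling makes parity feel near under almost any starting assumptions; it says nothing about whether sentience follows.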
Well, that’s what he says.
That’s the question, isn’t it?
I don’t think it happens that way.
What do you think happens?
I don’t think sentience just bursts forth at that moment.
First of all, taking a step back, in what sense are you using the word “sentience”? Strictly speaking, it means “able to sense something, able to feel”—that’s it. Then, there’s “sapience,” which is intelligent. That’s what we are, homo sapiens. Then, there’s “consciousness,” which is the ability to have subjective experience—that tea you just drank tasted like something, and you tasted it.
In what sense are you thinking of computers—not necessarily having to be that?
Closer to the latter. It’s something that is aware of itself and begins guiding its own priorities.
You think we are that. We have that, humans.
Where do you think it comes from? Do you think it’s an emergent property of our brains? Is it something we don’t know? Do you have an opinion on that?
I mean, I’m a spiritualist, so I think it derives from the resonance of the universe that was placed there for a reason.
In that view of the world, you can’t manufacture that, in other words. It can’t come out of a factory someplace.
To be metaphysical, yes. Like Orson Scott Card: will the philotes plug into the machine, and suddenly it wakes up and has the same cognitive powers as a human? Yeah, I don’t know.
What you do, which is very interesting, is you say, “What if that assumption—that one assumption—that someday the machine kind of opens its eyes; what if that one assumption isn’t true?” Then what does the world look like, of ever-better computers that just do their thing, and don’t have an ulterior motive?
Yeah, and the truth is they could also happen in parallel. Both could be happening at the same time, as they are today, and still progress. But I think it’s really fascinating. I think some people guard themselves. They say, “If this doesn’t happen, there’s nothing smart enough to make all the decisions to improve humanity, and we’re still going to have to toil away and make them.” And I say, “No, it might be entirely possible that there’s this path where just these little machines, in profusion, do it for us, and sentience is not necessary.”
It also opens up the possibility that, if sentience does just pop into existence right now, it makes very fair the debate that you could just turn it off, that you could commit the genocide of the machine and say, “We don’t want you or need you. We’re going to take this other path.”
We Skynet them.
We Skynet them, and we keep our autonomy and we don’t worry about the perils. I think part of the fear about this kind of awareness—we’ve been calling it sentience—kind of theory on AI, is this fear that we just become dependent on them, and subservient to them, and that’s the only path. But I don’t think it is.
I think there’s another path where technology takes us to a place of great capability so profound that it even could remove the base layer of Maslow’s hierarchy of needs. I think of books like Makers by Cory Doctorow and others that are forty years in the future, and you start thinking of micro-manufacturing.
We just put up this vision on Amazon and Whole Foods, which was another nod toward this way of thinking. Ignoring the energy source a little bit—because we think it’s going to sort itself out; everyone has solar, or a tokamak—if you can get these hydroponic gardens into everyone’s garage, produce is just going to be so universally available. It goes back to being the cheapest of staples. Robots could reduce spoilage by matching demand, and this would be a great place for AI to live.
AI is really good at examining this notion of like, “I think you’re going to use those Brussels sprouts, or I think your neighbor is going to use them first.” We envision this fridge that has a door on the outside, which really solves a lot of delivery problems. You don’t need those goofy cardboard boxes with foil and ice in them anymore. You just put it in the fridge. It also can move the point of purchase all the way into the home.
When you combine that with the notion of this dumber AI just sitting there, deciding whether you or the neighbor needs Brussels sprouts, it can put the Brussels sprouts there opportunistically, thinking, “Maybe he’ll get healthy this week.” When I don’t take them before they spoil, it can move them over to the neighbor’s fridge, where they’ll use them. You root so much spoilage out of the system that nutrition just rises and becomes more ubiquitous.
Now, if people wanted to harvest those goods or tend those gardens, they could. But, if people didn’t, robots could make up the gap. Next thing you know, you have a food system that’s decoupled from the modern manufacturing system, and is scalable and can grow with humanity in a very fascinating way.
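The reallocation logic in that fridge scenario could be sketched like this; it is a toy model, and the usage probabilities, threshold, and household names are all invented:

```python
# Toy sketch of fridge-to-fridge reallocation of perishables: move an
# item to whichever household is most likely to use it before it spoils.
# Probabilities, the threshold, and household names are all invented.

def reallocate(item, households, days_left, threshold=0.5):
    """Return the household most likely to use `item`, or None."""
    best = max(households, key=lambda h: h["p_use"])
    # Only move the item when spoilage is close and the best candidate
    # is actually likely to use it.
    if days_left <= 2 and best["p_use"] >= threshold:
        return best["name"]
    return None

homes = [{"name": "you", "p_use": 0.2}, {"name": "neighbor", "p_use": 0.8}]
print(reallocate("brussels_sprouts", homes, days_left=1))  # prints neighbor
```

Even this crude greedy rule captures the idea: spoilage falls because the food moves to wherever the predicted demand is.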
Do you think we’re already dependent on the machine? Like, if an EMP wave just fried all of our electronics, a sizeable part of the population dies?
I think that’s very likely. Ignoring all the disaster and such right then, it would take a whole lot of… I don’t necessarily think that’s purely a technological judgment. It’s just the slowness of humanity to change its priorities. In other words, we would realize too late that we all needed to rededicate our resources to a certain kind of agriculture, for instance, before the echo moved through the machine. That would be my fear on it—that we all ingrain our habits and we’re too slow to change them.
Way to kill off humanity three times in this podcast!
That’s right.
Does that happen in most of these that you are doing?
Oh, great! It’s just my dark view.
It’s really hard to kill us off, isn’t it?
Because, if it was going to happen, it seems like it would have happened before, when we had no technology. You know, there were just three million of us five thousand years ago. By some counts, thousands of us at one time, with woolly mammoths running around.
But back then, ninety-nine percent of our technology was dedicated to survival, and it’s a way lower percentage now. In fact, we’ve invented a percentage of technology that is dedicated to our destruction. So I don’t know how much the odds have changed. I think it’s a really fascinating discussion—probably something AI can determine for us.
Well, I don’t know the percentage. It would be the gross amount, right?
Because you could say the percentage of money we’re spending on food is way down, but that doesn’t mean we’re eating less. The percentage of money we’re spending on survival may be way down, but that doesn’t mean we’re spending less.
In a really real-world kind of way, there’s a European initiative that says: When an AI makes a decision that affects you, you have a right to know why it made that decision. What do you think of that? I won’t impute anything. What do you think of that?
Yeah, I think Europe is ahead of us here. The funny thing is a lot of that decision was reported as rights for AI, or rights for robots. But when you really dig into it, it’s rights for humans. And they’re good rights.
If I were to show you designs out of my presentations right now, I have this big design that’s… You’re just searching for a car and it says, “Can I use your data to recommend a car?” and you click on that button and say yes. That’s the way it should be designed. We have taken so many liberties with people’s data and privacy up until now, and we need to start including them in on the decision.
And then, at the bottom of it, it has a slider that says, “The car you want, the car your wife wants.” You should also have transparency and control of the process, right? Because machine learning and artificial intelligence produces results with this kind of context, and you should be allowed to change the context.
First of all, it’s going to make for a better experience because, if it’s looking at all my data historically and recommending the kind of sleeping bag I should buy, it might need to be aware—and I might have to make it aware—that I’m moving to Alaska next week, because it would make a different recommendation. This kind of transparency in governance, actually… And I also think they put in another curious thing—and we’ll see how it plays out through the courts—but I believe they also said that, if you get hurt by it—this was the robotics side—the person who made the robot is responsible for it.
Some human along the way made a decision that hurt you is the thesis.
Yes, or the business corpus that put this robot out there is responsible for it. It’s the closest thing to the three laws of robotics or something put into law that we’ve seen yet. It’s very advanced thinking, and I like it; and it’s already in our design practice.
We’re already trying to convince clients that this is the way to begin designing experiences. More than that, we’re trying to convince our fellow designers, because we have a certain role in this, that we can utilize to design the experiences so that they are open and transparent to the person using them—that little LED green light says, “AI is involved in this decision,” so you might judge that differently.
But where does that end? Or does that inherently limit the advancement of the technology? Because you could say, “I rank number two in Google for some search—some business-related search—and somebody else ranks number one.” I could go to Google and say, “Why do I rank number two and they rank number one?” Google could, in all fairness, say, “We don’t know.”
Yeah, that’s a problem.
And so, do you say, “No, you have to know. You’ve got to limit the technology until you can answer that question,” or do you just say, “We don’t know how people make decisions.” You can’t ask the girl why she didn’t go out with you. “Why aren’t you going out with me?” That affects me. It’s like, “I’m just not going to.”
You’ve framed the consumer’s dilemma in everything from organic apples to search results, and it’s going to be a push-and-pull.
But I would say, yeah, if you’re using artificial intelligence, you should know a little bit about how it’s being produced, and I think there’ll be a market for it. There’s going to be a value judgment on the other side. I really think that some of the ways we’re looking at designing experiences, it’s much more valuable to the user to see a lot of these things and know it—to be able to adjust the rankings based on the context that they’re in, and they’re going to prefer that experience.
I think, eventually, it’ll all catch up in the end.
One last story: I used to sell snowboards… So much of this is used for commerce, and retail is an easy example for us to understand. I used to sell snowboards, and I got really good at it. My intelligence on it got really focused, and I had a pretty good hit rate. Someone could walk in the door, and if I wrote down which snowboard they were going to buy, I was probably right eighty-five to ninety percent of the time. I got really good at it. By the end of the season, you just know.
But, if I walked up to any of those people and said, “Here’s your snowboard,” I would never make a sale. I would never make a sale. It creeps them out, they walk away, the deal is not closed. There’s a certain amount of window dressing, song and dance, gathering of information to make someone comfortable before they will make that decision to accept the value.
Up until now, technology has been very prescriptive. You write the code, it does what the code says. But that’s going to change, because the probabilities and the context-gathering come into it. To be successful, there is still going to have to be that path, and it’s the perfect place to put in what we were just talking about—the transparency, the governance, and the guidance to the consumer to let them know that they’re in on that type of experience. Why? You’re going to sell more snowboards if you do.
In your view of a world where we don’t have this kind of conscious AGI, we’re one notch below that, will those machines still pass the Turing test? Will you still be able to converse with them and not know that it’s a computer you’re talking to?
I think it’ll get darn close, if not all the way there. I don’t think you could converse with them as much as people imagine, though.
Fair enough. I’m going to ask you a privacy question. Right now, privacy is largely protected by just the sheer amount of data. Nothing can listen to every phone conversation. Nothing can do that. But, once a machine can listen to them all, then it can.
Then… We can hear them all right now, but we can’t listen to them all.
Correct. And I read that you can now get human-level lip-reading from cameras, and you get facial recognition.
And so you could understand that, eventually, that’s just a giant data mining problem. And it isn’t even a nefarious one, because it’s the same technology that recommends what you should buy someplace.
Tell me what you think about privacy in a world where all of that information is recorded and, I’m going to use “understood” loosely, but able to be queried.
Yeah, this is the, “I don’t want a machine knowing what I had for lunch,” question. The machine doesn’t care; people care. What we have to do is work to develop a society where privacy is a virtue, not a right. When privacy is a right, you have to maintain it through security. The security is just too fallible, especially given the modern era.
Now, there’ll always be that certain kind of thing, but privacy-as-a-virtue is different. If you could structure society where privacy is a virtue, well, then it’s okay that I know what you had for lunch. It’s virtuous for me to pretend like I don’t know what you had for lunch, to not act on what I know you had for lunch, and not allow it to influence my behavior.
It sounds almost Victorian, and I think there is a reason that, in the cyberpunk movement in science fiction, you see this steampunk kind of Victorian return. In the Victorian era, we had a lot of etiquette based on just the size of society. The new movement of information meant that you knew a lot about people’s business that you wouldn’t have known before. And the way we dealt with it was this kind of really pent-up morality, where it was virtuous to pretend like you didn’t know—almost to make a game of it—and not allow it to influence your decision-making. Only priests do this anymore.
But we’re all going to have to pick up the skill and train our children, and I think they’re training themselves to do it, frankly, right now, because of the impacts of social media on their lives. We might return to this second Victorian era, where I know everything about you but it’s virtuous.
Now, that needs to bleed into the software and the hardware architectures as well. Hard drives need to forget. Code algorithms need to forget, or they need to decide what information they treat as virtuous. This way, we can have our cake and eat it, too. Otherwise, we’re just going to be in this weird security battle forever, and it’s not going to function. The only people who are going to win in that one are the government. We’re just going to have to take it back in this manner.
Now, you can just see how much optimism bleeds through me when I say it this way, and I’m not totally incognizant of my optimism here, but I really think that’s the key to this. Any time we’re offered a feature, we just give up our privacy for it. And so, we may as well start designing the world that can operate with less privacy-as-a-right.
It’s funny, because I always hear this canard that young people don’t care about privacy, but that’s not my experience. I have four kids. My oldest son always comes in and says, “How can you use that? It’s listening to everything you’re doing.” Or, “How do you have these settings on your computer the way you do?” I’m like, “Yeah, yeah, well…” But you say, not only do they value it more, but they’re learning etiquette around it as well.
Yeah, they’re redefining it.
They see what their friends did last night on social media, but they’re not going to mention it when they see them.
That’s right, and they’re going to monitor their own behavior. They just have to in order to function socially. We as creatures need this. I think we grew up in a unique place. It’s goofy, but I lived in 1867. You had very little privacy in 1867.
That’s right. You did that PBS thing.
Yeah, I did that PBS thing, that living history experiment. Even though it was only fourteen people, the impact of a secret or something slipping out could be massive, and everyone felt that impact. There was an anonymity that came from the Industrial Revolution that we, as Gen Xers, probably enjoy the zenith of, and we’ve watched social media pull it back apart.
But I don’t think it’s a new thing to humanity, and I think ancestral memory will come back, and I think we will survive it just fine.
In forty-something guests, you’ve referred to science fiction way more than even the science fiction writers I’ve had on the show.
I’m a fanboy.
Tell me what you think is really thoughtful. I think Frank Herbert said, “Sometimes, the purpose of science fiction is to keep the future from happening.”
Tell me some examples. I’m going to put you on the spot here.
I just heard that from Cory Doctorow two weeks ago, that same thing.
Really? I heard it because I used to really be annoyed by dystopian movies, because I don’t believe in them, and yet I’m required to see them because everybody asks me about them. “Oh, my gosh, did you see Elysium?” and I’m like, “Yes, I saw Elysium.” And so, I have to go see these and they used to really annoy me.
And then, I saw that quote a couple of years ago and it really changed me, because now I can go to them and say, “Ah, that’s not going to happen.”
Anyway, two questions: Are there any futures that you have seen in science fiction that you think will happen? Like, when you look at it, you say, “That looks likely to me,” because it sounds like you’re a Gene Roddenberry futurist.
I’m more of a Cory Doctorow futurist.
And then, are there ones you have seen that you think could happen, but you don’t think it’s going to happen, but it could?
I’m still on the first question. In my recent readings, the whole Kim Stanley Robinson and Cory Doctorow works are very good.
Now, let’s talk about Iain M. Banks and the whole Culture series, which is so far-future, so grand in scale, and so driven by AI that knows it’s superior to humans but is fascinated with them. Therefore, it doesn’t want to destroy them but rather to attach itself to their society. I don’t think that is going to happen, but it could happen. It’s really fascinating.
It’s one of those bigger-than-the-galaxy type universes where you have megaships that are mega-AIs, and can do the calculations of a trillion humans in one second, and they keep humans around for two reasons. This is how they think about it: one, they like them, they’re fascinating and curious; and two, there are thirteen of them who, by sheer chance, are always right. Therefore, they need a certain density of humanity just so they can consult those thirteen when they can’t come up with an answer of enough certainty.
So, there are thirteen humans that are always right.
Yeah, because there are so many trillions and trillions of them. And the frustrating thing to these AI ships is that they can’t figure out why those thirteen are always right, and no one has decided which theory is correct. The predominant theory is that they’re just making random decisions, and because there are so many humans, these thirteen’s random decisions happen to always be correct. We get a little profile of one of the thirteen humans themselves, and she’s rather depressed, because we can’t be fatalists as a species.
Jared, that is a wonderful place to leave this. I want to thank you for a fascinating hour. We have covered, I think, more ground than any other talk I’ve had, and I thank you for your time!
Thank you! It was fun!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Activision’s Viral Campaign for Singularity Starts with a Bang

A video of a supposed assassination attempt in Russia pops up on YouTube and clocks over 400,000 views and 1,500+ comments in under a week. The info directs viewers to MIR-12, a shadowy organization bent on uncovering a deeply rooted Russian conspiracy they claim began in the 1950s. Turns out Natasha Norvikov, the fallen would-be assassin, was a member of MIR-12, and the site’s blog promises us that her death will not be in vain; the terrifying truth of the conspiracy will be exposed. Soon, a new video appears on the MIR-12 site, replete with stories of mysterious deaths and a secret Russian island with unstable radiation levels and the ability to disappear completely.

Either Russia really is running nefarious energy experiments and flirting dangerously with the space-time continuum, and the only people capable of uncovering it are a covert group of operatives who like to Twitter, or… something viral is afoot. And Netizens are picking up the scent.

In fact, as blog dosdotzero uncovered via some pretty nifty detective work, this is all a campaign for Activision’s new first-person shooter game Singularity, steered by ad agency DDB and video-seeding maestros Feed Company. The intrepid forum posters at Unfiction have uncovered even more content, including Flickr and Facebook accounts for Natasha. They also found another site, named Katorga 12 after the creepy island in question (and the in-game, tell-all book of the same name, whose author was killed under suspicious circumstances), which turns out to be the home of the Singularity trailer. Ah, yes, it’s all coming together.