Voices in AI – Episode 18: A Conversation with Roman Yampolskiy

[voices_in_ai_byline]
In this episode Byron and Roman discuss the future of jobs, Roman’s new field of study, “Intellectology”, consciousness and more.
[podcast_player name="Episode 18: A Conversation with Roman Yampolskiy" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-11-20-(00-45-56)-roman-yampolsky.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom, and I’m Byron Reese. Today, our guest is Roman Yampolskiy, a Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab, and an author of many books, including Artificial Superintelligence: A Futuristic Approach.
His main areas of interest are AI safety and cyber security. He is the author of over one hundred publications, and his research has been cited by over a thousand scientists around the world.
Welcome to the show.
Roman Yampolskiy: Thank you so much. Thanks for inviting me.
Let’s just jump right in. You’re in the camp that we have to be cautious with artificial intelligence, because it could actually be a threat to our survival. Can you just start right there? What’s your thinking? How do you see the world?
It’s not very different than any other technology. Any advanced technology can be used for good or for evil purposes. The main difference with AI is that it’s not just a tool, it’s actually an independent agent, making its own decisions. So, if you look at the safety situation with other independent agents—take for example, animals—we’re not very good at making sure that there are no accidents with pit bulls, for example.
We have some approaches to doing that. We can put them on a leash, put them in a cage, but at the end of the day, if the animal decides to attack, it decides to attack. The situation is very similar with advanced AI. We try to make it safe, beneficial, but since we don’t control every aspect of its decision-making, it could decide to harm us in multiple ways.
The way you describe it, you’re using language that implies that the AI has volition, it has intentionality, it has wants. Are you suggesting this intelligence is going to be conscious and self-aware?
Consciousness and self-awareness are meaningless concepts in science. They are not things we can detect or measure, so let’s not talk about them. I’m saying specific threats will come from the following: one is mistakes in design. Just like with any software, you have computer bugs; you have values misaligned with human values. Two is purposeful design of malevolent AI. There are people who want to hurt others—hackers, doomsday cults, crazies. They will, on purpose, design intelligent systems to destroy, to kill. The military is a great example; they fund lots of research in developing killer robots. That’s what they do. So, those are some simple examples.
Will AI decide to do something evil, for the sake of doing evil? No. Will it decide to do something which has a side effect of hurting humanity? Quite possible.
As you know, the range on when we might build an artificial general intelligence varies widely. Why do you think that is, and do you care to kind of throw your hat in that lottery, or that pool?
Predicting the future is notoriously difficult. I don’t see myself as someone who has an advantage in that field, so I defer to people like Ray Kurzweil, who have spent their lives building those prediction curves, those exponential curves. With him being Director of Engineering at Google, I think he has pretty good inside access to the technology, and if he says something like 2045 is a reasonable estimate, I’ll go with that.
The reason people have different estimates is the same reason we have different betting patterns in the stock market, or horses, or anything else. Different experts give different weights to different variables.
You have advocated research into, quote, “boxing” artificial intelligence. What does that mean, and how would you do it?
In plain English, it means putting it in prison, putting it in a controlled environment. We already do it with computer viruses. When you study a computer virus, you put it in an isolated system which has no access to internet, so you can study its behavior in a safe environment. You control the environment, you control inputs, outputs, and you can figure out how it works, what it does, how dangerous it is.
The same makes sense for intelligence software. You don’t want to just run a test by releasing it on the internet, and seeing what happens. You want to control the training data going in. That’s very important. We saw some terrible fiascos with the recent Microsoft Chat software being released without any controls, and users feeding it really bad data. You want to prevent that, so for that reason, I advocate having protocols, environments in which AI researchers can safely test their software. It makes a lot of sense.
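To picture what such a protocol might look like in practice, here is a minimal, purely illustrative sketch of a boxed evaluation loop (the function and variable names are hypothetical, not drawn from Yampolskiy’s work): the system under test only ever sees inputs the experimenter curates, and everything it emits is logged and screened before it could reach anything outside the box.

```python
# Illustrative sketch of a "boxed" test harness: the agent under test is a plain
# callable with no network or file access of its own; every input is curated by
# the experimenter, and every output is logged and quarantined if suspicious.
from typing import Callable, List, Tuple

def run_boxed(agent: Callable[[str], str],
              curated_inputs: List[str],
              banned_tokens: Tuple[str, ...] = ("http://", "https://", "exec(")) -> List[str]:
    """Feed the agent only pre-approved inputs; log and screen everything it emits."""
    transcript = []
    for prompt in curated_inputs:
        output = agent(prompt)  # the only channel out of the box
        if any(tok in output.lower() for tok in banned_tokens):
            transcript.append(f"[QUARANTINED] {output!r}")
        else:
            transcript.append(output)
    return transcript

if __name__ == "__main__":
    toy_agent = lambda prompt: prompt.upper()  # toy stand-in for the system under test
    print(run_boxed(toy_agent, ["hello box", "please fetch http://example.com"]))
```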
When you think about the great range of intellectual ability, from the smallest and simplest creatures, to us, is there even an appropriate analogy for how smart a superintelligence could be? Is there any way for us to even think about that?
Like, when my cat leaves a mouse on the back porch, everything that cat knows says that I’m going to like that dead mouse, right? Its entire view of the world is that I’m going to want that. It doesn’t have, even remotely, the mental capability to understand why I might not.
Is an AI, do you think, going to be that far advanced, where we can’t even communicate in the same sort of language, because it’s just a whole different thing?
Eventually, yes. Initially, of course, we’ll start with sub-human AI, and slowly it will get to human levels, and very quickly it will start growing almost exponentially, until it’s so much more intelligent. At that point, as you said, it may not be possible for us to understand what it does, how it does it, or even meaningfully communicate with it.
You have launched a new field of study, called Intellectology. Can you talk about what that is, and why you did that? Why you thought there was kind of a missing area in the science?
Sure. There seems to be a lot of different sub-fields of science, all of them looking at different aspects of intelligence: how we can measure intelligence, build intelligence, human intelligence versus non-human intelligence, animals, aliens, communicating across different species. Forensic science tells us that we can look at an artifact, and try to deduce the engineering behind it. What is the minimum intelligence necessary to make this archeological artifact?
It seems to make sense to bring all of those different areas together, under a single umbrella, a single set of terms and tools, so they can be re-used, and benefit each field individually. For example, I look a lot at artificial intelligence, of course. And studying this type of intelligence is not the same as studying human intelligence. That’s where a lot of mistakes come from, assuming that human drives, wants and needs will be transferred.
This idea of a universe of different possible minds is part of this field. We need to understand that, just like our planet is not the center of the universe, our intelligence is not the center of that universe of possible minds. We’re just one possible data point, and it’s important to generalize outside of human values.
So it’s called Intellectology. We don’t actually have a consensus definition on what intelligence is. Do you begin there, with “this is what intelligence is”? And if so, what is intelligence?
Sure. There is a very good paper published by one of the co-founders of DeepMind, which surveys, maybe, I don’t know, a hundred different definitions of intelligence, and tries to combine them. The combination sounds something like “intelligence is the ability to optimize for your goals, across multiple environments.” You can say it’s the ability to win in any situation, and that’s pretty general.
It doesn’t matter if you are a human at a college, trying to get a good grade, an alien on another planet trying to survive, it doesn’t matter. The point is if I throw a mind into that situation, eventually it learns to do really well, across all those domains.
We see AIs, for example, capable of learning multiple video games, and performing really well. So, that’s kind of the beginning of that general intelligence, at least in artificial systems. They’re obviously not at the human level yet, but they are starting to be general enough that they can quickly pick up what to do in all of those situations. That’s, I think, a very good and useful definition of what intelligence is, one we can work with.
One thing you mentioned in your book, Artificial Superintelligence, is the notion of convincing robots to worship humans as gods. How would you do that, and why that? Where did that idea come from?
I don’t mention it as a good idea, or my idea. I kind of survey what people have proposed, and it’s one of the proposals. I think it comes from the field of theology, and I think it’s quite useless, but I mention it for the sake of listing all of the ideas people have suggested. A colleague and I published a survey of possible solutions for dealing with super-intelligent systems, and we reviewed some three hundred papers. I think that was one of them.
I understand. Alright. What is AI Completeness Theory?
We think that there are certain problems which are fundamental problems. If you can do one of those problems, you can do any problem. Basically, you are as smart as a human being. It’s useful to study those problems, to understand the progress in AI, and whether we’ve gotten to that level of performance. So, in one of my publications, I talk about the Turing Test as being a fundamental, first AI-complete problem. If you can pass the Turing Test, supposedly, you’re as intelligent as a human.
The unrestricted test, obviously not the five-minute version of that, or whatever is being done today. If that’s possible, then you can do all of the other problems. You can do computer vision, you can do translation, maybe you can even do computer programming.
You also write about machine ethics and robot rights. Can you explore that, for just a minute?
With regards to machine ethics, the literature seems to be, basically, everyone trying to propose that a certain ethical theory is the right one, and we should implement it, without considering how it impacts everyone who disagrees with the theory. Philosophers have been trying to come up with a common ethical framework for millennia. We are not going to succeed in the next few years, for sure.
So, my argument was that we should not even try to pick one correct ethical theory. That’s not a solution which will make all of us happy. And each one of those ethical theories has well-known problems; if a system with that type of power implements that ethical framework, it’s going to create a lot of problems, a lot of damage.
With regards to rights for robots, I was advocating against giving them equal rights, human rights, voting rights. The reasoning is quite simple. It’s not because I hate robots. It’s because they can reproduce almost infinitely. You can have a trillion copies of any software, almost instantaneously, and if each of them has voting rights, that essentially means that humanity has no rights. We give away human rights. So, anyone who proposes giving that type of civil rights to robots is essentially against human rights.
That’s a really bold statement. Let’s underline that, because I want to come back to it. But in order to do that, I want to return to the first thing I asked you, or one of the earlier things, about consciousness and self-awareness. You said these aren’t really scientific questions, so let’s not talk about them. But at least with self-awareness, that isn’t the case, is it?
I mean, there’s the red dot test—the mirror test—where purportedly, you can put a spot on an animal’s forehead while it’s asleep, and if it gets up and sees that in a mirror, and tries to wipe it off, it therefore knows that that thing in the mirror is it, and it has a notion of self. It’s a hard test to pass, but it is a scientific test. So, self-awareness is a scientific idea, and would an artificial intelligence have that?
We have a paper, still undergoing the review process, which surveys every known test for consciousness, and I guess you include self-awareness with that. All of them measure different correlates of consciousness. The example you give, yes, animals can recognize that it’s them in the mirror, and so we assume that also means they have similar consciousness to ours.
But it’s not the same for a robot. I can program a robot to recognize a red dot, and assume that it’s on its own forehead, in five minutes. It’s not, in any way, a guarantee that it has any consciousness or self-awareness properties. It’s basically proving that we can detect red dots.
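To show how little the mechanical version of the test demands, here is a rough, illustrative sketch of the kind of five-minute program being alluded to (names and thresholds are made up): a few lines that “recognize” a red dot in an image, with no model of self anywhere in sight.

```python
# Minimal "red-dot detector": flags a cluster of strongly red pixels in an image.
# Passing this kind of check is trivial pattern matching; it demonstrates nothing
# about self-awareness, which is the point being made above.
import numpy as np

def has_red_dot(image_rgb: np.ndarray, min_pixels: int = 20) -> bool:
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    red_mask = (r > 180) & (g < 80) & (b < 80)
    return int(red_mask.sum()) >= min_pixels

if __name__ == "__main__":
    frame = np.zeros((64, 64, 3), dtype=np.uint8)  # synthetic "mirror image"
    frame[30:36, 30:36] = [220, 30, 30]            # paint a small red dot on it
    print(has_red_dot(frame))                       # True
```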
But all you are saying is we need a different test for AI self-awareness, not that AI self-awareness is a ridiculous question to begin with.
I don’t know what the definition of self-awareness is. If you’re talking about some non-material spiritual self-consciousness thing, I’m not sure what it does, or why it’s useful for us to talk about it.
Let’s ask a different question, then. Sentience is a word which is commonly misused. It’s often used to mean intelligent, but it simply means “able to sense something,” usually pain. So, the question of “is a thing sentient” is really important. Up until the 1990s, in the United States, veterinarians were taught not to anesthetize animals when they operated on them, because they couldn’t feel pain—despite their cries and writhing in apparent agony.
Similarly, until twenty or so years ago, babies, human babies, weren’t anesthetized for open-heart surgery, because again, the theory was that they couldn’t feel pain. Their brains just weren’t well-developed. We put the notion of sentience right next to rights, because we say, “If something can feel pain, it has a right not to be tortured.”
Wouldn’t that be an equivalent with artificial intelligence? Shouldn’t we ask, “Can it feel pain?” And if it can, you don’t have to say, “Oh yeah, it should be able to vote for the leaders.” But you can’t torture it. That would be just a reasonable thing, a moral thing, an ethical thing to say. If it can feel, then you don’t torture it.
I can easily agree with that. We should not torture anyone, including any reinforcement learners, or anything like that. To the best of my knowledge, there are two papers published on the subject of computer pain, good papers, and both say it’s impossible to do right now.
It’s impossible to measure, or it’s impossible for a computer to feel pain right now?
It’s impossible for us to program a computer to feel pain. Nobody knows how to do it, how to even start. It’s not like with, let’s say pattern recognition, we know how to start, we have some results, we get ten percent accuracy so we work on it and get to fifteen percent, forty percent, ninety percent. With artificial pain, nobody knows how to even start. What’s the first line of code you write for that? There is no clue.
With humans, we assume that other humans feel pain because we feel pain, and we’ve got similar hardware. But there is not a test you can do to measure how much pain someone is in. That’s why we show patients those ten pictures of different screaming faces, and ask, “Well, how close are you to this picture, or that one?” This is all a very kind of non-scientific measurement.
With humans, yes, obviously we know, because we feel it, so similar designs must also experience that. With machines, we have no way of knowing what they feel, and no one, as far as I know, is able to say, “Okay, I programmed it so it feels pain, because this is the design we used.” There are just no ideas for how something like that can be implemented.
Let’s assume that’s true, for a moment. The way, in a sense, that you get to human rights, is you start by saying that humans are self-aware, which, as you say, we can all self-report. If we are self-aware, that implies we have a self, and having a self means that that self can feel, and that’s when you get sentience. And then, you get up to sapience, which is intelligence. So, we have a self, that self can feel, and therefore, because that self can suffer, that self is entitled to some kind of rights.
And you’re saying we don’t know what that would look like in a computer, and so forth. Granting all of that, for just a moment, there are those who say that human intelligence, anything remotely like human intelligence, has to have those building blocks, because from self-awareness you get consciousness, which is a different thing.
And consciousness, in part, embodies our ability to change focus, to be able to do one thing, and then, for whatever reason, do a different thing. It’s the way we switch, and we go from task to task. And further, it’s probably the way we draw analogies, and so forth.
So, there is a notion that, even to get to intelligence, to get to superintelligence, there is no way to kind of cut all of that other stuff out, and just go to intelligence. There are those who say you cannot do that, that all of those other things are components of intelligence. But it sounds like you would disagree with that. If so, why would that be?
I disagree, because we have many examples of humans who are not neurotypical. People, for example, who don’t experience pain. They are human beings, they are intelligent, they certainly have full rights, but they never feel any pain. So that claim—that you must feel pain in order to reach those levels of intelligence—is simply not true. There are many variations among human beings; some, for example, don’t have visual thinking patterns. They think in words, not in images as most of us do. So, even that goes away.
We don’t seem to have a guaranteed set of properties that a human being must have to be considered human. There are human beings who have very low intelligence, maybe severe mental retardation. They are still human beings. So, there are very different standards for, a) getting human rights, and, b) having all those properties.
Right. Okay. You advocate—to use your words from earlier in this talk—putting the artificial intelligence in a prison. Is that view—we need to lock it up before we even make it—really, in your mind, the best approach?
I wouldn’t be doing it if I didn’t think it was. We definitely need safety mechanisms in place. There are some good ideas we have, for how to make those systems safer, but all of them require testing. Software requires testing. Before you run it, before you release it, you need a test environment. This is not controversial.
What do you think of the OpenAI initiative, which is the idea that as we’re building this we ought to share and make it open source, so that there’s total transparency, so that one bad actor doesn’t get an AGI, and so forth? What are your thoughts on that?
This helps to distribute power amongst humans, so not a single person gets all the power, but a lot of people have access. But at the same time, it increases danger, because all the crazies, all the psychopaths, now get access to the cutting-edge AI, and they can use it for whatever purposes they want. So, it’s not clear cut whether it’s very beneficial or very harmful. People disagree strongly on OpenAI, specifically.
You don’t think that the prospects for humans to remain the dominant species on this planet are good. I remember seeing an Elon Musk quote, he said, “The only reason we are at the top is because we’re the smartest, and if we’re not the smartest anymore, we’re no longer going to be on top.” It sounds like you think something similar to that.
Absolutely, yes. To paraphrase, or quote directly, from Bill Joy, “The future may not need us.”
What do you do about that?
That’s pretty much all of my research. I’m trying to figure out if the problem of AI control, controlling intelligent agents, is actually solvable. A lot of people are working on it, but we never have actually established that it’s possible to do. I have some theoretical results of mine, and from other disciplines, which show certain limitations to what can be done. It seems that intelligence, and how controllable something is, are inversely related. The more intelligent a system becomes, the less control we have over it.
Things like babies have very low intelligence, and we have almost complete control over them. As they grow up, as they become teenagers, they get smarter, but we lose more and more control. With super-intelligent systems, obviously, you have almost no control left.
Let’s back up now, and look at the here and now, and the implications. There’s a lot of debate about AI, and I’m not even talking about an AGI, just all the stuff that’s wrapped up in it: automation, that it’s going to replace humans, that you’re going to have an unemployable group of people, and social unrest. You know all of that. What are your thoughts on that? What do you see for the immediate future of humanity?
Right. We’re definitely going to have a lot of people lose their jobs. I’m giving a talk for a conference of accountants soon, and I have the bad news to share with them, that something like ninety-four percent of them will lose their jobs in the next twenty years. It’s the reality of it. Hopefully, the smart people will find much better jobs, other jobs.
But for many, many people, who don’t have education, or maybe don’t have the cognitive capacity, they will no longer be competitive in this economy, and we’ll have to look at things like unconditional basic income, unconditional basic assets, to kind of prevent revolutions from happening.
AI is going to advance much faster than robots, which have all these physical constraints, and can’t just double over the course of eighteen months. Would you be of the mind that mental tasks, mental jobs, are more at risk than physical jobs, as a general group?
It’s more about how repetitive your job is. If you’re doing something the same, whether it’s physical or mental, it’s trivial to automate. If you’re always doing something somewhat novel, now that’s getting closer to AI completeness. Not quite, but in that direction, so it’s much harder.
For two hundred and fifty years, this country, the West, has had economic progress, and we’ve had technological revolutions which could, arguably, be on the same level as the artificial intelligence revolution. We had mechanization, the replacement of human power with animal power, the electrification of industry, the adoption of steam, and all of these appeared to be very disruptive technologies.
And yet, through all of that, unemployment, except for the Great Depression, never has bumped out of four to nine percent. You would assume, if technology was able to rapidly displace people, that it would be more erratic than that. You would have these massive transforming industries, and then you would have some period of high unemployment, and then that would settle back down.
So, the theory around that would be that, no, the minute we build a new tool, humans just grab that thing, and use it to increase their own productivity, and that’s why you never have anything outside of four to nine percent unemployment. What’s wrong with that logic, in your mind?
You are confusing tools and agents. AI is not a tool. AI is an independent agent, which can possibly use humans as tools, but not the other way around. So, the examples of saying we had previous singularities, whether it’s cultural or industrial, they are just wrong. You are comparing apples and potatoes. Nothing in common.
So, help me understand that a little better. Unquestionably, technology has come along, and, you know, I haven’t met a telephone switchboard operator in a long time, or a travel agent, or a stockbroker, or typewriter repairman. These were all jobs that were replaced by technology, and whatever word you put on the technology doesn’t really change that simple fact. Technology came out, and it upset the applecart in the employment world, and yet, unemployment never goes up. Help me understand why AI is different again, and forgive me if I’m slow here.
Sure. Let’s say you have a job, you nail roofs to houses, or something like that. So, we give you a new tool, and now you can have a nail gun. You’re using this tool, you become ten times more efficient, so nine of your buddies lose jobs. You’re using a tool. The nail gun will never decide to start a construction company, and go into business on its own, and fire you.
The technology we’re developing now is fundamentally different. It’s an agent. It’s capable—and I’m talking about the future of AI, not AI of today—of self-improvement. It’s capable of cross-domain learning. It’s as smart as, or smarter than, any human. So, it’s capable of replacing you. You become a bottleneck in that hybrid system. You no longer hold the gun. You have nothing to contribute to the system.
So, it’s very easy to see that all jobs will be fully automated. The logic always was, the job which is automated is gone, but now we have this new job which we don’t know how to automate, so you can get a new, maybe better, job doing this advanced technology control. But if every job is automated, I mean, by definition, you have one hundred percent unemployment. There are still jobs, kind of prestige jobs, because it’s a human desire to get human-made art, or maybe handmade items, expensive and luxury items, but they are a tiny part of the market.
If AI can do better in any domain, humans are not competitive, so all of us are going to lose our jobs. Some sooner, some later, but I don’t see any job which cannot be automated, if you have human level intelligence, by definition.
So, your thesis is that, in the future, once the AI’s pass our abilities, even a relatively small amount, every new job that comes along, they’ll just learn quicker than we will and, therefore, it’s kind of like you never find any way to use it. You’re always just superfluous to the system.
Right. And the new jobs will not be initially designed for a human operator. They’ll basically be streamlined for machines in the first place, so we won’t have any competitive advantage. Right now, for example, our cars are designed for humans. If you want to add a self-driving component, you have to work with the wheel and the brake pedals and all of that to make the switch.
Whereas if, from the beginning, you’re designing it to work with machines, you have smart roads, smart signs, and humans are not competitive at any point. There is never an entry point where a human has a better answer.
Let me do a sanity check at this point, if I could. So, humans have a brain that has a hundred billion neurons, and countless connections between them, and it’s something we don’t really understand very well. And it perhaps has emergent properties which give us a mind, that give us creativity, and so forth, but it’s just simple emergence.
We have this thing called consciousness. I know you say it’s not scientific, but if you believe that you’re conscious, then you have to grapple with the fact that whatever that is, is a requisite for you being intelligent.
So, we have a brain we don’t understand, an emergent mind we don’t understand, a phenomenon of “consciousness” which is the single fact we are most aware of in our own life, and all of that makes us this. Meanwhile, I have simple pieces of hardware that I’m mightily delighted when they work correctly.
What you’re saying is… It seems you have one core assumption, which is that in the end, the human brain is a machine, and we can make a copy of that machine, and it’s going to do everything a human machine can do, and even better. That, some might argue, is the non-scientific leap. You take something we don’t understand, that has emergent properties we don’t understand, that has consciousness, which we don’t understand, and you say, “oh yes, it’s one hundred percent certain we’re going to be able to exceed our own intelligence.”
Kevin Kelly calls that a Cargo Cult. It’s like this idea that, oh well, if we just build something just like it, it’s going to be smarter than us. It smacks to some of being completely unscientific. What would you say to that?
One, it’s already smarter than us, in pretty much all domains. Whatever you’re talking about, playing games, investing in the stock market… You take a single domain where we know what we’re doing, and it seems like machines are either already at a human level, or quickly surpassing it, so it’s not crazy to think that this trend will continue. It’s been going on for many years.
I don’t need to fully understand the system to do better than a system. I don’t know how to make a bird. I have no idea how the cells in a bird work. It seems to be very complex. But, I take airplanes to go to Europe, not birds.
Can you explain that sentence you just said, “domains where we know what we are doing”? Isn’t that kind of the whole point, that there’s this big area of things where we don’t know what we’re doing, and where we don’t understand how humans have the capabilities? How are they able to solve non-algorithmic problems? How are humans able to do the kind of transfer learning we do, where we know one thing, in one domain, and we’re really good at applying it in others?
We don’t know how children learn, how a two-year-old gets to be a full AGI. So, granted, in the domains where we know what we are doing, all six of them… I mean look, let’s be real: just to beat humans at one game, chess, took a multi-billion-dollar company spending untold millions of dollars, all of the mental effort of many, many people, working for years. And then you finally—and it’s one tiny game—get a computer that can do better than a human.
And you say, “Oh, well. That’s it, then. We’re done. They can do anything, now.” That seems to extrapolate beyond what the data would suggest.
Right. I’m not saying it’s happening now. I’m not saying computers today are capable of those things. I’m saying there is not a reason for why it will not be true in the maybe-distant future. As I said, I don’t make predictions about the date. I’m just pointing out that if you can pick a specific domain of human activity, and you can explain what they do in that domain—it’s not some random psychedelic performance, but actually “this is what they do”—then you have to explain why a computer will never be able to do that.
Fair enough. Assuming all of that is going to happen, that gradually, one thing by one thing by one thing, computers will shoot ahead of us, and obsolete us, and I understand you’re not picking dates, but presumably, we can stack-rank the order of things to some very coarse degree… The most common question I get from people is, “Well, what should I study? What should my kids study, in order to be relevant, to have jobs in the future?”
You’re bound to get that question, and what would you say to it?
That goes directly to my paper on AI completeness. Basically, what is the last job to be automated? It’s the person doing AI research. Someone who is advancing machine learning. The moment machines can automate that, there are no other jobs left. But that’s the last job to go.
So, definitely study computer science, study machine learning, study artificial intelligence. Anything which helps you in those fields—mathematics, physics—will be good for you. Don’t major in areas, in domains, which we already know will be automated by the time you graduate. As part of my job I advise students, and I would never advise someone to become a cab driver.
It’s funny. Mark Cuban, who’s not necessarily in the field but has really interesting thoughts about it, said that if he were starting over, he would be a philosophy major, and not pursue a technical job, because the technical jobs are actually probably the easiest things for machines to do. That’s kind of in their own backyard. But the more abstract it is, in a sense, the longer it would take a computer to be able to do it. What would you say to that?
I agree. It’s an awesome job, and if you can get one of those hundred jobs in the world, I say go for it. But the market is pretty small and competitive, whereas for machine learning, it’s growing exponentially. It’s paying well, and you can actually get in.
You mentioned the consciousness paper you’re working on. When will that come out?
That’s a finished draft, and it’s just a survey paper of different methods people propose to detect or measure consciousness. It’s under review right now. We’re working on some revisions. But basically, we reviewed everything we could find in the last ten to fifteen years, and all of them measure some side effect of what people or animals do. They never actually try to measure consciousness itself.
There are some variants which deal with quantum physics, and the collapse of wave functions, and Copenhagen interpretations, and things like that; but even that is not well-defined. It’s more of a philosophical kind of an argument. So, it seems like there is this concept, but nobody can tell me what it does, why it’s useful, and how to detect it or measure it.
So, it seems to be somewhat unscientific. Saying that, “Okay, but you feel it in you,” is not an argument. I know people who say, “I hear the voice of Jesus speaking to me.” Should I take that as a scientific theory, and study it? Just because someone is experiencing it doesn’t make it a scientific concept.
Tantalize us a little bit with some of the other things you’re working on, or some of the exciting things that you might be publishing soon.
As I said, I’m looking at, kind of, limitations of what we can do in the AI safety field. One problem I’m looking at is this idea of verifiability. What can be verified scientifically, specifically in mathematical proofs and computer software? Trying to write very good software, with no bugs, is kind of a fundamental holy grail of computer science, computer security, cyber security. There is a lot of very good work on it, but it seems there are limitations on how good we can get. We can remove most bugs, but usually not all bugs.
If you have a system which makes a billion decisions a second, and there is a one in a billion chance that it’s getting something wrong, those mistakes quickly accumulate. Also, there is almost no successful work on how to do software verification for AI in novel domains, systems capable of learning. All of the verification work we know about is for kind of deterministic software, and specific domains.
We can do airplane autopilot software, things like that, and verify it very well, but not something with this ability to learn and self-improve. That’s a very hard, and still wide-open, area of research.
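A quick back-of-the-envelope sketch, using only the rates mentioned above, shows why even a one-in-a-billion error rate is not reassuring at that decision rate:

```python
# Back-of-the-envelope arithmetic for the point above: a system making a billion
# decisions per second, each wrong with probability one in a billion, still makes
# about one mistake every second on average, which adds up quickly.
decisions_per_second = 1_000_000_000
error_probability = 1e-9

errors_per_second = decisions_per_second * error_probability  # ~1.0
errors_per_day = errors_per_second * 60 * 60 * 24             # ~86,400

print(f"expected errors per second: {errors_per_second:.1f}")
print(f"expected errors per day:    {errors_per_day:,.0f}")
```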
Two final questions, if I can. The first one is—I’m sure you think through all of these different kinds of scenarios; this could happen or that could happen—what would happen, in your view, if a single actor, be it a company or a government, or what have you; a single actor invented a super-intelligent system? What would you see the ripple effects of that being?
That’s basically what singularity is, right? We get to that point where machines are the ones inventing and discovering, and we can no longer keep up with what’s going on. So, making a prediction about that is, by definition, impossible.
The most important point I’d like to stress—if they just happen to do it somehow, by some miracle, without any knowledge or understanding of safety and control, just created a random very smart system, in that space of possible minds—there is almost a guarantee that it’s a very dangerous system, which will lead to horrible consequences for all of us.
You mentioned that the first AGI is priceless, right? It’s worth countless trillions of dollars.
Right. It’s basically free labor of every kind—physical, cognitive—it is a huge economic benefit, but if in the process of creating that benefit, it destroys humanity, I’m not sure money is that valuable to you in that scenario.
The final question: You have a lot of scenarios. It seems your job is to figure out, how do we get into this future without blowing ourselves up? Can you give me the optimistic scenario; the one possible way we can get through all of this? What would that look like to you? Let’s end on the optimistic note, if we can.
I’m not sure I have something very good to report. It seems like long-term, everything looks pretty bleak for us. Either we’re going to merge with machines, and eventually become a bottleneck which will be removed, or machines will simply take over, and we’ll become quite dependent on them deciding what to do with us.
It could be a reasonably okay existence, with machines treating us well, or it could be something much worse. But short of some external catastrophic change preventing development of this technology, I don’t see a very good scenario, where we are in charge of those god-like machines and getting to live in paradise. It just doesn’t seem very likely.
So, when you hear about, you know, some solar flare that just missed the Earth by six hours of orbit or something, are you sitting there thinking, “Ah! I wish it had hit us, and just fried all of these things. It would buy humanity another forty years to recover.” Is that the best scenario, that there’s a button you could push that would send a giant electromagnetic pulse and just destroy all electronics? Would you push the button?
I don’t advocate any terrorist acts, natural or human-caused, but it seems like it would be a good idea if people smart enough to develop this technology, were also smart enough to understand possible consequences, and acted accordingly.
Well, this has been fascinating, and I want to thank you for taking the time to be on the show.
Thank you so much for inviting me. I loved it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 14: A Conversation with Martin Ford

[voices_in_ai_byline]
In this episode Byron and Martin talk about the future of jobs, AGI, consciousness and more.
[podcast_player name="Episode 14: A Conversation with Martin Ford" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-30-(00-40-18)-martin-ford.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-7.jpg"]
[voices_in_ai_link_back]
Byron Reese: Welcome to Voices in AI, I’m Byron Reese. Today we’re delighted to have as our guest Martin Ford. Martin Ford is a well-known futurist, and he has two incredibly notable books out. The most recent one is called Rise of the Robots: Technology and the Threat of a Jobless Future, and the other is The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
I have read them both cover-to-cover, and Martin is second-to-none in coming up with original ideas and envisioning a kind of future. What is that future that you envision, Martin?
Martin Ford: Well, I do believe that artificial intelligence and robotics is going to have a dramatic impact on the job market. I’m one of those that believes that this time is different, relative to what we’ve seen in the past, and that, therefore, we are probably going to have to find a new way to adapt to that.
I do see a future where there certainly is potential for significant unemployment, and even if that doesn’t develop, at a minimum we’re probably going to have underemployment and a continuation of stagnant wages, maybe even declining wages, and probably soaring inequality. And all of those things are just going to put an enormous amount of stress both on society and on the economy, and I think that’s going to be one of the biggest issues we need to think about over the next few decades.
So, taking a big step back, you said, quote: “This time is different.” And that’s obviously a reference to the oft-cited argument that we’ve heard this since the beginning of the Industrial Revolution, that machines were going to advance too quickly, and people weren’t going to be able to find new skills.
And I think everybody agrees, up to now, it’s been fantastically false, but your argument that this time is different is based on what? What exactly is different?
Well, the key is that the machines, in a limited sense, are beginning to think. I mean, they’re taking on cognitive capabilities. So what that means is that technology is finally encroaching on that fundamental capability that so far has allowed us to really stay ahead of the march of progress, and remain relevant.
I mean, you can ask the question, “Why are there still so many jobs? Why don’t we have massive unemployment already?” And surely the answer to that is our ability to learn and to adapt. To find new things to do. And yet, we’re now at a point where machines… especially in the form of machine learning, are beginning to move into that space.
And it’s going to, I think, eventually get to what you might think of as a kind of a tipping point, or an inflection point, where technology begins to outcompete a lot of people, in terms of their basic capability to really contribute to the economy.
No one is saying that all the jobs are going to disappear, and that there’s literally going to be no one working. But, I think it’s reasonable to be concerned that a significant fraction of our workforce—in particular those people that are perhaps best-equipped to do things that are fundamentally routine and repetitive and predictable—those people are probably going to have a harder and harder time adapting to this, and finding a foothold in the economy.
But, specifically, why do you think that? Give me a case-in-point. Because we’ve seen enormous, disruptive technologies on par with AI, right? Like, the harnessing of artificial power has to be up there with artificial intelligence. We’ve seen entire categories of jobs vanish. We’ve seen technology replace any number of people already.
And yet, unemployment, with the exception of the depression, never gets out from between four and nine percent in this country. What holds it in that stasis, and why? I still kind of want more meat on that, why this time is different. Because everything kind of hinges on that.
Well, I think that historically, we’ve seen primarily technology displacing muscle power. That’s been the case up until recently. Now, you talk about harnessing power… Obviously that did displace a lot of people doing manual labor, but people were able to move into more cognitively-oriented tasks.
Even if it was a manual job, it was one that required more brain power. But now, machines are encroaching on that as well. Clearly, we see many examples of that. There are algorithms that can do a lot of the things that journalists do, in terms of generating news stories. There are algorithms beginning to take on tasks done by lawyers, and radiologists, and so forth.
The most dramatic example perhaps I’ve seen is what DeepMind did with its AlphaGo system, where it was able to build a system that taught itself to play the ancient game of Go, and eventually became superhuman at that, and was able to beat the best players in the world.
And to me, I would’ve looked at that and I would’ve said, “If there’s any task out there that is uniquely human, and ought to be protected from automation, playing the game of Go—given the sophistication of the game—really, should probably be on that list.” But it’s fallen to the machines already.
So, I do think that when you really look at this focus on cognitive capability, on the fact that the machines are beginning to move into that space which so far has protected people… that, as we look forward—again, I’m not talking about next year, or three years from now even, but I’m thinking in terms of decades, ten years from now, twenty years from now—what’s it going to look like as these technologies continue to accelerate?
It does seem to me that there’s very likely to be a disruption.
So, if you’d been alive in the Industrial Revolution, and somebody said, “Oh, the farm jobs, they’re vanishing because of technology. There are going to be fewer people employed in the farm industry in the future.” And then, wouldn’t somebody have asked the question, “Well, what are all those people going to do? Like, all they really know how to do is plant seeds.”
All the things they ended up doing were things that by-and-large didn’t exist at the time. So isn’t it the case that whatever the machines can do, humans figure out ways to use those skills to make jobs that are higher in productivity than the ones that they’re replacing?
Yeah, I think what you’re saying is absolutely correct. The question though, is… I’m not questioning that some of those jobs are going to exist. The question is, are there going to be enough of those jobs, and will those jobs be accessible to average people in our population?
Now, the example you are giving with agriculture is the classic one that everyone always cites, and here’s what I would say: Yes, you’re right. Those jobs did disappear, and maybe people didn’t anticipate what the new things were going to be. But it turned out that there was the whole rest of the economy out there to absorb those workers.
Agricultural machinery, tractors and combines and all the rest of it, was a specific mechanical technology that had a dramatic impact on one sector of the economy. And then those workers eventually moved to other sectors, and as they moved from sector to sector… first they moved from agriculture to manufacturing, and that was a transition. It wasn’t instant, it took some time.
But basically, what they were doing was moving from routine work in the field to, fundamentally, routine work in factories. And that may have taken some training and some adaptation, but it was something that basically involved moving from one routine to another routine thing. And then, of course, there was another transition that came later, as manufacturing also automated or offshored, and now everyone works in the service sector.
But still, most people, at least a very large percentage of people, are still doing things that are fundamentally routine and repetitive. A hundred years ago, you might’ve been doing routine work in the field, in the 1950s maybe you were doing routine work in a factory, now you’re scanning barcodes at Wal-Mart, or you’re stocking the shelves at Wal-Mart, or you’re doing some other relatively routine thing.
The point I’m making is that in the future, technology is going to basically consume all of that routine, repetitive, predictable work… And that there still will be things left, yes, but there will be more creative work, or it’ll be work that involves, perhaps, deep interaction with other people and so forth, that really are going to require a different skill set.
So it’s not the same kind of transition that we’ve seen in the past. It’s really more of, I think, a dramatic transition, where people, if they want to remain relevant, are going to have to really have an entirely different set of capabilities.
So, what I’m saying is that a significant fraction of our workforce is going to have a really hard time adapting to that. Even if the jobs are there, if there are sufficient jobs out there, they may not be a good match for a lot of people who are doing routine things right now.
Have you tried to put any sort of, even in your own head, any kind of model around this, like how much unemployment, or at what rate you think the economy will shed jobs, or what sort of timing, or anything like that?
I make guesses at it. Of course, there are some relatively high-profile studies that have been done, and I personally believe that you should take that with a grain of salt. The most famous one was the one done back in 2013, by a couple of guys at Oxford.
Which is arguably the most misquoted study on the subject.
Exactly, because what they said was that roughly forty-seven percent—which is a remarkably precise number, obviously—roughly half the jobs in the United States are going to be susceptible, could be automated, within the next couple of decades.
I thought what it says is that forty-seven percent of the things that people do in their jobs is able to be automated.
Yeah, this particular study, they did look at actual jobs. But the key point is that they said roughly half of those jobs could be automated, they didn’t say they will be automated. And when the press picked that up, it in some cases became “half the jobs are definitely going to go away.” There was another later study, which you may be referring to, [that] was done by McKinsey, and that one did look at tasks, not at jobs.
And they came up with approximately the same number. They came up with the idea that about half of the tasks within jobs would be susceptible to automation, or in some cases may already be able to be automated in theory… but that was looking at the task level. Now again, the press kind of looked at that and they took a very optimistic take on it. They said, “Well, your whole job then can’t be automated, only half of your job can be automated. So your employer’s going to leave you there to do higher-level stuff.” And in some cases, that may happen.
But the other alternative, of course, is that if you’ve got two people doing two jobs, and half of each of those can be automated, then we could well see a consolidation there, and maybe that just becomes one job, right? So, different studies have looked at it in different ways. Again, I would take all of these studies with some skepticism, because I don’t think anyone can really make predictions this precise.
But the main takeaway from it, I think, is that the amount of work that is going to be susceptible to automation could be very significant. And I would say, to my mind, it doesn’t make much difference whether it’s twenty percent or fifty percent. Those are both staggering numbers. They would both have a dramatic impact on society and on the economy. So regardless of what the exact figure is, it’s something that we need to think about.
In terms of timing, I tend to think in terms of between ten and twenty years as being the timeframe where this becomes kind of unambiguous, where we’ve clearly reached the point where we’re not going to have this debate anymore—where everyone agrees that this is an issue.
I tend to think ten to twenty years, but I certainly know people that are involved, for example, in machine learning, that are much more aggressive than that; and they say it could be five years. So that is something of a guess, but I do think that there are good reasons to be concerned that the disruption is coming.
The other thing I would say is that, even if I turn out to be wrong about that, and it doesn’t happen within ten to twenty years, it probably is going to happen within fifty years. It seems inevitable to me at some point.
So, you talk about not having the debate anymore. And I think one of the most intriguing aspects of quote, ‘the debate’, is that when you talk to self-identified futurists, or when you talk to economists on the effect technology is going to have on jobs, they’re almost always remarkably split.
So you’ve got this camp of fifty percent-ish that says, “Oh, come on, this is ridiculous. There is no finite number of jobs. Anytime a person can pick up something and add value to it, they’ve just created a job. We want to get people out of tasks that machines can do, because they’re capable of doing more things,” and so forth.
So you get that whole camp, and then you have the side which, it sounds like, you’re more on, which is, “No, there’s a point at which the machines are able to improve faster than people are able to train,” and that that’s kind of an escape velocity, and that has those repercussions. So all of that is a buildup to the question… like, these are two very different views of the future that people who think a lot about this have.
What assumptions do the two camps have, underneath their beliefs, that are making them so different, in your mind?
Right, I do think you’re right. It’s just an extraordinary range of opinion. I would say it’s even broader than that. You’re talking about the issue of whether or not jobs will be automated, but, on the same spectrum, I’m sure you can find famous economists, maybe economists with Nobel Prizes that would tell you, “This is all a kind of a silly issue. It’s repetition of the Luddite fears that we’ve had basically forever and nothing is different this time.”
And then at the other end of that spectrum you’ve got people not just talking about jobs; you’ve got Elon Musk and you’ve got Stephen Hawking saying, “It’s not even an issue of machines taking our jobs. They’re going to just take over. They might threaten us, be an existential threat; they might actually become super-intelligent and decide they don’t want us around.”
So that’s just an incredible range of opinions on this issue, and I guess it points to the fact that it really is just extraordinarily unpredictable, in the sense that we really don’t know what’s going to happen with artificial intelligence.
Now, my view is that I do think that there is often a kind of a line you can draw. The people that tend to be more skeptical, maybe, are more geared toward being economists, and they do tend to put an enormous amount of weight on that historical record, and on the fact that, so far, this has not happened. And they give great weight to that.
The people that are more on my side of it, and see something different happening, tend to be people more on the technology side, that are involved deeply in machine learning and so forth, and really see how this technology is going.
I think that they maybe have a sense that something dramatic is really going to happen. That’s not a clear division, but it’s my sense that it kind of breaks down that way in many cases. But, for sure, I absolutely have a lot of respect for the people that disagree with me. This is a very meaningful, important debate, with a lot of different perspectives, and I think it’s going to be really, really fascinating to see how it plays out.
So, you touched on the existential threat of artificial intelligence. Let me just start with a couple of questions: Do you believe that an AGI, a general intelligence, is possible?
Yes, I don’t know of any reason that it’s not possible.
Fair enough.
That doesn’t mean I think it will happen, but I think it’s certainly possible.
And then, if it’s possible, everybody, you know… When you line up everybody’s prediction on when, they range from five years to five hundred years, which is also a telling thing. Where are you in that?
I’m not a true expert in this area, because I’m obviously not doing that research. But based on the people I’ve talked to that are in the field, I would put it further out than maybe most people. I think of it as being probably fifty years out… would be a guess, at least, and quite possibly more than that.
I am open to the possibility that I could be wrong about that, and it could be sooner. But it’s hard for me to imagine it sooner than maybe twenty-five to thirty years. But again, this is just extraordinarily unpredictable. Maybe there’s some project going on right now that we don’t know about that is going to prove something much sooner. But my sense is that it’s pretty far out—measured in terms of decades.
And do you believe computers can become conscious?
I believe it’s possible. What I would say is that the human brain is a biological machine. That’s what I believe. And I see absolutely no reason why the experience of the human mind, as it exists within the brain, can’t be replicated in some other medium, whether it’s silicon or quantum computing or whatever.
I don’t see why consciousness is something that is restricted, in principle, to a biological brain.
So I assume, then, it’s fair to say that you hope you’re wrong?
Well, I don’t know about that. I definitely am concerned about the more dystopian outcomes. I don’t dismiss those concerns, I think they’re real. I’m kind of agnostic on that; I don’t see that it’s definitely the case that we’re going to have a bad outcome if we do have conscious, super-intelligent machines. But it’s a risk.
But I also see it as something that’s inevitable. I don’t think we can stop it. So probably the best strategy is to begin thinking about that. And what I would say is that the issue that I’m focused on, which is what’s going to happen to the job market, is much more immediate. That’s something that is happening within the next ten to twenty years.
This other issue of super-intelligence and conscious machines is another important issue that's, I think, a bit further out, but it's also a real challenge that we should be thinking about. And for that reason, I think it's great that people like Elon Musk are making investments there, in think tanks and so forth, and beginning to focus on that.
I think it would be pretty hard to justify a big government public expenditure on thinking about this issue at this point in time, so it’s great that some people are focused on that.
And, so, I’m sure you get this question that I get all the time, which is, “I have young children. What should they study today to make sure that they have a relevant, useful job in the future?” You get that question?
Oh, I get that question. Yeah, it’s probably the most common question I get.
Yeah, me too. What do you say?
I'd bet that I say something very similar to what you say, because I think the answer is almost a cliché. First and foremost, avoid preparing yourself for a job that is on some level routine, repetitive, or predictable. Instead, you want to be doing something creative, where you're building something genuinely new.
Or, you want to be doing something that really involves deep interaction with other people, that has that human element to it. For example, in the business world that might be building very sophisticated relationships with clients. A great job that I think is going to be relatively safe for the foreseeable future is nursing, because it has that human element to it, where you’re building relationships with people, and then there’s also a tremendous amount of dexterity, mobility, where you’re running around, doing lots of things.
That's the other aspect of it: a lot of jobs that require that kind of dexterity, mobility, and flexibility are going to be hard to automate in the foreseeable future. Things like electricians and plumbers are going to be relatively safe, I think. But of course, those aren't necessarily jobs that people going to universities want to take.
So, prepare for something that incorporates those aspects: creativity and the human element, and maybe something beyond sitting in front of a computer, right? Because that in itself is going to be fairly susceptible to this.
So, let’s do a scenario here. Let’s say you’re right, and in fifteen years’ time—to take kind of your midpoint—we have enough job loss that is, say, commensurate with the Great Depression. So, that would be twenty-two percent. And it happens quickly… twenty-two percent of people are unemployed with few prospects. Tell me what you think happens in that world. Are there riots? What does the government do? Is there basic income? Like, what will happen?
Well, that's going to be our choice. But let's talk about the negative, the dystopian scenario, first. Yes, I think there would absolutely be social unrest. You're talking about people who, in their lifetimes, have experienced a middle-class lifestyle, and suddenly… I mean, everything just kind of disappears, right?
So, certainly on the social side, there's going to be enormous stress. And I would argue that we're seeing the leading edge of that already. Ask yourself, why is Donald Trump in the Oval Office? Well, it's because, in part at least, these blue-collar people, perhaps concentrated especially in the industrial Midwest, have this strong sense that they've been left behind.
And they may point to globalization or immigration as the reason for that. But in fact, technology has probably been the most important force in causing those people to no longer have access to the good, solid jobs that they once had. So, we see that already, [and] that could get orders of magnitude worse. So that’s on a social side and a political side.
Now, the other thing that’s happening is economic. We have a market economy, and that means that the whole economy relies on consumers that have got the purchasing power to go out and buy the stuff we’re producing, right?
Businesses need customers in order to thrive. This is true of virtually every business of any size: you need customers. In fact, if you really look at the industries that drive our economy, they're almost invariably mass-market industries, whether it's cars, or smartphones, or financial services. These are all industries that rely on tens or hundreds of millions of viable customers out there.
So, if people start losing their jobs and also their confidence… if they start worrying about the fact that they’re going to lose their jobs in the future, then they will start spending less, and that means we’re going to have an economic problem, right? We’re going to have potentially a deflationary scenario, where there’s simply not enough demand out there to drive the economy.
There’s also the potential for a financial crisis, obviously. Think back to 2008, what happened? How did that start? It started with the subprime crisis, where a lot of people did not have sufficient income to pay their mortgages.
So, obviously you can imagine a scenario in the future where lots of people can’t pay their mortgages, or their student loans, or their credit cards or whatever, and that has real implications for the financial sector. So no one should think that this is just about, “Well, it’s going to be some people that are less educated than I am, and they’re unlucky, too bad, but I’m going to be okay.”
No, I don't think so. This is something that drags everyone into a major, major problem, socially, politically, and economically.
The Depression, though, wasn't notable for social unrest like that. There weren't really riots.
There may not have been riots, but there was a genuine—in terms of the politics—there was a genuine fear out there that both democracy and capitalism were threatened. One of the most famous quotes comes from Joe Kennedy, who was the patriarch, the first Kennedy who made his money on Wall Street.
And he famously said, during that time, that he would gladly give up half of everything that he had if he could be certain that he’d get to keep the other half. Because there was genuine fear that there was going to be a revolution. Maybe a Communist revolution, something on that order, in the United States. So, it would be wrong to say that there was not this revolutionary fear out there.
Right. So, you said let’s start with the dystopian outcome…
Right, right… so, that’s the bad outcome. Now, if we do something about this, I think we can have a much more optimistic outcome. And the way to do that is going to be finding a way to decouple incomes from traditional forms of work. In other words, we’re going to have to find a way to make sure that people that aren’t working, and can’t find a regular job, have nonetheless got an income.
And there are two reasons to do that. The first reason is, obviously, that people have got to survive economically, and that addresses the social upheaval issue, to some extent at least. And the second issue is that people have got to have money to spend, if they’re going to be able to drive the economy. So, I personally think that some kind of a guaranteed minimum income, or a universal basic income, is probably going to be the way to go there.
Now there are lots of criticisms. People will say, "That's paying people to be alive." People will point out that if you just give money to people, that's not going to solve the problem, because people aren't going to have any dignity, any sense of fulfillment, or anything to occupy their time. They're just going to take drugs, or be in a virtual reality environment.
And those are all legitimate concerns. Partly because of those concerns, my view is that a basic income is not just a plug-and-play panacea, where you put a basic income in place and that's it. I think it's a starting point, a foundation that we can build on. And one thing that I've talked a lot about in my writing is the idea that we could build explicit incentives into a basic income.
Just to give an example, imagine that you're a struggling high school student, in some difficult environment, really at risk of dropping out. Now, suppose you know that no matter what, you're going to get the same basic income as everyone else. To me, that creates a very powerful perverse incentive to just drop out of school. That seems silly. We shouldn't do that.
So, why not instead structure things a bit differently? Let’s say if you graduate from high school, then you’ll get a somewhat higher basic income than someone that just drops out. And we could take that idea of incentives and maybe extend it to other areas. Maybe if you go and work in the community, do things to help others, you’ll get a little bit higher basic income.
Or if you do things that are positive for the environment. You could extend it in many ways to incorporate incentives. And as you do that, then you take at least a few steps towards also solving that problem of, where do we find meaning and fulfillment and dignity in this world where maybe there just is less need for traditional work?
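To make the incentive idea concrete, here is a minimal illustrative sketch in Python. The dollar amounts and bonus categories are hypothetical assumptions invented for the example, not figures from the conversation.

```python
# Purely illustrative sketch of an incentive-adjusted basic income.
# The dollar amounts and bonus categories below are hypothetical.

BASE_INCOME = 10_000  # hypothetical annual floor, in dollars

INCENTIVE_BONUSES = {
    "high_school_diploma": 2_000,  # finish school, earn a higher floor
    "community_service": 1_000,    # work in the community, helping others
    "environmental_work": 1_000,   # do things that are positive for the environment
}

def annual_basic_income(achievements):
    """Return the yearly payment: the universal floor plus any earned bonuses."""
    bonus = sum(INCENTIVE_BONUSES.get(a, 0) for a in achievements)
    return BASE_INCOME + bonus

# A graduate who also volunteers receives more than someone who drops out,
# so the perverse incentive to quit school goes away.
print(annual_basic_income({"high_school_diploma", "community_service"}))  # 13000
print(annual_basic_income(set()))                                         # 10000
```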
But that definitely is a problem that we need to solve, so I think we need to think creatively about that. How can we take a basic income and build it into something that is going to help us really solve some of these problems? And at the same time, as we do that, maybe we also take steps toward making a basic income more politically and socially acceptable and feasible. Because, obviously, right now it’s not politically feasible.
So, I think it’s really important to think in those terms: What can we really do to expand on this idea [of basic income]? But, if you figure that out, then you solve this problem, right? People then have an income, and then they have money to spend, and they can pay their debts and all the rest of it, and I think then it becomes much more positive.
If you think of the economy… not the real-world economy, but imagine it as a simulation. You're running a simulation of the whole market economy, and suddenly you tweak it so that jobs begin to disappear. What could you do? Well, you could make a small fix, replace jobs with some other mechanism in the simulation, and then you could just keep the whole thing going.
You could continue to have thriving capitalism, a thriving market economy. I think when you think of it in those terms, as kind of a programmer tweaking a simulation, it’s not so hard to make it work. Obviously in the real world, given politics and everything, it’s going to be a lot harder, but my view is that it is a solvable problem.
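As a rough illustration of that simulation analogy, here is a toy circular-flow model. The numbers and the specific recycling mechanism are purely illustrative assumptions, not anything specified in the conversation.

```python
# Toy version of the "tweak the simulation" idea: the wage share of revenue shrinks
# every year, and an optional transfer recycles part of profits back to consumers.
# All numbers, and the mechanism itself, are purely illustrative assumptions.

def simulate(years, wage_share_decline, transfer_share):
    """Return yearly consumer spending in a minimal circular-flow model."""
    wage_share = 0.8    # fraction of business revenue paid out as wages
    spending = 100.0    # consumer spending becomes business revenue
    history = []
    for _ in range(years):
        revenue = spending                                      # businesses earn what consumers spend
        wage_share = max(0.0, wage_share - wage_share_decline)  # automation erodes jobs
        wages = wage_share * revenue
        profits = revenue - wages
        transfers = transfer_share * profits                    # the "other mechanism" replacing jobs
        spending = wages + transfers                            # consumers can only spend what they receive
        history.append(round(spending, 1))
    return history

print(simulate(10, 0.05, 0.0))  # no transfers: demand decays year after year
print(simulate(10, 0.05, 1.0))  # full recycling: demand holds steady
```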
Mark Cuban said the first trillionaires, or the first trillion-dollar companies, will be AI companies, because AI has the capability of creating that kind of almost unmeasurable wealth. Would you agree with that?
Yeah, as long as we solve this problem. Again, it doesn't matter whether you're doing AI or any other business… that money is coming from somewhere, okay? When you talk about how a company is valued, whether it's a million-dollar company or a trillion-dollar company, the value essentially comes from the cash flows coming in in the future. That's how you value a company.
Where are those cash flows coming from? Ultimately, they're coming from consumers. They're coming from people spending money, and people have to have money to spend. So, think of the economy as a kind of virtuous cycle, where money cycles from consumers to businesses and then back to consumers, and it's a self-sustaining, growing cycle over time.
The problem is that if the jobs go away, then that cycle is threatened, because that’s the mechanism that’s getting income back from producers to consumers so that the whole thing continues to be sustainable. So, we solve that problem and yeah, of course you’re going to have trillion-dollar companies.
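That valuation point is the standard discounted-cash-flow idea. Here is a minimal sketch; the cash flows and the discount rate are invented for illustration.

```python
# Minimal discounted-cash-flow sketch: a company's value today is the sum of its
# expected future cash flows, discounted back to the present.
# The cash flows and discount rate below are invented for illustration.

def present_value(cash_flows, discount_rate):
    """Sum of cash_flows[t] / (1 + r)^(t + 1): what money arriving later is worth now."""
    return sum(cf / (1 + discount_rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Ten years of $1B in annual cash flow, discounted at a hypothetical 8% per year:
value = present_value([1e9] * 10, 0.08)
print(f"${value / 1e9:.2f}B")  # roughly $6.71B, well below the undiscounted $10B
```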
And so, that's the scenario if everything you say comes to pass. Take the opposite for just a minute. Say fifteen years goes by, unemployment is five-and-a-quarter percent, there's been some churn in the jobs, and there's no phase shift, or paradigm shift, or anything like that. What would that mean?
Like, what does that mean long term for humanity? Do we just kind of go on in the way we are ad infinitum, or are there other things, other factors that could really upset the apple cart?
Well, again my argument would be that if that happens, and fifteen years from now things basically look the way they do now, then it means that people like me got the timing wrong. This isn’t really going to happen within fifteen years, maybe it’s going to be fifty years or a hundred years. But I still think it’s kind of inevitable.
The other thing, though, is to be careful when you say that fifteen years from now the unemployment rate is going to be five percent. One thing to really watch is how you're measuring everything, because, of course, the unemployment rate right now doesn't capture a lot of people who are dropping out of the workforce.
In fact, it doesn't capture anyone who drops out of the workforce, and we do have a declining labor force participation rate. So it's possible for a lot of people to be left behind and disenfranchised, and still not be captured in that headline unemployment rate.
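To see why, consider a quick back-of-the-envelope example; the figures are invented for illustration.

```python
# Why the headline unemployment rate can miss people: it only counts those who are
# still actively looking for work. The figures below are invented for illustration.

def unemployment_rate(employed, unemployed_looking):
    """Unemployed-and-looking divided by the labor force (employed + looking)."""
    return unemployed_looking / (employed + unemployed_looking)

# 150 million employed, 8 million actively looking for work:
print(f"{unemployment_rate(150e6, 8e6):.1%}")  # 5.1%

# Suppose 3 million of those job-seekers get discouraged and stop looking entirely.
# Employment hasn't improved at all, yet the headline rate falls.
print(f"{unemployment_rate(150e6, 5e6):.1%}")  # 3.2%
```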
But a declining labor force participation rate isn't necessarily people who can't find work, right? Maybe enough people just make a lot of money, and you've got the Baby Boomers retiring. Is it your impression that the numbers we're seeing in labor force participation are indicative of people getting discouraged and dropping out of the job market?
Yeah, to some extent. There are a number of things going on there. Obviously, as you say, part of it is the demographic shift, and there are two things happening there. One is that people are retiring; certainly, that’s part of it. The other thing is that people are staying in school longer, so younger people are less in the workforce than they might’ve been decades ago, because they’ve got to stay in school longer in order to have access to a job.
So, that’s certainly having an impact. But that doesn’t explain it totally, by any means. In fact, if you look at the labor force participation rate for what we call prime-age workers—and that would be people that are, maybe, between thirty and fifty… in other words, too old to be in school generally, and too young to retire—that’s also declining, especially for men. So, yes, there is definitely an impact from people leaving the workforce for whatever reason, very often [the reason is] being discouraged.
We've also seen a spike in applications for the Social Security disability program, which is what you're supposed to get if you become disabled, and there really is no evidence that people are getting injured on the job at some extraordinary new rate. So, I think many people suspect it's being used as a kind of last-resort basic income program.
They’re, in many cases, saying maybe they have a back injury that’s hard to verify and they’re getting onto that because they really just don’t have any other alternative. So, there definitely is something going on there, with that falling labor force participation rate.
And final question: Do you have hope, and if so, what gives you the most hope, that whatever trials await us in the future, we're going to get through them and go on to bigger and better things as a species?
Well, certainly the fact that we've always gotten through things in the past is some reason to be confident. We've faced enormous challenges of all kinds, including global wars, plagues, and financial crises, and we've made it through. I think we can make it through this time. It doesn't mean it will be easy. It rarely is easy.
There aren't many cases in history that we can point to where we've smoothly said, "Hey, look, there's this problem coming at us. Let's figure out what to do and adapt to it." That's rarely the way it works. Generally, you get into a crisis, and eventually you end up solving the problem. And I suspect that's the way it will go this time. But yes, specifically, there are positive things that I see.
There are lots of important experiments with basic income going on around the world. Even here in Silicon Valley, Y Combinator is running an experiment with basic income that you may have heard about. So, I think that's tremendously positive. That's what we should be doing right now.
We should be gathering information about these solutions, and how exactly they’re going to work, so that we have the data that we’re going to need to maybe craft a much broader-based program at some point in the future. That’s all positive, you know? People are beginning to think seriously about these issues, and so I think that there is reason to be optimistic.
Okay, and real last question: If people want more of your thinking, do you have a website you'd suggest they go to?
The best place to go is my Twitter feed, which is @MFordFuture, and I also have a blog and a website which is the same, MFordFuture.com.
And are you working on anything new?
I am not working right now on a new book, but I go around doing a lot of speaking engagements on this. I’m on the board of directors of a startup company, which is actually doing something quite different. It’s actually going to do atmospheric water generation. In other words, generating water directly from air.
That’s a company called Genesis Systems, and I’m really excited about that, because it’s a chance for me to get involved in something really tangible. I think you’ve heard the quote from Peter Thiel, that we were promised flying cars and we got 140 characters. And I actually believe strongly in that.
I think there are too many people in Silicon Valley working on social media, and getting people to click on ads. So I’m really excited to get involved in a company that’s doing something really tangible, that’s going to maybe be transformative… If we can figure out how to directly generate water in very arid regions of the earth—in the Middle East, in North Africa, and so forth—that could be transformative.
Wow! I think by one estimate, if everybody had access to clean water, half of the hospital beds in the world would be emptied.
Yeah, it’s just an important problem, just on a human level and also in terms of security, in terms of the geopolitics of these regions. So I’m really excited to be involved with it.
Alright, well thank you so much for your time, and you have a good day.
Okay. Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]