Voices in AI – Episode 14: A Conversation with Martin Ford

[voices_in_ai_byline]
In this episode Byron and Martin talk about the future of jobs, AGI, consciousness and more.
[podcast_player name="Episode 14: A Conversation with Martin Ford" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-30-(00-40-18)-martin-ford.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-7.jpg"]
[voices_in_ai_link_back]
Byron Reese: Welcome to Voices in AI, I’m Byron Reese. Today we’re delighted to have as our guest Martin Ford. Martin Ford is a well-known futurist, and he has two incredibly notable books out. The most recent one is called Rise of the Robots: Technology and the Threat of a Jobless Future, and the earlier one is The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
I have read them both cover-to-cover, and Martin is second-to-none in coming up with original ideas and envisioning a kind of future. What is that future that you envision, Martin?
Martin Ford: Well, I do believe that artificial intelligence and robotics is going to have a dramatic impact on the job market. I’m one of those that believes that this time is different, relative to what we’ve seen in the past, and that, therefore, we’re probably going to have to find a way to adapt to that.
I do see a future where there certainly is potential for significant unemployment, and even if that doesn’t develop, at a minimum we’re probably going to have underemployment and a continuation of stagnant wages, maybe even declining wages, and probably soaring inequality. And all of those things are just going to put an enormous amount of stress both on society and on the economy, and I think that’s going to be one of the biggest issues we need to think about over the next few decades.
So, taking a big step back, you said, quote: “This time is different.” And that’s obviously a reference to the oft-cited argument that we’ve heard this since the beginning of the Industrial Revolution, that machines were going to advance too quickly, and people weren’t going to be able to find new skills.
And I think everybody agrees, up to now, it’s been fantastically false, but your argument that this time is different is based on what? What exactly is different?
Well, the key is that the machines, in a limited sense, are beginning to think. I mean, they’re taking on cognitive capabilities. So what that means is that technology is finally encroaching on that fundamental capability that so far has allowed us to really stay ahead of the march of progress, and remain relevant.
I mean, you can ask the question, “Why are there still so many jobs? Why don’t we have unemployment already?” And surely the answer to that is our ability to learn and to adapt. To find new things to do. And yet, we’re now at a point where machines… especially in the form of machine learning, are beginning to move into that space.
And it’s going to, I think, eventually get to what you might think of as a kind of a tipping point, or an inflection point, where technology begins to outcompete a lot of people, in terms of their basic capability to really contribute to the economy.
No one is saying that all the jobs are going to disappear, and that there’s literally going to be no one working. But, I think it’s reasonable to be concerned that a significant fraction of our workforce—in particular those people that are perhaps best-equipped to do things that are fundamentally routine and repetitive and predictable—those people are probably going to have a harder and harder time adapting to this, and finding a foothold in the economy.
But, specifically, why do you think that? Give me a case-in-point. Because we’ve seen enormous, disruptive technologies on par with AI, right? Like, the harnessing of artificial power has to be up there with artificial intelligence. We’ve seen entire categories of jobs vanish. We’ve seen technology replace any number of people already.
And yet, unemployment, with the exception of the Depression, never gets outside of four to nine percent in this country. What holds it in that stasis, and why? I still kind of want more meat on that, why this time is different. Because everything kind of hinges on that.
Well, I think that historically, we’ve seen primarily technology displacing muscle power. That’s been the case up until recently. Now, you talk about harnessing power… Obviously that did displace a lot of people doing manual labor, but people were able to move into more cognitively-oriented tasks.
Even if it was a manual job, it was one that required more brain power. But now, machines are encroaching on that as well. Clearly, we see many examples of that. There are algorithms that can do a lot of the things that journalists do, in terms of generating news stories. There are algorithms beginning to take on tasks done by lawyers, and radiologists, and so forth.
The most dramatic example perhaps I’ve seen is what DeepMind did with its AlphaGo system, where it was able to build a system that taught itself to play the ancient game of Go, and eventually became superhuman at that, and was able to beat the best players in the world.
And to me, I would’ve looked at that and I would’ve said, “If there’s any task out there that is uniquely human, and ought to be protected from automation, playing the game of Go—given the sophistication of the game—really, should probably be on that list.” But it’s fallen to the machines already.
So, I do think that when you really look at this focus on cognitive capability, on the fact that the machines are beginning to move into that space which so far has protected people… that, as we look forward—again, I’m not talking about next year, or three years from now even, but I’m thinking in terms of decades, ten years from now, twenty years from now—what’s it going to look like as these technologies continue to accelerate?
It does seem to me that there’s very likely to be a disruption.
So, if you’d been alive in the Industrial Revolution, and somebody said, “Oh, the farm jobs, they’re vanishing because of technology. There are going to be fewer people employed in the farm industry in the future.” And then, wouldn’t somebody have asked the question, “Well, what are all those people going to do? Like, all they really know how to do is plant seeds.”
All the things they ended up doing were things that by-and-large didn’t exist at the time. So isn’t it the case that whatever the machines can do, humans figure out ways to use those skills to make jobs that are higher in productivity than the ones that they’re replacing?
Yeah, I think what you’re saying is absolutely correct. The question though, is… I’m not questioning that some of those jobs are going to exist. The question is, are there going to be enough of those jobs, and will those jobs be accessible to average people in our population?
Now, the example you are giving with agriculture is the classic one that everyone always cites, and here’s what I would say: Yes, you’re right. Those jobs did disappear, and maybe people didn’t anticipate what the new things were going to be. But it turned out that there was the whole rest of the economy out there to absorb those workers.
Agricultural machinery, tractors and combines and all the rest of it, was a specific mechanical technology that had a dramatic impact on one sector of the economy. And then those workers eventually moved to other sectors, and as they moved from sector to sector… first they moved from agriculture to manufacturing, and that was a transition. It wasn’t instant, it took some time.
But basically, what they were doing was moving from routine work in the field to, fundamentally, routine work in factories. And that may have taken some training and some adaptation, but it was something that basically involved moving from one routine to another routine thing. And then, of course, there was another transition that came later, as manufacturing also automated or offshored, and now everyone works in the service sector.
But still, most people, at least a very large percentage of people, are still doing things that are fundamentally routine and repetitive. A hundred years ago, you might’ve been doing routine work in the field, in the 1950s maybe you were doing routine work in a factory, now you’re scanning barcodes at Wal-Mart, or you’re stocking the shelves at Wal-Mart, or you’re doing some other relatively routine thing.
The point I’m making is that in the future, technology is going to basically consume all of that routine, repetitive, predictable work… And that there still will be things left, yes, but there will be more creative work, or it’ll be work that involves, perhaps, deep interaction with other people and so forth, that really are going to require a different skill set.
So it’s not the same kind of transition that we’ve seen in the past. It’s really more of, I think, a dramatic transition, where people, if they want to remain relevant, are going to have to really have an entirely different set of capabilities.
So, what I’m saying is that a significant fraction of our workforce is going to have a really hard time adapting to that. Even if the jobs are there, if there are sufficient jobs out there, they may not be a good match for a lot of people who are doing routine things right now.
Have you tried to put any sort of, even in your own head, any kind of model around this, like how much unemployment, or at what rate you think the economy will shed jobs, or what sort of timing, or anything like that?
I make guesses at it. Of course, there are some relatively high-profile studies that have been done, and I personally believe that you should take that with a grain of salt. The most famous one was the one done back in 2013, by a couple of guys at Oxford.
Which is arguably the most misquoted study on the subject.
Exactly, because what they said was that roughly forty-seven percent—which is a remarkably precise number, obviously—roughly half the jobs in the United States are going to be susceptible, could be automated, within the next couple of decades.
I thought what it says is that forty-seven percent of the things that people do in their jobs is able to be automated.
Yeah, this particular study, they did look at actual jobs. But the key point is that they said roughly half of those jobs could be automated, they didn’t say they will be automated. And when the press picked that up, it in some cases became “half the jobs are definitely going to go away.” There was another later study, which you may be referring to, [that] was done by McKinsey, and that one did look at tasks, not at jobs.
And they came up with approximately the same number. They came up with the idea that about half of the tasks within jobs would be susceptible to automation, or in some cases may already be able to be automated in theory… but that was looking at the task level. Now again, the press kind of looked at that and they took a very optimistic take on it. They said, “Well, your whole job then can’t be automated, only half of your job can be automated. So your employer’s going to leave you there to do higher-level stuff.” And in some cases, that may happen.
But the other alternative, of course, is that if you’ve got two people doing two jobs, and half of each of those can be automated, then we could well see a consolidation there, and maybe that just becomes one job, right? So, different studies have looked at it in different ways. Again, I would take all of these studies with some skepticism, because I don’t think anyone can really make predictions this precise.
But the main takeaway from it, I think, is that the amount of work that is going to be susceptible to automation could be very significant. And I would say, to my mind, it doesn’t make much difference whether it’s twenty percent or fifty percent. Those are both staggering numbers. They would both have a dramatic impact on society and on the economy. So regardless of what the exact figure is, it’s something that we need to think about.
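To make the consolidation arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The job counts and task shares are illustrative assumptions, not figures from the Oxford or McKinsey studies discussed above.

```python
# Back-of-the-envelope: how task-level automation can consolidate jobs.
# The shares below are illustrative assumptions, not study estimates.

def remaining_jobs(current_jobs: int, automatable_task_share: float) -> int:
    """Jobs left if the automated task-hours are consolidated into fewer roles."""
    remaining_work = current_jobs * (1.0 - automatable_task_share)
    return round(remaining_work)

# Two clerks, each with roughly half their tasks automatable:
print(remaining_jobs(current_jobs=2, automatable_task_share=0.5))   # -> 1

# A 100-person back office at 20% vs 50% automatable task share:
print(remaining_jobs(100, 0.2))   # -> 80 jobs left
print(remaining_jobs(100, 0.5))   # -> 50 jobs left
```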
In terms of timing, I tend to think in terms of between ten and twenty years as being the timeframe where this becomes kind of unambiguous, where we’ve clearly reached the point where we’re not going to have this debate anymore—where everyone agrees that this is an issue.
I tend to think ten to twenty years, but I certainly know people that are involved, for example, in machine learning, that are much more aggressive than that; and they say it could be five years. So that is something of a guess, but I do think that there are good reasons to be concerned that the disruption is coming.
The other thing I would say is that, even if I turn out to be wrong about that, and it doesn’t happen within ten to twenty years, it probably is going to happen within fifty years. It seems inevitable to me at some point.
So, you talk about not having the debate anymore. And I think one of the most intriguing aspects of quote, ‘the debate’, is that when you talk to self-identified futurists, or when you talk to economists on the effect technology is going to have on jobs, they’re almost always remarkably split.
So you’ve got this camp of fifty percent-ish that says, “Oh, come on, this is ridiculous. There is no finite number of jobs. Anytime a person can pick up something and add value to it, they’ve just created a job. We want to get people out of tasks that machines can do, because they’re capable of doing more things,” and so forth.
So you get that whole camp, and then you have the side which, it sounds like you’re more on, which is, “No, there’s a point at which the machines are able to improve faster than people are able to train,” and that that’s kind of an escape velocity, and that has those repercussions. So all of that is a buildup to the question… like, these are two very different views of the future that people who think a lot about this have.
What assumptions do the two camps have, underneath their beliefs, that are making them so different, in your mind?
Right, I do think you’re right. It’s just an extraordinary range of opinion. I would say it’s even broader than that. You’re talking about the issue of whether or not jobs will be automated, but, on the same spectrum, I’m sure you can find famous economists, maybe economists with Nobel Prizes that would tell you, “This is all a kind of a silly issue. It’s repetition of the Luddite fears that we’ve had basically forever and nothing is different this time.”
And then at the other end of that spectrum you’ve got people not just talking about jobs; you’ve got Elon Musk and you’ve got Stephen Hawking saying, “It’s not even an issue of machines taking our jobs. They’re going to just take over. They might threaten us, be an existential threat; they might actually become super-intelligent and decide they don’t want us around.”
So that’s just an incredible range of opinions on this issue, and I guess it points to the fact that it really is just extraordinarily unpredictable, in the sense that we really don’t know what’s going to happen with artificial intelligence.
Now, my view is that I do think that there is often a kind of a line you can draw. The people that tend to be more skeptical, maybe, are more geared toward being economists, and they do tend to put an enormous amount of weight on that historical record, and on the fact that, so far, this has not happened. And they give great weight to that.
The people that are more on my side of it, and see something different happening, tend to be people more on the technology side, that are involved deeply in machine learning and so forth, and really see how this technology is going.
I think that they maybe have a sense that something dramatic is really going to happen. That’s not a clear division, but it’s my sense that it kind of breaks down that way in many cases. But, for sure, I absolutely have a lot of respect for the people that disagree with me. This is a very meaningful, important debate, with a lot of different perspectives, and I think it’s going to be really, really fascinating to see how it plays out.
So, you touched on the existential threat of artificial intelligence. Let me just start with a couple of questions: Do you believe that an AGI, a general intelligence, is possible?
Yes, I don’t know of any reason that it’s not possible.
Fair enough.
That doesn’t mean I think it will happen, but I think it’s certainly possible.
And then, if it’s possible, everybody, you know… When you line up everybody’s prediction on when, they range from five years to five hundred years, which is also a telling thing. Where are you in that?
I’m not a true expert in this area, because I’m obviously not doing that research. But based on the people I’ve talked to that are in the field, I would put it further out than maybe most people. I think of it as being probably fifty years out… would be a guess, at least, and quite possibly more than that.
I am open to the possibility that I could be wrong about that, and it could be sooner. But it’s hard for me to imagine it sooner than maybe twenty-five to thirty years. But again, this is just extraordinarily unpredictable. Maybe there’s some project going on right now that we don’t know about that is going to prove something much sooner. But my sense is that it’s pretty far out—measured in terms of decades.
And do you believe computers can become conscious?
I believe it’s possible. What I would say is that the human brain is a biological machine. That’s what I believe. And I see absolutely no reason why the experience of the human mind, as it exists within the brain, can’t be replicated in some other medium, whether it’s silicon or quantum computing or whatever.
I don’t see why consciousness is something that is restricted, in principle, to a biological brain.
So I assume, then, it’s fair to say that you hope you’re wrong?
Well, I don’t know about that. I definitely am concerned about the more dystopian outcomes. I don’t dismiss those concerns, I think they’re real. I’m kind of agnostic on that; I don’t see that it’s definitely the case that we’re going to have a bad outcome if we do have conscious, super-intelligent machines. But it’s a risk.
But I also see it as something that’s inevitable. I don’t think we can stop it. So probably the best strategy is to begin thinking about that. And what I would say is that the issue that I’m focused on, which is what’s going to happen to the job market, is much more immediate. That’s something that is happening within the next ten to twenty years.
This other issue of super-intelligence and conscious machines is another important issue that’s, I think, a bit further out, but it’s also a real challenge that we should be thinking about. And for that reason, I think that it’s great that people like Elon Musk are making investments there, in think tanks and so forth, and they’re beginning to focus on that.
I think it would be pretty hard to justify a big government public expenditure on thinking about this issue at this point in time, so it’s great that some people are focused on that.
And, so, I’m sure you get this question that I get all the time, which is, “I have young children. What should they study today to make sure that they have a relevant, useful job in the future?” You get that question?
Oh, I get that question. Yeah, it’s probably the most common question I get.
Yeah, me too. What do you say?
I would bet that I say something very similar to what you say, because I think the answer is almost a cliché. It’s that, first and foremost, avoid studying to prepare yourself for a job that is on some level routine, repetitive, or predictable. Instead, you want to be, for example, doing something creative, where you’re building something genuinely new.
Or, you want to be doing something that really involves deep interaction with other people, that has that human element to it. For example, in the business world that might be building very sophisticated relationships with clients. A great job that I think is going to be relatively safe for the foreseeable future is nursing, because it has that human element to it, where you’re building relationships with people, and then there’s also a tremendous amount of dexterity, mobility, where you’re running around, doing lots of things.
That’s the other aspect of it, is that a lot of jobs that require that kind of dexterity, mobility, flexibility, are going to be hard to automate in the foreseeable future—things like electricians and plumbers and so forth are going to be relatively safe, I think. But of course, those aren’t necessarily jobs that people going to universities want to take.
So, prepare for something that incorporates those aspects. Creativity and human element, and maybe something beyond sitting in front of a computer, right? Because that in itself is going to be fairly susceptible to this.
So, let’s do a scenario here. Let’s say you’re right, and in fifteen years’ time—to take kind of your midpoint—we have enough job loss that is, say, commensurate with the Great Depression. So, that would be twenty-two percent. And it happens quickly… twenty-two percent of people are unemployed with few prospects. Tell me what you think happens in that world. Are there riots? What does the government do? Is there basic income? Like, what will happen?
Well, that’s going to be our choice. But the negative, let’s talk about the dystopian scenario first. Yes, I think there would absolutely be social unrest. You’re talking about people that in their lifetimes have experienced the middle-class lifestyle that are suddenly… I mean, everything just kind of disappears, right?
So, that’s certainly on the social side, there’s going to be enormous stress. And I would argue that we’re seeing the leading edge of that already. You ask yourself, why is Donald Trump in the Oval Office? Well, it’s because in part, at least, these blue-collar people, perhaps focused especially in the industrial Midwest, have this strong sense that they’ve been left behind.
And they may point to globalization or immigration as the reason for that. But in fact, technology has probably been the most important force in causing those people to no longer have access to the good, solid jobs that they once had. So, we see that already, [and] that could get orders of magnitude worse. So that’s on a social side and a political side.
Now, the other thing that’s happening is economic. We have a market economy, and that means that the whole economy relies on consumers that have got the purchasing power to go out and buy the stuff we’re producing, right?
Businesses need customers in order to thrive. This is true of virtually every business of any size, you need customers. In fact, if you really look at the industries that drive our economy, they’re almost invariably mass-market industries, whether it’s cars, or smartphones, or financial services. These are all industries that rely on tens, hundreds of millions of viable customers out there.
So, if people start losing their jobs and also their confidence… if they start worrying about the fact that they’re going to lose their jobs in the future, then they will start spending less, and that means we’re going to have an economic problem, right? We’re going to have potentially a deflationary scenario, where there’s simply not enough demand out there to drive the economy.
There’s also the potential for a financial crisis, obviously. Think back to 2008, what happened? How did that start? It started with the subprime crisis, where a lot of people did not have sufficient income to pay their mortgages.
So, obviously you can imagine a scenario in the future where lots of people can’t pay their mortgages, or their student loans, or their credit cards or whatever, and that has real implications for the financial sector. So no one should think that this is just about, “Well, it’s going to be some people that are less educated than I am, and they’re unlucky, too bad, but I’m going to be okay.”
No, I don’t think so. This is something that drags everyone into a major, major problem, both socially, politically, and economically.
The Depression, though, wasn’t notable for social unrest like that. There weren’t really riots.
There may not have been riots, but there was a genuine—in terms of the politics—there was a genuine fear out there that both democracy and capitalism were threatened. One of the most famous quotes comes from Joe Kennedy, who was the patriarch, the first Kennedy who made his money on Wall Street.
And he famously said, during that time, that he would gladly give up half of everything that he had if he could be certain that he’d get to keep the other half. Because there was genuine fear that there was going to be a revolution. Maybe a Communist revolution, something on that order, in the United States. So, it would be wrong to say that there was not this revolutionary fear out there.
Right. So, you said let’s start with the dystopian outcome…
Right, right… so, that’s the bad outcome. Now, if we do something about this, I think we can have a much more optimistic outcome. And the way to do that is going to be finding a way to decouple incomes from traditional forms of work. In other words, we’re going to have to find a way to make sure that people that aren’t working, and can’t find a regular job, have nonetheless got an income.
And there are two reasons to do that. The first reason is, obviously, that people have got to survive economically, and that addresses the social upheaval issue, to some extent at least. And the second issue is that people have got to have money to spend, if they’re going to be able to drive the economy. So, I personally think that some kind of a guaranteed minimum income, or a universal basic income, is probably going to be the way to go there.
Now there are lots of criticisms that people will say, “That’s paying people to be alive.” People will point out that if you just give money to people, that’s not going to solve the problem. Because people aren’t going to have any dignity, they’re not going to have any sense of fulfillment, or anything to occupy their time. They’re just going to take drugs, or be in a virtual reality environment.
And those are all legitimate concerns. Partly because of those concerns, my view is that a basic income is not just this plug-and-play panacea that—okay, a basic income; that’s it. I think it’s a starting point. I think it’s the foundation that we can build on. And one thing that I’ve talked a lot about in my writing is the idea that we could build explicit incentives into a basic income.
Just to give an example, imagine that you are a struggling high school student. So, you’re in some difficult environment in high school, you’re really at risk of dropping out of school. Now, suppose you know that no matter what, you’re going to get the same basic income as everyone else. So, to me, that creates a very powerful perverse incentive for you to just drop out of school. To me that seems silly. We shouldn’t do that.
So, why not instead structure things a bit differently? Let’s say if you graduate from high school, then you’ll get a somewhat higher basic income than someone that just drops out. And we could take that idea of incentives and maybe extend it to other areas. Maybe if you go and work in the community, do things to help others, you’ll get a little bit higher basic income.
Or if you do things that are positive for the environment. You could extend it in many ways to incorporate incentives. And as you do that, then you take at least a few steps towards also solving that problem of, where do we find meaning and fulfillment and dignity in this world where maybe there just is less need for traditional work?
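As a rough illustration of how such incentives could be layered on top of a flat payment, here is a minimal sketch. The dollar amounts and bonus categories are hypothetical illustrations, not proposals from Ford’s writing.

```python
# A minimal sketch of an incentive-structured basic income, in the spirit of
# the idea above. All amounts and bonus categories are hypothetical.

BASE_MONTHLY = 1000          # floor that everyone receives (assumed figure)
BONUSES = {                  # add-ons tied to socially valued activities
    "high_school_diploma": 150,
    "community_service":   100,
    "environmental_work":  100,
}

def monthly_income(qualifications: set[str]) -> int:
    """Basic income plus any earned incentive bonuses."""
    return BASE_MONTHLY + sum(BONUSES[q] for q in qualifications & BONUSES.keys())

print(monthly_income(set()))                                          # 1000
print(monthly_income({"high_school_diploma"}))                        # 1150
print(monthly_income({"high_school_diploma", "community_service"}))   # 1250
```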
But that definitely is a problem that we need to solve, so I think we need to think creatively about that. How can we take a basic income and build it into something that is going to help us really solve some of these problems? And at the same time, as we do that, maybe we also take steps toward making a basic income more politically and socially acceptable and feasible. Because, obviously, right now it’s not politically feasible.
So, I think it’s really important to think in those terms: What can we really do to expand on this idea [of basic income]? But, if you figure that out, then you solve this problem, right? People then have an income, and then they have money to spend, and they can pay their debts and all the rest of it, and I think then it becomes much more positive.
If you think of the economy… think of not the real-world economy, but imagine it’s a simulation. And you’re doing this simulation of the whole market economy, and suddenly you tweak the simulation so that jobs begin to disappear. What could you do? Well, you could make a small fix to it so that you replace jobs with some other mechanism in this simulation, and then you could just keep the whole thing going.
You could continue to have thriving capitalism, a thriving market economy. I think when you think of it in those terms, as kind of a programmer tweaking a simulation, it’s not so hard to make it work. Obviously in the real world, given politics and everything, it’s going to be a lot harder, but my view is that it is a solvable problem.
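Taking the “programmer tweaking a simulation” framing literally, here is a toy sketch of that circular flow. Every parameter is invented for illustration; this is not an economic model, just the shape of the argument.

```python
# Toy circular-flow simulation of the "tweak the simulation" framing above.
# All parameters are made up for illustration.

def simulate(years: int, transfer: float) -> list[float]:
    """Aggregate consumer demand over time as the wage share of output shrinks."""
    output = 100.0
    path = []
    for year in range(years):
        wage_share = 0.6 * (0.95 ** year)        # automation erodes wages 5%/year
        household_income = output * wage_share + transfer
        demand = 0.9 * household_income          # households spend 90% of income
        output = demand                          # next year's output follows demand
        path.append(round(demand, 1))
    return path

print(simulate(years=10, transfer=0))    # demand spirals downward
print(simulate(years=10, transfer=40))   # a transfer keeps the cycle from collapsing
```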
Mark Cuban said the first trillionaires, or the first trillion-dollar companies, will be AI companies, that AI has the capability of creating that kind of immeasurable wealth. Would you agree with that?
Yeah, as long as we solve this problem. Again, it doesn’t matter whether you’re doing AI or any other business… that money is coming from somewhere, okay? Or when you talk about the way a company is valued, whether it’s a million-dollar company or a trillion-dollar company, the value essentially comes from cash flows coming in in the future. That’s how you value a company.
Where are those cash flows coming [from]? Ultimately, they’re coming from consumers. They’re coming from people spending money, and people have to have money to spend. So, think of the economy as being kind of a virtuous cycle, where you cycle money from consumers to businesses and then back to consumers, and that it’s kind of a self-fulfilling, expanding, growing cycle over time.
The problem is that if the jobs go away, then that cycle is threatened, because that’s the mechanism that’s getting income back from producers to consumers so that the whole thing continues to be sustainable. So, we solve that problem and yeah, of course you’re going to have trillion-dollar companies.
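The valuation point here is just the standard present-value-of-cash-flows calculation; a minimal sketch follows, with the cash flows and discount rate as assumed figures rather than anything from the conversation.

```python
# Company value as the present value of future cash flows, which ultimately
# trace back to consumers with money to spend. Figures are illustrative.

def present_value(cash_flows: list[float], discount_rate: float) -> float:
    """Discounted value of a stream of future annual cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Ten years of $50B annual cash flow, discounted at 8%:
print(round(present_value([50e9] * 10, 0.08) / 1e9, 1))   # ~335.5 (billions)

# If weaker consumer demand halves those cash flows, the valuation halves too:
print(round(present_value([25e9] * 10, 0.08) / 1e9, 1))   # ~167.8
```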
And so, that’s the scenario if everything you say comes to pass. Take the opposite for just a minute. Say that fifteen years goes by, unemployment is five-and-a-quarter percent, and there’s been some churn of the jobs, and there’s no kind of phase shift, or paradigm shift, or anything like that. What would that mean?
Like, what does that mean long term for humanity? Do we just kind of go on in the way we are ad infinitum, or are there other things, other factors that could really upset the apple cart?
Well, again my argument would be that if that happens, and fifteen years from now things basically look the way they do now, then it means that people like me got the timing wrong. This isn’t really going to happen within fifteen years, maybe it’s going to be fifty years or a hundred years. But I still think it’s kind of inevitable.
The other thing, though, is be careful when you say fifteen years from now the unemployment rate is going to be five percent. One thing to be really careful about is that you’re measuring everything carefully, because, of course, the unemployment rate right now doesn’t catch a lot of people that are dropping out of the workforce.
In fact, it doesn’t capture anyone that drops out of the workforce, and we do have a declining labor force participation rate. So it’s possible for a lot of people to be left behind, and be disenfranchised, and still not be captured in that headline unemployment rate.
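The measurement caveat follows directly from the definitions: the headline rate divides the unemployed by the labor force, and anyone who stops looking for work leaves both numbers. A small illustration with invented figures (in millions):

```python
# Why the headline unemployment rate can look fine while participation falls.
# Population figures (in millions) are invented for illustration.

def rates(employed: float, unemployed: float, not_in_labor_force: float):
    labor_force = employed + unemployed
    working_age = labor_force + not_in_labor_force
    return (round(100 * unemployed / labor_force, 1),     # unemployment rate
            round(100 * labor_force / working_age, 1))    # participation rate

print(rates(employed=150, unemployed=8, not_in_labor_force=92))
# -> (5.1, 63.2)

# Five million discouraged workers stop looking and leave the labor force:
print(rates(employed=150, unemployed=3, not_in_labor_force=97))
# -> (2.0, 61.2)  the headline rate *improves* even though no one found a job
```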
But a declining labor participation rate isn’t necessarily people who can’t find work, right? Like, if enough people just make a lot of money, and you’ve got the Baby Boomers retiring. Is it your impression that the numbers we’re seeing in labor participation are indicative of people getting discouraged and dropping out of the job market?
Yeah, to some extent. There are a number of things going on there. Obviously, as you say, part of it is the demographic shift, and there are two things happening there. One is that people are retiring; certainly, that’s part of it. The other thing is that people are staying in school longer, so younger people are less in the workforce than they might’ve been decades ago, because they’ve got to stay in school longer in order to have access to a job.
So, that’s certainly having an impact. But that doesn’t explain it totally, by any means. In fact, if you look at the labor force participation rate for what we call prime-age workers—and that would be people that are, maybe, between thirty and fifty… in other words, too old to be in school generally, and too young to retire—that’s also declining, especially for men. So, yes, there is definitely an impact from people leaving the workforce for whatever reason, very often [the reason is] being discouraged.
We’ve also seen a spike in applications for the Social Security disability program, which is what you’re supposed to get if you become disabled, and there really is no evidence that people are getting injured on the job at some extraordinary new rate. So, I think many people think that people are using that as kind of a last-resort basic income program.
They’re, in many cases, saying maybe they have a back injury that’s hard to verify and they’re getting onto that because they really just don’t have any other alternative. So, there definitely is something going on there, with that falling labor force participation rate.
And final question: What gives you the most hope, that whatever trials await us in the future—or do you have hope—that we’re going to get through them and go on to bigger and better things as a species?
Well, certainly the fact that we’ve always got through things in the past is some reason to be confident. We’ve faced enormous challenges of all kinds, including global wars, and plagues, and financial crises in the past and we’ve made it through. I think we can make it through this time. It doesn’t mean it will be easy. It rarely is easy.
There aren’t many cases in history, that we can point to, where we’ve just smoothly said, “Hey, look, there’s this problem coming at us. Let’s figure out what to do and adapt to it.” That rarely is the way it works. Generally, the way it works is that you get into a crisis, and eventually you end up solving the problem. And I suspect that that’s the way it will go this time. But, yeah, specifically, there are positive things that I see.
There are lots of important experiments, for example, with basic income, going on around the world. Even here in Silicon Valley, Y Combinator is doing something with an experiment with basic income that you may have heard about. So, I think that’s tremendously positive. That’s what we should be doing right now.
We should be gathering information about these solutions, and how exactly they’re going to work, so that we have the data that we’re going to need to maybe craft a much broader-based program at some point in the future. That’s all positive, you know? People are beginning to think seriously about these issues, and so I think that there is reason to be optimistic.
Okay, and real last question: If people want more of your thinking, do you have a website you suggest to go to?
The best place to go is my Twitter feed, which is @MFordFuture, and I also have a blog and a website which is the same, MFordFuture.com.
And are you working on anything new?
I am not working right now on a new book, but I go around doing a lot of speaking engagements on this. I’m on the board of directors of a startup company, which is actually doing something quite different. It’s actually going to do atmospheric water generation. In other words, generating water directly from air.
That’s a company called Genesis Systems, and I’m really excited about that, because it’s a chance for me to get involved in something really tangible. I think you’ve heard the quote from Peter Thiel, that we were promised flying cars and we got 140 characters. And I actually believe strongly in that.
I think there are too many people in Silicon Valley working on social media, and getting people to click on ads. So I’m really excited to get involved in a company that’s doing something really tangible, that’s going to maybe be transformative… If we can figure out how to directly generate water in very arid regions of the earth—in the Middle East, in North Africa, and so forth—that could be transformative.
Wow! I think by one estimate, if everybody had access to clean water, half of the hospital beds would be emptied in the whole world.
Yeah, it’s just an important problem, just on a human level and also in terms of security, in terms of the geopolitics of these regions. So I’m really excited to be involved with it.
Alright, well thank you so much for your time, and you have a good day.
Okay. Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 6: A Conversation with Nick Bostrom

[voices_in_ai_byline]
In this episode Byron and Nick talk about human consciousness, superintelligence, AGI, the future of jobs, and more.
[podcast_player name="Episode 6: A Conversation with Nick Bostrom" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-09-28-(00-28-25)-nick-bostrum.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/06/voices-headshot-card-5.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Nick Bostrom. He’s a Swedish philosopher at the University of Oxford, known for his work on superintelligence risk. He founded the Strategic Artificial Intelligence Research Centre at Oxford, which he runs, and is currently the Founding Director of the Future of Humanity Institute at Oxford as well. He’s the author of over two hundred publications, including Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Welcome to the show, Nick.
Nick Bostrom: Hey, thanks for having me.
So let’s jump right in. How do you see the future unfolding with regards to artificial intelligence?  
I think the transition to the machine intelligence era will be perhaps the most important thing that has ever happened in all of human history when it unfolds. But there is considerable uncertainty as to the time scales.
Ultimately, I think we will have full human-level general artificial intelligence, and shortly after that probably superintelligence. And this transition to the machine superintelligence era, I think has enormous potential to benefit humans in all areas. Health, entertainment, the economy, space colonization, you name it. But there might also be some risks, including existential risks associated with creating, bringing into the world, machines that are radically smarter than us.
I mean it’s a pretty bold claim when you look at two facts. First, the state of the technology. I don’t have any indication that my smartphone is on a path to sapience, one. And two, the only human-level artificial intelligence we know of is human intelligence. And that is something, coupled with consciousness and the human brain and the mind and all of that, that…
To say we don’t understand it is an exercise in understatement. So how do you take those two things—that we have no precedent for this, and we have no real knowledge of how human intelligence works—and then you come to this conclusion that this is all but inevitable?  
Well it’s certainly the case that we don’t yet have human-level general artificial intelligence, let alone superintelligence, and we probably won’t have it for a while. But ultimately it seems to be possible. I mean, we know from the existence of the human brain that there can exist systems that have at least human-level intelligence, and it’s a finite biological system. Three pounds of sort of squishy matter inside craniums can achieve this level of performance.
There is no reason to think that’s anywhere close to the maximum. And we can see several different paths by which we could technologically, eventually, get to the point where we can build this in machine substrates.
So one would be indeed to reverse engineer the human brain, to figure out what architectures it uses, what learning algorithms, and then run that in computer substrate. But it might well be that we will get there faster by adopting a purely synthetic approach.
There just seems to be no particular barrier along this path that it would be, in principle, impossible to overcome. It’s a difficult problem, but we’ve only been hacking away at it since… I mean, we’ve only really had computers since just before the middle of the last century. And then the field of AI is quite young, maybe since 1956 or so. And in these few decades we’ve already come a pretty long way, and if we continue in this way we will eventually, I think, figure out how to craft algorithms that have the same powerful learning and planning ability that makes us humans smart.
Well let’s dig on that for just one more minute, and then let’s move on, accepting that assumption. Where do you think human consciousness comes from?  
The brain.
But specifically, what mechanism gives rise to it? What would even be a potential answer to that question?  
Well, so, I tend towards a computationalist view here, which is that… My guess is that it’s the running of a certain type of computation that would generate consciousness in the sense of morally-relevant subjective experience. And that in principle, you could have consciousness implemented on structures built out of silicon atoms just as well as structures built out of carbon atoms. It’s not the material that is the crucial thing here, but the structure of the computation being implemented.
So that means, in principle, you could have machines being conscious. I do think, though, that the question of artificial intelligence, the intellectual performance of machines, is often best approached without also immediately introducing the question of consciousness. Even if you thought machines could not really be conscious, you could still ask whether they will be very intelligent. And even if they were only intelligent but not conscious, that still could be a technology with enormous impacts on the world.
So the last question I’ll ask you along those lines, and then I would love to just dive down into some specifics of how you see all of this unfolding, is: You’re undoubtedly familiar with Searle’s thought experiment about the Chinese room. But I’ll say it real briefly for the audience, who may not be familiar with it. The scenario is that there exists a person, hypothetically, who’s in this enormously large room that’s full of an enormous number of these very special books. And the important thing to know about this man is he speaks no Chinese whatsoever, and yet people slide questions under the door to him written in Chinese.
He picks them up, he looks at the first character, he goes and finds the book with that on the spine. He turns, finds the second character. He follows all the way through the message until he gets to a book that says write this down. He copies it. Again, he doesn’t know if he’s talking about cholera or coffee beans or what. And then he slides the answer back under the door. And somebody, a Chinese speaker, reads it and it’s just brilliant. I mean, it’s like a perfect answer.  
And so the question is… The analogy obviously is that that room, that system, passes the Turing test splendidly. And yet, it does so without understanding anything about what it’s talking about. And that this lack of understanding, this fact that it cannot understand something, is a really concrete limit to what it is able to do, in the sense that it really can’t think and understand in the way we do. And that analogy is of course what a computer does.   
And so, Searle uses it to conclude that a computer can never really be like a human, because it doesn’t understand anything. How do you answer that?  
Well I’m not very convinced about it, that’s for sure. I mean for a start you need to think, in this thought experiment, not just about what the human inside this room can or cannot do, or understands or doesn’t understand—but you [also] have to think about the system as a whole.
So the room, plus all these books in the room, plus the human—as an entity—is able to map out inputs to outputs in a way that appears quite smart from the outside. If anything has understanding here, it would presumably be the system. Just as you would say a computer program, it would be the entire thing—the computer and the program and its memory—that would achieve a certain level of performance, not a particular data box inside this device.
Right. The traditional answer to that though is, okay, the guy memorizes the content of every book. He’s out walking around, somebody hands him a note, and he writes a brilliant answer and hands it back. Again, he doesn’t understand it. But you can’t kind of go back to, “It’s the system.”  
So then you have to think about, realistically, if it’s really that the function you want to capture is one that would map all possible Chinese inputs to Chinese outputs. To learn that mapping by just having a big lookup table would be infeasible just in terms of the number of entries. It certainly wouldn’t fit into a human brain. Or indeed, into any sort of physically-confined system—there wouldn’t be enough atoms in the observable universe to implement it in that way.
And so it might well be that understanding includes not just the ability to map a certain set of inputs to outputs, but to do that in a certain way that involves data compression. So that to understand something might be to know what the relevant patterns are, the regularities, in a particular domain—maybe, you know, have some kind of mental model of that domain—and therefore achieve a compactification of this input/output mapping. And that allows you to generalize to things that were not explicitly listed in the initial set as well.
So one way of implementing this Chinese room argument, if we tried to do it through a lookup table… Well A, it would be impossible because there just isn’t enough memory and couldn’t be. And B, even if you somehow magically could have enough memory, maybe it still wouldn’t count as true understanding, because it lacks this compression, the extraction of regularities.
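The infeasibility claim is easy to check with a rough count. The vocabulary size and question length below are illustrative assumptions, chosen only to show the order of magnitude.

```python
# Rough count showing why a literal Chinese-Room lookup table is infeasible,
# and why "understanding" has to involve compression and generalization.
# Vocabulary size and question length are illustrative assumptions.

vocabulary = 3000          # distinct characters available
max_length = 30            # questions of up to 30 characters

distinct_questions = sum(vocabulary ** n for n in range(1, max_length + 1))
atoms_in_observable_universe = 10 ** 80    # commonly cited order of magnitude

print(f"{distinct_questions:.1e}")                          # ~2.1e+104 entries
print(distinct_questions > atoms_in_observable_universe)    # True: no physical
# lookup table can enumerate them; a compact model that captures the
# regularities of the language is the only workable route.
```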
So people who are concerned about a superintelligence broadly have three concerns. One is that it’s misused deliberately by humans. Second one is that it’s accidentally misused by humans. And the third one is that it somehow gets a will or volition of its own, and has goals that are contrary to human goals. Are you concerned about all three of those, or any one in particular? Or how do you shake that out?  
Yeah, I think there are challenges in each of these areas. I think that the one you listed last, it is in a sense the first one. That is, we will need—by the time we figure out how to make machines truly smart—we will need to have figured out ways to align them with human goals and intentions so that we can get them to do what we want.
So right now you can define an objective function. In many cases it’s quite easy. If you want to train some agent to play chess, you can define what good performance is. You get a 1 if you win a game and a 0 if you lose a game, and half a point perhaps if you make a draw. So that’s an objective we can define.
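That chess objective really is a one-liner; here is a minimal sketch (the function name and outcome labels are illustrative, not from any particular reinforcement-learning library).

```python
# The kind of objective function described above for a chess-playing agent:
# trivially easy to specify, unlike "happiness" or "justice".
# Names here are illustrative, not from any library.

def chess_reward(outcome: str) -> float:
    """Reward at the end of a game: 1 for a win, 0 for a loss, 0.5 for a draw."""
    return {"win": 1.0, "loss": 0.0, "draw": 0.5}[outcome]

assert chess_reward("win") == 1.0
assert chess_reward("draw") == 0.5

# The value-alignment problem, by contrast, is that there is no analogous
# few-line definition for the things humans actually care about.
```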
But in the real world, all things considered, we humans care about things like happiness and justice and beauty and pleasure. None of those things are very easy to sort of sit down and type out a definition in C++ or Python. So you’d need to figure out a way to get potentially superintelligent agents to nevertheless service an extension of the human will, so that they would realize what your intentions are, and then be able to execute that faithfully.
That’s a big technical research challenge, and there are now groups springing up and pursuing that technical research. And assuming that we can solve that technical control problem, then we get the luxury of confronting these wider policy issues. Like who should decide what this AI is used for, what social purposes it should be used for, what we want this future world with superintelligence to look like.
Now, you need to ultimately, I think, be successful both on this narrow technical problem, and on these wider policy problems, to really get a good outcome. But I think they both seem kind of important and challenging. Like you divided the policy into two sub-problems… I think you distinguish between deliberate misuse and accidental misuse. I’m not sure precisely what you had in mind there, but it sounds like we want to make sure that neither of those happens.
Any existential threat to humanity kind of gets our attention. Is it your view that there’s a small chance, but because it’s such a big deal, we really need to think about this? Or do you think there’s an incredibly large chance that that’s going to happen?  
Somewhere in between. I think that there’s enough of a chance both that we will develop superintelligence, perhaps within the lifetime of people alive today, or some people alive today, and that things could go wrong. Enough of a chance of that happening that it is a very wise investment of some research funding, and some research talent, to have some people in the world starting to figure out the solution to this problem of scalable control, as is now starting to happen.
And perhaps also to have some people thinking ahead about the policy questions. Like what kind of global governance system could really cope with a world where there are superintelligent entities? To appreciate that this is a big profound challenge…
I think when you’re talking about general superintelligence, you’re not just talking about advances in AI, you’re talking about advances in all technical fields. You’re really, I think… At that point, when you have AIs being better able to do research in science and innovation than we humans can do, then you have a kind of telescoping of the future, pulling in all those other possible technologies that you could imagine the human species developing in the fullness of time.
If we had 40,000 years to work on it, maybe we would have a cure for aging or the ability to effectively colonize the galaxy or upload ourselves to computers, all these kinds of science fiction-like technologies that we know are possible given the laws of physics—but just very hard to develop—that we could have developed in the fullness of time. All of those might be developed quite soon after you have superintelligence conducting this development at digital time scales.
So you have, with this transition, potentially within short order, the arrival of something like technological maturity, where we have this whole suite of science fiction-like superpowers. And I think to construct a kind of governance system that works for that very different world will require some fundamental rethinking. And that’s also some work that perhaps makes sense to start in advance.
And I think that the case for thinking that we should start that work in advance does not depend super sensitively on exactly how big you think the probability is that this will happen within a certain number of years. It seems that there’s enough probability that it sure makes sense, if nothing else as an insurance policy, for some humans to work on this.
Do you think we’re up to the challenge to rethink these fundamental structures of society? Do you have any precedent in human history for some equivalent thing being done?  
Nothing very closely equivalent. You can still reach out and try to find some more distant analogies. Perhaps in certain respects, the invention of nuclear weapons captures some parallels there. Where there was the realization, including among some of the nuclear physicists developing this, that it would really change the nature of international affairs. And people anticipated subsequent arms races and such, and there was some attempt to think ahead about that, how you could try to achieve non-proliferation.
Other than that, I don’t think that humanity has a very great track record of thinking ahead about where it wants to go, anticipating problems and then taking proactive measures to avoid them. Like, most of the time we just stumble along, try different things, and gradually we learn.
We learn that cars sometimes crash, so we invent seat belts and street lights. We figure out things as we go along. But the problem with existential risks is that you really only get one chance, so you’ve got to get it right on the first time. And that might require seeing the problem in advance and avoiding it. And that’s what we are not very good at, as a civilization, and hence the need for an extra effort there.
Do you think, in terms of the pathway to building an AGI, that we’re on an evolutionary path already? Is it like: yeah, we kind of know, we have the basic technologies and all of that, and what we just need is faster machines, better algorithms, more data, and all these other things, and that will eventually give us an AGI?
Or do you think that it’s going to require something that we don’t even understand right now—like a quantum computer—and how that might lead to one or not. Are we on the path or not?  
I don’t think it will require a quantum computer. Maybe that would help, but I don’t think that’s necessary. I mean if you said faster computers, more data, and better algorithms, I think in combination that would be sufficient. The question I guess is just how much better algorithms.
So there’s great excitement, of course, in the progress that’s been made in recent years in machine learning, with deep learning and reinforcement learning. I think the jury’s still out as to whether… Basically, we have most of the fundamental building blocks, and maybe we just need some clever ways of combining what we have, and build things on top of them—or whether there will have to be other deep conceptual breakthroughs. That is hard to anticipate.
Certainly there will have to be further, dramatic algorithmic improvements to get all the way there. But it might be that the further improvements might be more, sort of, ways of using some of the basic building blocks we have and putting them together in interesting ways. To maybe build on top of deep learning structures, ways to better learn and extract concepts and combine them in reasoning, and then use that to learn language. Ways to do better hierarchical planning.
But that it still will use some of the building blocks we already have, as opposed to something that kind of sweeps the table clean and starts over with a radical new approach. I think at least there’s some credence [to the idea] that we’re on the right path with these current advances that we’re seeing.
To the best of my knowledge, as I try to figure out when different people think we’re going to get an AGI—and looking at people who are in the industry who have written some code or something—I get a range between five and five hundred years, which I think is a pretty telling fact alone.
But you undoubtedly know that there are people in the industry who don’t think that this is a particularly good use of thought resources and cycles right now. Where do you think—broadly speaking—people who dismiss all of this, where do they err? What are they missing?
Well they might err by being overconfident in their impression of being correct. So it depends a little bit on what precisely it is that they believe. If they believe, say, that there is basically zero chance that we will have this in the next several hundred years, then I think they’re just overconfident. This is not the kind of thing that humans have a great track record of predicting, what technological advances are or are not possible over century time scales. And so it would be radical overconfidence to do that. Also, it would be in disagreement with the median opinion among their expert peers.
We did some opinion surveys of world-leading machine learning experts a couple of years ago. And one of the questions we asked was: By which year do you think there is a fifty percent probability that we will have high-level or human-level machine intelligence—defined as, AI that can do everything that humans can do? And the median answer to that was 2040 or 2050, depending on which group of experts we asked. Now, these are subjective opinions of course.
There’s no sort of rigorous data set from which you can prove statistically that that’s the correct estimate. But it does show, I think, that the notion that this could happen in this century, indeed by mid-century—and indeed in the lifetime of a lot of people listening to this program—is not some outlandish opinion that nobody who actually knows this stuff believes.
But on the contrary, it’s sort of the median opinion among leading researchers. But of course there’s great uncertainty on this. It could take a lot longer. It could also happen sooner. And we just need to learn to think in terms of probability distributions, credence distributions over a wide range of possible arrival dates.
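One way to make that last point concrete is to represent a belief about arrival dates as a distribution rather than a single year, and read the median and tail probabilities off it. A minimal Python sketch; the distribution and its parameters below are invented for illustration and are not the survey’s numbers.

```python
import numpy as np

# Hypothetical credence distribution over AGI arrival dates, for illustration
# only: a lognormal number of years counted from 2017. The parameters are
# made up and are not the survey figures discussed above.
rng = np.random.default_rng(0)
years_from_now = rng.lognormal(mean=3.3, sigma=0.8, size=100_000)
arrival_year = 2017 + years_from_now

print("median arrival year:", int(np.median(arrival_year)))
print("P(before 2050):", round(float(np.mean(arrival_year < 2050)), 2))
print("P(before 2100):", round(float(np.mean(arrival_year < 2100)), 2))
```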
Even absent an AGI, even in the interim before we have one, what’s your prognosis about the number one fear people have about artificial intelligence… which is its impact on jobs and employment?
The goal is full unemployment. Ultimately what you want are systems that can do everything that humans can do so that we don’t have to do it.
I think that will create two challenges. One is the economic challenge of how people have a livelihood. Right now a lot of people are dependent on wage income, and so they would need some other source of income instead, either from capital or from social redistribution. And fortunately I think that in that kind of scenario, where AI really achieves intelligence, it’s going to create a tremendous economic boom.
And so the overall size of the pie would grow enormously. So it should be relatively feasible, given political will—even by redistributing a small part of that, through universal basic income or other means—to make sure that everybody could have high levels of prosperity.
Now, that then leaves the second problem, which is meaning. At the moment a lot of people think that their self-worth, their dignity, is tied up with their roles as economically productive citizens, as breadwinners to the family. That would have to change.
The education system would have to change, to train people to find meaning in leisure, to develop hobbies and interests. The art of conversation, interests in music and hobbies, in all kinds of things. And to learn to do things for their own sake, because they’re valuable and meaningful, rather than as a means to getting money to do something else.
And I think that is definitely possible. There are groups who have lived in that condition historically. Aristocrats in the UK, for example, thought it was demeaning to have to work for a living. It was almost like prostituting yourself, like you had to sell your labor. The high-status thing was not to have to work. Today we’re in this weird situation where the higher-status people are the ones who work the most. It’s the entrepreneurs and CEOs who work eighty-hour weeks. That could change, but it would require a cultural transition.
Finally, your book Superintelligence is arguably one of the most influential books on the topic ever written. And you’ve taken it upon yourself to kind of sound this alarm and say we need to think seriously about this, and we need to put in safeguards and all of that. Can you close with a path by which we get through it, and things work out really well for humanity, and we live happily ever after?
Yeah. Since the book came out, in the last couple of years there has been a big shift, actually, both in the global conversation around these topics and in technical research now starting to be carried out on this alignment problem—on the problem of finding scalable control methods that could be used for very, very advanced artificial agents. And so there are a number of research groups building that up. We are doing some of the technical research here. There are groups in Berkeley, and there are regular research seminars with DeepMind. So that’s encouraging.
And hopefully the problem will turn out not to be too hard. In that case, I think what this AI transition does is really unlock a whole new level. It enables human civilization to transition from the current human condition to some very different condition: the condition of technological maturity, maybe a post-human condition where our descendants can colonize the universe, build millions of flourishing civilizations of superhuman minds that live for billions of years in states of bliss and ecstasy, exploring spaces of experience, modes of being, interactions with one another, and creative activities that are maybe literally beyond the human brain’s ability to imagine.
I just think that in this vast space of possibilities, there are some modes of being that would be extremely valuable. It’s like a giant stadium or a cathedral, where we are like a little child crouching in one corner—and that corner is the space of possible modes of being that are accessible to a biological human organism, given our current conditions.
It’s just a small, small fraction of all the possibilities that exist, which are currently closed to us, but which we could start to unlock once we figure out how to create these artificial intellects and artificial minds that could then help us. And so with enough wisdom and a little bit of luck, I think the future could be wonderful, literally beyond our ability to dream.
All right. Well, you keep working on making it that way. I thank you so much for your time, Nick, and any time you want to come back and continue the conversation you’re more than welcome.
Super. Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 4: A Conversation with Jeff Dean

[voices_in_ai_byline]
In this episode, Byron and Jeff talk about AGI, machine learning, and healthcare.
[podcast_player name=”Episode 4: A Conversation with Jeff Dean” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(00-31-10)-jeff-dean.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-3.jpg”]
[voices_in_ai_link_back]
Byron Reese: Hello, this is Voices in AI brought to you by Gigaom. I am your host, Byron Reese. Today we welcome Jeff Dean onto the show. Jeff is a Google Senior Fellow and he leads the Google Brain project. His work probably touches my life, and maybe yours, about every hour of every day, so I can’t wait to begin the conversation. Welcome to the show Jeff. 
Jeff Dean: Hi Byron, this is Jeff Dean. How are you?
I’m really good, Jeff, thanks for taking the time to chat. You went to work for Google, I believe, in the second millennium. Is that true?
Yes, I did, in 1999.
So the company wasn’t even a year old at that time.
That’s right, yeah it was pretty small. We were all kind of wedged in the second-floor office area, above what is now a T-Mobile store in downtown Palo Alto.
And did it feel like a start-up back then, you know? All the normal trappings that you would associate with one?
We had a ping pong table, I guess. That also doubled as where we served food for lunch. I don’t know—yeah, it felt exciting and vibrant, and we were trying to build a search engine that people would want to use. And so there was a lot of work in that area, which is exciting.
And so, over the last seventeen years… just touch on the various things you’ve worked on. It’s an amazing list.
Sure. The first thing I did was put together the initial skeleton of what became our advertising system, and I worked on that for a little while. Then, mostly for the next four or five years, I spent my time with a handful of other people working on our core search system. That’s everything from the crawling system—which goes out and fetches all the pages on the web that we can get our hands on—to the indexing system that then turns that into something we can actually query quickly when users are asking a question.
They type something into Google, and we want to be able to very quickly analyze what pages are going to be relevant to that query, and return the results we return today. And then the serving system that, when a query comes into Google, decides how to distribute that request over lots and lots of computers to have them farm that work out and then combine the results of their individual analyses into something that we can then return back to the user.
And that was kind of a pretty long stretch of time, where I worked on the core search and indexing system.
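For readers who want to picture the indexing step Dean describes, the core data structure is an inverted index: a map from each word to the pages that contain it, so a query can be answered by intersecting a few lists instead of scanning every page. A toy Python sketch, nothing like the scale or sophistication of the real system:

```python
from collections import defaultdict

# A toy corpus standing in for crawled pages.
pages = {
    "page1": "machine learning systems at scale",
    "page2": "search engines index the web",
    "page3": "machine translation and speech recognition",
}

# Indexing: build an inverted index mapping each word to the pages containing it.
index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.lower().split():
        index[word].add(page_id)

# Serving: answer a query by intersecting the postings for each query term.
def search(query):
    postings = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("machine learning"))  # {'page1'}
```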
And now you lead the Google Brain project. What is that?
Right. So, basically we have a fairly large research effort around doing machine learning and artificial intelligence research, and then using the results of our research to make intelligent systems. An intelligent system may be something that goes into a product, it might be something that enables new kinds of products, or it might be some combination of those.
When we’re working on getting things into existing products, we often collaborate closely with different Google product teams to get the results of our work out into products. And then we also do a lot of research that is pure research, untied to any particular product. It’s just something that we think will advance the capabilities of the kinds of systems we’re able to build, and ultimately will be useful even if we don’t have a particular application in mind at the moment.
“Artificial intelligence” is that phrase that everybody kind of disowns, but what does it mean to you? What is AI? When you think about it, what is it? How would you define it in simple English?
Right, so it’s a term that’s been around since the very beginning of computing. And to me it means essentially trying to build something that appears intelligent. So, the way we distinguish humans from other organisms is that we have these higher-level intelligence capabilities. We can communicate, we can absorb information, and understand it at a very high level.
We can imagine the consequences of doing different things as we decide how we’re going to behave in the world. And so we want to build systems that embody as many aspects of intelligence as we can. And sometimes those aspects are narrowly defined, like we want them to be able to do a particular task that we think is important, and requires a narrow intelligence.
But we also want to build systems that are flexible in their intelligence, and can do many different things. I think the narrow intelligence aspects are working pretty well in some areas today. The broad, really flexible intelligence is clearly an open research problem, and it’s going to consume people for a long time—to actually figure out how to build systems that can behave intelligently across a huge range of conditions.
It’s interesting that you emphasize “behave intelligently” or “appear intelligent.” So, you think artificial intelligence, like artificial turf, isn’t really turf—so the system isn’t really intelligent, it is emulating intelligence. Would you agree with that?
I mean, I would say it exhibits many of the same characteristics that we think of when we think of intelligence. It may be doing things differently, because I think you know biology and silicon have very different strengths and weaknesses, but ultimately what you care about is, “Can this system or agent operate in a manner that is useful and can augment what human intelligence can do?”
You mentioned AGI, an artificial general intelligence. The range of estimates on when we would get such a technology are somewhere between five and five hundred years. Why do you think there’s such a disparity in what people think?
I think there’s a huge range there because there’s a lot of uncertainty about what we actually need. We don’t quite know how humans process all the different kinds of information that they receive, and formulate strategies. We have some understanding of that, but we don’t have deep understanding of that, and so that means we don’t really know the scope of work that we need to do to build systems that exhibit similar behaviors.
And that leads to these wildly varying estimates. You know, some people think it’s right around the corner, some think it’s nearly impossible. I’m kind of somewhere in the middle. I think we’ve made a lot of progress in the last five or ten years, building on stuff that was done in the twenty or thirty years before that. And I think we will have systems that exhibit pretty broad kinds of intelligence, maybe in the next twenty or thirty years, but I have high error bars on those estimates.
And the way you describe that, it sounds like you think an AGI is an evolution of the work we’re doing now, as opposed to something completely different that we don’t even know about yet, where we haven’t really started working on the AGI problem. Would you agree with that or not?
I think some of what we’re doing is starting to touch on the kind of work that we’ll need to build artificial general intelligence systems. I think we have a huge set of things that we don’t know how to solve yet, and that we don’t even know that we need yet, which is why this is an open and exciting research problem. But I do think some of the stuff we’re doing today will be part of the solution.
So you think you’ll live to see an AGI, while you’re still kind of in your prime?
Ah well, the future is unpredictable. I could have a bike accident tomorrow or something, but I think if you look out fifteen or twenty years, there will be things that are not really imaginable, that we don’t have today, that will do impressive things ten, fifteen, twenty years down the road.
Would that put us on our way to an AGI being conscious, or is machine consciousness a completely different thing which may or may not be possible?
I don’t really know. I tend not to get into the philosophical debates of what is consciousness. To my untrained neuroscience eye, consciousness is really just a certain kind of electrical activity in the neurons in a living system—that it can be aware of itself, that it can understand consequences, and so on. And so, from that standpoint consciousness doesn’t seem like a uniquely special thing. It seems like a property that is similar to other properties that intelligent systems exhibit.
So, absent your bicycle crash, what would that world look like, a world twenty years from now where we’ve made incredible strides in what AI can do, and maybe have something that is close to being an AGI? How do you think that plays out in the world? Is that good for humanity?
I think it will almost uniformly be good. If you look at technological improvements in the past, major things like the shift from an agrarian society to the industrial one the Industrial Revolution fueled: what used to require ninety-nine percent of people working to grow food is now done by a few percent of people in many countries. And that has freed up people to do many, many other things, all the other things that we see in our society, as a result of that big shift.
So, I think like any technology, there can be uses for it that are not so great, but by-and-large the vast set of things that happen will be improvements. I think the way to view this is, a really intelligent sidekick is something that would really improve humanity.
If I have a very complicated question, something that today I can answer via a search engine only if I sit down for nine or ten hours and really think it through and say, “I really want to learn about a particular topic, so I need to find all these papers and then read them and summarize them myself.” If instead I had an intelligent system that could do that for me, and I could say, “Find me all the papers on reinforcement learning for robotics and summarize them,” and the system could go off and do that in twenty seconds, that would be hugely useful for humanity.
Oh absolutely. So, what are some of the challenges that you think separate us from that world? Like what are the next obstacles we need to overcome in the field?
One of the things that I think is really important today in the field of machine learning research, that we’ll need to overcome, is… Right now, when we want to build a machine learning system for a particular task, we tend to have a human machine learning expert involved in that. So, we have some data, we have some computation capability, and then we have a human machine learning expert sit down and decide: Okay, we want to solve this problem, and this is roughly the way we’re going to go about it. And then we have a system that can learn, from observations provided to it, how to accomplish that task.
That’s sort of what generally works, and that’s driving a huge number of really interesting things in the world today. And you know this is why computer vision has made such great strides in the last five years. This is why speech recognition works much better. This is why machine translation now works much, much better than it did a year or two ago. So that’s hugely important.
But the problem with that is you’re building these narrowly defined systems that can do one thing and do it extremely well, or do a handful of things. And what we really want is a system that can do a hundred thousand things, and then when the hundred thousand-and-first thing comes along that it’s never seen before, we want it to learn from its experience to be able to apply the experience it’s gotten in solving the first hundred thousand things to be able to quickly learn how to do thing hundred thousand-and-one.
And that kind of meta learning, you want that to happen without a human machine learning expert in the loop to teach it how to do the hundred thousand-and-first thing.
And that might actually be your AGI at that point, right?  
I mean it will start to look more like a system that can improve on itself over time, and can add the ability to do new novel tasks by building on what it already knows how to do.
Broadly speaking, that’s transfer learning, right? Where we take something learned in one space and use it to inform another. Is that a new area of study, or is that something that people have thought about for a long time, and we just haven’t gotten around to building a bunch of—
People have thought about that for quite a while, but usually in the context of, I have a few tasks that I want to do, and I’m going to learn to do three of them. And then, use the results of learning to do three, to do the fourth better with less data, maybe. Not so much at the scale of a million tasks… And then completely new ones come along, and without any sort of human involvement, the system can pick up and learn to do that new task.
So I think that’s the main difference. Multitask learning and transfer learning have been done with some success at very small scale, and we need to make it so that we can apply them at very large scales.
And the other thing that’s new is this meta learning work, that is starting to emerge as an important area of machine learning research—essentially learning to learn. And that’s where you’ll be able to have a system that can see a completely novel task and learn to accomplish it based on its experience, and maybe experiments that it conducts itself about what approaches it might want to try to solve this new task.
And that trying of different approaches is currently where we have a human in the loop, and it’s where we think this “learning to learn” research is going to let us make faster progress.
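As an illustration of the small-scale transfer learning Dean contrasts with the million-task setting, the common recipe is to reuse a network trained on one large task and attach a small new head for a new task. A minimal sketch assuming TensorFlow 2.x and Keras; the input size, class count, and data are placeholders, and this is not Google’s internal setup:

```python
import tensorflow as tf

# Reuse a base network pretrained on a large source task (ImageNet here)...
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, pooling="avg",
    weights="imagenet")
base.trainable = False  # freeze what was already learned

# ...and attach a small new head for a new target task (say, 5 classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With a little labeled data for the new task, the reused features usually
# let the model learn far faster than training from scratch:
# model.fit(new_task_images, new_task_labels, epochs=5)
```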
There are those who worry that the advances in artificial intelligence will have implications for human jobs. That eventually machines can learn new tasks faster than a human can, and then there’s a group of people who are economically locked out of the productive economy. What are your thoughts on that?
So, I mean I think it’s very clear that computers are going to be able to automate some aspects of some kinds of jobs, and that those jobs—the things they’re going to be able to automate—are a growing set over time. And that has happened before, like the shift from agrarian societies to an industrial-based economy happened largely because we were able to automate a lot of the aspects of farm production, and that caused job displacement.
But people found other things to do. And so, I’m a bit of an optimist in general, and I think, you know, politicians and policymakers should be thinking about what societal structures we want to have in place if computers can suddenly do a lot more things than they used to be able to. But I think that’s largely a governmental and policy set of issues.
My view is, a lot of the things that computers will be able to automate are the kinds of repetitive tasks that humans currently do only because, until now, they’ve been too complicated for our computers to learn how to do.
So am I reading you correctly, that you’re not worried about a large number of workers displaced from their jobs, from the technology?
Well I definitely think that there will be some job displacement, and it’s going to be uneven. Certain kinds of jobs are going to be much more amenable to automation than others. The way I like to think about it is, if you look at the set of things that a person does in their job, if it’s a handful of things that are all repetitive, that’s something that’s more likely to be automatable, than someone whose job involves a thousand different things every day, and you come in tomorrow and your job is pretty different from what you did today.
And within that, what are the things that you’re working on—on a regular basis—in AI right now?
Our group as a whole does a lot of different things, and so I’m leading our group to help provide direction for some of the things we’re doing. Some of the things we’re working on within our group that I’m personally involved in are use of machine learning for various healthcare related problems. I think machine learning has a real opportunity to make a significant difference in how healthcare is provided.
And then I’m personally working on how we can actually build the right kinds of computer hardware and software systems that enable us to try out lots of different machine learning ideas quickly, so that we can build machine learning systems that scale.
So that’s everything from working with our hardware design team to make sure we build the right kind of machine learning hardware, to TensorFlow, an open-source package that our group produced—we open-sourced it about a year and a half ago—which is how we express our machine learning research ideas and how we train machine learning systems for our products. And we’ve now released it, so lots of people outside Google are using the system as well and working collaboratively to improve it over time.
And then we have a number of different kinds of research efforts, and I’m personally following pretty closely our “learning to learn” efforts, because I think that’s going to be a pretty important area.
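Since TensorFlow comes up above as the way the group expresses its machine learning ideas, here is a minimal sketch of what that can look like: fitting a one-variable linear model by gradient descent. It assumes TensorFlow 2.x with eager execution; at the time of this conversation the graph-based API was the norm, so treat it as illustrative rather than period-accurate.

```python
import tensorflow as tf

# Fit y = 3x - 1 from noisy samples by gradient descent.
x = tf.random.normal([256, 1])
y = 3.0 * x - 1.0 + 0.1 * tf.random.normal([256, 1])

w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())  # approximately [[3.0]] and [-1.0]
```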
Many people believe that if we build an AGI, it will come out of a Google. Is that a possibility?
Well, I think there’s enough unknowns in what we need to do that it could come from anywhere. I think we have a fairly broad research effort because we think this is, you know, a pretty important field to push forward, and we certainly are working on building systems that can do more and more. But AGI is a pretty long-term goal, I would say.
It isn’t inconceivable that Google itself reaches some size where it takes on emergent properties which are, well, I guess, by definition unforeseeable?
I don’t quite know what that means, I guess.
People are emergent, right? You’re a trillion cells that don’t know who you are, but collectively… You know, none of your cells has a sense of humor, but you do. And so at some level the entire system acquires characteristics that no part of it has. I don’t mean it in any ominous way. Just to say that when you start looking at numbers, like the number of connections in the human brain and whatnot, we start seeing things of the same orders of magnitude in the digital world. It just invites one to speculate.
Yeah, I think we’re still a few orders of magnitude off in terms of where a single human brain is versus what the capabilities of computing systems are. We’re maybe at, like, a newt or something. But, yes, presumably the goal is to build more intelligent systems, and as you add more computational capability, those systems will get more capable.
Is it fair to say that the reason we’ve had such a surge in success with AI in the last decade is this kind of perfect storm of GPUs, plus better algorithms, plus better data collection (so better training sets), plus Moore’s Law at your back? Is it nothing more complicated than that, that there have just been a number of factors that have come together? Or did something happen, some watershed event that maybe passed unnoticed, that gave us this AI renaissance that we’re in now?
So, let me frame it like this: A lot of the algorithms that we’re using today were actually developed twenty or twenty-five years ago, during the first upsurge of interest in neural networks, which are a particular kind of machine learning model, one that’s working extremely well today but that twenty or twenty-five years ago showed interesting signs of life only on very small problems, because we lacked the computational capabilities to make them work well on large problems.
So, if you fast-forward twenty years to maybe 2007, 2008, 2009, we started to have enough computational ability, and data sets that were big enough and interesting enough, to make neural networks work on practical interesting problems—things like computer vision problems or speech recognition problems.
And what’s happened is neural networks have become the best way to solve many of these problems, because we now have enough computational ability and big enough data sets. And we’ve done a bunch of work in the last decade, as well, to augment the sort of foundational algorithms that were developed twenty, thirty years ago with new techniques and all of that.
GPUs are one interesting aspect of that, but I think the fundamental thing is the realization that neural nets in particular, and these machine learning models, really have different computational characteristics than most code you run today on computers. And those characteristics are that they essentially mostly do linear algebra kinds of operations—matrix multiply vector operations—and that they are also fairly tolerant of reduced precision. So you don’t need six or seven digits of precision when you’re doing the computations for a neural net—you need many fewer digits of precision.
Those two factors together allow you to build specialized kinds of hardware for very low-precision linear algebra. And that’s what has augmented our ability to apply more computation to some of these problems. GPUs are one thing, and Google has developed a new kind of custom chip called the Tensor Processing Unit, or TPU, that uses lower precision than GPUs and offers significant performance advantages, for example. And I think this is an interesting and exploding area, because when you build specialized hardware that’s tailored to a subset of things, as opposed to the very general kinds of computations a CPU does, you run the risk that the specialized subset covers only a little bit of what you want to do in a computing system.
But the thing that neural nets and machine learning models have today is that they’re applicable to a really broad range of things. Speech recognition and translation and computer vision and medicine and robotics—all these things can use that same underlying set of primitives, you know, accelerated linear algebra to do vastly different things. So you can build specialized hardware that applies to a lot of different things.
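A quick way to see the reduced-precision tolerance Dean mentions is to run the same matrix multiply at 32-bit and 16-bit precision and compare. A small NumPy sketch; NumPy stands in here purely for illustration, since TPUs and GPUs use their own low-precision formats:

```python
import numpy as np

# Matrices standing in for a neural-net layer's activations and weights.
rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512), dtype=np.float32)
b = rng.standard_normal((512, 512), dtype=np.float32)

full = a @ b                                               # 32-bit multiply
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# The 16-bit result differs only slightly relative to the scale of the values,
# which is the kind of tolerance that makes low-precision linear-algebra
# hardware attractive for neural nets.
print(np.max(np.abs(full - half)) / np.max(np.abs(full)))
```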
I got you. Alright, well I think we’re at time. Do you have any closing remarks, or any tantalizing things we might look forward to coming out of your work?
Well, I’m very excited about a lot of different things. I’ll just name a few…
So, I think the use of machine learning for medicine and healthcare is going to be really important. It’s going to be a huge aid to physicians and other healthcare workers to be able to give them quick second opinions about what kinds of things might make sense for patients, or to interpret a medical image and give people advice about what kinds of things they should focus on in a medical image.
I’m very excited about robotics. I think machine learning for robotics is going to be an interesting and emerging field in the next five years, ten years. And I think this “learning to learn” work will lead to more flexible systems which can learn to do new things without requiring as much machine learning expertise. I think that’s going to be pretty interesting to watch, as that evolves.
Then, beneath all the machine learning work, this trend toward building customized hardware that is tailored to particular kinds of machine learning models is going to be an interesting one to watch over the next five years, I think.
Alright, well…
One final thought, I guess, is that I think the field of machine learning has the ability to touch not just computer science but lots and lots of fields of human endeavor. And so, I think that it’s a really exciting time as people realize this and want to enter the field, and start to study and do machine learning research, and understand the implications of machine learning for different fields of science or different kinds of application areas.
And what’s been really exciting to see over the last five or eight years is more and more people from all different kinds of backgrounds entering the field and doing really interesting, cool new work.
Excellent. Well I want to thank you for taking the time today. It has been a fantastically interesting hour.
Okay thanks very much. Appreciate it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 3: A Conversation with Mark Rolston

[voices_in_ai_byline]
In this episode, Byron and Mark talk about computer versus human creativity, connectivity with digital systems, AGI, and the workforce.
[podcast_player name=”Episode 3: A Conversation with Mark Rolston” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(01-13-40)-mark-rolston.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-2.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Mark Rolston. He’s the co-founder and Chief Creative Officer of argodesign. He’s a renowned designer with a twenty-five year career of creating for the world’s largest and most innovative companies.
An early pioneer of software user experience, Mark helped forge the disciplines around user interface design and mobile platforms. A veteran design leader, innovator, and patent holder, he is one of Fast Company’s Most Creative People, and he was nominated for Fast Company’s World’s Greatest Designer in 2014.
Welcome to the show, Mark!
Mark Rolston: Yeah, welcome, thanks!
I want to start off with my question that I ask everybody. So far, no two answers have been the same. What is artificial intelligence?
Oh, god, what is AI? Big question, okay.
I think it’s probably easiest to start with what AI isn’t, especially given all the attention it gets right now. Certainly, every time the topic of AI comes up for me, especially with, let’s say, my family around me, the expectation is that it’s somehow on the level of another fully living, breathing person—that level of cognition. And I think that when we talk about AI and go immediately to that, the idea of a fully competent mind, we really lose sight of what AI is and what it’s actually good at.
There’s also so much marketing going on, where everyone wants to put AI at the front of their product and say it’s powered by AI. The world’s software has gotten a lot better at applying rich data—like historical behavior data: you keep renting movies of this type, so maybe you’d like this next movie—or rich algorithms, like understanding how to optimize a path home. Those things have made software a lot more “intelligent,” but they are not AI.
For me, I think of it as a spectrum of capabilities that transcends that basic sort of rich-data and algorithmic intelligence that software already has. AI can take a cognitively complex situation that involves context and ongoing computation—meaning it’s not simply answering algorithmic or data-based queries immediately, but can understand something over the course of time, say a habit somebody has, or a large set of medical records—and resolve that against immediate context and come up with a conclusion.
I think one of the things, anecdotally, that I tend to do to help people get away from the idea of the sort of Terminator notion of AI, or the 2001 HAL notion of AI, is to ask them to liken it to a two-year-old in intelligence, or maybe even a one-year-old. Except that this one-year-old has, let’s say, every medical record in a tristate area available to it, and can sift through it and find consistent cases and conditions and give you back an answer. Or it can understand every stock fluctuation for a particular stock or industry instantaneously, and give you some thoughtful ideas about that. It still, on other bases, may be a complete idiot. It can’t tie its own shoes. It still wets the bed. It’s still a very simple system.
And so, I think that helps me, helps others, sort of get away from the idea of talking about AI in general terms. Certainly, one day, we’ll get to general AI. I expect we will, but right now to talk about that is incredibly distracting from some of the real practical things that are happening in AI.
Well, help me understand a distinction you’re making. You explicitly said the program that guides my car to where I’m going—routing—isn’t artificial intelligence. Even though it knows the context of where I am, it might have a real-time traffic feed and all of that. And yet, presumably, you think something like IBM Watson, which is able to go through a bunch of cancer journals and recommend treatments for different types of cancer, is a form of artificial intelligence.
Assuming that that is the case, what’s the essential difference between what’s going on in those two cases?
I think it’s just the level of overall complexity, and the ability to apply those subsystems to other problems. You know, a mapping system is maybe algorithmically-rich, but it’s really just applied to one problem.
Now, of course, if you used Watson to apply to a mapping problem, then we might call that AI. I think it gets academic, but I’d say the simple answer to your question is: It’s a level of richness and sophistication, and the complexity of the data sets and the models we’re bringing to the problem.
You used the phrase “it understands something over time” about the artificial intelligence. Is that useful? Do you actually think the computer “understands” anything?
Oh, I know. We use that—sorry—and we’re going to use that language because it’s readily at hand, and it’s a frame of human understanding. But, no, of course, it doesn’t understand it. It’s just able to prepare a set of variables that it can apply further in its course of “thinking.” But, of course, thinking is processing in this case.
So, no, it doesn’t understand any more than a bug understands the greater world around it. It just can see in front of it.
We’re going to get out of the definitional thing here, but it’s really telling to me in the sense that… Do you think that the word “artificial” means that it’s like artificial turf? It’s not really grass; it looks like it. It’s not really intelligent.
Yeah, this has been an interesting line of questioning, and I’m probably terrible at answering this… But I think it’s fun to step outside of this technical boundary, start from a philosophical angle the other way, and break down the notion of intelligence, given the choice of the phrase “artificial intelligence.” I do believe very much that human intelligence—while a great many orders of magnitude more complex—is no different in kind from the basic processing systems we’re discussing.
In that sense, yes, the term is perfectly appropriate. Yet, on a conversational basis, it’s very distracting to talk about it that way. Actually, in our studio and with my current active client in this space, we really talk about it as a cognitive system. And that, I know in a lot of ways, is just wordsmithing. But it helps break away from the burdensome history of the term “artificial intelligence” and the greater philosophical demands put on the term.
So for us, a cognitive system has some of the basic tenets of a thinking process, namely: that it’s complex in its ability to process information, and it is able to resolve questions over time. Those two are the most interesting factors that make it transcend normal software.
But the idea that human intelligence and machine intelligence… What I think I just heard you say is that they are the same substance, as it were. The machine intelligence is just one quintillionth as much as a human’s right now. Is that what you’re saying?
Exactly. In fact, we came across an idea that lends itself to this line of thinking—and, certainly, if you’re religious, or if you’re a philosopher, it’s easy to find this a repulsive notion—called the “bicameral mind.”
Of course, yes.
Yes, bicameralism. It’s a really interesting idea, just the notion—
—Yes, that we didn’t use to be conscious three thousand years ago. It’s the notion that one half of the brain spoke to the other, which we perceived as the voice of God. And then, over time, they merged and we became conscious. And then, we felt that we were lacking something, that the gods no longer spoke to us. Therefore, we created oracles and prayer, and all of these ways to try and reclaim that. I guess the people that believe that, talk about Homer and say that he didn’t have introspection and all of that. Just framing it for the listener, but go ahead.
Yes, there’s this historical idea of bicameralism, where we heard voices in our head and attributed those voices to external forces. It shows how fragile the mind is, first of all, and that’s why I find it applicable to this question. It shows that the mind isn’t some perfect, immutable structure. It hears itself and can mistake that for something else entirely.
And by the way, the reason the topic came up for us was not for this philosophical reason, but because we’re seeing a sort of new bicameralism emerge. It’s highly-connected to this question of AI, but it’s somewhat a digression. But I’ll share it anyway.
Today, we’re experiencing digital systems that are, in increasingly sophisticated ways, thinking for us. They’re helping us get home. They’re advising us on who to call right now and what to do right now, and where that thing is that I forgot about. You know, I can ask Siri or Alexa just about anything, so why remember so many things that I used to have to remember? In any case, I have this sort of external mind now.
Just like, historically, we had this idea that that other voice was not really our own. These digital systems that are extensions of us—like Facebook, they have deep properties that we helped to imbue them with about us—we think of them as very external forces right now. They are Facebook. They are Siri. Yet, increasingly, we depend on them in the same way that we depend on our own internal conscience or our own internal voices.
Eventually, I think, much like we came to have a unified mind, the digital systems that we depend on—largely, we’re talking about these intelligent systems, these artificial intelligence assistants that are emerging around us for everything—will become one with us. And I don’t mean that in some sci-fi way. I mean, in the sense that when we think about our identity—”Who am I? How smart am I? What am I best at? What do I know the most of? Am I funny? Am I clever? Am I witty?”—anything like that will be inseparably connected to those digital systems, that we tie to us, that we use on a frequent basis. We’ll be unable to judge ourselves in any sort of immutable way of flesh and blood; it will be as this newly-joined cyber-creature.
To me, that again spells out more and more that the idea of our own cognition—our own idea of what does it mean to be intelligent as a human, sort of natural intelligence—isn’t that magically different. It is entwined with not only the digital systems we subscribe to, but these digital systems are drawing on the same underlying basis of decision-making and context-forming. Like you said, they’re just one-quintillionth the level of sophistication.
If you have a computer, and you put a sensor on it that can detect temperature, and then you program it to play a WAV file of somebody screaming if that sensor ever hits five hundred degrees, and then you hold a match to the sensor, it hits five hundred degrees, the computer starts screaming—the computer can sense the temperature, in the sense that it can measure the temperature.
I guess that is the better way to say it. It can’t actually sense it; it can measure the temperature. With a human, you put a match on a human’s finger and they scream, there’s something different going on. A human is experiencing the world, and human intelligence arguably comes from that, whereas a machine doesn’t have any experience of the world. Therefore, it seems to be an entirely different thing.
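Byron’s hypothetical program really is just a few lines, which is part of the point being debated. A sketch with the hardware stubbed out, since no real sensor or audio device is assumed:

```python
import random

def read_temperature_sensor():
    # Stand-in for real hardware: pretend a match is being held near the sensor.
    return random.uniform(480.0, 520.0)

def play_wav(path):
    # Stand-in for actual audio playback.
    print(f"(playing {path})")

if read_temperature_sensor() >= 500.0:
    play_wav("scream.wav")
```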
But you just described a dirty shortcut to all of the underlying context and calculations that go on in the human mind before that scream. You just plugged the scream right into the sensor.
If we try and break down the human system, there was an initial sensor—the skin—and there was the initial output design—the scream. There were many, many more computations and while, yes, the two external net results were the same between the two systems, you just obfuscated or ignored all of the other things that might cause a difference.
For example, am I drunk, so I put my hand over the flame and don’t notice in time? Or am I masochistic, and I’m doing it just to prove I can, so I’m running a deep calculation in my mind to hold in that scream? And then I’ve got other sensors that start going off, or other external signals like sweat and grimacing and on and on and on.
In a lot of ways, you’re still talking about a computational input-output scenario, but there are just so many more variables. We start to dismiss that it is yet still the same kind of computational structure, and we think of it as more magical or something else, and I don’t think so. I think it is just a massively more complex computer. And, I think, when you look at some of the things DARPA is doing, they’re starting to uncover that.
Here’s an interesting example. I went to a DARPA workshop around, basically, analog-to-digital I/O. What they really meant was: how do we build computers that can plug into the human body? One of the things they showed off was some early lab work with sensors embedded in the speech center of the brain, where they asked people to say words out loud. They said, “Hello!” and they got a set of neurons firing off against the statement “hello.”
But then, they asked the person to think of the word “hello” using their internal voice. Lo and behold, the signals were very similar. In other words, they could read your mind. You could, using your internal voice, think “hello” and give the computer the same input. We were able to decipher what the human mind was doing through some sensors.
Now, it’s early, very rough, very sort of brute force, and there’s a whole other subject about how much we’ll ever really be able to wire up to the mind. Simply because, at the end of the day, it’s a three-dimensional structure and, if you have to put leads on it, there’s no way you can wire into it effectively. You end up destroying the thing you’re trying to read. But, in these simple tests, it sort of proved how much the human brain itself was even usable as almost like a USB port. It’s fascinating.
I’m still not drawing the same analogy. I’m emphatically not saying something, in your word, “magical” is going on. I think what I’m trying to capture is the sense that the person experiences something. The person has an experience. There is a “self” that experiences the world. Let’s just say “consciousness,” for that matter. It is an aware Self that exists, that has these experiences of the world.
But isn’t consciousness some form of playback?
Let me set this up for you, and maybe we can go from there. There’s a classic statement of this problem by Frank Jackson, often called “Mary’s Room.” The setup is that there’s this person named Mary who knows everything about color—like, everything, God-like knowledge of color—everything about photons, everything about the cones in your eyes, every single thing there is to know about color.
The setup is that she, however, has lived her whole life in a room, and has only ever seen black and white. One day, she goes outside. She sees red for the first time. The question is, did she learn something new? In other words, is experiencing something different than knowing something? I think most people would say, “Yes, she’d never experienced color, and therefore she learned this new thing.” A machine cannot experience the world, and therefore one would argue perhaps its intelligence is not anything like ours.
Yes, it’s a fascinating example. But wouldn’t seeing the red for the first time be essentially the first time you’ve encoded that? So, same thing with a computer, it hasn’t seen red, so therefore it hasn’t encoded it. It’s not part of its data set. It encodes it for the first time, and it has to place that in context to the rest of its knowledge system. I don’t know how we couldn’t still codify it in the way that I’m describing.
Well, let’s go on to consciousness.
This is great! It’s a nice philosophical track you’re running.
Well, the reason I do it, it isn’t supposed to be like senior year college, up late at night with your friends. The thesis of mine is that all of this stuff that people have talked about—you know, angels dancing on pinheads—for all these thousands of years; we’re all about to know. It all matters. We’re about to build systems, and if the system one day says, “I feel pain,” does it or doesn’t it?
I’ll try and blow up even the whole presumption. I don’t think it matters.
Well, I think it matters to it.
Well, I’m going to argue that we’re unlikely to arrive at a machine where we would either ever hear it say, “I feel pain,” or care. Because if it can arrive at that level of sophistication, it will likely have surpassed us in its utilization and its role, and therefore won’t be offering those human-like analogies. It will be offering other kinds of failure information, other kinds of sensor alerts, that won’t be familiar to us as flesh objects.
Fear, pain—those are things that are very clear illustrations of, you might say, fault conditions a human encounters. Fear is a prediction of bodily harm. Pain is the actual reflection of bodily harm. But the body of an AI system either doesn’t exist in a meaningful way, or it’s just not going to be the way we’re interfacing with it. In other words, there’s someone else who’s concerned about the uptime of the machine, and we, interfacing with that AI system, will never encounter those factors. So we won’t encounter these human-like moments of reflection with them. Instead, we’ll encounter its impression of us.
To me, it’s much more interesting to think about how they will understand us, and what’s dangerous or enlightening about that. Rather than these moments that are very human-like, what interests me is the idea that it’s superhuman. Where, let’s say, it’s talking to a doctor, and it knows the records of every single human being in the United States and can therefore come out with conclusions about someone’s pain in their knee—our pain, in this example. And the doctor has no understanding of how it’s arriving at that conclusion.
Unless it is, of course, a very familiar conclusion. But that’s going to be boring, and that’s not going to be a moment we’re going to reflect on; instead it’s just going to reaffirm our own intelligence. But there are going to be these other moments where it comes out with something we never expected, or we thought was absolutely wrong.
Think politics. “Here’s the best tax structure for the United States.” You know, politics is all about decisions that are abhorrent to some people. But if a computer comes out with something that’s very non-intuitive, yet is informed by a one-million-x level of background calculation—you know, something humans just could not do—we won’t know how to deal with that. That, to me, is the disconnect, the human-to-AI relationship, that’s more interesting than what their pain is like versus our pain.
Does that make sense? I know that was kind of a huge digression.
No, I’m happy to engage it because you seem to be saying that we don’t really have to worry about the issue of machine pain until we get an AGI, and we’re not going to have an AGI for so long.
Even an AGI.
There are those who argue that the Internet may be conscious already. If consciousness is an emergent property—it comes out of complexity—then there could be latent consciousness already in machines.
There could be, but to me that’s like the question of God. It’s silly to think of God as just a greater human-like thing. If there were a God, it wouldn’t be thinking the way we think. And so the question, “Is he mad at me for doing this?” is a silly question. It’s the same with the idea of “Is the Internet conscious?”
It may be, in fact—in some definable way—conscious, but beyond the philosophical question, it’s not that important.
Again, these questions of a general AI thinking along our lines, I don’t think is as important as, “How will they understand us, and how will we interface with them?” Because that’s the scary part: They will be a million times more intelligent than us on particular topics, yet maybe dangerously ignorant on adjacent topics, and that’s what keeps me up.
I would love to discuss that next. I will just say that, up until the ‘90s in the United States, a veterinarian’s standard of care, I am told, was to operate on animals without anesthesia, because the thesis was they couldn’t feel pain. Open-heart surgery was done on babies in the United States up until the ‘90s with no anesthesia, because the thinking was their brains weren’t developed enough to feel pain.
I think it behooves us to reflect on those things, to say perhaps we weren’t thinking all that through at the time, that those were easy questions to avoid out of convenience. Look, I don’t know if plants are conscious; I have no idea. I’m just asking the question, “How would you know?” In the end, all of these are really questions about us, right?
In the end, the question that all of this reveals is, “Are we machines?” And, if we’re machines that experience the world, if that’s all we are, then are the machines that we make experiencing the world? That’s the question I’m trying to wrap my head around. I don’t know that it’s premature, as you were saying it is.
Because, if I’m hearing you correctly, you’re saying that by the time it matters, the vocabulary will have changed so much, and the world would be so different—and it would be so different that these questions are going to seem childishly naïve and simple and provincial.
Maybe not childish. I don’t really mean that. They just won’t be, and I don’t think they should be, our primary concerns. To me, the idea of how do we interface with a growing set of machines that are smarter than us? How does society—where you-to-me or me-to-my-neighbor interface today on fairly normalized terms, you know, something that took thousands of years to break through to a more democratic, fair society—how do we continue to interface when we may, in the future, have asymmetrical access to these super machines? Machines that not just help us get to work a little quicker than the next guy using Waze, but have a million times more intelligence, or a million times more financial wit as an investor than the next guy.
How do we deal with normalization of intelligence? When I make a decision and you make a decision, society benefits from the discord. Invention, and fashion, and greater advancements in society happen from those disagreements because, ideally, the better ideas break through. The bad ideas are tested and so forth. But, when we grow to be dependent on these systems and we all start to use the same system, you start to imbue society with the sort of same line of thinking, and there’s a friction to breaking free from that which is super interesting.
Let’s just take driving to work, since we’re using that example: The friction to drive your own path, versus what the map is telling you to do in a new city, is pretty high. If you were to take that kind of quantification and move that to everything you eat, the jobs you take, the people you date, the friends you associate with, and just about every little thing—you know, Amazon is trying to help you get dressed in the morning, “Does this look good on me?”—it’s fascinating.
It not only grows a dependence on a set of proprietors, you know, the people who are behind these systems, but a dependence on each other, in decisions that might normalize in a way we don’t want and that isn’t good for society. That’s, to me, the truly exciting space because, again, these questions end up being about us when we’re imbued with AI, as opposed to AI itself and whether it will feel pain. I guess I’m much less concerned about that than about the humanity behind it.
Fair enough. Let’s do this: Let’s chat about jobs in the near future, because I think that’ll set up the context for this conversation which you’re talking about. Which is, when one segment of society can make better decisions than the other, and those better decisions compound, how do you deal with that?
Let’s start with just the immediate future. There are three views about how automation and robots and AI are going to affect the job market, just to set this up again. The first is that there’s going to be a permanent group of people who are unprepared to add economic value in the world of tomorrow, and we’re going to be in this permanent Great Depression, where some sizeable number of people can’t get jobs or can’t earn a living wage.
And then, there’s a second one that says, “Oh, no, no! You don’t understand, it’s worse than that—every single person on the planet, everybody, every job in the world can be replaced. Don’t think you’re something special. Machines are just going to zoom right past you, too.”
And then, there’s a third one that says, “No, no, no, this is all ridiculous. Every major technological event since the Industrial Revolution—even ones arguably as big as AI, like electricity and mechanical power—all they have done is boost the productivity of humans. Which means it boosts everybody’s income. We all get wealthier, and everybody is better off, and that’s the entire story of why you live a better life than your great-great-grandparents but don’t work nearly as hard as they did.”
Which of those three camps, or a fourth one, do you fall into? How do you see that all shaping out?
All of the above, and I don’t mean to chicken out, but just very asymmetrically.
What I believe is that the net product will ideally be a better society, if we don’t blow ourselves up in the process. So, with that caveat, I think we are headed toward a much more ideal future. However, I think, in the short term, we’re in for a really ugly shakeup where AI will displace a great amount of the population. A great deal of the population is not prepared, and even some of that population is not capable of moving up past a manual labor world. The pessimist in me says, there aren’t that many creative jobs, and the most suspect, immediately replaceable jobs will be manual labor.
Hold on. I want to challenge that point. I don’t know that that’s demonstrable at all. Even if they make a robot tomorrow that can mow a yard, and everybody who mows a yard is out of work, they didn’t make a robot that can plant a grape arbor. Even when they make a grape arbor robot, they didn’t make a robot that can plant historically-accurate gardens.
You know, my plumber and my electrician, they have to be so dynamic that they come out and they have to figure out, “Hmm… What do I do with this?” and all of that. I don’t see a robot painting a curb. I’m looking out my window right now and there are like four hundred things that need doing out there, and I don’t know if a robot can do any of them.
Yeah, okay, fair enough. I was going to get to this. I think the actual twist to the story is… The presumption is, yes, robotics with AI could replace everything. But, like you started to suggest, the trick in that—and Uber, to me, is the leading example of it—is that the introduction of AI, or intelligent software, because I don’t think you need the full suite of AI to get there, into society usually means that we end up working for machines from the middle to the bottom of the job structure.
When I look at Uber, if you step back from it, humans are basically the last vestigial robot in the chain. They’re being told by a piece of software where to drive. The money is being taken, and all the commercial exchange handled, by the software. The human is just the cheapest technical means of driving the machine around. And I think we can look at a lot of labor the same way—all of your examples, the plumber—software will increasingly take out the creative factors in those businesses. But the manual part of it, the robot itself, will be humanity, rather than a humanoid machine devised and sent in to do your plumbing.
A trip to Japan can show you what it looks like when you have this really large population that is, in essence, sort of overemployed. In India, I was visiting a client, and there was somebody opening the door to the building, there was somebody literally there opening the door to the bathroom, and there was somebody there to hand me a little towel in the bathroom. It felt really weird, and it was a symbol of what happens—and I’m sort of getting off-topic here—when the cost of labor goes down. And technology, in the case of Uber, is fantastic for pushing that cost of labor down.
I don’t know if that would be my interpretation of it.
The manual labor picture, I absolutely believe that; but I think there’s some sunlight in that process, which is that a lot of the jobs today that have been whittled down to “just get it done,” things like plumbing, will become more artisanal jobs. We will hire people to do more interesting versions of them.
I think, humanity, in the greater sense of things, has a real knack for taking something that normalizes and almost always blowing it up, either for very bad reasons or good reasons. It just can’t help itself to take anything that’s stabilized and upset it. You know, you look at the way governments work.
I think the idea of the world of Etsy-based makers or creative technicians will emerge. I think that will help, but I think that still the greater forces are many, many more people performing very robotic jobs.
It would seem just the opposite, right? Like, once you can automate those jobs, you don’t, actually.
I guess the analogy people always go to is farming, right? It used to be ninety percent of people farmed, now it’s two percent in this country. If you look at that from one angle, you say, “Well, what are all those people going to do? They can’t go into factories and learn how to add value.”
They did. They went into factories.
Right, and then, they figured out, “Every time you automate something, you lower the cost of it.” Who would have ever guessed?
They became marketers and middle management.
Right, 1995, somebody says, “We’re going to use a common protocol, hypertext protocol, and we’re going to use Ethernet and TCP/IP, and we’re going to connect a whole bunch of computers.” Who would have ever said, “Okay, that’s going to create trillions and trillions and trillions and trillions of dollars of wealth. You’re going to make Google. You’re going to make eBay. You’re going to make Etsy. You’re going to make Amazon. You’re going to make Facebook.” Nobody, right?
They created that much wealth but they haven’t distributed it, nor distributed the same amount of labor that their historical counterparts did. In each of these cases, it’s required less and less labor.
I definitely believe in the idea that the overall value in the economy, and the overall comfort level available to society, will rise; but society’s ability to distribute that in a way that’s fair doesn’t have a great track record in the twentieth century.
So, you’re arguing that, in the twentieth century, the average standard of living didn’t go up?
It did, but the delta between the bottom and the top also got worse.
Well, nobody argues that technology hasn’t made it easier to make a billion dollars, at least for some people, not for me. But, that aside, the question is, “Has the median income, the median standard of living of somebody in 1900, 1950, to 2000 gone up?” I mean, that’s a straight line.
Of course.
And so, what is it in your mind that is different about 2000 to 2050?
I think, if you look at those lines, the baseline of what is the poorest person living like and what is the wealthiest person living like are no longer following each other.
There’s a great photo array showing what the poorest people in Africa and the poorest in the United States live like. Like, where do they sleep? What does a median income look like? It’s interesting in that living standards have gone up from the median income upwards in a lot of places in the world. But it’s also interesting how poor the poor remain in the United States. That delta is what interests me, and the fact that that line for the lowest income has stopped moving up.
I think looking back gives us some hope, but I don’t think it gives us automatic confidence. I don’t think it should. I think we should take a warning from the level of income inequality that technology is driving. I don’t think it’s fair to just assume, “Well, it worked out in the past so it should work out again,” because it doesn’t seem to be right now. There seems to be some very accelerating forces for those who have access to technology versus those who don’t.
In every technology—you said it right—when electricity came out, we thought, “This is going to be different. This is something to be concerned about.” And, yes, I may be one of those voices, and I hope to be wrong, who’s saying, “With AI, I think this is going to be different and we need to be very concerned about it.”
Well, let’s assume, and put a pin in everything you’re saying, and say it’s all absolutely right, and it’s all going to unfold exactly that way. With that context, let’s get to that conversation we started to have which is, “What do you do about it?”
Yeah, that’s really tough.
The universal income seems like just a path to inflation. I don’t know. I’m not an economist. For my role, as a designer in the world, we keep looking for ways to try and express AI in the most human moments in life. How to, for example, give us better control over the homes around us.
But I feel, in a lot of ways, sometimes, as a designer, at this moment in time, a little bit like what—I don’t want to overstate this but—a little bit like the folks designing the nuclear bomb may have felt like. They were advancing technology in the interest of technology, and it was sort of a passionate expression at the time. But, at the same time, they could tell, “This is maybe not going to turn out right.”
You know, that’s sort of an overstated comparison, but the idea here is that we in software and design are helping advance the cause for a lot of products that ostensibly have great purposes for everybody in society but a lot of them—let’s say, designing a better experience for Uber—don’t seem to be netting out the way they should.
Let’s take the work for CognitiveScale. To me, that’s the most relevant example. We’re working with this company that makes AI systems, and it helps people like doctors or financial analysts think. It helps them answer questions. It helps them look ahead or look at large, large data sets and deliver to them things that they might not have realized or been able to find themselves—essentially the needle in the haystack.
Each of those customers that employs those systems will potentially be thousands of times more powerful than the next guy. That’s a huge tipping force. Could it be that all of them adopt it uniformly and the world of finance or the world of medical care all gets better at once? That’d be awesome.
But, at the same time, we’re dealing with an extremely competitive and a very non-democratic business environment. And so, I don’t see it necessarily happening that way. So that’s the concern side of the argument. We’re giving a select few these really immense superpowers, and what are the ramifications of that?
Of course, I don’t think, practically, the financial analyst or the doctor is anything in particular to worry about. But, if we imagine extending this out to average consumers, these things aren’t going to be free. It’s not like the US government is going to distribute these tools. It’s going to be something that people charge for; that people with better Internet access, better financial capabilities have access to, and it creates further imbalance. That, to me, is the downside to the sort of magic that we’re creating day-to-day.
And so, what do you do with that? Do you just kind of stew on it and then just file forward?
I don’t know yet.
Like, feel appropriately guilty about it?
Yeah, it’s an awesome question, Byron. I don’t have a great answer. I wish I could just settle the fight, right here on this podcast. But this is early enough that to try and declare an answer would be premature. Because I may be totally wrong, and you may be right, and I should just simply weigh in on a better future. I feel much like a lawyer who is faced with defending someone. I suspect this guy might be guilty, but my job as a lawyer is to make sure he goes free. My job as a designer is to create better human experiences, even if some of them might not drive a net society improvement.
If something looks like a gun and its purpose is ninety-nine percent to deliver harm, then that’s pretty easy for a designer to avoid. But the topic of AI has brought us closer to this question of, “Is design really driving the shape of society?” than ever.
For years, we’ve designed things like toasters or music players, and they had a natural place in society. You weren’t really reshaping—you could make the toaster more fun, you could make the music player easier to use, but they weren’t really that tied to what it meant to be human. But to design a decision system really does start to get into the heart of what it means to be somebody. And I’m not sure, as designers, we’ve been introduced to the toolset to think through that—you know, the social ramifications of the problem.
Right. But is it really all that different? You had a time in history where some people could read and some people couldn’t, and the people who could read, they were financially a lot better off, right?
True.
And then, you had people who had education and people who didn’t, and the people with education were better off.
Yes, true. You’re weighing into exactly the case I’m describing. In both of those cases, society was crippled until they decided to offer education broadly, until books were cheaply printable.
Well, you could say “crippled,” but it was on a path. And then, eventually, you got computers…
When I think about before the printing press, the people who could not read could be told just about anything by the people who had very few books.
But the thing is, technology in the past—again—always lowers the price. These things expand over time. More people have access to them. More people go to school. More people are literate. More people have computers now. More people have access to the Internet. All of these things just show that it eventually works.
Eventually, yes, and I go back to my statement: I believe this eventually nets out to a beautiful society. But we’re a much more destructively-capable society today than we ever were. And all of those paths you talk about—the path towards unified education, the path towards even introducing books—involved lots of wars as the asymmetry of people moving into various stages of modernity played out. But there was only so much damage that they could do to each other. Today, the level of damage we can do to each other may outstrip our ability to get through those same stages.
I’m saying that the near term is going to be painful, but the net opportunity in society is fantastic. I am on the optimistic side with you.
I don’t know. You say that, but you’re talking about building a nuclear weapon—that’s what you feel like you’re doing.
Yeah.
This is a debate that happened in this country a long time ago when people saw the Industrial Revolution coming. There used to be this idea that once you learned to read, you didn’t need school anymore. And there was a very vocal debate in the United States about “post-literacy education,” which I think is a really fun phrase. And the United States, because it deliberately decided everybody needs more education, became the first country in the world to guarantee high school education to everybody.
I want to switch gears a little bit. You wrote a really fascinating article called, “AI’s Biggest Danger Is So Subtle, You Might Not Even Notice It.” The thesis of it, in a sentence—and correct me if I’m wrong—is that, behind all artificial intelligence is a programmer, and behind every programmer is a set of biases—either deliberate, explicit or implicit, either things they know or they don’t know—and those get coded into these systems and you don’t even notice it.
That’s a good summary. There’s been a lot written about this since. I may have been a little early on it, because I thought the reaction was interesting.
Working with CognitiveScale, one of the things we’re doing that’s most interesting… And they’re facing this issue of, every time they build these systems, they’re one-offs. Most AI systems tend to be one-offs, and it’s very difficult to tune the intelligence.
In other words, it’s a dark art. We don’t know exactly how these machines are coming out with their conclusions. We’re pouring so much complexity into these agents, and the models, and the processors that are transforming information, that it’s hard to predict how one set of questions or one set of inputs might come out of it at the end of that process.
You know, we’re just testing them against sample cases. But sample cases don’t give you a total assurance of how the system is working. It’s like your match example: Ninety-nine percent of the time, somebody screams; so you just assume that it’s wired right. But one percent of the time, somebody is really happy about that fire, and the machine breaks. Sorry, I’m digressing…
To me, what’s really interesting is that there are a lot of commercial interests who would like—let’s say, my drive home, since we’ve used that example—who would like me to drive a certain way, because they want me to go by their restaurant or by their business. And it only takes the slightest bit of a twist to that data to slowly mold a population—whether it be a driving population or an eating population or a buying population—to behave a certain way.
Normally, these influences are external, so they are easier to legislate. Like, how much signage are you allowed to put up in a city? What kinds of things are you allowed to say in an advertisement? Those are things attempting to shape our minds, right?
But, when you have direct decision systems—if you go back to what I was describing where we’re becoming more and more like the bicameral mind—inseparably associated with the digital systems that advise us, those things are now, you might say, black boxes in our minds—trying to get us to eat at McDonald’s, or drive that certain way, or invest in a certain stock. Much more difficult to legislate, much more difficult to police, to even discover that it’s happening to us.
That’s the concern, and there are very few early rule sets or policies around how to protect against that. And we’re building these systems at an incredible rate.
You’re, of course, familiar with European efforts to basically say: “You have the right to know why that artificial intelligence made the suggestion about something to you, if it affects your life.”
Yes.
And so, a) is that possible; b) is that good; and c) what is going to be the ultimate outcome of that debate?
Oh, I don’t know. I applaud Europe’s attempt to do this. It’s a bit ham-fisted, because the delta between these technical systems and the politics of legislating them is too great. They just don’t know what they’re dealing with, so they tend to do it in kind of brute force ways. The companies are still young enough that they’re not on the wrong side of the argument; they’re just trying to get their products out there in the quickest, most brute force way, and are less concerned about the lasting effects.
I think, as designers and developers creating these systems, there’s not enough in the stack, you might say. It’s actually one of the things we’re trying to do, and one of the goals in the UI work with CognitiveScale—and sorry, I’m not really trying to pitch you hard on CognitiveScale, I’m just saying this is where our direct experience lies—is we have these problems that are right in line with this conversation.
For example, the system comes out with an insight: It tells a doctor, “Hey, there are these eight patients you should contact, because I believe they’re going to have an asthma attack.” Of course, the doctor looks and goes, “Why do you think that? It doesn’t make any sense to me as a human. I can’t see why.” So, we have to unfold the argument quickly and in human terms, even though the computation that might have arrived at that conclusion is much denser than most humans can comprehend.
It involves a larger data set, or a larger computed set of circumstances than is easily told. And so, the design of those problems is really tough and it’s just very, very early in that process. Again, part of it is because there’s not enough in the stack, you might say.
You know what I mean by the stack? In early operating systems, the underlying firmware that talked to the hardware, the data management system, and the applications were entwined as a single entity. Of course, we eventually rebuilt these as many, many independent elements stacked on top of each other, which allows programmers to edit one of those layers without destroying the function of the whole system.
Today, in AI, it’s a similar problem. We have too much of a combined stack. And so auditing how the system is thinking—where did it find these conclusions, and what data is it drawing on—is really tough, especially if you’re not a programmer. It’s still the domain of a lot of specialists, a lot of scientists. If you’re just a financial company or a hospital chain trying to use these systems, you’re an expert in your field—like healthcare—but not an expert in AI, so it’s really tough to employ these systems and trust them, and understand, “Why did you come out with that conclusion?”
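To make that “stack” point concrete, here is a minimal sketch in Python of what separating the layers might buy you. It is not CognitiveScale’s architecture, and every name in it is hypothetical: each layer simply records what it drew on, so the question “why did you come out with that conclusion?” has an audit trail to point at.

```python
# A hypothetical layered decision pipeline with per-layer provenance.
from dataclasses import dataclass, field

@dataclass
class Conclusion:
    value: str = ""
    provenance: list = field(default_factory=list)  # one audit entry per layer

def data_layer(patient_id: str) -> Conclusion:
    # Hypothetical data layer: fetch records and note which sources were used.
    records = {"spirometry": "declining", "pollen_forecast": "high"}
    return Conclusion(provenance=[("data", [patient_id] + sorted(records))])

def model_layer(c: Conclusion) -> Conclusion:
    # Hypothetical model layer: score risk and log the features it weighed.
    c.value = "likely asthma attack"
    c.provenance.append(("model", ["spirometry trend", "pollen forecast"]))
    return c

def explanation_layer(c: Conclusion) -> str:
    # Render the audit trail in human terms for the doctor.
    steps = "; ".join(f"{layer} used {used}" for layer, used in c.provenance)
    return f"{c.value} ({steps})"

print(explanation_layer(model_layer(data_layer("patient-42"))))
```

The design intent mirrors the operating-system analogy: keep the data, the model, and the explanation as separate layers, so any one of them can be inspected or swapped out without destroying the whole.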
You also wrote a piece called “Computers Will Prove to be More Creative Than Most Humans.” What’s your thesis there? And before you answer that, what is “creativity”?
Let’s start with a simple—we’ll call it a theory—just call it the Mick Jagger theory. Mick Jagger is a rock musician, you know, The Rolling Stones. If you look at Mick, he’s not that pretty. He doesn’t, in any technical sense, sing that well, and he dances really kind of funny. But something about how he assembles all that has endeared us to him, and it’s this force of will that makes him, in design terms, sort of a winning design. A lot of rock and roll represents this. Rock and roll turns out to be a great example of that kind of decision factor in humanity.
You learn this in product design. You could be very technical about what your audience wants and still utterly fail. It is quite often a very singular human expression, or a set of accidental factors, that turns out a magical design.
To that end, what I was trying to propose was that the computer—through its willingness to try things, non-obvious things, and its ability to sift through more ideas than we could—may one day lead to something we really define as creativity.
Creativity is sort of the nexus between what we think we need, and what we didn’t expect to see. We tend to register that as, “Oh, that was creative!” It’s both suitable—I can imagine using that or enjoying that—yet I didn’t think that was going to be the answer.
Given that nexus, I think computing—especially this intelligent processing of possibilities—is going to be extremely powerful. The reason I wanted to introduce the topic is not really to threaten designers, even though that sort of ended up being the quotable part of it, but to suggest that the safe zone a lot of people were talking about at the time I wrote that—that the creative fields of humanity are safe from AI, and that it’s really just people doing manual labor who are at risk—is wrong. There are lots and lots of creative tasks that an AI system will perform very well. It may be first through symbiosis, putting a designer and a computer together, which is happening to a degree now.
In fact, we worked with a London 3D software company that helps people like shoemakers go through a range of shoe possibilities, and one of the things the software is really good at is arraying out all of these possibilities of texture and materials and color. And then, the human, of course, has a really easy job of just picking: “Oh, I think that looks cool.” It helps them in the process. You might say it’s making them more creative.
Well, eventually, the computer could have enough information to make some of those choices on its own. Maybe not in that exact circumstance, because fashion is sort of the last bastion of elusive human behavior. Fashion is often nonsensical, but it works explicitly because of that. But in so many other fields, the sort of near-styles of creativity, I think that’s very possible.
You know, you started at the very beginning of this talk, to speak about, “We will probably get an AGI sometime in the distant future.” I’m really intrigued by the fact that recently, we had Elon Musk saying, “We’re going to get one really soon, or relatively soon, and it’s going to be bad for us.” Mark Cuban kind of threw in his lot, “Yeah, it’s going to be kind of bad for us.” And then, you get Zuckerberg who says, “No, no, no, it’s going to be way far out and it’s going to be good for us.” You get Andrew Ng who says, “Worrying about that kind of stuff is like worrying about overpopulation on Mars.”
When you really distill it all down, you get the range of people who have some arguable case that they have a relevant opinion, they predict sometime between five and five hundred years.
My questions to you: first, why do you think that is? In any other field of life… If you ask ten astronauts, “When are we going to get to Mars?” It’s going to be twenty to fifty years, you know. It’s not going to be five to five hundred. And so, why do you think that these people are so far apart in what they think? And second, where are you in that?
Where am I? Yeah, okay.
First of all, I think most of those answers are more telling of the person than the topic. You know, AI is very political right now. And all of those folks are not scientists; they’re—as an industry—very vested in the idea.
For example, Mark’s answer was, I think, driven by his desire to sound like and be an optimistic voice in tech. Facebook profits from technology optimism. They need people to feel safe around tech for Facebook to continue to grow. Whereas, Elon Musk is much more about infrastructure. And so, for him, anything he talks about in tech needs to sound big and amazing. Trips to Mars and talking about AI in that way makes sense.
I still would go back to… I think the idea of talking about general AI, the kind that the average person would recognize, is a silly conversation. It’s not likely to happen for a hundred years in the way that you would maybe think, to sit down and have a beer with it. Maybe that will never happen. I think we’ll get to the point where AI is much more life-changing—dominant in our lives—way, way before then.
So, the question will become moot, like asking about overpopulation on Mars. I don’t know if this answer is going to be crisp enough to net out in a one-line statement, but I agree with the guy that it’s not coming anytime soon. But I do say, very strongly—and I’m seeing it directly with the clients we’re working with—that AI that dramatically impacts the shape of society—in how individuals think of themselves, how they interact with other individuals, how they compete in business and socially—is going to reshape, and potentially upend, society in the not-too-distant future.
I say that as in, it’s happening now, so it’s already started. And you might say, for the next twenty years, it will be pretty dramatic, and increasingly dramatic. But I don’t know if there’s a sort of gating moment. You know, all of these questions seem to sound like you’re turning a timer on some toast. This isn’t that. The toast doesn’t just pop up out of the oven and say, “We’ve got AI now.”
We have it today. We’ll have more of it tomorrow, and we’ll have more of it the next day. You can already see society reshaping to competitive capabilities around intelligent systems. I think it’s here. Maybe that’s my sort of net answer.
I’ll put you down for seventeen years. My final question is this. You seem to obviously be a guy that thinks about this stuff a lot, and you’ve made a few references to science fiction. I’m curious—if you read, watch, consume science fiction—is there some view of the world that you think, “Yeah, that could happen. That is something that they have right, I think.” Is there anything that’s influenced your thinking from that genre?
Honestly, when I read science fiction, it tends to be a little more… I thought some of the thinking going on in the ‘50s was really interesting, like Philip K. Dick’s work. Some of it is much more magical, or deals with apocalyptic side effects. But, connected to that, the vision in Blade Runner—well, you know, skip the android question and skip the perpetual rain; I’m not necessarily going to worry about those things—the part that is really interesting is how society is built upon itself.
To me, that’s the most pertinent part of trying to envision the future. As a designer, you know, fashion piles up onto itself and refers to itself. Architecture does the same thing, not only in terms of style but, literally, we build onto our past, we upgrade buildings, we renovate them. In a sense, social behavior does the same.
To me, my influence or closest reference would be looking at those aspects of what we see in Blade Runner, and we’ll see a lot of society grappling with very historical attitudes about who we are as individuals, how we should associate with each other, how the economy is supposed to work—yet trying to retrofit into that some very, very dynamic new realities.
And I don’t think they’re like better toasters, and that’s the part where I think maybe you and I disagree strongly. I don’t think, in these historical analogies, we came up with a better tractor. I think these new layers of technology reshape us in a way that is not that comparable to the past. Rather than better tractors, we have greater minds and greater reach with our minds. That part, I think, is the most interesting.
To close on that: Four hundred years ago, William Shakespeare lived and wrote, and he wrote these plays that we still watch today, and we still read today. They still make movies with Leonardo DiCaprio in them. Four hundred years later, in a world that has changed as dramatically as you could imagine, people still know an Iago and Lady Macbeth, and still have love triangles and still have family rivalries, and still have all of that stuff.
You watch Shakespeare because you still recognize those people; not because it’s an alien world, but it’s like, “I get that.” So is your thesis that, in fifty years, Shakespeare will no longer make sense to us?
Oh, no.
So, we really aren’t going to change, are we?
That would be history building on itself. We will change in some layers. So we may tell a story about Romeo and Juliet where they get to know each other only in their minds, and have yet to ever meet. But the story fundamentals are the same. The human passions are the same. I love the latest Romeo and Juliet from, what’s his name, the Australian guy… Anyway, he did that really super hip interpretation of Romeo and Juliet and he changed a whole lot about it. It was real street culture-focused, but it was still the same story underlying it.
So, yeah, I do believe that part: Human passions—finding love, finding each other, understanding yourself, understanding the world around you, being important to the world, having some sense of relevance—those things are persistent. They’re, sort of, million-year-old truths about us. But how that happens I do think is critically fundamental. The struggle to identify yourself may today involve a lot of subsystems that aren’t flesh. The means of understanding the world may come with an argument about access to a layer of technology.
Well, let’s leave it there. It was a fascinating hour, Mark. I hope I can entice you to come back, and I feel like there’s still a whole lot of ground we didn’t cover.
Oh, yeah, there is. It’s fun talking about the philosophical stuff, and I’m glad to disagree with you on some of those things. It makes it fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]
 

Voices in AI – Episode 2: A Conversation with Oren Etzioni

[voices_in_ai_byline]
In this episode Byron and Oren talk about AGI, Aristo, the future of work, conscious machines, and Alexa.
[podcast_player name=”Episode 2: A Conversation with Oren Etzioni” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(00-57-00)-oren-etzioni.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-1.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Oren Etzioni. He’s a professor of computer science who founded and ran University of Washington’s Turing Center. And since 2013, he’s been the CEO of the Allen Institute for Artificial Intelligence. The Institute investigates problems in data mining, natural language processing, and the semantic web. And if all of that weren’t enough to keep a person busy, he’s also a venture partner at the Madrona Venture Group. Business Insider called him, quote: “The most successful entrepreneur you’ve never heard of.”
Welcome to the show, Oren.
Oren Etzioni: Thank you, and thanks for the kind introduction. I think the key emphasis there would be, “you’ve never heard of.”
Well, I’ve heard of you, and I’ve followed your work and the Allen Institute’s as well. And let’s start there. You’re doing some fascinating things. So if you would just start off by telling us a bit about the Allen Institute, and then I would love to go through the four projects that you feature prominently on the website. And just talk about each one; they’re all really interesting.
Well, thanks. I’d love to. The Allen Institute for AI is really Paul Allen’s brainchild. He’s had a passion for AI for decades, and he’s founded a series of institutes—scientific institutes—in Seattle, which were modeled after the Allen Institute for Brain Science, which has been very successful running since 2003. We were founded—got started—in 2013. We were launched as a nonprofit on January 1, 2014, and it’s a great honor to serve as CEO. Our mission is AI for the common good, and as you mentioned, we have four projects that I’m really excited about.
Our first project is the Aristo project, and that’s about building a computer program that’s able to answer science questions of the sort that we would ask a fourth grader, and now we’re also working with eighth-grade science. And people sometimes ask me, “Well, gosh, why do you want to do that? Are you trying to put 10-year-olds out of work?” And the answer is, of course not.
We really want to use that test—science test questions—as a benchmark for how well we are doing in intelligence, right? We see tremendous success in computer programs like AlphaGo, beating the world champion in Go. And we say, “Well, how does that translate to language—and particularly to understanding language—and understanding diagrams, understanding science?”
And one way to answer that question is to, kind of, level the playing field with, “Let’s ask machines and people the same questions.” And so we started with these science tests, and we can see that, in fact, people do much better. It turns out, paradoxically, that things that are relatively easy for people are really quite hard for machines, and things that are hard for people—like playing Go at world championship level—those are actually relatively easy for the machine.
Hold on there a minute: I want to take a moment and really dissect this. Any time there’s a candidate chatbot that can make a go at the Turing test, I have a standard question that I start with, and none of them have ever answered it correctly.
It’s a question a four-year-old could answer, which is, “Which is bigger, a nickel or the sun?” So why is that a hard problem? Is what you’re doing, would it be able to answer that? And why would you start with a fourth grader instead of a four-year-old, like really go back to the most basic, basic questions? So the first part of that is: Is what you’re doing, would it be able to answer the question?
Certainly our goal is to give it the background knowledge and understanding ability to be able to answer those types of questions, which combine both basic knowledge, basic reasoning, and enough understanding of language to know that, when you say “a nickel,” you’re not referring to the metal, but you’re referring to a particular coin, with a particular size, and so on.
The reason that’s so hard for the machine is that it’s part of what’s called ‘common sense’ knowledge, right? Of course, the machine, if you programmed it, could answer that particular question—but that’s a stand-in for literally billions of other questions that you could ask about relative sizes, about animal behavior, about the properties of paper versus feathers versus furniture.
There’s really a seemingly infinite—or certainly a very, very large number—of basic questions that people, that certainly eight-year-olds can answer, or four-year-olds, but that machines struggle with. And they struggle with it because, what’s their basis for answering the questions? How would they acquire all that knowledge?
Now, to say, “Well, gosh, why don’t we build a four-year-old, or maybe even a one-year-old?” I’ve actually thought about that. So at the university, we investigated for a summer, trying to follow the developmental ladder, saying: “Let’s start with a six-month-old, and a one-year-old, etc., etc.”
And my interest, in particular, is in language. So I said, “Well, gosh, surely we can build something that can say ‘dada’ or ‘mama’, right?” And then work our way from there. What we found is that, even a very young child, their ability to process language and understand the world around them is so involved with their body—with their gaze, with their understanding of people’s facial expressions—that the net effect was that we could not build a one-year-old.
So, in a funny way, once you’re getting to the level of a fourth grader, who’s reading and answering multiple choice science questions, it gets easier and it gets more focused on language and semantics, and less on having a body, being able to crawl—which, of course, are challenging robotics problems.
So, we chose to start higher up in the ladder, and it was kind of a Goldilocks thing, right? It was more language-focused and, in a funny way, easier than doing a one-year-old, or a four-year-old. And—at the same time—not as hard as, say, college-level biology questions or AP questions, which involve very complicated language and reasoning.
So it’s your thinking that by talking about school science examinations, in particular, you have a really, really narrow vocabulary that you have to master, a really narrow set of objects you have to understand the properties of, is that the idea? Like, AI does well at games because they’re constrained worlds with fixed rules. Are you trying to build an analog to that?
It is an analog, right? In the sense that AI has done well with having narrow tasks and, you know, limited domains. At the same time, “narrow” is probably not the word, really. There is—and this is something that we’ve learned—tremendous variety in these questions: not only variety in the ways of saying things, but also variety because these tests often require you to take something that you have an understanding of—like gravity or photosynthesis—and then apply it to a particular situation.
“What happens if we take a plant and move it nearer to the window?” So that combination of basic scientific knowledge with an application to a real-world situation means that it’s really quite varied. And it’s really a much harder AI problem to answer fourth-grade science questions than it is to solve Go.
I completely get that. I’m going to ask you a question, and it’s going to sound like I’m changing the topic, but it is germane. Do you believe that we’re on a path to building an AGI—a general intelligence? You’re going to learn things doing this, and is it, like, all we will need to do is scale them up more and more, faster, faster, better and better, and you’ll have an AGI? Is this on that trajectory, or is an AGI something completely unrelated to what you’re trying to do here?
That’s a very, very key question. And I would say that we are not on a path to building an AGI—in the sense that, if you build Aristo, and then you scale it to twelfth grade, and more complex vocabulary, and more complex reasoning, and, “Hey, if we just keep scaling this further, we’ll end up with artificial general intelligence, with an AGI.” I don’t think that’s the case.
I think there are many other problems that we have to solve, and this is a part of a very complex picture. And if it’s a path, it’s a very meandering one. But really, the point is that the word “research,” which is obviously what we’re doing here, has the word “search” in it. And that means that we’re iterating, we’re going here, we’re going there, we’re looking, you know.
“Oh, where did I put my keys?” Right? How many times do you retrace your steps and open that drawer, and say, “Oh, but I forgot to look under the socks,” or “I forgot to look under the bed”? It’s this very complex, uncertain process; it’s quite the opposite of, “Oh, I’m going down the path, the goal is clear, and I just have to go uphill for five miles, and I’ll get there.”
I’ve got a book on AI coming out towards the end of this year, and in it, I talk about the Turing test. And I talk about, like, the hardest question I can think of to ask a computer so that I could detect if it’s a computer or a person. And here’s a variant of what I came up with, which is:
“Doctor Smith is eating at his favorite restaurant, that he eats at frequently. He gets a call, an emergency call, and he runs out without paying his bill. Are the owners likely to prosecute?” So, if you think about that… Wow, you’ve got to know he’s a doctor, the call he got is probably a medical emergency, you have to infer that he eats there a lot, that they know who he is, they might even know he’s a doctor. Are they going to prosecute? So, it’s a gazillion social things that you have to know in order to answer that question.
Now, is that also on the same trajectory as solving twelfth grade science problems? Or is that question that I posed, would that require an AGI to answer?
Well, one of the things that we’ve learned is that, whenever you define a task—say answering story types of questions that involve social nuance, and maybe would involve ethical and practical considerations—that is on the trajectory of our research. You can imagine Aristo, over time, being challenged by these more nuanced questions.
But, again, we’ve gotten so good at identifying those tasks, building training sets, building models and then answering those questions, and that program might get good at answering those questions but still have a hard time crossing the street. Still have a hard time reading a poem or telling a joke.
So, the key to AGI is the “G”; the generality is surprisingly elusive. And that’s the amazing thing, because that four-year-old that we were talking about has generality in spades, even though she’s not necessarily a great chess player or a great Go player. So that’s what we learned.
As our AI technology evolves, we keep learning about what is the most elusive aspect of AI. At first, if you read some of the stuff that was written in the ’60s and the ’70s, people were very skeptical that a program could ever play chess, because chess was really seen as something only very intelligent people are good at.
And then, that became solved, and people talked about learning. They said, “Well, gosh, but programs can’t learn.” And as we’ve gotten better, at least at certain kinds of learning, now the emphasis is on generality, right? How do we build a general program, given that all of our successes, whether it’s poker or chess or certain kinds of question answering, have been on very narrow tasks?
So, one sentence I read about Aristo says, “The focus of the project is explained by the guiding philosophy that artificial intelligence is about having a mental model for how things operate, and refining that mental model based on new knowledge.” Can you break that down for us? What do you mean?
Well, I think, again, lots of things. But I think a key thing not to forget—and it goes from your favorite question about a nickel and the sun—is that so much of what we do makes use of background knowledge, just extensive knowledge of facts, of words, of all kinds of social nuances, etc., etc.
And the hottest thing going is deep learning methods. Deep learning methods are responsible for the success in Go, but the thing to remember is that often, at least by any classical definition, those programs are very knowledge-poor. If you could talk to them and ask them, “What do you know?” you’d find out that—while they may have stored a lot of implicit information, say, about the game of Go—they don’t know a whole heck of a lot. And that, of course, touches onto the topic of consciousness, which I understand is also covered in your book. If I asked AlphaGo, “Hey, did you know you won?” AlphaGo can’t answer that question. And it’s not because it doesn’t understand natural languages. It’s not conscious.
Kasparov said that about Deep Blue. He said, “Well, at least it can’t gloat. At least it doesn’t know that it beat me.” To that point, Claude Shannon wrote about computers playing chess back in the ’50s, but it was an enormous amount of work. It took the best minds a long time to build something that could beat Kasparov. Do you think that something like that is generalizable to a lot of other things? Or am I hearing you correctly that that is not a step towards anything general? That’s a whole different kind of thing, and therefore Aristo is, kind of, doing something very different than AlphaGo or chess, or Jeopardy?
I do think that we can generalize from that experience. But I think that generalization isn’t always the one that people make. So what we can generalize is that, when we have a very clear “objective function” or “performance criterion”—basically, it’s very clear who won and who lost—and we have a lot of data, then as computer scientists we’re very, very good—and it still, as you mentioned, took decades—at continuing to chip away at that with faster computers, more data, more sophisticated algorithms, and ultimately solving the problem.
However, in the case of natural language: If you and I, let’s say we’re having a conversation here on this podcast—who won that conversation? Let’s say I want to do a better job if you ever invite me for another podcast. How do I do that? And if my method for getting better involves looking at literally millions of training examples, you’re not going to do millions of podcasts. Right?
So you’re right, that a very different thing needs to happen when things are vaguer, or more uncertain, or more nuanced, when there’s less training data, etc., etc.—all these characteristics that make Aristo and some of our other projects very, very different than chess or Go.
So, where is Aristo? Give me a question it can answer and a question it can’t. Or is that even a cogent question? Where are you with it?
First of all, we keep track of our scores. So, I can give you an example in a second. But when we look at what we call “non-diagram multiple choice”—questions that are purely in language, because diagrams can be challenging for the machine to interpret—we’ve been able to reach very close to eighty percent correctness. Eighty percent accuracy on non-diagram multiple choice questions for fourth grade.
When you include all questions, we’re at sixty percent. Which is either great or not. It’s great because when we started—on all these questions with diagrams, and what are called “direct answer questions,” where you have to answer with a phrase or a sentence rather than just choose between four options—we were close to twenty percent. We were far lower.
So, we’ve made a lot of progress, so that’s on the glass-half-full side. And the glass-half-empty side, we’re still getting a D on a fourth-grade science test. So it’s all a question of how you look at it. Now, when you ask, “What questions can we solve?” We actually have a demo on our website, on AllenAI.org, that illustrates some of these.
If I go to the Aristo project there, and I click on “live demo,” I see questions like, “What is the main source of energy for the water cycle?” Or even, “The diagram below shows a food chain. If the wheat plants died, the population of mice would likely _______?” So, these are fairly complex questions, right?
But they’re not paragraph-long, and the thing that we’re still struggling with is what we call “brittleness.” If you take any one of these questions that we can answer, and then change the way you ask the question a bit, all of a sudden we fail. This is, by the way, a characteristic of many AI systems, this notion of brittleness—where a small change that a human might say, “Oh, that’s no different at all.” It can make a big difference to the machine.
It’s true. I’ve been playing around with an Amazon Alexa, and I noticed that if I say, “How many countries are there?” it gives me one number. If I say, “How many countries are there in the world?” it gives me a different number. Even though a human would see that as the same question. Is that the sort of thing you’re talking about?
That’s exactly the sort of thing I’m talking about, and it’s very frustrating. And, by the way, Alexa and Siri, for the people who want to take the pulse of AI—I mean, again, we’re one of the largest nonprofit AI research institutes in the world, but we’re still pretty small at 72 people—Alexa or Siri, those are for-profit companies; there are thousands of people working on those, and it’s still the case that you can’t carry on a halfway decent dialogue with these programs.
And I’m not talking about the cutesy answers about, you know, “Siri, what are you doing tonight?” Or, “Are you better than Alexa?” I’m talking about, let’s say, the kind of dialogue you’d have with a concierge of a hotel, to help you find a good restaurant downtown. And, again, it’s because how do you score dialogues? Right? Who won the dialogue? All those questions, that are very easy to solve in games, are not even really well-posed in the context of a dialogue.
I penned an article about how—and I have to whisper her name, otherwise it will start talking to me—Alexa and Google Assistant give you different answers to factual questions.
So if you ask, “How many seconds are there in a year?” they give you different answers. And if you say, “Who designed the American flag?” they’ll give you different answers. Seconds in a year, you would think that’s an objective question, there’s a right and a wrong answer, but actually one gives you a calendar year, and one gives you a solar year, which is a quarter-day different.
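For what it’s worth, that quarter-day gap is easy to check; a quick sketch, assuming one assistant used a 365-day calendar year and the other a 365.25-day solar year:

```python
# The ambiguity behind "how many seconds are there in a year?":
# a 365-day calendar year versus a 365.25-day solar year.
SECONDS_PER_DAY = 24 * 60 * 60               # 86,400

calendar_year = 365 * SECONDS_PER_DAY        # 31,536,000 seconds
solar_year = int(365.25 * SECONDS_PER_DAY)   # 31,557,600 seconds

print(solar_year - calendar_year)            # 21,600 seconds, i.e. a quarter of a day
```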
And with the American flag, one says Betsy Ross, and the other one says the person who designed the 50-star configuration of the flag, which is our current flag. And in the end, both times those were the questioner’s fault, because the question itself is inherently vague, right? And so, even if the system is good, if the questions are poorly phrased, it still breaks, right? It’s still brittle.
I would say that it’s the computer’s fault. In other words, again, an aspect of intelligence is being able to answer vague questions and being able to explain yourself. But these systems, even if their fact store is enormous—and one day, they’ll certainly exceed ours—if all it can do when you say, “Well, why did you give me this number?” is say, “Well, I found it here,” then really it’s a big lookup table.
It’s not able to deal with the vagueness, or to explain itself in a more meaningful way. What if you put the number three in that table? You ask, “How many seconds are there in a year?” The program would happily say, “Three.” And you say, “Does that really make sense?” And it would say, “Oh, I can’t answer that question.” Right? Whereas a person, would say, “Wait a minute. It can’t be three seconds in a year. That just doesn’t make sense!” Right? So, we have such a long way to go.
Right. Well, let’s talk about that. You’re undoubtedly familiar with John Searle’s Chinese Room question, and I’ll set it up for the listener—because what I’m going to ask you is, is it possible for a computer to ever understand anything?
The setup, very briefly—I mean, I encourage people to look it up—is that there’s a person in a room and he doesn’t speak any Chinese, and he’s given Chinese questions, and he’s got all these books he can look it up in, but he just copies characters down and hands them back. And he doesn’t know if he’s talking about cholera or coffee beans or what have you. And the analogy is, obviously, that’s what a computer does. So can a computer actually understand anything?
You know, the Chinese Room thought experiment is really one of the most tantalizing and fun thought experiments in philosophy of mind. And so many articles have been written about it, arguing this, that or the other thing. In short, I think it does expose some of the issues, and the bottom line is when you look under the hood at this Chinese Room and the system there, you say, “Gosh, it sure seems like it doesn’t understand anything.”
And when you take a computer apart, you say, “Gosh, how could it understand? It’s just a bunch of circuits and wires and chips.” The only problem with that line of reasoning is, it turns out that if you look under the hood in a person’s mind—in other words, if you look at their brain—you see the same thing. You see neurons and ion potentials and chemical processes and neurotransmitters and hormones.
And when you look at it at that level, surely, neurons can’t understand anything either. I think, again, without getting to a whole other podcast on the Chinese Room, I think that it’s a fascinating thing to think about, but it’s a little bit misleading. Understanding is something that emerges from a complex technical system. That technical system could be built on top of neurons, or it could be built on top of circuits and chips. It’s an emergent phenomenon.
Well, then I would ask you, is it strong emergence or is it weak emergence? But, we’ve got three more projects to discuss. Let’s talk about Euclid.
Euclid is, really, a sibling of Aristo, and in Euclid we’re looking at SAT math problems. The Euclid problems are easier in the sense that you don’t need all this background knowledge to answer these pure math questions. You surely need a lot less of that. However, you really need to very fully and comprehensively understand the sentence. So, I’ll give you my favorite example.
This is a question that is based on a story about Ramanujan, the Indian number theorist. He said, “What’s the smallest number that’s the sum of two cubes in two different ways?” And the answer to that question is a particular number, which the listeners can look up on Google. But, to answer that correctly, you really have to fully parse that rather long and complicated sentence and understand “the sum of two cubes in two different ways.” What on earth does that mean?
And so, Euclid is working to have a full understanding of sentences and paragraphs, which are the kind of questions that we have on the SATs. Whereas often with Aristo—and certainly, you know, with things like Watson and Jeopardy—you could get away with a much more approximate understanding, “this question is sort of about this.” There’s no “sort of” when you’re dealing with math questions, and you have to give the answer.
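To see why “the sum of two cubes in two different ways” demands a complete parse, it helps to write the question down as a computation. This is just a brute-force illustration, not Euclid’s code:

```python
# Find the smallest number that is the sum of two positive cubes
# in two different ways (the Hardy-Ramanujan number).
from collections import defaultdict
from itertools import combinations_with_replacement

sums = defaultdict(list)
for a, b in combinations_with_replacement(range(1, 30), 2):
    sums[a ** 3 + b ** 3].append((a, b))

answer = min(n for n, pairs in sums.items() if len(pairs) > 1)
print(answer, sums[answer])  # 1729 [(1, 12), (9, 10)]
```

An approximate, “this question is sort of about cubes” reading gets you nowhere; every word in the sentence changes what the loop has to count.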
And so that is, as you say, a sibling to Aristo; but Plato, the third one we’re going to discuss, is something very different, right?
Right. Maybe if we’re using this family metaphor, Plato is Aristo’s and Euclid’s cousin, and what’s going on there is we don’t have a natural benchmark test, but we’re very, very interested in vision. We’ve realized that a lot of the questions that we want to address, a lot of the knowledge that is present in the world isn’t expressed in text, certainly not in any convenient way.
One great way to learn about the sizes of things—not just the sun and a nickel, but maybe even a giraffe and a butterfly—is through pictures. You’re not going to find the sentence that says, “A giraffe is much bigger than a butterfly,” but if you see pictures of them, you can make that connection. Plato is about extracting knowledge from images, from videos, from diagrams, and being able to reason over that to draw conclusions.
So, Ali Farhadi, who leads that project and who shares his time between us and the Allen School at University of Washington, has done an amazing job generating result after result, where we’re able to do remarkable things based on images.
My favorite example of this—you kind of have to visualize it—imagine drawing a diagonal line and then a ball on top of that line. What’s going to happen to that ball? Well, if you can visualize it, of course the ball’s going to roll down the line—it’s going to roll downhill.
It turns out that most algorithms are actually really challenged to make that kind of prediction, because to make that kind of prediction, you have to actually reason about what’s going on. It’s not just enough to say, “There’s a ball here on a line,” but you have to understand that this is a slope, and that gravity is going to come into play, and predict what’s going to happen. So, we really have some of the state-of-the-art capabilities, in terms of reasoning over images and making predictions.
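The prediction the pixels cannot make on their own is ordinary physics. A minimal sketch, treating the ball as a frictionless point mass on an assumed 30-degree slope, shows the kind of background knowledge the reasoning has to supply:

```python
# Knowledge the image does not contain: gravity and the geometry of the slope.
import math

g = 9.81                     # m/s^2, gravitational acceleration
theta = math.radians(30)     # assumed slope of the drawn line
a = g * math.sin(theta)      # acceleration down the slope (frictionless point mass)

t = 1.0                      # one second after the ball is released...
distance = 0.5 * a * t ** 2  # ...it has traveled this far down the line
print(f"{distance:.2f} m down the slope")
```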
Isn’t video a whole different thing, because you’re really looking at the differences between images, or is it the same basic technology?
At a technical level, there are many differences. But actually, the elegant thing about video is that, as you intimated, a video is just a sequence of images. It’s really our eye, or our mind, that constructs the continuous motion. All it is, is a number of images shown per second. Well, for us, it’s a wonderful source of training data, because I can take the image at Second 1 and make a prediction about what’s going to happen in Second 2. And then I can look at what happened at Second 2, and see whether the prediction was correct or not. Did the ball roll down the hill? Did the butterfly land on the giraffe? So there are a lot of commonalities, and video is actually a very rich source of images and training data.
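That “predict Second 2 from Second 1, then check” loop is the self-supervised idea in miniature. Here is a toy, runnable sketch, with random arrays standing in for frames and a linear map standing in for the model (neither is anything Plato actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((10, 8, 8))      # stand-in for ten tiny 8x8 video frames

W = np.zeros((64, 64))               # a trivial linear "model"
lr = 0.01
for _ in range(50):
    for t in range(len(frames) - 1):
        x, y = frames[t].ravel(), frames[t + 1].ravel()
        err = W @ x - y              # compare the prediction with what actually happened
        W -= lr * np.outer(err, x)   # gradient step on the squared error; the video labels itself
```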
One of the challenges with images is—well, let me give an example, then we can discuss it. Say I lived on a cul-de-sac, and the couple across the street were expecting—the woman is nine months pregnant—and one time I get up at three in the morning and I look out the window and their car is gone. I would say, “Aha, they must have gone to the hospital.” In other words, I’m reasoning from what’s not in the image. That would be really hard, wouldn’t it?
Yes. You’re way ahead of Plato. It’s very, very true.
Let me anticipate that and jump to Semantic Scholar, because I want to make sure that we get to it. With Semantic Scholar, a number of the capabilities that we see in these other projects come together. Semantic Scholar is a scientific search engine; it’s available 24/7 at semanticscholar.org, and it allows people to look for computer science papers and for neuroscience papers. Soon we’re going to be launching the ability to cover all the papers in biomedicine that are available on engines like PubMed.
And what we’re trying to do there is deal with the fact that there are so many, you know, over a hundred million scientific research papers, and more are coming out every day, and it’s virtually impossible for anybody to keep up. Our nickname for Semantic Scholar sometimes is Da Vinci, because we say Da Vinci was the last Renaissance man, right?
The person who, kind of, knew all of science. There are no Renaissance men or women anymore, because we just can’t keep up. And that’s a great place for AI to help us, to make scientists more efficient in their literature searches, more efficient in their abilities to generate hypotheses and design experiments.
That’s what we’re trying to do with Semantic Scholar, and that involves understanding language, and that involves understanding images and diagrams, and it involves a lot more.
Why do you think the semantic web hasn’t taken off more, and what is your prediction about the semantic web?
I think it’s important to distinguish between “semantics,” as we use it at Semantic Scholar, and “semantics” in the semantic web. In Semantic Scholar, we try to associate semantic information with text. For example, this paper is about a particular brain region, or this paper uses fMRI methodology, and so on. These are pretty simple semantic distinctions.
The semantic web was a very rich notion of semantics that, frankly, is superhuman and is way, way, way beyond what we can do in a distributed world. So that vision by Tim Berners-Lee really evolved over the years into something called “linked open data,” where, again, the semantics is very simple and the emphasis is much more about different players on the web linking their data together.
I think that very, very few people are working on the original notion of the semantic web, because it’s just way too hard.
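As a concrete illustration of the “simple semantic distinctions” described above, here is a hypothetical sketch; the record and field names are invented for illustration and are not Semantic Scholar’s actual schema or API.

```python
# A made-up record showing simple, flat semantic annotations attached to a
# paper: a few facts a search engine can filter on, rather than the rich
# ontologies envisioned for the original semantic web.
paper = {
    "title": "An fMRI study of the hippocampus during navigation",  # invented
    "field_of_study": "Neuroscience",
    "brain_region": "hippocampus",   # "this paper is about a particular brain region"
    "methodology": "fMRI",           # "this paper uses fMRI methodology"
}

if paper["methodology"] == "fMRI" and paper["brain_region"] == "hippocampus":
    print(paper["title"])
```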
I’m just curious, and this is a somewhat frivolous question: the names of your projects don’t seem to follow an overarching naming scheme. Is that because they were created and named elsewhere, or what?
Well, it’s because, you know, if you put a computer scientist, which is me, in charge of branding, you’re going to run into problems. So, I think, Aristo and Euclid are what we started with, and those were roughly analogous. Then we added Plato, which is an imperfect name, but still roughly in the same classical Greek world. And then Semantic Scholar really is a play on Google Scholar.
So Semantic Scholar is, if you will, really the odd duck here. And when we were considering a project on dialogue—which we still are—we called that project Socrates. But then I’m also thinking, “Do we really want all the projects to be named after men?” which is definitely not our intent. So, I think the bottom line is that it’s an imperfect naming scheme, and it’s all my fault.
So, the mission of the Allen Institute for AI is, quote: “Our mission is to contribute to humanity through high-impact AI research and engineering.” Talk to me about the “contribute to humanity” part of that. What do you envision? What do you hope comes of all of this?
Sure. So, I think that when we started, we realized that so often AI is vilified—particularly in Hollywood films, but also by folks like Stephen Hawking and Elon Musk—and we wanted to emphasize AI for the common good, AI for humanity, where we saw some real benefits to it.
And also, in a lot of for-profit companies, AI is used to target advertising, or to get you to buy more things, or to violate your privacy, whether it’s being used by intelligence agencies or by aggressive marketers. And we really wanted to find places, like Semantic Scholar, where AI can help solve some of humanity’s thorniest problems by helping scientists.
And so, that’s where it comes from; it’s a contrast to these other, either more negative uses, or more negative views of AI. And we’ve been really pleased that, since we were founded, organizations like OpenAI or the Partnership on AI, which is an industry consortium, have adopted missions that are very consistent and kind of echo ours, you know: AI to benefit humanity and society and things like that. So it seems like more and more of us in the field are really focused on using AI for good.
You mentioned fear of AI, and the fear manifests in two different ways—and you can understand Hollywood; I mean, it’s drama, right? One is what you alluded to, that it’s somehow bad, you know, Terminator or what have you. But the other one that is on everybody’s mind is: what do you think about AI’s effect on employment and jobs?
I think that’s a very serious concern. As you can tell, I’m not a big fan of the doomsday scenarios about AI. I tell people we should not confuse science with science fiction. But another reason why we shouldn’t concern ourselves with Skynet and doomsday scenarios is that we have a lot more realistic and pressing problems to worry about. And one of those, for example, is AI’s impact on jobs. That’s a very real concern.
We’ll see it particularly soon in the transportation sector, I predict, where truck drivers and Uber drivers and so on are going to be gradually squeezed out of the market, and that’s a very significant number of workers. And it’s a challenge, of course, to help these people, to retrain them, to help them find other jobs in an increasingly digital economy.
But, you know, in the history of the United States, at least over the past couple of hundred years, there have been a number of really disruptive technologies that have come along—the electrification of industry, the mechanization of industry, the replacement of animal power with steam—things that hit quickly, and yet unemployment never once budged because of them. Because what happens is, people just use the new technology. And isn’t it at least possible that, as we move along with the development of artificial intelligence, it actually is an empowering technology that lets people use it to increase their own productivity? Like, anybody could use it to increase their productivity.
I do think that AI will have that role, and I do think that, as you intimated, these technological forces have some real positives. So, the reason that we have phones and cars and washing machines and modern medicine, all these things that make our lives better and that are broadly shared through society, is because of technological advances. So I don’t think of these technological advances, including AI advances, as either a) negative; or b) avoidable.
If we say, “Okay, we’re not going to have AI,” or “We’re not going to have computers,” well, other countries will, and they’ll overtake us. I think that it’s very, very difficult, if not impossible, to stop broad-based technological change. Narrow technologies that are particularly terrible, like landmines or biological weapons, we’ve been able to stop. But AI isn’t stoppable, because it’s much broader; and it’s not something that should be stopped, either.
So I very much agree with what you said, but with one key caveat. We survived those things and we emerged thriving, but the disruption, over significant periods of time and for millions of people, was very, very difficult. As we went from a society that was ninety-something percent agricultural to one where only two percent of workers are in agriculture, people suffered and people were unemployed. And so, I do think that we need to have programs in place to help people with these transitions.
And I don’t think that they’re simple because some people say, “Sure, those old jobs went away, but look at all these great jobs. You know, web developer, computer programmer, somebody who leverages these technologies to make themselves more effective at their jobs.” That’s true, but the reality is a lot more complicated. Are all these truck drivers really going to become web developers?
Well, I don’t think that’s the argument, right? The argument is that everybody moves one small notch up. So somebody who was a math teacher in a college maybe becomes a web developer, and a high school teacher becomes the college teacher, and then a substitute teacher gets the full-time job.
Nobody says, “Oh, no, no, we’re going to take these people, you know, who have less training and we’re going to put them in these highly technical jobs.” That’s not what happened in the past either, right? The question is: can everybody do a job a little more complicated than the one they have today? And if the answer to that is yes, then do we really have a big disruption coming?
Well, first of all, you’re making a fair point. I was oversimplifying by mapping the truck drivers to the developers. But, at the same time, I think we need to remember that these changes are very disruptive. So, the easiest example to give, because it’s fresh in my mind and, I think, in other people’s minds—let’s look at Detroit. That wasn’t so much technological change as globalization and the shifting of manufacturing jobs out of the US.
But nevertheless, these people didn’t just each take a little step up or a little step to the right, whatever you want to say. These people and their families suffered tremendously. And it’s had very significant ramifications, including Detroit going bankrupt, including many people losing their health care, including the vote for President Trump. So I think if you think on a twenty-year time scale, will the negative changes be offset by positive changes? Yes, to a large extent. But if you think on shorter time scales, and you think about particular populations, I don’t think we can just say, “Hey, it’s going to all be alright.” I think we have a lot of work to do.
Well, I’m with you there, and if there’s anything that I think we can take comfort in, it’s that the country did that before. There used to be a debate in the country about whether post-literacy education was worth it. This was back when we were an agricultural society. And you can understand the logic, right? “Well once somebody learns to read, why do you need to keep them in school?” And then, people said, “Well, the jobs of the future are going to need a lot more skills.” That’s why the United States became the first country in the world to guarantee a high school education to every single person.
And it sounds like you’re saying something like that, where we need to make sure that our education opportunities stay in sync with the requirements of the jobs we’re creating.
Absolutely. I think we are agreeing that there’s a tremendous potential for this to be positive, you know? Some people, again, have a doomsday scenario for jobs and society. And I agree with you a hundred percent; I don’t buy into that. And it sounds like we also agree, though, that there are things that we could do to make these transitions smoother and easier on large segments of society.
And it definitely has to do with improving education and finding opportunities, etc., etc. So, I think it’s really a question of how painful this change will be, and how long it will take until we’re at a new equilibrium that, by the way, could be a fantastic one. Because, you know, the interesting thing about the truck jobs, the toll-booth jobs that went away, and a lot of other jobs that went away, is that some of these jobs are awful. They’re terrible, right? People aren’t excited about a lot of these jobs. They do them because they don’t have something better. If we can offer them something better, then the world will be a better place.
Absolutely. So we’ve talked about AGI. I assume you think that we’ll eventually build a general intelligence.
I do think so. I think it will easily take more than twenty-five years; it could take as long as a thousand years. But I’m what’s called a materialist, which doesn’t mean that I like to shop on Amazon; it means that I believe that, when you get down to it, we’re constructed out of atoms and molecules, and there’s nothing magical about intelligence. Sorry—there’s something tremendously magical about it, but there’s nothing ineffable about it. And so, I think that, ultimately, we will build computer programs that can do and exceed what we can do.
So, by extension, you believe that we’ll build conscious machines as well?
Yes. I think consciousness emerges from it. I don’t think there’s anything uniquely human or biological about consciousness.
The amount of time people think it will take before we create an AGI ranges, in my personal conversations, from five to five hundred years. Where in that spectrum would you cast your ballot?
Well, I would give anyone a thousand-to-one odds that it won’t happen in the next five years. I’ll bet ten dollars against ten thousand dollars, because I’m in the trenches working on these problems right now and we are just so, so far from anything remotely resembling an AGI. And I don’t know anybody in the field who would say or think otherwise.
I know there are some, you know, so-called futurists or what have you… But people actively working on AI don’t see that. And furthermore, even if somebody says some random thing, then I would ask them, “Back it up with data.” What’s your basis for saying that? Look at our progress rates on specific benchmarks and challenges; they’re very promising but they’re very promising for a very narrow task, like object detection or speech recognition or language understanding etc., etc.
Now, when you go beyond ten, twenty, thirty years, who can predict what will happen? So I’m very comfortable saying it won’t happen in the next twenty-five years, and I think that it is extremely difficult to predict beyond that, whether it’s fifty or a hundred or more, I couldn’t tell you.
So, do you think we have all the parts we need to build an AGI? Is it going to take some breakthrough that we can’t even fathom right now? Or, with enough deep learning and faster processors and better algorithms and more data, could you say we are on a path to it now? Or is your sole reason for believing we’re going to build an AGI that you’re a materialist—you know, we’re made of atoms, so we can build something made of atoms?
I think it’s going to require multiple breakthroughs which are very difficult to imagine today. And let me give you a pretty concrete example of that.
We want to take the information that’s in text and images and videos and all that, and represent it internally using a representation language that captures the meaning, the gist of it, the way a listener to this podcast has kind of a gist of what we’ve talked about. We don’t even know what that language looks like. We have various representation languages, but none of them is equal to the task.
Let me give you another way to think about it as a thought experiment. Let’s suppose I was able to give you a computer, a computer that was as fast as I wanted, with as much memory as I wanted. Using that unbelievable computer, would I now be able to construct an artificial intelligence that’s human-level? The answer is, “No.” And it’s not about me. None of us can.
So, if it were really just about speed and so on, then I would be a lot more optimistic about doing it in the short term, because we’re so good at making things run two times faster, ten times faster, at building a faster computer, at storing information. We used to store it on floppy disks, and now we store it here. Next we’re going to be storing it in DNA. This exponential march of technology under Moore’s Law—everything keeps getting faster and cheaper—is, in that sense, phenomenal. But that’s not enough to achieve AGI.
Earlier you said that you tell people not to confuse science with science fiction. But, about science fiction, is there anything that you’ve seen, read, or watched that you actually think is a realistic scenario of what we may be able to do, what the future may hold? Is there anything that you look at and say, well, it’s fiction, but it’s possible?
You know, one of my favorite pieces of fiction is the book Snow Crash, where it, kind of, sketches this future of Facebook and the future of our society and so on. If I were to recommend one book, it would be that. I think a lot of the books about AI are long on science fiction and short on what you call “hard science fiction”; short on reality.
And if we’re talking about science fiction, I’d love to end on a note where, you know, there’s this famous Arthur C. Clarke quote: “Any sufficiently advanced technology is indistinguishable from magic.” So, I think, to a lot of people AI seems like magic, right? We can beat the world champion in Go—and my message to people, again, as somebody who works in the field day in and day out, is that it couldn’t be further from magic.
It’s blood, sweat, and tears—and, by the way, human blood, sweat, and tears—of really talented people, to achieve the limited successes that we’ve had in AI. And AlphaGo, by the way, is the ultimate illustration of that. Because it’s not that AlphaGo defeated Lee Sedol, or that the machine defeated the human. It’s this remarkably talented team of engineers and scientists at Google DeepMind, working for years; they’re the ones who defeated Lee Sedol, with some help from technology.
Alright. Well, that’s a great place to leave it, and I want to thank you so much. It’s been fascinating.
It’s a real pleasure for me, and I look forward both to listening to this podcast, to your other ones, and to reading your book.
Thank you.
[voices_in_ai_link_back]
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

Four Questions For: David Brin

What are the key revolutionary developments that are about to happen or that are happening in artificial intelligence?
Portions of the intelligentsia – typified by Google’s Ray Kurzweil – foresee AI, or Artificial General Intelligence (AGI), bringing good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.
Others, such as Stephen Hawking and Francis Fukuyama, warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species – or at least its relevance on the cosmic stage – a potentiality evoked in many a lurid Hollywood film.
Taking the middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator president Sam Altman to establish OpenAI, an endeavor that aims to keep artificial intelligence research – and its products – accountable by maximizing transparency.
In fact, the panoply of dangers and opportunities may depend on which of half a dozen paths to AI wind up bearing fruit first. Can AI be designed from scratch, via logic, like IBM’s Watson? In that case we might use “laws” like Asimov predicted, to try to keep control. But there are five other general approaches and the lesson when you study them is that “control” just may not be in the cards.
Why are you a proponent of radical transparency, and do you believe that our world is moving in that direction?
A great many modern citizens are rightfully worried about Big Brother. Some fear tyranny coming from snooty academics and faceless government bureaucrats. Others see Orwellian despots arising from conniving aristocrats and faceless corporations. They are all right to worry! Because across 6000 years, only rarely was something other than feudalism or dictatorship tried. Our experiment has been by far the most successful of those exceptions and we should study why.
We did not achieve this by hiding. Those who fret that governments and corporations and the rich will know too much about us — these folks are right to worry, but they reach the wrong conclusion. We will never successfully hide from elites. It never happened and never will. “Encryption” and other romantic fantasies never work for very long. But there is another approach. Not to hide, but to aggressively strip all elites naked enough to supervise them and hold them accountable. We may not be able to stop them from knowing about us. But we can still deter them from doing bad things to us.
That is how we got our current freedom: by answering surveillance with sousveillance, or supervising authority. Proof of this is the spread of video and cell phone cameras on our streets, which are cornering abuses by authorities and, year by year, making it harder for them to do bad things. It’s not perfect. It never will be. But the Moore’s Law of Cameras (sometimes called “Brin’s Corollary to Moore’s Law,” I’m told) seems to be providing citizens with a Great Equalizer, even better than the old Colt .45. This runs diametrically opposite to the Hollywood lesson that technology never works in favor of citizenship. It can and it does.
In any event, the spread of cameras – faster, better, cheaper, more mobile, and vastly more numerous – cannot be stopped. If the elites monopolize this light, we will have Big Brother. But if citizens grab the light, then BB hasn’t a chance.
What are your biggest concerns surrounding developments in artificial intelligence, if any?
Anything done in secret is more likely to result in terrible errors. Secrecy is the underlying mistake that makes every innovation go wrong, in Michael Crichton novels and films! If AI happens in the open, then errors and flaws may be discovered in time… perhaps by other, wary AIs!
Hence, the branch of AI research I fear most is High Frequency Trading (HFT) programs. Wall Street firms have poured more money into this particular realm of AI research than is spent by all of the top universities combined. Notably, HFT systems are designed in utter secrecy, evading the normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best), and insatiable.
Not only are they a potential disaster, waiting to happen… they can only possibly lead to disaster. No other outcome is even remotely plausible.
Why do some people fear AI? Is some amount of caution called for?
We fear that advanced, super-intelligent and powerful entities will do to us what human high achievers always did in the past. They took over our tribes, nations and so on, making themselves kings and lords and priests and tyrants, bossing us around and limiting the potential of those below. Adam Smith wrote that such inherited oligarchies were always far deadlier enemies of creative, competitive, flat-open-fair enterprise than government civil servants could ever be.
Especially because they suppressed criticism, those feudal kings and lords became very, very bad rulers, performing horrifically stupid statecraft while deeming themselves to be so smart. It’s no accident that human civilization only started taking off when we discovered tricks for preventing that failure mode… for keeping things flat-open-fair-competitive.
If new AI minds truly are super intelligent, then they will see what a mistake it would be to emulate that hoary old pattern. What a blunder, if they copy the approach used by puerile-dopey human lords. In my novel EXISTENCE, I explore how AI might choose to take a very different approach than we have seen portrayed in real life or in films.
Now, if only the new AI overlords will read what I wrote, before deciding…
David Brin
David Brin is an astrophysicist whose international best-selling novels include The Postman, Earth, and recently Existence. His nonfiction book about the information age – The Transparent Society – won the Freedom of Speech Award of the American Library Association.