Voices in AI – Episode 57: A Conversation with Akshay Sabhikhi

[voices_in_ai_byline]

About this Episode

Episode 57 of Voices in AI features host Byron Reese and Akshay Sabhikhi talking about how AI augments and informs human intelligence. Akshay Sabhikhi is the CEO and Co-founder of CognitiveScale. He’s got more than 18 years of entrepreneurial leadership, product development and management experience with growth stage venture backed companies and high growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Akshay Sabhikhi. He is the CEO and Co-founder of CognitiveScale. He’s got more than 18 years of entrepreneurial leadership, product development and management experience with growth stage venture backed companies and high growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship. Welcome to the show, Akshay.
Akshay Sabhikhi: Thank you Byron, great to be here.
Why is artificial intelligence working so well now? I mean like, my gosh, what has changed in the last 5-10 years?
You know, the big difference is everyone knows artificial intelligence has been around for decades, but the big difference this time as I’d like to say, is there’s a whole supporting cast of characters that’s making AI really come into its own. And it all starts firstly with the fact that it’s delivering real value to clients, so let’s dig into that.
Firstly, data is fuel for AI, and we all know, with the amount of information we’re surrounded with, we certainly hear about big data all over the place, and you know, it’s the amount and the volume of the information, but it’s also systems that are able to interpret that information. So the type of information I’m talking about is not just your classic databases, nice neatly packaged structured information; it is highly unstructured and messy information that includes, you know, audio, video, certainly different formats of text, images, right? And our ability to really bring that data together and reason over that data is a huge difference.
We talk about a second big supporting cast member here, which is the prominence of social, and I say social because this is the amount of data that’s available through social media, where we can in real time see consumers and how they behave, or whether it is mobile, and the fact that you have devices now in the hands of every consumer, and so you have touch points where insights can be pushed out. Those are the different, I guess, supporting cast members that are now there which didn’t exist before, and that’s one of the biggest changes with the prominence and true, sort of, value people are seeing with AI.
And so give us some examples, I mean you’re at the forefront of this with CognitiveScale. What are some of the things that you see that are working that wouldn’t have worked 5-10 years ago?
Well, so let’s take some examples. So, we use an analogy which is, we’ve all sort of used Waze as an application to get from point A to point B, right? When you look at Waze, it’s a great consumer tool that tells you exactly what’s ahead of you: cop, traffic, debris on the road and so on, and it guides you through your journey, right? Well, if you look at applying a Waze-like analogy to the enterprise where you have a patient, and I’ll use a patient as an example because that’s how we started the company. You’re largely unmanaged: all you do is you show up to your appointments, you get prescriptions, you’re told about your health condition, but then once you leave that appointment, you’re pretty much on your own, right? But think about everything that’s happening around you, think about social determinants, for example, the city you live in, whether you live in the suburbs or you live in downtown, the weather patterns, the air quality, such as the pollen counts for example, or allergens that affect you, or whether it is a specific zip code within the city that tells us about the food choices that exist around you.
There are a lot of determinants that go well beyond your pure, sort of, structured information that comes from an electronic medical record. If you bring all of those pieces of data together, an AI system is able to look at that information, the biggest difference here being in the context of the consumer, in this case the patient, and surface unique insights to them, but it doesn’t stop right there. What an AI system does is, it takes it a step or two further by saying, “I’m going to push insights based on what I’ve learned from data that surrounds you, and hopefully it makes sense to you. And I will give you the mechanisms to provide a thumbs up/thumbs down or specific feedback that I can then incorporate back into the system to learn from it.” So that’s a real-life example of an AI system that we’ve stood up for many of our clients, using various kinds of structured and unstructured information brought together.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 17: A Conversation with James Barrat

[voices_in_ai_byline]
In this episode, Byron and James talk about jobs, human vs. artificial intelligence, and more.
[podcast_player name="Episode 17: A Conversation with James Barrat" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-30-(00-54-11)-james-barrat.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-3-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I am Byron Reese. Today I am so excited that our guest is James Barrat. He wrote a book called Our Final Invention, subtitled Artificial Intelligence and the End of the Human Era. James Barrat is also a renowned documentary filmmaker, as well as an author. Welcome to the show, James.
James Barrat: Hello.
So, let’s start off with, what is artificial intelligence?
Very good question. Basically, artificial intelligence is when machines perform tasks that are normally ascribed to human intelligence. I have a very simple definition of intelligence that I like. Because ‘artificial intelligence’—the definition just throws the ideas back to humans, and [to] human intelligence, which is the intelligence we know the most about.
The definition I like is: intelligence is the ability to achieve goals in a variety of novel environments, and to learn. And that’s a simple definition, but a lot is packed into it. Your intelligence has to achieve goals, it has to do something—whether that’s play Go, or drive a car, or solve proofs, or navigate, or identify objects. And if it doesn’t have some goal that it achieves, it’s not very useful intelligence.
If it can achieve goals in a variety of environments, if it can do object recognition and do navigation and do car-driving like our intelligence can, then it’s better intelligence. So, it’s goal-achieving in a bunch of novel environments, and then it learns. And that’s probably the most important part. Intelligence learns and it builds on its learning.
And you wrote a widely well-received book, Our Final Invention: Artificial Intelligence and the End of the Human Era. Can you explain to the audience just your overall thesis, and the main ideas of the book?
Sure. Our Final Invention is basically making the argument that AI is a dual-use technology. A dual-use technology is one that can be used for great good, or great harm. Right now we’re in a real honeymoon phase of AI, where we’re seeing a lot of nifty tools come out of it, and a lot more are on the horizon. AI, right now, can find cancer clusters in x-rays better than humans. It can do business analytics better than humans. AI is doing what first year legal associates do, it’s doing legal discovery.
So we are finding a lot of really useful applications. It’s going to make us all better drivers, because we won’t be driving anymore. But it’s a dual-use technology because, for one thing, it’s going to be taking a lot of jobs. You know, there are five million professional drivers in the United States, seven million back-office accountants—those jobs are going to go away. And a lot of others.
So the thesis of my book is that we need to look under the hood of AI, look at its applications, look who’s controlling it, and then in a longer term, look at whether or not we can control it at all.
Let’s start with that point and work backwards. That’s an ominous statement. Can we record it at all? What are you thinking there?
Can we control it at all.
I’m sorry, yes. Control it at all.
Well, let me start, I prefer to start the other way. Stephen Hawking said that the trouble with AI is, in the short term, who controls it, and in the long term, can we control it at all? And in the short term, we’ve already suffered some from AI. You know, the NSA recently was accessing your phone data and mine, and getting your phone book and mine. And it was, basically, seizing our phone records, and that used to be illegal.
Used to be that if I wanted to seize, to get your phone records, I needed to go to a court, and get a court order. And that was to avoid abridging the Fourth Amendment, which prevents illegal search and seizure of property. Your phone messages are your property. The NSA went around that, and grabbed our phone messages and our phone data, and they are able to sift through this ocean of data because of AI, because of advanced data mining software.
One other example—and there are many—one other example of, in the short term, who controls the AI, is, right now there are a lot of countries developing battlefield robots and drones that will be autonomous. And these are robots and drones that kill people without a human in the loop.  And these are AI issues. There are fifty-six nations developing battlefield robots.
The most sought after will be autonomous battlefield robots. There was an article just a couple of days ago about how the Marines have a robot that shoots a machine gun on a battlefield. They control it with a tablet, but their goal, as stated there, is to make it autonomous, to work on its own.
In the longer term, I’ll put it in the way that Arthur C. Clarke put it to me, when I interviewed him. Arthur C. Clarke was a mathematician and a physicist before he was a science fiction writer. And he created the HAL 9000 from 2001: A Space Odyssey, probably the most famous homicidal AI. And he said, when I asked him about the control problem of artificial intelligence, he said something like this: He said, “We humans steer the future not because we are the fastest or the strongest creatures, but because we are the most intelligent. And when we share the planet with something that’s more intelligent than we are, it will steer the future.”
So the problem we’re facing, the problem we’re on the cusp of, I can simplify it with a concept called ‘the intelligence explosion’. The intelligence explosion was an idea created by a statistician named I. J. Good in the 1960s. He said, “Once we create machines that do everything as well or better than humans, one of the things they’ll do is create smart machines.”
And we’ve seen artificial intelligence systems slowly begin to do things better than we do, and it’s not a stretch to think about a time to come, when artificial intelligence systems do advanced AI research and development better than humans. And I. J. Good said, “Then, when that happens, we humans will no longer set the pace of intelligence advancement, it will be machines that will set the pace of advancement.”
The trouble of that is, we know nothing about how to control a machine, or a cognitive architecture, that’s a thousand or million times more intelligent than we are. We have no experience with anything like that. We can look around us for analogies in the animal world.
How do we treat things that we’re a thousand times more intelligent than? Well, we treat all animals in a very negligent way. And the smart ones are either endangered, or they’re in zoos, or we eat them. That’s a very human-centric analogy, but I think it’s probably appropriate.
Let’s push on this just a little bit.  So do you…
Sure.
Do you believe… Some people say ‘AI’ is kind of this specter of a term now, that, it isn’t really anything different than any other computer programs we’ve ever run, right? It’s better and faster and all of that, but it isn’t qualitatively anything different than what we’ve had for decades.
And so why do you think that? And when you say that AIs are going to be smarter than us, a million times smarter than us, ‘smarter’ is also a really nebulous term.
I mean, they may be able to do some incredibly narrow thing better than us. I may not be able to drive a car as well as an AI, but that doesn’t mean that same AI is going to beat me at Parcheesi. So what do you think is different? Why isn’t this just incrementally… Because so far, we haven’t had any trouble.
What do you think is going to be the catalyst, or what is qualitatively different about what we are dealing with now?
Sure. Well, there’s a lot of interesting questions packed into what you just said. And one thing you said—which I think is important to draw out—is that there are many kinds of intelligence. There’s emotional intelligence, there’s rational intelligence, there’s instinctive and animal intelligence.
And so, when I say something will be much more intelligent than we are, I’m using a shorthand for: It will be better at our definition of intelligence, it will be better at solving problems in a variety of novel environments, it will be better at learning.
And to put what you asked in another way, you’re saying that there is an irreducible promise and peril to every technology, including computers. All technologies, back to fire, have some good points and some bad points. AI I find qualitatively different. And I’ll argue by analogy, for a second. AI to me is like nuclear fission. Nuclear fission is a dual-use technology capable of great good and great harm.
Nuclear fission is the power behind atom bombs and behind nuclear reactors. When we were developing it in the ‘20s and ‘30s, we thought that nuclear fission was a way to get free energy by splitting the atom. Then it was quickly weaponized. And then we used it to incinerate cities. And then we as a species held a gun at our own heads for fifty years with the arms race. We threatened to make ourselves extinct. And that almost succeeded a number of times, and that struggle isn’t over.
To me, AI is a lot more like that. You said it hasn’t been used for nefarious reasons, and I totally disagree. I gave you an example with the NSA. A couple of weeks ago, Facebook was caught targeting emotionally-challenged and despairing children for advertising.
To me, that’s extremely exploitative. It’s a rather soulless and exploitative commercial application of artificial intelligence. So I think these pitfalls are around us. They’re already taking place. So I think the qualitative difference with artificial intelligence is that intelligence is our superpower, the human superpower.
It’s the ability to be creative, the ability to invent technology. That was one thing Stephen Hawking brought up when he was asked about, “What are the pitfalls of artificial intelligence?”
He said, “Well, for one thing, they’ll be able to develop weapons we don’t even understand.” So, I think the qualitative difference is that AI is the invention that creates inventions. And we’re on the cusp, this is happening now, and we’re on the cusp of an AI revolution, it’s going to bring us great profit and also great vulnerability.
You’re no doubt familiar with Searle’s “Chinese Room” kind of question, but all of the readers, all of the listeners might not be… So let me set that up, and then get your thought on it. It goes like this:
There’s a person in a room, a giant room full of very special books. And he doesn’t—we’ll call him the librarian—and the librarian doesn’t speak a word of Chinese. He’s absolutely unfamiliar with the language.
And people slide him questions under the door which are written in Chinese, and what he does—what he’s learned to do—is to look at the first character in that message, and he finds the book, of the tens of thousands that he has, that has that on the spine. And in that book he looks up the second character. And the book then says, “Okay, go pull this book.”
And in that book he looks up the third, and the fourth, and the fifth, all the way until he gets to the end. And when he gets to the end, it says “Copy this down.” And so he copies these characters again that he doesn’t understand, doesn’t have any clue whatsoever what they are.
He copies them down very carefully, very faithfully, slides it back under the door… Somebody’s outside who picks it up, a Chinese speaker. They read it, and it’s just brilliant! It’s just absolutely brilliant! It rhymes, it’s Haiku, I mean it’s just awesome!
Now, the question, the kind of ta-da question at the end is: Does the man, does the librarian understand Chinese? Does he understand Chinese?
Now, many people in the computer world would say yes. I mean, Alan Turing would say yes, right?  The Chinese room passes the Turing Test. The Chinese speakers outside, as far as they know, they are conversing with a Chinese speaker.
So do you think the man understands Chinese? And do you think… And if he doesn’t understand Chinese… Because obviously, the analogy of it is: that’s all that computer does. A computer doesn’t understand anything. It doesn’t know if it’s talking about cholera or coffee beans or anything whatsoever. It runs this program, and it has no idea what it’s doing.
And therefore it has no volition, and therefore it has no consciousness; therefore it has nothing that even remotely looks like human intelligence. So what would you just say to that?
The Chinese Room problem is fascinating, and you could write books about it, because it’s about the nature of consciousness. And what we don’t know about consciousness, you could fill many books with. And I used to think I wanted to explore consciousness, but it made exploring AI look easy.
I don’t know if it matters that the machine thinks as we do or not. I think the point is that it will be able to solve problems. We don’t know about the volition question. Let me give you another analogy. When Ferrucci, [when] he was the head of Team Watson, he was asked a very provocative question: “Was Watson thinking when it beat all those masters at Jeopardy?” And his answer was, “Does a submarine swim?”
And what he meant was—and this is the twist on the Chinese Room problem—he meant [that] when they created submarines, they learned principles of swimming from fish. But then they created something that swims farther and faster and carries a huge payload, so it’s really much more powerful than fish.
It doesn’t reproduce and it doesn’t do some of the miraculous things fish do, but as far as swimming, it does it.  Does an airplane fly? Well, the aviation pioneers used principles of flight from birds, but quickly went beyond that, to create things that fly farther and faster and carry a huge payload.
I don’t think it matters. So, two answers to your question. One is, I don’t think it matters. And I don’t think it’s possible that a machine will think qualitatively as we do. So, I think it will think farther and faster and carry a huge payload. I think it’s possible for a machine to be generally intelligent in a variety of domains.
We can see intelligence growing in a bunch of domains. If you think of them as ripples in a pool, like different circles of expertise ultimately joining, you can see how general intelligence is sort of demonstrably on its way.
Whether or not it thinks like a human, I think it won’t. And I think that’s a danger, because I think it won’t have our mammalian sense of empathy. It’ll also be good, because it won’t have a lot of sentimentality, and a lot of cognitive biases that our brains are labored with. But you said it won’t have volition. And I don’t think we can bet on that.
In my book, Our Final Invention, I interviewed at length Steve Omohundro, who’s taken upon himself—he’s an AI maker and physicist—and he’d taken it upon himself to create more or less a science for understanding super intelligent machines. Or machines that are more intelligent than we are.
And among the things that he argues for, using rational agent and economic theory—and I won’t go into that whole thing—but it’s in Our Final Invention, it’s also on Steve Omohundro’s many websites. Machines that are self-aware and are self-programming, he thinks, will develop basic drives that are not unlike our own.
And they include things like self-protection, creativity, efficiency with resources, and other drives that will make them very challenging to control—unless we get ahead of the game and create this science for understanding them, as he’s doing.
Right now, computers are not generally intelligent, they are not conscious. All the limitations of the Chinese Room, they have. But I think it’s unrealistic to think that we are frozen in development. I think it’s very realistic to think that we’ll create machines whose cognitive abilities match and then outstrip our own.
But, just kind of going a little deeper on the question. So we have this idea of intelligence, which there is no consensus definition on it. Then within that, you have human intelligence—which, again, is something we certainly don’t understand. Human intelligence comes from our brain, which is—people say—‘the most complicated object in the galaxy’.
We don’t understand how it works. We don’t know how thoughts are encoded. We know incredibly little, in the grand scheme of things, about how the brain works. But we do know that humans have these amazing abilities, like consciousness, and the ability to generalize intelligence very effortlessly. We have something that certainly feels like free will, we certainly have something that feels like… and all of that.
Then on the other hand, you think back to a clockwork, right? You wind up a clock back in the olden days and it just ran a bunch of gears. And while it may be true that the computers of the day add more gears and have more things, all we’re doing is winding it up and letting it go.
And, isn’t it, like… not only a stretch, not only a supposition, not only just sensationalistic, to say, “Oh no, no. Someday we’ll add enough gears that, you wind that thing up, and it’s actually going to be a lot smarter than you.”
Isn’t that, I mean at least it’s fair to say there’s absolutely nothing we understand about human intelligence, and human consciousness, and human will… that even remotely implies that something that’s a hundred percent mechanical, a hundred percent deterministic, a hundred percent… Just wind it and it doesn’t do anything. But…
Well, you’re wrong about being a hundred percent deterministic, and it’s not really a hundred percent mechanical. When you talk about things like will, will is such an anthropomorphic term, I’m not sure if we can really, if we can attribute it to computers.
Well, I’m specifically saying we have something that feels and seems like will, that we don’t understand.
If you look, if you look at artificial neural nets, there’s a great deal about them we don’t understand. We know what the inputs are, and we know what the outputs are; and when we want to make better output—like a better translation—we know how to adjust the inputs. But we don’t know what’s going on in a multilayered neural net system. We don’t know what’s going on in a high resolution way. And that’s why they’re called black box systems, and evolutionary algorithms.
In evolutionary algorithms, we have a sense of how they work. We have a sense of how they combine pieces of algorithms, how we introduce mutations. But often, we don’t understand the output, and we certainly don’t understand how it got there, so that’s not completely deterministic. There’s a bunch of stuff we can’t really determine in there.
And I think we’ve got a lot of unexplained behavior in computers that’s, at this stage, we simply attribute to our lack of understanding. But I think in the longer term, we’ll see that computers are doing things on their own. I’m talking about a lot of the algorithms on Wall Street, a lot of the flash crashes we’ve seen, a lot of the cognitive architectures. There’s not one person who can describe the whole system… the ‘quants’, they call them, or the guys that are programming Wall Street’s algorithms.
They’ve already gone, in complexity, beyond any individual’s ability to really strip them down.
So, we’re surrounded by systems of immense power. Gartner and company think that in the AI space—because of the exponential nature of the investment… I think it started out, and it’s doubled every year since 2009—Gartner estimates that by 2025, that space will be worth twenty-five trillion dollars of value. So to me, that’s a couple of things.
That anticipates enormous growth, and enormous growth in power in what these systems will do. We’re in an era now that’s different from other eras. But it is like other Industrial Revolutions. We’re in an era now where everything that’s electrified—to paraphrase Kevin Kelly, the futurist—everything that’s electrified is being cognitized.
We can’t pretend that it will always be like a clock. Even now it’s not like a clock. A clock you can take apart, and you can understand every piece of it.
The cognitive architectures we’re creating now… When Ferrucci was watching Watson play, and he said, “Why did he answer like that?” There’s nobody on his team that knew the answer. When it made mistakes… It did really, really well; it beat the humans. But comparing [that] to a clock, I think that’s the wrong metaphor.
Well, let’s just poke at it just one more minute, and then we can move on to something else. Is that really fair to say, that because humans don’t understand how it works, it must be somehow working differently than other machines?
Put another way, it is fair to say, because we’ve added enough gears now, that nobody could kind of keep them all straight. I mean nobody understands why the Google algorithm—even at Google—turns up what it does when you search. But nobody’s suggesting anything nondeterministic, nothing emergent, anything like that is happening.
I mean, our computers are completely deterministic, are they not?
I don’t think that they are. I think if they were completely deterministic, then enough brains put together could figure out a multi-tiered neural net, and I don’t think there’s any evidence that we can right now.
Well, that’s exciting.  
I’m not saying that it’s coming up with brilliant new ideas… But a system that’s so sophisticated that it defeats Go, and teaches grandmasters new ideas about Go—which is what the grandmaster who it defeated three out of four times said—[he] said, “I have new insights about this game,” that nobody could explain what it was doing, but it was thinking creatively in a way that we don’t understand.
Go is not like chess. On a chess board, I don’t know how many possible positions there are, but it’s calculable. On a Go board, it’s incalculable. There are more—I’ve heard it said, and I don’t really understand it very well—I heard it said there are more possible positions on a Go board than there are atoms in the universe.
So when it’s beating Go masters… Therefore, playing the game requires a great deal of intuition. It’s not just pattern-matching. Like, I’ve played a million games of Go—and that’s sort of what chess is [pattern-matching].
You know, the grandmasters are people who have seen every board you could possibly come up with. They’ve probably seen it before, and they know what to do. Go’s not like that. It requires a lot more undefinable intuition.
And so we’re moving rapidly into that territory. The program that beat the Go masters is called AlphaGo. It comes out of DeepMind. DeepMind was bought four years ago by Google. Going deep into reinforcement learning and artificial neural nets, I think your argument would be apt if we were talking about some of the old languages—Fortran, Basic, Pascal—where you could look at every line of code and figure out what was going on.
That’s no longer possible, and you’ve got Go grandmasters saying “I learned new insights.” So we’re in a brave new world here.
So you had a great part of the book, where you do a really smart kind of roll-up of when we may have an AGI. Where you went into different ideas behind it. And the question I’m really curious about is this: On the one hand, you have Elon Musk saying we can have it much sooner than you think. You have Stephen Hawking, who you quoted. You have Bill Gates saying he’s worried about it.
So you have all of these people who say it’s soon, it’s real, and it’s potentially scary. We need to watch what we do. Then on the other camp, you have people who are equally immersed in the technology, equally smart, equally, equally, equally all these other things… like Andrew Ng, who up until recently headed up AI at Baidu, who says worrying about AGI is like worrying about overpopulation on Mars. You have other people saying the soonest it could possibly happen is five hundred years from now.
So I’m curious about this. Why do you think, among these big brains, super smart people, why do they have… What is it that they believe or know or think, or whatever, that gives them such radically different views about this technology? How do you get your head around why they differ?
Excellent question. I first heard that Mars analogy from, I think it was Sebastian Thrun, who said we don’t know how to get to Mars. We don’t know how to live on Mars. But we know how to get a rocket to the moon, and gradually and slowly, little by little—No, it was Peter Norvig, who wrote the sort of standard text on artificial intelligence, called Artificial Intelligence: A Modern Approach.
He said, you know, “We can’t live on Mars yet, but we’re putting the rockets together. Some companies are putting in some money. We’re eventually going to get to Mars, and there’ll be people living on Mars, and then people will be setting another horizon.” We haven’t left our solar system yet.
It’s a very interesting question, and very timely, about when will we achieve human-level intelligence in a machine, if ever. I did a poll about it. It was kind of a biased poll; it was of people who were at a conference about AGI, about artificial general intelligence. And then I’ve seen a lot of polls, and there’s two points to this.
One is the polls go all over the place. Some people said… Ray Kurzweil says 2029. Ray Kurzweil’s been very good at anticipating the progress of technology, he says 2029. Ray Kurzweil’s working for Google right now—this is parenthetically—he said he wants to create a machine that makes three hundred trillion calculations per second, and to share that with a billion people online. So what’s that? That’s basically reverse engineering of a brain.
Making three hundred trillion calculations per second, which is sort of a rough estimate of what a brain does. And then sharing it with a billion people online, which is making superintelligence a service, which would be incredibly useful. You could do pharmacological research. You could do really advanced weather modeling, and climate modeling. You could do weapons research, you could develop incredible weapons. He says 2029.
Some people said one hundred years from now. The mean date that I got was about 2045 for human-level intelligence in a machine. And then my book, Our Final Invention, got reviewed by Gary Marcus in the New Yorker, and he said something that stuck with me. He said whether or not it’s ten years or one hundred years, the more important question is: What happens next?
Will it be integrated into our lives? Or will it suddenly appear? How are we positioned for our own safety and security when it appears, whether it’s in fifty years or one hundred? So I think about it as… Nobody thought Go was going to be beaten for another ten years.
And here’s another way… So those are the two ways to think about it: one is, there’s a lot of guesses; and two, does it really matter what happens next? But the third part of that is this, and I write about it in Our Final Invention: If we don’t achieve it in one hundred years, do you think we’re just going to stop? Or do you think we’re going to keep beating at this problem until we solve it?
And as I said before, I don’t think we’re going to create exactly human-like intelligence in a machine. I think we’re going to create something extremely smart and extremely useful, to some extent, but something we, in a very deep way, don’t understand. So I don’t think it’ll be like human intelligence… it will be like an alien intelligence.
So that’s kind of where I am on that. I think it could happen in a variety of timelines. It doesn’t really matter when, and we’re not going to stop until we get there. So ultimately, we’re going to be confronted with machines that are a thousand or a million times more intelligent than we are.
And what are we going to do?
Well, I guess the underlying assumption is… it speaks to the credibility of the forecast, right? Like, if there’s a lab, and they’re working on inventing the lightbulb, like: “We’re trying to build the incandescent light bulb.” And you go in there and you say, “When will you have the incandescent light bulb?” and they say “Three or four weeks, five weeks. Five weeks tops, we’re going to have it.”  
Or if they say, “Uh, a hundred years. It may be five hundred, I don’t know.” I mean in those things you take a completely different view of, do we understand the problem? Do we know what we’re building? Do we know how to build an AGI? Do we even have a clue?
Do you believe… or here, let me ask it this way: Do you think an AGI is just an evolutionary… Like, we have AlphaGo, we have Watson, and we’re making them better every day. And eventually, that kind of becomes—gradually—this AGI. Or do you think there’s some “A-ha” thing we don’t know how to do, and at some point we’re like “Oh, here’s how you do it! And this is how you get a synapse to work.”
So, do you think we are nineteen revolutionary breakthroughs away, or “No, no, no, we’re on the path. We’re going to be there in three to five years.”?
Ben Goertzel, who is definitely in the race to make AGI—I interviewed him in my book—said we need some sort of breakthrough. And then we got artificial neural nets and deep learning, and deep learning combined with reinforcement learning, which is an older technique, and that was kind of a breakthrough. Before that, for IBM's Deep Blue to beat chess, it was really just looking up tables of positions.
But to beat Go, as we’ve discussed, was something different.
I think we’ve just had a big breakthrough. I don’t know how many revolutions we are away from a breakthrough that makes intelligence general. But let me give you this… the way I think about it.
There’s long been talk in the AI community about an algorithm… I don’t know exactly what they call it. But it’s basically an open-domain problem-solver that asks something simple like, what’s the next best move? What’s the next best thing to do? Best being based on some goals that you’ve got. What’s the next best thing to do?
Well, that’s sort of how DeepMind took on all the Atari games. They could drop the algorithm into a game, and it didn’t even know the rules. It just noticed when it was scoring or not scoring, and so it was figuring out what’s the next best thing to do.
Well if you can drop it into every Atari game, and then you drop it into something that’s many orders of magnitude above it, like Go, then why are we so far from dropping that into a robot and setting it out into the environment, and having it learn the environment and learn common sense about the environment—like, “Things go under, and things go over; and I can’t jump into the tree; I can climb the tree.”
It seems to me that general intelligence might be as simple as a program that says “What’s the next best thing to do?” And then it learns the environment, and then it solves problems in the environment.
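The "what's the next best thing to do?" loop described above can be sketched very roughly in code. This is an illustrative toy, not DeepMind's actual system: tabular Q-learning on an invented five-cell corridor, where the agent never sees the rules and only notices when its score changes. Every name and number here is made up for illustration.

```python
import random

random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Hidden environment rule: being in the rightmost cell scores a point."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
for _episode in range(200):
    s = 0
    for _ in range(20):
        # "Next best move": usually the highest-value action, sometimes explore;
        # ties are broken at random so early behavior is an unbiased walk.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: (Q[(s, act)], random.random()))
        s2, r = step(s, a)
        # Standard Q-learning update from the observed score change alone.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

best = max(ACTIONS, key=lambda act: Q[(0, act)])
print(best)   # the learned "next best thing to do" from the start: move right
```

The point of the sketch is that nothing game-specific is coded in: the same loop could be dropped into a different `step` function and would learn whatever scores points there.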
So some people are going about that by training algorithms, artificial neural net systems and defeating games. Some people are really trying to reverse-engineer a brain, one neuron at a time. That’s sort of, in a nutshell—to vastly overgeneralize—that’s called the bottom-up, and the top-down approach for creating AGI.
So are we a certain number of revolutions away, or are we going to be surprised? I’m surprised a little too frequently for my own comfort about how fast things are moving. Faster than when I was writing the book. I’m wondering what the next milestone is. I think the Turing Test has not been achieved, or even close. I think that’s a good milestone.
It wouldn’t surprise me if IBM, which is great at issuing itself grand challenges and then beating them… But what’s great about IBM is, they’re upfront. They take on a big challenge… You know, they were beaten—Deep Blue was beaten several times before it won. When they took on Jeopardy, they weren’t sure they were going to win, but they had the chutzpah to get out there and say, “We’re gonna try.” And then they won.
I bet IBM will say, “You know what, in 2020, we’re going to take on the Turing Test. And we’re going to have a machine that you can’t tell that it’s a machine. You can’t tell the difference between a machine and a human.”
So, I’m surprised all the time. I don’t know how far or how close we are, but I’d say I come at it from a position of caution. So I would say, the window in which we have to create safe AI is closing.
Yes, no… I’m with you; I was just taking that in. I’ll insert some ominous “Dun, dun, dun…” Take that a little further.
Everybody has a role to play in this conversation, and mine happens to be canary in a coal mine. Despite the title of my book, I really like AI. I like its potential. Medical potential. I don't like its war potential… If we see autonomous robots on the battlefield, you know what's going to happen. Like every other piece of used military equipment, it's going to come home.
Well, the thing about the military… and the thing about technology is… If you told my dad that he would invite into his home a representative of Google, and that representative would sit in a chair in a corner of the house, and he would take down everything we said, and would sell that data to our insurance company, so our insurance rates might go up… and it would sell that data to mortgage bankers, so they might cut off our ability to get a mortgage… because Dad talks about going bankrupt, or Dad talks about his heart condition… and he can't get insurance anymore.
But if we hire a corporate guy, and we pay for it, and put him in our living room… Well, that’s exactly what we’re doing with Amazon Echo, with all the digital assistants. All this data is being gathered all the time, and it’s being sold… Buying and selling data is a four billion dollar-a-year industry. So we’re doing really foolish things with this technology. Things that are bad for our own interests.
So let me ask you an open-ended question… prognostication over shorter time frames is always easier. Tell me what you think is in store for the world, I don’t know, between now and 2030, the next thirteen years. Talk to me about unemployment, talk to me about economics, all of that. Tell me the next thirteen years.
Well, brace yourself for some futurism, which is a giant gamble and often wrong. To paraphrase Kevin Kelly again, everything that’s electrical will be cognitized. Our economy will be dramatically shaped by the ubiquity of artificial intelligence. With the Internet of Things, with the intelligence of everything around us—our phones, our cars…
I can already talk to my car. I'm inside my car, I can ask for directions, I can do some other basic stuff. That's just going to get smarter, until my car drives itself. MIT did a study, quoting a Cambridge study, that said: “Forty-five percent of our jobs will be able to be replaced within twenty years.” I think they downgraded that to something like ten years.
Not that they will be replaced, but that they will be able to be replaced. And when AI is worth twenty-five trillion dollars, in 2025, everybody will be able to replace any employee that's doing anything remotely repetitive—and this includes doctors and lawyers. We'll be able to replace them with AI.
And this cuts deep into the middle class. This isn’t just people working in factories or driving cars. This is all accountants, this is a lot of the doctors, this is a lot of the lawyers. So we’re going to see giant dislocation, or giant disruption, in the economy. And giant money being made by fewer and fewer people.
And the trouble with that is, that we’ve got to figure out a way to keep a huge part of our population from starving, from not making a wage. People have proposed a basic minimum income, but to do that we would need tax revenue. And the big companies, Amazon, Google, Facebook, they pay taxes in places like Ireland, where there’s very low corporate tax. They don’t pay taxes where they get their wealth. So they don’t contribute to your roads.
Google is not contributing to your road system. Amazon is not contributing to your water supply, or to making your country safe. So there’s a giant inequity there. So we have to confront that inequity and, unfortunately, that is going to require political solutions, and our politicians are about the most technologically-backward people in our culture.
So, what I see is a lot of unemployment. I see a lot of nifty things coming out of AI, and I am willing to be surprised by job creation in AI, and robotics, and automation. I'd like to be surprised by that. But the general trend is… Foxconn, the biggest contract manufacturer in the world, just replaced thirty thousand people in Asia with thirty thousand robots.
And all those people can’t be retrained, because if you’re doing something that’s that repetitive, and that mechanical… what can you be retrained to do? Well, maybe one out of every hundred could be a floor manager in a robot factory, but what about all the others? Disruption is going to come from all the people that don’t have jobs, and there’s nothing to be retrained to.
Because our robots are made in factories where robots make the robots. Our cars are made in factories where robots make the cars.
Isn’t that the same argument they used during the Industrial Revolution, when they said, “You got ninety percent of people out there who are farmers, and we’re going to lose all these farm jobs… And you don’t expect those farmers are going to, like, come work in a factory, where they have to learn completely new things.”
Well, what really happened in the different technology revolutions, back from the cotton gin onward is, a small sector… The Industrial Revolution didn’t suddenly put farms out of business. A hundred years ago, ninety percent of people worked on farms, now it’s ten percent.
But what happened with the Industrial Revolution is, sector by sector, it took away jobs, but then those people could retrain, and could go to other sectors, because there were still giant sectors that weren’t replaced by industrialization. There was still a lot of manual labor to do. And some of them could be trained upwards, into management and other things.
This, as the author Martin Ford wrote in Rise of the Robots—and there's also a great book called The Fourth Industrial Age. As they both argue, what's different about this revolution is that AI works in every industry. So it's not like the old revolutions, where one sector was replaced at a time, and there was time to absorb that change, time to reabsorb those workers and retrain them in some fashion.
But everybody is going to be… My point is, all sectors of the economy are going to be hit at once. The ubiquity of AI is going to impact a lot of the economy, all at the same time, and there is going to be a giant dislocation all at the same time. And it’s very unclear, unlike in the old days, how those people can be retrained and retargeted for jobs. So, I think it’s very different from other Industrial Revolutions, or rather technology revolutions.
Consider the adoption of coal—it went from generating five percent to eighty percent of all of our power in twenty years. The electrification of industry happened incredibly fast. Mechanization, the replacement of animal power with mechanical power, happened incredibly fast. And yet, unemployment has remained between four and nine percent in this country.
Other than the Depression, without ever even hiccupping—like, no matter what disruption, no matter what speed you threw at it—the economy never couldn’t just use that technology to create more jobs. And isn’t that maybe a lack of imagination that says “Well, no, now we’re out. And no more jobs to create. Or not ones that these people who’ve been displaced can do.”
I mean, isn’t that what people would’ve said for two hundred years?
Yes, that’s a somewhat persuasive argument. I think you’ve got a point that the economy was able to absorb those jobs, and the unemployment remained steady. I do think this is different. I think it’s a kind of a puzzle, and we’ll have to see what happens. But I can’t imagine… Where do professional drivers… they’re not unskilled, but they’re right next to it. And it’s the job of choice for people who don’t have a lot of education.
What do you retrain professional drivers to do once their jobs are taken? It’s not going to be factory work, it’s not going to be simple accounting. It’s not going to be anything repetitive, because that’s going to be the job of automation and AI.
So I anticipate problems, but I'd love to be pleasantly surprised. If it worked like the old days, then all those people that were cut off the farm would go to work in the factories, and make Ford automobiles, and make enough money to buy one. I don't see all those displaced drivers going off to factories to make cars, or to manufacture anything.
A case in point of what’s happening is… Rethink Robotics, which is Rodney Brooks’ company, built something called Baxter; and now Baxter is a generation old, and I can’t think of what replaced it. It costs about twenty-two thousand dollars to get one of these robots—basically what a minimum-wage worker makes in a year. But they work 24/7, so they cover three shifts, which means each one is really replacing three people.
Where do those people go? Do they go to shops that make Baxter? Or maybe you’re right, maybe it’s a failure of imagination to not be able to anticipate the jobs that would be created by Baxter and by autonomous cars. Right now, it’s failing a lot of people’s imagination. And there are not ready answers.
I mean, if it were 1995 and the Internet was, you’re just hearing about it, just getting online, just hearing it… And somebody said, “You know what? There’s going to be a lot of companies that just come out and make hundreds of billions of dollars, one after the other, all because we’ve learned how to connect computers and use this hypertext protocol to communicate.” I mean, that would not have seemed like a reasonable surmise.
No, and that’s a great example. If you were told that trillions of dollars of value are going to come out of this invention, who would’ve thought? And maybe I personally, just can’t imagine the next wave that is going to create that much value. I can see how AI and automation will create a lot of value, I only see it going into a few pockets though. I don’t see it being distributed in any way that the Silicon Valley startups, at least initially, were.
So let’s talk about you for a moment. Your background is in documentary filmmaking. Do you see yourself returning to that world? What are you working on, another book? What kind of thing is keeping you busy by day right now?
Well, I like making documentary films. I just had one on PBS last year… If you Google “Spillover” and “PBS” you can see it is streaming online. It was about spillover diseases—Ebola, Zika and others—and it was about the Ebola crisis, and how viruses spread. And then now I’m working on a film about paleontology, about a recent discovery that’s kind of secret, that I can’t talk about… from sixty-six million years ago.
And I am starting to work on another book that I can’t talk about. So I am keeping an eye on AI, because this issue is… Despite everything I talk about, I really like the technology; I think it’s pretty amazing.
Well, let’s close with, give me a scenario that you think is plausible, that things work out. That we have something that looks like full employment, and…
Good, Byron. That’s a great way to go out. I see people getting individually educated about the promise and peril of AI, so that we as a culture are ready for the revolution that’s coming. And that forces businesses to be responsible, and politicians to be savvy, about developments in artificial intelligence. Then they invest some money to make artificial intelligence advancement transparent and safe.
And therefore, when we get to machines that are as smart as humans, that [they] are actually our allies, and never our competitors. And that somehow on top of this giant wedding cake I’m imagining, we also manage to keep full employment, or nearly-full employment. Because we’re aware, and because we’re working all the time to make sure that the future is kind to humans.
Alright, well, that is a great place to leave it. I am going to thank you very much.
Well, thank you. Great questions. I really enjoyed the back-and-forth.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 12: A Conversation with Scott Clark

[voices_in_ai_byline]
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
[podcast_player name="Episode 12: A Conversation with Scott Clark" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-10-16-(00-56-02)-scott-clark.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-4.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know two people never answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building—our systems—are they artificial in the sense that we just built them, but they really can do that? Or are they artificial in the sense that they can’t really do that, but they sure can fake it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think a convolutional neural network is doing the exact same thing: taking in that outside stimuli—not through an optic nerve, but through the raw encoding of pixels—and then coming to the exact same conclusion.
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science, and computer science, and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure those out for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective, that in a sense that may be true.
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
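The arithmetic behind "a few megabytes" versus "four-to-the-several-million" can be sketched quickly. The figures below (genome size, number of differing base pairs) are rough, illustrative assumptions, not measured values.

```python
import math

base_pairs = 3_000_000_000            # rough size of the human genome
bits_per_base = 2                     # A, C, G, T -> 2 bits each
genome_megabytes = base_pairs * bits_per_base / 8 / 1_000_000
print(round(genome_megabytes))        # ~750 MB for a whole genome, far less for a diff

# If some region of n differing base pairs separates two genomes, the
# number of possible variants of just that region is 4**n.
n_diff = 5_000_000                    # purely illustrative figure
digits = math.floor(n_diff * math.log10(4)) + 1   # decimal digits in 4**n_diff
print(digits)                         # a number with roughly three million digits
```

So even a region small enough to store on a thumb drive defines a search space whose size has millions of digits, which is the speaker's point: you do not stumble on the right configuration by random edits.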
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological neural network in the brain. There are things like simulated annealing, a global optimization strategy that mimics the way steel is annealed… where the material first settles into some local lattice structure with low energy, and then you pound the steel with a hammer, which adds energy and lets it find a better global optimum: a lattice structure that makes for harder steel.
But simulated annealing is also an extremely popular algorithm in the scientific literature, so it was arrived at from this auxiliary direction. Or take a genetic algorithm, where you slowly evolve a population to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomena, whether or not they need to be derived that way to be effective. I would have trouble, off the top of my head, coming up with the biological equivalent of a support vector machine, for example. So there are two different ways to attack it, but both can produce really interesting results.
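[Editor’s note: as a rough illustration of the annealing analogy above, here is a minimal simulated-annealing sketch. The bumpy objective, cooling schedule, and step size are illustrative assumptions, not anything from the conversation.]

```python
import math
import random

def simulated_annealing(f, x0, n_steps=20000, temp0=1.0):
    """Minimize f starting from x0, occasionally accepting worse
    moves (the "hammer blows") to escape shallow local minima."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for step in range(n_steps):
        temp = temp0 * (1 - step / n_steps)   # linear cooling schedule
        cand = x + random.gauss(0, 0.5)       # local random perturbation
        fcand = f(cand)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if fcand < fx or random.random() < math.exp((fx - fcand) / max(temp, 1e-9)):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A bumpy 1-D objective with many local minima; its global minimum
# sits near x = -0.3.
bumpy = lambda x: x * x + 3 * math.sin(5 * x)

random.seed(0)
x, fx = simulated_annealing(bumpy, x0=8.0)
```

Early on, the high “temperature” plays the role of the hammer blows, letting the search climb out of a local basin; as it cools, the search settles into the best basin it has found.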
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and asking them to do classification-type tasks. They may be completely unaware of what the reward function even is: who is this thing telling me, for the first time, to do tasks I’ve never seen before?
What does it even mean to classify things or describe an object, when you’ve never seen an object before?
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks, considering that a child four hours after birth isn’t able to do this, while a machine algorithm trained from scratch over the course of four hours on a couple of GPUs is.
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn, in this unstructured way, some of the actual language structure or sounds of the particular culture in which it was raised.
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. Coming from a background in math and physics, I’d say it’s easier, scientifically, to break down modular, decomposable systems. Humanity has done a very good job of understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is that 300 sounds like a small number. One thing that’s very difficult for humans to internalize is the exponential function. If intelligence grew linearly, then sure: if we could understand one neuron, 300 might not be that much more. But if the state space, the complexity, grows exponentially… if there are ten different positions for every single one of those neurons, that’s 10^300 states, which is more than the number of atoms in the universe.
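[Editor’s note: the arithmetic behind that comparison is easy to check. The ten-states-per-neuron figure is the hypothetical from the conversation, and roughly 10^80 is a common estimate of the number of atoms in the observable universe.]

```python
# Hypothetical from the conversation: 10 possible states for each of
# ~300 neurons gives 10**300 configurations, versus a common rough
# estimate of ~10**80 atoms in the observable universe.
state_space = 10 ** 300
atoms = 10 ** 80

assert state_space > atoms
# The state space is larger by 220 orders of magnitude.
print(len(str(state_space)) - len(str(atoms)))  # 220
```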
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical standpoint, we can have an understanding: okay, we have neurotransmitters that carry a charge, which cause a reaction once some threshold of charge is crossed, and there’s a catalyst involved. I think from a physics and chemistry understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear, at first glance, to be derivable from its parts. So, the intelligence displayed by an ant colony, or a beehive—the way some bees shimmer in unison to scare off predators. No individual bee is saying, “We need to do this.”
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where the properties literally are not derivable: you cannot look at a bunch of matter and explain how it can become conscious, for instance. It’s what a minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach; whether or not that’s ‘the’ ultimate approach remains to be seen. As I was mentioning before, there are ways to take biological or physical systems, and work them back into something that can be used and applied in a different context. There are other ways, where you start from the more theoretical or axiomatic side, and move forward into something that can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He posits somebody named Mary, who knows everything about color, at a god-like level, every single thing about color. But the catch is, as you might guess, she’s never seen it. She has lived her whole life in a black-and-white room and never seen color. And one day, she opens the door, she looks outside, and she sees red.
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s what you’re getting at with consciousness here: not only the actual task that’s being performed, which I think computers are very good at, and we have self-driving cars out on the street in the Bay Area every day, but the awareness of the fact that you are performing the task. A kind of meta-level: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. Pain and suffering are the opposite of happiness, and your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things: you get pain when you touch flame, so you don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different as saying there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there are broadly three camps; everybody intuitively knows this.
There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like a Great Depression that never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace at which technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone to figure out how it works and repair it, that was something a kid could do at a camp, like an intro circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s [ELIZA was originally created by Joseph Weizenbaum at MIT in the mid-1960s]. It followed a specific school of psychological behavioral-therapy thought, replied in specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get their emotions out and work through specific problems. As these machines get more sophisticated and capable, as long as the system is providing utility to the end user, does it matter who’s behind the screen?
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often kind of nuisance parameters that are brushed under the rug. They’re typically solved via relatively simplistic methods like brute forcing it or trying random configurations. What we do is we take an ensemble of the state-of-the-art research from academia, and Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are using MXNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial and error. We can guide you to the best solution quite a bit faster.
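[Editor’s note: the tuning loop being replaced looks roughly like this. The toy “validation loss” and search space are made up for illustration, and this sketch is not SigOpt’s actual API; a service like SigOpt swaps the random `suggest` step for model-based Bayesian suggestions informed by past results.]

```python
import math
import random

# Toy stand-in for "train a model with these settings and return its
# validation loss" -- a real run would train a network here.
def validation_loss(learning_rate, depth):
    return (math.log10(learning_rate) + 2) ** 2 + (depth - 6) ** 2 / 10

def suggest():
    # Naive baseline: sample configurations at random.  Bayesian
    # optimization instead fits a model of past (config, loss) pairs
    # and proposes the most promising configuration to try next.
    return {
        "learning_rate": 10 ** random.uniform(-5, 0),  # log-uniform
        "depth": random.randint(1, 12),
    }

random.seed(42)
best_config, best_loss = None, float("inf")
for _ in range(50):
    config = suggest()
    loss = validation_loss(**config)
    if loss < best_loss:
        best_config, best_loss = config, loss

print(best_config, round(best_loss, 3))
```

Each “observation” here is expensive in real life (a full training run), which is why replacing blind sampling with a smarter suggestion strategy pays off.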
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest: fine-tuning different configuration settings in bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context in the domain of a problem—and then we can make the trial-and-error component as little as possible, hopefully, everything happens a little bit faster and a little bit better and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large complex systems, where it’s very difficult to manually tune, or brute force this problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their head. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
Having the market come to us is something that we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
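The “bolt-on” black-box tuning described here can be sketched in a few lines. The sketch below is purely illustrative—it is not SigOpt’s actual API or algorithm (the `tune` function, its signature, and the toy parameter space are all hypothetical), and it uses naive random search where a real service would use far more sample-efficient methods. The key point it shows is that the tuner treats the model as an opaque function: it only sees configurations in and scores out.

```python
import random

def tune(objective, space, n_trials=50, seed=0):
    """Minimal black-box tuner: random search over a parameter space.

    `objective` is any function mapping a config dict to a score
    (higher is better); the tuner never looks inside it, which is
    what makes it "black box."
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample one candidate configuration from the search space.
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy stand-in for an existing model whose quality depends on
# two hyperparameters (best at lr=0.1, momentum=0.9):
def model_score(cfg):
    return -(cfg["lr"] - 0.1) ** 2 - (cfg["momentum"] - 0.9) ** 2

best, score = tune(model_score, {"lr": (0.0, 1.0), "momentum": (0.0, 1.0)})
print(best, score)
```

Because the pre-existing training pipeline is wrapped as `objective`, the tuner bolts on top without any changes to the model itself—only the configuration parameters flow across the boundary.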
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level posts about optimization and some of the different advancements in deep learning and reinforcement learning. We publish papers as well, but blog.sigopt.com and @SigOpt on Twitter are the best ways to follow along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]