Voices in AI – Episode 30: A Conversation with Robert Mittendorff and Mudit Garg

[voices_in_ai_byline]
In this episode, Byron, Robert and Mudit talk about Qventus, healthcare, machine learning, AGI, consciousness, and medical AI.
[podcast_player name=”Episode 30 – A Conversation with Robert Mittendorff and Mudit Garg” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-22-(00-58-58)-garg-and-mittendorf.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-3.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today is a first for Voices in AI, we have two guests. The first one is from Qventus; his name is Mudit Garg. He’s here with Robert Mittendorff, who’s with Norwest Venture Partners, who also serves on Qventus’ board. Mudit Garg is the co-founder and CEO of Qventus, and they are a company that offers artificial-intelligence-based software designed to simplify hospital operations. He’s founded multiple technology companies before Qventus, including Hive, a group messaging platform. He spent two years as a consultant with Seattle-based McKinsey & Company, focusing, I think, on hospital operations.
Robert is with Norwest Venture Partners; before that, he was VP of Marketing and Business Development at Hansen Medical, a publicly traded NASDAQ company. He’s also a board-certified emergency physician who completed his residency training at Stanford. He received his MD from Harvard Medical School, his MBA from Harvard Business School, and he has a BS in Biomedical Engineering from Johns Hopkins University. Welcome to the show, gentlemen.
Mudit Garg: Thank you. Good morning. Thank you for having us.
Robert Mittendorff: Thank you, Byron.
Mudit, I’ll start with you. Tell us about Qventus and its mission. Get us all oriented with why we’re here today.
Mudit: Absolutely. The best way to think of Qventus is the way our customers often describe us: like air traffic control. Much like what air traffic control does for airports, allowing many more flights to land, and to land much more safely, than if they were uncoordinated, we do the same for healthcare and hospitals.
For me—as, kind of, boring and uncool as a world of operations and processes might be—I had a chance to see that firsthand working in hospitals when I was at McKinsey & Company, and really just felt that we were letting all of our clinicians down. If you think about the US healthcare system, we have the best clinicians in the world, we have great therapies, great equipment, but we fail at providing great medicine. Much of that was being held back by the complex operations that surround the delivery of care.
I got really excited about using data and using AI to help support these frontline clinicians in improving the core delivery of care in the operation. Things like, as a patient sitting in an emergency department, you might wonder what’s going on and why you aren’t being taken care of faster. On the flip side, there’s a set of clinicians who are putting in heroic efforts trying to do that, but they are managing so many different variables and processes simultaneously that it’s almost humanly impossible to do that.
So, our system observes and anticipates problems like: it’s the Monday after Thanksgiving, it’s really cold outside, Dr. Smith is working, he tends to order more labs, our labs are slow—all these factors that would be hard for someone to keep in front of them all the time. When it realizes, three or four hours in advance, that we might run out of capacity, it will find the bottleneck and create a discussion on how to fix it. We do things like that at about forty to fifty hospitals across the country, and have seen good outcomes through that. That’s what we do, and that’s been my focus in the application of AI.
And Robert, how did you get involved with Qventus?
Robert: Well, Qventus was a company that fit within a theme that we had been looking at for quite some time in artificial intelligence and machine learning as it applies to healthcare. And within that search we found this amazing company, founded by a brilliant team of engineers and business leaders who had a particular set of insights from their work with hospitals at McKinsey, and who had identified a problem set that was very tractable for machine learning and narrow AI, which we’ll get into. So, within that context in the Bay Area, we found Qventus and were just delighted to meet the team and their customers, and really find a way to make a bet in this space.
We’re always interested in case studies. We’re really interested in how people are applying artificial intelligence. Today, in the here and now, put a little flesh on the bones: what are you doing, what’s real and here, how did you build it, what technology are you using, what did you learn? Just give us a little bit of that kind of perspective.
Mudit: Absolutely. I’ll first start with the kinds of things that we are doing, and then we’ll go into how we built it, and some of the lessons along the way as well. I just gave you one example of running an emergency department. In today’s world, there is a charge nurse who is responsible for managing the flow of patients through that emergency department, constantly trying to stay ahead of it. The example I gave was one where, instead, the system is observing it, realizing, learning from it, and then creating a discussion among folks about how to change it.
We have many different things—we call them recipes internally—many different recipes that the system keeps looking for. It looks for, “Hey, here’s a female who is younger, who is waiting, there are four other people waiting around her, and she is in acute pain.” She is much more likely than other folks to get up and leave without being seen by a doctor, so you might nudge a greeter to go up and talk to her. We have many recipes and examples like these, I won’t go into specific examples of each, but we do that in different areas of the delivery of healthcare.
So, patient flow, just having patients go through the health system in ways that don’t require adding resources but allow providers to deliver the same care, is one big category. You do that in the emergency department, in units across the hospital, and in the operating room. More recently, we’re starting to do that in pharmacy operations, since pharmacy costs have started rising. What are the things that today require a human to manually realize, follow up on, escalate, and manage, and how can AI help with that process? We’ve seen really good results with that.
You asked about case studies: on the emergency department side alone, one of our customers treated three thousand more patients in that ED this year than last, without adding resources. They saved almost a million minutes of patient wait time in that single ED alone, and that’s been fascinating. What’s been even more amazing is hearing from the nurse manager there how the staff feel like they have the ability to shape events, versus always being behind and always feeling like they are trying to solve the problem after the fact. They’ve seen some reductions in turnover, and that ability of AI to, in some ways, make healthcare more human for the people who help us, the caregivers, is what’s extremely exciting in this work for me.
Just to visualize that for a moment, if I looked at it from thirty thousand feet—people come into a hospital all different ways, and they have all the different characteristics you would normally think of, and then there’s a number of routings through the hospital experience, right? Rush them straight into here, or there, or this, so it’s kind of a routing problem. It’s a resource allocation problem, right? What does all of that look like? This is not a rhetorical question: what is all that similar to outside of the hospital? Where else is that approach broadly and generally applicable? It’s not a traffic routing problem, it’s not an inventory management problem—are there any corollaries you can think of?
Mudit: Yeah. In many ways there are similarities to anywhere there are high-fixed-asset businesses and a distributed workforce. Logistics is a good example of it: thinking about how different deliveries are routed and organized in a way that you meet the SLAs for different folks, but your cost of delivery is not too high. It has similarities to it.
I think hospitals are, in many ways, one of the most complex businesses, and given the variability is much, much higher, traditional methods have failed. In many of the other such logistical and management problems you could use your optimization techniques, and you could do fairly well with them. But given the level of variability is much, much higher in healthcare—because the patients that walk in are different, you might have a ton walk in one day and very few walk in the next, the types of resources they need can vary quite a bit—that makes the traditional methods alone much, much harder to apply. In many ways, the problems are similar, right? How do you place the most product in a warehouse to make sure that deliveries are happening as fast as possible? How do you make sure you route flights and cancel flights in a way that causes minimum disruption but still maximize the benefit of the entirety of the system? How do you manage the delivery of packages across a busy holiday season? Those problems have very similar elements to them and the importance of doing those well is probably similar in some ways, but the techniques needed are different.
Robert, I want to get to you in just a minute, and talk about how you as a physician see this, but I have a couple more technical questions. There’s an emergency room near my house that has a big billboard and it has on there the number of minutes of wait time to get into the ER. And I don’t know, I’ve always wondered is the idea that people drive by and think, “Oh, only a four-minute wait, I’ll go to the ER.” But, in any case, two questions, one, you said that there’s somebody who’s in acute pain and they’ve got four people, and they might get up and leave, and we should send a greeter over… In that example, how is that data acquired about that person? Is that done with cameras, or is that a human entering the information—how is data acquisition happening? And then, second, what was your training set to use AI on this process, how did you get an initial training set?
Mudit: Both great questions. Much of this is part of the first-mile problem for AI in healthcare, that much of that data is actually already generated. About six or seven years ago a mass wave of digitization started in healthcare, and most of the digitization was taking existing paper-based processes and having them run through electronic medical record systems.
So, what happens is when you walk into the emergency department, let’s say, Byron, you walk in, someone would say, “Okay, what’s your name? What are you here for?” They type your name in, and a timestamp is stored alongside that, and we can use that timestamp to realize a person’s walked in. We know that they walked in for this reason. When you got assigned a room or assigned a doctor then I can, again, get a sense of, okay, at this time they got assigned a room, at this time they got assigned a doctor, at this time their blood was drawn. All of that is getting stored in existing systems of record already, and we take the data from the systems of record, learn historically—so before we start we are able to learn historically—and then in the moment, we’re able to intervene when a change needs to take place.
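To make the shape of that concrete, here is a minimal, hypothetical sketch of how timestamped EMR events could feed a simple early-warning signal a few hours ahead. This is not Qventus’ actual system; the event names, rates, and capacity threshold are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical event log: (event type, timestamp) pairs of the kind that
# routine EMR workflows already capture (arrivals, room assignments, discharges).
events = [
    ("arrival", datetime(2018, 1, 22, 8, 55)),
    ("arrival", datetime(2018, 1, 22, 9, 20)),
    ("room_assigned", datetime(2018, 1, 22, 9, 40)),
    ("arrival", datetime(2018, 1, 22, 10, 2)),
    ("discharge", datetime(2018, 1, 22, 10, 30)),
    ("arrival", datetime(2018, 1, 22, 10, 41)),
]

def projected_census(events, now, horizon_hours=3.0,
                     lookback_hours=2.0, hourly_departure_rate=1.5):
    """Rough projection: current census, plus arrivals extrapolated from the
    recent arrival rate, minus an assumed departure rate over the horizon."""
    census = (sum(1 for kind, _ in events if kind == "arrival")
              - sum(1 for kind, _ in events if kind == "discharge"))
    window_start = now - timedelta(hours=lookback_hours)
    recent_arrivals = sum(1 for kind, t in events
                          if kind == "arrival" and window_start <= t <= now)
    arrival_rate = recent_arrivals / lookback_hours
    return census + (arrival_rate - hourly_departure_rate) * horizon_hours

now = datetime(2018, 1, 22, 10, 45)
if projected_census(events, now) > 2:  # invented capacity threshold
    print("Heads up: likely ED capacity bottleneck within the next 3 hours")
```

A real system would obviously learn these rates and thresholds from historical data rather than hard-code them, but the point is that the raw inputs are ordinary timestamps already sitting in the system of record.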
And then the data acquisition for the patient’s acute pain?
Mudit: The pain in that example is actually coming from what they have complained about.
I see, perfect.
Mudit: So, we’re looking at the types of patients who complain about similar things, and what their likelihood of leaving is versus others’—that’s what we are learning on.
Robert, I have to ask you before we dive into this, I’m just really intensely curious about your personal journey, because I’m guessing you began planning to be a medical practitioner, and then somewhere along the way you decided to get an MBA, and then somewhere along the way you decided to invest in technology companies and be on their boards. How did all of that happen? What was your progressive realization that took you from place to place to place?
Robert: I’ll spend just a couple of minutes on it, but not exactly. I would say in my heart I am an engineer. I started out as an engineer. I did biomedical electrical engineering and then I spent time at MIT when I was a medical student. I was in a very technical program between Harvard and MIT as a medical student. In my heart, I’m an engineer which means I try to reduce reality to systems of practice and methods. And coupled with that is my interest in mission-driven organizations that also make money, so that’s where healthcare and engineering intersect.
Not to go into too much detail on a podcast about myself, I think the next step in my career was to try to figure out how I could deeply understand the needs of healthcare, so that I could help others and myself bring to bear technology to solve and address those needs. The choice to become a practitioner was partially because I do enjoy solving problems in the emergency department, but also because it gave me a broad understanding of opportunities in healthcare at the ground level and above in this way.
I’ll just give you an example: when I first saw what Mudit and his team had done in the most amazing way at Qventus, I really understood the hospital as an airport with fifty percent of the planes landing on schedule. So, to go back to your emergency department example, imagine if you were responsible for safety and efficiency at SFO, San Francisco airport, without a tower, and knowing only the scheduled landing times for half of the jets, where each jet is a patient. Of the volume of patients that spend the night in the hospital, about half come through the ED, and when I show up for a shift, that first, second, and third patient can be a stroke, a heart attack, a broken leg, shortness of breath, a skin rash, etcetera. The level of complexity in healthcare needed to operationalize improvements in the way that Mudit has is incredibly high. We’re just at the beginning, and they are clearly the leader here, but what I saw in my personal journey with this company is the usage of significant technology to address key throughput needs in healthcare.
When one stack-ranks what we hope artificial intelligence does for the world, right up there at the very top of most people’s lists is its impact on health. Do you think that’s overly hyped, because, you know, we have an unending series of wishes that we hope artificial intelligence can fulfill? Do you think it’s possible that it delivers eventually on all of that, that it really is a transformative technology that materially alters human health at a global level?
Robert: Absolutely and wholeheartedly. My background as a researcher in neuroscience was using neural networks to model brain function in various animal models, and I would tell you that the variety of ways that machine learning and AI, which are the terms we use now for these technologies, will affect human health is massive. I would say that within the Gartner hype cycle we are early; we are overhyping the value of this technology in the short term. We are not overhyping the value of this technology in the next ten, twenty, or thirty years. I believe that AI is the driver of our Industrial Revolution. This will be looked back at as an industrial revolution of sorts. I think there are huge benefits that are going to accrue to healthcare providers and patients from the usage of these technologies.
Talk about that a little more, paint a picture of the world in thirty years, assuming all goes well. Assuming all goes well, what would our health experience look like in that world?
Robert: Yeah, well, hopefully your health experience, and I think Mudit’s done a great job describing this, will return to a human experience between a patient and a physician, or provider. I think in the backroom, or when you’re at home interacting with that practice, I think you’re going to see a lot more AI.
Let me give you one example. We have a company that went public, a digital health company, that uses machine learning to read EKG data, so cardiac electrical activity data. A typical human would take eight hours to read a single study on a patient, but by using machine learning they get down to five to ten minutes. The human is still there, overreading what the machine-learning software is producing—this company is called iRhythm—and what that allows us to do is reach a lot more patients at a lower cost than you could achieve with human labor. You’ll see this in radiology. You’ll see this in coaching patients. You’ll see this where I think Mudit has really innovated, which is that he has created a platform that is enabling.
In the case that I gave you, humans are being augmented by what I call the automation or semi-automation of a human task; that’s one thing, but what Mudit is doing is truly enabling AI. Humans cannot do what he does at the time and scale that he does it. That is what’s really exciting—machines that can do things that humans cannot do. Just to visualize that system: there are some things that are not easily understood today, but I think you will see radiology improve with semi-automation. I think patients will be coached with smart AI to improve their well-being, and that’s already being seen today. Human providers will have leverage because the computer, the machine, will help prioritize their day: which patient to talk to, about what, when, how, and why. So, I think you’ll see a more human experience.
The concern is that we will see a more manufactured experience. I don’t think that’s the case at all. The design that we’ll probably see succeed is one where the human becomes front and center again, where physicians are no longer looking at screens typing in data; they’ll be communicating face to face with a human, with an AI helping out, advising, and taking on those tedious tasks that the human shouldn’t be burdened with, to allow the relationship between the patient and physician to return.
So, Mudit, when you think of artificial intelligence and applying artificial intelligence to this particular problem, where do you go from that? Is the plan to take that learning—and, obviously, scale it out to more hospitals—but what is the next level to add depth to it to be able to say, “Okay, we can land all the planes now safely, now we want to refuel them faster, or…”? I don’t know, the analogy breaks down at some point. Where would you go from here?
Mudit: Our customers are already starting to see results of this approach in one area. We’ve started expanding already and have a lot more expansion coming down the line as well. If you think of it, at the end of the day, so much of healthcare delivery is heavily process driven, right? Anywhere from how your bills get generated to when you get calls. I’ve had times when I got a call from a health system saying I have a ten-dollar bill that they are about to send to collections, even though I had paid all my bills. There are things like that constantly happening, breakdowns in processes, across delivery, across the board.
We started, as I said, four or five years ago, very specifically focused on the emergency department. From there we went into the surgery area, where operating rooms can cost upwards of hundreds of dollars a minute; so how do you manage that complex an operation, and the logistics around it, to deliver the best value? We’ve seen really good results there, and in managing the entirety of all the units in the hospital. More recently, as I was saying, we are now starting to work with Sutter Health across twenty-six of their hospital pharmacies, looking at the key pieces around operations in the pharmacy which, again, are manually holding people back from delivering the best care. These are the different pieces across the board where we are already starting to see results.
The common thread across all of these, I find, is that we have amazing, incredible clinicians today who, if they had all the time and energy in the world to focus on anticipating these problems and delivering the best care, would do a great job; but we cannot afford to keep having more people solve these problems. There are significant margin pressures across healthcare. The same people who were able to do these things before have to-do lists that are growing faster than they can ever comprehend. The job of AI really is to act as, kind of, their assistant, to watch those decisions on their behalf and make them really, really easy; to take all of the boring, mundane logistics out of their hands, so they can focus on what they do best, which is delivering care to their patients. So, right now, as I said, we started on the flow side; pharmacies are a new area; and outpatient clinics and imaging centers are another area that we are working on with a few select customers. There’s some really, really exciting stuff there in increasing the access to care—when you might call a physician to get access—while reducing the burden on that physician.
Another really exciting piece for me is that, in many ways, the US healthcare system is unique, but in this complexity of logistics and operations it is not. So, we have already signed to work with hospitals globally, and just started working with our first international customer recently, and the same problems exist everywhere. There was an article on the BBC, I think a week or two ago, about the long surgery waiting lists in the UK, where they are struggling to get those patients seen in that system due to a lack of efficiency in these logistics. So, that’s the other piece that I’m really excited about: not only the breadth of these problems wherever there’s complexity of processes, but also the global applicability of it.
The exciting thing to me about this episode of Voices is that I have two people who are engineers, who understand AI, and who have a deep knowledge of health. I have several questions that sit at the intersection of all of that that I would love to throw at you.
My first one is this: the human genome is however many billions of base pairs, which works out to something like 762MB of data, but if you look at what makes us different than, say, chimps, it may be one percent of that. So something like 7MB or 8MB of data is the code you need to build an intelligent brain, a person. Does that imply to you that artificial intelligence might have a breakthrough, that there might be a relatively straightforward and simple thing about intelligence that we’re going to learn, that will supercharge it? Or is your view that, no, unfortunately, something like a general intelligence is going to be, you know, hunks of spaghetti code that kind of work together and pull off this AGI thing? Mudit, I’ll ask you first.
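For reference, the arithmetic behind those figures follows from encoding each base pair in two bits (there are four possible bases), assuming roughly three billion base pairs:

$$
3.05 \times 10^{9}\ \text{base pairs} \times 2\ \text{bits} \approx 6.1 \times 10^{9}\ \text{bits} \approx 762\ \text{MB},
\qquad 1\% \approx 7.6\ \text{MB}
$$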
Mudit: Yeah, and boy, that’s a tough question. I will do my best in answering that one. Do I believe that we’ll be able to get a general-purpose AI with, like, 7MB or 8MB of code? There’s a part of me that does believe in that simplicity, and does want to believe that that’s the answer. If you look at a lot of machine learning code, it’s not the learning code itself that’s actually that complex; it’s the first mile and the last mile around that code that end up taking the vast majority of the code. How do you get the training sets in and how do you get the output out—that is what takes the majority of the AI code today.
The fundamental learning code isn’t that big today. I don’t know if we’ll solve general-purpose AI anytime soon. I’m certainly not holding my breath for that, but there’s a part of me that feels and hopes that the fundamental concepts of learning and intelligence will not be that complicated at an individual, micro scale. Much like ourselves, we’ll be able to understand them, and there will be some beauty and harmony and symphony in how they all come together. And it actually won’t be complex in hindsight, but it will be extremely complex to figure out the first time around. That’s purely speculative, but that would be my belief and my hunch right now.
Robert, do you want to add anything to that, or let that answer stand?
Robert: I’d be happy to. I think it’s an interesting analogy to make. There are some parts of it that will break down and parts that will parallel between the human genome’s complexity and utility, and the human brain’s. You know, I think when we think about the genome, you’re right: it’s several billion base pairs, where we only have twenty thousand genes, and a small minority percentage that actually code for protein, and a minority of those that we understand affect the human in a diseased way, like a thousand to two thousand genes. There are a lot of base pairs that we don’t understand, and they could be related to the structure of the genome as it needs to do what it does in the human body, in the cell.
On the brain side, though, I think I would go with your latter response, which is, if you look at the human brain—and I’ve had the privilege of working with animal models and looking at human data—the brain is segmented into various functional units. For example, the auditory cortex is responsible for taking information from the ear and converting it to signals that are then pattern-recognized into, say, language, where those symbols of what words we’re speaking are then processed by other parts of the cortex. Similarly, the hippocampus, which sits in, kind of, the oldest part of the brain, is responsible for learning. It is able to look at various inputs from the visual and auditory and other cortices, and then upload them from short-term memory to long-term memory. So the brain is functionally segmented and physically segmented.
I believe that a general-purpose AI will have the same kind of structure. It’s funny, we have this thing called the AI effect, where when we solve a problem with code or with machinery, it’s no longer AI. So, for example, some would consider natural language processing not part of AI anymore because we’ve somewhat solved it, or speech recognition used to be AI but now it’s an input to the AI, because the AI is thinking more about understanding than about interpreting audio signals and converting them into words. I would say what we’re going to see, similar to the human body being encoded by these twenty thousand genes, is that you will have functional expertise with, presumably, code that is used for segmenting the problem of creating a general AI.
A second question, then. You, Robert, waxed earlier about how big the possibilities are for using artificial intelligence with health. Of course, we know that the number of people who are living to one hundred keeps going up, up, up. The number of people who become supercentenarians, who’ve gotten to one hundred and ten, is in the dozens. The number of people who have lived to one hundred and twenty-five is stubbornly fixed at zero. Do you believe—and not even getting aspirational about “curing death”—that what’s most likely to happen is that more of us are going to make it to one hundred healthily, or do you think that one hundred and twenty-five is something we’ll break, and maybe somebody will live to one hundred and fifty? What do you think about that?
Robert: That’s a really hard question. I would say that if I look at the trajectory of gains from public health, primarily, with things like treated water through to medicine, we’ve seen a dramatic increase in human longevity in the developed world. From taking down the number of children dying during childbirth, which lowers the average obviously, to extending life in the later years. And if you look at the effects there, those changes have knock-on effects on society. For example, when Social Security was invented, a minority of individuals would live to the age at which they would start accruing significant benefits; obviously that’s no longer the case.
So, to answer your question, there is no theoretical reason I can come up with why someone couldn’t make it to one hundred and twenty-five. One hundred and fifty is obviously harder to imagine. But we understand the human cell at a certain level, and the genome, and the machinery of the human body, and we’ve been able to thwart the body’s effort to fatigue and expire a number of times now, whether it’s cardiovascular disease or cancer, and we’ve studied longevity—“we” meaning the field, not myself. So, I don’t see any reason why we would say we will not have individuals reach one hundred and twenty-five, or even one hundred and fifty.
Now, what is the time course of that? Do we want that to happen and what are the implications for society? Those are big questions to answer. But science will continue to push the limits of understanding human function at the cellular and the physiologic level to extend the human life. And I don’t see a limit to that currently.
So, there is this worm, called the nematode worm, a little bitty fella, as long as a hair is wide, the most successful animal on the planet. Something like seventy percent of all animals are nematode worms. The brain of the nematode worm has 302 neurons, and for twenty years or so, people have been trying to model those 302 neurons in a computer, the OpenWorm project. And even today they don’t know if they can do it. That’s how little we understand. It’s not that we don’t understand the human brain because it’s so complex; we don’t understand anything—or I don’t want to say anything—we don’t understand just how neurons themselves work.
Do you think that, one, we need to understand how our brains work—or how the nematode brain works, for that matter—to make strides towards an AGI? And, second, is it possible that a neuron has stuff going on at the Planck level that makes it as complicated as a supercomputer, making intelligence acquired that way incredibly difficult? Do either of you want to comment on that?
Mudit: It’s funny that you mention that. When I was at Stanford doing some work in engineering, one of the professors used to say that our study of the human brain is sort of like someone having a supercomputer and two electrodes, poking the electrodes in different places and trying to figure out how it works. And I can’t imagine ever figuring out how a computer works, outside-in, by just having two electrodes and seeing the different voltages coming out of it. So, I do see the complexity of it.
Is it necessary for us to understand how the neuron works? I’m not sure it is, but if we were to come up with a way to build a system that’s resilient, redundant, and simple, and that can achieve that level of intelligence, well, hundreds of thousands of years of evolution helped us get to that solution, so it would, I think, be a critical input.
Without that, I see a different approach, which is what we are taking today; it is inspired by the brain, but it’s not the same. In our brain, when neurons fire, yes, we now have a similar transfer function for many of our neural networks of how the neuron fires, but for any kind of meaningful signal to come out we have a population of neurons firing in our brain, which makes the impulse more continuous and very redundant and very resilient. It wouldn’t fail even if some portion of those neurons stopped working. But that’s not how our models work; that’s not how our math works today. In finding the most optimized, elegant, and resilient way of doing it, I think it would be remiss not to take inspiration from what has evolved over a long, long period of time into, perhaps, one of the most efficient ways of having general-purpose intelligence. So, at least my belief would be that we will have to learn from it, and I would think that our understanding is still largely simplistic. At least, I would hope and believe that we’ll learn a lot more and find out that, yeah, each one of those neurons perhaps either communicates more, or does it in a way that brings the system to the optimal solution a lot faster than we would imagine.
Robert: Just to add to that, I agree with everything Mudit said. I would say, do we need to study the neuron and neural networks in vivo, in animals? And the answer to that is, as humans, we do. I mean, I believe that we have an innate curiosity to understand ourselves, and that we need to pursue. Whether it’s funded or not, the curiosity to understand who we are, where we came from, how we work, will drive that, just as it’s driven fields as diverse as astronomy and aviation.
I think, do we need to understand it at the level of detail you’re describing, for example, what exactly happens at the synapse stochastically, where neurotransmitters find the receptors that open ion channels that change the resting potential of a neuron, such that additional axonal effects occur and at the end of that neuron you then release another neurotransmitter? I don’t think so. Because I think we learn a lot, as Mudit said, from understanding how these highly developed and trained systems we call animals and humans work, but they were molded over large periods of time for specific survival tasks, to live in the environment that they live in.
The systems we’re building, or Mudit’s building, and others, are designed for other uses, and so we can take, as he said, inspiration from them, but we don’t need to model how a nematode thinks to help a hospital work more effectively. In the same way, there are two ways, for example, someone could fly from here in San Francisco, where I’m sitting, to, let’s say, Los Angeles. You could be a bird, which is a highly evolved flying creature with sensors and, clearly, neural networks that are able to control wing movement, and effectively use the wing surface area to create lift, etcetera. Or, you could build a metal tube with jets on it that gets you there as well. I think they have different use cases and different criteria.
The airplane is inspired by birds. The cross-section of an airplane wing is designed like a bird’s wing, in that one pathway is longer than the other, which changes the pressure above and below the wing and allows flight to occur. But clearly, the rest of it is very different. And so, I think the inspiration drove aviation to a solution that shares many parts with what birds have, but it’s incredibly different, because the solution was to the problem of transporting humans.
Mudit, earlier you said we’re not going to have an AGI anytime soon. I have two questions to follow up on that thought. The first is that among people who are in the tech space there’s a range of something like five to five hundred years as to when we might get a general intelligence. I’m curious, one, why do you think there’s such a range? And, two, I’m curious, with both of you, if you were going to throw a dart at that dartboard, where would you place your bet, to mix a metaphor.
Mudit: I think in the dart metaphor, chances of being right are pretty low, but we’ll give it a shot. I think part of it, at least I ask myself, is the bar we hold for AGI too high? At what point do we start feeling that a collection of special-purpose AIs that are welded together can start feeling like an AGI and is that good enough? I don’t know the answer to that question and I think that’s part of what makes the answer harder. Similar to what Robert was saying where the more problems we solve, the more we see them as algorithmic and less as AI.
But I do think, at least in my mind, at some point when I can see an AI starting to question the constraints of the problem and the goal it’s trying to maximize, that’s where true creativity for humans comes from: when we break rules and when we don’t follow the rules we were given. And that’s also where the scary part of AI comes from, because it can then do that at scale. I don’t see us close to that today. And if I had to guess, I’m going to just say, on this exponential curve, I’m going to probably not pick the right point, but four to five decades is when we start seeing enough of the framework, and maybe we can see some tangible general-purpose AI come to form.
Robert, do you want to weigh in, or will you take a pass on that one?
Robert: I’ll weigh in quickly. I think we see this in all of investing, actually—whether it’s augmented reality, virtual reality, whether it’s stenting or robotics in medicine—we as investors have to work hard not to overestimate the effect of technology now, and not to underestimate the effect of technology in the long run. That idea came from, I believe, a Stanford professor, Roy Amara, who unfortunately passed away a while ago, but that idea of saying, “Let’s not overhype it, but it’s going to be much more profound than we can even imagine today,” puts my estimate—and it depends how you define general AI, which is probably not worth doing—at within fifteen to twenty years.
We have this brain, the only general intelligence that we know of. And then we have the mind and, kind of, a definition of it which I think everybody can agree to: that the mind is a set of abilities that don’t seem, at first glance, to be something an organ could do, like creativity, or a sense of humor. And then we have consciousness: we actually experience the world. A computer can measure temperature, but we can burn our finger and feel it. My questions are: we would expect the computer to have a “mind,” we would expect an AGI to be creative, but do you think, one, that consciousness is required for general intelligence? And, to follow up on that, do you believe computers can become conscious? That they can experience the world as opposed to just measuring it?
Mudit: That’s a really hard one too. I think, actually, in my mind what’s most important—and there’s kind of a grey line between the two—is creativity; the element of surprise is what’s most important. The more an AI can surprise you, the more you feel like it is truly intelligent. So, that creativity is extremely important. But I think the reason I said there’s kind of a path from one to the other is—and this is very philosophical, depending on how you define consciousness—that in many ways it’s when we start taking a specific task that is given to us, but really start asking about the larger objective, the larger purpose; that, I feel, is what truly distinguishes a being or a person as conscious.
Until AIs are able to be creative and break the bounds of the specific rules, or the specific expected behavior they’re programmed for, certainly the path to consciousness is very, very hard. So, I feel like creativity, and surprising us, is probably the first piece, which is also the one that honestly scares us as humans the most, because that’s when we feel a sense of losing control over the AI. I don’t think true consciousness is necessary, but they might go hand in hand. I can’t think of it being necessary, but they might evolve simultaneously and go hand in hand.
Robert: I would just add one other thought there, which is, I spent many hours in college having this debate of what consciousness is, you know, where is the seat of consciousness? Anatomists for centuries have dissected and dissected: is it this gland, or is it that place, or is it an organized effect of the structure and function of all of these parts? I think that’s why we need to study the brain, to be fair.
One of the underlying efforts there is to understand consciousness. What is it that makes a physical entity able to do what you said, to experience what you said? More than just experiencing a location, experiencing things like love. How could a human do that if they were a machine? Can a machine be capable of empathy?
But I think beyond that, as I think practically as an investor and as a physician, frankly, I don’t know if I care whether the machine is conscious or not; I care more about who I assign responsibility to for the actions and thoughts of that entity. So, for example, if they make a decision that harms someone, or if they make the wrong diagnosis, what recourse do I have? Consciousness in human beings, well, we believe in free will, and that’s where all of our entities around human justice come from. But if the machine is deterministic, then a higher power, maybe the human that designed it, is ultimately responsible. For me, it’s a big question about responsibility with respect to these AIs, and less about whether they’re conscious or not. If they’re conscious, then we might be able to assign responsibility to the machine, but then how do we penalize it—financially, otherwise? If they’re not conscious, then we probably need to assign responsibility to the owner, or the person that configured the machine.
I started a question earlier about why there’s such a range of beliefs about when we might get a general intelligence, but the other interesting thing, which you’re kind of touching on, is that there’s a wide range of belief about whether we would even want one. You’ve got the Elon Musk camp of summoning the demon, Professor Hawking saying it’s an existential threat, and Bill Gates saying, “I don’t understand why more people aren’t worried about it,” and so forth. And on the other end, you have people like Andrew Ng, who said, “That’s like worrying about overpopulation of Mars,” and Rodney Brooks the roboticist, and so forth, who dismiss those concerns; you can almost see the eye-rolling. What are the core assumptions that those two groups have, and why are they so different from each other in their regard for this technology?
Mudit: To me it boils down to this: the same things that make me excited about the large-scale potential, on the general-purpose side, are the things that make me scared. You know how we were talking about what creativity is, if I go back to creativity for a second. Creativity will come from an AI that is told to maximize an objective function with constraints being allowed to question the constraints and the problem itself. If it is allowed to do that, that’s where true creativity would come from, right? That’s what a human would do. I might give someone a task or a problem, but then they might come back and question it, and that’s where true creativity comes from. But the minute we allow an AI to do that is also when we lose that sense of control. We don’t have that sense of control over humans today either, but what freaks us out about AI is that it can do that at a very, very rapid scale, at a pace at which we may not, even as a society, catch up, realize, and be able to control or regulate, which we can in the case of humans. I think that’s both the exciting part and the fear; they really go hand in hand.
The pace at which AI can bring about change once those constraints are loosened is something we haven’t seen before. And we already see, in today’s environment, our inability to keep pace with how fast technology is changing, from a regulation standpoint, from a framework standpoint, as a society. Once that happens, that will be called into question even more. I think that’s probably why, for many in the camp of Elon Musk, Sam Altman, and others, the part of their ask that resonates with me is that we probably should start thinking about how we will tackle the problem, and what framework should be in place, earlier, so we have time as a society to wrestle with it before it’s right in our face.
Robert: I would add to that with four things. I would say there are four areas that I think kind of define this a bit—and there were a couple of them that were mentioned by Mudit. I think it’s speed, the speed of computation affecting the world the machine is in; scalability; the fact that it can affect the physical environment; and the fact that machines, as we currently understand them, do not have morals or ethics, however you define those. So, there are four things. Something that’s super fast, that’s highly scaled, that can affect the physical world with no ethics or morality, that is a scary thing, right? That is a truck on 101 with a robotic driver that is going to go 100 MPH and doesn’t care what it hits. That’s the scary part of it. But there’s a lot of technology that looks like that. If you are able to design it properly and constrain it, it can be incredibly powerful. It’s just that the combination of those four areas could be very detrimental to us.
So, to pull the conversation back closer to the here and now, I want to ask each of you what’s a breakthrough in artificial intelligence in the medical profession that we may not have heard about, because there are so many of them? And then tell me something—I’ll put both of you on the spot on this—you think we’re going to see in, like, two or three years; something that’s on a time horizon where we can be very confident we’re going to go see that. Mudit, why don’t you start, what is something we may not know about, and what is something that will happen pretty soon do you think, in AI and medicine?
Mudit: I think—and this might go back to what I was saying—the breakthrough is less in the machine learning itself and more in the operationalization of it. The ability to learn exists, if we have the first mile and the last mile solved; but in the real, complex world of high emotions and messy human-generated data, the ability not only to predict but, in the moment, to prescribe and persuade people to take action is what I’m most excited about and am starting to see happen today. That, I think, is going to be transformative in the ability of existing machine learning prowess to actually impact our health and our healthcare system. It may not be, Byron, exactly what you’re looking for in terms of a breakthrough, but I think it’s a breakthrough of a different type. It’s not an algorithmic breakthrough; it’s an operationalization breakthrough, which I’m super excited about.
The part you asked about, what do I think in two to three years we could start doing that we perhaps don’t do as well now… One that is very clear is places where there are high degrees of structured data that we require humans to pore through—and I know Robert has spent a lot of time on this, so I’ll leave this one to him—around radiology, around EKG data, around these huge quantities of structured data that are just impossible to monitor. There are poor-quality outcomes, mortality, and bad events like that which happen, and which could be caught if it were humanly feasible to monitor all of that; I believe we are two to three years away from starting to meaningfully bend that, both process-wise, logistically, and from a diagnosis standpoint. And it will be basic stuff, stuff that we have known for a long time that we should do. But, you know, as the classic saying goes, it takes seventeen years from knowing something should be done to doing it at scale in healthcare; I think it will be that kind of stuff, where we start rapidly shortening that cycle time and seeing vast effects of that in the healthcare system.
Robert: I’ll give you my two, briefly. I think it’s hard to come up with something that you may not have heard about, Byron, given your background, so I’ll think more about the general audience. First of all, I agree with Mudit: in the two-to-three-year time frame, what’s obvious is that any signal processing in healthcare that is being done by a human is going to be rapidly moved to a computer. iRhythm, as an example (a company trading at over a billion dollars a little over a year out from its IPO), does that for cardiology data, EKG data, acquired through a patch. There are over forty companies that we have tracked in the radiology space that are prereading, or in some sense providing a pre-diagnostic read of, CTs, MRIs, and x-rays for human radiology overreads for diagnosis. That is absolutely going to happen in the next two to five years. Companies like GE and Philips are leading it, and there are lots of startups doing work there.
I think the area that might not be so visible to the general public is the usage of machine learning on human conversation. Imagine therapy, for example: therapy is moving to teletherapy, telemedicine; those are digitized conversations, which can be recorded and translated into language symbols, which can then be evaluated. Computational technology is being developed, and is available today, that can look at those conversations to decipher whether, for example, someone is anxious today, or depressed, needs more attention, or may need a cognitive behavioral therapy intervention that is compatible with their state. And that allows not only the scaling of signal processing, but the scaling of the human labor that is providing psychological therapy to these patients. And so, where we start looking at conversations: this is already being done in the management of sales forces, with companies using AI to monitor sales calls and coach sales reps on how to position things in those calls to more effectively increase the conversion of a sale, and we’re seeing that in healthcare as well.
All right, well, that is all very promising; it kind of lifts up our day to know that there’s stuff coming and it’s going to be here relatively soon. I think that’s probably a good place to leave it. As I look at our timer, we are out of time, but I want to thank both of you for taking the time out of, I’m sure, your very busy days to have this conversation with us and let us in on a little bit of what you’re thinking and what you’re working on. Thank you.
Mudit: Thank you very much, thanks, Byron.
Robert: You’re welcome.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 28: A Conversation with Mark Stevenson

[voices_in_ai_byline]
In this episode, Byron and Mark discuss the future of jobs, energy and more.
[podcast_player name=”Episode 28 – A Conversation with Mark Stevenson” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-58-06)-mark-stevenson.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is “Voices in AI,” brought to you by Gigaom. I’m Byron Reese. Today I’m excited we have Mark Stevenson. Mark is a London-based British author, businessman, public speaker, futurologist, and occasional musician and comedian. He is also a fellow of The Royal Society for the Encouragement of Arts, Manufactures and Commerce. His first book, An Optimist’s Tour of the Future, was released in 2011, and his second, We Do Things Differently, came out in 2017. He also co-founded and helps run the London-based League of Pragmatic Optimists. Welcome to the show, Mark!
Mark Stevenson: Thank you for having me on, Byron! It’s a pleasure.
So, the subtitle of your Optimist’s Tour of the Future is, “One curious man sets out to answer what’s next.” Assuming you’re the curious man, what is next?
You can take “curious” in two ways, can’t you? Somebody is interested in new stuff, or somebody’s just a little bit odd, and I am probably a bit of both. Actually, I don’t conclude what’s next. I actually said the question is its own answer. My work is about getting people to be literate about the questions the future is asking them. What’s next will depend on how we collectively answer those questions.
What’s next could be a climate change, dystopian, highly unequal world; or what’s next could be a green-powered, prosperous, abundant, distributed economy for everybody. And each is likely. What’s next is what we decide to do about it, and that’s why I do the work I do, which is trying to educate people about the questions we’re being asked, and allowing them to imagine for themselves.
You said that’s why you do the work that you do. What do you do?
Well, I guess I am a professional irritant. I work with governments, corporations, universities helping them become literate about the questions the future is asking them. You’ll find that most organizations have a very narrow view of the world, because they are kind of governed by their particular marketplace or whatever, and same with governments and government departments.
So, I’ll give you an example, I was working with an insurance company recently who wanted me to come in and help them, and I just put up a picture of two cars having an accident and I said, “What happens if one or both of these is a driverless car?” and the head of insurance went, “I don’t know.” And I’m like, “Well, you should really be asking yourself that question because that question is coming.” And he said, “Mark, we insure drivers. If there aren’t any, it’s a real fucker on the balance sheet.”
It’s funny, but I used to work on old cars, and they were always junkers when I got them, and one time, I had one parked at the top of the hill and in the middle of the night, the brakes failed evidently and it rolled down the hill and hit another car. That scenario actually happened.
The other thing I said was, “What’s your biggest cost?” and he said, “Of course, it’s claims.” And ninety-seven percent or something of claims are because of human error, and it turns out driverless cars are way safer than cars with drivers in them; so maybe that’s good for him, because maybe it will reduce claims. My point was that I don’t know what he should do. He’s the expert in insurance, but my point is, you should be asking yourselves these questions.
Another example from insurance—I was working with the reinsurance industry, the insurers that insure the insurers. On the one hand, you’re being asked to underpin businesses that are insuring a coal-fired power plant. On the other hand, you’re being asked to insure businesses that are going to be absolutely decimated by climate risk.
And you can’t do both. It’s that systems thinking, I suppose, that I bring to my clients: how the food system, the energy system, the government system, the education system, what’s happening in physics, what’s happening in the arts and culture, what’s happening in technology, what’s happening in economics, what’s happening in politics—how they all interrelate, and what questions they ask you.
And then what are you going to do about it, with the levers you have and the position you’re in, to make our world more sustainable, equitable, humane and just? And if you’re not doing that, why are you getting up in the morning and what is the point of you? That’s kind of my business.
When you deal with people, are they, generally speaking, optimistic, are they pessimistic, or are they agnostic on that, because they’re basically just looking at the future from a business standpoint?
That’s a really good question. They’re often quite optimistic about their own chances and often pessimistic about everybody else’s. [Laughter] If you ask people, “Are you optimistic about the future?” they’re going to go, “Yeah, I’m optimistic about the future.” Then, you go, “Are you optimistic about the future generally, like, for the human race?” And you hear, “Oh, no, it’s terrible.”
Of course, those two things are incompatible. People are convinced of their own ability to prevail against the odds, but not of everybody else’s. And so, I often get hired by companies who say to me, “We want you to help us be more successful in the future,” and then I’ll point out to them that actually there are some existential threats to their business model that may mean they’ll be irrelevant in five years, which they haven’t even thought about.
A really good example of this from the past, which is quite famous, is what happened to Blockbuster. So Netflix went to Blockbuster—I think in 2006—and said, “You should invest in us. You should buy us. We’ll be your online distribution arm.” And the management at Blockbuster went, “I don’t know. I think people will always want to take a cassette home.” But also, Blockbuster made a large amount of their profits from late returns.
So they weren’t likely to embrace downloads, because that would kind of cannibalize one of their revenue streams. Of course, that was very short-sighted of them. And one of the things I say to a lot of my clients is, “Taking the future seriously is going to cost some people their jobs, and I am sorry about that, but not taking the future seriously is going to cost everybody their jobs. So it’s kind of your choice.”
Are your clients continental, British, American… primarily? 
All over. I’m under non-disclosure agreements with most of them.
Fair enough. My follow-up question is going to be, there’s of course a stereotype that Europeans overall are more pessimistic about the future and Americans are less so. Is that true or is it that there’s a grain of truth somewhere, but it’s not really material?
I think there is something in it, and I think it’s because, certainly, people from the United States are very confident about the wonderfulness of the United States and how it will prevail. There’s that “American Dream” kind of culture, whereas Europe is a lot of smaller nations that, up until quite recently, have been beating the crap out of each other. Perhaps we are a little bit more circumspect, but yeah, it’s a very slight skewing in one direction or the other.
You subtitle your book “What’s Next?” and then, you say, “The question is the answer,” kind of in this Zen fashion, but at some level you must have an opinion, like, it could go either way, but it will likely do what? What do you personally think?
I don’t know. I feel it’s really up for grabs. If we carry on the way we’re going, it’s going to be terrible; there’s no doubt about that. I think it’s an ancient Chinese proverb that says, “If we don’t change the direction we’re going, we’re going to end up where we’re headed.” And where we’re heading at the moment is a four-degree world, mass inequality, and mass unemployment from the subject we’re going to get into a bit later, which is AI replacing a lot of middle-class jobs. That’s certainly possible.
Then, on the other hand, because of the other work I do with Atlas of the Future, I’m constantly at the cutting edge, finding people doing amazing stuff. There are all sorts of people out there putting different futures on the table that make it eminently possible for us to have a humane and just and sustainable world. Consider, for instance, that we’re installing half a million solar panels a day at the moment. Solar is doubling in capacity every two or three years, and it’s from a low starting point, but if it carries on like that, we’ll be completely on renewables within a generation.
And that’s not just good for the environment. Even if you don’t care about the environment, it’s really good for the economy, because the marginal cost of renewable energy is zero and the energy price is very, very stable, which is great when you want to invest long-term. Because one of the problems with the world’s economy is that the oil price keeps going up and down, and nobody knows what’s going to happen to their economy as a result.
You’ll remember—I don’t know how old you are, but certainly some of your listeners will remember—what happened after the Yom Kippur War, where the Arab nations, in protest of American support for Israel, just upped the oil price by about fivefold and suddenly, you had a fifty-five mile-per-hour speed limit, there were states that banned Christmas lights because it was a frivolous use of energy, there was gas rationing, etc. That’s a very extreme example of what’s wrong with relying on fossil fuels, just from an economic perspective, not even an environmental one.
So there are all sorts of great opportunities out there, and I think we really are on the dividing line at the moment. I suppose I have just decided to put my shoulder behind fighting for the side of sustainability and humanity and justice, rather than business as usual, but I don’t have a firm view on how it turns out. People call me an optimist because I fight, I suppose, for the optimistic side, but we could lose, and we could lose very badly.
Of course, you’re right that if we don’t change direction, you can see what’s going to happen. But there are other things that no force on heaven and earth could stop, like the trend toward automation, the trend toward computerization, the development of artificial intelligence, and those sorts of things.  
Those are known things that will happen. Let’s dive into that topic. Putting aside climate and energy and those topics for the moment, what do you think are just things that will certainly happen in the future?
This is really interesting. The problem with futurology as a profession—and I use that word “profession” very loosely—is that it’s associated with prediction, and predictions are usually wrong. As you said, there are some things you can definitely see happening, and it’s therefore very easy to predict what I would call the “first-order effects” of that.
A good example: When the internet arrived, it wasn’t hard to predict the rise of email—you’ve got a network of computers with people sat behind them, typing on keyboards, so email is not a massive leap. But did anybody predict the invention of social media? Did anybody predict the role of social media in spreading fake news or whatever? You can’t. These are second-, third-, fourth-order effects. So each technology is really not an answer; it’s just a question.
If you look at AI, we are looking very much at the automation of lots of jobs that previously we would’ve thought “un-automatable.” As already mentioned, driverless cars is one example of artificial intelligence. A great report came out last year from the Oxford Martin School listing literally hundreds of middle-class jobs that are on the brink of being replaced by automation—
Let me put a pin there, because that’s not actually what they say, they go to great pains to say just the opposite. What they say is that forty-seven percent of things people do in their jobs are potentially automatable. That’s why things on their list are things like pharmacist assistants or whatnot. So all they really say is, “We make no predictions whatsoever about what is going to happen in jobs.”
So if a futurologist does anything, the futurologist looks at the past and says, “We know human nature is a constant, and we know things that have happened in the past, again and again and again. We can look at that and say, ‘Okay, that will probably happen again.’” So we know that for the two hundred and fifty, three hundred years since the Industrial Revolution in the West, unemployment has stayed within this fairly narrow band of five to ten percent.
Aside from the Depression, all over the West—even though you’ve had, arguably, more disruptive technologies: the electrification of industry, the mechanization of industry, the end of animal power as a force of locomotion, coal growing from generating five percent of energy to eighty percent in just twenty years—all these enormous disrupting things that did, to use your exact words, “automate jobs that we would’ve thought were not automatable,” and yet we never had a hiccup or a surge in unemployment from that. So wouldn’t it be incumbent on somebody saying something different is going to happen to really go into a lot of detail about what’s different this time?
I absolutely agree with you there, and I am not worried about employment in the long run, because if you look at what’s happened in employment, it’s the “non-routine” things—things that humans are good at—that have been hard to automate. A really good example: at the beginning of the Industrial Revolution, lots of farm laborers; at the end of the Industrial Revolution, not nearly as many farm laborers—I think five percent of the number—because we introduced automation to the farming industry, tractors, etcetera; now far fewer people are needed to farm the same amount of land.
And by the same token, at the beginning of the Industrial Revolution, not so many accountants; by the end of it, stacks of accountants—thirty times more accountants. We usually end up creating these higher-value, more complex jobs. The problem is the transition. In my experience, not many farm laborers want to become accountants, and even if they did, there’s no transition route for them. So whole families, whole swathes of the populace can get blindsided by this change, because they’re not literate about it, or their education system isn’t thinking about it in a sensible way.
Let’s look at driverless technology again. There are 3.5 million truck drivers in the United States, and it’s very likely that a large chunk of them will not have that job available to them in ten or fifteen years, and it’s not just them. Actually, if you go to the American Trucking Association, they will say that one in fifteen of the American workforce is somehow connected to the trucking industry.
A lot of those jobs will be under threat. Other jobs may replace them, but my concern is: what happens to the people who are currently truck drivers? What happens to an education system that doesn’t tell people that truck drivers won’t exist in such numbers in ten or fifteen years’ time? What does the American Trucking Association do? What do the logistics firms that employ those truckers do?
They’ve all got a responsibility to think about this problem in a systemic way, and they often don’t, which is where my work comes in, saying, “Look, Government, you have to think about an education that is very different, because AI is going to be creating a job market that’s entirely different from the one you’re currently educating your children into.”
Fair enough. I don’t think anybody would argue that an industrial-economy education system is going to make workers successful in this world of tomorrow, but the setup you just gave strikes me as a bit disingenuous. Let’s take truck driving as the example. The facts on the ground are that the change will be gradual—you’ve likely got ten years before all the truckers are replaced. So fewer people are going to enter the field, and people nearing retirement are going to retire out of it. Technology seldom does it all that quickly.
But the thing that I think might be different is that, usually, what people say is, “We’re going to lose these lower-skill jobs and we’re going to make jobs for geneticists,” and those people who had these lower-skill jobs are going to become geneticists, and nobody actually ever says that that’s what happens.
The question is, “Can everybody already do a job a little harder than the one they presently have?” So each person just goes up one layer, one notch up the food chain. That doesn’t actually require that you take truck drivers and send them to graduate school for twelve years.
Indeed, and this is why having conversations like this is so important, because, as I said, my thing is about making people literate about the questions the future is asking them. And so, now, we’re having quite a literate conversation about that, and that’s really important. It’s why podcasts like this are important, it’s why the research you do is important. But in my experience, a lot of people, particularly in government, they would not even be having this conversation or asking this question. And the same for lots of people in business as well, because they’re very focused on a very narrow way of looking at things. So, I think I’m in violent agreement with you.
And I with you. I am just trying to dissect it and think it through, because one could also say that about the electrification of industry and all those things I just listed. Nobody said, “Electrification is coming.” We’ve always been reactive, and, luckily, change has come at a pace that our reactive skills have been able to keep up with. Do you think this time is different? Are you saying there’s a better way to do it?
I just think it’s going to be faster this time. I think it’s an arguable truism in the work of futurism that technology waves speed up. For instance, there are some figures I’ve got from the United States National Intelligence Council, and it’s really interesting just to look at how long it took the United States population to adopt certain technologies. It took forty-six years for twenty-five percent of the United States population to bring electricity into their homes from its introduction to the market.
It took just seven years for the World Wide Web, and that was with two and a half times as many citizens. And that makes sense, because each technology provides the platform and the tools to build the next one—you can’t have the World Wide Web until you have electricity. So you see this speeding up, because now you have more powerful tools than you had the last time to help you build the next one, and they distribute much more quickly as well.
So what we have—and this is what my third book is going to be about—is this problem between the speed of change of technology and also, the speed of change of thought and philosophy and new ideas about how we might organize ourselves, and the speed of our bureaucracies and our governments and our administration, which is still painfully slow. And it’s that mismatch of those gears that I think causes the most problems. The education system being a really good example. If your education system isn’t keeping up with those changes, isn’t in lockstep with them, then inevitably, you’re going to do a disservice to many of the students going through it.
Where do you think that goes? Because if it took forty-six years for electricity and seven for the web, eventually it’s like that movie Spaceballs, where the video hits the video store before they’ve finished shooting it. At some point there’s an actual physical limit to that, right? You don’t have a technology that comes out on Thursday and by Friday half the world is using it. So what does that world look like?
Exactly, and all of these things move at slightly different speeds. If you look at what’s happening with energy at the moment, which is one of my favorite topics because I think it kind of underpins everything else—the speed at which the efficiency of solar panels is rising, the speed at which the price of solar is going down, the invention of energy-internet technology based on ideas from Bob Metcalfe—it’s extraordinary.
I was at the EU Commission a few weeks ago, talking to them about their energy policy, looking at it and saying, “Look guys, you have a fantastic energy policy for 1994. What’s going on here? How come I am having to tell you about this stuff? We should be moving to a decentralized, decarbonized, much more efficient, much cheaper energy system, because that’s good for everybody, but you’re still writing energy policy as if it were the mid-’90s.” And that really worries me. Energy is not going to move as fast as a new social networking application, because you do have to actually build stuff, stick it in the ground and connect it all together, but it is still moving way faster than the administration, and that is my major concern.
The focus of my work for the next two to three years is working out how we get those things moving at the same speed, or at least nearly the same speed, so they can usefully talk to each other. Because governments, at the moment, don’t talk to technology in any useful way. Take data protection law: I was just talking to a lawyer yesterday and he said, “I’m in the middle of this data protection case. I am dealing with data protection law that was written in 1985.”
Let’s spend one more minute on energy, because it obviously makes the world go around, literally. My question is, the promise of nuclear way back was that it would be too cheap to meter, or in theory it could’ve been, and it didn’t work out. There were all kinds of things that weren’t foreseen and whatnot. Energy is arguably the most abundant thing in the universe, so do you think we’ll get to a point where it’s too cheap to meter, it’s like radio waves, it’s like the water fountain at the department store that nobody makes you put a quarter in?
Yeah, I think we will, but I think that comes from a distributed system, rather than a centralized one. One of my pet tropes that I trot out quite regularly is this idea that we’re moving from economies of scale to economies of distribution. It used to be that the most efficient way to do things was to get everything in a centralized place and do it all there because it was cheaper that way, given the technology we had at that time. Whether it was schools where we get all the children into a room and teach at them, whether it was power stations where we dig up a bunch of coal, take it to a big factory or power station, burn it and then send it out through the wires. Even though in your average coal-fired power plant, you would lose sixty-seven percent of the energy through waste-heat, it was still the most efficient way to do things.
Now, we have these technologies that are distributed. Even though they might be slightly less efficient or not quite as cost-effective, in and of themselves, when you connect them all together and distribute them, you start to see the ability to do things that the centralized system can’t. Energy, I think, is a really good example of that.
All our energy is derived from the sun, and the sun’s energy doesn’t hit just power plants. It hits the entire planet, and there’s that very famous statistic, that there’s more energy that hits the Earth’s surface in an hour than the human race uses in a year, I think. The sun has been waving this massive energy paycheck in our face every second since it started burning, and we haven’t been able to bank it very well.
So we’ve been running into the savings account, which is fossil fuels. That’s sunshine that has been laid down for us very dutifully by Mother Nature for billions of years and we can dig it up, thank you very much. Thank you for the savings account, but now, we don’t need the savings account so much because we can actually bank the stuff as it’s coming towards us with the improving renewable technologies that are out there. Couple that with an energy Internet, and you start to make your energy and your fuel where you are. I’m also an advisor to Richard Branson’s “Virgin Earth Challenge”, which is a twenty-five million dollar prize for taking carbon out of the atmosphere.
You have to be able to do that in an environmentally sustainable way, and make a profit while you’re doing it. And I have to be very careful and say this is not the official view of the Virgin Earth Challenge, but I am fairly confident that we will award that prize in the next three to four years, because we’ve got finalists that are taking carbon directly out of the air and turning it into fuel, and they’re doing it at a price point that’s competitive with fossil fuels.
So if you distribute the production of liquid fuels and electricity and anybody can do it, that means you as a school can do it, you as a local business can do it. And what you find is when people do take control of the energy system, because they’re not so motivated by making a profit, the energy is cheaper, they maintain it better, and everybody’s happier.
There’s a town in the middle of Texas right now called Georgetown—65,000 Trump voters who I imagine are not that interested in the threat of climate change, as conservatives generally don’t seem to think it’s a problem—and they’re all moving over to renewables, because it’s just cheaper than using oil, and they are in the middle of central Texas. I think we’re definitely going in that direction.
You’re entirely right. I am going to pull these numbers from my head, so they could be off, but something like four million exajoules of sunlight hit the planet every year, and humanity needs about five hundred. That’s what it is right now: four million raining down, and we have to figure out how to pull five hundred of them out and harvest those economically. Maybe, if the Virgin Earth Challenge works, there’s going to be a crisis in the future—there’s not enough carbon in the air! They’ve pulled it all out at a profit.
That would be a nice problem to have, because we’ve already proven to ourselves that we can put carbon in the air. That’s not going to be a problem if it’s getting too low.
So let’s return to artificial intelligence for a moment. I want to throw a few things at you. Two different views of the world—I’d love to talk about each one by itself. One of them is that the time it takes for a computer to learn to do a task gets shorter and shorter as we learn how to do it better, and that there’s some point at which it is possible for the computer to learn to do everything a human can do, faster than a human can do it. And it would be at that point that there are literally no jobs, or could be literally no jobs if we chose that view. So, whether you think that or not, I am curious about, but assuming that that is true, what do you think happens?
I think we find new kinds of jobs. I really do. The thing is that the clue is in the name, “artificial intelligence.” We have planes; that’s artificial flying. We don’t fly the same way that birds fly. We’ve created an entire artificial way of doing it. And the intelligences that will come out of computers will not be the same as human intelligence.
They might be as intelligent, arguably, although I am not convinced of that yet, but they will be very different intelligences—in the same way that a dog’s intelligence is not the same as an ant’s intelligence, which is not the same as my Apple MacBook’s intelligence, if it has any, which is not the same as human intelligence. These intelligences will do different things.
They’ll be artificial intelligences and they’ll be very, very good at some things and very bad at other things. And the human intelligence will have certain abilities that I don’t think a machine will ever be able to replicate, in the same way that I don’t believe a wasp is ever going to be as good as me at playing the bass guitar and I am never going to be as good as it at flying.
So what would be one of those things that you would be dubious that artificial intelligence would be able to do?
I think it is the moral questions. It’s the actual philosophy of life—what are we here for, where are we going, why are we doing it, what’s the right thing to do, what do we value, and also the curiosity. I interviewed Hod Lipson at Columbia and he was very occupied with the idea of creating a computer that was curious, because I think curiosity is one of those things that sort of defines a human intelligence, that machines, to my knowledge, don’t have in any measurable sense.
So I think it would be those kind of very uniquely human things—the ability to abstract across ideas and ask moral, ethical questions and be curious about the world. Those are things that I don’t see machines doing very well at the moment, at all, and I am not convinced they’ll do them in the future. But it’s such a rapidly evolving field and I’m not a deep expert in AI, and I’m willing to be proved wrong.
So, you don’t think there will ever be a book One Curious Computer Sets Out To Answer What’s Next? 
Do you know what? I don’t, but I really wish there was because I’d love to go on stage and have that panel discussion with that computer.
Then, let’s push the scenario one step further. I would have to say it’s an overwhelming majority of people who work in the AI field who believe that we will someday—and interestingly, the estimates range from five to five hundred years—make a general intelligence. And it begins with the assumption that we, our brains and our minds, are machines and therefore, we can eventually build a mechanical one. It sounds like you do not hold that view.
It’s a nuanced view. Again, it’s interesting to discuss these things. What we’re really talking about here is consciousness, because if you want to build an “artificial general intelligence,” as they call it, what you’re talking about is building a conscious machine that can have the same kinds of thoughts and reflections that we associate with our general intelligence. Now, there are two things I’d say.
The first is, to build a conscious machine, you’d have to know what consciousness is, and we don’t. We’ve been arguing about it for two thousand years. I would also say that some of the most interesting work in that field is happening in AI, particularly in robotics, because in nature there is no consciousness without a body. It may be that when we ask, “What is consciousness?” it isn’t actually one thing; it’s actually eight separate questions we have to answer, and once we’ve worked out what those eight are, we can answer them with technology. I think that might be a plausible route.
And clearly, as you point out, consciousness must be computable, because we are computing it right now. You and I are “just” DNA computer code being read, and that computer code generates proteins and lipids and all kinds of things to make us work, and we’re having this conversation as a result of these computer programs that are running in ourselves. So clearly, consciousness is computable, but I am still very much to be convinced that we have any idea of what consciousness really is, or whether we’re even asking the right questions about it.
To your point, we’re way ahead of ourselves in one sense, but do you think that in the end, if you really did have a conscious computer, a conscious machine, does that in some way undermine human rights? In the sense that we think people have these rights by virtue of being conscious and by virtue of being sentient, being able to feel pain? Do you think that if all of a sudden, the refrigerator and everything in your house also made that claim, that we are somehow lessened by it, not that the machines are somehow ennobled by it?
I would hope not. George Church, who runs a lab at Harvard Medical School, said to me, “If you could show me a conscious machine, I wouldn’t be frightened by it. I’d be emboldened by it. I’d be curious about how that thing works, because then I’d be able to understand myself better.”
I was asked just recently by the people who are making “The Handmaid’s Tale,” the TV series based on the Margaret Atwood book, “What do you think AI is going to do for humanity?” One hopeful scenario is that it helps us understand ourselves better, because if we are able to create that machine that is conscious, we will have had to answer the question “What is consciousness?” as I said earlier, and when we’ve done that, we will also have unlocked some of the great secrets about ourselves—about our own motivations, about our emotions, why we fight, what’s good for us, what’s bad for us, how to handle depression. We might open a whole new toolbox for actually understanding ourselves better.
One interpretation of it is that actually creating artificial general intelligence is one of the best things that could happen to humanity, because it will help us understand ourselves better, which might help us achieve more and be better human beings.
At the beginning of our chat, you listed a litany of what you saw as the big challenges which face our planet. You mentioned income inequality. So, absent wide-scale redistribution, technology, in a sense, promotes that in a way, doesn’t it?
Microsoft, Google and Facebook between them have generated 12 billionaires, so it’s evidently easier to make a billion dollars now—not me, but for some people to make billions now—than it would’ve been twenty years ago or five hundred years ago for that matter. Do you think that technology in itself, by multiplying the abilities of people and magnifying it ever-more, is a root cause of income inequality? Or do you think that comes from somewhere else?
I think income inequality comes from the way our capital markets and our property law works. If you look at democracy for instance, there’s several pillars to it. If you talk to a political philosopher, they’ll say, you know, a functioning democracy has several things that need to be working. One is you need to have universal suffrage, so everybody gets to vote, you need to have free and fair elections, you need to have free press, you need to have a judiciary that isn’t influenced by the government, etcetera.
The other thing that’s mentioned but less talked about is working property rights. Working property rights say that you, as a citizen, have the right to own something, whether that’s some property or machinery or an idea, and you are allowed to generate an income from that and profit from it. Now that’s a great idea, and it’s part of entrepreneurship and going and creating something, but the problem is once you have a certain amount of property that you’ve profited from, you would then have more ability to go and buy some property from other people.
What’s happening is the property rights, whether they’re intellectual or physical, have concentrated themselves in fewer and fewer hands, because as you get rich, it’s easier to buy other stuff. And I know this from my own experience. I used to be a poor musician-student. Now, I’m doing pretty well and I find myself today buying some shares in a company that I thought was going to do really well… and they did. And I find myself just thinking, “Wow, that was easy.” It’s easy for me now because I have more property rights to acquire more property rights, and that’s what we’re seeing. There’s a fundamental problem there somewhere, and I am not quite sure how we deal with it.
After World War II, England toyed with incredibly high, sometimes over 100% marginal taxes on unearned income, and I think The Beatles figured they needed to leave. What is your take on that? Did that work, is that an experiment you would advocate repeating, or what did we learn from that? 
I think we’ve learnt that’s a very bad way of doing it. Again, it comes back to how much things cost. If things are expensive and you’re running a state, you need to collect more taxes. We’re having this huge debate in the UK at the moment about the cost of the National Health Service and how you fund it. To go back to some of our earlier conversation, if you suddenly reduce the cost of energy to very little, actually everything gets cheaper—healthcare, education, building roads.
If you have a whole bunch of machines that can do stuff for you cheaper than humans could do it, in one way that’s really good, because now you can provide healthcare, education, road building, whatever… cheaper. The question is, “How does the job market change then? Where do human beings find value? Do we create these higher-value jobs?” One radical idea that’s come out at the moment is this idea of universal basic income.
The state has now enough money coming in because the cost of energy has gone down, and it can build stuff much more cheaply. We’ll just get a salary anyway from the state to follow our dreams. That’s one plausible scenario.
Moving on, I would love to hear more about the book that’s just come out. I’ve read what I could find online, I don’t have a copy of it yet. What made you write We Do Things Differently, and what are you hoping it accomplishes?
So, my first book was really an attempt to talk about the cutting edge of technology and what’s happening with the environment in an entertaining way for the layman. I got to the end of that book and it became very clear to me that we have all the technology we need to solve the world’s grand challenges, whether that’s the energy price, or climate change, or problems with manufacturing.
We’re not short of technology. If we didn’t invent another thing from tomorrow, we could deal with all the world’s grand challenges, we could distribute wealth better, we could do all the things. But it’s not technology that’s the problem. It’s the administration, it’s the way we organize ourselves, it’s the way our systems have been built, and how they’ve become kind of fossilized in the way they work.
What I wanted to do with this book is look at systems and look at five key human systems—energy, healthcare, food, education and governance—and say, “Is there a way to do these better?” It wasn’t about me saying, “Here’s my idea.” It was about me going around the world and finding people who’ve already done it better and prevailed and say, “What do these people tell us about the future?”
Do they give us a roadmap to and a window on a future that is better run, more sustainable, kinder to everybody, etcetera? And that’s what it is. It’s a collection of stories of people who’ve gone and looked at existing systems, challenged those systems, built something better, and they’ve succeeded and they’ve been there for a while—so you can’t say it was just like a six-month thing. They’re actually prevailing, and it’s those stories in education, healthcare, food, energy and governance.
Of all the litany of things you run across, I think it’s the saddest fact I know; any time food comes up, it jumps to the front of my mind. There are a billion people, more or less—960-something million—who are hungry. You can go to the UN’s website and download a spreadsheet that lists them out by country.
The sad truth is that seventy-nine percent of hungry people in the world live in nations that are net food exporters. So, the food that’s made inside of the country can be sold on the world market for more than the local people can pay for it. The truth in the modern age is not that you starve to death if you have no food; it is that you starve to death if you have no money. What did you find?
 There’s an even worse fact that I can tell you, which is, the human race wastes between thirty and fifty percent of the food it makes, depending on where you are in the world, before it even reaches the market. It spoils or it rots or it gets wasted or damaged between the field and the supermarket shelf, and this is particularly prevalent in the global south, the hotter countries. And the reason is we simply don’t have enough refrigeration, we don’t have enough cold chains, as they’re called.
So one of the great pillars of civilization, which we kind of take for granted and don’t really think about, is refrigeration and cooling. In the UK, where I am, sixteen percent of our electricity is spent on cooling stuff, and it’s not just food as well. It’s medical tissues and medicines and all that kind of stuff. And if you look at sub-Saharan Africa, it’s disastrous because the food they are growing, they are not even eating because it ruins too quickly, because we don’t have a sustainable refrigeration system for them to use. And one of the things I look at in the book is a new sustainable refrigeration system that looks like it could solve that problem.
You also talk about education. What do you advocate there? What are your thoughts and findings?
I try not to advocate anything, because I think that’s generally vainglorious and I’m all about debate and getting people to ask the right questions. What I will do is sort of say, look, this person over here seems to have done something pretty extraordinary. What lessons can we draw from them?
So, I went to see a school in a very, very rough housing estate in Northern England. This is not an urban paradise; this is a tough neighborhood—lots of violence, drug dealing, low levels of social cohesion—and in the middle of this housing estate there was a school that, I think, the government called the fifth-worst school in the entire UK, and they were about to close it. A guy called Carl turns up as the new headmaster, and two years later it’s considered one of the best schools in the world, and he’s done all that without changing any staff. He took the same staff everybody thought was rubbish, and two years later they’re regarded as some of the best educators in the world.
And the way he did that is not rocket science. It was really about creating a collaborative learning environment. One of the things he said was, “Teachers don’t work in teams anymore. They don’t watch each other teach. They don’t learn about the latest of what’s happening in education. They kind of become atomized and just do their lessons, so I’m going to get them working as a team.”
He also said they’d lost any culture of aspiration about what they should be doing—they were just trying to get to the end of the week rather than saying, “Let’s create the greatest school in the world.” So he took some very simple management practices: “We’re going to aspire to be the best, we’re going to start working together, and we’re going to start working with our kids.”
And he did the same with the kids, even though they were turning up at this school four years old, most of them still in nappies, most of them without language, even at four—by the time they were leaving, they were outperforming the national average, from this very rough working-class estate. By also working with the kids in the same way and saying, “Look, what’s your aspiration? How are we going to design this together collectively as a school—you the students, us the teachers?”
This is actually good management practice, but introduced into a school environment, and it worked very well. I am vastly trivializing the amount of sweat and emotional effort he had to put into that. But, again, talking about teamwork: rather than splitting the world up into subjects, which is what we tend to do in schools, he said, “Let’s pick things that the kids are really interested in, and we’ll teach the subjects along the way, because they’ll all be interrelated with each other.”
I walked into a classroom there and it’s decked out like NASA headquarters, because they picked the theme of space for this term for this particular class. But of course, as they talk about space and astronauts, they learn the physics, the maths, they learn about communications, they learn about history…
And I said to Carl, “Once they’re given this free environment, how do they feel when exams come along, which is a very constraining environment?” He said, “Oh, they love it.” I’m like, “You’re kidding me!” He said, “No, they can’t wait to prove how much they’ve learnt.”
None of this is rocket science, but it’s really interesting that education is one of those places where, when you try and do anything new, someone is going to try to kill you, because education is autobiography. Everybody’s been through it, and everybody has a very prejudiced view of what it should be like. So for any change, it’s always going to upset somebody.
You made the statement that even if we didn’t invent any new technology, we would know how to solve all of life’s greatest challenges. I would like to challenge that and say, we actually don’t know how to solve the single biggest challenge.
This sounds good.
Death.
Death! That’s an interesting question, whether you view it as a challenge or not.
I think most people, even if they don’t want to live indefinitely, would aspire to the power to choose the moment of their own demise—to live a full life and then choose the terms of their own ending. Do you think death is solvable? Or at least aging?
 I think aging is probably solvable. Again, I am not a high-ranking scientist in this area, but I know a number of them. I was working with the chief scientist at one of our big aging charities recently, and if you look at the research that’s coming out from places like Stanford and Harvard, there’s an incredible roadmap to humans living healthy lives in healthy bodies till one hundred and ten, one hundred and thirty. Stanford have been reversing human aging in certain human cell lines since 2014.
The problem is, of course, it turns out that what’s good for helping humans live longer is also often quite good for promoting cancer. And so that’s the big conundrum we have at the moment. Certainly, we are living longer and healthier anyway. Average life expectancy has been rising a quarter-year for every year, for the last hundred years. Technology is clearly doing something in that direction.
Well, what it seems to be doing is ending premature death, but the number of people who live to be supercentenarians—one hundred and ten and above—is about forty, and it doesn’t seem to be going up particularly.
Yeah, I think that’s true. But it depends what you call “premature death,” because actually, certainly the age at which we die is definitely creeping up. But if we can keep ourselves a bit younger, if we can, for instance, find a way to lengthen the telomeres in our cells without encouraging cancer, that’s a really good thing because most of the diseases we end up dying from are the diseases of aging—cardiovascular disease, stroke, etcetera.
We haven’t solved it yet. You asked me if I think it’s solvable. Like you, I think I am fairly optimistic about the human race’s ability to finally ask the right questions, and then find answers to them. But I think we still don’t really understand aging well enough yet to solve it, but I think we’re getting there much faster, I would say, than we are perhaps with an artificial general intelligence.
Talk about the “Atlas of the Future” project.
 Ah, I love the Atlas. The Atlas is kind of the first instantiation of something from the Democratizing the Future society. What we’re trying to do is to say, “Look, if we want the world to progress in a way that’s good for everybody, it needs to involve everybody.” And therefore, you need to be literate about the questions the future asks you, and not just literate about threats. Which is what we get from the media. The general media will just walk in and go, “It’s all going to be terrible, everyone’s trying to kill you.” They’ll drop that bomb and then just walk away, because that gets your attention.
We are trying to say, “Yeah, all those stories are worth paying attention to, and there are a whole other bunch of stories worth paying attention to, about what we can do with renewables, what we can do to improve healthcare, what we can do to improve social cohesion, what we can do to improve happiness, what we can do to improve nations understanding each other, what we can do to reduce partisan political divides, etcetera.” And we collect all that stuff. So it’s a huge media project.
If you go to “The Atlas of the Future,” you’ll find all these projects of people doing amazing stuff—some of it very big-picture, some of it small-picture. What we’re doing with that is farming out the content via TV series, the books I write, and a podcast—The Futurenauts, which is me and my friend Ed Gillespie—where we talk about the stuff on the Atlas and interview people.
So it’s about a way of creating a culture of the future that’s aspirational, because we kind of feel that, at the moment, we’re being asked to be fearful of the future and run away in the opposite direction. And we’d like to put on the table the idea that the future could be great, and we’d like to run towards that, and get involved in making it.
And then, what’s this third book you are working on?
The third book is just an idea at the moment, but it is about how do we get our administration, our government, our bureaucracy to move at something like a similar pace to the pace of ideas and technology, because it seems to me that it’s that friction that causes so many of the problems—that we don’t move forward fast enough. The time it takes to approve a drug is stratospheric, and there’s some good reasons for that, I am not against the work the FDA does, but when you’re looking at, sometimes, twelve or thirteen years for a drug to reach the market, that’s got to be too slow.
And so, we have to think about ways to get those parts of the human experience—the technology, the philosophy and the bureaucracy—working at roughly the same clock speed, then I think things would be better for everybody. And that’s the idea I want to explore in the next book—how we go about doing that. Some of it, I think, will be blockchain technology, some of it might be the use of virtual reality, and a whole bunch of stuff I haven’t probably found out yet. I’m really just asking that question. If any of your listeners have any ideas about what some of the technologies or approaches or philosophies that will help us solve that, I’d love to hear from them.
You mentioned a TV program earlier. In views of the future, science fiction movies, TV, books, all of that, what do you read or watch that you think, “Huh, that could happen. That is a possible outcome”? What do you think is done really well?
It’s interesting, because I have a sixteen-month old child, and I am trying to write a book and save the world, so I hardly watch anything. I think it’s very difficult to cite fiction as a good source. It’s an inspiration, it’s a question, but it never turns out how we imagine. So I take all those things with a pinch of salt, and just enjoy them for what they are.
I have no idea what the future is going to be like, but I have an idea that it could be great, and I’d like it to be so. And actually, there is no fiction really like that, because if you look at science fiction, generally, it’s dystopian, or it’s about conflict, and there’s a very good reason for that—which is that it’s entertaining. Nobody wants to watch a James Cameron movie where the robots do your gardening. That’s not entertaining to watch. Terminator 3: Gardening Day is nothing that anybody is going to the cinema to see.
I’m in full agreement with that. I authored a book called Infinite Progress, and, unlike you, I have a clearer idea of what I think the future is going to be. And I used to really be bothered by dystopian movies, mainly because I am required to go see them—everybody’s like, “Did you see Elysium?”—so I have to go see and read everything, because I’m in that space. And it used to bother me, until I read a quote, I think by Frank Herbert—I apologize if it isn’t him—who said, “Sometimes, the job of science fiction is to warn you of something that could happen so that you have your guard up about it,” so you’re like, “A-ha! I’m not going to let that happen.” It kind of lets the cat out of the bag. And so I was able to switch my view on it by keeping in mind that these are cautionary tales.
I think we also have to adopt that view with the media. The media leads on the stuff that is terrifying, because that will get our attention, and we are programmed as human beings to be cautious first and optimistic second. That makes perfect sense on the African savanna. If one of your tribe goes over the hill without checking for big cats, and gets eaten by a big cat, you’re pretty cynical about hills from that moment on. You’re nervous of them, you approach them carefully. That’s the way we’re kind of programmed to look at the world.
But of course, that kind of pessimism doesn’t move us forward very much. It keeps us where we are, and even worse than that is the cynicism. And of course, cynicism is just obedience to the status quo, so I think you can enjoy the entertainment, and enjoy the dystopia, enjoy us fighting the robots, all that kind of stuff. One thing you do see about all those movies is that eventually, we win, even if we are being attacked by aliens or whatever; we usually prevail. So whilst they are dystopian, there is this yearning amongst us, saying, “Actually, we will prevail, we will get somewhere.” And maybe it will be a rocky ride, but hopefully, we’ll end up in the sunshine.
An Optimist’s Tour of the Future is still available all over the world—I saw it was in, like, nine languages—and you can order that from your local book proprietor. And We Do Things Differently—is that out in the US? When will that be out in the US?
It’s out in the US early next year. We don’t have a publication date yet, but I am told by my lovely publishers that it will be sort of January or February next year. But you can buy the UK edition on Amazon.com and various other online stores, I’m sure.
If people want to follow you and follow what you do and whatnot, what’s the best way to do that? 
My Twitter handle is @Optimistontour. You can learn about me at my website, which is markstevenson.org, and check out “The Futurenauts” podcast at thefuturenauts.com where we do something similar to this, although we have more swearing and nakedness than your podcast. Also, get yourself down to “Atlas of the Future.” I think that would be the central place to go. It’s a great resource for everybody, and that’s not just about me—there’s a whole bunch of future, forward-thinking people on that. Future heroes. We should probably get you on there at some point, Byron.
I would be delighted. This was an amazing hour! There could be a Mark Stevenson show. It’s every topic under the sun. You’ve got wonderful insights, and thank you so much for taking the time to share them with us. Bye!
 Cheers! Bye!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 27: A Conversation with Adrian McDermott

[voices_in_ai_byline]
In this episode, Byron and Adrian discuss intelligence, consciousness, self-driving cars and more.
[podcast_player name=”Episode 27 – A Conversation with Adrian McDermott” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-58-48)-adrian-mcdermott.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Adrian McDermott. He is Zendesk’s President of Products, where he works to build software for better customer relationships, including, of course, exploring how AI and machine learning impact the way customers engage with businesses. Adrian is a Yorkshireman living in San Francisco, and he holds a Bachelor of Science in Computer Science from De Montfort University. Welcome to the show, Adrian!
Adrian McDermott: Thanks, Byron! Great to be here!
My first question is almost always: What is artificial intelligence?
When I think about artificial intelligence, I think of AI as a system that can interact with and learn from its environment in an independent manner. I think that’s where the intelligence comes from. AI systems have traditionally been optimized for achieving specific tasks. In computer science, we used to write programs using procedural languages, and we would tell them exactly what to do at every stage. With AI, the system can actually learn and adapt from its environment and, you know, reason to a certain extent, and build the capabilities to do that. Narrowly, I think that’s what AI is, but societally the term takes on a series of connotations—some scary, some super interesting and exciting—when we think about it and talk about it.
We’ll get to that in due course, but back to your narrow definition, “It learns from its environment,” that’s a pretty high bar, actually. By that measure, my dog food bowl that automatically refills when it runs out, even though it’s reacting to its environment, is not learning from its environment; whereas a Nest thermostat, you would say, is learning from its environment and therefore is AI. Did I call the ball right on both of those, kind of the way you see the world?
I think so. I mean, your dog bowl, perhaps, it learns, over time, how much food your dog needs every day, and it adapts to its environment, I don’t know. You could have an intelligent dog bowl, dog feeding system, hopefully one that understands the nature of most dogs is to keep eating until they choke. That would be an important governor on that system, let’s be honest, but I think in general that characterization is good.
We, as biological computational devices, learn from our environment and take in a series of inputs from those environments and then use those experiences, I think, to pattern match new stimuli and new situations that we encounter so that we know what to do, even though we’ve never seen that exact situation before.
So, and not to put any words in your mouth, but it sounds like you think that humans react to our environment and that is the source of our intelligence, and a computer that reacts to its environment, it’s artificial intelligence, but it really is intelligent. It’s not artificial, it’s not faking it, it really is intelligent. Is that correct?
I think artificial intelligence is this ability to learn from the environment and come up with new behaviors as a result of this learning. There are a tremendous number of examples of AI systems that have created new ways of doing things and have learned. I think one of the most famous is move thirty-seven in Google’s AlphaGo match against Lee Sedol, one of the greatest Go players in the world. It performed a move that was shocking to the Go community and the Go intelligentsia, because it had learned and evolved its thinking to a point where it created new ways of doing things that were not natural for us as humans. I think artificial intelligence, really, when it fulfills its promise, is able to create and learn in that way, but currently most systems do that within a very narrow problem domain.
With regard to an artificial general intelligence, do you think that the way we think of AI today eventually evolves into an AGI? In other words, are we on a path to create one? Or do you think a truly generalized intelligence will be built in a completely different way than how we are currently building AI systems today?
I mean, there are a series of characteristics of intelligence that we have, right, that we think about. One of them is the ability to think about a problem, think about a scenario, and run our head through different ways of handling that scenario and imagine different outcomes, and almost to self actualize in those situations. I think that modern deep-learning techniques actually are, you know, the construction is such that they are looking at different scenarios to come up with different outcomes. Ultimately, we don’t necessarily, I believe it’s true to say, understand a great deal about the nature of consciousness and the way that our brains work.
We know a lot about the physiology, not necessarily about the philosophy. It does seem like our brains are sort of neuron-based computation devices that take a whole bunch of inputs and process them based on stored experiences and learnings, and it does seem like that’s the kind of systems that we’re building with artificial-intelligence-based machines and computers.
Given that technology gets better every year, year over year, it seems like a natural conclusion that ultimately technology advancements will be such that we can reach the same point of general intelligence that our cerebral cortex reached hundreds of thousands of years ago. I think we have to assume that we will eventually get there. It seems like we’re building the systems in the same way that our brains function right now.
That’s fascinating because, that description of human’s ability to imagine different scenarios is in fact some people’s theory as to how consciousness emerged. And, not putting you on the spot because, as you said, we don’t really know, but is that plausible to you? That being able to essentially, kind of, carry on that internal dialogue, “I wonder if I should go pull that tiger’s tail,” you know, is that what you think made us conscious or are you indifferent on that question?
I only have a layman’s opinion, but, you know, there’s a test—I don’t know if it’s in evolutionary biology or psychology—the mirror test where if you put a dog in front of a mirror it doesn’t recognize itself, but Asian elephants and dolphins do recognize themselves in the mirror. So, it’s an interesting question of that ability to self-actualize, to understand who you are, and to make plans and go forward. That is the nature of intelligence and from an evolutionary point of view you can imagine a number of ways in which that consciousness of self and that ability to make plans was essential for the species to thrive and move forward. You know we’re not the largest species on the planet, but we’ve become somewhat dominant as a result of our ability to plan and take actions.
I think certain behaviors that we manifest came from the advantageous nature of cooperation between members of our species, and the way that we act together and act independently and dream independently and move together. I think it seems clear that that is probably how consciousness evolved, it was an evolutionary advantage to be conscious, to be able to make plans, to think about oneself, and we seem to be on the path where we’re emulating those structures in artificial intelligence work.
Yeah, the mirror test is fascinating because only one bird passes it and that is the magpie.
The magpie?
Yeah, and there’s recent research, very recent, that suggests that ants pass it, which would be staggering. It looks like they’ve controlled for so many things, but it is unquestionably a fascinating thing. Of course, people disagree on what exactly it means.
Yeah, what does it mean? It’s interesting that ants pass because ants do form a multi-role complex society. So, is it one of the requirements of a multi-role complex society that you need to be able to pass the mirror test, and understand who you are and what your place is in that society?
Yeah, that is fascinating. I actually emailed Gallup and asked him, “Did you know ants passed the test?” And he’s like, “Really? I hadn’t heard that.” You know, because he’s the originator of it.
The argument against the test goes like this: If you put a red dot on a dog’s paw, the dog knows that’s its paw and it might lick it off its own paw, right? The dog has a sense of self, it knows that’s its foot. And so, maybe all the mirror test is doing is testing to see if the dog is smart enough to understand what a mirror is, which is a completely different thing.
By extension, and again with your qualification that it’s a layman’s viewpoint: I asked you a question about AGI and you launched into a description of consciousness. Can I infer from your answer that you believe an AGI will be conscious?
You can infer from my answer that I believe consciousness, or some kind of ability to have freedom of thought direction, is a requirement for a true artificial general intelligence. I think that is part of the nature of consciousness, or one way of thinking about it.
I would tend to agree, but let me just… Everybody’s had that sensation where you’re driving and you kind of space out, right, and all of a sudden you snap to a minute later and you’re like, “Whoa, I don’t have any memory of driving to this spot,” and yet, in that interval, you merged into traffic, you changed lanes, and all of that. So, you acted intelligently, but you were not, in a sense, conscious at that moment. Do you think the problem is with saying, “Oh, that’s an example of intelligence without consciousness”? Is it more like, “No, no, you really were conscious all that time,” or is it, “No, no, you didn’t have some new idea or anything, you just managed off rote”? Do you have a thought on that?
I think it’s true that so much of what we do as beings is managed off rote, but probably a lot of the reason we’re successful as a species is because we don’t just go off rote. If someone had driven in front of you, or the phone had rung, if any of those things had happened, something important enough to be flagged and stored in short-term memory while you were driving, then you would have moved into a different mode of consciousness. I think the human brain takes in a massive amount of input but filters it down to just this, quote unquote, “stream of consciousness” of experiences that are important, or things that are happening. And it’s that filter of consciousness, or the filter of the brain, that puts you in the moment where you’re dealing with the most important thing. That, in some ways, characterizes us.
When we think about artificial intelligence and how machines experience the world: we have five sensory inputs feeding into our brains and our memories, but a machine can have vision and sound, yes, but also GPS, infrared, or some random event stream from another machine. All of these inputs act, in some ways, as sensors for an artificially-intelligent machine, and they are, or could be, much richer and more diverse than ours. And that governor, the thing that filters all of that down, figures out what the objective is for the artificially-intelligent machine, takes the right inputs, and does the right pattern matching and the right thinking, is going to be incredibly important to achieving artificial general intelligence. That is where it knows how to direct, if you like, its thoughts, how to plan, how to act, and how to think about solving problems.
This is fascinating to me, so I have just a few more questions about AGI, if you’ll indulge me for another minute. The range of time that people think it’s going to take us to get there is, by my reckoning, five years at the soonest and five hundred at the longest. Do you have any opinion on when we might develop an AGI?
I think I agree with five years at the soonest but, you know, honestly, one of the things I struggle with as we think about that is: who really knows? We have so little understanding of how the brain actually works to produce intelligence and sentience that it’s hard to know how rapidly we’re approaching or replicating it. It could be that, as we build smarter and smarter non-general artificial intelligence, we’ll eventually just wander into a greater understanding of consciousness or sentience by accident, simply because we built a machine that emulates the brain. That’s, in some ways, a plausible outcome: we’ll get enough computation that eventually we’ll figure it out, or it will become apparent. If you were to ask me, I think that’s ten to fifteen years away.
Do you think we already have computers fast enough to do it, we just don’t know how to do it, or do you think we’re waiting on hardware improvements as well?
I think the primary improvements we’re waiting on are software, but software advances are often constrained by the power and limits of the hardware we’re running them on. Until you see a more advanced machine, it’s hard to practically imagine or design a system that could run upon it. The two things improve in parallel, I think.
If you believe we’ll, maybe, have an AGI in fifteen years, that if we have one it could very easily be conscious, and that if it’s conscious it would presumably have a will, are you one of the people who worries about that? The superintelligence scenario, where it has different goals and ambitions than we have?
I think that’s one of many scenarios that we need to worry about. In our current society, it seems like any great idea is weaponizable in a very direct way, which is scary. The way that we’re set up, locally and globally, is intensely competitive, where any advantage one could eke out is then used to dominate, take advantage of, or gain advantage over our fellow man, in this country and other countries, globally, etcetera.
There’s quite a bit of fear-mongering about artificial general intelligence, but artificial intelligence does give the owner or inventor of those technologies innate advantages in terms of using them for great gain. I think there are many stages along the way where someone can very competitively put those technologies to work without ever achieving artificial general intelligence.
So, yes, there is the moment of singularity, when artificial general intelligence machines can invent machines that are considerably faster, in ways that we can’t understand. That’s a scary thought, and technology may be out-thinking our moral and philosophical understanding of its implications. But at the same time, some of the things that we’re building now, which, like you said, are just fifty percent better or seventy-seven percent smarter, could, through weaponization or just through extreme mercantile advantage-taking, have serious effects on the planet, humankind, etcetera. I do believe that we’re in an AI arms race, and I do find that a little bit scary.
Vladimir Putin just said that he thinks the future is going to belong to whoever masters AI, and Elon Musk recently said, “World War Three will be fought over AI.” It sounds like you think that’s maybe a more real-world concern than the rogue AGI.
I think it is, because we’ve seen tremendous leaps in the capability of technology just in the last five to ten years. More and more people are working in this problem domain; the number of people starting to think about AI, and the number of companies deploying some kind of AI technology, must be doubling every six months or something ridiculous like that. As a result, breakthroughs are going to begin happening, either in public academia or, more likely, in private labs, that will be leverageable by the entities that create them in really meaningful ways.
I think by one count there are twenty different nations whose militaries are working on AI weapons. It’s hard to get a firm grip on it because: A, they wouldn’t necessarily say so, and, B, there’s not a lot of agreement on what the term AI means. In terms of machines that can make kill decisions, that’s probably a reasonable guess.
I think one shift that we’ve seen, and, you know, this is just anecdotal and my own opinion, is that traditionally so much of basic research in computer science was done in academia, done basically publicly, published, and for the public good. If you look at artificial intelligence, the greatest minds of our generation are not necessarily working in the public sphere on it; they’re locked up, tied up in private companies, generally very, very large companies, or they’re working in the military-industrial complex. I think that’s a shift, and it’s different from scientific discovery and medical research in the past.
There is the closed-door nature of this R&D effort, and the fact that it’s becoming almost a national or nationalistic concern, with very little… You know, there are weapons treaties, there are nuclear treaties, there are research weapons treaties, right? I think we’re only just beginning to talk about AI treaties and AI understanding, and we’re a long way from any resolution, because the potential gains for whoever goes first, or makes the biggest discovery or the greatest breakthrough first, are tremendous. It’s a very competitive world, and it’s going on behind closed doors.
The thing about the atomic bomb is that it was hard to build; even if you knew how to build it, it was hard. AI won’t be that way. It’ll fit on a flash drive, or at least the core technology will, right?
I think building an AGI, some of these things, requires web-scale computational power, and based on today’s technology that means data centers, not flash drives. So there is a barrier to entry to some of these things. That said, the great breakthrough will more than likely be an algorithm or some great thinking, and that will, yes, indeed, fit on a modern flash drive without any problem.
What do you think of the OpenAI initiative, which says, “Let’s make this all public and share it all. It’s going to happen; we might as well make sure everybody has access to it and not just one party”?
I work at a SaaS company; we build products to sell, and through open-source technologies and cloud platforms we get to stand on the shoulders of giants, use amazing stuff, shorten our development cycles, and do things that we would never be able to do as a small company founded in Copenhagen. I’m a huge believer in those initiatives. I think part of the reason that open source has been so successful on the problems of computer science and computer infrastructure is that, to a certain extent, there’s been a maturation of thought, where not every company believes its ability to store and retrieve its data quickly is a defining characteristic. You know, I work at Zendesk and we’re in the business of customer service software; we build software that tries to help our customers have better relationships with their customers. It’s not clear that having the best cloud hosting engine or being able to use NoSQL technology is something of tremendous commercial value to us.
We believe in open source, so we contribute back, and we contribute because there’s no perceived risk of commercial impairment in doing that. This isn’t our core IP; our core IP is around how we treat customers. While I’m a huge believer in the OpenAI initiative, I don’t think that belief is necessarily shared when the parties involved are at the investment levels in AI research, and at the forefront of thinking. For some of those entities, there’s a clear notion that they can gain tremendous advantage by keeping anything they invent inside the walled garden for as long as possible and using it to their advantage. I would dearly love that initiative to succeed. I don’t know that right now we have the environment in which it will truly succeed.
You’ve made a couple of references to artificial intelligence mirroring the human brain. Do you follow the Human Brain Project in Europe, which is taking that approach? They’re saying, “Why don’t we just try to replicate the thing that we know can think already?”
I don’t really. I’m delighted by the idea, but I haven’t read too much about it. What are they learning?
It’s expensive, and they’re behind schedule. But it’s been funded to the tune of one and a half billion dollars; I mean, it’s a really serious effort. The challenge is going to be if it turns out that a neuron is as complicated as a supercomputer, that things go on at the Planck level, that it is this incredible machine. Because I think the hope is that, if you take it at face value, it is something maybe we can duplicate, but if there’s other stuff going on it might be more problematic.
As an AI researcher yourself, do you ever start with the question, “How do humans do that?” Is that how you do it when you’re thinking about how to solve a problem? Or do you not find a lot of parallels, in your day-to-day, between how a human does something and how a computer would do it?
When we’re thinking about solving problems with AI, we’re at the basic level of directed AI technology, and what we’re thinking about is, “How can we remove these tasks that humans perform on a regular basis? How can we enrich the lives of, in our case, the person needing customer service or the person providing customer service?” It’s relatively simple, and so the standard approach is, yes, to look directly at the activities of a person, look at ways that you can automate, and take advantage of the benefits that the AI is going to buy you. In customer service land, you can very easily remember every interaction that every customer has had with a particular brand, and then you can look at the outcomes those interactions have had, good or bad, through the satisfaction, the success, and the timing. And you can start to emulate those things, remove friction, replace the need for people altogether, and build out really interesting things.
The primary way to approach the problem is really to look at what humans are doing, and try to replace them where it’s not their cognitive ability that is necessarily to the fore, and that’s something that we do a lot. But I think that misses the magic, because one of the things that happens with an AI system can be that it produces results that are, to use Arthur C. Clarke’s phrase, “sufficiently advanced to be indistinguishable from magic.” You can invent new things that were not possible before because of the human brain’s limited bandwidth, because of our limited memories and other things. You can basically remember all experiences at once and then use them to create new things.
In our own work, we realized that it’s incredibly difficult, given an input from a customer, a question from a customer, to predict with any accuracy the ultimate customer satisfaction score, the CSAT score, that you’ll get. But it’s an incredibly important number for customer service departments, and knowing ahead of time that you’re going to have a bad experience with a customer, based on signals in the input, is incredibly useful. So, one of the things we built was a satisfaction-prediction engine, using various models, that allows us to route tickets to experts and do other things. There’s no human who sits there and gives out predictions on how a ticket is going to go, how our experience with the customer is going to go; that’s something that we invented because only a machine can do it.
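To make the idea concrete, here is a minimal sketch of what a satisfaction-prediction model of this general kind might look like, assuming a simple text classifier trained on historical tickets and their eventual ratings. The sample data, threshold, and queue names are illustrative, not Zendesk’s actual implementation.

```python
# Illustrative sketch of CSAT prediction: train a text classifier on historical
# tickets labeled with their eventual satisfaction rating, then use the predicted
# risk of a bad rating to route new tickets to an expert queue.
# The data, threshold, and routing logic below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

historical_tickets = [
    "My order never arrived and nobody answers my emails",
    "Quick question about changing my billing address",
    "This is the third time I am reporting the same bug",
    "Thanks, the last agent fixed everything perfectly",
]
# 1 = customer later rated the interaction "bad", 0 = rated it "good"
bad_rating = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(historical_tickets, bad_rating)

def route(ticket_text, risk_threshold=0.5):
    """Send tickets likely to end badly to an expert queue."""
    p_bad = model.predict_proba([ticket_text])[0][1]
    return "expert_queue" if p_bad >= risk_threshold else "standard_queue"

print(route("I have asked four times and still have no refund"))
```

In practice the interesting work is in the features and the volume of rated history, but the shape of the system, predict the outcome early, then act on the prediction, is as simple as this.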
So, yes, there is an approach to what we do which is, “How can we automate these human tasks?” But there’s also an approach of, “What is it that we can do that is impossible for humans that would be awesome to do?” Is there magic here that we can put in place?
In addition to there being a lot of concern about the things we talked about, about war and about AGI and all of that, in the narrow AI, in the here and now, of course, there’s a big debate about automation and what these technologies are going to do to jobs. Just to set the question up, there are three different narratives people offer. One is that automation is going to take all of the really low-skilled jobs, and there’ll be a group of people who are unable to compete against machines, and we’ll have, kind of, permanent unemployment at the level of the Great Depression or something like that. Then there’s a second camp that says, “Oh, no, no, you don’t understand, it’s far worse than that. They’re going to take everybody’s job, everybody, because there’ll be a moment when the machine can learn something faster than a human.” Then there’s a third one that says, “No, with these technologies, people just take the technology and use it to increase their own productivity, and they don’t actually ever cause unemployment.” Electricity and mechanization and all of that didn’t increase unemployment at all. Do you believe one of those three, or maybe a fourth one? What do you think about the effects of AI on employment?
I think the parallel that’s often drawn is a parallel to the Industrial Revolution. The Industrial Revolution brought us a way to transform energy from one form into another, and allowed us to mechanize manufacturing, which altered the nature of society from agrarian to industrial, which created cities, which was a big transformation. But the Industrial Revolution took a long time. It took a long time for people to move from the farms to the factories; it took a long time to transform the landscape, comparatively. I think one of the reasons there’s trepidation and nervousness around artificial intelligence is that it doesn’t seem like it will take that long. It’s almost fantastical science fiction to me that I get to see different vendors’ self-driving cars mapping San Francisco on a regular basis, and I see people driving around with no hands on the wheel. I mean, that’s extraordinary. I don’t think even five years ago I would have believed that we would have self-driving cars on public roads; it didn’t seem like a thing, and now it seems like automated driving machines are not very far away.
If you think about the societal impacts of that, well, according to an NPR study in 2014, I think, truck driving is the number one job in twenty-nine states in America. There are literally millions of driving jobs, and I think it’s one of the fastest growing categories of jobs. Things like that will all disappear, or to a certain extent will disappear, and it will happen rapidly.
It’s really hard for me to subscribe to the… Yes, we’re improving customer service software here at Zendesk in such a way that we’re making agents more efficient, they’re getting to spend more time with customers, and they’re upping the CSAT rating, and consequently those businesses have better Net Promoter Scores and they’re thriving. I believe that that’s what we’re doing and that that’s what’s going to happen. But if we can automatically answer ten percent of a customer’s tickets, that means you need ten percent fewer agents to answer those tickets, unless companies are going to invest more in customer service. The profit motive says that there needs to be a return-on-investment analysis between those two things. So, in my own industry I see this, and across society it’s hard not to believe that there will be a fairly large-scale disruption.
I don’t know that, as a society, we’re necessarily in a position to absorb that disruption yet. I know that in Finland they’re experimenting with a guaranteed minimum income, to take away the stress of having to find work or qualify for unemployment benefits, so that people have a better quality of life and can hopefully find ways to be productive in society. Not many countries are as progressive as Finland. I would put myself in the “very nervous about the societal effects of large-scale removal of sources of employment” camp, because it’s not clear what alternative structures are set up in society to find meaningful work and sustenance for the people losing those jobs. We’ve been on a trajectory since, I think, the 1970s, of polarization in society and growing inequality, and I worry that the large-scale creation of an unemployed mass could be a tipping point. I take a very pessimistic view.
Let me give you a different narrative on that, and tell me what’s wrong with it, how the logic falls down. Let’s talk just about truck drivers. It would go like this: “That concern that you’re going to have all these unemployed truck drivers en masse is beyond ill-founded. To begin with, the technology’s not done, and it will still need to be worked out. Then the legislative hurdles will have to be worked out, and that’ll be done gradually, state by state. Then there’ll be a long period of time when law will require that there be a driver, and self-driving technology will kick in when it feels like the driver’s making a mistake, but there’ll be an override; just like we can fly airplanes without pilots now but we insist on having a pilot.
Then, the driving part of the job is actually not the whole job, and so, like any other job, when you automate part of it, like the driving, that person takes on more things. Then, on top of that, the equipment’s not retrofitted for it, so you’re going to have to figure out how to retrofit all this stuff. Then, on top of that, having self-driving cars is going to open up all kinds of new employment, and because we talk about this all the time, there are probably fewer people going into truck driving, and there are people who retire from it every year. And, just like everything else, it’s going to work itself out gradually as the economy reallocates resources.” Why do you think truck driving is this big tipping-point thing?
I think driving jobs in general are a tipping-point thing because, yes, there are challenges to rolling it out, and obviously there are legislative challenges, but it’s not hard to see interstate trucking going first, and then drivers meeting those trucks and driving them through urban areas, and various things like that happening. I think people are working on retrofit devices for trucks. What will happen is that truck drivers who are not actually driving will be allowed to work more hours, so you’ll need fewer truck drivers. In general, as a society, we’re shifting from going and getting our stuff to having our stuff delivered to us, so the voracious appetite for more drivers, in my opinion, is not going to abate. Yeah, the last mile isn’t driven by trucks; it’s smaller delivery drivers, or things that can be done by smarter robots, etcetera.
I think those challenges you communicated are going to be moderating forces of the disruption, but when something reaches the tipping point of acceptance and cost acceptability, change tends to be rapid if driven by the profit motive. I think that is what we’re going to see. The efficiency of Amazon, and the fact that every product is online in that marketplace is driving a tremendous change in the nature of retail. I think the delivery logistics of that need are going to go through a similar turnaround, and companies driving that are going to be very aggressive about it because the economics is so appealing.
Of course, again, the general answer to that is that when technology does lower the price of something dramatically—like you’re talking about the cost of delivery, self-driving cars would lower it—that that in turn increases demand. That lowering of cost means all of a sudden you can afford to deliver all kinds of things, and that ripple effect in turn creates those jobs. Like, people spend all their money, more or less, and if something becomes cheaper they turn around and spend that money on something else which, by definition, therefore creates downstream employment. I’m just having a hard time seeing this idea that somehow costs are going to fall and that money won’t be redeployed in other places that in turn creates employment, which is kind of two hundred and fifty years of history.
I wouldn’t necessarily say that as costs fall in industries, all of those savings are generally returned to the consumer, right? Businesses in the logistics and retail space generally run on low margins; retailers run at a two percent margin. So there’s room for those companies to optimize their own businesses and not necessarily pass all those benefits down to the consumer. Obviously, there’s room for disruption, where someone will come in, shave the margins back down, and pass on those benefits. But, in general, you know, online banking is more efficient because we prefer it, and so there are fewer people working in banking. Conversely, when banks shifted to ATMs, banking became much more a part of our lives, and more convenient, so we ended up with more bank tellers because personal service was a thing.
I think that there just are a lot of driving jobs out there that don’t necessarily need to be done by humans, but we’ll still be spending the same amount on getting driven around, so there’ll be more self-driving cars. Self-driving cars crash less, hopefully, and so there’s less need for auto repair shops. There’s a bunch of knock-on effects of using that technology, and for certain classes of jobs there’s clearly going to be a shift where those jobs disappear. There is a question of how readily the people doing those jobs are able to transfer their skills to other employment, and is there other employment out there for them.
Fair enough. Let’s talk about Zendesk for a moment. You’ve alluded to a couple of ways that you employ artificial intelligence, but can you give me an idea of what gets you excited in the morning, when you wake up and think, “I have this great new technology, artificial intelligence, that can do all these wondrous things, and I want to use it to make life better for the people in charge of customer relationships”? Entice me with some things that you’re thinking of doing, that you’re working on, that you’ve learned, and just kind of tell me about your day-to-day.
So many customer service inquiries begin with someone who has a thirst for knowledge, right? Seventy-six percent of people try to self-serve when trying to find the answer to a question, and many people who do get on the phone or go online are at the same time trying to discover the answer to that problem. I think often the challenge is having enough context to know what someone is looking for, and having that context available to all of the systems that they’re interacting with. I think technology, and artificial intelligence in particular, can help us pinpoint the intention of users, because the goal of the software that we provide, and the customer service ethos that we have, is to remove friction.
The thing that really generates bad experiences in customer service interactions isn’t that someone said no, or that we didn’t get the outcome we wanted, or that our return wasn’t processed, or something like that; negative experiences tend to be generated by an excess of friction. It’s that I had to switch from one channel to another, that I had to repeat myself over and over again because no one I was talking to had context on my account or my experience as the customer, things like that. I think if you look at that pile of problems, you see real opportunities to give people better experiences just by holding a lot more data about that context at one time, and then being able to process that data and make intelligent predictions, guesses, and estimations about what it is they’re looking for and what is going to help them.
We recently launched a service we call “Answer Bot,” which uses deep learning to look at the data we have when an email comes in and figure out, quite simply, which knowledge-base article is going to best serve that customer. It’s not driving a car down to the supermarket; it sounds very simple. But, in another way, these are millions and millions of experiences that can be optimized over time. Similarly, the people on the other side of that conversation generally don’t know what customers are searching for or asking about for which there is no answer. So, by using the same analysis of the queries we see and the knowledge bases we have, we can give them cues as to what content to write, and, sort of, direct them to build a better experience and improve their customer experience in that way.
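As a rough illustration of the retrieval idea behind a feature like this, the sketch below matches an incoming question to the most similar knowledge-base article using TF-IDF cosine similarity. The real Answer Bot uses deep learning; the article titles and similarity method here are only stand-ins.

```python
# Illustrative sketch of the "answer bot" idea: given an incoming question,
# suggest the knowledge-base article most similar to it. A production system
# would use learned embeddings; TF-IDF similarity just shows the retrieval shape.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "Resetting your password": "How to reset a forgotten password ...",
    "Tracking your order": "Where to find shipping and tracking details ...",
    "Requesting a refund": "Steps to request a refund for a purchase ...",
}

vectorizer = TfidfVectorizer()
article_vectors = vectorizer.fit_transform(articles.values())

def suggest_article(question):
    """Return the title of the closest-matching knowledge-base article."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, article_vectors)[0]
    return list(articles.keys())[scores.argmax()]

print(suggest_article("I can't remember my password, how do I log in?"))
```

The same machinery, run in reverse over questions that match nothing well, is what lets you tell content authors which articles are missing.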
I think, from an enterprise software builder’s point of view, artificial intelligence is a tool that you can use at so many points of interaction between brand and consumer, between the two parties on either side of any transaction, inside of your knowledge base. It’s something you can use to shave off little moments of pain, remove friction, apply intelligence, and just make the world seem frictionless and a little smarter. Our goal internally is basically to meander through our product in a directed way, finding those experiences and making them better. At the end of the day, we want someone who’s deploying our stuff to give a great customer experience with it, and we want the consumers experiencing that brand, the people interacting with that brand, to say, “I’m not sure why that was good, but I did really enjoy that customer service experience. I got what I wanted, it was quick. I don’t know how they quite did that, but I really enjoyed it.” We’ve all had those moments in service where someone just totally got what you were after, and it was delightful because it was smooth, efficient, good, and no drama—prescient, almost.
I think what we’re trying to do, what we would like to do, is adapt all of our software, and the experiences we have, to be that anticipatory and smart and enjoyable. I think the enterprise software world—for all types of software like CRM, ERP, all these kinds of things—is filled with sharp edges, friction, and pain, you know, pieces of acquisitions glued together, and you’re using products that represent someone’s broken dreams, acquired by someone else and shoehorned into other experiences. I think, generally, the consumer of enterprise software at this point is a little bit tired of the pain of form-filling and repetition and other things. Our approach to smoothing those edges, to grinding the stone and polishing the mirror, is to slowly but surely improve each of those experiences with intelligence.
It sounds like you have a broad charter to look at all levels of the customer interaction and look for opportunity. I’m going to ask you a question that probably doesn’t have an answer, but I’m going to try anyway: “Do you prefer to find places where there was an epic fail, where it was so bad it was just terrible and the person was angry and it was just awful, or would you rather fix ten instances of a minor annoyance, where somebody had to enter data too many times?” I mean, are you working to cut the edges off the bad experiences, or just generally make the system phase-shift up a little bit?
I think, to a certain extent, that’s a false dichotomy, because with the person who has a terrible experience and gets angry, chances are there wasn’t a momentary snap; there was a drip feed of annoyances that took them to that point. So, our goal, when we think about it, is to pick out the most impactful rough edges that cumulatively are going to drive someone into the red mist of homicidal fury on the end of the phone, complaining about their broken widget. I think most people do not flip their anger bit over a tiny infraction, or even over a larger infraction; it’s over a period, a lifetime of infractions, a lifetime of inconveniences, that gets you to that point, or the lifetime of that incident and that inquiry and how you got there. We’re generally, sort of, emotionally rational beings who’ve been through many customer service experiences, so exhibiting that level of frustration generally requires a continued and sustained effort on the part of a brand to get us there.
I assume that you have good data to work off of. I mean, there are good metrics in your field and so you get to wade through a lot of data and say, “Wow, here’s a pattern of annoyances that we can fix.” Is that the case?
Yeah, we have an anonymized data set that encompasses billions of interactions. And the beauty of that data set is that the interactions are rated, right? They’re rated either by the time it took to solve the problem, or by an explicit rating, where someone said that was a good interaction or that was a bad interaction. When we did the CSAT prediction, we were really leveraging the millions of scores we have that tell us how customer service interactions went. In general, though, that’s the data asset we have available to us, that we can use to train, learn, query, and analyze.
Last question: you quoted Arthur C. Clarke, so I have to ask you, is there any science fiction about AI that you enjoy or like or think could happen? Like Her or Westworld or I, Robot, or any of that, even books or whatnot?
I did find Westworld to be probably the most compelling thing I watched this year, and just truly delightful in its thinking about memory and everything else, although it was, obviously, pure fiction. I think Her was also a, you know, disturbing look at the way that we will be able to identify with inanimate machines and build relationships with them; it was all too believable. I think you quoted two of my favorite things, but Westworld was so awesome.
It, interestingly, had a different theory of consciousness from the bicameral mind, not to give anything away.
Well, let’s stop there. This was a magnificently interesting hour, I think we touched on so many fascinating topics, and I appreciate you taking the time!
Adrian McDermott: Thank you, Byron, it was wonderful to chat, too!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 21: A Conversation with Nikola Danaylov

[voices_in_ai_byline]
In this episode, Byron and Nikola talk about singularity, consciousness, transhumanism, AGI and more.
[podcast_player name=”Episode 21: A Conversation with Nikola Danaylov” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(01-05-27)-nikola-danaylov.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-3.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Nikola Danaylov. Nikola started the Singularity Weblog, and hosts the wildly popular singularity.fm podcast. He has been called the “Larry King of the singularity.” He writes under the name Socrates, or, to the Bill & Ted fans out there, “So-crates.” Welcome to the show, Nikola.
Nikola Danaylov: Thanks for having me, Byron, it’s my pleasure.
So let’s begin with, what is the singularity?
Well, there are probably as many definitions and flavors as there are people or experts in the field out there. But for me, personally, the singularity is the moment when machines first catch up and eventually surpass humans in terms of intelligence.
What does that mean exactly, “surpass humans in intelligence”?
Well, what happens to you when your toothbrush is smarter than you?
Well, right now it’s much smarter than me on how long I should brush my teeth.
Yes, and that’s true for most of us—how long you should brush, how much pressure you should exert, and things like that.
It gives very bad relationship advice, though, so I guess you can’t say it’s smarter than me yet, right?
Right, not about relationships, anyway. But about the duration of brush time, it is. And that’s the whole idea of the singularity, that, basically, we’re going to expand the intelligence of most things around us.
So now we have watches, but they’re becoming smart watches. We have cars, but they’re becoming smart cars. And we have smart thermostats, and smart appliances, and smart buildings, and smart everything. And that means that the intelligence of the previously dumb things is going to continue expanding, while unfortunately our own personal intelligence, or our intelligence as a species, is not.
In what sense is it a “singularity”?
Let me talk about the roots of the word. The origin of the word singularity comes from mathematics, where it basically is a problem with an undefined answer, like five divided by zero, for example. Or in physics, where it signifies a black hole. That’s to say a place where there is a rupture in the fabric of time-space, and the laws of the universe don’t hold true as we know them.
In the technological sense, we’re borrowing the term to signify the moment when humanity stops being the smartest species on our planet, and machines surpass us. And therefore, beyond that moment, we’re going to be looking into a black hole of our future, because our current models fail to provide sufficient predictions as to what happens next.
So everything that we have already is kind of going to have to change, and we don’t know which way things are going to go, which is why we’re calling it a black hole. Because you cannot see beyond the event horizon of a black hole.
Well if you can’t see beyond it, give us some flavor of what you think is going to happen on this side of the singularity. What are we going to see gradually, or rapidly, happen in the world before it happens?
One thing is the “smartification” of everything around us. So right now, we’re still living in a pretty dumb universe. But as things come to have more and more intelligence, including our toothbrushes, our cars—everything around us—our fridges, our TVs, our computers, our tables, everything. Then that’s one thing that’s going to keep happening, until we have the last stage where, according to Ray Kurzweil, quote, “the universe wakes up,” and everything becomes smart, and we end up with different things like smart dust.
Another thing will be the merger between man and machine. So, if you look at the younger generation, for example, they’re already inseparable from their smartphones. It used to be the case that a computer was the size of a building—and by the way, those computers were even weaker in terms of processing power than our smartphones are today. Even the Apollo program used a much less powerful machine to send astronauts to the moon than what we have today in our pockets.
However, that change is not going to stop there. The next step is that those machines are going to actually move inside of our bodies. So they used to be inside of buildings, then they went on our body, in our pockets, and are now becoming what’s called “wearable technology.” But tomorrow it will not be wearable anymore, because it will be embedded.
It will be embedded inside of our gut, for example, to monitor our microbiome and to monitor how our health is progressing; it will be embedded into our brains even. Basically, there may be a point where it becomes inseparable from us. That in turn will change the very meaning of the definition of being human. Not only at the sort of collective level as a species, but also at the personal level, because we are possibly, or very likely, going to have a much bigger diversification of the understanding of what it means to be a human than we have right now.
So when you talk about computers becoming smarter than us, you’re talking about an AGI, artificial general intelligence, right?
Not necessarily. The toothbrush example is artificial narrow intelligence, but as it gets to be smarter and smarter there may be a point where it becomes artificial general intelligence, which is unlikely, but it’s not impossible. And the distinction between the two is that artificial general intelligence is equal or better than human intelligence at everything, not only that one thing.
For example, a calculator today is better than us in calculations. You can have other examples, like, let’s say a smart car may be better than us at driving, but it’s not better than us at Jeopardy, or speaking, or relationship advice, as you pointed out.
We would reach artificial general intelligence at the moment when a single machine will be able to be better at everything than us.
And why do you say that an AGI is unlikely?
Oh no, I was saying that an AGI may be unlikely in a toothbrush format, because the toothbrush requires only so many particular skills or capabilities, only so many kinds of knowledge.
So we would require the AGI for the singularity to occur, is that correct?
Yeah, well that’s a good question, and there’s a debate about it. But basically the idea is that anything you can think of which humans do today, that machine would be equal or better at it. So, it could be Jeopardy, it could be playing Go. It could be playing cards. It could be playing chess. It could be driving a car. It could be giving relationship advice. It could be diagnosing a medical disease. It could be doing accounting for your company. It could be shooting a video. It could be writing a paper. It could be playing music or composing music. It could be painting an impressionistic or other kind of piece of art. It could be taking pictures equal or better than Henri Cartier-Bresson, etc. Everything that we’re proud of, it would be equal or better at.
And when do you believe we will see an AGI, and when would we see the singularity?
That’s a good question. I fluctuate a little bit on that, depending on whether we have some kind of global-scale disaster: it could be nuclear war, for example—right now the situation is getting pretty tense with North Korea—or some kind of extreme climate-related event, or a catastrophe caused by an asteroid impact. Short of any of those huge things that can basically change the face of the Earth, I would say probably 2045 to 2050 would be a good estimate.
So, for an AGI or for the singularity? Or are you, kind of, putting them both in the same bucket?
For the singularity. Now, we can probably reach human-level intelligence by the late 2020s.
So you think we’ll have an AGI in twelve years?
Probably, yeah. But you know, the timeline, to me, is not particularly crucial. I’m a philosopher, so the timeline is interesting, but the more important issues are always the philosophical ones, and they’re generally related to the question of, “So what?” Right? What are the implications? What happens next?
It doesn’t matter so much whether it’s twelve years or sixteen years or twenty years. I mean, it can matter in the sense that it can help us be more prepared, rather than not, so that’s good. But the question is, so what? What happens next? That’s the important issue.
For example, let me give you another crucial technology that we’re working on, which is life extension technology, trying to make humanity “amortal.” Which is to say we’re not going to be immortal—we can still die if we get run over by a truck or something like that—but we would not be likely to die from the general causes of death that we see today, which are usually old-age related.
As an individual, I’m hoping that I will be there when we develop that technology. I’m not sure I will still be alive when we have it, but as a philosopher what’s more important to me is, “So what? What happens next?” So yeah, I’m hoping I’ll be there, but even if I’m not there it is still a valid and important question to start considering and investigating right now—before we are at that point—so that we are as intellectually and otherwise prepared for events like this as possible.
I think the best guesses are that we would live to about 6,750. That’s how long it would take for some, you know, Wile E. Coyote, piano-falling-out-the-top-floor-of-a-building-and-landing-on-you kind of thing to happen to you, actuarially speaking.
So let’s jump into philosophy. You’re, of course, familiar with Searle’s Chinese Room question. Let me set that up for the listeners, and then I’ll ask you to comment on it.
So it goes like this: There’s a man, we’ll call him the librarian. And he’s in this giant room that’s full of all of these very special books. And the important part, the man does not speak any Chinese, absolutely no Chinese. But people slide him questions under the door that are written in Chinese.
He takes their question and he finds the book which has the first symbol on the spine, and he finds that book and he pulls it down and he looks up the second symbol. And when he finds the second symbol and it says go to book 24,601, and so he goes to book 24,601 and looks up the third symbol and the fourth and the fifth—all the way to the end.
And when he gets to the end, the final book says copy this down. He copies these lines, and he doesn’t understand what they are, slides it under the door back to the Chinese speaker posing the question. The Chinese speaker picks it up and reads it and it’s just brilliant. I mean, it’s absolutely over-the-top. You know, it’s a haiku and it rhymes and all this other stuff.
So the philosophical question is, does that man understand Chinese? Now a traditional computer answer might be “yes.” I mean, the room, after all, passes the Turing test. Somebody outside sliding questions under the door would assume that there’s a Chinese speaker on the other end, because the answers are so perfect.
But at a gut level, the idea that this person understands Chinese—when they don’t know whether they’re talking about cholera or coffee beans or what have you—seems a bit of a stretch. And of course, the punchline of the thing is, that’s all a computer can do.
All a computer can do is manipulate ones and zeros in memory. It can just go book to book and look stuff up, but it doesn’t understand anything. And with no understanding, how can you have any AGI?
So, let me ask you this? How do you know that that’s not exactly what’s happening right now in my head? How do you know that me speaking English to you right now is not the exact process you described?
I don’t know, but the point of the setup is: If you are just that, then you don’t actually understand what we’re actually talking about. You’re just cleverly answering things, you know, it is all deterministic, but there’s, quote, “nobody home.” So, if that is the case, it doesn’t invalidate any of your answers, but it certainly limits what you’re able to do.
Well, you see, that’s a question that relates very much with consciousness. It relates to consciousness, and, “Are you aware of what you’re doing,” and things like that. And what is consciousness in the first place?
Let’s divide that up. Strictly speaking, consciousness is subjective experience. “I had an experience of doing X,” which is a completely different thing than “I have an intellectual understanding of X.” So, just the AGI part, the simple part of: does the man in the room understand what’s going on, or not?
Let’s be careful here. Because, what do you mean by “understand”? Because you can say that I’m playing chess against a computer. Do I understand the playing of chess better than a computer? I mean what do you mean by understand? Is it not understanding that the computer can play equal or better chess than me?
The computer does not understand chess in the meaningful sense that we have to get at. You know, one of the things we humans do very well is we generalize from experience, and we do that because we find things are similar to other things. We understand that, “Aha, this is similar to that,” and so forth. A computer doesn’t really understand how to play chess. It’s arguable that the computer is even playing chess, but putting that word aside, the computer does not understand it.
The computer, that program, is never going to figure out baccarat any more than it can figure out how many coffee beans Colombia should export next year. It just doesn’t have any awareness at all. It’s like a clock. You wind a clock, and tick-tock, tick-tock, it tells you the time. We progressively add additional gears to the clockwork again and again. And the thesis of what you seem to be saying is that, eventually, you add enough gears so that when you wind this thing up, it’s smarter than us and it can do absolutely anything we can do. I find that to be, at least, an unproven assumption, let alone perhaps a fantastic one.
I agree with you on the part that it’s unproven. And I agree with you that it may or may not be an issue. But it depends on what you’re going for here, and it depends on the computer you’re referring to, because we have AlphaGo, the new software that was invented to play Go. And it actually learned to play based on previous games—that’s to say, on the previous experience of other players. And then that same approach, of learning from the past and coming up with new creative solutions for the future, was implemented in a bunch of other fields, including bioengineering, including medicine, and so on.
So when you say the computer will never be able to calculate how many beans that country needs for next season, actually it can. That’s why it’s getting more and more generalized intelligence.
Well, let me ask that question a slightly different way. So I have, hypothetically, a cat food dish that measures out cat food for my cat. And it learns, based on the weight of the food in it, the right amount to put out. If the cat eats a lot, it puts more out. If the cat eats less, it puts less out. That is a learning algorithm, that is an artificial intelligence. It’s a learning one, and it’s really no different than AlphaGo, right? So what do you think happens from the cat dish—
—I would take issue with you saying it’s really no different from AlphaGo.
Hold on, let me finish the question; I’m eager to hear what you have to say. What happens, between the cat food AI and AlphaGo and an AGI? At what point does something different happen? Where does that break, and it’s not just a series of similar technologies?
So, let me answer your question this way… When a baby is born, it’s totally dumb, blind, and deaf. It lacks complete self-awareness; it’s unable to differentiate between itself and its environment, and it lacks that self-awareness for arguably the first year-and-a-half to two years. There are a number of psychological tests that can be administered as the child develops. Usually girls, by the way, do about three to six months better; they develop personal awareness faster and earlier than boys, on average. But let’s say the average age is about a year-and-a-half to two years, and that’s a very crude estimation, by the way. The development of AI would not be exactly the same, but there will be parallels.
The question you’re raising is a very good question. I don’t have a good answer because, you know, that can only happen with direct observational data—which we don’t have right now to answer your question, right? So, let’s say tomorrow we develop artificial general intelligence. How would we know that? How can we test for that, right? We don’t know.
We’re not even sure how we can evaluate that, right? Because just as you suggested, it could be just a dumb algorithm, processing just like your algorithm is processing how much cat food to provide to your cat. It can lack complete self-awareness, while claiming that it has self-awareness. So, how do we check for that? The answer is, it’s very hard. Right now, we can’t. You don’t know that I even have self-awareness, right?
But, again, those are two different things, right? Self-awareness is one thing, but an AGI is easy to test for, right? You give a program a list of tasks that a human can do. You say, “Here’s what I want you to do. I want you to figure out the best way to make espresso. I want you to find the Waffle House…” I mean, it’s a series of tasks. There’s nothing subjective about it, it’s completely objective.
Yes.
So what has happened between the cat food example, to the AlphaGo, to the AGI—along that spectrum, what changed? Was there some emergent property? Was there something that happened? Because you said the AlphaGo is different than my cat food dish, but in a philosophical sense, how?
It’s different in the sense that it can learn. That’s the key difference.
So does my cat food thing, it gives the cat more food some days, and if the cat’s eating less, it cuts the cat food back.
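For reference, the kind of “learning” Byron describes here can be as small as a feedback rule like the following toy sketch; the portion sizes and thresholds are arbitrary, and the point is only how trivial such a loop can be.

```python
# Toy sketch of the cat-food dish Byron describes: adjust tomorrow's portion
# based on how much of today's portion was actually eaten. The numbers are
# arbitrary; this is a trivial feedback rule, not anything like AlphaGo.
def next_portion(current_portion_g, amount_eaten_g, step_g=5):
    if amount_eaten_g >= current_portion_g:       # bowl emptied: offer more
        return current_portion_g + step_g
    if amount_eaten_g < 0.8 * current_portion_g:  # mostly uneaten: offer less
        return max(step_g, current_portion_g - step_g)
    return current_portion_g                      # otherwise keep it the same

portion = 60
for eaten in [60, 65, 40, 30, 55]:                # grams eaten each day
    portion = next_portion(portion, eaten)
    print(portion)
```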
Right, but you’re talking just about cat food, but that’s what children do, too. Children know nothing when they come into this world, and slowly they start learning more and more. They start reacting better, and start improving, and eventually start self-identifying, and eventually they become conscious. Eventually they develop awareness of the things not only within themselves, but around themselves, etc. And that’s my point, is that it is a similar process; I don’t have the exact mechanism to break down to you.
I see. So, let me ask you a different question. Nobody knows how the brain works, right? We don’t even know how thoughts are encoded. We just use this ubiquitous term, “brain activity,” but we don’t know how… You know, when I ask you, “What was the color of your first bicycle?” and you can answer that immediately, even though you’ve probably never thought about it, nor do you have some part of your brain where you store first bicycles or something like that.
So, assuming we don’t know that, and therefore we don’t really know how it is that we happen to be intelligent. By what basis do you say, “Oh, we’re going to build a machine that can do something that we don’t even know how we do,” and even put a timeline on it, to say, “And it’s going to happen in twelve years”?
So there are a number of ways to answer your question. One is, we don’t necessarily need to know. We don’t know how we create intelligence when we have babies either, but we do it. How did it happen? It happened through evolution; so, likewise, we have what are called “evolutionary algorithms,” which are basically algorithms that learn to learn. And the key point, as Dr. Stephen Wolfram showed years ago in his seminal work Mathematica, is that from very simple things, very complex patterns can emerge. Look at our universe; it emerged from tiny, very simple things.
Actually, I’m interviewing Lawrence Krauss next week; he says it emerged from nothing. So from nothing, you have the universe, which has everything, according to him at least. And we don’t know how we create intelligence in the baby’s case, we just do it. Just like you don’t know how you grow your nails, or how you grow your hair, but you do it. So, likewise, one of the many different paths we can take to get to that level of intelligence is through evolutionary algorithms.
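For readers unfamiliar with the term, an evolutionary algorithm in its simplest form looks something like the toy sketch below: random candidates are scored against an objective, the fittest survive, and mutated copies form the next generation. The target-string objective is only an illustrative stand-in for a real problem.

```python
# Minimal illustration of an evolutionary algorithm: random candidates are
# scored, the fittest survive, and mutated copies form the next generation.
# The target-string fitness function is a toy stand-in for a real objective.
import random
import string

TARGET = "intelligence"
ALPHABET = string.ascii_lowercase

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly change each character with probability `rate`.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in candidate
    )

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:10]                                    # selection
    population = [mutate(random.choice(survivors)) for _ in range(50)]  # reproduction

best = max(population, key=fitness)
print(generation, best)
```

Nothing in the loop encodes the answer; the answer emerges from scoring, selection, and random variation, which is the point the speaker is making.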
By the way, this is what’s sometimes referred to as the black box problem, and AlphaGo is a bit of an example of that. There are certain things we know, and there are certain things we don’t know that are happening. Just like when I interviewed David Ferrucci, who was the team leader behind Watson, we were talking about, “How does Watson get this answer right and that answer wrong?” His answer is, “I don’t really know, exactly.” Because there are so many complicated things coming together to produce an answer, that after a certain level of complexity, it becomes very tricky to follow the causal chain of events.
So yes, it is possible to develop intelligence, and the best example for that is us. Unless you believe in that sort of first-mover, God-is-the-creator kind of thing, that somebody created us—you can say that we kind of came out of nothing. We evolved to have both consciousness and intelligence.
So likewise, why not have the same process, only on a different stratum? So, right now we're biologically based; basically it's DNA code replicating itself. We have A, C, T, and G. Alternatively, is it inconceivable that we can have this with a binary code? Or even if not binary, some other kind of mathematical code, so you can have intelligence evolve—be it silicon-based, be it photon-based, or even organic processor-based, be it quantum computer-based… what have you. Right?
So are you saying that there could be no other stratum, and no other way, that could ever hold intelligence other than us? Then my question to you would be: what's the evidence for that claim? Because I would say that we have evidence that it has happened once. We can therefore presume that it is not necessarily limited to happening only once. We're not that special, you know. It could possibly happen again, and more than once.
Right, I mean it's certainly a tenable hypothesis. The Singularitarians, for the most part, don't treat it as a hypothesis; they treat it as a matter of faith.
That’s why I’m not such a good Singularitarian.
They say, “We have consciousness and a general intelligence. Therefore, we must be able to build one.” You don't generally apply that logic to anything else in life, right? There is a solar system, therefore we must be able to build one. There is a third dimension, therefore we must be able to build one.
You do that with almost nothing else in life, and yet to the people who talk about the singularity—and are willing to put a date on it, by the way—there's nothing up for debate. Even though how we achieved all the things that are required for it is completely unknown.
Let me give you Daniel Dennett's take on things, for example. He says that consciousness doesn't exist—that it is self-delusion. He actually makes a very, very good argument for it. I've been trying to get him on my podcast for a while. But he says it's total self-fabrication, self-delusion. It doesn't exist. It's beside the point, right?
But he doesn’t deny that we’re intelligent though. He just says that what we call “consciousness” is just brain activity. But he doesn’t say, “Oh, we don’t really have a general intelligence, either.” Obviously, we’re intelligent.
Exactly. But that’s kind of what you’re trying to imply with the machines, because they will be intelligent in the sense that they will be able to problem-solve anything that we’re able to problem-solve, as we pointed out—whether it’s chess, whether it’s cat food, whether it’s playing or composing the tenth symphony. That’s the point.
Okay, well that’s at least unquestionably the theory.
Sure.
So let’s go from there. Talk to me about Transhumanism. You write a lot about that. What do you think we’ll be able to do? And if you’re willing to say, when do you think we’ll be able to do it? And, I mean, a man with a pacemaker is a Transhuman, right? He can’t live without it.
I would say all of us are already cyborgs, depending on your definition. If you say that the cyborg is an organism consisting of, let’s say, organic and inorganic parts working together in a single unit, then I would answer that if you have been vaccinated, you’re already a cyborg.
If you're wearing glasses, or contact lenses, you're already a cyborg. If you're wearing clothes and you can't survive without them, or shoes, you're already a cyborg, right? Because, take me: I am severely short-sighted. I'm at -7.25 or something crazy like that. I'm almost blind without my contacts. Almost nobody knows that, unless they listen to these interviews, because I wear contacts, and for all intents and purposes I am as eye-capable as anybody else. But take off my contacts and I'll be blind. Therefore you have one single unit between me and that inorganic material, which I basically cannot survive without.
I mean, two hundred years ago, or five hundred years ago, I’d probably be dead by now, because I wouldn’t be able to get food. I wouldn’t be able to survive in the world with that kind of severe shortsightedness.
The same with vaccinations, by the way. We know that the vast majority of the population, at least in the developed world, has had at least one, and in most cases a number of different, vaccines by the time they're two years old. Viruses, basically, are the carriers for the vaccines. And viruses straddle that line, that gray area between living and nonliving things—the hard-to-classify things. They become a part of you, basically. You carry those vaccine antibodies, in most cases, for the rest of your life. So I could say that, according to that definition, we are all cyborgs already.
That's splitting hairs in a very real sense, though. It seems from your writing that you think we're going to be doing much more radical things than that; things which, as you said earlier, call into question whether or not we're even human anymore. What are those things, and why do they affect our definition of “human”?
Let me give you another example. I don't know if you've seen in the news, or if your audience has seen in the news, a couple of months ago the Chinese tried to modify human embryos with CRISPR gene-editing technology. So think about where we are right now… It's been almost 40 years since we had the first in vitro babies. At the time, what in vitro meant was basically that you do the fertilization outside of the womb, in a petri dish or something like that. Then you watch the division process begin, and then you select, by visual inspection, what looks to be the best fertilized egg. And that's the egg that you would implant.
Today, we don't just observe; we can actually preselect. And not only that, we can actually go in and start changing things. So it's just like when you're first born: you start learning the alphabet, then you start reading full words; then you start reading full sentences; and then you start writing yourself.
We're currently doing exactly that with genetics. We started just identifying the letters of the alphabet thirty, forty, fifty years ago. Then we started reading slowly; we read the human genome about fifteen years ago. And now we're slowly starting to learn to write. And so the implication of that is this: how does the meaning of what it means to be human change, when you can change your sex, color, race, age, and physical attributes?
Because that’s the bottom line. When we can go and make changes at the DNA level of an organism, you can change all those parameters. It’s just like programming. In computer science it’s 0 and 1. In genetics it’s ATCG, four letters, but it’s the same principle. In one case, you’re programming a software program for a computer; in the other case, you’re programming living organisms.
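[Editor's note: to make the four-letter analogy concrete—because there are exactly four bases, each one fits in two binary digits, so a DNA string and a bit string are interchangeable representations. A toy Python sketch; the particular base-to-bit mapping is an arbitrary choice.]

```python
# Each of the four bases fits in two bits, so genetic "code" and binary code
# are interchangeable representations; the mapping below is arbitrary.
BASE_TO_BITS = {"A": "00", "C": "01", "T": "10", "G": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def dna_to_bits(sequence: str) -> str:
    return "".join(BASE_TO_BITS[base] for base in sequence)

def bits_to_dna(bits: str) -> str:
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(dna_to_bits("GATTACA"))          # -> 11001010000100
print(bits_to_dna("11001010000100"))   # -> GATTACA
```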
But in that example, though, everybody—no matter what race you are—you’re still a human; no matter what gender you are, you’re still a human.
It depends how you qualify “human,” right? Let’s be more specific. So right now, when you say “humans,” what you mean actually is Homo sapiens, right? But Homo sapiens has a number of very specific physical attributes. When you start changing the DNA structure, you can actually change those attributes to the point where the result doesn’t carry those physical attributes anymore. So are you then Homo sapiens anymore?
From a biological point of view, the answer will most likely depend on how far you've gone. There's no sharp breakpoint, though, and different people will have a different red line to cross. For some, it's just a little bit. So let's say you and your wife or partner want to have a baby, and both of you happen to be carriers of a certain kind of genetic disease that you want to avoid. You want to make sure, before you conceive that baby, that the fertilized egg doesn't carry that genetic material.
And if that's all you care about, that's fine. But someone else will say: that's your red line, whereas my red line is that I want to give that baby the good looks of Brad Pitt, the brain of Stephen Hawking, and the strength of a weightlifter, for example. Each person who is making that choice would go for different things, and would have different attributes that they would choose to accept or not to accept.
Therefore, you would start having that diversification that I talked about in the beginning. And that's even before you start bringing in things like neural cognitive implants, etc.—which would basically be the merger of man and machine, right? Which means you can have two parallel developments: on the one hand, our biological evolution and development, accelerated through genetics and biotech; and on the other hand, the merger of that with the acceleration, evolution, and improvement of computer technology and neurotech. When you put those two things together, you end up with a final entity which is nothing like what we are today, and it definitely would not fit the definition of being human.
Do you worry, at some level, that it’s taken us five thousand years of human civilization to come up with this idea that there are things called human rights? That there are these things you don’t do to a person no matter what. That you’re born with them, and because you are human, you have these rights.
Do you worry that, for better or worse, what you're talking about will erode that? That we will lose this sense of human rights, because we lose some demarcation of what a human is?
That's a very complicated question. I would suggest people read Yuval Harari's book Homo Deus on that topic, and the previous one, called Sapiens. Those two are probably the best two books that I've read in the last ten years. But basically, the idea of human rights is an idea that was born just a couple hundred years ago. It came to exist with humanism, and especially liberal humanism. Right now, if you see how it's playing out, humanism is kind of taking over what religion used to do, in the sense that religion used to put God at the center of everything—and then, since we were his creation, everything else was created for us, to serve us.
For example the animal world, etc., and we used to have the Ptolemaic idea of the universe, where the earth was the center, and all of those things. Now, what humanism is doing is putting the human in the center of the universe, and saying humanity has this primacy above everything else, just because of our very nature. Just because you are human, you have human rights.
I would say that’s an interesting story, but if we care about that story we need to push it even further.
In our present context, how is that working out for everyone else other than humanity? Well, the moment we created humanism and invented human rights, we basically made humanity divine. We took the divinity from God and gave it to humanity, but we downgraded everybody else. Take animals: back in the day—let's say in hunter-gatherer societies—we considered ourselves to be equal and on par with the animals.
Because, you see, one day I would kill you and eat you; the next day maybe a tiger would eat me. That's how the world was. But now we've downgraded all the animals to machines—they don't have consciousness, they don't have any feelings, they lack self-awareness—and therefore we can enslave and kill them any way we wish.
So as a result, we pride ourselves on our human rights and things like that, and yet we enslave and kill seventy to seventy-five billion land animals, and 1.3 trillion sea organisms like fish, every year. So the question then is, if we care so much about rights, why should they be limited only to human rights? Are we saying that other living organisms are incapable of suffering? I'm a dog owner; I have a seventeen-and-a-half-year-old dog. She's on her last legs. She actually had a stroke last weekend.
I can tell you that she has taught me that she possesses the full spectrum of happiness and suffering that I do, pretty much. Even things like jealousy, and so on, she demonstrated to me multiple times, right? Yet, we today use that idea of humanism and human rights to defend ourselves and enslave everybody else.
I would suggest it's time to expand that. First, to our fellow animals: we need to include them and recognize that they have their own rights. Second, rights should possibly not be limited to organic organisms, and should not be called human or animal rights; they should be called intelligence rights—or go even beyond intelligence, to any kind of organism that can exhibit things like suffering and happiness, pleasure and pain.
Because obviously, there is a different level of intelligence between me and my dog—we would hope—but she's able to suffer as much as I am, and I've seen it. And that's especially true for whales and great apes and so on, which we have brought to the brink of extinction. We want to be special; that's what religion does to us. That's what humanism did with human rights.
Religion taught us that we’re special because God created us in his own image. Then humanism said there is no God, we are the God, so we took the place of God—we took his throne and said, “We’re above everybody else.” That’s a good story, but it’s nothing more than a story. It’s a myth.
You’re a vegan, correct?
Yes.
How far down would you extend these rights? I mean, you have consciousness, and then below that you have sentience, which is of course a misused word. People use “sentience” to mean intelligence, but sentience is the ability to feel something. In your world, you would extend rights at some level all the way down to anything that can feel?
Yeah, and look: I've been a vegan for just over a year and a couple of months, let's say fourteen months. So, just like any other human being, I have been, and still am, very imperfect. Now, I don't know exactly how far we should expand that, but I would say we should immediately stop at the level where we can easily observe that we're causing suffering.
If you go to a butcher shop—especially an industrialized-farming one, where they kill something like ten thousand animals per day—it's so mechanized, right? To me, if you see that in front of your eyes, it's impossible not to admit that those animals are suffering. So that's at least the first step. I don't know how far we should go, but we should start with the first steps, which are very visible.
What do you think about consciousness? Do you believe consciousness exists, unlike Dan Dennett, and if so where do you think it comes from?
Now you're putting me on the spot. I have no idea where it comes from, first of all. You know, I am an atheist, but if there's one religion that I have very strong sympathies towards, it would be Buddhism. I particularly value the practice of meditation. So the question is, when I meditate—and it only happens rarely that I can get into some kind of deep meditation—is that consciousness mine, or am I part of it?
I don’t know. So I have no idea where it comes from. I think there is something like consciousness. I don’t know how it works, and I honestly don’t know if we’re part of it, or if it is a part of us.
Is it at least a tenable hypothesis that a machine would need to be conscious, to be an AGI?
I would say yes, of course, but the next step, immediately, is: how do we know if that machine has consciousness or not? That's what I'm struggling with, because one of the implications is that the moment you accept, or commit to, that kind of definition—that we're only going to have AGI if it has consciousness—then the question is, how do we know if and when it has consciousness? If an AGI is programmed to say, “I have consciousness,” how do you know if it's telling the truth, and if it's really conscious or not? So that's what I'm struggling with, to be more precise in my answers.
And mind you, I have the luxury of being a philosopher, and that’s also kind of the negative too—I’m not an engineer, or a neuroscientist, so…
But you can say consciousness is required for an AGI without having to worry about how we would measure it.
Yes.
That’s a completely different thing. And if consciousness is required for an AGI, and we don’t know where human consciousness comes from, that at least should give us an enormous amount of pause when we start talking about the month and the day when we’re going to hit the singularity.
Right, and I agree with you entirely, which is why I’m not so crazy about the timelines, and I’m staying away from it. And I’m generally on the skeptical end of things. By the way, for the last seven years of my journey I have been becoming more and more skeptical. Because there are other reasons or ways that the singularity…
First of all, the future never unfolds the way we think it will, in my opinion. There’s always those black swan events that change everything. And there are issues when you extrapolate, which is why I always stay away from extrapolation. Let me give you two examples.
The easy example is what you might call negative extrapolation. We have people such as Lord Kelvin—he was the president of the Royal Society, one of the smartest people of his day—who wrote in the 1890s about how heavier-than-air flying machines are impossible to build.
The great H.G. Wells wrote, just in 1902, that heavier-than-air aircraft are totally impossible to build, and he’s a science fiction writer. And yet, a year later the Wright brothers, two bicycle makers, who probably never read Lord Kelvin’s book, and maybe didn’t even read any of H.G. Wells’ science fiction novels, proved them both wrong.
So people were extrapolating negatively from the past. Saying, “Look, we’ve tried to fly since the time of Icarus, and the myth of Icarus is a warning to us all: we’re never going to be able to fly.” But we did fly. So we didn’t fly for thousands of years, until one day we flew. That’s one kind of extrapolation that went wrong, and that’s the easy one to see.
The harder one is the opposite, which is called positive extrapolation. From 1903 to, let's say, the late 1960s, we went from the Wright brothers to the moon. Amazing people, like Arthur C. Clarke, said: well, if we made it from the Wright brothers in 1903 to the moon by the late 1960s, then by 2002 we will be beyond Mars; we will be outside of our solar system.
That’s positive extrapolation. Based on very good data for, let’s say, sixty-five years from 1903 to 1968—very good data—you saw tremendous progress in aerospace technology. We went to the moon several times, in fact, and so on and so on. So it was logical to extrapolate that we would be by Mars and beyond, today. But actually, the opposite happened. Not only did we not reach Mars by today, we are actually unable to get back to the moon, even. As Peter Thiel says in his book, we were promised flying cars and jetpacks, but all we got was 140 characters.
In other words, beware of extrapolations, because they’re true until they’re not true. You don’t know when they are going to stop being true, and that’s the nature of black swan sorts of things. That’s the nature of the future. To me, it’s inherently unknowable. It’s always good to have extrapolations, and to have ideas, and to have a diversity of scenarios, right?
That's another thing on which I agree with you: Singularitarians tend to embrace a single view of the future, or a single path to the future. I have a problem with that myself. I think that there's a cone of possible futures. There are certainly limitations, but there is a cone of possibilities, and we are aware of only a fraction of it. We can extrapolate only within a fraction of it, because we have unknown unknowns, and we have black swan phenomena, which can change everything dramatically. I've even listed three disaster scenarios—asteroids, ecological collapse, nuclear weapons—which can also change things dramatically. There are many things that we don't know, that we can't control, and that we're not even aware of, that can and probably will make the actual future different from the future we think will happen today.
Last philosophical question, and then I’d like to chat about what you’re working on. Do you believe humans have free will?
Yes. So I am a philosopher, and again—just like with the future—there are limitations, right? All the possible futures stem from the cone of future possibilities derived from our present. Likewise, our ability to choose, to make decisions, to take action, has very strict limitations; yet there is a realm of possibilities that's entirely up to us. At least that's what I'm inclined to think, even though most scientists that I meet and interview on my podcast are actually, to one degree or another, determinists.
Would an AGI need to have free will in order to exist?
Yes, of course.
Where do you think human free will comes from? If every effect had a cause, and every decision had a cause—presumably in the brain—whether it’s electrical or chemical or what have you… Where do you think it comes from?
Yeah, it could come from quantum mechanics, for example.
That only gets you randomness. That doesn’t get you somehow escaping the laws of physics, does it?
Yes, but randomness can be sort of a living-cat and dead-cat outcome, at least metaphorically speaking. You don't know which one it will be until that moment arrives. The other thing is, let's say you have fluid dynamics: with the laws of physics, we can predict how a particular system of gas will behave within the laws of fluid dynamics, but it's impossible to predict how a single molecule or atom will behave within that system. In other words, if the laws of the universe and the laws of physics set the realm of possibilities, then within that realm, you can still have free will. So, we are such tiny, minuscule little parts of the system, as individuals, that we are more akin to atoms, if not smaller particles than that.
Therefore, we can still be unpredictable.
Just like it's unpredictable, by the way, with quantum mechanics, to say, “Where is the electron located?”—and if you try to observe it, then you are already affecting the outcome. You're predetermining it, actually, when you try to observe it, because you become a part of the system. But if you're not observing it, you can describe a realm of possibilities where it's likely to be, but you don't know exactly where it is. Within that realm, you get your free will.
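[Editor's note: the fluid-dynamics point can be illustrated numerically—simulate many random “molecules” and the aggregate behaves lawfully even though no single one is predictable in advance. A small Python sketch with made-up numbers.]

```python
import random
import statistics

# Give every "molecule" a random speed drawn from the same distribution.
speeds = [random.gauss(mu=500.0, sigma=100.0) for _ in range(100_000)]

# The aggregate behaves lawfully: the mean always lands very close to 500.
print(f"average speed ~ {statistics.mean(speeds):.1f}")

# But any individual molecule is unpredictable before you look at it.
print(f"molecule #42 speed: {speeds[42]:.1f}")
```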
Final question: Tell us what you’re working on, what’s exciting to you, what you’re reading about… I see you write a lot about movies. Are there any science fiction movies that you think are good ones to inform people on this topic? Just talk about that for a moment.
Right. So, let me answer backwards. In terms of movies—it's been a while since I've watched it, but I actually even wrote a review of it—one of the movies that I really enjoyed watching is by the Wachowskis, and it's called “Cloud Atlas.” I don't think that movie was very successful at all, to be honest with you.
I'm not even sure if they managed to recover the money they invested in it, but in my opinion it was one of the top ten best movies I've ever seen in my life. It's a sextet—six plots progressing in parallel, in six different locations and six different epochs, with six different timelines and tremendous actors—and it touched on a lot of those future technologies, and even the meaning of being human: what separates us from the others, and so on.
I would suggest people check out “Cloud Atlas.” One of my favorite movies. The previous question you asked was, what am I working on?
Mm-hmm.
Well, to be honest, I just finished my first book three months ago or something. I launched it on January 23rd I think. So I’ve been basically promoting my book, traveling, giving speeches, trying to raise awareness about the issues, and the fact that, in my view, we are very unprepared—as a civilization, as a society, as individuals, as businesses, and as governments.
We are going to witness a tremendous amount of change in the next several decades, and I think we're grossly unprepared. And I think, depending on how we handle those changes—with genetics, with robotics, with nanotech, with artificial intelligence; even if we never reach the level of artificial general intelligence, by the way, that's beside the point to me—just the changes we're going to witness as a result of the biotech revolution could actually put our whole civilization at risk. They're not only going to change the meaning of what it is to be human; they could put everything at risk. All of those things converging together, in the narrow span of several decades, create what I think is a crunch point, which could be what some people have called a “pre-singularity future,” and that is one possible answer to the Fermi Paradox.
Enrico Fermi was the very famous Italian physicist who, decades ago, basically observed that there are two hundred billion galaxies just in the observable universe, and each of those two hundred billion galaxies has two hundred billion stars. In other words, there's an almost endless number of exoplanets like ours—located in the Goldilocks zone, where it's not too hot or too cold—which could potentially give birth to life. The question then is, if there are so many planets and so many stars and so many places where we can have life, where is everybody? Where are all the aliens? There's a diversity of answers to that question. But at least one of the possible scenarios to explain this paradox is what's referred to as the pre-singularity future. Which is to say, in each civilization, there comes a moment where its technological prowess surpasses its capacity to control it. Then, possibly, it self-destructs.
So in other words, what I’m saying is that it may be an occurrence which happens on a regular basis in the universe. It’s one way to explain the Fermi Paradox, and it’s possibly the moment that we’re approaching right now. So it may be a moment where we go extinct like dinosaurs; or, if we actually get it right—which right now, to be honest with you, I’m getting kind of concerned about—then we can actually populate the universe. We can spread throughout the universe, and as Konstantin Tsiolkovsky said, “Earth is the cradle of humanity, but sooner or later, we have to leave the cradle.” So, hopefully, in this century we’ll be able to leave the cradle.
But right now, we are not prepared—neither intellectually, nor technologically, nor philosophically, nor ethically, not in any way possible, I think. That’s why it’s so important to get it right.
The name of your book is?
Conversations with the Future: 21 Visions for the 21st Century.
All right, Nikola, it’s been fascinating. I’ve really enjoyed our conversation, and I thank you so much for taking the time.
My pleasure, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 19: A Conversation with Manoj Saxena

[voices_in_ai_byline]
In this episode, Byron and Manoj discuss cognitive computing, consciousness, data, DARPA, explainability, and superconvergence.
[podcast_player name=”Episode 19: A Conversation with Manoj Saxena” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(01-02-35)-manoj-saxena.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-1.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. Today my guest is Manoj Saxena. He is the Executive Chairman of CognitiveScale. Before that, he was the General Manager of IBM Watson, the first General Manager, in fact. He’s also a successful entrepreneur who founded and sold two venture-backed companies within five years. He’s the Founding Managing Director of the Entrepreneur’s Fund IV, a 100-million-dollar seed fund focused exclusively on cognitive computing. He holds an MBA from Michigan State University and a Master’s in Management Sciences from the Birla Institute of Technology and Science in Pilani, India. Welcome to the show, Manoj.
Manoj Saxena: Thank you.
You’re well-known for eschewing the term “artificial intelligence” in favor of “cognitive computing”; even your bio says cognitive computing. Why is that?
AI, to me, is the science of making intelligent systems and intelligent machines. Most of AI is focused on replacing the human mind and creating systems that do the jobs of human beings. But I think the biggest opportunity—and it has been proven out in multiple research reports—is augmenting human beings. So, AI for me is not artificial intelligence; AI for me is augmented intelligence. It's how you can use machines to augment and extend the capabilities of human beings. And cognitive computing uses artificial intelligence technologies, and others, to pair man and machine in a way that augments human decision-making and augments human experience.
I look at cognitive computing as the application of artificial intelligence and other technologies to create—I call it the Iron Man J.A.R.V.I.S. suit, that makes every human being a superhuman being. That’s what cognitive computing is, and that was frankly the category that we started off when I was running IBM Watson as, what we believed, was the next big thing to happen in IT and in enterprise.
When AI was first conceived, and they met at Dartmouth and all that, they thought they could kind of knock it out in a summer. And I think the thesis was, as Minsky later said, that just as physics has only a few laws, and electricity has only a few laws, they thought intelligence would have just a couple of laws. Then AI had a few false starts—expert systems and so forth—but right now there's an enormous amount of optimism about what we're going to be able to do. What's changed in the last, say, decade?
I think a couple of dimensions in that, one is, when AI initially got going the whole intention was, “AI to model the world.” Then it shifted to, “AI to model the human mind.” And now, where I believe the most potential is, is, “AI to model human and business experiences.” Because each of those are gigantic. The first ones, “AI to model the world” and “AI to model the mind,” are massive exercises. In many cases, we don’t even know how the mind works, so how do you model something that you don’t understand? The world is too complex and too dynamic to be able to model something that large.
I believe the more pragmatic way is to use AI to model micro-experiences, whether it’s an Uber app, or a Waze. Or it is to model a business process, whether it’s a claim settlement, or underwriting, or management of diabetes. I think that’s where the third age of AI will be more focused around, not modeling the world or modeling the mind, but to model the human experience and a business process.
So is that saying we’ve lowered our expectations of it?
I think we have specialized in it. If you look at the human mind, again, you don’t go from being a child to a genius overnight. Let alone a genius that understands all sciences and all languages and all countries. I think we have gotten more pragmatic and more outcome driven, rather than more research and science-driven on how and where to apply AI.
I notice you’ve twice used the word “the mind,” and not “the brain.” Is that deliberate, and if so, where do you think “the mind” comes from?
I think there is a lot of hype, and there is a lot of misperception about AI right now. I like saying that, “AI today is both: AI equals ‘artificially inflated,’ and AI equals ‘amazing innovations.’” And I think in the realm of “AI equals artificially inflated,” there are five myths. One of the first myths is that AI equals replacement of the human mind. And I separate the human brain from the human mind, and from human consciousness. So, at best, what we’re trying to do is emulate functions of a human brain in certain parts of AI, let alone human mind or human consciousness.
We talked about this last time; we don't even know what consciousness is, other than a doctor saying whether the patient is dead or alive. There is no consciousness detector. And as for the human mind, there is a saying that you probably need a quantum machine to really figure out how it works—it's not a Boolean machine or a von Neumann machine; it's a different kind of processor. But a human brain, I think, can be broken down and can be augmented through AI to create exceptional outcomes. And we've seen that happen in radiology, on Wall Street with the quants, and in other areas. I think that's much more exciting: to apply AI pragmatically in these niches.
You know, it's really interesting because there's been a twenty-year effort called the OpenWorm Project, to take the nematode worm's brain, which is 302 neurons, and to model it. And even after twenty years, people in the project say it may not be possible. And so, if you can't do a nematode… One thing is certain: you're not going to do a human before you do a nematode worm.
Exactly. You know the way I see that, Byron, is that I’m more interested in “richer,” and not “smarter.” We need to get smarter but also we need to equally get richer. By “richer,” I don’t mean just making money, by “richer,” I mean: how do we use AI to improve our society, and our businesses, and our way of life? That’s where I think coming at it in the way of “outcome in,” rather than “science out,” is a more pragmatic way to apply AI.
So, you’ve mentioned five misconceptions, that was one of them. What are some of the other ones?
The first misconception was that AI equals replacing the human mind. The second misconception is that AI is the same as natural language processing, which is far from the truth—NLP is just a technique within AI. It's like saying, “My ability to understand and read a book is the same as my brain.” That's the second misconception.
The third is, AI is the same as big data and analytics. Big data and analytics are tools that are used to capture more input for an AI to work on. Saying that big data is the same as AI is saying, “Just because I can sense more, I can be smarter.” All big data is giving you is more input; it’s giving you more senses. It’s not making you smarter, or more intelligent. That’s the third myth.
The fourth myth is that AI is something that is better implemented horizontally versus vertically. I believe true AI, and successful AI—particularly in the business world—will have to be verticalized AI. Because it's one thing to say, “I've got an AI.” It's another thing to say, “I have an AI that understands underwriting,” versus an AI that understands diabetes, versus an AI that understands Super Bowl ads. Each of these requires a domain-specific optimization of data and models and algorithms and experience. And that's the fourth one.
The fifth one is that AI is all about technology. At best, AI is only half about technology. The other half of the equation has to do with skills, has to do with new processes, and methods, and governance on how to manage AI responsibly in the enterprise. Just like when the Internet came about, you didn’t have the methods and processes to create a web page, to build a website, to manage the website from getting hacked, to manage updates of the website. Similarly, there is a whole AI lifecycle management, and that’s what CognitiveScale focuses on: how do you create, deploy, and manage AI responsibly and at scale?
Because, unlike traditional IT systems—which do not learn; they are mostly rules-based systems, and rules-based systems don't learn—AI-based systems are pattern-based, and they learn from patterns. So, unlike traditional IT systems that did not learn, AI systems have an ability to self-learn and geometrically improve themselves. If you can't get visibility and control over these AI systems, you could have a massive problem of “rogue AI,” as CognitiveScale calls it, where it's irresponsible AI. You know that character Chucky from the horror movie? It's like having a bunch of Chuckys running around in your enterprise, opening up your systems. What is needed is a comprehensive end-to-end view of managing AI—from design, to deployment, to production—and governance of it at scale. That requires a lot more than technology; it requires skills, methods, and processes.
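[Editor's note: the contrast drawn here—rules-based systems whose logic is frozen until a human edits it, versus pattern-based systems whose behavior drifts as they learn from data—can be sketched in a few lines of Python. This is an illustrative toy, not a description of CognitiveScale's or Watson's actual software; the claim amounts and the update rule are invented.]

```python
# A rules-based check: the logic is frozen until a human edits it.
def rules_based_flag(claim_amount: float) -> bool:
    return claim_amount > 10_000   # hand-written threshold; it never changes

# A (very) simple pattern-based check: its threshold drifts toward the data it
# sees after deployment, which is exactly why it needs visibility and control.
class LearningFlag:
    def __init__(self, threshold: float = 10_000.0, rate: float = 0.05):
        self.threshold = threshold
        self.rate = rate

    def flag(self, claim_amount: float) -> bool:
        return claim_amount > self.threshold

    def learn(self, claim_amount: float, was_fraud: bool) -> None:
        # Lower the threshold after missed fraud; raise it after a false alarm.
        if was_fraud and not self.flag(claim_amount):
            self.threshold -= self.rate * (self.threshold - claim_amount)
        elif not was_fraud and self.flag(claim_amount):
            self.threshold += self.rate * (claim_amount - self.threshold)

detector = LearningFlag()
detector.learn(claim_amount=8_000, was_fraud=True)   # missed fraud: threshold drops
print(detector.threshold)                            # no longer the hand-written 10,000
```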
When we were chatting earlier you mentioned that some people were having difficulty scaling their projects, that they began in their enterprise, making them kind of enterprise-ready. Talk about that for a moment. Why is that, and what’s the solution to that?
Yes. I've talked to over six hundred customers just in the last five years—everything from the IT level to the board level and CEO level. There are three big things going on that they're struggling with in getting value out of AI. Number one is that AI is seen as something that can be done by data scientists and analytics people. AI is far too important to be left to just data scientists. AI has to be done as a business strategy. AI has to be done top-down to drive business outcomes, not bottom-up as a way of finding data patterns. That's the first part. I see a lot of science projects happening. One of the customers called it darts versus bubbles. He says, “There are lots of little darts of projects going on, but how do I know where the big bubbles are, which really move the needle for the multibillion-dollar business that I have?” There are a lot of what I call bottom-up engineering experiments going on that are not moving the needle. That's one thing.
Number two is that the data scientists and application developers are struggling with taking these projects into production, because they are not able to provide the fundamental capabilities you need from AI in an enterprise, such as explainability. I believe 99.9% of the AI companies today that are funded will not make it in the next three years, because they lack some fundamental capability, like explainability. It's one thing to find pictures of cats on the internet using a deep learning network; it's another thing to explain to a chief risk officer why a particular claim was denied, and the patient died, and now they have a hundred-million-dollar lawsuit. The AI has to be responsible, trustworthy, and explainable; able to say why that decision was made at that time. Because of the lack of these kinds of capabilities—and there are five such capabilities that we call enterprise-grade AI—most of these projects are not able to move into production, because they're not able to meet the requirements from a security and performance perspective.
And then, last but not least, these skills are very scarce. Someone told me there are only seven thousand people in this world who have the skills to be able to understand and run AI models and networks like deep learning and others. Imagine that, seven thousand. I know of a bank that's got twenty-two thousand developers, one bank alone. There is a tremendous gap between the way AI is being practiced today and the skills that are available to get it production-ready.
That’s another thing that CognitiveScale is doing, we have created this platform to democratize AI. How do you take application developers and data scientists and machine learning people, and get them to collaborate, and deploy AI in 90-day increments? We have this method called “10-10-10,” where, in 10 hours we select a use case, and in 10 days we build the reference application using their data, and in 10 weeks we take them into production. We do that by helping these groups of people collaborate on a new platform called Cortex, that lets you take AI safely and securely into production, at scale.
Backing that up a little bit: there are European efforts to require that, if an AI makes a decision about you, you have a right to know why—why it denied you a loan, for example. So, you're saying that that is something that isn't happening now, but it is something that's possible.
Actually, there are some efforts going on right now. DARPA has got some initiatives around this notion of XAI, explainable AI. And I know other companies are exploring this, but it's still a very low-level technology effort. Explainable AI is not coming up at a business process level, and at an industry level, because the explainability requirements of an AI vary from process to process, and from industry to industry. The explainability requirements for a throat cancer specialist talking about why he recommended a treatment are different from the explainability requirements for an investment advice manager in wealth management, who says, “Here's the portfolio I recommended to you with our systems of AI.” So, explainability exists at two levels. It exists at a horizontal level as a technology, and it exists at an industry-optimized level, and that's why I believe AI has to be verticalized and industry-optimized for it to really take off.
You think that’s a valid request to ask of an AI system.
I think it’s a requirement.
But if you ask a Google person, “I rank number three for this search. Somebody else ranks number four. Why am I three and they’re four?” They’d be like, “I don’t know. There are six thousand different things going on.”
Exactly. Yeah.
So wouldn’t an explainability requirement impede the development of the technology?
Or, it can create a new class of leaders who know how to crack that nut. That's the basis on which we founded CognitiveScale. It's one of the six requirements that we've talked about for creating enterprise-grade AI. One of the big things—and I learned this while we were doing Watson—was how do you build AI systems you can trust, as a human being? Explainability is one of them. Another one is recommendations with reasons. When your AI gives you an insight, can it also give you evidence to support, “Why I'm suggesting this as the best course of action for you”? That builds trust in the AI, and that's when the human being can take action. Evidence and explainability are two of those dimensions that are requirements of enterprise-grade AI, and for AI to be successful at large.
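[Editor's note: one common way to provide “recommendations with reasons” is to attach per-feature contributions to every score, so the system can report which factors pushed a decision and by how much. The Python sketch below does this for a simple linear model; the feature names and weights are invented for illustration and are not from any system discussed in this interview.]

```python
# Hypothetical weights a simple linear claims model might have learned.
WEIGHTS = {
    "prior_denials": 1.8,
    "treatment_out_of_network": 1.2,
    "missing_documentation": 2.5,
    "policy_tenure_years": -0.4,
}
BIAS = -2.0

def score_with_reasons(claim: dict):
    # Per-feature contribution = weight * feature value.
    contributions = {name: WEIGHTS[name] * claim[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    # Rank factors by how strongly they pushed the score up (toward denial).
    reasons = [
        f"{name} contributed {value:+.2f}"
        for name, value in sorted(contributions.items(),
                                  key=lambda kv: kv[1], reverse=True)
    ]
    return score, reasons

claim = {"prior_denials": 2, "treatment_out_of_network": 1,
         "missing_documentation": 1, "policy_tenure_years": 6}
score, reasons = score_with_reasons(claim)
print(f"denial score: {score:.2f}")
for reason in reasons:
    print(" -", reason)
```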
There are seven thousand people who understand that. Assuming that's true, is it a function of how difficult it is, or how new it is?
I think it's a function of how different a skill set it is that we're trying to bring into the enterprise. It is also how difficult it is. It's like the Web; I keep going back to the Internet. We are like where the Internet was in 1997. There were probably, at that time, only a few thousand people who knew how to develop HTML-based applications or web pages. AI today is where the Internet was in 1996 and 1997, where people were building a web page by hand. It's far different from building a web application, which is connecting a series of these web pages, and orchestrating them to a business process to drive an outcome. That's far different from optimizing that process to an industry, and managing it under the requirements of explainability, governance, and scalability. There is a lot of innovation around enterprise AI that is yet to come about, and we have not even scratched the surface yet.
When the Web came out in ’97, people rushed to have a web department in their company. Are we there, are we making AI departments and is that, like, not the way to do it?
Absolutely. I won't say it's not the way to do it; I'll say it's a required first step, to really understand and learn. Not just AI, but even blockchain—CognitiveScale calls it “blockchain with a brain.” I think that's the big transformation, which has yet to happen, that's on the horizon in the next three to four years—where you start building self-learning and self-assuring processes. Coming back to the Web analogy, that was the first step of three or four in making a business become an e-business. Twenty-five years ago, when the Web came about, everyone became an e-business; every process became “webified.” Now, with AI, everyone will become an i-business, or a c-business—a cognitive business—and everyone is going to get “cognitized.” Every process is going to get cognitized. Every process will learn from new data and new interactions.
The steps they will go through are not unlike what they went through with the Web. Initially, they had a group of people building web apps, and the CEO said after a while, in 1998, “I've spent half a million dollars, and all I have is an intelligent digital brochure on the website. What has it done for my business?” That is exactly the stage we are at. Then, someone else came along and said, “Hey, I can connect a shopping cart to this particular set of web pages. I can put a payment system around it. I can create an e-commerce system out of it. And I have this open-source thing called JBoss that you can build off of.” That's kind of similar to what Google TensorFlow is doing today for AI. Then there were next-generation companies like Siebel and Salesforce that came in and said, “I can build for you a commercial, web-based CRM system.” Similarly, that's what CognitiveScale does. We are building the next-generation intelligent CRM system, or intelligent HRM system, that lets you get value out of these systems in a reliable and scalable manner. So it's sort of the same progression they're going to go through with AI that we went through with the Web. And there's still a tremendous amount of innovation and new market leadership to come. I believe there will be a new hundred-billion-dollar AI company formed in the next seven to ten years.
What's the timescale on AI going to be? Is it going to be faster or slower?
I think it'll be faster, for multiple reasons. I gave a little TED Talk on this notion of a superconvergence of technologies. When the Web came about, we were shifting from just one technology to another—we moved from client-server to Web. Right now, you've got these super six technologies converging that will make AI adoption much faster: cloud, mobile, social, big data, blockchain, and analytics. All of these are coming together at a rate and pace that is enabling compute and access at a scale that was never possible before, and you combine that with the ability for a business to get disrupted dramatically.
One of the biggest reasons that AI is different from the Web is that those web systems were rules-based. They did not geometrically learn and improve. The concern and the worry that the CEOs and boards have this time around is that—unlike a web-based system—an AI-based system improves with time, and learns with time, so either I'm going to get dramatically ahead of the competition, or I'm going to be dramatically left behind. It's what some people call “the Uber-ification” of businesses. There is this threat, and an opportunity, to use AI as a great transformation and accelerator for their business model. That's where this becomes an incredibly exciting technology, riding on the back of the superconvergence that we have.
If a CEO is listening, and they hear that, and they say, “That sounds plausible. What is my first step?”
I think there are three steps. The first step is to educate yourself, and your leadership team, on the business possibilities of AI—AI-powered business transformation, not the technology possibilities of AI. So, one step is just education; educate yourself. Second is to start experimenting. Experiment by deploying 90-day projects that cost a few hundred thousand dollars, not two-year projects with multiple millions of dollars put into them, so you can really start understanding the possibilities. That also lets you start cutting through the vendor hype about what is product and what is PowerPoint. The narrative for AI, unfortunately, today, is being written by either Hollywood or by glue-sniffing marketers from large companies, so the 90-day projects will help you cut through it. So, first is to educate, second is to experiment, and third is to enable. Enable your workforce to really start having the skill sets and the governance and the processes, and enable an ecosystem, to really build out the right set of partners—with technology, data, and skills—to start cognitizing your business.
You know AI has always kind of been benchmarked against games, and what games it can beat people at. And that’s, I assume, because games are these closed environments with fixed rules. Is that the way an enterprise should go about looking for candidate projects, look for things that look like games? I have a stack of resumes, I have a bunch of employees who got great performance reviews, I have a bunch of employees that didn’t. Which ones match?
I think that's the wrong metaphor to use. I think the way for a business to think about AI is in the context of three things: their customers, their employees, and their business processes. They have to think about, “How can I use AI in a way that my customer experience is transformed? That every customer feels very individualized, and personalized, in terms of how I'm engaging them?” So, that's one: customer experiences that are highly personalized and highly contextualized. Second is employee expertise. “How do I augment the experience and expertise of my employees such that every employee becomes my smartest employee?” This is the Iron Man J.A.R.V.I.S. suit. It's, “How do I upskill my employees to be the smartest at making decisions, to be the smartest in handling exceptions?” The third thing is my business processes. “How do I implement business processes that are constantly learning on their own, from new data and from new customer interactions?” I think if I were the CEO of a business, I would look at it from those three vectors and then implement projects in 90-day increments to learn about what's possible across those three dimensions.
Talk a minute about CognitiveScale. How does it fit into that mix?
CognitiveScale was founded by a series of executives who were part of IBM Watson, so it was me and the guy who ran Watson Labs. We ran it for the first three years, and one thing we immediately realized was how powerful and transformative this technology is. We came away with three things: first, we realized that for AI to be really successful, it has to be verticalized and it has to really be optimized to an industry. Number two is that the power of AI is not in a human being asking the question of an AI, but it’s the AI telling the human being what questions to ask and what information to look for. We call it the “known unknowns” versus “unknown unknowns.” Today, why is it that I have to ask an Alexa? Why doesn’t Alexa tell me when I wake up, “Hey, while you were sleeping, Brexit happened. And—” if I’m an investment adviser, “—here are the seventeen customers you should call today and take them through the implications, because they’re probably panicking.” It’s using a system which is the opposite of a BI. A BI is a known-unknown—I know I don’t know something, therefore I run a query. An AI is an unknown unknown, which means it’s tapping me on the shoulder and saying, “You ought to know this,” or, “You ought to do this.” So, that was the second thesis. One is verticalize, second is unknown unknowns, and the third is quick value in 90-day increments—this is delivered using the method we call “10-10-10,” where we can stand up little AIs in 90-day increments.
The company got started about three-and-a-half years ago, and the mission is to create exponential business outcomes in healthcare, financial services, telecom, and media. The company has done incredibly well; we have investments from Microsoft, Intel, IBM, and Norwest—we've raised over $50 million. There are offices in Austin, New York, London, and India. And it's a who's who: there are over thirty customers who are deploying this, and now scaling it as an enterprise-wide initiative. It's, again, built on this whole hypothesis of driving exponential business outcomes with AI, not driving science projects.
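[Editor's note: the “known unknowns versus unknown unknowns” distinction mentioned above—a BI query you have to think to run, versus a system that proactively surfaces what you should look at—can be sketched as follows. The portfolio data, the Brexit trigger, and the exposure threshold are all invented for illustration.]

```python
portfolios = {
    "client_a": {"uk_exposure": 0.45, "value": 1_200_000},
    "client_b": {"uk_exposure": 0.05, "value": 800_000},
    "client_c": {"uk_exposure": 0.30, "value": 2_500_000},
}

# BI-style "known unknown": I already suspect something, so I write the query myself.
def query_uk_exposure(min_exposure: float) -> list:
    return [name for name, p in portfolios.items() if p["uk_exposure"] >= min_exposure]

# Proactive "unknown unknown": the system watches events and taps me on the shoulder.
def proactive_alerts(event: str) -> list:
    if event == "brexit_vote":
        affected = query_uk_exposure(0.25)
        return [f"Call {name}: high UK exposure after {event}" for name in affected]
    return []

print(query_uk_exposure(0.25))          # only useful if I knew to ask
print(proactive_alerts("brexit_vote"))  # surfaced without my asking
```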
CognitiveScale is an Austin-based company, Gigaom is an Austin-based company, and there’s a lot of AI activity in Austin. How did that come about, and is Austin an AI hub?
Absolutely, and that's one of the exciting things I'm working on. One of my roles is Executive Chairman of CognitiveScale. Another of my roles is that I have a hundred-million-dollar seed fund that focuses on investing in vertical AI companies. And the third thing, which we just announced last year, is an initiative called AI Global—out of Austin—whose focus is fostering the deployment of responsible AI.
I believe East Coast and West Coast will have their own technology innovations in AI. AI will be bigger than the Internet was. AI will be at the scale of what electricity was. Everything we know around us—from our chairs to our lightbulbs and our glasses—is going to have elements of AI woven into it over the next ten years. And, I believe one of the opportunities that Austin has—and that’s why we founded AI Global in Austin—is to help businesses implement AI in a responsible way so that it creates good for the business in an ethical and a responsible manner.
Part of the ethical and responsible use of AI involves bringing a community of people together in Austin, and having Austin be known as the place to go for designing responsible AI systems. We have the UT Law school working with us, the UT Design school, the UT Business school, the UT IT school—all of them are working together as one. We have the mayor's office and the city working together extensively. We also have some local companies like USAA, which is coming in as a founding member of this. What we are doing now is helping companies that come to us get a prescription on how to design, deploy, and manage responsible AI systems. And I think there are tremendous opportunities, like you and I have talked about, for Gigaom and AI Global to start doing things together to foster the implementation of responsible AI systems.
You may have heard that IBM Watson beat Ken Jennings at Jeopardy. Well, he gave a TED Talk about that, and he said there was a graph that showed Watson's progress as it got better; every week they would send him an update, and Watson's line would be closer to his. He said he would look at it with dread. He said, “That's really what AI is. It's not the Terminator coming for you. It's the one thing you do great, and it just gets better and better and better at it.” And you talked about Hollywood driving the narrative of AI, but one of the narratives is AI's effect on jobs, and there's a lot of disagreement about it. Some people believe it's going to eat a bunch of low-skill work, and we will have permanent unemployment, and it will be like the Depression, and all of that. While others think it's actually going to create a bunch of jobs—that, just like any other transformative technology, it's going to raise productivity, which is how we raise wages. So which of those narratives, or a different one, do you follow?
And there's a third group that says that AI could be our last big innovation, and that it's going to wipe us out as a species. I think the first two—in fact, all three—have elements of truth in them.
So it will wipe us out as a civilization?
If you don’t make the right decisions. I’m hearing things like autonomous warfare which scares the daylights out of me.
Let’s take all three. In terms of AI dislocating jobs, I think every major technology—from the steam engine to the tractor to semiconductors—has always dislocated jobs; and AI will be no different. There’s a projection that by the year 2020 eighteen million jobs will be dislocated by AI. These are tasks that are routine tasks that can be automated by a machine.
Hold on just a second, that’s twenty-seven months from now.
Yeah, eighteen million jobs.
Who would say that?
It's a report that was done by, I believe, the World Economic Forum. But here's the thing: I think that's quite true. I just don't worry about it as much as I focus on the 1.3 billion jobs whose roles AI will uplift. That's why I look at augmentation as a bigger opportunity than replacement of human beings. Yes, AI is going to remove and kill some jobs, but there is a much, much larger opportunity in using AI to augment and upskill your employees, just like the Web did. The Web gave you reach and access and connection at a scale that was never possible before—just like the telephone did before that, and the telegraph did before that. And I think AI is going to give us a tremendous number of opportunities for creating—someone called them “new collar” jobs, I think it was IBM—not just blue-collar or white-collar, but “new collar” jobs. I do believe in that; I do believe there is an entire range of jobs that AI will bring about. That's one.
The second narrative was around AI being the last big innovation that we will make. And I think that is absolutely the possibility. If you even look at the Internet when it came about, the top two applications in the early days of the Internet were gambling and pornography. Then we started putting the Internet to work for the betterment of businesses and people, and we made choices that made us use the Internet for greater good. I think the same thing is going to happen with AI. Today, AI is being used for everything from parking tickets being contested, to Starbucks using it for coffee, to concert tickets being scalped. But I think there are going to be decisions as a society that we have to make, on how we use AI responsibly. I’ve heard the whole Elon Musk and Zuckerberg argument; I believe both of them are right. I think it all comes down to the choices we make as a society, and the way we scale our workforce on using AI as the next competitive advantage.
Now, the big unknown in all of this is what a bad actor, or nation states, can do using AI. The part that I still don’t have a full answer to, but it worries the hell out of me, is this notion of autonomous warfare. Where people think that by using AI they can actually restrict the damage, and they can start taking out targets in a very finite way. But the problem is, there’s so much that is unknown about an AI. An AI today is not trustworthy. You put that into things that can be weapons of mass destruction, and if something goes wrong—because the technology is still maturing—you’re talking about creating massive destruction at a scale that we’ve never seen before. So, I would say all three elements of the narrative: removing jobs, creating new jobs, creating an existential threat to us as a race—all of those elements are a possibility going forward. The one I’m the most excited about is how it’s going to extend and enhance our jobs.
Let’s come back to jobs in just a minute, but you brought up warfare. First of all, there appear to be eighteen countries working to make AI-based systems. And their arguments are twofold. One argument is, “There’s seventeen other people working to develop it, if I don’t…”
Someone else will. 
And second, right now, the military drops a bomb and it blows up everything… Let's look at a landmine. A landmine isn't AI. It will blow up anything over forty pounds. So if somebody came and said, "I can make an AI landmine that sniffs for gunpowder, and it will only blow up somebody who's carrying a weapon," then somebody else says, "I can make one that actually scans the person and looks for drab," and so forth. If you take warfare as something that is a reality of life, why wouldn't you want systems that were more discriminating?
That’s a great question, and I believe that will absolutely happen, and probably needs to happen, but over a period of time—maybe that’s five or ten years away. We are in the most dangerous time right now, where the hype about AI has far exceeded the reality of AI. These AIs are extremely unstable systems today. Like I said before, they are not evidence-based, there is no kill-switch in an AI, there is no explainability; there is no performance that you can really figure out. Take your example of something that can sniff gunpowder and will explode. What if I store that mine in a gun depot, in the middle of a city, and it sniffs the gunpowder from the other weapons there, and it blows itself up. Today, we don’t have the visibility and control at a fine-grain level with AI to warrant an application of it in that scale.
My view is that it will be an imperative for every nation-state to get on it—you saw Putin talk about it, saying, "He who controls AI will control the future world." There is no putting the genie back in the bottle. And just like we did with the rules of war, and just like we did with nuclear warfare, there will be new Geneva Convention-like rules that we will have to come up with as a society on how and where these responsible AI systems have to be deployed, and managed, and measured. So, just like we have done that for chemical warfare, I think there will be new rules that will come up for AI-based warfare.
But the trick with it is… A nuclear event is a binary thing; it either happened or it didn't. A chemical weapon, there is a list of chemicals; that's a binary thing. AI isn't, though. You could say your dog-food dish that refills automatically when it's empty is AI. How would you even phrase the law, assuming people followed it? How would you phrase it in just plain English?
In a very simple way. You've heard Isaac Asimov's three laws in I, Robot. I think as a society we will have to—in fact, I'm doing a conference on this next year north of London around how to use AI and drones in warfare in a responsible way—come up with a collective mindset and will from the nations to propose something like this. And I think the first event has not happened yet, though you could argue that the "fake news" event was one of the big AI events that has happened, one that potentially altered the direction of a presidential race. People are worried about hacking; I'm more worried about attacks whose source you can't trace. And I think that's work to be done going forward.
There was a weapons system that did make autonomous kill decisions, and the militaries that were evaluating it said, “We need it to have a human in the middle.” So they added that, but of course you can turn that off. It’s almost intractable to define it in such a way.
It sounds like you’re in favor of AI weapons, as long as they’re not buggy.
I’m not in favor of AI weapons. In general, as a person, I’m anti-war. But it’s one of those human frailties and human limitations that war is a necessary—as ugly as it is—part of our lives. I think people and countries will adopt AI and they will start using it for warfare. What is needed, I think, is a new set of agreements and a new set of principles on how they go about using it, much like they do with chemical weapons and nuclear warfare. I don’t think it’s something we can control. What we can do is regulate and manage and enforce it.
So, moving past warfare, do you believe Putin’s statement that he who controls AI in the future will control the world?
Absolutely. I think that’s a given.
Back to jobs for a moment. Working through the examples you gave, it is true that steam and electricity and mechanization destroyed jobs, but, what they didn’t do is cause unemployment. Unemployment in this country, in the US, at least, has been between five and ten percent for two hundred years, other than the Depression, which wasn’t technology’s fault. So, what has happened is, yes, we put all of the elevator operators out of business when we invented the button and you no longer had to have a person, but we never saw a spike in unemployment. Is that what’s going to happen? Because if we really lost eighteen million jobs in the next twenty-seven months, that would just be… That’s massive.
No, but here’s the thing, that eighteen million number is a global number.
Okay, that’s a lot better then. Fair enough, then.
And you have to put this number in the context of the total workforce. So today, there are somewhere between seven hundred million and 1.3 billion workers employed globally, and eighteen million is a fraction of that. That's number one. Number two, I believe there is a much bigger potential in using AI as a muse, and AI as a partner, to create a whole new class of jobs, rather than being afraid of the machine replacing the job. Machines have always replaced jobs, and they will continue to do that. But I believe—and this is where I get worried about our education system; one of the first things we did with Watson was start a university program to begin skilling people with the next-generation skillsets needed to deploy and manage AI systems—that over the next decade or, for that matter, over the next five decades, there is a whole new class of human creativity and human potential that can and will be unleashed through AI by creating whole new types of jobs.
If you look at CognitiveScale, we're somewhere around one hundred and sixty people today. Half of those jobs did not exist four years ago. And many of the people who would never have even considered a job in a tech company are employed by CognitiveScale today. We have linguists who are joining a software company because we have made their job into computational linguistics, where they're taking what they knew of linguistics, combining it with a machine, and creating a whole new class of applications and systems. We have people who are creating whole new types of testing mechanisms for AI. These testers never existed before. We have people who are now designing and composing intelligent agents using AI, with skills that they are blending from data science to application development to machine learning. These are new skills that have come about. Not to mention salespeople, and business strategists, who are coming up with new applications of this. I tend to believe that this is one of the most exciting times—from the point of view of economic growth and jobs—that we, and every country in this world, have in front of us. It all depends on how we commercialize it. One of the great things we have going for the US is a very rich and vibrant venture investment community and a very rich and vibrant stock market that values innovation, not just revenues and profits. As long as we have those, and as long as we have patent coverage and good enforcement of law, I see a very good future for this country.
At the dawn of the Industrial Revolution, there was a debate in this country, in the United States, about the value of post-literacy education. Think about that. Why would most people, who are just going to be farmers, need to go to school after they learn how to read? And then along came some people who said that the jobs of the future, i.e. Industrial Revolution jobs, will require more education. So the US was the first country in the world to guarantee every single person could go to high school, all the way through. So, Mark Cuban said, if he were coming up now, he would study philosophy. He’s the one who said, “The first trillionaires are going to be AI people.” So he’s bullish on this, he said, “I would study philosophy because that’s what you need to know.” If you were to advise young people, what should they study today to be relevant and employable in the future?
I think that’s a great question. I would say, I would study three different things. One, I would study linguistics, literature—soft sciences—things around how decisions are made and how the human mind works, cognitive sciences, things like that. That’s one area. The second thing I would study is business models and how businesses are built and designed and scaled. And the third thing I would study is technology to really understand the art of the possible with these systems. It’s at the intersection of these three things, the creative aspects of design and literature and philosophy around how the human mind works, to the commercial aspect of what to make, and how to build a successful business model, to the technological underpinnings of how to power these business models. I would be focusing on the intersection of those three skills; all embraced under the umbrella of entrepreneurship. I’m very passionate about entrepreneurship. They are the ones who will really lead these country forward, entrepreneurs, both in big companies, and small.
You and I have spoken on the topic of an artificial general intelligence, and you said it was forty or fifty years away, that’s just a number, and that it might require quantum computers. You mentioned Elon and his fear of the existential threat. He believes, evidently, that we’re very close to an AGI and that’s where the fear is. That’s what he’s concerned about. That’s what Hawking is concerned about. You said, “I agree with the concern, if we screw up, it’s an existential threat.” How do you reconcile that with, “I don’t think we’ll have an AGI for forty years”?
Because I think you don't need an AGI to create an existential threat. There are two different dimensions. You can create an existential threat just by building a highly unreliable autonomous weapons system that doesn't know anything about general intelligence. It only knows how to seek out and kill. And that, in the wrong hands, could really be the existential threat. You could create a virus on the Internet that could bring down all public utilities and emergency systems, without it having to know anything about general intelligence. If that somehow is released without proper testing or controls, you could bring down economies and societies. You could have devastation, unfortunately, at the scale of what Puerto Rico is now going through, without a hurricane going through it; it could be an AI-powered disaster like that. I think these are the kinds of outcomes we have to be aware of. These are the kinds of outcomes we have to start putting rules and guidelines and enforcement around. And that area, along with skills, are the two where I think we are lagging behind significantly today.
The OpenAI initiative is an effort to develop AI openly, so that one player doesn't control it—in that case an AGI, but all along the way. Do you think that is a good initiative?
Yeah, absolutely. I think OpenAI is one; we probably need a hundred other initiatives like that, focused on different aspects of AI. Like what we're doing at AI Austin, and AI Global. We are focusing on the ethical use of AI. It's one thing to have a self-driving car; it's another thing to have a self-driving missile. How do you take a self-driving car that ran over four people, and cross-examine it in a witness box? How is that AI explainable? Who's responsible for it? So there is a whole new set of ethics and laws that have to be considered when putting this into intelligent products. Almost like an Underwriters Laboratories equivalent for AI that needs to be woven into every product and every process. Those are the things that our governments need to become aware of, and our regulators need to get savvy about, and start implementing.
There is one theory that says that if it’s going to rely on government, that we are all in bad shape because the science will develop faster than the legislative ability to respond to it. Do you have a solution for that?
I think there’s a lot of truth to that, particularly with what we’re seeing recently in the government around technology, there’s a lot of merit to that. I believe, again, the results of what we become and what we use AI for, will be determined by what we do as private citizens, what we do as business leaders, and what we do as philanthropists. One of the beautiful things about America is what philanthropists like Gates and Buffett and all are doing—they’ve got more assets than many countries now, and they’re putting it to work responsibly; like what Cuban’s talking about. So, I do have hope in the great American “heart,” if you may, about innovation, but also responsible application. And I do believe that all of us who are in a position to educate and manage these things, it’s our duty to be able to spread the word, and to be able to lean in, and start helping, and steering this AI towards responsible applications.
Let’s go through your “What AI Isn’t” list, your five things. One of them you said, “An AI is not natural language processing” and obviously, that is true. Do you think, though, the Turing test has any value? If we make a machine that can pass it, is that a benchmark? We have done something extraordinary in that case?
When I was running Watson, I used to believe it had value, but I don't believe that as much anymore. I think it has limited value in applicability, because of two things. One is, in certain processes where you're replacing the human brain with a machine, you absolutely need to have some sort of a test to prove it out or not. But the more exciting part is not the replacement of automated or repetitive human functions; the more exciting part is things that the human brain hasn't thought of, or hasn't done. I'll give you an example: we are working at CognitiveScale with a very large media company, and we were analyzing Super Bowl TV ads, by letting an AI read the video ad, to find out exactly what kinds of creative—is it kids or puppies or celebrities—and at what time, would have the most impact on creating the best TV ad. What was fascinating was that we just let the AI run at it; we didn't tell it what to look for. There was no Turing test to say, "This is good or bad." And the stuff the AI came back with was ten or twelve levels deep in terms of the connections it found, things that a human brain normally would never have thought about. And we still can't describe why the connection is there.
It’s stuff like that—the absolute reference is not the human brain, this is the “unknown unknown” part I talked about—that with AI, you can emulate human cognition but, as importantly, with AI you can extend human cognition. The extension part of coming up with patterns or insights and decisions that the human brain may not have used, I think that’s the exciting part of AI. We find when we do projects with customers that there are patterns that we can’t explain, as a human being, why it is, but there’s a strong correlation; it’s eighteen levels deep and it’s buried in there, but it’s a strong correlator. So, I kind of put this into two buckets: first is low-level repetitive tasks that AI can replace; and second is a whole new class of learning that extends human cognition where—this is the unsupervised learning bit—where you start putting a human in the loop to really figure out and learn new ways of doing business. And I think they are both aspects that we need to be cognizant of, and not just try to emulate the current human brain which has, in many cases, proven to be very inefficient in making good decisions.
You have an enormous amount of optimism about it. You're probably the most optimistic person I've spoken to about how far we can get without a general intelligence. But, of course, you keep using words like "existential threat," you keep identifying concepts like a virus that takes down the electrical grid, warfare, and all of that; you even used "rogue AI" in the context of a business. In that latter case, how would a rogue AI destroy a business? And you can't legislate your way around that, right? So, give me an example of a rogue AI in an enterprise scenario.
There are so many of them. One of them actually happened when we recently met with a large financial institution. We were sitting and having a meeting, and suddenly we found out that that particular company was going through a massive disruption of business operations because all of their x-number of data centers were shutting down every twenty minutes or so and rebooting themselves; all over the world, their data centers were shutting down and rebooting. They were panicking because this was in the middle of a business day, there were billions of dollars being transacted, and they had no idea why these data centers were doing what they were doing. A few hours into it, they found out that someone had written a security bot the month before and launched it into their cloud system, and for some reason, that agent—that AI—felt it was a good idea to start shutting down these systems every twenty minutes and rebooting them. They finally found it, but it was a simple example of how there was no visibility into, or governance of, that particular AI that had been introduced. That's one of the reasons we talked about the ability to have a framework for managing visibility and control of these AIs.
The other one could be—and this has not happened yet, but this is one of the threats—underwriting. An insurance company uses technology a lot today to underwrite risks. And if, for whatever reason, you have an AI system that sees correlations and patterns, but has not been trained well enough to really understand risk, you could pretty much have the entire business wiped out. If you depend on the AI too much, without explainability and trust, it could suggest you take on risks that put your business at existential risk.
I can go on and on, and I can use examples around cancer, around diabetes, around anything to do with commerce where AI is going to be put to use. I believe as we move forward with AI, the two phrases that are going to become incredibly important for enterprises are “lifecycle management of an AI,” and “responsible AI.” And I think that’s where there’s a tremendous amount of opportunity. That’s why I’m excited about what we’re doing at CognitiveScale to enable those systems.
Two final questions. So, with those scenarios, give me the other side, give us some success stories you’ve personally seen. They can be CognitiveScale or other ones, that you’ve seen have a really positive impact on a business.
I think there are many of them. I'll pick an area in retail, something as simple as retail. This particular large retailer used to have a mobile app where they presented to you a shirt, and trousers, and some accessories, and it was like a Tinder or "hot or not" type of game. The rules-based system behind it, on average, was getting less than ten percent conversion on what people said they liked. Those were all systems that are not learning. Then we put an AI behind it, and that AI could understand that a particular dress was an off-shoulder dress, that it was a teal color, and that it pairs with an open-toe shoe in shiny leather. As the customers started engaging with it, the AI started personalizing the output, and we demonstrated a twenty-four percent conversion, compared to single-digit conversions, in a matter of seven months. And here's the beautiful part: every month the AI is getting smarter and smarter, and every percentage point of conversion equals tens of millions of dollars in top-line growth. So that's one example of a digital brain, a cognitive digital brain, driving shopper engagement and shopper conversion.
The other thing we saw was in pediatric asthma: how an AI can help nurses do a much better job of preventing children from having an asthma attack, because the AI is able to read a tweet from pollen.com that says there will be a ragweed outbreak on Thursday morning. The AI understands the zip code it's talking about, that Thursday is four days out, and that there are seventeen children at risk for ragweed or similar allergies; and it starts tapping the nurse on the shoulder and saying, "There is an 'unknown unknown' going on here, which is that four days from now there will be a ragweed outbreak; you'd better get proactive about it and start addressing the kids." So, there's an example in healthcare.
There are examples in wealth management and financial services around compliance, and how we're using AI to improve compliance. There are examples of how we are changing the dynamics of foreign exchange trading, and how a trader does equities and derivatives trading, with the AI listening in on a chat session and guiding them on what to do. The examples are many, and most of them are written up in case studies, but this is just the beginning. I think this is going to be one of the most exciting innovations that will transform the landscape of businesses over the next five to seven years.
You’re totally right about the product recommendation. I was on Amazon and I bought something, it was a book or something, and it said, “Do you want these salt-and-pepper-shaker robots that you wind up and they walk across the table?” And I was like, “Yes, I do!” But it had nothing to do with the thing that I was buying.
Final question, you’ve talked about Hollywood setting the narrative for AI. You’ve mentioned I, Robot in passing. Are you a consumer of science fiction, and, if so, what vision of the future—book or whatever—do you think, “Aha, that’s really cool, that could happen,” or what have you?
Well, I think probably the closest vision I would have is Gene Roddenberry's, and Star Trek. I think that's pretty much a great example of Data, the character, helping a human being make a better decision—a flight deck, a holodeck, that is helping you steer. It's still the human, being augmented. It's still the human making the decisions around empathy, courage, and ethics. And I think that's the world that AI is going to take us to: the world of augmented intelligence, where we are being enabled to do much bigger and greater things, and not just a world of artificial intelligence where all our jobs are removed and we are nothing but plastic blobs sitting in a chair.
Roddenberry said that in the twenty-third century there will be no hunger, and there will be no greed, and all the children will know how to read. Do you believe that?
If I had a chance to live to be twice or three times my age, that would be what I’d come in to do. After CognitiveScale, that is going to be my mission through my foundation. Most of my money I’ve donated to my foundation, and it will be focused on AI for good; around addressing problems of education, around addressing problems of environment, and around addressing problems of conflict.
I do believe that’s the most exciting frontier where AI will be applied. And there will be a lot of mishaps along the way, but I do believe, as a race and as a humanity, if we make the right decisions, that is the endpoint that we will reach. I don’t know if it’s 2300, but, certainly, it’s something that I think we will get to.
Thank you for a fascinating hour.
Thank you very much.
It was really extraordinary and I appreciate the time.
Thanks, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 18: A Conversation with Roman Yampolskiy

[voices_in_ai_byline]
In this episode Byron and Roman discuss the future of jobs, Roman’s new field of study, “Intellectology”, consciousness and more.
[podcast_player name=”Episode 18: A Conversation with Roman Yampolskiy” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-11-20-(00-45-56)-roman-yampolsky.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom, and I'm Byron Reese. Today, our guest is Roman Yampolskiy, a Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab, and the author of many books, including Artificial Superintelligence: A Futuristic Approach.
His main areas of interest are AI safety and cyber security. He is the author of over one hundred publications, and his research has been cited by over a thousand scientists around the world.
Welcome to the show.
Roman Yampolskiy: Thank you so much. Thanks for inviting me.
Let’s just jump right in. You’re in the camp that we have to be cautious with artificial intelligence, because it could actually be a threat to our survival. Can you just start right there? What’s your thinking? How do you see the world?
It’s not very different than any other technology. Any advanced technology can be used for good or for evil purposes. The main difference with AI is that it’s not just a tool, it’s actually an independent agent, making its own decisions. So, if you look at the safety situation with other independent agents—take for example, animals—we’re not very good at making sure that there are no accidents with pit bulls, for example.
We have some approaches to doing that. We can put them on a leash, put them in a cage, but at the end of the day, if the animal decides to attack, it decides to attack. The situation is very similar with advanced AI. We try to make it safe, beneficial, but since we don’t control every aspect of its decision-making, it could decide to harm us in multiple ways.
The way you describe it, you’re using language that implies that the AI has volition, it has intentionality, it has wants. Are you suggesting this intelligence is going to be conscious and self-aware?
Consciousness and self-awareness are meaningless concepts in science. They are nothing we can detect or measure. Let's not talk about those things. I'm saying specific threats will come from the following. One is mistakes in design. Just like with any software, you have computer bugs; you have values that are misaligned with human values. Two is purposeful design of malevolent AI. There are people who want to hurt others—hackers, doomsday cults, crazies. They will, on purpose, design intelligent systems to destroy, to kill. The military is a great example; they fund lots of research in developing killer robots. That's what they do. So, those are some simple examples.
Will AI decide to do something evil, for the sake of doing evil? No. Will it decide to do something which has a side effect of hurting humanity? Quite possible.
As you know, the range on when we might build an artificial general intelligence varies widely. Why do you think that is, and do you care to kind of throw your hat in that lottery, or that pool?
Predicting the future is notoriously difficult. I don’t see myself as someone who has an advantage in that field, so I defer to others. People, like Ray Kurzweil, who have spent their lives building those prediction curves, exponential curves. With him being Director of Engineering at Google, I think he has pretty good inside access to the technology, and if he says something like 2045 is a reasonable estimate, I’ll go with that.
The reason people have different estimates is the same reason we have different betting patterns in the stock market, or horses, or anything else. Different experts give different weights to different variables.
You have advocated research into, quote, “boxing” artificial intelligence. What does that mean, and how would you do it?
In plain English, it means putting it in prison, putting it in a controlled environment. We already do it with computer viruses. When you study a computer virus, you put it in an isolated system which has no access to internet, so you can study its behavior in a safe environment. You control the environment, you control inputs, outputs, and you can figure out how it works, what it does, how dangerous it is.
The same makes sense for intelligent software. You don't want to just run a test by releasing it on the internet, and seeing what happens. You want to control the training data going in. That's very important. We saw some terrible fiascos with the recent Microsoft chatbot being released without any controls, and users feeding it really bad data. You want to prevent that, so for that reason, I advocate having protocols, environments in which AI researchers can safely test their software. It makes a lot of sense.
When you think about the great range of intellectual ability, from the smallest and simplest creatures, to us, is there even an appropriate analogy for how smart a superintelligence could be? Is there any way for us to even think about that?
Like, when my cat leaves a mouse on the back porch, everything that cat knows says that I’m going to like that dead mouse, right? Its entire view of the world is that I’m going to want that. It doesn’t have, even remotely, the mental capability to understand why I might not.
Is an AI, do you think, going to be that far advanced, where we can’t even communicate in the same sort of language, because it’s just a whole different thing?
Eventually, yes. Initially, of course, we’ll start with sub-human AI, and slowly it will get to human levels, and very quickly it will start growing almost exponentially, until it’s so much more intelligent. At that point, as you said, it may not be possible for us to understand what it does, how it does it, or even meaningfully communicate with it.
You have launched a new field of study, called Intellectology. Can you talk about what that is, and why you did that? Why you thought there was kind of a missing area in the science?
Sure. There seem to be a lot of different sub-fields of science, all of them looking at different aspects of intelligence: how we can measure intelligence, build intelligence, human intelligence versus non-human intelligence, animals, aliens, communicating across different species. Forensic science tells us that we can look at an artifact, and try to deduce the engineering behind it. What is the minimum intelligence necessary to make this archeological artifact?
It seems to make sense to bring all of those different areas together, under a single umbrella, a single set of terms and tools, so they can be re-used, and benefit each field individually. For example, I look a lot at artificial intelligence, of course. And studying this type of intelligence is not the same as studying human intelligence. That’s where a lot of mistakes come from, assuming that human drives, wants and needs will be transferred.
This idea of a universe of different possible minds is part of this field. We need to understand that, just like our planet is not the middle of the universe, our intelligence is not the middle of that universe of possible minds. We’re just one possible data point, and it’s important to generalize outside of human values.
So it’s called Intellectology. We don’t actually have a consensus definition on what intelligence is. Do you begin there, with “this is what intelligence is”? And if so, what is intelligence?
Sure. There is a very good paper published by one of the co-founders of DeepMind, which surveys, maybe, I don’t know, a hundred different definitions of intelligence, and tries to combine them. The combination sounds something like “intelligence is the ability to optimize for your goals, across multiple environments.” You can say it’s the ability to win in any situation, and that’s pretty general.
It doesn’t matter if you are a human at a college, trying to get a good grade, an alien on another planet trying to survive, it doesn’t matter. The point is if I throw a mind into that situation, eventually it learns to do really well, across all those domains.
We see AIs, for example, capable of learning multiple videos games, and performing really well. So, that’s kind of the beginning of that general intelligence, at least in artificial systems. They’re obviously not at the human level yet, but they are starting to be general enough, where we can pick up quickly what to do in all of those situations. That’s, I think, a very good and useful definition of what intelligence is, one we can work with.
One thing you mentioned in your book, Artificial Superintelligence, is the notion of convincing robots to worship humans as gods. How would you do that, and why that? Where did that idea come from?
I don’t mention it as a good idea, or my idea. I kind of survey what people have proposed, and it’s one of the proposals. I think it comes from the field of theology, and I think it’s quite useless, but I mention it for the sake of listing all of the ideas people have suggested. Me and a colleague, we published a survey about possible solutions for dealing with super-intelligent systems, and we reviewed some three hundred papers. I think that was one of them.
I understand. Alright. What is AI Completeness Theory?
We think that there are certain problems which are fundamental problems. If you can do one of those problems, you can do any problem. Basically, you are as smart as a human being. It's useful to study those problems, to understand what the progress in AI is, and whether we've gotten to that level of performance. So, in one of my publications, I talk about the Turing Test as being a fundamental first AI-complete problem. If you can pass the Turing Test, supposedly, you're as intelligent as a human.
The unrestricted test, obviously not the five-minute version of that, or whatever is being done today. If that’s possible, then you can do all of the other problems. You can do computer vision, you can do translation, maybe you can even do computer programming.
You also write about machine ethics and robot rights. Can you explore that, for just a minute?
With regards to machine ethics, the literature seems to be, basically, everyone trying to propose that a certain ethical theory is the right one, and we should implement it, without considering how it impacts everyone who disagrees with the theory. Philosophers have been trying to come up with a common ethical framework for millennia. We are not going to succeed in the next few years, for sure.
So, my argument was that we should not even try to pick one correct ethical theory. That's not a solution which will make all of us happy. And each one of those ethical theories has well-known problems, which, if a system with that type of power implements that ethical framework, are going to create a lot of damage.
With regards to rights for robots, I was advocating against giving them equal rights, human rights, voting rights. The reasoning is quite simple. It’s not because I hate robots. It’s because they can reproduce almost infinitely. You can have a trillion copies of any software, almost instantaneously, and if each of them has voting rights, that essentially means that humanity has no rights. We give away human rights. So, anyone who proposes giving that type of civil rights to robots is essentially against human rights.
That’s a really bold statement. Let’s underline that, because I want to come back to it. But in order to do that, I want to return to the first thing I asked you, or one of the earlier things, about consciousness and self-awareness. You said these aren’t really scientific questions, so let’s not talk about them. But at least with self-awareness, that isn’t the case, is it?
I mean, there’s the red dot test—the mirror test—where purportedly, you can put a spot on an animal’s forehead while it’s asleep, and if it gets up and sees that in a mirror, and tries to wipe it off, it therefore knows that that thing in the mirror is it, and it has a notion of self. It’s a hard test to pass, but it is a scientific test. So, self-awareness is a scientific idea, and would an artificial intelligence have that?
We have a paper, still undergoing the review process, which surveys every known test for consciousness, and I guess you include self-awareness with that. All of them measure different correlates of consciousness. The example you give, yes, animals can recognize that it’s them in the mirror, and so we assume that also means they have similar consciousness to ours.
But it’s not the same for a robot. I can program a robot to recognize a red dot, and assume that it’s on its own forehead, in five minutes. It’s not, in any way, a guarantee that it has any conscious or self-awareness properties. It’s basically proving that we can detect red dots.
But all you are saying is we need a different test for AI self-awareness, not that AI self-awareness is a ridiculous question to begin with.
I don’t know what the definition of self-awareness is. If you’re talking about some non-material spiritual self-consciousness thing, I’m not sure what it does, or why it’s useful for us to talk about it.
Let’s ask a different question, then. Sentience is a word which is commonly misused. It’s often used to mean intelligent, but it simply means “able to sense something,” usually pain. So, the question of “is a thing sentient” is really important. Up until the 1990s, in the United States, veterinarians were taught not to anesthetize animals when they operated on them, because they couldn’t feel pain—despite their cries and writhing in apparent agony.
Similarly, until twenty or so years ago, human babies weren't anesthetized for open-heart surgery, because again, the theory was that they couldn't feel pain. Their brains just weren't well-developed. The notion of sentience, we put it right near rights, because we say, "If something can feel pain, it has a right not to be tortured."
Wouldn’t that be an equivalent with artificial intelligence? Shouldn’t we ask, “Can it feel pain?” And if it can, you don’t have to say, “Oh yeah, it should be able to vote for the leaders.” But you can’t torture it. That would be just a reasonable thing, a moral thing, an ethical thing to say. If it can feel, then you don’t torture it.
I can easily agree with that. We should not torture anyone, including any reinforcement learners, or anything like that. To the best of my knowledge, there are two papers published on the subject of computer pain, good papers, and both say it’s impossible to do right now.
It’s impossible to measure, or it’s impossible for a computer to feel pain right now?
It’s impossible for us to program a computer to feel pain. Nobody knows how to do it, how to even start. It’s not like with, let’s say pattern recognition, we know how to start, we have some results, we get ten percent accuracy so we work on it and get to fifteen percent, forty percent, ninety percent. With artificial pain, nobody knows how to even start. What’s the first line of code you write for that? There is no clue.
With humans, we assume that other humans feel pain because we feel pain, and we’ve got similar hardware. But there is not a test you can do to measure how much pain someone is in. That’s why we show patients those ten pictures of different screaming faces, and ask, “Well, how close are you to this picture, or that one?” This is all a very kind of non-scientific measurement.
With humans, yes, obviously we know, because we feel it, so similar designs must also experience that. With machines, we have no way of knowing what they feel, and no one, as far as I know, is able to say, “Okay, I programmed it so it feels pain, because this is the design we used.” There are just no ideas for how something like that can be implemented.
Let’s assume that’s true, for a moment. The way, in a sense, that you get to human rights, is you start by saying that humans are self-aware, which as you say, we can all self-report that. If we are self-aware, that implies we have a self, and implying we have a self means that that self can feel, and that’s when you get sentience. And then, you get up to sapience, which is intelligence. So, we have a self, that self can feel, and therefore, because that self can suffer, that self is entitled to some kind of rights.
And you’re saying we don’t know what that would look like in a computer, and so forth. Granting all of that, for just a moment, there are those who say that human intelligence, anything remotely like human intelligence, has to have those building blocks, because from self-awareness you get consciousness, which is a different thing.
And consciousness, in part, embodies our ability to change focus, to be able to do one thing, and then, for whatever reason, do a different thing. It’s the way we switch, and we go from task to task. And further, it’s probably the way we draw analogies, and so forth.
So, there is a notion that, even to get to intelligence, to get to superintelligence, there is no way to kind of cut all of that other stuff out, and just go to intelligence. There are those who say you cannot do that, that all of those other things are components of intelligence. But it sounds like you would disagree with that. If so, why would that be?
I disagree, because we have many examples of humans who are not neurotypical. People, for example, who don't experience pain. They are human beings, they are intelligent, they certainly have full rights, but they never feel any pain. So that example—that you must feel pain in order to reach those levels of intelligence—is simply not true. There are many variations among human beings; for example, some don't have visual thinking patterns. They think in words, not in images like most of us. So, even that goes away.
We don’t seem to have a guaranteed set of properties that a human being must have to be considered human. There are human beings who have very low intelligence, maybe severe mental retardation. They are still human beings. So, there are very different standards for, a) getting human rights, and, b) having all those properties.
Right. Okay. You advocate—to use your words from earlier in this talk—putting the artificial intelligence in a prison. Is that view—we need to lock it up before we even make it—really, in your mind, the best approach?
I wouldn’t be doing it if I didn’t think it was. We definitely need safety mechanisms in place. There are some good ideas we have, for how to make those systems safer, but all of them require testing. Software requires testing. Before you run it, before you release it, you need a test environment. This is not controversial.
What do you think of the OpenAI initiative, which is the idea that as we’re building this we ought to share and make it open source, so that there’s total transparency, so that one bad actor doesn’t get an AGI, and so forth? What are your thoughts on that?
This helps to distribute power amongst humans, so not a single person gets all the power, but a lot of people have access. But at the same time, it increases danger, because all the crazies, all the psychopaths, now get access to the cutting-edge AI, and they can use it for whatever purposes they want. So, it’s not clear cut whether it’s very beneficial or very harmful. People disagree strongly on OpenAI, specifically.
You don’t think that the prospects for humans to remain the dominant species on this planet are good. I remember seeing an Elon Musk quote, he said, “The only reason we are at the top is because we’re the smartest, and if we’re not the smartest anymore, we’re no longer going to be on top.” It sounds like you think something similar to that.
Absolutely, yes. To paraphrase, or quote directly, from Bill Joy, “The future may not need us.”
What do you do about that?
That’s pretty much all of my research. I’m trying to figure out if the problem of AI control, controlling intelligent agents, is actually solvable. A lot of people are working on it, but we never have actually established that it’s possible to do. I have some theoretical results of mine, and from other disciplines, which show certain limitations to what can be done. It seems that intelligence, and how controllable something is, are inversely related. The more intelligent a system becomes, the less control we have over it.
Things like babies have very low intelligence, and we have almost complete control over them. As they grow up, as they become teenagers, they get smarter, but we lose more and more control. With super-intelligent systems, obviously, you have almost no control left.
Let’s back up now, and look at the here and now, and the implications. There’s a lot of debate about AI, and not even talking about an AGI, just all the stuff that’s wrapped up in it, about automation, and it’s going to replace humans, and you’re going to have an unemployable group of people, and social unrest. You know all of that. What are your thoughts on that? What do you see for the immediate future of humanity?
Right. We’re definitely going to have a lot of people lose their jobs. I’m giving a talk for a conference of accountants soon, and I have the bad news to share with them, that something like ninety-four percent of them will lose their jobs in the next twenty years. It’s the reality of it. Hopefully, the smart people will find much better jobs, other jobs.
But many, many people who don't have education, or maybe don't have the cognitive capacity, will no longer be competitive in this economy, and we'll have to look at things like unconditional basic income and unconditional basic assets to, kind of, prevent revolutions from happening.
AI is going to advance much faster than robots, which have all these physical constraints, and can’t just double over the course of eighteen months. Would you be of the mind that mental tasks, mental jobs, are more at risk than physical jobs, as a general group?
It’s more about how repetitive your job is. If you’re doing something the same, whether it’s physical or mental, it’s trivial to automate. If you’re always doing something somewhat novel, now that’s getting closer to AI completeness. Not quite, but in that direction, so it’s much harder.
In two hundred and fifty years, this country, the West, has had economic progress, and we've had technological revolutions which could, arguably, be on the same level as the artificial intelligence revolution. We had mechanization, the replacement of human power with animal power, the electrification of industry, the adoption of steam, and all of these appeared to be very disruptive technologies.
And yet, through all of that, unemployment, except for the Great Depression, never has bumped out of four to nine percent. You would assume, if technology was able to rapidly displace people, that it would be more erratic than that. You would have these massive transforming industries, and then you would have some period of high unemployment, and then that would settle back down.
So, the theory around that would be that, no, the minute we build a new tool, humans just grab that thing, and use it to increase their own productivity, and that’s why you never have anything outside of four to nine percent unemployment. What’s wrong with that logic, in your mind?
You are confusing tools and agents. AI is not a tool. AI is an independent agent, which can possibly use humans as tools, but not the other way around. So, the examples of saying we had previous singularities, whether it’s cultural or industrial, they are just wrong. You are comparing apples and potatoes. Nothing in common.
So, help me understand that a little better. Unquestionably, technology has come along, and, you know, I haven’t met a telephone switchboard operator in a long time, or a travel agent, or a stockbroker, or typewriter repairman. These were all jobs that were replaced by technology, and whatever word you put on the technology doesn’t really change that simple fact. Technology came out, and it upset the applecart in the employment world, and yet, unemployment never goes up. Help me understand why AI is different again, and forgive me if I’m slow here.
Sure. Let’s say you have a job, you nail roofs to houses, or something like that. So, we give you a new tool, and now you can have a nail gun. You’re using this tool, you become ten times more efficient, so nine of your buddies lose jobs. You’re using a tool. The nail gun will never decide to start a construction company, and go into business on its own, and fire you.
The technology we're developing now is fundamentally different. It's an agent. It's capable—and I'm talking about the future of AI, not the AI of today—of self-improvement. It's capable of cross-domain learning. It's as smart as, or smarter than, any human. So, it's capable of replacing you. You become a bottleneck in that hybrid system. You no longer hold the gun. You have nothing to contribute to the system.
So, it’s very easy to see that all jobs will be fully automated. The logic always was, the job which is automated is gone, but now we have this new job which we don’t know how to automate, so you can get a new, maybe better, job doing this advanced technology control. But if every job is automated, I mean, by definition, you have one hundred percent unemployment. There are still jobs, kind of prestige jobs, because it’s a human desire to get human-made art, or maybe handmade items, expensive and luxury items, but they are a tiny part of the market.
If AI can do better in any domain, humans are not competitive, so all of us are going to lose our jobs. Some sooner, some later, but I don’t see any job which cannot be automated, if you have human level intelligence, by definition.
So, your thesis is that, in the future, once the AI’s pass our abilities, even a relatively small amount, every new job that comes along, they’ll just learn quicker than we will and, therefore, it’s kind of like you never find any way to use it. You’re always just superfluous to the system.
Right. And the new jobs will not be initially designed for a human operator. They'll basically be streamlined for machines in the first place, so we won't have any competitive advantage. Right now, for example, our cars are designed for humans. If you want to add a self-driving component to one, you have to work with the wheel and brake pedals and all that to make the switch.
Whereas, if from the beginning, you’re designing it to work with machines; you have smart roads, smart signs, humans are not competitive at any point. There is never an entry point where a human has a better answer.
Let me do a sanity check at this point, if I could. So, humans have a brain that has a hundred billion neurons, and countless connections between it, and it’s something we don’t really understand very well. And it perhaps has emergent properties which give us a mind, that give us creativity, and so forth, but it’s just simple emergence.
We have this thing called consciousness. I know you say it’s not scientific, but if you believe that you’re conscious, then you have to grapple with the fact that whatever that is, is a requisite for you being intelligent.
So, we have a brain we don’t understand, an emergent mind we don’t understand, a phenomenon of “consciousness” which is the single fact we are most aware of in our own life, and all of that makes us this. Meanwhile, I have simple pieces of hardware that I’m mightily delighted when they work correctly.
What you’re saying is… It seems you have one core assumption, which is that in the end, the human brain is a machine, and we can make a copy of that machine, and it’s going to do everything a human machine can do, and even better. That, some might argue, is the non-scientific leap. You take something we don’t understand, that has emergent properties we don’t understand, that has consciousness, which we don’t understand, and you say, “oh yes, it’s one hundred percent certain we’re going to be able to exceed our own intelligence.”
Kevin Kelly calls that a Cargo Cult. It’s like this idea that, oh well, if we just build something just like it, it’s going to be smarter than us. It smacks to some of being completely unscientific. What would you say to that?
One, it’s already smarter than us, in pretty much all domains. Whatever you’re talking about, playing games, investing in the stock market… You take a single domain where we know what we’re doing, and it seems like machines are either already at a human level, or quickly surpassing it, so it’s not crazy to think that this trend will continue. It’s been going on for many years.
I don’t need to fully understand the system to do better than a system. I don’t know how to make a bird. I have no idea how the cells in a bird work. It seems to be very complex. But, I take airplanes to go to Europe, not birds.
Can you explain that sentence that you just said, “Domains where we know what we are doing”? Isn’t that kind of the whole point, is that there’s this big area of things where we don’t know what we’re doing, and where we don’t understand how humans have the capabilities? How are they able to solve non-algorithmic problems? How are humans able to do the kind of transferred learning we do, where we know one thing, in one domain, and we’re really good at applying it in others?
We don’t know how children learn, how a two-year-old gets to be a full AGI. So, granted, in the domains where we know what we are doing, all six of them… I mean look, let’s be real: just to beat humans at one game, chess, took a multi-billion-dollar company spending untold millions of dollars, all of the mental effort of many, many people, working for years. And then you finally—and it’s one tiny game—get a computer that can do better than a human.
And you say, “Oh, well. That’s it, then. We’re done. They can do anything, now.” That seems to extrapolate beyond what the data would suggest.
Right. I’m not saying it’s happening now. I’m not saying computers today are capable of those things. I’m saying there is not a reason for why it will not be true in the maybe-distant future. As I said, I don’t make predictions about the date. I’m just pointing out that if you can pick a specific domain of human activity, and you can explain what they do in that domain—it’s not some random psychedelic performance, but actually “this is what they do”—then you have to explain why a computer will never be able to do that.
Fair enough. Assuming all of that is going to happen, that gradually, one thing by one thing by one thing, computers will shoot ahead of us, and obsolete us, and I understand you’re not picking dates, but presumably, we can stack-rank the order of things to some very coarse degree… The most common question I get from people is, “Well, what should I study? What should my kids study, in order to be relevant, to have jobs in the future?”
You’re bound to get that question, and what would you say to it?
That goes directly to my paper on AI completeness. Basically, what is the last job to be automated? It’s the person doing AI research. Someone who is advancing machine learning. The moment machines can automate that, there are no other jobs left. But that’s the last job to go.
So, definitely study computer science, study machine learning, study artificial intelligence. Anything which helps you in those fields—mathematics, physics—will be good for you. Don’t major in areas, in domains, which we already know will be automated by the time you graduate. As part of my job I advise students, and I would never advise someone to become a cab driver.
It’s funny, Mark Cuban said, and he’s not necessarily in the field, but he has really interesting thoughts about it. And he said that if he were starting over, he would be a philosophy major, and not pursue a technical job, because the technical jobs are actually probably the easiest things for machines to do. That’s kind of in their own backyard. But the more abstract it is, in a sense, the longer it would take a computer to be able to do it. What would you say to that?
I agree. It’s an awesome job, and if you can get one of those hundred jobs in the world, I say go for it. But the market is pretty small and competitive, whereas for machine learning, it’s growing exponentially. It’s paying well, and you can actually get in.
You mentioned the consciousness paper you’re working on. When will that come out?
That’s a finished draft, and it’s just a survey paper of different methods people propose to detect or measure consciousness. It’s under review right now. We’re working on some revisions. But basically, we reviewed everything we could find in the last ten to fifteen years, and all of them measure some side effect of what people or animals do. They never actually try to measure consciousness itself.
There are some variants which deal with quantum physics, and collapse of wave functions, and the Copenhagen interpretation, and things like that; but even that is not well-defined. It's more of a philosophical kind of an argument. So, it seems like there is this concept, but nobody can tell me what it does, why it's useful, and how to detect it or measure it.
So, it seems to be somewhat unscientific. Saying that, “Okay, but you feel it in you,” is not an argument. I know people who say, “I hear the voice of Jesus speaking to me.” Should I take that as a scientific theory, and study it? Just because someone is experiencing it doesn’t make it a scientific concept.
Tantalize us a little bit with some of the other things you’re working on, or some of the exciting things that you might be publishing soon.
As I said, I’m looking at, kind of, limitations of what we can do in the AI safety field. One problem I’m looking at is this idea of verifiability. What can be verified scientifically, specifically in mathematical proofs and computer software? Trying to write very good software, with no bugs, is kind of a fundamental holy grail of computer science, computer security, cyber security. There is a lot of very good work on it, but it seems there are limitations on how good we can get. We can remove most bugs, but usually not all bugs.
If you have a system which makes a billion decisions a second, and there is a one in a billion chance that it’s getting something wrong, those mistakes quickly accumulate. Also, there is almost no successful work on how to do software verification for AI in novel domains, systems capable of learning. All of the verification work we know about is for kind of deterministic software, and specific domains.
We can do airplane autopilot software, things like that, and verify it very well, but not something with this ability to learn and self-improve. That's a very hard, and still very open, area of research.
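To make the arithmetic behind that concern concrete, here is a minimal back-of-the-envelope sketch in Python. The decision rate and error probability are the hypothetical figures from the conversation, not measurements of any real system.

```python
# Back-of-the-envelope: how quickly rare errors accumulate in a high-throughput system.
# The figures are the hypothetical ones mentioned above, not real measurements.

decisions_per_second = 1_000_000_000  # a system making a billion decisions per second
error_probability = 1e-9              # a one-in-a-billion chance of any single decision being wrong

expected_errors_per_second = decisions_per_second * error_probability
expected_errors_per_day = expected_errors_per_second * 60 * 60 * 24

print(f"Expected errors per second: {expected_errors_per_second:.1f}")  # ~1.0
print(f"Expected errors per day:    {expected_errors_per_day:,.0f}")    # ~86,400
```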
Two final questions, if I can. The first one is—I’m sure you think through all of these different kinds of scenarios; this could happen or that could happen—what would happen, in your view, if a single actor, be it a company or a government, or what have you; a single actor invented a super-intelligent system? What would you see the ripple effects of that being?
That’s basically what singularity is, right? We get to that point where machines are the ones inventing and discovering, and we can no longer keep up with what’s going on. So, making a prediction about that is, by definition, impossible.
The most important point I’d like to stress—if they just happen to do it somehow, by some miracle, without any knowledge or understanding of safety and control, just created a random very smart system, in that space of possible minds—there is almost a guarantee that it’s a very dangerous system, which will lead to horrible consequences for all of us.
You mentioned that the first AGI is priceless, right? It’s worth countless trillions of dollars.
Right. It’s basically free labor of every kind—physical, cognitive—it is a huge economic benefit, but if in the process of creating that benefit, it destroys humanity, I’m not sure money is that valuable to you in that scenario.
The final question: You have a lot of scenarios. It seems your job is to figure out, how do we get into this future without blowing ourselves up? Can you give me the optimistic scenario; the one possible way we can get through all of this? What would that look like to you? Let’s end on the optimistic note, if we can.
I’m not sure I have something very good to report. It seems like long-term, everything looks pretty bleak for us. Either we’re going to merge with machines, and eventually become a bottleneck which will be removed, or machines will simply take over, and we’ll become quite dependent on them deciding what to do with us.
It could be a reasonably okay existence, with machines treating us well, or it could be something much worse. But short of some external catastrophic change preventing development of this technology, I don’t see a very good scenario, where we are in charge of those god-like machines and getting to live in paradise. It just doesn’t seem very likely.
So, when you hear about, you know, some solar flare that just missed the Earth by six hours of orbit or something, are you sitting there thinking, “Ah! I wish it had hit us, and just fried all of these things. It would buy humanity another forty years to recover.” Is that the best scenario, that there’s a button you could push that would send a giant electromagnetic pulse and just destroy all electronics? Would you push the button?
I don’t advocate any catastrophic acts, natural or human-caused, but it seems like it would be a good idea if the people smart enough to develop this technology were also smart enough to understand the possible consequences, and acted accordingly.
Well, this has been fascinating, and I want to thank you for taking the time to be on the show.
Thank you so much for inviting me. I loved it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 14: A Conversation with Martin Ford

[voices_in_ai_byline]
In this episode Byron and Martin talk about the future of jobs, AGI, consciousness and more.
[podcast_player name=”Episode 14: A Conversation with Martin Ford” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-10-30-(00-40-18)-martin-ford.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-7.jpg”]
[voices_in_ai_link_back]
Byron Reese: Welcome to Voices in AI, I’m Byron Reese. Today we’re delighted to have as our guest Martin Ford. Martin Ford is a well-known futurist, and he has two incredibly notable books out. The most recent one is called Rise of the Robots: Technology and the Threat of a Jobless Future, and the other is The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.
I have read them both cover-to-cover, and Martin is second-to-none in coming up with original ideas and envisioning a kind of future. What is that future that you envision, Martin?
Martin Ford: Well, I do believe that artificial intelligence and robotics are going to have a dramatic impact on the job market. I’m one of those who believes that this time is different, relative to what we’ve seen in the past, and that, therefore, we’re probably going to have to find a way to adapt to that.
I do see a future where there certainly is potential for significant unemployment, and even if that doesn’t develop, at a minimum we’re probably going to have underemployment and a continuation of stagnant wages, maybe even declining wages, and probably soaring inequality. And all of those things are just going to put an enormous amount of stress both on society and on the economy, and I think that’s going to be one of the biggest issues we need to think about over the next few decades.
So, taking a big step back, you said, quote: “This time is different.” And that’s obviously a reference to the oft-cited argument we’ve heard since the beginning of the Industrial Revolution, that machines were going to advance too quickly, and people weren’t going to be able to find new skills.
And I think everybody agrees, up to now, it’s been fantastically false, but your argument that this time is different is based on what? What exactly is different?
Well, the key is that the machines, in a limited sense, are beginning to think. I mean, they’re taking on cognitive capabilities. So what that means is that technology is finally encroaching on that fundamental capability that so far has allowed us to really stay ahead of the march of progress, and remain relevant.
I mean, you can ask the question, “Why are there still so many jobs? Why don’t we have unemployment already?” And surely the answer to that is our ability to learn and to adapt. To find new things to do. And yet, we’re now at a point where machines… especially in the form of machine learning, are beginning to move into that space.
And it’s going to, I think, eventually get to what you might think of as a kind of a tipping point, or an inflection point, where technology begins to outcompete a lot of people, in terms of their basic capability to really contribute to the economy.
No one is saying that all the jobs are going to disappear, and that there’s literally going to be no one working. But, I think it’s reasonable to be concerned that a significant fraction of our workforce—in particular those people that are perhaps best-equipped to do things that are fundamentally routine and repetitive and predictable—those people are probably going to have a harder and harder time adapting to this, and finding a foothold in the economy.
But, specifically, why do you think that? Give me a case-in-point. Because we’ve seen enormous, disruptive technologies on par with AI, right? Like, the harnessing of artificial power has to be up there with artificial intelligence. We’ve seen entire categories of jobs vanish. We’ve seen technology replace any number of people already.
And yet, unemployment, with the exception of the Great Depression, never gets outside of four to nine percent in this country. What holds it in that stasis, and why? I still kind of want more meat on that question of why this time is different, because everything hinges on it.
Well, I think that historically, we’ve seen primarily technology displacing muscle power. That’s been the case up until recently. Now, you talk about harnessing power… Obviously that did displace a lot of people doing manual labor, but people were able to move into more cognitively-oriented tasks.
Even if it was a manual job, it was one that required more brain power. But now, machines are encroaching on that as well. Clearly, we see many examples of that. There are algorithms that can do a lot of the things that journalists do, in terms of generating news stories. There are algorithms beginning to take on tasks done by lawyers, and radiologists, and so forth.
The most dramatic example perhaps I’ve seen is what DeepMind did with its AlphaGo system, where it was able to build a system that taught itself to play the ancient game of Go, and eventually became superhuman at that, and was able to beat the best players in the world.
And to me, I would’ve looked at that and I would’ve said, “If there’s any task out there that is uniquely human, and ought to be protected from automation, playing the game of Go—given the sophistication of the game—really, should probably be on that list.” But it’s fallen to the machines already.
So, I do think that when you really look at this focus on cognitive capability, on the fact that the machines are beginning to move into that space which so far has protected people… that, as we look forward—again, I’m not talking about next year, or three years from now even, but I’m thinking in terms of decades, ten years from now, twenty years from now—what’s it going to look like as these technologies continue to accelerate?
It does seem to me that there’s very likely to be a disruption.
So, if you’d been alive in the Industrial Revolution, and somebody said, “Oh, the farm jobs, they’re vanishing because of technology. There are going to be fewer people employed in the farm industry in the future.” And then, wouldn’t somebody have asked the question, “Well, what are all those people going to do? Like, all they really know how to do is plant seeds.”
All the things they ended up doing were things that by-and-large didn’t exist at the time. So isn’t it the case that whatever the machines can do, humans figure out ways to use those skills to make jobs that are higher in productivity than the ones that they’re replacing?
Yeah, I think what you’re saying is absolutely correct. The question though, is… I’m not questioning that some of those jobs are going to exist. The question is, are there going to be enough of those jobs, and will those jobs be accessible to average people in our population?
Now, the example you are giving with agriculture is the classic one that everyone always cites, and here’s what I would say: Yes, you’re right. Those jobs did disappear, and maybe people didn’t anticipate what the new things were going to be. But it turned out that there was the whole rest of the economy out there to absorb those workers.
Agricultural machinery, tractors and combines and all the rest of it, was a specific mechanical technology that had a dramatic impact on one sector of the economy. And then those workers eventually moved to other sectors, and as they moved from sector to sector… first they moved from agriculture to manufacturing, and that was a transition. It wasn’t instant, it took some time.
But basically, what they were doing was moving from routine work in the field to, fundamentally, routine work in factories. And that may have taken some training and some adaptation, but it was something that basically involved moving from one routine to another routine thing. And then, of course, there was another transition that came later, as manufacturing also automated or offshored, and now everyone works in the service sector.
But still, most people, at least a very large percentage of people, are still doing things that are fundamentally routine and repetitive. A hundred years ago, you might’ve been doing routine work in the field, in the 1950s maybe you were doing routine work in a factory, now you’re scanning barcodes at Wal-Mart, or you’re stocking the shelves at Wal-Mart, or you’re doing some other relatively routine thing.
The point I’m making is that in the future, technology is going to basically consume all of that routine, repetitive, predictable work… And that there still will be things left, yes, but there will be more creative work, or it’ll be work that involves, perhaps, deep interaction with other people and so forth, that really are going to require a different skill set.
So it’s not the same kind of transition that we’ve seen in the past. It’s really more of, I think, a dramatic transition, where people, if they want to remain relevant, are going to have to really have an entirely different set of capabilities.
So, what I’m saying is that a significant fraction of our workforce is going to have a really hard time adapting to that. Even if the jobs are there, if there are sufficient jobs out there, they may not be a good match for a lot of people who are doing routine things right now.
Have you tried to put any sort of, even in your own head, any kind of model around this, like how much unemployment, or at what rate you think the economy will shed jobs, or what sort of timing, or anything like that?
I make guesses at it. Of course, there are some relatively high-profile studies that have been done, and I personally believe that you should take that with a grain of salt. The most famous one was the one done back in 2013, by a couple of guys at Oxford.
Which is arguably the most misquoted study on the subject.
Exactly, because what they said was that roughly forty-seven percent—which is a remarkably precise number, obviously—roughly half the jobs in the United States are going to be susceptible, could be automated, within the next couple of decades.
I thought what it says is that forty-seven percent of the things that people do in their jobs is able to be automated.
Yeah, this particular study, they did look at actual jobs. But the key point is that they said roughly half of those jobs could be automated, they didn’t say they will be automated. And when the press picked that up, it in some cases became “half the jobs are definitely going to go away.” There was another later study, which you may be referring to, [that] was done by McKinsey, and that one did look at tasks, not at jobs.
And they came up with approximately the same number. They came up with the idea that about half of the tasks within jobs would be susceptible to automation, or in some cases may already be able to be automated in theory… but that was looking at the task level. Now again, the press kind of looked at that and they took a very optimistic take on it. They said, “Well, your whole job then can’t be automated, only half of your job can be automated. So your employer’s going to leave you there to do higher-level stuff.” And in some cases, that may happen.
But the other alternative, of course, is that if you’ve got two people doing two jobs, and half of each of those can be automated, then we could well see a consolidation there, and maybe that just becomes one job, right? So, different studies have looked at it in different ways. Again, I would take all of these studies with some skepticism, because I don’t think anyone can really make predictions this precise.
But the main takeaway from it, I think, is that the amount of work that is going to be susceptible to automation could be very significant. And I would say, to my mind, it doesn’t make much difference whether it’s twenty percent or fifty percent. Those are both staggering numbers. They would both have a dramatic impact on society and on the economy. So regardless of what the exact figure is, it’s something that we need to think about.
In terms of timing, I tend to think in terms of between ten and twenty years as being the timeframe where this becomes kind of unambiguous, where we’ve clearly reached the point where we’re not going to have this debate anymore—where everyone agrees that this is an issue.
I tend to think ten to twenty years, but I certainly know people that are involved, for example, in machine learning, that are much more aggressive than that; and they say it could be five years. So that is something of a guess, but I do think that there are good reasons to be concerned that the disruption is coming.
The other thing I would say is that, even if I turn out to be wrong about that, and it doesn’t happen within ten to twenty years, it probably is going to happen within fifty years. It seems inevitable to me at some point.
So, you talk about not having the debate anymore. And I think one of the most intriguing aspects of quote, ‘the debate’, is that when you talk to self-identified futurists, or when you talk to economists on the effect technology is going to have on jobs, they’re almost always remarkably split.
So you’ve got this camp of fifty percent-ish that says, “Oh, come on, this is ridiculous. There is no finite number of jobs. Anytime a person can pick up something and add value to it, they’ve just created a job. We want to get people out of tasks that machines can do, because they’re capable of doing more things,” and so forth.
So you get that whole camp, and then you have the side which, it sounds like, you’re more on, which is, “No, there’s a point at which the machines are able to improve faster than people are able to retrain,” a kind of escape velocity, with all the repercussions that follow. So all of that is a buildup to the question… these are two very different views of the future held by people who think a lot about this.
What assumptions do the two camps have, underneath their beliefs, that are making them so different, in your mind?
Right, I do think you’re right. It’s just an extraordinary range of opinion. I would say it’s even broader than that. You’re talking about the issue of whether or not jobs will be automated, but, on the same spectrum, I’m sure you can find famous economists, maybe economists with Nobel Prizes that would tell you, “This is all a kind of a silly issue. It’s repetition of the Luddite fears that we’ve had basically forever and nothing is different this time.”
And then at the other end of that spectrum you’ve got people not just talking about jobs; you’ve got Elon Musk and Stephen Hawking saying, “It’s not even an issue of machines taking our jobs. They’re going to just take over. They might be an existential threat: machines that actually become super-intelligent and decide they don’t want us around.”
So that’s just an incredible range of opinions on this issue, and I guess it points to the fact that it really is just extraordinarily unpredictable, in the sense that we really don’t know what’s going to happen with artificial intelligence.
Now, my view is that I do think that there is often a kind of a line you can draw. The people that tend to be more skeptical, maybe, are more geared toward being economists, and they do tend to put an enormous amount of weight on that historical record, and on the fact that, so far, this has not happened. And they give great weight to that.
The people that are more on my side of it, and see something different happening, tend to be people more on the technology side, that are involved deeply in machine learning and so forth, and really see how this technology is going.
I think that they maybe have a sense that something dramatic is really going to happen. That’s not a clear division, but it’s my sense that it kind of breaks down that way in many cases. But, for sure, I absolutely have a lot of respect for the people that disagree with me. This is a very meaningful, important debate, with a lot of different perspectives, and I think it’s going to be really, really fascinating to see how it plays out.
So, you touched on the existential threat of artificial intelligence. Let me just start with a couple of questions: Do you believe that an AGI, a general intelligence, is possible?
Yes, I don’t know of any reason that it’s not possible.
Fair enough.
That doesn’t mean I think it will happen, but I think it’s certainly possible.
And then, if it’s possible, everybody, you know… When you line up everybody’s prediction on when, they range from five years to five hundred years, which is also a telling thing. Where are you in that?
I’m not a true expert in this area, because I’m obviously not doing that research. But based on the people I’ve talked to that are in the field, I would put it further out than maybe most people. I think of it as being probably fifty years out… would be a guess, at least, and quite possibly more than that.
I am open to the possibility that I could be wrong about that, and it could be sooner. But it’s hard for me to imagine it sooner than maybe twenty-five to thirty years. But again, this is just extraordinarily unpredictable. Maybe there’s some project going on right now that we don’t know about that is going to prove something much sooner. But my sense is that it’s pretty far out—measured in terms of decades.
And do you believe computers can become conscious?
I believe it’s possible. What I would say is that the human brain is a biological machine. That’s what I believe. And I see absolutely no reason why the experience of the human mind, as it exists within the brain, can’t be replicated in some other medium, whether it’s silicon or quantum computing or whatever.
I don’t see why consciousness is something that is restricted, in principle, to a biological brain.
So I assume, then, it’s fair to say that you hope you’re wrong?
Well, I don’t know about that. I definitely am concerned about the more dystopian outcomes. I don’t dismiss those concerns, I think they’re real. I’m kind of agnostic on that; I don’t see that it’s definitely the case that we’re going to have a bad outcome if we do have conscious, super-intelligent machines. But it’s a risk.
But I also see it as something that’s inevitable. I don’t think we can stop it. So probably the best strategy is to begin thinking about that. And what I would say is that the issue that I’m focused on, which is what’s going to happen to the job market, is much more immediate. That’s something that is happening within the next ten to twenty years.
This other issue of super-intelligence and conscious machines is another important issue that’s, I think, a bit further out, but it’s also a real challenge that we should be thinking about it. And for that reason, I think that it’s great that people like Elon Musk are making investments there, in think tanks and so forth, and they’re beginning to focus on that.
I think it would be pretty hard to justify a big government public expenditure on thinking about this issue at this point in time, so it’s great that some people are focused on that.
And, so, I’m sure you get this question that I get all the time, which is, “I have young children. What should they study today to make sure that they have a relevant, useful job in the future?” You get that question?
Oh, I get that question. Yeah, it’s probably the most common question I get.
Yeah, me too. What do you say?
I’d bet that I say something very similar to what you say, because I think the answer is almost a cliché. It’s that, first and foremost, avoid studying to prepare yourself for a job that is on some level routine, repetitive, or predictable. Instead, you want to be, for example, doing something creative, where you’re building something genuinely new.
Or, you want to be doing something that really involves deep interaction with other people, that has that human element to it. For example, in the business world that might be building very sophisticated relationships with clients. A great job that I think is going to be relatively safe for the foreseeable future is nursing, because it has that human element to it, where you’re building relationships with people, and then there’s also a tremendous amount of dexterity, mobility, where you’re running around, doing lots of things.
That’s the other aspect of it, is that a lot of jobs that require that kind of dexterity, mobility, flexibility, are going to be hard to automate in the foreseeable future—things like electricians and plumbers and so forth are going to be relatively safe, I think. But of course, those aren’t necessarily jobs that people going to universities want to take.
So, prepare for something that incorporates those aspects. Creativity and human element, and maybe something beyond sitting in front of a computer, right? Because that in itself is going to be fairly susceptible to this.
So, let’s do a scenario here. Let’s say you’re right, and in fifteen years’ time—to take kind of your midpoint—we have enough job loss that is, say, commensurate with the Great Depression. So, that would be twenty-two percent. And it happens quickly… twenty-two percent of people are unemployed with few prospects. Tell me what you think happens in that world. Are there riots? What does the government do? Is there basic income? Like, what will happen?
Well, that’s going to be our choice. But the negative, let’s talk about the dystopian scenario first. Yes, I think there would absolutely be social unrest. You’re talking about people that in their lifetimes have experienced the middle-class lifestyle that are suddenly… I mean, everything just kind of disappears, right?
So, that’s certainly on the social side, there’s going to be enormous stress. And I would argue that we’re seeing the leading edge of that already. You ask yourself, why is Donald Trump in the Oval Office? Well, it’s because in part, at least, these blue-collar people, perhaps focused especially in the industrial Midwest, have this strong sense that they’ve been left behind.
And they may point to globalization or immigration as the reason for that. But in fact, technology has probably been the most important force in causing those people to no longer have access to the good, solid jobs that they once had. So, we see that already, [and] that could get orders of magnitude worse. So that’s on a social side and a political side.
Now, the other thing that’s happening is economic. We have a market economy, and that means that the whole economy relies on consumers that have got the purchasing power to go out and buy the stuff we’re producing, right?
Businesses need customers in order to thrive. This is true of virtually every business of any size, you need customers. In fact, if you really look at the industries that drive our economy, they’re almost invariably mass-market industries, whether it’s cars, or smartphones, or financial services. These are all industries that rely on tens, hundreds of millions of viable customers out there.
So, if people start losing their jobs and also their confidence… if they start worrying about the fact that they’re going to lose their jobs in the future, then they will start spending less, and that means we’re going to have an economic problem, right? We’re going to have potentially a deflationary scenario, where there’s simply not enough demand out there to drive the economy.
There’s also the potential for a financial crisis, obviously. Think back to 2008, what happened? How did that start? It started with the subprime crisis, where a lot of people did not have sufficient income to pay their mortgages.
So, obviously you can imagine a scenario in the future where lots of people can’t pay their mortgages, or their student loans, or their credit cards or whatever, and that has real implications for the financial sector. So no one should think that this is just about, “Well, it’s going to be some people that are less educated than I am, and they’re unlucky, too bad, but I’m going to be okay.”
No, I don’t think so. This is something that drags everyone into a major, major problem, socially, politically, and economically.
The Depression, though, wasn’t notable for social unrest like that. There weren’t really riots.
There may not have been riots, but there was a genuine—in terms of the politics—there was a genuine fear out there that both democracy and capitalism were threatened. One of the most famous quotes comes from Joe Kennedy, who was the patriarch, the first Kennedy who made his money on Wall Street.
And he famously said, during that time, that he would gladly give up half of everything that he had if he could be certain that he’d get to keep the other half. Because there was genuine fear that there was going to be a revolution. Maybe a Communist revolution, something on that order, in the United States. So, it would be wrong to say that there was not this revolutionary fear out there.
Right. So, you said let’s start with the dystopian outcome…
Right, right… so, that’s the bad outcome. Now, if we do something about this, I think we can have a much more optimistic outcome. And the way to do that is going to be finding a way to decouple incomes from traditional forms of work. In other words, we’re going to have to find a way to make sure that people that aren’t working, and can’t find a regular job, have nonetheless got an income.
And there are two reasons to do that. The first reason is, obviously, that people have got to survive economically, and that addresses the social upheaval issue, to some extent at least. And the second issue is that people have got to have money to spend, if they’re going to be able to drive the economy. So, I personally think that some kind of a guaranteed minimum income, or a universal basic income, is probably going to be the way to go there.
Now there are lots of criticisms that people will say, “That’s paying people to be alive.” People will point out that if you just give money to people, that’s not going to solve the problem. Because people aren’t going to have any dignity, they’re not going to have any sense of fulfillment, or anything to occupy their time. They’re just going to take drugs, or be in a virtual reality environment.
And those are all legitimate concerns. Partly because of those concerns, my view is that a basic income is not just a plug-and-play panacea: okay, a basic income; that’s it. I think it’s a starting point. I think it’s the foundation that we can build on. And one thing that I’ve talked a lot about in my writing is the idea that we could build explicit incentives into a basic income.
Just to give an example, imagine that you are a struggling high school student. So, you’re in some difficult environment in high school, you’re really at risk of dropping out of school. Now, suppose you know that no matter what, you’re going to get the same basic income as everyone else. So, to me, that creates a very powerful perverse incentive for you to just drop out of school. To me that seems silly. We shouldn’t do that.
So, why not instead structure things a bit differently? Let’s say if you graduate from high school, then you’ll get a somewhat higher basic income than someone that just drops out. And we could take that idea of incentives and maybe extend it to other areas. Maybe if you go and work in the community, do things to help others, you’ll get a little bit higher basic income.
Or if you do things that are positive for the environment. You could extend it in many ways to incorporate incentives. And as you do that, then you take at least a few steps towards also solving that problem of, where do we find meaning and fulfillment and dignity in this world where maybe there just is less need for traditional work?
But that definitely is a problem that we need to solve, so I think we need to think creatively about that. How can we take a basic income and build it into something that is going to help us really solve some of these problems? And at the same time, as we do that, maybe we also take steps toward making a basic income more politically and socially acceptable and feasible. Because, obviously, right now it’s not politically feasible.
So, I think it’s really important to think in those terms: What can we really do to expand on this idea [of basic income]? But, if you figure that out, then you solve this problem, right? People then have an income, and then they have money to spend, and they can pay their debts and all the rest of it, and I think then it becomes much more positive.
If you think of the economy… think of not the real-world economy, but imagine it’s a simulation. And you’re doing this simulation of the whole market economy, and suddenly you tweak the simulation so that jobs begin to disappear. What could you do? Well, you could make a small fix to it so that you replace jobs with some other mechanism in this simulation, and then you could just keep the whole thing going.
You could continue to have thriving capitalism, a thriving market economy. I think when you think of it in those terms, as kind of a programmer tweaking a simulation, it’s not so hard to make it work. Obviously in the real world, given politics and everything, it’s going to be a lot harder, but my view is that it is a solvable problem.
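One way to read that thought experiment is as a toy simulation of the circular flow of income. The sketch below is purely illustrative: the starting numbers and the `wage_share` and `basic_income` parameters are hypothetical assumptions, not a model Ford proposes.

```python
# Toy simulation of the circular-flow idea: consumers spend, businesses earn that spending,
# and some share of it returns to households as income. All parameters are illustrative.

def simulate(years, wage_share, basic_income):
    """Yearly consumer spending, given the share of revenue that returns as wages."""
    income = 100.0  # starting household income (arbitrary units)
    history = []
    for _ in range(years):
        spending = income                                       # households spend what they have
        business_revenue = spending                             # businesses earn only what is spent
        income = business_revenue * wage_share + basic_income   # income flowing back to households
        history.append(round(spending, 1))
    return history

# Jobs intact: all revenue returns to households as wages, so demand is sustained.
print(simulate(years=5, wage_share=1.0, basic_income=0.0))   # [100.0, 100.0, 100.0, 100.0, 100.0]

# Automation removes half the wage channel with no replacement: demand spirals down.
print(simulate(years=5, wage_share=0.5, basic_income=0.0))   # [100.0, 50.0, 25.0, 12.5, 6.2]

# Same automation, but a transfer recycles income back to households: demand stabilizes.
print(simulate(years=5, wage_share=0.5, basic_income=50.0))  # [100.0, 100.0, 100.0, 100.0, 100.0]
```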
Mark Cuban said the first trillionaires, or the first trillion-dollar companies, will be AI companies, because AI has the capability of creating that kind of immeasurable wealth. Would you agree with that?
Yeah, as long as we solve this problem. Again, it doesn’t matter whether you’re doing AI or any other business… that money is coming from somewhere, okay? When you talk about the way a company is valued, whether it’s a million-dollar company or a trillion-dollar company, the value essentially comes from cash flows coming in in the future. That’s how you value a company.
Where are those cash flows coming [from]? Ultimately, they’re coming from consumers. They’re coming from people spending money, and people have to have money to spend. So, think of the economy as being kind of a virtuous cycle, where you cycle money from consumers to businesses and then back to consumers, and that it’s kind of a self-fulfilling, expanding, growing cycle over time.
The problem is that if the jobs go away, then that cycle is threatened, because that’s the mechanism that’s getting income back from producers to consumers so that the whole thing continues to be sustainable. So, we solve that problem and yeah, of course you’re going to have trillion-dollar companies.
And so, that’s the scenario if everything you say comes to pass. Take the opposite for just a minute. Say that fifteen years go by, unemployment is five-and-a-quarter percent, there’s been some normal churn in jobs, and there’s no kind of phase shift, or paradigm shift, or anything like that. What would that mean?
Like, what does that mean long term for humanity? Do we just kind of go on in the way we are ad infinitum, or are there other things, other factors that could really upset the apple cart?
Well, again my argument would be that if that happens, and fifteen years from now things basically look the way they do now, then it means that people like me got the timing wrong. This isn’t really going to happen within fifteen years, maybe it’s going to be fifty years or a hundred years. But I still think it’s kind of inevitable.
The other thing, though, is be careful when you say fifteen years from now the unemployment rate is going to be five percent. One thing to be really careful about is that you’re measuring everything carefully, because, of course, the unemployment rate right now doesn’t catch a lot of people that are dropping out of the workforce.
In fact, it doesn’t capture anyone that drops out of the workforce, and we do have a declining labor force participation rate. So it’s possible for a lot of people to be left behind, and be disenfranchised, and still not be captured in that headline unemployment rate.
But a declining labor participation rate isn’t necessarily people who can’t find work, right? Like, if enough people just make a lot of money, and you’ve got the Baby Boomers retiring. Is it your impression that the numbers we’re seeing in labor participation are indicative of people getting discouraged and dropping out of the job market?
Yeah, to some extent. There are a number of things going on there. Obviously, as you say, part of it is the demographic shift, and there are two things happening there. One is that people are retiring; certainly, that’s part of it. The other thing is that people are staying in school longer, so younger people are less in the workforce than they might’ve been decades ago, because they’ve got to stay in school longer in order to have access to a job.
So, that’s certainly having an impact. But that doesn’t explain it totally, by any means. In fact, if you look at the labor force participation rate for what we call prime-age workers—and that would be people that are, maybe, between thirty and fifty… in other words, too old to be in school generally, and too young to retire—that’s also declining, especially for men. So, yes, there is definitely an impact from people leaving the workforce for whatever reason, very often [the reason is] being discouraged.
We’ve also seen a spike in applications for the social security disability program, which is what you’re supposed to get if you become disabled, and there really is no evidence that people are getting injured on the job at some extraordinarily new rate. So, I think many people think that people are using that as kind of a last resort basic income program.
They’re, in many cases, saying maybe they have a back injury that’s hard to verify and they’re getting onto that because they really just don’t have any other alternative. So, there definitely is something going on there, with that falling labor force participation rate.
And final question: What gives you the most hope, that whatever trials await us in the future—or do you have hope—that we’re going to get through them and go on to bigger and better things as a species?
Well, certainly the fact that we’ve always got through things in the past is some reason to be confident. We’ve faced enormous challenges of all kinds, including global wars, and plagues, and financial crises in the past and we’ve made it through. I think we can make it through this time. It doesn’t mean it will be easy. It rarely is easy.
There aren’t many cases in history, that we can point to, where we’ve just smoothly said, “Hey, look, there’s this problem coming at us. Let’s figure out what to do and adapt to it.” That rarely is the way it works. Generally, the way it works is that you get into a crisis, and eventually you end up solving the problem. And I suspect that that’s the way it will go this time. But, yeah, specifically, there are positive things that I see.
There are lots of important experiments, for example, with basic income, going on around the world. Even here in Silicon Valley, Y Combinator is doing something with an experiment with basic income that you may have heard about. So, I think that’s tremendously positive. That’s what we should be doing right now.
We should be gathering information about these solutions, and how exactly they’re going to work, so that we have the data that we’re going to need to maybe craft a much broader-based program at some point in the future. That’s all positive, you know? People are beginning to think seriously about these issues, and so I think that there is reason to be optimistic.
Okay, and real last question: If people want more of your thinking, do you have a website you suggest to go to?
The best place to go is my Twitter feed, which is @MFordFuture, and I also have a blog and a website which is the same, MFordFuture.com.
And are you working on anything new?
I am not working right now on a new book, but I go around doing a lot of speaking engagements on this. I’m on the board of directors of a startup company, which is actually doing something quite different. It’s actually going to do atmospheric water generation. In other words, generating water directly from air.
That’s a company called Genesis Systems, and I’m really excited about that, because it’s a chance for me to get involved in something really tangible. I think you’ve heard the quote from Peter Thiel, that we were promised flying cars and we got 140 characters. And I actually believe strongly in that.
I think there are too many people in Silicon Valley working on social media, and getting people to click on ads. So I’m really excited to get involved in a company that’s doing something really tangible, that’s going to maybe be transformative… If we can figure out how to directly generate water in very arid regions of the earth—in the Middle East, in North Africa, and so forth—that could be transformative.
Wow! I think by one estimate, if everybody had access to clean water, half of the hospital beds in the whole world would be emptied.
Yeah, it’s just an important problem, just on a human level and also in terms of security, in terms of the geopolitics of these regions. So I’m really excited to be involved with it.
Alright, well thank you so much for your time, and you have a good day.
Okay. Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 11: A Conversation with Gregory Piatetsky-Shapiro

[voices_in_ai_byline]
In this episode, Byron and Gregory talk about consciousness, jobs, data science, transfer learning.
[podcast_player name=”Episode 11: A Conversation with Gregory Piatetsky-Shapiro” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-10-16-(00-43-05)-gregory-piatestsky.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/10/voices-headshot-card-3.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is “Voices in AI”, brought to you by Gigaom. I’m Byron Reese. Today our guest is Gregory Piatetsky. He’s a leading voice in Business Analytics, Data Mining, and Data Science. Twenty years ago, he founded and continues to operate a site called KDnuggets about knowledge discovery. It’s dedicated to the various topics he’s interested in. Many people think it’s a must-read resource. It has over 400,000 regular monthly readers. He holds an MS and a PhD in computer science from NYU. 
Welcome to the show.
Gregory Piatetsky: Thank you, Byron. Glad to be with you.
I always like to start off with definitions, because in a way we’re in such a nascent field in the grand scheme of things that people don’t necessarily start off agreeing on what terms mean. How do you define artificial intelligence?
Artificial intelligence is really machines doing things that people think require intelligence, and by that definition the goalposts of artificial intelligence are constantly moving. It was considered very intelligent to play checkers back in the 1950s, and then there was a program that could do it. The next boundary was playing chess, and then computers mastered it. Then people thought playing Go would be incredibly difficult, or driving cars. Generally, artificial intelligence is the field that tries to develop intelligent machines. And what is intelligence? I’m sure we will discuss that, but it’s usually in the eye of the beholder.
Well, you’re right. I think a lot of the problem with the term artificial intelligence is that there is no consensus definition of what intelligence is. So, if we’re constantly moving the goalposts, it sounds like you’re saying we don’t have systems today that are intelligent?
No, no. On the contrary, we have lots of systems today that would have been considered amazingly intelligent 20 or even 10 years ago. And the progress is such that I think it’s very likely that those systems will exceed our intelligence in many areas, you know maybe not everywhere, but in many narrow, defined areas they’ve already exceeded our intelligence. We have many systems that are somewhat useful. We don’t have any systems that are fully intelligent, possessing what is a new term now, AGI, Artificial General Intelligence. Those systems remain still ahead in the future.
Well, let’s talk about that. Let’s talk about an AGI. We have a set of techniques that we use to build the weak or narrow AI we use today. Do you think that achieving an AGI is just continuing to apply to evolve those faster chips, better algorithms, bigger datasets, and all of that? Or do you think that an AGI really is qualitatively a different thing?
I think AGI is qualitatively a different thing, but I think that it is not only achievable but also inevitable. Humans also can be considered as biological machines, so unless there is something magical that we possess that we cannot transfer to machines, I think it’s quite possible that the smartest people can develop some of the smartest algorithms, and machines can eventually achieve AGI. And I’m sure it will require additional breakthroughs. Just like deep learning was a major breakthrough that contributed to significant advances in state of the art, I think we will see several such great breakthroughs before AGI is achieved.
So if you read the press about it and you look at people’s predictions on when we might get an AGI, they range, in my experience, from 5 to 500 years, which is a pretty telling fact alone that it’s that kind of range. Do you care to even throw in a dart in that general area? Like do you think you’ll live to see it or not?
Well, my specialty as a data scientist is making predictions, and I know when we don’t have enough information. I think nobody really knows, and I have no basis on which to make a prediction. I hope it’s not 5 years, and I think our experience as a society shows that we have no idea how to make predictions for 100 years from now. It’s very instructive to find so-called futurology articles, things that were written 50 years ago about what would happen in 50 years, and see how naive those people were. I don’t think we will be very successful in predicting 50 years out. I have no idea how long it will take, but I think it will be more than 5 years.
So some people think that what makes us intelligent, or an indispensable part of our intelligence, is our consciousness. Do you think a machine would need to achieve consciousness in order to be an AGI?
We don’t know what consciousness is. I think machine intelligence will be very different from human intelligence, just like airplane flight is very different from a bird’s. Both airplanes and birds fly, and the flight is governed by the same laws of aerodynamics and physics, but they use very different principles. Airplane flight does not copy bird flight; it is inspired by it. I think in the same way, we’re likely to see that machine intelligence doesn’t copy human intelligence, or human consciousness.
“What exactly is consciousness?” is more a question for philosophers, but probably it involves some form of self-awareness. And we can certainly see that machines and robots can develop self-awareness. Self-driving cars already need to do some of that. They need to know exactly where they’re located. They need to predict what will happen: if they do something, what will other cars do? They have a form of what is called a model of the mind, mirror intelligence.
One interesting anecdote on this topic is that when Google originally started its self-driving car experiments, the car couldn’t cross an intersection because it was always yielding to other cars. It was following the rules as they were written, but not the rules as people actually execute them. And so it was stuck at that intersection, supposedly for an hour or so. Then the engineers adjusted the algorithm so it would better predict what people will do and what it will do, and it’s now able to negotiate intersections. It has some form of self-awareness. I think other robots and machine intelligences will develop some form of self-awareness, and whether it will be called consciousness or not will be for our descendants to discuss.
Well, I think that there is an agreed-upon definition of consciousness. I mean, you’re right that nobody knows how it comes about, but it’s qualia, it’s experiencing things. It’s, if you’ve ever had that sensation when you’re driving and you kind of space out, and all of a sudden, two miles later, you snap to and think, “Oh my gosh, I’ve got no recollection of how I got here.” That time you were driving, that’s intelligence without consciousness. And then when you snap to, and all of a sudden you’re aware, you’re experiencing the world again. Do you think a computer can actually experience something? Because wouldn’t it need to experience the world in order to really be intelligent?
Well, computers, if they have sensors, already experience the world. The self-driving car is experiencing the world through its radar and LIDAR and various other sensors, so they do experience, and they do have sensors. I think it’s not useful to debate computer consciousness, because it’s like the question of how many angels can fit on the head of a pin. I think what we can discuss is what they can or cannot do. How they experience it is more a question for philosophers.
So a lot of people are worried – you know all of this, of course – there are two big buckets of worry about artificial intelligence. The first one is that it’s going to take human jobs and we’re going to have mass unemployment, and any number of dystopian movies play that scenario out. And then other people say, no, every technology that’s come along, even disruptive ones like electricity, and mechanical power replacing animal power, and all of that, was merely turned around and used by humans to increase their productivity, and that’s how you get increases in the standard of living. On that question, where do you come down?
I’m much more worried than I am optimistic. I’m optimistic that technology will progress. What I’m concerned about is that it will lead to increasing inequality and an increasingly unequal distribution of wealth and benefits. In Massachusetts, there used to be many toll collectors. Toll collector is not a very sophisticated job, but recently those jobs were eliminated, and the machines that eliminated them didn’t require full intelligence, basically just an RFID sensor. So we already see many jobs being eliminated by a simpler form of automation. And what society will do about it is not clear. I think the previous disruptions had much longer timespans, but now, when people like these toll collectors are laid off, they don’t have enough time to retrain themselves to become, let’s say, computer programmers or doctors. What to do about it, I’m not sure. But I like a proposal by Andrew Ng, of Stanford and Coursera. He proposed a modified version of basic income: people who are unemployed and cannot find jobs get some form of basic income, not just to sit around, but with the requirement that they learn new skills, something new and useful. So maybe that would be a possible solution.
So do you really think that, when you look back across time… The United States, I can only speak to that, went from generating 5% of its energy with steam to 80% in just 22 years. Electrification happened electrifyingly fast. The minute we had engines, there was wholesale replacement of the animals; they were just so much more efficient. Isn’t it actually the case that when these disruptive technologies come along, they are so empowering that they are actually adopted incredibly quickly? And again, just talking about the US, unemployment for 230 years has been between 5% and 9%, other than the Great Depression; in all that other time, it never bumped. When these highly disruptive technologies came along, they didn’t cause unemployment generally to go up, and they happened quickly, and they eliminated an enormous number of positions. Why do you think this one is different?
The main reason why I think it is different is because it is qualitatively different. Previously, the machines that came along, like those driven by steam and electricity, eliminated some of the manual work, and people could climb up the pyramid of skills to do more sophisticated work. But nowadays, artificial general intelligence sort of captures this pyramid of skills, and it now competes with people on cognitive skills. And it can eventually climb to the top of the pyramid, so there will be nowhere left to climb to exceed it. And once you create one general intelligence, it’s very easy to copy it. So you would have a very large number of intelligent robots, let’s say, that will do a very large number of things, and they will compete with people to do other things. It’s just very hard to retrain, let’s say, a coal miner to become a producer of YouTube videos.
Well that isn’t really how it ever happens, is it? I mean, that’s kind of a rigged set-up, isn’t it? What matters is, can everybody do a job a little bit harder than they have? Because the maker of YouTube videos is a film student. And then somebody else goes to film school, and then the junior college professor decides to… I mean, everybody just goes up a little bit. You never take one group of people and train them to do an incredibly radically different thing, do you?
Well, I don’t know about that exactly, but to return to your analogy: you mentioned that for 200 years in the United States the pattern was such. But, you know, the United States is not the only country in the world, and 200 years is a very small part of our history. If we look at several thousand years, and at what happened in other parts of the world, we see very complex things. The unemployment rate in the Middle Ages was much higher than 5% or 10%.
Well, I think the important thing – and the reason why I used 200 years – is that that’s the period of industrialization and automation that we’ve seen. And so the argument is that artificial intelligence is going to automate jobs, so you really only need to look over the period when other things were automating jobs and ask, “What happens when you automate a lot of jobs?” I mean, by your analogy, wouldn’t the invention of the calculator have put mathematicians out of business? Or, like with ATMs: an ATM in theory replaces a bank teller, and yet we have more bank tellers today than we did when the ATM was introduced, because it allowed banks to open more branches and hire more tellers. I mean, is it really as simple as, “Well, you’ve built this tool, now there’s a machine doing a job a human did, and now you have an unemployed human”? Is that kind of the only force at work?
Of course it’s not simple; there are many forces at work. And there are forces that resist change, as we saw with the Luddites in the 19th century. And now there are people, for example in coal mining districts, who want to go back to coal mining. Of course, it’s not that simple. What I’m saying is we have only had a few examples of industrial revolutions, and as data scientists say, it’s very hard to generalize from a few examples. It’s true that past technologies have generated more work. It doesn’t follow that this new technology, which is different, will generate more work for all the people. It may very well be different. We cannot rely on three or four past examples to generalize for the future.
Fair enough. So let’s talk, if we can, about how you spend your days, which is in data science. What are some recent advances that you think have materially changed the job of a data scientist? Are there any? And are there more changes that you can see coming? How is that job evolving as technology changes?
Yes, well, data scientists now live in the golden age of the field. There are now powerful tools that make data science much easier, tools like Python and R. And Python and R both have very large ecosystems of tools, like scikit-learn in the case of Python, or whatever Hadley Wickham comes up with in the case of R. There are tools like Spark, and various things on top of that, that allow data scientists to access very large amounts of data. It’s much easier and much faster for data scientists to build models. The danger for data scientists, again, is automation, because those tools make the work easier and easier, and soon they will make a large part of it automated. In fact, there are already companies like DataRobot and others that allow business users who are not data scientists to just plug in their data, and DataRobot or its competitors generate the results. No data scientist needed. That is already happening in many areas. For example, ads on the internet are placed automatically, and there are algorithms that make millions of decisions per second and build lots of models. Again, no human involvement, because humans just cannot build millions of models a second. There are many areas where this automation is already happening. And recently I ran a poll on KDnuggets asking, when do you think most data science work will be automated? The median answer was around 2025. So although this is a golden age for data scientists, I think they should enjoy it, because who knows what will happen in the next 8 to 10 years.
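To give a sense of how little code these tools require, here is a minimal sketch using scikit-learn’s standard API on one of its bundled toy datasets; the dataset and model choice are illustrative only, not anything discussed in the interview.

```python
# Minimal sketch of how quickly a model can be built with today's tools.
# Uses scikit-learn's bundled breast-cancer dataset purely as an example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```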
So, Mark Cuban gave a talk earlier this year in which he said the first trillionaires will come from businesses that utilize AI. But he said something very interesting, which is that if he were coming up through university again, he would study philosophy – that’s the last thing that’s going to be automated. What would you suggest to a young person listening to this today? What do you think they should study in the cognitive area that is either blossoming or likely to go away?
I think what will be very much in demand is at the intersection of humanities and technology. If I were younger, I would still study machine learning and databases, which is actually what I studied for my PhD 30 years ago. I probably would study more mathematics; the deep learning algorithms that are making tremendous advances are very mathematically intensive. The other aspect, maybe the hardest thing to automate, is human intuition and empathy: understanding what other people need and want, and how best to connect with them. I don’t know how much that can be studied, but if philosophy or social studies or poetry is the way to it, then I would encourage young people to study it. I think we need a balanced approach, not just technology but humanities as well.
So, I’m intrigued that our DNA is – I’m going to be a bit off here, whatever I say – I think about 740 meg; it’s on that order. But when you look at how much of it we share with, let’s say, a banana, it’s 80-something percent, and how much we share with a chimp, it’s 99%. So somewhere in that 1%, that 7 or 8 meg of code that tells how to build you, is the secret to artificial general intelligence, presumably. Is it possible that the code to do an AGI is really quite modest and simple? Not simple – you know, there are two different camps in the AGI world. One is that humans are a hack of 100 or 200 or 300 different skills, and you put them all together and that’s us. The other one is – we had Pedro Domingos on the show, and he has a book called The Master Algorithm, which posits that there is an algorithm that can solve any problem, or any solvable problem, the way a human can. Where on that spectrum would you fall? And do you think there is a simple answer to an AGI?
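As a back-of-the-envelope check of the figures in that question (2 bits per base pair; the roughly 1% human-chimp difference is a commonly cited round number):

```python
# Back-of-the-envelope check: a human genome of ~3.1 billion base pairs at
# 2 bits per base, and the ~1% of it that differs from a chimp.
# Round, illustrative numbers only.
base_pairs = 3.1e9
genome_mb = base_pairs * 2 / 8 / 1e6   # 2 bits per base, 8 bits per byte
human_chimp_diff = 0.01                # ~1% difference, a rough figure

print(f"whole genome: ~{genome_mb:.0f} MB")                                # ~775 MB
print(f"human-chimp difference: ~{genome_mb * human_chimp_diff:.0f} MB")   # ~8 MB
```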
I don’t think there is a simple answer. Actually, I’m good friends with Pedro, and I moderated his webcast on his book last year. I think the master algorithm he looks for may exist, but it doesn’t exclude having lots of additional specialized skills. I think there is very good evidence that there is such a thing as general intelligence in humans. People, for example, may have different scores on the verbal and math sections of the SAT. I know that my verbal score would be much lower than my math score. But usually, if you’re above average on one, you’ll be above average on the other; and likewise, if you’re below average on one, you’ll be below average on the other. People seem to have some general skills, and in addition there are a lot of specialized skills. You know, you can be a great chess player but have no idea how to play music, or vice versa. I think there are some general algorithms, and there are lots of specialized algorithms that leverage the special structure of the domain. You can think of it this way: when people were developing chess-playing programs, they initially applied general algorithms, but then they found they could speed those programs up by building specialized hardware very specific to chess. Likewise, when people start on new skills, they approach them generally, then develop the specialized expertise that speeds up their work. I think it could be the same with intelligence. There may be some general algorithm, but it would have ways to develop lots of special skills that leverage the structure of particular tasks.
Broadly speaking, I guess data science relies on three things: it relies on hardware, faster and faster hardware; better and better data, more of it and better labeled; and then better and better algorithms. If you had to put those three things side by side, where are we most deficient? Like, if you could really amp one of those three things way up, which would it be?
That’s a very good question. With current algorithms, it seems that more data produces much better results than a smarter algorithm, especially if it is relevant data. For example, in image recognition there was a big quantitative jump when deep learning was trained on millions of images as opposed to thousands of images. But I think what we need for the next big advances is somewhat smarter algorithms. One big shortcoming of deep learning is, again, that it requires so much data. People seem to be able to learn from very few examples, and the algorithms that we have are not yet able to do that. In the algorithms’ defense, I have to say that when I say people can learn from very few examples, we assume those are adults who have already spent maybe 30 or 40 years training and interacting with the world. So maybe if algorithms could spend some years training and interacting with the world, they would acquire enough knowledge to generalize to other, similar examples. Yes, I think probably data, then algorithms, and then hardware. That would be my order.
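The “more data beats a cleverer algorithm” point can be made concrete with a learning curve. A minimal sketch, assuming scikit-learn and its small bundled digits dataset; the specific model is just an illustration:

```python
# Sketch: cross-validated accuracy as a function of training-set size,
# illustrating how added data often helps more than a cleverer model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=2000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training examples -> {score:.3f} accuracy")
```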
So, you’re alluding to transfer learning, which is something humans seem to be able to do. Like you said, you could show a person who’s never seen an Academy Award what that little statue looks like, and then you could show them photographs of it in the dark, on its side, underwater, and they could pick it out. And what you just said is very interesting, which is: well, yeah, we only had one photo of this thing, but we had a lifetime of learning how to recognize things underwater and in different lighting and all that. What do you think about transfer learning for computers? Do you think we’re going to be able to use the datasets that we have that are very mature, like the image ones, or handwriting recognition, or speech translation – are we going to be able to use those to solve completely unrelated problems? Is there some kind of meta-knowledge buried in those things we’re doing really well now that we can apply to things we don’t have good data on?
I think so, because the world itself is the best representation. Recently I read a paper that applied the negative transformation to ImageNet images, and it turns out that a deep learning system that was trained to recognize, I don’t remember exactly what it was, but let’s say cats, would not be able to recognize negatives of cats, because the negative transformation is not part of its repertoire. But that is very easy to remedy if you just add negative images to the training set. I think there is a large but probably finite number of such transformations that humans are familiar with, like negatives and rotations and other things. And it’s quite possible that by applying such transformations to very large existing databases, we could teach those machine learning systems to reach and exceed human levels. Because humans themselves are not perfect at recognition.
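A minimal sketch of the remedy described here – adding transformed copies of images (negatives, rotations, mirrors) to a training set – using Pillow; the file path is a placeholder, not data from the paper mentioned:

```python
# Sketch: adding transformed copies of an image (negative, rotation, mirror)
# to a training set, so a model sees variations it would otherwise never see.
# "cat.jpg" is a placeholder path, not data from the paper discussed above.
from PIL import Image, ImageOps

def augment(path):
    img = Image.open(path).convert("RGB")
    return {
        "original": img,
        "negative": ImageOps.invert(img),        # the negative transformation
        "rotated_90": img.rotate(90, expand=True),
        "mirrored": ImageOps.mirror(img),
    }

for name, variant in augment("cat.jpg").items():
    variant.save(f"augmented_{name}.jpg")
```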
Earlier in this conversation, we were taking human knowledge and how people do things and kind of applying that to computers. Do you think AI researchers learn much from brain science? Do they learn much from psychology? Or is that more just handy for telling stories and helping people understand things? Because, as with the airplanes and birds we were talking about at the very beginning, there really isn’t a lot of mapping between how humans do things and how machines do them.
Yes – by the way, the airplanes and birds analogy, I think, is due to Yann LeCun. And I think some AI researchers are inspired by how humans do things, and the prime example is Geoff Hinton, who is an amazing researcher, not only because of what he has achieved, but because he has an extremely good understanding of both computers and the human brain. In several talks of his that I’ve heard, and in some conversations afterwards, he suggested that he uses his knowledge of how the human brain works as an inspiration for coming up with new algorithms – again, not copying it, but letting it inspire the algorithms. So to answer your question: yes, I think the human brain is very relevant to understanding how intelligence could be achieved, and as Geoff Hinton says, it’s the only working example we have at the moment.
We were able to do chess in AI so easily – not so easily, obviously, people worked very hard on it – because there were so many well-kept records of games to serve as training data. We can do handwriting recognition well because we have a lot of handwriting and it has been transcribed. We do translation well because there is a lot of training data. What are some problems that would be solvable if we just had the data for them, but we don’t have it, nor do we have any good way of getting it? Like, what’s a solvable problem where our only real impediment is that we don’t have the data?
I think at the forefront of such problems is medical diagnosis, because there are many diseases where the data already exists; it’s just not yet collected in electronic form. There is a lot of genetic information that could be collected and correlated with both diseases and treatments, with what works. Again, it’s not yet collected, but Google and 23andMe and many other companies are working on that. Medical radiology recently witnessed the great success of a startup called Enlitic, which was able to identify tumors using deep learning at almost the same quality as human radiologists. So I think in medicine and healthcare we will see big advances, and in many other areas where there is a lot of data we can also see big advances. But the flip side of data, which we can touch on, is that people, at least in some parts of the political spectrum, are losing their connection to what is actually true and what is not. Last year’s election saw a tremendous number of fake news stories that seemed to have significant influence. So while on one hand we’re training machines to do a better and better job of recognizing what is true, many humans are losing their ability to recognize what is true and what is happening. Just witness the denial of climate change by many people in this country.
You mention text analysis on your LinkedIn profile – I just saw that that was something you evidently know a lot about. Is the problem you’re describing solvable? If you had to say the number one problem of the worldwide web is that you don’t know what to believe, you don’t know what’s true, and you don’t necessarily have a way of sorting results by truthiness – do you think that is a machine learning problem, or is it not? Is it going to require human moderation? Or is truth not a well-enough defined concept on which to train 50 billion web pages?
I think the technical part certainly can be solved, from a machine learning point of view. But the worldwide web does not exist in a vacuum; it is embedded in human society, and as such, it inherits all the strengths and problems of humans. If there are human actors who find it beneficial to bend the truth and use the worldwide web to convince other people of what they want to convince them of, they will find ways to game the algorithms. The algorithm by itself is not a panacea as long as there are humans, with all of our good and evil intentions, around it.
But do you think it’s really solvable? Because I remember a Dilbert comic strip I saw once where Dilbert’s on a sales call, and the person he’s talking to says, “Your salesman says your product cures cancer!” And Dilbert says, “That is true.” And the guy says, “Wait a minute! It’s true that it cures cancer, or it’s true that he said that?” And so it’s like that: the statement, “Your salesperson said your product cures cancer,” is a true statement. But that subtlety, that nuance, that it’s-true-but-it’s-not-true aspect of it – I just wonder. It doesn’t feel like chess, a very clear-cut win/lose kind of situation. And I wonder, even if everybody wanted the true results to rise to the top, could we actually do that?
Again, I think technically it is possible. Of course, nothing will work perfectly, but humans don’t make perfect decisions either. For example, Facebook already has an algorithm that can identify clickbait. One of the signals is relatively simple: just look at the people who see a particular headline and click on the link, and then how much time they spend there, or whether they quickly return and click back. With a headline like, “Nine amazing things you can do to cure X,” you go to that website, it’s something completely different, and you quickly return. Your behavior will be different than if you go to a website that matches the headline. And Facebook and Google and other sites can measure those signals and see which headlines are deceptive. The problem is that the ecosystem that has evolved seems to reward capturing people’s attention, and the headlines most likely to be shared are the ones that capture attention and generate emotion, either anger or something cute. We’re evolving toward an internet of partisan anger and cute kittens. Those are the two extreme poles of what gets attention. I think the technical part is solvable. The problem is that, again, there are humans around it who have very different motivations from you and me. It’s very hard to work when your adversary is using various cyber-weapons against you.
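A toy sketch of the dwell-time signal described above; the log fields, numbers, and threshold are hypothetical, not taken from any real Facebook or Google system:

```python
# Toy version of the signal described above: headlines whose clicks are
# followed by very short visits look like clickbait. Field names and
# numbers are hypothetical, not from any real system.
from statistics import mean

clicks = [
    {"headline": "Nine amazing things you can do to cure X", "dwell_seconds": 4},
    {"headline": "Nine amazing things you can do to cure X", "dwell_seconds": 6},
    {"headline": "City council approves new budget", "dwell_seconds": 95},
    {"headline": "City council approves new budget", "dwell_seconds": 120},
]

def bounce_rate(headline, threshold=10):
    # Fraction of clicks on this headline that bounced back almost immediately.
    dwells = [c["dwell_seconds"] for c in clicks if c["headline"] == headline]
    return mean(d < threshold for d in dwells)

for h in sorted({c["headline"] for c in clicks}):
    print(f"{h!r}: bounce rate {bounce_rate(h):.0%}")
```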
Do you think nutrition may be something that would be really hard as well? Because no two people – you eat however many times a day, however many different foods, and there is nobody else on the planet who eats that same combination, even over seven consecutive days or something. Do you think that nutrition is a solvable thing, or are there too many variables for there ever to be a dataset that would be able to say, “If you eat broccoli and chocolate ice cream and go to the movie at 6:15, you’ll live longer”?
I think that is certainly solvable. Again, the problem is that humans are not completely logical. That’s both our blessing and our problem. People know what is good for them, but sometimes they just want something else. We have our own animal instincts that are very hard to control. That’s why all the diets work, just not for very long. People go on diets, find that they didn’t work, and then go on them again. So yes, as far as the information goes, nutrition can be solved. But how to motivate people to follow good nutrition, that is a much, much harder problem.
All right! Well, it looks like we are out of time. Would you go ahead and tell the listeners how they can keep up with you, find your website, follow you, and get hold of you?
Yes. Thank you, Byron. You can find me on Twitter @KDnuggets, and visit the website KDnuggets.com. It’s a magazine for data scientists and machine learning professionals. We publish only a few interesting articles a day. And I hope you can read it, or if you have something to say, contribute to it! And thank you for the interview, I enjoyed it.
Thank you very much.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]

Voices in AI – Episode 5: A Conversation with Daphne Koller

[voices_in_ai_byline]
In this episode, Byron and Daphne talk about consciousness, personalized medicine, and transfer learning.
[podcast_player name=”Episode 5: A Conversation with Daphne Koller” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2017-09-28-(00-56-17)-daphne-koller.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-4.jpg”]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Daphne Koller. She’s the Chief Computing Officer over at Calico. She has a PhD in Computer Science from Stanford, which she must have liked a whole lot, because she shortly thereafter became a professor there for eighteen years. And it was during that time that she founded Coursera with Andrew Ng. She is the recipient of so many awards, I would do them an injustice to try to list them all. Two of them that just stick out are the Presidential Early Career Award for Scientists and Engineers, and, famously, The MacArthur Foundation Fellowship.
Welcome to the show, Daphne.
Daphne Koller: Good to be here, Byron. Thank you for inviting me.
I watched a number of your videos, and you do a really interesting thing where you open up by defining your terms often, so that everybody has, as you say, a shared vocabulary. So what is ‘artificial intelligence’ when you use that term?
Well, I think artificial intelligence is one of the harder things to define because in many ways, it’s a moving target. Things that used to be considered artificial intelligence twenty years ago are now considered so mundane that no one even thinks of them as artificial intelligence—for instance, optical character recognition.
So, there is the big lofty AI goal of general artificial intelligence, building a single agent that achieves human-level type intelligence, but I actually think that artificial intelligence should—and in many people’s minds I hope still does—encompass the very many things that five years ago would have been considered completely out of reach, and now are becoming part of our day-to-day life. For instance, the ability to type a sentence in English and have it come out in Spanish or Chinese or even Swahili.
With regard to that, there isn’t an agreed-upon definition of intelligence to begin with. So what do you think of when you think of intelligence? And secondly, in which sense is it artificial? Is it artificial like artificial turf – is it really turf, or does it just pretend to be? Do you think AI is actually “intelligent,” or is it a faux, imitation intelligence?
Boy, that’s a really good question, Byron. I think intelligence is a very broad spectrum that ranges from very common sense reasoning that people just take for granted, to much more specialized tasks that require what people might consider to be a deeper level of intelligence, but in many cases are actually simpler for a computer to do. I think we should have a broad umbrella of all of these as being manifestations of the phenomenon of intelligence.
In terms of it being false intelligence, no; I think what makes artificial intelligence “artificial” is that it’s humanly-constructed. That is, it didn’t organically emerge as a phenomenon; rather, we built it. Now, you could question whether the new machine learning techniques are in fact organic growth, and I would say you could make the case that if we build an architecture, put it in the world with the same level of intelligence as a newborn infant, and it really learns to become intelligent—maybe we shouldn’t call it artificial intelligence at that point.
But I think, arguably, the reason for the use of the word “artificial” is because it’s human-constructed as opposed to biologically-constructed.
Interestingly, McCarthy, the man who coined the phrase, later regretted it. And that actually brings to mind another question, which is: when a group of scientists convened at Dartmouth for the summer of 1956, to “solve the problem of artificial intelligence,” they really thought they could do it in a summer of hard work.
Because they assumed that intelligence was like, you know, physical laws… We found just a few laws that explained all physical phenomena, and electricity just a few, and magnetism just a few, and there was a hope that intelligence was really something quite simple – complex in its manifestations, but with just a few overriding laws. Do we still think that? Do you think that? Is it not like that at all?
That was the day of logical AI, and I think people thought that one could reason about the world using the rules of logic, where you have a whole bunch of facts that you know—dogs are mammals; Fido is a dog, therefore Fido is a mammal—and that all you would need is to write down those facts, and the laws of logic would then take care of the rest. I think we now understand that that is just not the case, and that there is a lot of complexity both on the fact side, and then how you synthesize those facts to create broader conclusions, and how do you deal with the noise, and so on and so forth.
So I don’t think anyone thinks that it’s as simple as that. As to whether there is a single, general architecture that you can embed all of intelligence in, I think some of the people who believe that deep neural networks are the solution to the future of AI would advocate that point of view. I’m agnostic about that. I personally think that that’s probably not going to be quite there, and you’re probably going to need at least one or two other big ideas, and then a heck of a lot of learning to fine-tune parts of the model to very different use models—in the same way that our visual system is quite different from our common sense reasoning system.
And I do want to get on, in a minute, to the here and now, but just in terms of thinking through what you just said, it sounds like you don’t necessarily think that an AGI is something that we are on the way towards. You know, we can make one percent of it, and when algorithms get a little better, and computers get a little faster, and we get a little more data, we’ll evolve our way there.
It sounds like what you said is that there is some breakthrough that we need that we don’t yet have; that AGI is something very different than the, kind of, weak AI we have today. Would you agree with that?
I would agree with that. I wouldn’t necessarily agree with the fact that we are not on the right path. I think there has been a huge amount of progress in the last, well, not only in the last few years, but across the evolution of AI. But it is definitely putting us on the path there. I just think that we need additional major breakthroughs to get us there.
So with regard to the human genome, you know it’s x-number of billions of base pairs, which map to something like 700 megabytes. But most of that we share with all life, even like plants and bananas and all of that, and if you look at the part that makes us different than say a bonobo or a chimp, it may only be half of one percent.
So it may only be like three megabytes. So does that imply to you that to build an AGI, the code might be very… We are an AGI, and our intelligence is evidently built with those three megabytes of code. When working to build an AGI computer, is that useful, or is that a fair way to think about it? Or is that apples and oranges in your view?
Boy! Well, first of all, I think I would argue that a bonobo is actually quite intelligent, and a lot of the things that make us generally intelligent are shared with a bonobo. Their visual system, their ability to manipulate objects, to create tools and so on is something that certainly we share with monkeys.
Fair enough.
I think there is that piece of it. I also think that there is an awful lot of complexity that happens as part of the learning process, that we as well as monkeys and other animals go through as we encounter the world. It evolves our neural system, and so that part of it is something that emerges as well, and could be shared. So I think it’s more nuanced than that, in terms of counting the number of bits.
Right. So, we have this brain, the only AGI that we know of…  And we, of course, don’t know how our brains work. We really don’t. We can’t even model a nematode worm’s 302 neurons in a computer, let alone our hundred billion. And then we have something we call the “mind,” which is a set of capabilities that the brain manifests that don’t seem to be—with the emphasis on seem to be—derivable from neurons firing.
And then you have consciousness, which of course… Nobody purports to say they know exactly how it is that hydrogen came to name itself. So, doesn’t that suggest that you need to understand the mind, and you need to understand consciousness, in order to actually make something that is intelligent? And it will also need those things.
You know, that’s a question that artificial intelligence has struggled with a lot. What is the mind, and to what extent does that emerge from neurons firing? And if you really dive into that question, it starts to relate to the notion of soul and religion, and all sorts of things that I’m not sure I am qualified to comment on. Most people wouldn’t necessarily agree with the others’ point of view on this anyway.
I think in this respect, Turing had it right. I don’t know that you’re conscious. All I can see is your observed behavior, and if you behave as if you are conscious, I take it on faith that you are. So if we build a general artificial intelligence that acts intelligent, that is able to interact with us, understand our emotions, express things that look like disappointment or anger or frustration or joy…
I think we should give it the benefit of the doubt that it has evolved a consciousness, regardless of our ability to understand how that came about.
So tell me about your newest gig, the Chief Computing Officer at Calico. Calico, according to their website, are aiming to devise interventions that slow aging and counteract age-related diseases. What’s your mission there, within that?
I came on board to create, at Calico, what you might call a second pillar of Calico’s efforts. One pillar being the science that we’re doing here, that drives toward an understanding of the basic concepts of aging, and the ability to turn that into therapeutics for aging and age-related diseases.
But we all know that biology, like many other disciplines, is turning into data science, where we have—”we” being the community at large—developed a remarkable range of technologies that can measure all sorts of things about biological systems, from the most microscopic level, all the way up to the organismal level—interventions that allow us to perturb single genes or even single nucleotides.
And how do you take this enormity of data and really extract insights from it, is a computational question. There need to be tools developed to do this. And this is not something that biologists can learn on their own. It’s also something computer scientists can’t do on their own. It requires a true partnership between those two communities working together to make sense of the data using computational techniques, and so what I am building here at Calico is an organization within Calico that does exactly that—in partnership with our pre-existing world class biology team.
Do you think there is broad consensus, or any consensus about the bigger questions of what is possible? Like do humans need to have a natural life span? Are we going to be able to better tailor medicines to people’s genomes? What are some of those things that are, kind of, within sight?
I am very excited about the personalized medicine, precision medicine trajectory. I completely agree with you that that is on the horizon. I find it remarkably frustrating that we treat people as one-size-fits-all. And, you know, a patient walks into a doctor’s office, and there is a standard of care that was devised for a population of people that is sometimes very different from the specifics of the person… Even to the point that there are a whole bunch of treatments which were designed largely based on a cohort of men, and you have a woman coming into the doctor’s office, and it might not work for her at all. Or similarly with people of different ethnic origins.
I think that’s clearly on the horizon, and it will happen gradually over the course of the coming years. I also think we’ll see the ability to intelligently design medications in a way that is geared toward achieving particular biological effects – effects that we’re able to detect using mechanisms like CRISPR, for instance.
CRISPR, by the way, for those of you who’ve not heard of this, is a gene editing system that was developed over the last five or ten years—probably more like five—and is remarkably able to do very targeted interventions in a genome. And then one can measure the effects and say, “Oh, wait a minute, that achieved this phenotypic outcome, let’s now create a therapeutic around that.” And that therapeutic might be a drug, or it could—as we get closer to viral therapies or even gene editing—be something that actually does the exact same thing that we did in the lab, but in the context of real patients. So that’s another thing that is on the horizon, and all of this is something that requires a huge understanding of the amounts of data that are being created, and a set of machine learning artificial intelligence tools.
Now, prior to World War II, I read that we only had about five medicines. You had quinine, you had penicillin—well, you didn’t have penicillin—you had aspirin, you had morphine; and they were all, fortunately, very inexpensive.
And then Jonas Salk develops the Salk vaccine, and they ask him who owns the patent and he says, there is no patent, you can’t patent the sun. And so you know, you get the Salk vaccine, so inexpensive. Now, though, we have these issues that, you know, if you have Hepatitis-C and you need that full treatment, that’s $70,000. Are we not on a path to create ever more and more expensive medications and therapies that will create a huge gulf between the haves and the have-nots?
I think it’s a very important moral dilemma. I tend to be, rightly or wrongly, I guess, an optimist about this, in that I think some medications are really expensive because we don’t have productionized processes for creating a medication. And we certainly don’t have productionized processes, or even a template, for how to come up with a new medication for an indication that’s discovered.
But—and again, I am an optimist—as we get a better understanding of, for instance, the human genome and maybe the microbiome, and how different aspects of that and the environment come together to create both healthy and aberrant phenotypes, it will become easier to construct new drugs that are better able to cure people. And as we get better at it, I hope that costs will come down.
Now, that’s sort of a longer-term solution. In the shorter term, I think that it’s incumbent upon us as a society to help the have-nots who are sick to get access to the best medications, or at least to a certain common baseline of medications that are important to help people stay alive. I think that that’s a place where some societies do this well, and others maybe not so well. And I don’t think that’s fair.
Of course, you know, there are more and more people who hit the age of 100, but the number of supercentenarians – people who hit 110 – seems stubbornly fixed. I mean, you can go to Wikipedia and read a list of all of them. And the number of people who’ve hit 125 seems to be, you know, zero. People who’ve hit 130, zero. Why is it that, although the number of centenarians goes way up – and it’s in the hundreds of thousands – the number of people who make it to 125 is zero?
That’s a topic that’s been highly-discussed very recently. There’s been a series of papers that have talked about this. I think there’s a number of hypotheses. One that I find compelling is that what causes people to die, at a certain time in history, changes over time. I mean, there was a time, not that long ago, when women’s life spans were considerably shorter than that of men, because many of them died in childbirth. So the average lifespan of a woman was relatively shorter, until we realized that we needed to sterilize the doctor’s hands when they were delivering the baby, and now it’s different.
We discovered antibiotics, which allowed us to address many of the deaths attributed to pathogens, though not all of them. AIDS was a killer, and then we invented antiretroviral therapy, which allows AIDS patients to live a much longer life. So, over time, we get through additional bottlenecks that are killing people at later and later points in time. Right now, for instance, we don’t have a cure for Alzheimer’s and Parkinson’s and other forms of dementia, and that kills a lot of people.
It kills at a much later age than people would have died at earlier times in history. But I hope that at some point in the next twenty years, someone will discover a cure for Alzheimer’s, and then people will be able to live longer. So I think over time, we solve the thing that kills you next, which allows the thing that’s next down the line to kill you, and then we go ahead and try to cure that one.
You know, when you look at the task before you, if you are trying to do machine learning to help people live longer and healthier lives, it’s got to be frustrating that, like, all the data must be bad, because symptoms generally aren’t recorded in a consistent way. You don’t have a control, like for example twins who, five minutes into the world go down different paths.
Everybody has different genomes. Everybody eats different food, breathes different air. How much of a hurdle is that to us being able to do really good machine learning on things like nutrition, which seems, you know… We don’t even know if eggs are good for you or bad for you and those sorts of things.
It’s a huge hurdle, and I think it was one of the big obstacles to the advancement of machine learning in other domains, up until relatively recently, when people were able to acquire enough data to get around that. If you look at the earlier days of, for instance, computer vision, the data sets were tiny—and that’s not that long ago, we’re talking about less than a decade.
You had data sets with a few hundred, and a few thousand images was considered large, and you couldn’t do much machine learning on that because when you think about the variation of a standard category… Like, a wedding banquet that ranges from photos of a roomful of people milling around to someone cutting a wedding cake.
And so the variability there is extremely large, and if all you have is twenty images of a wedding banquet, you’re not going to get very far training on that. Now, the data is still as noisy—and arguably even noisier when you download it from Google Images or Flickr or such—but there’s enough of it that you get to explore a sufficient part of the space for a machine learning algorithm. So that you can, not counteract the noise, but simply accommodate it as a variability in your models.
If we get enough data on the medical side, the hope is that we’ll be able to get to a similar place where, yes, the variability will remain, but if you have enough of the ethnic diversity, and enough of the people’s lifestyle, and so on, all represented in your data set, then that will allow us to address the variability. But that requires a major data collection effort, and I think we have not done a very good job as a society of making that a priority to collect, consolidate, and to some extent clean medical data so that we can learn from it.
The UK, for instance, has a project that I think is really exciting. It’s the UK Biobank project. It’s 500,000 people that were genotyped, densely-phenotyped, and their records are tied to the UK National Health Service; so you have ongoing outcome data for them. It’s still not perfect. It doesn’t tell you what they eat every day, but they asked them that in the initial survey, so you get at least some visibility into that. I think it’s an incredibly exciting project, and we should have more of those.
They don’t necessarily have to use the exact same technique that the UK Biobank is using, but if we have medical data for millions of people, we will be able to learn a lot more. Now we all understand there are serious privacy issues there, and we have to be really thoughtful about how to do this.
But if you talk to your average patient, especially ones who are suffering from a devastating illness, you will find that many of them are eager to share some information about their medical condition to the benefit of science, so that we can learn how to treat their disease better. Even if it doesn’t benefit them, because it might be too late, it will benefit others.
So you just mentioned object recognition, and of course humans do that so well. I could show you a photograph of a little Tiki statue, or a raven, or something… And then you could instantly go through a bunch of photos and recognize it if it’s underwater, or if it’s dark, or if it’s inside, and all of that. And I guess it’s through transferred learning of some kind. How far along are we… Do we know how to do it, and we just don’t have the horsepower to do it, or do we not really even understand how that works yet?
Well, I think it’s not that there is one way to do this; there are a number of techniques that have been developed for transfer learning, and I agree with you that transfer learning is hugely important. Right now, if you look at models like the Inception network for computer vision that Google has developed, there is a whole set of layers in that neural network that were devised based on a large corpus of web images spanning a broad range of categories. But that same set of layers is now taken, pre-trained, and with a relatively small amount of training data—sometimes even as little as zero training examples—can be used for applications it was never intended for, like the retinopathy project, for instance, that they recently published. I think that’s happening.
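A minimal sketch of that pattern – reusing pre-trained image-network layers and training only a small new head – assuming TensorFlow/Keras; the two-class head and the commented-out training call are illustrative stand-ins for a new task such as retinopathy grading, not Google’s actual setup:

```python
# Sketch of transfer learning: reuse pre-trained Inception layers, freeze
# them, and train only a small new head on a different task. Assumes
# TensorFlow/Keras; the two-class head and the fit() call are illustrative.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # keep the layers learned from web images fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)  # small labeled set
```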
Another example, also from Google, is in the machine translation realm, where they recently showed that you could use a network architecture to translate between two languages for which you didn’t have any examples of those two languages together. The machine was effectively creating an interlingua on its own, so that you’re translating a sentence in Thai into this interlingua and then producing a sentence in Swahili as the output, even though you’ve never seen a pair of sentences in Thai and Swahili together. So I think we’re already seeing examples of transfer learning emerging in the context of specific domains, and I think it’s incredibly exciting.
You mentioned CRISPR/Cas9 a few minutes ago. And of course it comes with the possibility of actually changing genes in a way that alters the germline, right, where the children and the grandchildren inherit the new, altered genes. There is no legislative or ruling body that has any authority over any of that. CRISPR is cheap, so can anybody do that?
I agree with you. I think there’s a very serious set of ethical questions there that we need to start thinking about seriously. So, in some ways, when people say to me, “Oh, we need to come up with legislation regarding the future of AI and the ethical treatment of artificial intelligence agents,” I tell them we have a good long time to think about that. I am not saying we shouldn’t think about it, but it’s not like it’s a burning question.
I think this is a much more burning question, and it comes up with editing the human genome, and I think it comes up at least as much in how do we prevent threats like someone recreating smallpox. That’s not CRISPR, that’s DNA synthesis, which also is a technology that’s here. So I think that’s a set of serious questions that the government ought to be thinking about, and I know that there is some movement towards that, but I think we’re behind the curve there.
Behind the curve in terms of we need to catch up?
Yeah, technology has overtaken our thinking about the legal and ethical aspects of this.
CRISPR would let you do transgenesis on a human. You could take a gene from something that glows in the dark, and make a human that glows in the dark, in theory. I mean, we are undoubtedly on the road to being able to use those technologies in horrific ways, very inexpensively. And it’s just hard to think, like, even if one nation can create legislation for it, it doesn’t mean that it couldn’t be done by somebody else. Is it an intractable problem?
I think all technology can be used for good or evil, or most technology can be used for good or evil. And we have successfully—largely successfully—navigated threats that are also quite significant, like the threat of a nuclear holocaust. Nuclear technology is another one of those examples that, it can be used for good, it has been used for good, it can also be used to great harm. We have not yet, fortunately, had a dirty bomb blow up in Manhattan, making all of Manhattan radioactive, and I am hopeful that will never happen.
So I am not telling you I have the solution to this, but I think that as a society, we should figure out what is morally permissible, and what is not, and then really try and put in guardrails both in terms of social norms, as well as in terms of legal and enforcement questions to try and prevent nations or individuals from doing things that we would consider to be horrific.
And I am not sure we have consensus as a society on what would be horrific. Is it horrific to genetically engineer a child that has a mutation that’s going to make their life untenable, or cut short after a matter of months, and make them better? I would say a lot of people would think that’s totally fine; I think that’s totally fine. Is it as permissible to make your child have superhuman vision, great muscle strength, stamina and so on? I think that’s in the gray zone. Is it permissible to make your child glow in the dark? Yeah, that’s getting beyond the pale, right? But those are discussions that we are not really having as a society, and we should be.
Yeah, and the tricky thing is, there is not agreement on whether you should use transgenesis on seeds, you know? You put Vitamin A in rice, and you can end Vitamin A deficiency, or diminish it, and we don’t seem to be able to get agreement on whether you should even do that.
Yeah. You know I find people’s attitudes here to be somewhat irrational in the sense that we’ve been doing genetic engineering on plants for a very long time, we’ve just been doing it the hard way. Most of the food that we eat comes from species of plants that don’t naturally grow in the wild. They have been very carefully bred to have specific characteristics in terms of resistance to certain kinds of pests, and growing in conditions that require hardier plants, and so on and so forth.
So even genetically engineering plants by very carefully interbreeding them, and doing various other things to create the kinds of food that, for whatever reason, we prefer—tomatoes that don’t spoil when you ship them in the bowels of a ship for three weeks—the fact that we are now doing it more easily doesn’t make it worse. In fact, you could argue that it might make it more targeted, and have fewer side effects.
I think when it comes to engineering other things, it becomes much more problematic, and you really need to think through the consequences of genetic engineering on a human, or genetic engineering on a bug.
Yeah, when x-rays came out, they would take a bunch of seeds and irradiate them, and then they would plant them; most wouldn’t grow, a few would grow poorly, and every now and then you would get some improvement. And that was the technique for much of the produce we eat today.
Indeed, and you don’t know what the radiation did, beyond the stuff that we can observe phenotypically, as in it grows better. So all of these things that are happening to all those other genes went unobserved and unmeasured. Now you are doing a much more precision intervention, in just changing the one gene that you care about. And for whatever reason some people view that as being inferior, and I think that’s a little bit of a misunderstanding of what exactly happened before, and is happening now.
It used to be that the phrase “cure aging” was looked at nonsensically. Is that something that is a valid concept, that we may be able to do?
So we do not use the term “cure aging” at Calico. What we view ourselves as doing is increasing healthspan, which is the amount of time that you live as a healthy, happy, productive human being. I think that we as a society have been increasing healthspan for a very long time. I’ve talked about a couple of examples in the past.
I don’t think that we are on the path to letting people live forever. Some people might think that that’s an achievable goal, but I think it’s definitely a worthy goal to make it so that you live healthy longer, and you don’t have people who spend twenty years of their lives in a nursing home being cared for by others because they are unable to care for themselves.
I think that’s a very important goal for us as a society, both for the people themselves, for their families, but also in terms of the cost that we incur as a society in supporting that level of care.
Well, obviously you’ve had a great impact, you know, presumably in two ways: One, with what you’ve done to promote education, and democratizing that, and then what you are doing in health. What are your goals? What do you hope to accomplish in the field? How do you want to be remembered as?
So, let’s see. I think there’s a number of answers that I could give to that question at different levels. At one level, I would like to be—and not the only one, by any stretch, because there is a whole community of us working here—one of the people that really brought together two fields that it’s critical that we bring together: the field of machine learning and the field of biology, and really turning biology into a data science.
I think that’s a hugely important thing because it is not possible, even today and certainly going forward, to make sense of the data that is being accumulated using simple, statistical methods. You really need to build much deeper models.
Another level of answer is that I would like to do something that made a difference to the lives of individual people. One of the things that I really loved about the work that we did at Coursera was that daily deluge, if you will, of learner stories. Of people who say, “My life has been transformed by the access to education that I would never have had before, and by doing that I am now employed and can feed my children and I was not able to do that before,” for instance.
And so if I can help us get to the point where I get an email from someone who says, “I had a genetic disposition that would have made me die of Alzheimer’s at an early age, but you were able to help create technology that allowed me to avoid that.” To me that would be incredibly fulfilling. Now, that is a very aspirational goal, and I’m not assuming that it’s necessarily achievable by me—and, even if it’s achievable, will definitely involve the work of many others—but that, I think, is what we should aspire to, what I aspire to.
You know, you mentioned the importance of merging machine learning with these other fields, and Pedro Domingos, who actually was on the show not long ago, wrote a book called The Master Algorithm where he proposes that there must exist a master algorithm that can solve all different kinds of problems, that unite the symbolists and the Bayesians and all of the different, what he calls, tribes. Do you think that such a thing is likely to exist? Do you think that neural nets may be that, kind of a one-size-fits-all solution to problems?
I think neural nets are very powerful technology, and they certainly help address, to a certain extent, a very large bottleneck, which is how do you construct a meaningful set of features in domains where it’s really hard for people to extract those, and solve problems really well. I think their development, especially over the last few years, when combined with large data, and the power of really high-end computing, has been transformative to the field.
Do I think they are the universal architecture? Not as of now. I think that there is going to be—and we discussed this earlier—at least one or two big things that would need to be added on top of that. I wish I knew what they were, but I don’t think we are quite there yet.
So you are walking on a beach, and you find a lamp. You rub the lamp, and out pops a genie, and the genie says: “I will give you one of the following three things: new cunning and brilliant algorithms that solve all kinds of problems in more efficient ways, an enormous amount of data that’s clean and accurate and structured, or computers that are vastly faster, way beyond the speed of what we have now.” What would you choose?
Data. I would choose data.
It sounded like, when I set that question up earlier about, “Oh, data, it’s so hard,” you were like, “Tell me about it.” So that is the daily challenge, because I know my doctor still keeps everything in those manila folders that have three letters of my last name, and I think, “Wow, that’s it? That’s what’s going to be driving the future?” So that is your bottleneck?
I think it really is the bottleneck, and it’s not even just a matter of, you know, digitizing the records that are there—which, by the way, it’s not just a matter of they are being kept in manila folders. It’s also a matter of the extent to which different doctors write things in different ways, and some of them don’t write things at all and just leave it to memory, and so on.
But I think even beyond that, there is all the stuff that’s not currently being measured. I think we’re starting to see some glimmers of light in certain ways; for instance, I’m excited by the use of wearable devices to measure things like people’s walking pace and activity and so on. I think that provides us with a really interesting window on daily activity, whereas, otherwise people see the doctor once a year or once every five years, sometimes—and that really doesn’t give us a lot of visibility into what’s going on with their lives the rest of the time.
I think there is a path forward on the data collection, but if you gave me a really beautiful large clean data set that had, you know, genetics and phenotypes and molecular biomarkers, like gene expression and proteomics and so on and so forth… I am not saying I have the algorithms today that can allow me to make sense of all of that but, boy, there is a lot that we can do with that, even today. And it would spur the development of really amazing creative algorithms.
I think we don’t lack creativity in algorithms. There is a lot that would need to happen, but I think we’re, in many cases, stymied by the lack of availability in data as well as just the amount of time and effort in terms of grunge work that’s required to clean what’s there.
So there is a lot of fear wrapped up in some people about artificial intelligence. And just to set the question up, specifically about employment, there’s three views about its effect: There’s one group of people who think we are going to enter into something like a permanent Great Depression, where there are people who don’t have the skills to compete against machines. Then there are those who believe that there’s nothing a machine can’t do eventually, and once they can learn to do things faster than we can, they’ll take every job. Then there is a third camp of people who say, look, every time we’ve had disruptive technologies, even electricity and steam power and machines, people just use those to increase their productivity and that’s how we got a rising standard of living. Which of those three camps—or a fourth one—do you identify with?
I probably would place myself—and again I tend to be an optimist, so factor that in—probably more in the third camp. Which is to say, each time that we’ve had a revolution, it has spurred productivity and people migrated from one job category into another job category that basically moves them in some ways, in many cases, further up the food chain.
So I would hope that that would be the case here; our standard of living will go up, and people will do jobs that are different. I do see the case of people saying that this revolution is different, because, over time, a larger and larger fraction of jobs will disappear and the number of jobs that are left will diminish. That is, you just won’t need that many people to do stuff.
Now, again from the optimist’s perspective, if we really have machines that do everything—from grow crops, to package them and put them in supermarkets, and so on, and basically take care of all of the day-to-day stuff that we need to exist—arguably you could imagine that a lot of us will live a life of partial leisure. And that will allow us to, at least, exist, and have food and water, and some level of healthcare and education, and so on, without having to work, and we will spend our time being creative and being artisans again or something.
Which of those is going to be the case, I think is an interesting question, and I don’t have a firm opinion on that.
So, I followed with a lot of interest Watson, when they took the cancer cases and the treatment that oncologists gave, and then Watson was able to match them ninety-some odd percent of the time, and even offered new ones because it read all of these journals and so forth.
So that’s a case of using artificial intelligence for treatment, but is treatment really fundamentally a much easier problem to solve than diagnosis? Because diagnosis is—you know, my eyes water when I eat potato chips—not very structured data.
I think that if you look back, even in the mid-’90s, which is a long way back now, there were diagnostic models that were actually pretty darn good. People moved away from that, partly because to really scale those out and make those robust, you just needed a lot more data, and also I think there are societal obstacles to the adoption of fully-automated diagnoses.
I think that’s actually an even more fundamental problem, is the extent to which doctors, patients, and insurance companies are willing to take a diagnosis that’s provided by a computer. I don’t think fundamentally, from a technological perspective, that is an unsolvable problem.
So is diagnosis a case for an expert system? I think that’s what you are alluding to—you know, how do you tell the difference between a cold and the flu? Well, do they have a fever; do they have aches and pains?
Is that a set of problems where you would use relatively older technologies to build all that out? And even if we don’t switch to that, being able to have access to just that knowledge base, in some parts of the world, is a huge step forward.
I would agree. And by the way, the thing I was thinking back on is not the earliest version of expert systems, which were all rule-based, but rather the ones that came later, which used a probabilistic model. Those really incorporated things like the chance of a condition manifesting in somewhat different ways, or the fact that a predisposing factor—say, you recently visited a country that has SARS—changes the probability that what you have is not the cold or the flu but rather something worse.
And so all that needs to be built into the model. And the probabilistic models really did accommodate that, and are easily… In fact, there is a lot of technology that’s already in place for how to incorporate machine learning so that you can make those models better and better over time.
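To make that idea concrete, here is a minimal sketch of the kind of probabilistic model being described: a tiny naive-Bayes differential over a few conditions, where a predisposing factor such as recent travel shifts the posterior. The conditions, findings, and probabilities below are all invented for illustration, not clinical values; real diagnostic networks are far richer, and their parameters are exactly the part that machine learning can keep refining from data over time.

# Illustrative naive-Bayes differential diagnosis; all numbers are made up.
PRIORS = {"cold": 0.60, "flu": 0.35, "sars": 0.05}

# P(finding present | condition), again purely illustrative.
LIKELIHOODS = {
    "fever":         {"cold": 0.10, "flu": 0.80, "sars": 0.95},
    "aches":         {"cold": 0.20, "flu": 0.85, "sars": 0.70},
    "recent_travel": {"cold": 0.05, "flu": 0.05, "sars": 0.60},
}

def posterior(findings):
    """Return P(condition | findings), where findings maps name -> bool."""
    scores = {}
    for condition, prior in PRIORS.items():
        p = prior
        for finding, present in findings.items():
            likelihood = LIKELIHOODS[finding][condition]
            p *= likelihood if present else (1.0 - likelihood)
        scores[condition] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# The same symptoms, with and without the predisposing travel factor:
print(posterior({"fever": True, "aches": True, "recent_travel": False}))
print(posterior({"fever": True, "aches": True, "recent_travel": True}))

Running the two calls shows how the same fever-and-aches presentation yields a very different differential once the risk factor is included, which is the behavior the probabilistic models described here were built to capture.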
I think that’s an area that one could easily go back to, and construct technology that would be hugely impactful, especially in parts of the world where they lack access to good medical care because there just aren’t enough doctors per capita, or the doctors are all concentrated in big cities. And you have people who are living in some rural village and can’t get to see a doctor.
I agree with you that there is a huge possibility there. I think there is also a huge possibility in treatment of chronic care patients, because those are ones that consume a huge fraction of the resources of a doctor’s time, and there just aren’t enough hours in the day for a doctor to see people as frequently as might be beneficial for keeping track of whether they are deteriorating.
So maybe by the time they come and see the doctor six months later, their condition has already deteriorated, and maybe if it had been caught earlier we could have slowed that down by changing treatment. So I think there are a lot of opportunities to apply a combination of modeling and the machine learning, in medical care, that will really help make people’s lives better.
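As a concrete illustration of that monitoring idea, here is a minimal sketch, with invented readings and a hand-picked threshold, of how frequent home measurements could be turned into an early flag for clinician review—for instance, daily weights for a hypothetical heart-failure patient. A real remote-monitoring rule would be clinically validated rather than chosen like this.

def trend_per_day(readings):
    """Least-squares slope of readings taken once per day."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def needs_review(daily_weights_kg, slope_threshold=0.25):
    """Flag for clinician review if weight is rising faster than the threshold (kg/day)."""
    return trend_per_day(daily_weights_kg) > slope_threshold

weights = [82.0, 82.1, 82.4, 82.9, 83.3, 83.8, 84.2]  # one week of home readings
print(needs_review(weights))  # True -> surface to the care team before the next visit

The point is not the particular rule but the cadence: a model watching daily data can raise its hand months before the next scheduled appointment.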
We’re almost out of time, so I have just two more questions for you. First, what is something that looks, for you, like the kind of problem in health that machine learning is going to be able to solve soon? What’s a breakthrough we can hope to pick up the newspaper and read about in five years, something really potentially big that is within our grasp, but just a little out of our reach?
I think there are a couple of areas that I see emerging, and you’re starting to see them already. Cancer—we talked earlier about the bottlenecks being addressed one after the other. We have antibiotics and antiretrovirals and statins; and now, with areas like immuno-oncology, we are starting to see some actual cures for metastatic cancer, which, with few exceptions, is by and large incurable using standard methods. That’s a big area, and I find it really exciting.
I am also seeing some really interesting developments in more genetically-oriented therapies for specific diseases—be it CRISPR, be it viral therapies. Some of these are on the path to being approved in the next few years, so I think that’s another place where, on the therapeutic side, there is a big opportunity.
I think the third one is the use of computers for image-based diagnosis, and that’s an area I used to work in when I was at Stanford—where you show the computer an image of a tumor biopsy sample, or a radiology image, or a 3D CT scan of a patient, and it is able to discover things that are not visible to a physician. Or maybe visible only to a small subset of truly expert physicians—and in most cases you’re not going to be lucky enough to have one of them look at your scan.
So I think that’s an area where we will also see big advancements. These are just three off of the top of my head in the medical space, but I am sure there are others.
And a final question: You seem to be doing a whole lot of things. How do people keep up with you, what’s your social media of choice and so forth?
Boy, I am not much of a social media person, maybe because I am doing so many other things. So I think most of my visibility happens through scientific publications. As we develop new ideas, we subject them to peer review, and when we are confident that we have something to say, that’s when we say it.
Which I think is important, because there is so much out there, and people rush to talk about stuff that’s half-baked and not well-vetted. There is, unfortunately, a lot of somewhat bogus science out there—not to mention bogus news. If we had less material, of higher quality—and were not flooded with stuff of dubious correctness through which we had to sift—I think we would all be better off.
All righty. Well thank you so much for taking the time. It was a fascinating hour.
Thank you very much Byron. It was a pleasure for me too. Thank you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here
[voices_in_ai_link_back]