Voices in AI – Episode 44: A Conversation with Gaurav Kataria

[voices_in_ai_byline]
In this episode, Byron and Gaurav discuss machine learning, jobs, and security.
[podcast_player name="Episode 44: A Conversation with Gaurav Kataria" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-05-24-(00-57-17)-gaurav-kataria.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card-1.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Gaurav Kataria. He is the VP of Product over at Entelo. He is also a guest lecturer at Stanford. Up until last month, he was the head of data science and growth at Google Cloud. He holds a Ph.D. in computer security risk management from Carnegie Mellon University. Welcome to the show Gaurav!
Gaurav Kataria: Hi Byron, thank you for inviting me. This is wonderful. I really appreciate being on your show and having this opportunity to talk to your listeners.
So let’s start with definitions. What is artificial intelligence?
Artificial intelligence, as the word suggests, starts with artificial, and at this stage we are in this mode of creating an impression of intelligence, and that’s why we call it artificial. What artificial intelligence does is learn from past patterns. So you keep showing the patterns to the machine, to a computer, and then it will start to understand those patterns, and it can say every time this happens I need to switch off the light, every time this happens I need to open the door, and things of this nature. So you can train the machine to spot these patterns and then take action based on those patterns. A lot of it is right now being talked about in the context of self-driving cars. When you’re developing an artificial intelligence technology, you need a lot of training for that technology so that it can learn the patterns in a very diverse and broad set of circumstances, to create a more complete picture of what to expect in the future, and then whenever it sees that same pattern in the future, it knows from its past what to do, and it will do that.
So…
Artificial intelligence is not built…sorry, go ahead.
So, that definition or the way you are thinking of it seems to preclude other methodologies in the past which would have been considered AI. It precludes expert systems which aren’t trained off datasets. It precludes classic AI, where you try to build a model. Your definition really is about what is machine learning, is that true? Do you see those as synonymous?
I do see a lot of similarity between artificial intelligence and machine learning. You are absolutely right that artificial intelligence is a much broader term than just machine learning. You could create an artificially intelligent system without machine learning by just writing some heuristics, and we can call it like an expert system. In today’s world, right now, there is a lot of intersection happening in the field of AI, artificial intelligence, and machine learning and the consensus or an opinion of a lot of people in this space today is that techniques in machine learning are the ones that will drive the artificial intelligence forward. However, we will continue to have many other forms of artificial intelligence.
Just to be really clear, let me ask you a different question. What you just said is kind of interesting. You say we’ve happened on machine learning and it’s kind of our path forward. Do you believe that something like a general intelligence is an evolutionary development along the line of what we are doing now? Are we going to get a little better with our techniques, a little better, a little better, a little better, and then one day we’ll have a general intelligence? Or do you think general intelligence is something completely different and will require a completely different way of thinking?
Thanks for that question. I would say today we understand artificial intelligence as a way of extrapolating from the past. We see something in the past, and we draw a conclusion for future based on what pattern we have seen in the past. The notion of general intelligence assumes or presupposes that you can make decisions in the future without having seen those circumstances or those situations in the past. Today, most of what’s going on in the field of artificial intelligence and in the field of machine learning is primarily based on training the machine based on data that already exists. In [the] future, I can foresee a world where we will have generalized intelligence, but today we are very far from it. And to my knowledge most of the work that I have seen and I have interacted [with] and the research that I have read speaks mostly in the context of training the systems based on current data—current information so that it can respond for similar situations in the future—but not anything outside of that.
So, humans do that really well, right? Like, we are really good at transfer learning. You can train a human with a dataset of one thing. You know, say “this is an alien, a grog,” and show it a drawing, and it could pick out a photograph of that, it could pick out one of those hanging behind the tree, it could pick out one of those standing on its head… How do you think we do that? I know it’s a big question. How do you think we do it? Is that machine learning? Is that something that you can train a machine eventually to do solely with data, or are we doing something there that’s different?
Yeah, so you asked about transfer learning. So [in] transfer learning we train the machine or train the system for one set of circumstances or one set of conditions, and then it is able to transfer that knowledge or apply that knowledge in another area. It can still kind of act based on that learning, but the assumption there is that there is still training in one setup and then you transfer that learning to another new area. So when it goes to the new area it feels like there was no training and the machine is just acting without any training, as if with general intelligence. But that’s not true, because the knowledge was transferred from another dataset or another condition where there was training data. So I would say transfer learning does start to feel like or mimic generalized intelligence, but it’s not generalized, because it’s still learning from one setup and then trying to just extrapolate it to a newer or a different setup.
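To make that concrete, here is a minimal sketch of transfer learning, assuming TensorFlow/Keras and an ImageNet-pretrained network are available; it is illustrative only, not a system either speaker describes. The pretrained base carries the knowledge learned in the original setup, and only a small new head is trained on the new task’s much smaller dataset.

```python
# A minimal transfer-learning sketch (assumes TensorFlow/Keras is installed).
import tensorflow as tf

# Pretrained convolutional base: its weights encode what was learned on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the transferred knowledge fixed

# Only this small head is trained on the new, much smaller dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. a new binary task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)  # small labeled set
```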
So how do you think humans do it? Let me try the question in a different way. Is everything you know how to do, everything a human knows how to do by age 20, something we learned from seeing examples of data? Could a human be thought of as a really sophisticated machine learning algorithm?
That’s a very good point. I would like to think of humans, all of us, as doing two things. One is learning: we learn from our experiences, and as you said, going from birth to 20 years of age, we do a lot of learning. We learn to speak, we learn the language, we learn the grammar, and we learn the social rules and protocols. In addition to learning, or let me say separate from learning, humans also do another thing, which is that humans create where there was no learning or repetition of what was taught to them. They create something new—as the expression goes, “create from scratch.” This creating something from scratch, or creating something out of nothing, is what we call human creativity or innovation. So humans do two things: they are very good learners, they can learn from even very little data, but in addition to being good learners, humans are also innovators, and humans are also creators, and humans are also thinkers. The second aspect is where I think artificial intelligence and machine learning really don’t do much. The first aspect, you’re absolutely right: humans could be thought of as a very advanced machine learning system. You could give it some data, and it will pick [it] up very quickly.
In fact, one of the biggest challenges in machine learning today, or in the context of AI, the challenge from machine learning, is that it needs a lot of training data. If you want to make a self-driving car, experts have said it could take billions of miles of driving data to train that car to be able to do that. The point being, with a lot of training data you can create an intelligent system. But humans can learn with less training data. I mean, when you start learning to drive at the age of sixteen, you don’t need to drive a million miles before you learn how to drive, but machines will need millions and millions of miles of driving experience before they can learn. So humans are better learners, and there is something going on in the human brain that’s more advanced than typical machine learning and AI models today. And I’m sure the state of artificial intelligence and machine learning will advance to where machines can probably learn as fast as a human and will not require as much training data as they require today. But the second aspect of what a human does—which is create something out of nothing or from scratch, the pure thinking, the pure imagination—there I think there is a difference between what a human does and what a machine does.
By all means! Go explain that because I have an enormous number of guests on the show who aren’t particularly impressed by human creativity. They think that it’s kind of a party trick. It’s just kind of a hack. There’s nothing really at all that interesting about it that we just like to think it is. So I’d love to talk to somebody who thinks otherwise, who thinks there’s something positively quite interesting about human creativity. Where do you think it comes from?
Sure! I would like to kind of consider a thought experiment. So imagine that a human baby was taken away from civilization, from [the] middle of San Francisco or Austin—a big city—and put on an island all by herself, like just one human child all by herself on an island and that child will grow over time and will learn to do a lot of things and the child will learn to create a lot of things on their own. That’s where I am trying to take your imagination. Consider what that one individual without having learned anything else from any other human could be capable of doing. Could they be capable of creating a little bit of shelter for themselves? Could they be capable of finding food for themselves? There may be a lot of things that humans may be able to do, and we know [that] from the history of our civilization and the history of mankind.
Humans have invented a lot of things, even basic things like creating fire and creating a wheel, to much more advanced things like sending rocket ships into space. So I do feel that humans do things that are just not learned from the behavior of other humans. Humans do create completely new and novel things which is independent of what was done by anybody before them who lived on this planet. So I definitely have a view here that I am a believer in human creativity and human ingenuity and intuition where humans do create a lot of things; it is these humans [who]are creating all the artificial intelligence systems and machine learning systems. I would never count out human creativity.
So, somebody arguing on the other side of that would say, well, no: she’s on this island, it’s raining, and she sees a spot under a tree that didn’t get wet, or she sees a fox going into a hole when it starts raining, and, therefore, that’s a data point that she was trained on. She sees birds flying down, grabbing berries and eating them, so it’s just training data from another source; it’s just not from other humans. We saw rocks roll down the hill, and we generalized that to how round things roll. I mean, it’s just all training data from the environment; it doesn’t have to be specifically human data. So what would you say to that?
No, absolutely! I think you’re giving very good counterexamples, and there is certainly a lot of training and learning, but if you think about sending a rocket to the moon and you say, okay, so did we just see some training data around us and create a rocket and send it to the moon? There it starts to become harder to say that it’s a one-to-one connection from one piece of training data to sending a rocket to the moon. There are much more advanced and complicated things that humans have accomplished than just finding shelter under a tree or watching rolling rocks. So humans definitely go way further in their imagination [than] any simple example that I could give would illustrate.
Fair enough! So, we’ll move on to another issue here in just a minute, but I find this fascinating. Is your contention that the brain is not a Turing machine? That the brain behaves in fundamentally different ways than a computer?
I’m not an expert on how [the] human brain or how any mammal’s brain actually behave[s], so I can’t comment on all the technical aspects on how does a human brain function. I can say from observation that humans do a lot of things that machines don’t do and it’s because humans do come up with things completely from scratch. They come up with ideas out of nowhere, whereas machines don’t come up with ideas out of nowhere. They either learn very directly from the data or as you pointed out, they learn through transfer learning. So they learn from one situation, and then they transfer that learning to another situation.
So, I often ask people on the show when they think we will get a general intelligence, and the answers I get range between five and five hundred years. It sounds like, not putting any words into your mouth, you’re on the farther end of that range. You think we’re pretty far away, is that true?
I do feel that it will be further out on that dimension. In fact, what I’m most fascinated by, and I kind of would love your listeners to also think about this, is [that] we talk a lot about human consciousness—we talk about how humans become creative and what that moment is of getting a new idea or thinking through a problem where you’re not just repeating something that you have seen in the past. That consciousness is a very key topic that we all think about very, very deeply, and we try to come up with good definitions for what that consciousness is. If we ever create a system which we believe can mimic or show human-consciousness-level behavior, then at the very least we would have understood what consciousness is. Today we don’t even understand it. We try to describe it in words, but we don’t have perfect words for it. With more advances in this field, maybe we will come up with a much crisper definition for consciousness. That’s my belief, and that’s my hope, that we should continue to work in this area. Many, many researchers are putting a lot of effort and thinking into this space, and as they make progress, whether it takes five years or five hundred years, we will certainly learn a lot more about ourselves in that time period.
To be clear though, there is widespread agreement on what consciousness is. The definition itself is not an issue. The definition is the experience of the world. It’s qualia. It’s the difference [between] a computer sensing, measuring temperature and a person feeling heat. And so the question becomes how could a computer ever, you know, feel pain? Could a computer feel pain? If it could, then you can argue that that’s a level of consciousness. What people don’t know is how it comes about, and they don’t even know, I think to your point, what that question looks like scientifically. So, trying to parse your words out here, do you believe we will build machines that don’t just measure the world but actually experience the world?
Yeah, I think when we say experience it is still a lower level kind of feeling where you are still trying to describe the world through almost like sensors—sensing things, sensing temperatures, sensing light. If you could imagine where all our senses were turned off, so you are not getting external stimuli and everything was coming from within. Could you still come up with an idea on your own without any stimulus? That’s a much harder thing that I’m trying to understand. As humans, we do try to strive to get to that point where you can come up with an idea without a stimulus or without any external stimuli. For machines, that’s not the bar we are holding for them. We are just holding the bar to say if there is a stimulus, will they respond to that stimulus?
So just one more question along these lines. At the very beginning when I asked you about the definition of artificial intelligence, you replied about machine learning, and you said that the computer comes to understand, and I wrote down the word “understand” on my notepad here, something. And I was going to ask you about that because you don’t actually think the computer understands anything. That’s a colloquialism, right?
Correct!
So, do you believe that someday a computer can understand something?
I think for now I will say computers just learn. Understanding, as you said, has a much deeper meaning. Learning is much more straightforward: you have seen some pattern, and you have learned from that pattern. Whether you understand or not is a much deeper concept, but learning is a much more straightforward concept, and today, with most of our machine learning systems, all we are expecting them to do is to learn.
Do you think that there is a quote “master algorithm?” Do you think that there is a machine learning technique that, in theory that we haven’t discovered yet, can do unsupervised learning? Like you could just point it at the internet, and it could just crawl it and end up figuring it all out, it’ll understand it all. Do you think that there is an algorithm like that? Or do you think intelligence is going to be found to be very kludgy and we are going to have certain techniques to do this and then this and then this and then this? What do you think that looks like?
I see it as a version of your previous question. Is there going to be generalized intelligence and is that going to be in five years or five hundred years? I think where we are today it is the more kludgy version where we do have machines that can scan the entire web and find patterns and it can repeat those patterns but nothing more than just repeating those patterns. It’s more like a question and answer type of machine. It is a machine that completes sentences. There is nothing more than that. There is no sense of understanding. There is only a sense of repeating those patterns that you have seen in the past.
So if you’re walking along the beach and you find a genie lamp, and you rub it, and a genie comes out, and the genie says I will give you one wish: I will give you vastly faster computers, vastly more data or vastly better algorithms. What would you pick? What would advance the science the most?
I think you hit the nail on the head by naming the three things we need to improve machine learning: we need more and better data, we need more computing power, and we need better algorithms. In the state of the world as I experience it today within the field of machine learning and data science, usually our biggest bottleneck, the biggest hurdle, is data. We would certainly love to have more computational power. We would certainly pick much better and faster algorithms. But if I could ask for only one thing, I would ask for more training data.
So there is a big debate going on about the implication that these technologies are going to have on employment. I mean you know the whole setup as do the listeners, what’s your take on that?
I think as a whole our economy is moving into much more specialized jobs, where people are doing something which is more specialized rather than something which is repetitive and very general or simple. Machine learning systems are certainly taking a lot of repetitive tasks away. So tasks that a human repeats, like a hundred times a day, those simpler tasks are definitely getting automated. But humans, coming back to our earlier discussion, do show a lot of creativity and ingenuity and intuition. A lot of jobs are moving in the direction where we are relying on human creativity. So as a whole, for the whole economy and for everybody around us, I feel the future is pretty bright. We have an opportunity now to apply ourselves to do more creative things than just repetitive things, and machines will do the repetitive things for us. Humans can focus on doing more creative things, and that brings more joy and happiness and satisfaction and fulfillment to every human than just doing repetitive tasks, which become very mundane and not very exciting.
You know, Vladimir Putin famously said, I’m going to paraphrase it here, that whoever dominates in AI will dominate the world. There is this view from some who want to weaponize the technology, who see it strategically, you know, in this kind of great geopolitical world we live in. Do you worry about that, or are you like, well, you could say that about every technology—like metallurgy, you can say about metallurgy that whoever controls metallurgy controls the future—or do you think AI is something different and it will really reshape the geopolitical landscape of the world?
So, I mean as you said, every technology is definitely weaponized, and we have seen many examples of that, not just going back a few decades. We have seen that for thousands of years where a new technology comes up and as humans we get very creative in weaponizing that technology. I do expect that machine learning and AI will be used for these purposes, but like any other technology in the past, no one technology has destroyed the world. As humans we come up with ways and interesting ways to still reach an equilibrium, to still reach a world of peace and happiness. So while there will be challenges and AI will create problems for us in the field of weapon technology, I think that I would still kind of bet that humans will find a way to create equilibrium out of this disruptive technology and this is not the end of the world, certainly not.
You’re no doubt familiar with the European initiatives that say when an artificial intelligence makes a decision that affects you—it doesn’t give you a home mortgage or something like that—you have a right to know why it made that decision. You’re an advocate, it seems, of the idea that that is both possible and desirable. Can you speak to that? Why do you think that’s possible?
So, if I understand the intent of your question, the European Union and probably all the jurisdictions around the world have put a lot of thought into a) protecting human privacy and b) making that information more transparent and available to all the humans. I think that is truly the intent of the European regulation, as well as similar regulation in many other parts of the world, where we want to make sure we protect human privacy and we give humans an opportunity to either opt out or understand how their data or that information is being used. I think that’s definitely the right direction. So if I understand your question, I think that’s what Entelo as a company is looking at. Every company that is in the space of AI and machine learning is also looking at creating that respectful experience where, if any human’s data is used, it’s done in a privacy-sensitive manner, and the information is very transparent.
Well, I think I might be asking something rather poorly, it seems, [or] slightly different. Let me use Google as an example. If I have a company that sells widgets and I have a competitor—and they have a company that sells widgets, and there are ten thousand other companies that sell widgets—and if you search for widget in Google, my competitor comes up first, and I come up second, [then] I say to Google, “why am I second and they are first?” I guess I kind of expect Google’s response to be, “what are you talking about?” It’s like, who knows? There are so many things, so many factors, who knows! And yet that’s a decision that AI made that affected my business. There’s a big difference between being number one and number two in the widget business. So if you say now every decision that it makes you’ve got to be able to explain why it made that decision, it feels like it puts shackles on the progress of the industry. Any comment?
Right, I think I understand your question better now. So that burden is on all of us, I think, because it is a slippery slope where, as artificial intelligence algorithms and machine learning algorithms become more and more complex, it becomes harder to explain those algorithms, so that’s a burden that we all carry. Anybody who is using artificial intelligence, and nowadays it’s pretty much all of us; if we think about it, which company is not using AI and ML? Everybody is using AI and ML. It is a responsibility for everybody in this field to try to make sure that they have a good understanding of their machine learning models and artificial intelligence models [so] that you can start to understand what triggers certain behavior. Every company that I know of, and I can’t speak for everybody, but based on my knowledge, is certainly thinking about this, because you don’t want to put any machine learning algorithm out there when you can’t even explain how it works. So we may not have a perfect understanding of every machine learning algorithm, but we certainly strive to understand it as best as we can and explain it as clearly as we can. So that’s a burden we all carry.
You know, I’m really interested in the notion of embodying these artificial intelligences. So you know one of the use cases is that someday we’ll have robots that can be caregivers for elderly people. We can talk to them, and over time they learn to laugh at the person’s jokes, learn to tell jokes like the ones the person tells, and emote along when the person is telling some story about the past, “oh, it’s a beautiful story,” and all of that. Do you think that’s a good thing or a bad thing? To build that kind of technology that blurs the lines between a system that, as we were talking about earlier, truly understands, as opposed to a system that just learns how to, let’s just say, manipulate the person?
Yeah, I think right now my understanding is more in the field of learning than just full understanding, so I’ll speak from my area of knowledge and expertise [where] our focus is primarily on learning. Understanding is something that I think we as the community and researchers will definitely look at. But as far as most of the systems that exist today and most of the systems that I can foresee in the near future, they are more learning systems; they are not understanding systems.
But even a really simple case—you know I have the device from Amazon that if I say its name right now it’s going to, you know, start talking to me, right? And when my kids come into the studio and ask a question of it, once they get the answer [and] they can tell the answer is not what they’re looking for, they just tell it, you know, to be quiet. You know I have to say it somehow doesn’t sit right with me to hear them cut off something that sounds like a human like that—something that would be rude in any other [context]. So, does that worry you? Is that teaching? Am I just an old fuddy-duddy at this point? Or does that somehow numb their empathy with real people and they really would be more inclined to say that to a real person now?
I think you are asking a very deep question here as to do we as humans change our behavior and become different as we interact with technology? And I think some of that is true!
Yeah!
Some of that is true for sure. Like when you think about SMS when it came out as a technology, like 25 years ago, and we started texting each other: the way we would write texts was different from how we would write handwritten letters. By the standards of, let’s say, 30 years ago, the texts were very impolite; they would have all kinds of spelling mistakes, they would not address people properly, and they would not really end with the proper punctuation and things like that. But as a technology it evolved, and it is still seen as useful to us, and we as humans are comfortable with adapting to that technology. Every new technology, whether it is a smart speaker or texting on cell phones, will introduce new forms of communication, new forms of interaction. But a lot of human decency and respect comes from us, not just from how we interact with a speaker or on a text pad. A lot of it comes from much deeper-rooted beliefs than just an interface. So I do feel that while we’ll adapt to new, different interfaces, a lot of human decency will come from a much deeper place than just the interface of the technology.
So you hold a Ph.D. in computer security risk management. When I have a guest on the show, sometimes I ask them, “what is your biggest worry?” or “is security really, you know, an issue?” And they all say yes. They’re like, okay, we’re plugging in 25 billion IoT devices, none of which, by the way, can we upgrade the software on. So you’re basically cementing in whatever security vulnerabilities you have. And you know [of] all the hacks that get reported in the industry, in the news—stories of election interference, all this other stuff. Do you believe that the concern for security around these technologies is, in the popular media, overstated, understated or just about right?
I would say it’s just about right. I think this is a very serious issue: as more and more data is out there and more and more devices are out there, as you mention, a lot of IoT devices as well, the importance of this area has only grown over time and will continue to grow. So it deserves due attention in this conversation, in our conversation, in any conversation. I think bringing it into [the] limelight, drawing attention to this topic, and making everybody think deeply and carefully about it is the right thing, and I believe we are certainly not doing any fearmongering. All of these are justified concerns, and we are spending our time and energy on them in the right way.
So, just talking about the United States for a moment, because I’m sure all of these problems are addressed [at] a national level differently in different countries. So just talking about the US for a minute, how do you think we’ll solve it? Do you just say, well, we’ll keep the spotlight on it and hope that the businesses themselves see that they have an incentive to make their devices secure? Or do you think that the government should regulate it? How would you solve the problem now if you were in charge?
Sure! First of all, I am not in charge, but I do feel that there are three constituents in this. First [are] the creators of technology: when you are creating an IoT device or any kind of software system, the responsibility is on the creator to think about the security of the system they are creating. The second constituent is the users, which [are] the general public and the customers of that technology. They put the pressure on the creator that the technology and the system should be safe. So if you don’t create a good system, a safe system, you will have no buyers and users for it. So people will vote with their feet, and they will hold the company or the creators of technology accountable. And as you mentioned, there is a third constituent, and that is the government or the regulator. I think all three constituents have to play a role. It’s not any one stakeholder that can decide whether the technology is safe or good enough. It’s an interplay between the three constituents here. So the creators of technology, which [are the] companies, research labs, [and] academic institutions, have to think very deeply about security. The users of technology definitely hold the creators accountable, and the regulators play an important role in keeping the overall system safe. So I would say it’s not any one person or any one entity that can make the world safe. The responsibility is on all three.
So let me ask Gaurav the person a question. You got this Ph.D. in computer security and risk management. What are some things that you personally do because of your concerns about security? For instance, do you have a piece of tape over your webcam? Or are you like, I would never hook up a webcam? Or, I never use the same password twice. What are some of the things that you do in your online life to protect your security?
So, I mean, you mention all those good things, like not reusing passwords, but one thing which I have always mentioned to my friends and my colleagues, and I would love to share it with your listeners, is: think about two-factor authentication. Two-factor authentication means that, in addition to a password, you are using a second means of authentication. So if you have a banking website, or a broker website, or for that matter even your email system, it’s a good tactic to have two-factor authentication, where you enter your password, but in addition to your password the system requires you to use a second factor, and the second factor could be that it sends you a text message on your phone with a code, and then you have to enter that code into the website or into the software. So two-factor authentication is many, many times more secure than one-factor authentication, where you just enter a password, and the password can get stolen or breached and hacked. Two-factor is a very good security practice, and almost all companies and most of the creators of technology now support two-factor authentication, so the world can move in that direction.
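Many authenticator-app second factors work by deriving a short-lived code from a secret shared between the site and your phone. As a rough illustration of the idea, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) in Python; the secret shown is a made-up example, not a real credential, and real systems should use vetted libraries rather than hand-rolled code.

```python
# Minimal TOTP sketch (RFC 6238), illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the phone app share the secret once (e.g. via a QR code);
# afterwards both can compute the same short-lived code independently.
shared_secret = "JBSWY3DPEHPK3PXP"   # example secret, not a real credential
print(totp(shared_secret))
```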
So, up until November you were the head of data science and growth at Google Cloud, and now you are the VP of Product at Entelo. So two questions: one, in your personal journey and life, why did you decide now is the time to go do something different, and then, what about Entelo got you excited? Tell us the Entelo story and what that’s all about.
Thanks for asking that. So Entelo is in the space of recruiting automation. The idea is that recruiting candidates has always been a challenge; it’s hard to find the right fit for your company. Long ago we would put classified ads in the newspaper, and then technology came along, and we could post jobs on our website, we could post jobs on job boards, and that certainly helped in broadcasting your message to a lot of people so that they could apply for your job. But when you are recruiting, people who apply for your job are only one means of getting good people into your company. You also have to sometimes reach out to candidates who are not looking for a job, who are not applying for a job on your website or on a job board; they’re just happily employed somewhere else. But they are so good for the role you have that you have to go and kind of tap on their shoulder and say, would you be interested in this new role, in this new career opportunity for you? Entelo creates that experience. It automates the whole recruiting process, and it helps you find the right candidates who may not apply on your website or apply on a job board, who are not even looking for a job. It helps you identify those candidates, and it helps you engage with those candidates—to reach out to them, tell them about your role and see if they are interested in your role, and then engage them further in the recruiting process. All of this is powered by a lot of data and a lot of AI and, as we discussed earlier, a lot of machine learning.
And so, I’ve often thought that what you’re describing—so AI has done really well at playing games because you’ve got these rules and you’ve got points, and you’ve got winners and all of that. Is that how you think of this? In a way, like, you have successful candidates at your company and unsuccessful candidates at your company and those are good points and bad points? So you’re looking for people that look like your successful candidates more. On an abstract, conceptual level how do you solve that problem?
I think you’re definitely describing the idea where not everybody is a good fit for your company and some people are a good fit. So the question is, how do you find the good fit? How do you learn who is a good fit and who is not? Traditionally, recruiters have been combing through lots and lots of resumes. I mean, if you think back decades ago, a recruiter would have to see a hundred or a thousand resumes stacked on their desk, and then they would go through each one of them to say whether this is a fit or not. Then about 20 years or so ago we had a lot of keyword search engines developed, where as a human you don’t have to read the thousand resumes. Let’s just do a keyword search and say if any of these resumes has this word, and if it has the word then it is a good resume, and if it doesn’t have that word, then it’s not a good resume. That was a good innovation for scoring resumes or finding resumes, but it’s very imperfect because it’s susceptible to many problems. It’s susceptible to the problem where resumes get stuffed with keywords. It is susceptible to the problem that there is more to a person and more to a resume than just keywords.
Today, the technology that we have for identifying the right candidate is still basically keyword search on almost every recruiting platform. What a recruiter would do is say, “I can’t look through a thousand or a million resumes, let me just do a keyword search.” Entelo is trying to take a very different approach. Entelo is saying, “let’s not think about just keyword search; let’s think about who is [the] right fit for a job.” When you as a human look at a resume, you don’t do [a] keyword search; computers do [a] keyword search. In fact, if I were to challenge you, or propose that I put a resume in front of you for an office manager you’re hiring for your office, you would probably scan that resume, you would have some heuristics in mind, you would look through some information and then say that yes, this is a good resume or not a good resume. I can bet you are not going to do a keyword search on that resume and say, “oh, it has the word office, and it has the word manager, and it has the word furniture in it, so it’s a good resume for me.”
There is a lot that happens in the minds of recruiters where they think through, is this person a good fit for this role? We are trying to learn from that recruiter experience, where they don’t have to look through hundreds and thousands of resumes, nor do they have to do [a] keyword search. We can learn from that experience of which resumes are a good fit for this role and which are not, find that pattern, and then surface the right candidates. And we take it a step further. We reach out to those candidates, engage those candidates, and then the recruiter only sees the candidates that are interested, so they don’t have to think about, okay, now do I have to do a keyword search in a million resumes and try to reach out to a million candidates? All of that process gets automated through the system that we have built here at Entelo and the system that we are further developing.
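The contrast between keyword matching and learning fit from a recruiter’s past decisions can be sketched in a few lines of Python with scikit-learn. This is only an illustrative toy with made-up resumes and labels; it is not Entelo’s actual system or data, just the general idea of scoring new resumes from labeled examples rather than from the presence of a single word.

```python
# Hypothetical sketch: keyword match vs. a model learned from past decisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Resumes the recruiter has already judged (1 = good fit, 0 = not a fit).
past_resumes = [
    "office manager, ran front desk and vendor contracts for 40-person startup",
    "warehouse shift lead, forklift certified, inventory audits",
    "executive assistant, calendaring, travel, office operations",
    "line cook, food prep, kitchen safety",
]
past_labels = [1, 0, 1, 0]

# Keyword search: a crude yes/no on whether a word appears.
def keyword_match(resume: str, keyword: str = "office") -> bool:
    return keyword in resume.lower()

# Learned scoring: the model weighs many terms at once from the labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_resumes, past_labels)

new_resume = "administrative coordinator, managed supplies, scheduling, onboarding"
print(keyword_match(new_resume))                # False: no literal "office"
print(model.predict_proba([new_resume])[0][1])  # fit score learned from examples
```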
So at what level is it training? For instance, if you have, you know, Bob’s House of Plumbing across the street from Jill’s House of Plumbing, and both are looking for an office manager, and they both [have] 27 employees, do you say that their pools are exactly the same? Or is there something about Jill and her 27 employees that’s different from Bob and his 27 employees, which means that they don’t necessarily get one-for-one the exact same candidates?
Yeah, so historically most of the systems were built where there was no fit or contextual information and no personalization. It was whether Bob does the search or Jill does the search, they would get the exact same search results. Now we are moving in that direction of really understanding the fit for Bob’s company and really understanding the fit for Jill’s company so that they get the right candidate for them because one candidate is not right for everybody and one job is not right for every candidate. It is that matching between the candidate and the job.
Another aspect to think about, of why using a system is sometimes better than just relying on one person’s opinion, is that if it was one recruiter who was just deciding who’s a good fit for Bob’s company or Jill’s company, that recruiter may have their own bias, and whether we like it or not, many times all of us tend to have unconscious bias. This is where the system or the machine tends to have much better performance than a human, because it’s learning across many humans rather than learning from only one human. If you were learning by copying one human, you would pick up all of their bias, but if you learn across many humans as opposed to a single person, you tend to be less biased, or at least you tend to average out, as opposed to being very biased from one recruiter’s point of view. So that’s another reason why this system performs better than just relying on Bob’s individual judgment or Jill’s individual judgment.
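As a rough numerical illustration of that averaging argument, here is a small Python toy with made-up numbers: each simulated recruiter rates the same candidate with their own idiosyncratic offset, and pooling many ratings lands much closer to the underlying fit than any single opinion. It is not a model of any real recruiting data.

```python
# Toy illustration: averaging over many raters washes out individual bias.
import random

random.seed(0)
true_fit = 0.7  # the candidate's "true" fit score (invented for the example)

# Each recruiter sees the candidate through their own bias plus some noise.
recruiter_biases = [random.uniform(-0.3, 0.3) for _ in range(50)]
ratings = [true_fit + b + random.gauss(0, 0.05) for b in recruiter_biases]

single_opinion = ratings[0]                    # one recruiter: bias fully included
pooled_opinion = sum(ratings) / len(ratings)   # many recruiters: biases mostly cancel
print(round(single_opinion, 2), round(pooled_opinion, 2))
```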
It’s interesting, it sounds like a really challenging thing. As you were telling the story about looking for an office manager, and there are things when you’re scanning that you’re looking for, and it’s true that there is most often some form of an abstraction, because if my company needs an office manager for an emergency room, I’m looking for people who have been in high-stress situations before. Or if my company is, you know, a law firm I’m looking for people who have a background in things that are very secure and where privacy’s super important. Or if it’s a daycare, I maybe want somebody who’s got a background of things dealing with kids or something, so they’re always kind of like one level abstracted away, and so I bet that’s really hard to extract that knowledge. I could tell you I need somebody who can handle the pace at which we move around here, but for the system to learn that sounds like a real challenge, not beyond machine learning or anything, but it sounds like that’s a challenge. Is it?
Yes, you’re absolutely right. It is a challenge, and we have just recently launched a product called Entelo Envoy that’s trying to learn what’s good for your situation. So what Entelo Envoy will do is find the right candidates for your job posting or your job description, send them to you, and then learn from you as you accept or reject certain candidates. You say that this candidate is overqualified or comes from a different industry. As you categorize those as fit and non-fit, it learns, and then over time it starts sending you candidates that are much more fine-tuned to your needs. But the whole premise of the system is, initially it’s trying to find information that’s relevant for you: you are looking for office managers, so you should get office manager resumes and not people who are nurses or doctors. So that’s the first element. The second element is, let’s remove the bias, because if humans were to say, well, we want to have only males or only females, let’s remove that bias and let the system be unbiased in finding the right candidate. And then at the third level, if we do have more contextual information, as we pointed out, that we are looking for experience in a high-stress situation, then we can fine-tune Entelo Envoy to get the third degree of personalization, or the third degree of matching. You may want to look for people who have expertise in child care because your office happens to be the office for a daycare. Then there is a third level of tuning that you need to do at the system level. Entelo Envoy allows you to do that third level of tuning. It’ll send you candidates, and as you approve and reject those candidates, it will learn from your behavior and fine-tune itself to find you the perfect match for the position that you are looking for.
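That accept/reject feedback loop is, at its core, online learning: each decision becomes a new labeled example that nudges the model. Here is a minimal, hypothetical Python sketch of that pattern with scikit-learn; the resumes, labels, and feature choices are invented for illustration, and this is not Entelo Envoy’s actual implementation.

```python
# Hypothetical sketch of learning from a stream of accept/reject feedback.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)  # stateless, streaming-friendly
model = SGDClassifier()                            # linear model updated one example at a time

feedback_stream = [
    ("office manager, high-volume clinic front desk", 1),     # recruiter accepted
    ("software engineer, kernel development", 0),              # recruiter rejected
    ("operations coordinator, emergency dispatch center", 1),  # recruiter accepted
]

for resume_text, accepted in feedback_stream:
    X = vectorizer.transform([resume_text])
    model.partial_fit(X, [accepted], classes=[0, 1])  # learn from each decision

candidate = "administrative lead, fast-paced urgent care office"
print(model.predict(vectorizer.transform([candidate])))  # tuned by the feedback so far
```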
You know this is a little bit of a tangent, but when I talk to folks on the show about is there really this like huge shortage of people with technical skills and machine learning backgrounds, they are all like “oh yeah, it’s a real problem.” I assume to them it’s like, “I want somebody with a machine learning background, and oh they need to have a pulse, other than that I’m fine.” So is that your experience that people with these skills are, right now, in this like incredibly high demand?
You’re absolutely right, there is high demand for people [with] machine learning skills, but I have been building products for many years now, and I know that to build a good product, to make any good product, you need a good team. It’s not about one person. Intuitively, we have all known that whether you are in machine learning or finance or the medical field or healthcare, it takes a team to accomplish a job. When you are working in an operating theatre on a patient, it’s not only the doctor that matters; it’s the whole team of people that makes an operation successful. The same goes for machine learning systems. When you are building a machine learning system, it’s a team of people working together. It’s not only one engineer or one person or one data scientist that makes all of that possible. So it matters to create the right team, a team that work[s] well together, respect[s] each other, and build[s] on each other’s strengths; with a team that’s constantly fighting with each other, you will never accomplish anything. So you’re right, there is a high demand for people in the field of machine learning and data science. But every company and every project requires a good team, and you want the right fit of people for that team, rather than just individually good people.
So, in a sense, Entelo may invert that setup where you started, where you post the job and get a thousand resumes. You may be somebody like a machine learning guru and get a thousand companies that want you. So will that happen? Do you think that people with high-demand skills will get heavily recruited by these systems in kind of an outreach way?
I think it comes back to: if all we were doing was keyword search, then you’re right. I mean, one resume looks good because it has all the right keywords, but we don’t do that. When we hire people onto our teams, we are not just doing [a] keyword search. We want to find the person who is the right fit for the team, a person who has the skills, attributes, and understanding. It may be that you want someone who is experienced in your industry. It may be that you want someone who has worked on a small team. Or you want someone who has worked in a startup before. So I think there are many, many dimensions in which candidates are found by companies, and a good match happens. So I feel like it’s not only one candidate who gets surfaced to a thousand companies and has a thousand job offers. It’s usually that every candidate has the right fit, every role has the right need for the right candidate, and it’s that matching of candidate and role that creates a win-win situation for the entire office.
Well, I do want to say, you know, you’re right that this is one of those areas where we still do it largely the old-fashioned way. Somebody looks at a bunch of people and, you know, makes a gut call. So I think you’re right that it’s an area where technology can be deployed to really increase efficiency, and what better place to increase efficiency than building your team, as you said. So I guess that’s it! We are running out of time here. I would like to thank you so much for being on the show and wish you well in your endeavor.
Thank you, Byron. Thanks for inviting me and thank you to your listeners for humoring us.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Are Low-skilled Jobs More Vulnerable to Automation?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. There is a common assumption that it will be low-skilled jobs which are first automated, but is that really how automation will change the job market? In this excerpt from The Fourth Age, Byron Reese explores which sorts of jobs are most vulnerable to automation.


The assumptions that low-skilled workers will be the first to go and that there won’t be enough jobs for them undoubtedly have some truth to them, but they require some qualification. Generally speaking, when scoring jobs for how likely they are to be replaced by automation, the lower the wage a job pays, the higher the chance it will be automated. The inference usually drawn from this phenomenon is that a low-wage job is a low-skill job.
This is not always the case. From a robot’s point of view, which of these jobs requires more skill: a waiter or a highly trained radiologist who interprets CT scans? A waiter, hands down. It requires hundreds of skills, from spotting rancid meat to cleaning up baby vomit. But because we take all those things for granted, we don’t think they are all that hard. To a robot, the radiologist job, by comparison, is a cakewalk. It is just data in, probabilities out.
This phenomenon is so well documented that it has a name, the Moravec paradox. Hans Moravec was among those who noted that it is easier to do hard, brainy things with computers than “easy” things. It is easier to get a computer to beat a grandmaster at chess than it is to get one to tell the difference between a photo of a dog and a cat.
Waiters’ jobs pay less than radiologists’ jobs not because they require fewer skills, but because the skills needed to be a waiter are widely available, whereas comparatively few people have the uncommon ability to interpret CT scans.
What this means is that the effects of automation are not going to be overwhelmingly borne by low-wage earners. Order takers at fast-food places may be replaced by machines, but the people who clean up the restaurant at night won’t be. The jobs that automation affects will be spread throughout the wage spectrum.
All that being said, there is a widespread concern that automation is destroying jobs at the “bottom” and creating new jobs at the “top.” Automation, this logic goes, may be making new jobs at the top like geneticist but is destroying jobs at the bottom like warehouse worker. Doesn’t this situation lead to a giant impoverished underclass locked out of gainful employment?
Often, the analysis you hear goes along these lines: “The new jobs are too complex for less-skilled workers. For instance, if a new robot replaces a warehouse worker, tomorrow the world will need one less warehouse worker. Even if the world also happened to need an additional geneticist, what are you going to do? Will the warehouse worker have the time, money and aptitude to train for the geneticist’s job?”
No. The warehouse worker doesn’t become the geneticist. What actually happens is this: A college biology professor becomes the new geneticist; a high-school biology teacher takes the college job; a substitute elementary teacher takes the high school job; and the unemployed warehouse worker becomes a substitute teacher. This is the story of progress. When a new job is created at the top, everyone gets a promotion. The question is not “Can a warehouse worker become a geneticist” but “Can everyone do a job a little harder than the one they currently do?” If the answer to that is yes, which I emphatically believe, then we want all new jobs to be created at the top, so that everyone gets a chance to move up a rung on the ladder of success.


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Will We Really Lose Half our Jobs to Automation?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. Many fear that with the introduction of wide-scale automation, there will be no more jobs left for humans. But is it really that dire? In this excerpt from The Fourth Age, Byron Reese explores the prospect of massive job loss due to automation.


The “jobs will be destroyed too quickly” argument is an old one as well. In 1930, the economist John Maynard Keynes voiced it by saying, “We are being afflicted with a new disease . . . technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”
In 1978, New Scientist repeated the concern:
The relationship between technology and employment opportunities most commonly considered and discussed is, of course the tendency for technology to be labour-saving and thus eliminate employment opportunities—if not actual jobs.
In 1995, the refrain was still the same. David F. Noble wrote in Progress without People:
Computer-aided manufacturing, robotics, computer inventories, automated switchboards and tellers, telecommunication technologies—all have been used to displace and replace people, to enable employers to reduce labour costs, contract-out, relocate operations.
But is it true now? Will new technology destroy the current jobs too quickly?
A number of studies have tried to answer this question directly. One of the very finest and certainly the most quoted was published in 2013 by Carl Benedikt Frey and Michael A. Osborne, both of Oxford University. The report, titled The Future of Employment, is seventy-two pages long, but what has been referenced most frequently in the media is a single ten-word phrase: “about 47 percent of total US employment is at risk.” Hey, who needs more than that? It made for juicy and salacious headlines, to be sure. It seemed as if every news source screamed a variant of “Half of US Jobs Will Be Taken by Computers in Twenty Years.”
If we really are going to lose half our jobs in twenty years, well, then the New York Times should dust off the giant type it used back in 1969 when it printed “MEN WALK ON MOON” and report the story on the front page with equal emphasis. But that is not actually what Frey and Osborne wrote. Toward the end of the report, they provide a four-hundred-word description of some of the limitations of the study’s methodology. They state that “we make no attempt to estimate how many jobs will actually be automated. The actual extent and pace of computerisation will depend on several additional factors which were left unaccounted for.”
So what’s with the 47 percent figure? What they said is that some tasks within 47 percent of jobs will be automated. Well, there is nothing terribly shocking about that at all. Pretty much every job there is has had tasks within it automated. But the job remains. It is just different.
For instance, Frey and Osborne give the following jobs a 65 percent or better chance of being computerized: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things, because much of what they do today will be automated. There won’t be any more space scientists? Pharmacists will no longer have anyone helping them?
Frey and Osborne say that the tasks of a barber have an 80 percent chance of being taken over by AI or robots. In their category of jobs with a 90 percent or higher chance of certain tasks being computerized are tour guides and carpenters’ helpers.
The disconnect is clear: some of what a carpenter’s helper does will get automated, but the carpenter helper job won’t vanish; it will morph, as almost everyone else’s job will, from architect to zoologist. Sure, your iPhone can be a tour guide, but that won’t make tour guides vanish.
Anyone who took the time to read past the introduction to The Future of Employment saw this. And to be clear, Frey and Osborne were very up-front. They stated, in scholar-speak, the following:
We do not capture any within-occupation variation resulting from the computerisation of tasks that simply free-up time for human labour to perform other tasks.
In response to the Frey and Osborne paper, the Organization for Economic Cooperation and Development (OECD), an intergovernmental economic organization made up of nations committed to free markets and democracy, released a report in 2016 that directly counters it. In this report, entitled The Risk of Automation for Jobs in OECD Countries, the authors apply a “whole job” methodology and come up with the percent of jobs potentially lost to computerization as 9 percent. That is pretty normal churn for the economy.
At the end of 2015, McKinsey & Company published a report entitled Four Fundamentals of Workplace Automation that came to similar conclusions as the OECD. But again, it had a number too provocative for the media to resist sensationalizing. The report said, “The bottom line is that 45 percent of work activities could be automated using already demonstrated technology,” which was predictably reported as variants of “45% of Jobs to Be Eliminated with Existing Technology.” Often overlooked was the fuller explanation of the report’s conclusion:
Our results to date suggest, first and foremost, that a focus on occupations is misleading. Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined, much like the bank teller’s job was redefined with the advent of ATMs.
The “47 percent [or 45 percent] of jobs will vanish” interpretation doesn’t even come close to passing the sniff test. Humans, even ones with little or no professional training, have incredible skills we hardly ever think about. Let’s look closely at two of the jobs at the very top of Frey and Osborne’s list: short-order cook and waiter. Both have a 94 percent chance of being computerized.
Imagine you own a pizza restaurant that employs one cook and one waiter. A fast-talking door-to-door robot salesman manages to sell you two robots: one designed to make pizzas and one designed to take orders and deliver pizzas to tables. All you have to do is preload the food containers with the appropriate ingredients, and head off to Bermuda. The robot waiter, who understands twenty languages, takes orders with amazing accuracy, and flawlessly handles special requests like “I want half this, half that” and “light on the sauce.” The orders are sent to the pizza robot, who makes the pizza with speed and consistency.
Let’s check in on these two robots on their first day of work and see how things are going:

  • A patron spills his drink. The robots haven’t been taught to clean up spills, since this is a surprisingly complicated task. The programmers knew this could happen, but the permutations of what could be spilled and where were too hard to deal with. They promised to include it in a future release, and in the meantime, to program the robot to show the customers where the cleaning supplies are kept.
  • A little dog, one of those yip-yips, comes yipping in and the waiter robot trips and falls down. Having no mechanism to right itself, it invokes the “I have fallen and cannot get up” protocol, which repeats that phrase over and over with an escalating tone of desperation until someone helps it up. When asked about this problem, the programmers reply, snappishly, that “it’s on the list.”
  • Maggots get in the shredded cheese. Maggoty pizza is served to the patrons. All the robot is trained to do with customers unhappy with their orders is to remake their pizzas. More maggots. The robots don’t even know what maggots are.
  • A well-meaning pair of Boy Scouts pop in to ask if the pipe jutting out of the roof should be emitting smoke. They say they hadn’t noticed it before. Should it be? How would the robot know?
  • A not-well-meaning pair of boys come in and order a “pizza with no crust” to see if the robots would try to make it and ruin the oven. After that, they order a pizza with double crust and another one with twenty times the normal amount of sauce. Given that they are both wearing Richard Nixon masks, the usual protocol of taking photographs of troublesome patrons doesn’t work and results only in a franchise-wide ban of Richard Nixon at affiliated restaurants.
  • A patron begins choking on a pepperoni. Thinking he must be trying to order something, the robot keeps asking him to restate his request. The patron ends up dying right there at his table. After seeing no motion from him for half an hour, the robot repeatedly runs its “Sleeping Patron” protocol, which involves poking the customer and saying, “Excuse me sir, please wake up” repeatedly.
  • The fire marshal shows up, seeing the odd smoke from the pipe in the roof, which he hadn’t noticed before. Upon discovering maggot-infested pizza and a dead patron being repeatedly poked by a robot, he shuts the whole place down. Meanwhile, you haven’t even boarded your flight to Bermuda.

This scenario is, of course, just the beginning. The range of things the robot waiter and cook can’t do is enough to provide sitcom material for ten seasons, with a couple of Christmas specials thrown in. The point is that those who think so-called low-skilled humans are easy targets for robot replacement haven’t fully realized what a magnificently versatile thing any human being is and how our most advanced electronics are little more than glorified toaster ovens.
While it is clear that we will see ever-faster technological advances, it is unlikely that they will be different enough in nature to buck our two-hundred-year run of plenty of jobs and rising wages. In one sense, no technology really compares to mechanization, electricity, or steam engines in impact on labor. And those were a huge win for both workers and the overall economy, even though they were incredibly disruptive.


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Are There Robot-Proof Jobs?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. Many fear that with the introduction of wide-scale automation, there will be no more jobs left for humans. But is it really that dire? In this excerpt from The Fourth Age, Byron Reese considers if there are jobs that will never be automated.


When I give talks about AI and robots, they are often followed by a bit of Q&A. By far, the number one question I am asked from the audience is a variant of, “What should my kids be studying today to make sure that they are employable in the future?” As a dad with four kids under twenty, I too have pondered this question at length.
If possibility one is true—that is, if robots take all the jobs—then the prediction of the author Warren G. Bennis will also have come true, that “the factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.” In other words, there would be no robot-proof jobs.
But if possibility two or possibility three comes to pass, then there will be robot-proof jobs. What will they be? A good method for evaluating any job’s likelihood of being automated is what I call the “training manual test.” Think about a set of instructions needed to do your job, right down to the most specific part. How long is that document? Think about a posthole digger versus an electrician. The longer the instruction manual, the more situations, special cases, and exceptions exist that need to be explained. Interestingly, when surveyed, people overwhelmingly believe that automation will destroy a large number of jobs, but also overwhelmingly believe that their own job is robot-proof. In other words, most people think that the manual to do their job is large while other people’s job manuals are smaller.
The reason the training manual test works is because writing a manual on how to do a job is a bit like programming a computer or robot to do a job. In a program, every step, every contingency, every exception, needs to be thought through and handled.
One wonders if there are some jobs that can’t be written down. Could anyone write a set of instructions to compose a sonata or write a great novel? How you answered our big foundational questions probably determines what you think on this question. To those who think they are machines, who are monists, there is nothing mysterious about creativity that would keep machines from mastering it, whereas those on the other side of that gulf see creativity as a special, uniquely human ability.
Below are several groups of jobs that, regardless of your beliefs about the capabilities of robots, should be stable for a long time.
Jobs Robots Can Do but Probably Never Will: Some jobs are quite secure and are accessible to a huge range of the population, regardless of intellect, educational attainment, or financial resources, because although a robot could do them, it doesn’t make economic sense for them to do so. Think of all of the jobs people will need for the next hundred years, but only very occasionally.
I live in a home built in the 1800s that contains several fireplaces. I wanted to be able to use them without constantly wondering if I was going to burn the house down, so I called in “the guy” for old fireplace restoration. He took one look at them and started spouting off how they clearly hadn’t been rebuilt in the nineteen-something when some report came out in England that specified blah-blah-blah better heat reflection blah-blah-blah. Then he talked about a dozen other things relating to fireplaces that I tuned out because clearly this man knew more about fireplaces than anyone else I would ever meet, or he was a convincing enough pathological liar that I would never figure him out. Either way, the result is the same: I hired him to make my fireplaces safe. He is my poster child of a guy who isn’t going to be replaced by a robot for a long time. His grandkids can probably retire from that business.
There are many of these jobs: repairing antique clocks, leveling pier-and-beam houses, and restoring vintage guitars, just to name a few. Just make sure the object you’re working on isn’t likely to vanish. Being the best VCR repairman in the world is not a career path I would suggest.
Jobs We Won’t Want Robots to Do: There are jobs that, for a variety of reasons, we wouldn’t want a machine to do. This case is pretty straightforward. NFL football player, ballerina, spirit guide, priest, and actor, just to name a few. Additionally, there are jobs that incorporate some amount of nostalgia or quaintness, such as blacksmith or candlemaker.
Unpredictable Jobs: Some jobs are so unpredictable that you can’t write a manual on how to do them, because the nature of the job is inherently unpredictable. I have served as the CEO of several companies, and my job description was basically: Come in every morning and fix whatever broke and seize whatever opportunities presented themselves. Frankly, much of the time I just winged it. I remember one day I reviewed a lease agreement, brainstormed names for a new product, and captured a large rat that fell through a ceiling tile onto an employee’s desk. If there was a robot that could do all of that, I’d put down a deposit on it today.
Jobs That Need a High Social IQ: Some jobs require high-level interaction with other people, and they usually need superior communication abilities as well. Event planner, public relations specialist, politician, hostage negotiator, and director of social media are just a few examples. Think of jobs that require empathy or outrage or passion.
Jobs Done On-Site: On-site jobs will be difficult for robots to do. Robots work well in perfectly controlled environments, such as factories and warehouses, and not in ad hoc environments like your aunt Sue’s attic. Forest rangers and electricians are a couple of jobs like this that come to mind, but there are many more.
Jobs That Require Creativity or Abstract Thinking: It will be hard if not impossible for computers to be able to do jobs that require creativity or abstract thinking, because we don’t really even understand how humans do these things. Possible jobs include author (yay!), logo designer, composer, copywriter, brand strategist, and management consultant.
Jobs No One Has Thought of Yet: There are going to be innumerable new jobs created by all this new technology. Given that a huge number of current jobs didn’t exist before 2000, it stands to reason that many more new professions are just around the corner. The market research company Forrester forecasts that within the next decade, an astonishing 12.7 million new US jobs will be created building robots and the software that powers them.


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Voices in AI – Episode 37: A Conversation with Mike Tamir

[voices_in_ai_byline]
In this episode, Byron and Mike talk about AGI, Turing Test, machine learning, jobs, and Takt.
[podcast_player name=”Episode 37: A Conversation with Mike Tamir” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-03-27-(00-55-21)-mike-tamir.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. I’m excited today, our guest is Mike Tamir. He is the Chief Data Science Officer at Takt, and he’s also a lecturer at UC Berkeley. If you look him up online and read what people have to say about him, you notice that some really, really smart people say Mike is the smartest person they know. Which implies one of two things: Either he really is that awesome, or he has dirt on people and is not above using it to get good accolades. Welcome to the show, Mike!
Mark Cuban came to Austin, where we’re based, and gave a talk at South By Southwest where he said the first trillionaires are going to be in artificial intelligence. And he said something very interesting, that if he was going to do it all over again, he’d study philosophy as an undergrad, and then get into artificial intelligence. You studied philosophy at Columbia, is that true?
I did, and also my graduate degree, actually, was a philosophy degree, cross-discipline with mathematical physics.
So how does that work? What was your thinking? Way back in the day, did you know you were going to end up where you were, and this was useful? That’s a pretty fascinating path, so I’m curious, what changed, you know, from 18-year-old Mike to today?
[Laughs] Almost everything. So, yeah, I think I can safely say I had no idea that I was going to be a data scientist when I went to grad school. In fact, I can safely say that the profession of data science didn’t exist when I went to grad school. I, like a lot of people who joined the field around when I did, kind of became a data scientist by accident. My degree, while it was philosophy, was fairly technical. It made me more focused on mathematical physics and helped me learn a little bit about machine learning while I was doing that.
Would you say studying philosophy has helped you in your current career at all? I’m curious about that.
Um, well, I hope so. It was very much a focus thing, the philosophy of science. So I think back all the time when we are designing experiments, when we are putting together different tests for different machine learning algorithms. I do think about what is a scientifically-sound way of approaching it. That’s as much the physics background as it is the philosophy background. But it certainly does influence, I’d say daily what we do in our data science work.
Even being a physicist that got into machine learning, how did that come about?
Well, a lot of my graduate research in physics focused a little bit on neural activity, but a good deal of it was focused on quantum statistical mechanics, which really involved doing simulations and thinking about the world in terms of lots of random variables and unknowns that result in these emergent patterns. And in a lot of ways, what we do now at Takt actually involves writing a lot about group theory and how that can be used as a tool for analyzing the effectiveness of deep learning. Um, there are a lot of, at least at a high level, similarities in trying to find those superpatterns in the signal in machine learning and the way you might think about emergent phenomena in physical systems.
Would an AGI be emergent? Or is it going to be just nuts and bolts brute force?
[Laughs] That is an important question. The more I find out about successes, at least the partial successes, that can happen with deep learning and with trying to recreate the sorts of sensitivities that humans have, that you would have with object recognition, with speech recognition, with semantics, with general, natural language understanding, the more sobering it is thinking about what humans can do, and what we do with our actual, with our natural intelligence, so to speak.
So do you think it’s emergent?
You know, I’m hesitant to commit. It’s fair to say that there is something like emergence there.
You know this subject, of course, a thousand times better than me, but my understanding of emergence is that there are two kinds: there’s a weak kind and a strong one. A weak one is where something happens that was kind of surprising—like you could study oxygen all your life, and study hydrogen but not be able to realize, “Oh, you put those together and it’s wet.” And then there’s strong emergence which is something that happens that is not deconstructable down to its individual components, it’s something that you can’t actually get to by building up—it’s not reductionist. Do you think strong emergence exists?
Yeah, that’s a very good question and one that I’ve thought about quite a bit. The answer, or my answer, I think, would be that it’s not as stark as it might seem. Most cases of strong emergence that you might point to, actually, there are stories you can tell where it’s not as much of a category distinction or a non-reducible phenomenon as you might think. And that goes for things as well studied as phase transitions, and criticality phenomena in the physics realm, as it does possibly for what we talk about when we talk about intelligence.
I’ll only ask you one more question on this, and then we’ll launch into AI. Do you have an opinion on whether consciousness is a strong emergent phenomenon? Because that’s going to speak to whether we can build it.
Yeah so, that’s a very good question, again. I think that what we find out when we are able to recreate some of the—we’re really just in the beginning stages in a lot of cases—at least semi-intelligent components of what an integrated AI would look like, it shows more about the magic that we see when we see consciousness. It brings human consciousness closer to what we see in the machines rather than the other way around.
That is to say, human consciousness is certainly remarkable, and is something that feels very special and very different from what maybe imperatively constructed machine instructions are. There is another way of looking at it though, which is that maybe by seeing how, say, a deep neural net is able to adapt to signals that are very sophisticated and maybe even almost impossible to really boil down, it’s actually doing something that we might imagine our brains are doing all the time, just at a far, far larger magnitude of parameters and network connections.
So, it sounds like you’re saying it may not be that machines are somehow ennobled with consciousness, but that we discover that we’re not actually conscious. Is that kind of what you’re saying?
Yeah, or maybe something in the middle.
Okay.
Certainly, our personal experience of consciousness, and what we see when we interact with other humans or other people, more generally; there’s no denying that, and I don’t want to discount how special that is. At the same time, I think that there is a much blurrier line, is the best way to put it, between artificial, or at least the artificial intelligence that we are just now starting to get our arms around, and what we actually see naturally.
So, the show’s called Voices in AI, so I guess I need to get over there to that topic. Let’s start with a really simple question: What is artificial intelligence?
Hmm. So, until a couple years ago, I would say that artificial intelligence really is what we maybe now call integrated AI. So a dream of using maybe several integrated techniques of machine learning to create something that we might mistake for, or even accurately describe as, consciousness.
Nowadays, the term “artificial intelligence” has, I’d say, probably been a little bit whitewashed or diluted. You know, artificial intelligence can mean any sort of machine learning or maybe even no machine learning at all. It’s a term that a lot of companies put in their VC deck, and it could be something as simple as just using a logistic regression—hopefully, a logistic regression that uses gradient descent as opposed to a closed-form solution. Right now, I think it’s become kind of indistinguishable from generic machine learning.
I, obviously, agree, but, take just the idea that you have in your head that you think is legit: is it artificial in the sense that artificial turf isn’t really grass, it just looks like it? Or is it artificial in the sense that we made it? In other words, is it really intelligence, or is it just something that looks like intelligence?
Yeah, I’m sure people bring up the Turing test quite a bit when you broach this subject. You know, the Turing test is very coarsely… You know, how would you even know? How would you know the difference between something that is an artificial intelligence and something that’s a bona fide intelligence, whatever bona fide means. I think Turing’s point, or one way of thinking about Turing’s point, is that there’s really no way of telling what natural intelligence is.
And that again makes my point, that it’s a very blurry line, the difference between true or magic soul-derived consciousness, and what can be constructed maybe with machines, there’s not a bright distinction there. And I think maybe what’s really important is that we probably shouldn’t discount ostensible intelligence that can happen with machines, any more than we should discount intelligence that we observe in humans.
Yeah, Turing actually said, a machine may do it differently but we still have to say that the machine is thinking, it just may be different. He, I think, would definitely say it’s really smart, it’s really intelligent. Now of course the problem is we don’t have a consensus definition even of intelligence, so, it’s almost intractable.
If somebody asks you what’s the state of the art right now, where are we at? Henceforth, we’re just going to use your idea of what actual artificial intelligence is. So, if somebody said “Where are we at?” are we just starting, or are we actually doing some pretty incredible things, and we’re on our way to doing even more incredible things?
[Laughs] My answer is, both. We are just starting. That being said, we are far, we are much, much further along than I would have guessed.
When do you date, kind of, the end of the winter? Was there a watershed event or a technique? Or was it a gradualism based on, “Hey, we got faster processors, better algorithms, more data”? Like, was there a moment when the world shifted? 
Maybe harkening back to the discussion earlier, you know, for someone who comes from physics, there’s what we call the “miracle year,” when Einstein published his theory—a really remarkable paper—roughly just over a hundred years ago. You know, there is a miracle year and then there’s also when he finally was able to crack the code of general relativity. I don’t think we can safely say that there has been a miracle year until far, far in the future, when it comes to the realm of deep learning and artificial intelligence.
I can say that, in particular, with natural language understanding, the ability to create machines that can capture semantics, the ability of machines to identify objects and to identify sounds and turn them into words, that’s important. The ability for us to create algorithms that are able to solve difficult tasks, that’s also important. But probably at the core of it is the ability for us to train machines to understand concepts, to understand language, and to assign semantics effectively. One of the big pushes that’s happened, I think, in the last several years, when it comes to that, is the ability to represent sequences of terms and sentences and entire paragraphs, in a rich mathematically-representable way that we can then do things with. That’s been a big leap, and we’re seeing a lot of that progress with neural word embeddings and with sentence embeddings. Even as recently as a couple months ago, some of the work with sentence embeddings that’s coming out is certainly part of that watershed, and of the move from the dark ages of trying to represent natural language in an intelligible way, to where we are now. And I think that we’ve only just begun.
There’s been a centuries-old dream in science to represent ideas and words and concepts essentially mathematically, so that they can be manipulated just like anything else can be. Is that possible, do you think?
Yeah. So one way of looking at the entire twentieth century is as a gross failure in the ability to accurately capture the way humans reason in Boolean logic, in the way we represent first-order logic, or more directly in code. That was a failure, and it wasn’t until we started thinking about the way we represent language in terms of the way concepts are actually found in relation to one another, training an algorithm to read all of Wikipedia and to start embedding that with Word2vec—that’s been a big deal.
By doing that, we can now start capturing everything. It’s sobering, but we now have algorithms that can, with embedded sentences, detect things like logical implication, logical equivalence, or logical non-equivalence. That’s a huge step, and one that I think many tried to take before and failed.
Do you believe that we are on a path to creating an AGI, in the sense that what we need is some advances in algorithms, some faster machines, and more data, and eventually we’re going to get there? Or, is AGI going to come about, if it does, from a presently-unknown approach, a completely different way of thinking about knowledge?
That’s difficult to speculate. Let’s take a step back. Five years ago, less than five years ago, if you wanted to propose a deep learning algorithm for an industry to solve a very practical problem, the response you would get is stop being too academic, let’s focus on something a little simpler, a little bit easier to understand. There’s been a dramatic shift, just in the last couple years, that now, the expectation is if you’re someone in the role that I’m in, or that my colleagues are in, if you’re not considering things like deep learning, then you’re not doing your job. That’s something that seems to have happened overnight, but was really a gradual shift over the past several years.
Does that mean that deep learning is the way? I don’t know. What do you really need in order to create an artificial intelligence? Well, we have a lot of the pieces. You need to be able to observe maybe visually or with sounds. You need to be able to turn those observations into concepts, so you need to be able to do object recognition visually. Deep learning has been very successful in solving those sorts of problems, and doing object recognition, and more recently making that object recognition more stable under adversarial perturbation.
You need to be able to possibly hear and respond, and that’s something that we’ve gotten a lot better at, too. A lot of that work is being done by research labs, and there’s been some fantastic work in making it more effective. You need to be able to not just identify those words or those concepts, but also put them together, and put them together not just in isolation but in the context of sentences. So, the work that’s coming out of Stanford and some of the Stanford graduates, Einstein Labs, which is sort of at the forefront there, is doing a very good job in capturing not just semantics—in the sense of, what is represented in this paragraph and how can I pull out the most important terms?—but also doing a good job of abstractive text summarization, and, you know, being able to boil it down to terms and concepts that weren’t even in the paragraph. And you need to be able to do some sort of reasoning. Just like the example I gave before, you need to be able to use sentence embeddings to classify—we’re not there yet, but—that this sentence is related to this sentence, and this sentence might even entail this sentence.
And, of course, if you want to create Cylons, so to speak, you also need to be able to do physical interactions. All of these solutions in many ways have to do with the general genre of what’s now called “deep learning,” of being able to add parameters upon parameters upon parameters to your algorithm, so that you can really capture what’s going on in these very sophisticated, very high dimensional spaces of tasks to solve.
No one’s really gotten to the point where they can integrate all of these together. Is that going to be something that is now very generic, that we call deep learning, which is really a host of lots of different techniques that just use high-dimensional parameter spaces, or is it going to be something completely new? I wouldn’t be able to guess.
So, there are a few things you left off your list, though, so presumably you don’t think an AGI would need to be conscious. Consciousness isn’t a part of our general intelligence.
Ah, well, you know, maybe that brings us back to where we started.
Right, right. Well how about creativity? That wasn’t in your list either. Is that just computational from those basic elements you were talking about? Seeing, recognizing, combining?
So, an important part of that is being able to work with language, I’d say, being able to do natural language understanding, and to do natural language understanding at higher than the word level, at the sentence level. Certainly anything that might be mistaken for, or identified as, thinking would have to have that as a necessary component. And being able to interact, being able to hold conversations, to abstract and to draw conclusions and inferences that aren’t necessarily there.
I’d say that that’s probably the sort of thing that you would expect of a conscious intelligence, whether it’s manifest in a person or manifest in a machine. Maybe I should say manifested in a human, or manifested in a machine.
So, you mentioned the Turing test earlier. And, you know, there are a lot of people who build chatbots and things that, you know, are not there yet, but people are working on it. And I always type in one, first question, it’s always the same, and I’ve never seen a system that even gets the question, let alone can answer it.
The question is, “What’s bigger, a nickel or the sun?” So, two questions, one, why is that so hard for a computer, and, two, how will we solve that problem?
Hmm. I can imagine how I would build a chatbot, and I have worked on this sort of project in the past. One of the things—and I mentioned earlier, this allusion to a miracle year—is the advances that happened, in particular, in 2013 with figuring out ways of doing neural word embeddings. That’s so important, and one way of looking at why that’s so important is that, when we’re doing machine learning in general—this is what I tell my students, this is what drives a lot of our design—you have to manage the shape of your data. You have to make sure that the amount of examples you have, the density of data points you have, is commensurate with the amount of degrees of freedom that you have representing your world, your model.
Until very recently, there have been attempts, but none of them as successful as what we’ve seen in the last five years. The baseline has been what’s called the one-hot vector encoding, where you have a different dimension for every word in your language, usually around a million words. For a given word you have all zeros except a single one: take the first word in the dictionary, the word ‘a,’ which is spelled with the letter ‘a’; its vector is a one in the first dimension and zeros everywhere else. For the second word you have a zero, then a one, and the rest zeros. So the point here, and not to get technical, is that your dimensions are just too many.
You have millions and millions of dimensions. When we talk with students about this, it’s called the curse of dimensionality, every time you add even one dimension, you need twice as many data points in order to maintain the same density. And maintaining that density is what you need in order to abstract, in order to generalize, in order to come up with an algorithm that can actually find a pattern that works, not just for the data that it sees, but for the data that it will see.
What happens with these neural word embeddings? Well, they solve the problem of the curse of dimensionality, or at least they’ve really gotten their arms a lot further around it than ever before. They’ve enabled us to represent terms, represent concepts, not in these million dimensional vector spaces, where all that rich information is still there, but it’s spread so thinly across so many dimensions that you can’t really find a single entity as easily as you can if it were only representing a smaller number of dimensions, and that’s what these embeddings do.
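To make that contrast concrete, here is a minimal Python sketch (illustrative only, not from the interview): a tiny stand-in vocabulary, a one-hot encoder with one dimension per word, and a dense embedding table of the kind the speaker describes. The vocabulary, dimensions, and random weights are all assumptions for demonstration; real embeddings are learned from large corpora.

```python
# A minimal sketch contrasting one-hot encoding with a dense embedding lookup.
# Vocabulary, dimensions, and weights are illustrative only.
import numpy as np

vocab = ["a", "aardvark", "nickel", "sun", "pizza"]   # stand-in for ~1M words
word_to_index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """One dimension per vocabulary word: all zeros except a single one."""
    v = np.zeros(len(vocab))
    v[word_to_index[word]] = 1.0
    return v

# A dense embedding table: each word maps to a short real-valued vector.
# In practice these weights are learned (e.g., by word2vec); here they are random.
embedding_dim = 4                      # real systems use roughly 100-300 dimensions
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(word):
    return embedding_table[word_to_index[word]]

print(one_hot("nickel"))   # length == vocabulary size, almost entirely zeros
print(embed("nickel"))     # length == embedding_dim, dense and low-dimensional
```

The point of the compression is exactly the curse-of-dimensionality argument above: a handful of dense dimensions can be covered by a realistic amount of training data, while a million-dimensional one-hot space cannot.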
Now, once you have that dimensionality, once you’re able to compress them into a lower dimension, you can do all sorts of things with language that you just couldn’t do before. And that’s part of why we see this proliferation of chatbots; they probably have something like this technology. What does this have to do with your question? These embeddings, for the most part, happen not by getting instructions—well, nickels are this size, and they’re round, and they’re made of this sort of composite, and they have a picture of Jefferson stamped on the top—that’s not how you learn to mathematically represent these words at all.
What you do is you feed the algorithm lots and lots of examples of usage—you let it read all of Wikipedia, you let it read all of Reuters—and slowly but surely what happens is, the algorithm will start to see these patterns of co-usage, and will start to learn how one word follows after another. And what’s really remarkable, and could be profound, at least I know that a lot of people would want to infer that, is that the semantics kind of come out for free.
You end up seeing the geometry of the way these words are embedded. A famous example is that a king vector minus a man vector plus a woman vector equals a queen vector, and that actually bears out in how the machine can now represent the language, and it did that without knowing anything about men, women, kings, or queens. It did it just by looking at frequencies of occurrence, how those words occur next to each other. So, when you talk about nickels and the sun, my first thought, given that running start, is that the machine probably hasn’t seen a nickel and a sun in context too frequently, and one of the dirty secrets about these neural embeddings is that they don’t do as well on very low-frequency terms, and they don’t always do well in being able to embed low-frequency co-occurrences.
And maybe it’s just the fact that it hasn’t really learnt about, so to speak, it hasn’t read about, nickels and suns in context together.
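The king/queen analogy mentioned above can be shown with a few lines of Python. This is a toy illustration, not real word2vec output: the two-dimensional vectors are hand-made so the arithmetic works, whereas learned embeddings have hundreds of dimensions and come from co-occurrence statistics.

```python
# A toy illustration of the "king - man + woman ≈ queen" analogy with
# hand-made vectors (dimension 0 ≈ royalty, dimension 1 ≈ maleness).
import numpy as np

vectors = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
    "boy":   np.array([0.0,  0.9]),
    "apple": np.array([-1.0, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = vectors["king"] - vectors["man"] + vectors["woman"]

# Rank the remaining words by similarity to the analogy vector.
candidates = {w: cosine(target, v) for w, v in vectors.items()
              if w not in ("king", "man", "woman")}
print(max(candidates, key=candidates.get))   # -> "queen"
```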
So, is it an added wrinkle that, for example, you take a word like set, s-e-t, I think OED has two or three hundred definitions of it, you know—it’s something you do, it’s an object, etcetera. You know there’s a Wikipedia entry on a sentence, an eight word long grammatically correct sentence which is, “Buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo,” which contains nouns, verbs, all of that. Is there any hope that if you took all the monkeys in all the universe typing cogent and coherent sentences, would it ever be enough to train it to what a human can do?
There’s a couple things there, and one of the key points that you’re making is that there are homonyms in our language, and so work should be done on disambiguating the homonyms. And it’s a serious problem for any natural language understanding project. And, you know, there are some examples out there of that. There’s one recently which is aimed at not just identifying a word but also disambiguating the usages or the context.
There are also others, not just focused on how to mathematically-represent how to pinpoint a representation of a word, but also how to represent the breadth of the usage. So maybe imagine not a vector, but a distribution or a cloud, that’s maybe a little thicker as a focal point, and all of those I think are a step in the right direction for capturing what is probably more representative of how we use language. And disambiguation, in particular with homonyms, is a part of that.
I only have a couple more questions in this highly theoretical realm, then I want to get down to the nitty gritty. I’m not going to ask you to pick dates or anything, but the nickel and the sun example, if you were just going to throw a number out, how many years is it until I type that question in something, and it answers it? Is that like, oh yeah we could do it if we wanted to, it’s just not a big deal, maybe give it a year? Or, is it like, “Oh, no that’s kind of tricky, wait five years probably.”
I think I remember hearing once never make a prediction.
Right, right. Well, just, is that a hard problem to solve?
The nickel and the sun is something that I’d hesitate to say is solvable in my lifetime, just to give a benchmark there, violating that maxim. I can’t say exactly when, what I can say is that the speed with which we are solving problems that I thought would take a lot longer to solve, is accelerating.
To me, while it’s a difficult problem and there are several challenges, we are still just scratching the surface in natural language understanding and word representation in particular, you know words-in-context representation. I am optimistic.
So, final question in this realm, I’m going to ask you my hard Turing test question, I wouldn’t even give this to a bot. And this one doesn’t play with language at all.
Dr. Smith is eating lunch at his favorite restaurant. He receives a call, takes it and runs out without paying his tab. Is management likely to prosecute? So you have to be able to infer it’s his favorite restaurant, they probably know who he is, he’s a doctor, that call was probably an emergency call. No, they’re not going to prosecute because that’s, you know, an understandable thing. Like, that doesn’t have any words that are ambiguous, and yet it’s an incredibly hard problem, isn’t it?
It is, and in fact, I think that is the, that is one of the true benchmarks—even moreso than comparing a nickel and a sun—of real, genuine natural language understanding. It has all sorts of things—it has object permanence, it has tracking those objects throughout different sentences, it has orienting sequences of events, it has management, which is mentioned in that last sentence, which is how you would be able to infer that management is somehow connected to the management of the restaurant.
That is a super hard one to solve for any Turing machine. It’s also something we’re starting to make progress on, using LSTMs that do several passes through a sequence of sentences, and a classic artificial-sentence natural language understanding dataset—the Facebook bAbI dataset, which is actually out there to use as a benchmark for training this sort of object permanence across multi-sentence threads. And we’ve made modest gains in that. There are algorithms like the Ask Me Anything algorithm, that have shown that it’s at least possible to start tracking objects over time, and with several passes come up with the right answer to questions about objects in sentences across several different statements.
Pulling back to the here and now, and what’s possible and what’s not. Did you ever expect AI to become part of the daily conversation, just to be part of popular culture the way it is now?
About as much as I expect that in a couple years that AI is going to be a term much like Big Data, which is to say overused.
Right.
I think, with respect to an earlier comment, the sort of AI that you and I have been dancing around, which is fully-integrated AI, is not what we talk about when we talk about what’s in daily conversation now, or for the most part not what we’re talking about in this context. And so it might be a little bit of a false success, or a spurious usage of “AI,” given the frequency with which we see it.
That doesn’t mean that we haven’t made remarkable advances. It doesn’t mean that the examples that I’ve mentioned, in particular, in deep learning aren’t important, and aren’t very plausibly an early set of steps on the path. I do think that it’s a little bit of hype, though.
If you were a business person and you’re hearing all of this talk, and you want to do something that’s real, and that’s actionable, and you walk around your business, department to department—you go to HR, and to Marketing, and you go to Sales, and Development—how do you spot something that would be a good candidate for the tools we have today, something that is real and actionable and not hype?
Ah, well, I feel like that is the job I do all the time. We’re constantly meeting with new companies, Fortune 500 CEOs and C-Suite execs, and talking about the problems that they want to solve, and thinking about ways of solving them. I think a best practice is always to keep it simple. There are a host of pre-deep-learning techniques for doing all sorts of things—classification, clustering, user-item matching—that are still tried-and-true, and that should probably be done first.
And then there are now, a lot of great paths to using these more sophisticated algorithms that mean that you should be considering them early. How exactly to consider one case from the other, I think that part of that is practice. It’s actually one of the things that when I talk to students about what they’re learning, I find that they’re walking away with not just, “I know what the algorithm is, I know what the objective function is, and how to manage momentum in the right way and optimizing that function,” but also how do you see the similarity between matching users and items in the recommender, or abstracting the latent semantic association of a bit of text or with an image, and there are similarities, and certain algorithms that solve all those problems. And that’s, in a lot of ways, practice.
You know, when the consumer web first came out and it became popularized, people had, you know, a web department, which would be a crazy thought today, right? Everything I’ve read about you, everybody says that you’re practical. So, from a practical standpoint, do you think that companies ought to have an AI taskforce? And have somebody whose job it is to do that? Or, is it more the kind of thing that it’s going to gradually come department by department by department? Or, is it prudent to put all of your thinking in one war room, as it were?
So, yeah, the general question is what’s the best way to do organizational design with machine learning teams, and the first answer is there are several right ways and there are a couple wrong ways. One of those wrong ways, from the early days, is where you have a data science team that is completely isolated and is only responsible for R&D work, prototyping certain use cases, and then they, to use a phrase you hear often, throw it over the wall to engineering to go implement, because “I’m done with this project.” That’s a wrong way.
There are several right ways, and those right ways usually involve bringing the people who are working on machine learning closer to production, closer to engineering, and also bringing the people involved in engineering and production closer to the machine learning. So, overall blurring those lines. You can do this with vertical integrated small teams, you could do this with peer teams, you can do this with a mandate that some larger companies, like Google, are really focused on making all their engineers machine learning engineers. I think all those strategies can work.
It all sort of depends on the size and the context of your business, and what kind of issues you have. And depending on those variables, then, among the several solutions, there might be one or two that are most optimal.
You’re the Chief Data Science Officer at Takt, spelled T-A-K-T, and is takt.com if anybody wants to go there. What does Takt do?
So we do the backend machine learning for large-scale enterprises. So, you know, many of your listeners might go to Starbucks and use the app to pay for Starbucks coffee. We do all of the machine learning personalization for the offers, for the games, for the recommenders in that app. And the way we approach that is by creating a whole host of different algorithms for different use cases—this goes back to your earlier question of abstracting those same techniques for many different use cases—and then applying that for each individual customer. There’s the list completion use case, and the recurrent neural network approach where there’s a time series of opportunities, where you can have an interaction with an end user, learn from that interaction, and follow up with another interaction, doing things like reinforcement learning to do several interactions in a row, which may or may not get a signal back, but which has been trained to work towards that goal over time without that direct feedback signal.
This is the same sort of algorithms, for instance, that were used to train AlphaGo, to win a game. You only get that feedback at the end of the game, when you’ve won or lost. We take all of those different techniques and embed them in different ways for these large enterprise customers.
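As a generic illustration of that "feedback only at the end" idea, here is a minimal tabular Q-learning sketch in Python. It is a toy environment I made up for demonstration (a short chain where reward arrives only at the final state), not Takt's system and not AlphaGo; it simply shows how an agent can learn a sequence of actions when the reward signal is delayed to the end of an episode.

```python
# Toy Q-learning with a delayed, terminal-only reward: reach state 4 to get reward 1.
import random

N_STATES = 5            # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # reward only at the very end
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy steps right from every non-terminal state.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```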
Are you a product company, a service company, a SaaS company—how does all that manifest?
We are a product company. We do tend to focus on the larger enterprises, which means that there is a little bit of customization involved, but there’s always going to be some customization involved when it comes to machine learning. Unless it’s just a suite of tools, which we are not. And what that means is that you do have to train and apply and suggest the right kinds of use cases for the suite of tools that we have, machine learning tools that we have.
Two more questions, if I may. You mentioned Cylons earlier, a Battlestar Galactica reference to those who don’t necessarily watch it. What science fiction do you think gets the future right? Like, when you watch it or read it, or what have you, you think “Oh yeah, things could happen that way, I see that”?
[Laughs] Well, you know the physicist in me still is both hopeful and skeptical about faster-than-light travel, so I suppose that wouldn’t really be the point of your question, is more with computers and with artificial intelligence.
Right, like Her or Ex Machina or what have you.
You know, it’s tough to say which of these, like, conscious-being robots is the most accurate. I think there are scenes worth observing that already have happened. Star Trek, you know, we created the iPad way before they had them in Star Trek time, so, good for reality. We also have all sorts of devices. I remember when, in the ’80s—to date myself—the movie Star Trek came out, and Scotty gets up in front of his computer, an ’80s computer, and picks up the mouse and starts speaking into it and saying, “Computer, please do this.”
And my son will not get that joke, because he can say “Hey, Siri” or “Okay, Google” or “Alexa” or whatever the device is, and the computer will respond. I like to focus on those smaller wins, where in some cases we are able to accomplish things dramatically quicker than the forecasts. I did see an example the other day about HAL, the Space Odyssey artificial intelligence, where people were mystified that this computer program could beat a human in chess, but didn’t blink an eye that the computer program could not only hold a conversation, but had a very sardonic disposition towards the main character. That probably captures the dichotomy very well: some things we can get to very quickly, and other things that we thought were easy take quite a lot longer than expected.
Final question, overall, are you an optimist? People worry about this technology—not just the killer robots scenario, but they worry about jobs and whatnot—but what do you think? Broadly speaking, as this technology unfolds, do you see us going down a dystopian path, or are you optimistic about the future?
I’ve spoken about this before a little bit. I don’t want to say, “I hope,” but I hope that Skynet will not launch a bunch of nuclear missiles. I can’t really speak with confidence to whether that’s a true risk or just an exciting storyline. What I can say is that the displacement of service jobs by automated machines is a very clear and imminent reality.
And that’s something that I’d like to think that politicians and governments and everybody should be thinking about—in particular how we think about education. The most important skill we can give our children is teaching them how to code, how to understand how computer programs work, and that’s something that we really just are not doing enough of yet.
And so will Skynet nuke everybody? I don’t know. Is it the case that I am teaching my six-year-old son how to code already? Absolutely. And I think that will make a big difference in the future.
But wouldn’t coding be something relatively easy for an AI? I mean it’s just natural language, tell it what you want it to do.
Computers that program themselves. It’s a good question.
So you’re not going to suggest, I think you mentioned, your son be a philosophy major at Columbia?
[Laughs] You know what, as long as he knows some math and he knows how to code, he can do whatever he wants.
Alright, well we’ll leave it on that note, this was absolutely fascinating, Mike. I want to thank you, thank you so much for taking the time. 
Well thank you, this was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Will a robot take your job?

There is a vigorous debate about the effects of automation on jobs. Everyone agrees that some jobs will be lost to automation and, in turn, some jobs will be created by it. The pivotal question is how all of that nets out.

Often lost in the abstract debate is the question of exactly which jobs are likely to be automated. I have created a test to try to capture just that.
The idea is simple: Some things are quite easy for computers and robots to do, and other things are quite hard. Jobs in the “safe” category have lots of things about them that are hard for machines to do.
The good news is that it doesn’t take very many hard things to make a job, practically speaking, impervious to automation, at least in this century. While jobs like “hostage negotiator” are clearly better done by people than machines, even jobs that look like good candidates for automation have difficulties. In theory, a robot should be able to clean the windows on my home; in practice, this isn’t likely to happen for quite a long time.
The test is ten questions, and each one can be scored from 0 to 10. For each one, I give examples of some jobs at 0, 5, and 10. My examples are meant to show each extreme, and a midpoint. You should not just score with those three points. Use 7’s and 2’s and 9’s.
When you are done, the total is tallied. The closer it is to zero, the less likely you are to get a surprise announcement from the boss one day. The closer you get to 100, well, if you start to feel something breathing down your back, then that may be the cooling fan in the robot who is about to take your job.
The goal is not to find a job near a zero. Anything below a 70 is probably safe long enough for you to have a long illustrious career. There are obvious “100” jobs. The person who takes your order at a fast food restaurant is probably pretty close.
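As a quick illustration of the scoring described above, here is a minimal Python sketch. It assumes the aggregation is a simple sum of the ten 0-to-10 answers, with totals below 70 treated as relatively automation-proof; the exact test questions and any weighting are not reproduced here.

```python
# A minimal sketch of the ten-question, 0-100 scoring scheme described above.
def automation_risk(scores):
    """scores: ten integers, each between 0 (hard for machines) and 10 (easy)."""
    assert len(scores) == 10 and all(0 <= s <= 10 for s in scores)
    total = sum(scores)
    verdict = ("probably safe for a long career" if total < 70
               else "at higher risk of automation")
    return total, verdict

print(automation_risk([2, 7, 9, 3, 5, 6, 4, 8, 2, 5]))   # -> (51, 'probably safe ...')
```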
Take the test here. We plan to calibrate and refine it, then publish a research report about the results. If you would like to be kept in the loop about that, be sure to add your email address.

Voices in AI – Episode 35: A Conversation with Lorien Pratt

[voices_in_ai_byline]
In this episode, Byron and Lorien talk about intelligence, AGI, jobs, and the human genome project.
[podcast_player name=”Episode 35: A Conversation with Lorien Pratt” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-03-20-(00-45-11)-lorien-pratt.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/03/voices-headshot-card-4.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom, I’m Byron Reese. Today our guest is Lorien Pratt, the Chief Scientist and Co-founder over at Quantellia. They’re a software consulting company in the AI field. She’s the author of “The Decision Intelligence Primer.” She holds an AB in Computer Science from Dartmouth, and an MS and PhD in Computer Science from Rutgers. Welcome to the show, Lorien!
Lorien Pratt: Thank you Byron delighted to be here, very honored thank you.
So, Lorien, let’s start with my favorite question, which is, what is artificial intelligence?
Artificial intelligence has had an awful lot of definitions over the years. These days when most people say AI, ninety percent of the time they mean machine learning, and ninety percent of the time that machine learning is a neural network underneath.
You say that most people say that, but is that what you mean by it?
I try to follow how people tend to communicate and try to track this morphing definition. Certainly back in the day we all had the general AI dream and people were thinking about Hal and the robot apocalypse, but I tend to live in the applied world. I work with enterprises and small businesses and usually when they say AI it’s, “How can I make better use of my data and drive some sort of business value?” and they’ve heard of this AI thing and they don’t quite know what it is underneath.
Well, let me ask a different question then, what is intelligence?
What is intelligence, that’s a really nebulous thing isn’t it?
Well it does not have a consensus definition, so, in one sense you cannot possibly answer it incorrectly.
Right, I guess my world, again, is just really practical, what I care about is what drives value for people. Around the world sometimes intelligence is defined very broadly as the thing that humans do, and sometimes people say a bird is much more intelligent than a human at flying and a fish is much more intelligent than a human at swimming. So, to me the best way to talk about intelligence is relative to some task that has some value, and I think it’s kind of dangerous waters when we try to get too far into defining such a nebulous and fluctuating thing.
Let me ask one more definition and then I will move on. In what sense do you interpret the word “artificial”? Do you interpret it as, artificial intelligence isn’t real intelligence, it’s just faking it—like artificial turf isn’t real grass—or, “No, it’s really intelligence, but we built it, and that’s why we call it artificial”?
I think I have to give you another frustrating answer to that, Byron. The human brain does a lot of things, it perceives sound, it interprets vision, it thinks through, “Well if I go to this college, what will be the outcome?” Those are all, arguably, aspects of intelligence—we jump on a trampoline, we do an Olympic dive. There are so many behaviors that we can call intelligence, and the artificial systems are starting to be able to do some of those in useful ways. So that perception task, the ability to look at an image and say, “that’s a cat, that’s a dog, that’s a tree, etcetera,” yeah, I mean, that’s intelligence for that task, just like a human would be able to do that. Certain aspects of what we like to call intelligence in humans, computers can do, other aspects, absolutely not. So, we’ve got a long path to go, it’s not just a yes or a no, but it’s actually quite a complex space.
What is the state of the art? This has been something we’ve explored since 1955, so where are we in that sixty-two-year journey?
Sure, I think we had a lot of false starts, people kept trying to, sort of, jump start and kick start general intelligence—this idea that we can build HAL from 2001 and that he’d be like a human child or a human assistant. And unfortunately, between the fifth-generation effort of the 1980s and stuff that happened earlier, we’ve never really made a lot of progress. It’s been kind of like climbing a tree to get to the moon. Over the years there’s been this second thread, not the AGI, artificial general intelligence, but a much more practical thread where people have been trying to figure out how do we build an algorithm that does certain tasks that we usually call intelligent.
The state of the art is that we’ve gotten really good at, what I call, one-step machine learning tasks—where you look at something and you classify it. So, here’s a piece of text, is it a happy tweet or a sad tweet? Here’s a job description, and information about somebody’s resume, do they match, do they not? Here’s an image, is there a car in this image or not? So these one-step links we’re getting very, very good at, thanks to the deep learning breakthroughs that Yann LeCun and Geoffrey Hinton and Yoshua and all of those guys have done over the last few years.
So, that’s the state of the art, and there’s really two answers to that, one is, what is the state of the art in terms of things that are bringing value to companies where they’re doing breakthrough things, and the other is the state of the art from a technology point of view, where’s the bleeding edge of the coolest new algorithms, independent of whether they’re actually being useful anywhere. So, we sort of have to ask that question in two different ways.
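As a rough illustration of the one-step tasks Lorien describes, here is a minimal sketch, assuming scikit-learn and a tiny invented dataset (none of this is from the conversation): a single model that maps a tweet straight to a happy or sad label.

```python
# One-step machine learning: a single classification from input to label.
# Toy sentiment example with invented data, using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "I love this product, best day ever",
    "completely broken, very disappointed",
    "so happy with the new release",
    "this update ruined everything",
]
labels = ["happy", "sad", "happy", "sad"]

# Turn text into features, then classify in one step.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["really enjoying this"]))  # expected: ['happy']
```

The same shape covers her other examples, resume versus job description, car versus no car in an image; only the features and the labels change.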
You know AI makes headlines anytime it beats a human at a new game, right? What do you think will be the next milestone that will make the popular media: “AI did _______.”
AI made a better decision about how to address climate change and sea level rise in this city than the humans could have done alone, or AI helped people with precision medicine to figure out the right medicine for them based on their genetics and their history that wasn’t just one size fits all.
But I guess both of those are things that you could say are already being done. I mean, they’re already being done, there’s not a watershed moment, where “Aha! Lee Sedol just got beaten by AlphaGo.” We already do some genetic customization, we can certainly test certain medications against certain genomic markers.
We can, but I think what hasn’t happened is the widespread democratization of AI. Bill Gates said, “we’re going to have a computer on every desk.” I think that Granny, who now uses a computer, will also be building little machine learners a few years from now. And so when I talk about personalized medicine or I talk about a city doing climate change, those are all, kind of, under that general umbrella—it’s not going to be just limited to the technologists. It’s a technology that’s going through this democratization cycle, where it becomes available and accessible in a much more widespread way to solve really difficult problems.
I guess that AIs are good at games because they’re a confined set of rules, and there’s an idea of a winner. Is that a useful way to walk around your enterprise and look for things you can apply AI to?
In part, I would say necessary, but not sufficient, right? So, a game, what is that? It’s a situation in which somebody’s taking an action and then based on that some competitor—maybe literally your competitor in a market—is taking some counter action, and then you take an action, and vice versa, right? So, thinking in terms of games, is actually a direction I see coming down the pike in the future, where these single-link AI systems are going to be integrated more and more with game theory. In fact, I’ve been talking to some large telecoms about this recently, where we are trying to, sort of, game out the future, right? Right now in AI, primarily, we’re looking at historical data from the past and trying to induce patterns that might be applicable to the future, but that’s a different view of the future than actually simulating something—I’ll take this action and you’ll take this other action. So, yes, the use of games has been very important in the history of AI, but again it’s not the whole picture. It does, as you say, tend to over-simplify things when we think in terms of games. When I map complex problems, it does kind of look like game moves that my customers take, but it is way more complex than a simple game of chess or checkers, or Go.
Do you find that the people who come to you say, “I have this awesome data, what can AI teach me about it?” Or do they say, “I have this problem, how do I solve it?” I mean, are they looking for a problem or looking to match the data that they have?
Both. By and large, by the time they make it to me, they have a big massive set of data, somebody on the team has heard about this AI thing, and they’ll come with a set of hypotheses—we think this data might be able to solve problem X or Y or Z. And that’s a great question, Byron, because that is how folks like me get introduced into projects, it’s because people have a vague notion as to how to use it, and it’s our job to crisp that up and to do that matching of the technology to the problem, so that they can get the best value out of this new technology.
And do you find that people are realistic in their expectations of where the technology is, or is it overhyped in the sense that you kind of have to reset some of their expectations?
Usually by the time they get to me, because I’m so practical, I don’t get the folks who have these giant general artificial intelligence goals. I get the folks who are like, “I want to build a business and provide a lot of value, and how can I do that?” And from their point of view, often I can exceed their expectations actually because they think, “Ah, I got to spend a year cleansing my data because the AI is only as good as the data”—well it turns out that’s not true and I can tell you why if you want to hear about it—they’ll say, “you know, I need to have ten million rows of data because AI only works on large data sets,” it turns out that’s not necessarily true. So, actually, the technology, by and large, tends to exceed people’s expectations. Oh, and they think, “I’ve been googling AI, and I need to learn all these algorithms, and we can’t have an AI project until I learn everything,” that’s also not true. With this technology, the inside of the box is like a Ferrari engine, right? But the outside of the box is like a steering wheel and two pedals, it’s not hard to use if you don’t get caught up in the details of the algorithms.
And are you referring to the various frameworks that are out there specifically?
Yeah, Theano, Torch, Google stuff like TensorFlow, all of those yes.
And how do you advise people in terms of evaluating those solutions?
It really depends on the problem. If I was to say there’s one piece of advice I almost always give, it’s to recognize that most of those frameworks have been built over the last few years by academics, and so they require a lot of work to get them going. I was getting one going about a year ago, and, you know, I’m a smart computer scientist and it took me six days to try to get it working. And, even then, just to have one deep learning run, it was this giant file and it was really hard to change, and it was hard to find the answers. Whereas, in contrast, I use this H2O package and the R frontend to it, and I can run deep learning in one line of code there. So, I guess, my advice is to be discerning about the package: is it built for the PhD audience, or is it built, kind of, more for a business-user audience, because there are a lot of differences. They’re very, very powerful, I mean, don’t get me wrong, TensorFlow and those systems are hugely powerful, but often it’s power that you don’t need, and flexibility that you don’t need, and there’s just a tremendous amount of value you can get out of the low-hanging fruit of simple-to-use frameworks.
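Lorien uses H2O from R; purely as a hedged illustration of the same steering-wheel-and-two-pedals style of interface, a roughly equivalent call through H2O's Python API might look like the sketch below. The file name and column layout are placeholders, not anything from the interview.

```python
# High-level deep learning: one estimator, one train() call.
# "training_data.csv" and the column layout are illustrative placeholders.
import h2o
from h2o.estimators import H2ODeepLearningEstimator

h2o.init()
data = h2o.import_file("training_data.csv")      # last column assumed to be the target
train, valid = data.split_frame(ratios=[0.8], seed=42)

model = H2ODeepLearningEstimator(hidden=[32, 32], epochs=10)
model.train(x=data.columns[:-1], y=data.columns[-1],
            training_frame=train, validation_frame=valid)

print(model.model_performance(valid))
```

The point is the shape of the interface rather than the specific library: the algorithmic machinery stays inside the box.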
What are some guiding principles? There’s that one piece of advice, but what are some others? I have an enterprise, as you say, I’ve heard of this AI thing, I’m looking around, what should I be looking for?
Well, what you’re looking for is some pattern in your data that would predict something valuable. So, I’ll give you an example, I’m working with some educational institutions, they want to know what topics that they offer in their courses will help students ultimately be successful in terms of landing a job. In the medical domain, what aspects of someone’s medical history would determine which of these five or six different drug regimens would be the most effective? In stock prices, what data about the securities we might invest in will tell us whether they’re going to go up or down? So, you see that pattern—you’ve always got some set of factors on one side, and then something you’re trying to predict, which, if you could predict it well, would be valuable, on the other side. That one pattern, if your listeners only listen to one thing, that’s the outside of the box. It’s really simple, it’s not that complicated. You’re just trying to get one set of data that predicts another set of data, and try to figure out if there would be some value there, then we would want to look into implementing an AI system. So that’s, kind of, thing number one I’d recommend, is to just have a look for that pattern in your business, see if you can find a use case or scenario in which that holds.
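In code, that pattern of factors on one side and something valuable to predict on the other is just a table plus a target column. A minimal sketch with invented numbers, loosely echoing the education example above (the features and outcomes are hypothetical):

```python
# Factors on one side, the thing you want to predict on the other.
# Invented rows: [years_of_study, relevant_courses, internship_months] -> landed_job
from sklearn.ensemble import RandomForestClassifier

X = [
    [3, 4, 6],
    [4, 2, 0],
    [2, 5, 3],
    [4, 1, 0],
]
y = [1, 0, 1, 0]  # 1 = landed a job, 0 = did not

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

print(model.predict([[3, 3, 4]]))  # predicted outcome for a new student
```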
Switching gears a bit, you say that we had these early dreams of building a general intelligence, do you still think we’re going to build one sometime?
Maybe. I don’t like to get into those conversations because I think they’re really distracting. I think we’ve got so many hard problems, poverty, conflict—
An AGI would sure be helpful with those, wouldn’t it?
No. See that’s the problem, an AGI, it’s not aiming in the right direction, it’s ultimately going to be really distracting. We need to do the work, right? We need to go up the ladder, and the ladder starts with this single-link machine learning that we just talked about, you’ve got a pattern, you predict something. And then the next step is you try linking those up, you say, well if I’m going to have this feature in my new phone, then, let me predict how many people in a particular demographic will buy it, and then the next link is, given how many people will buy it, what price can I charge? And the next link is, how much price can I charge, how much money can I make? So it’s a chain of events that start with some action that you take, and ultimately lead to some outcome.
I’m solidly convinced, from a lot of things I’ve done over the thirty years I’ve been in AI, that we have to go through this phase, where we’re building these multi-linked systems that get from actions to outcomes, and that’ll maybe ultimately get us to what you might call, generalized AI, but we’re not there yet. We’re not even very good at the single-link systems, let alone multi-link and understanding feedback loops and complex dynamics, and unintended consequences and all of the things that start to emerge when you start trying to simulate the future with multi-link systems.
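The multi-link chain Lorien describes, action to demand to price to revenue, can be pictured as single-link models wired together. This is only an illustration with invented data and plain regressions, not her method:

```python
# Multi-link sketch: an action feeds a demand model, whose output feeds a revenue model.
# Both links are ordinary regressions trained on invented historical data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Link 1: phone feature level -> units sold (thousands)
feature_level = np.array([[1], [2], [3], [4]])
units_sold = np.array([100, 180, 240, 330])
demand_model = LinearRegression().fit(feature_level, units_sold)

# Link 2: units sold -> revenue ($ millions) at the price we can charge
units = np.array([[100], [180], [240], [330]])
revenue = np.array([40, 70, 95, 130])
revenue_model = LinearRegression().fit(units, revenue)

# Chain the links: pick an action, simulate the downstream outcome.
planned_feature_level = np.array([[3]])
predicted_units = demand_model.predict(planned_feature_level)
predicted_revenue = revenue_model.predict(predicted_units.reshape(-1, 1))
print(predicted_units, predicted_revenue)
```

Feedback loops and competitor responses would make the real chain far more complicated, which is the point she makes next.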
Well, let me ask the question a different way. Do you think that an AGI is an evolutionary result of a path we’re already on? Like, we’re at one percent and then we’ll be at two and then four, and eventually we’ll get there, or is that just a whole different beast, and you don’t just get there gradually, that’s an “Aha!” kind of technology.
Yeah, I don’t know, that’s kind of a philosophical question, because even if I got to a full robot, we’d still have this question as to whether it was really conscious or intelligent. What I really think is important is to turn AI on its head: intelligence augmentation. What’s definitely going to happen is that humans are going to be working alongside intelligent systems. What was once a pencil, then a calculator, and now a computer, is next going to be an AI. And just like computers have really super-powered our ability to write a document or have this podcast, right? They’re going to start also supercharging our ability to think through complex situations, and it’s going to be a side-by-side partnership for the foreseeable future, and perhaps indefinitely.
There’s a fair amount of fear in terms of what AI and automation in general will do to jobs. And, just to set up the question, there are often three different narratives. One is that, we’re about to enter this period where we’re going to have some portion of the population that is not able to add economic value and there’ll be, kind of, a permanent Great Depression. Then another view is that it will be far different than that, that every single thing a person can do, we’re going to build technology to do. And then there’s a third view that this is no different than any other transformative technology, people take it and use it to grow their own productivity, and everybody goes up a notch. What do you think, or a fourth choice, how do you see AI’s impact?
Well, I think multiple things are going to happen, we’re definitely seeing disruption in certain fields that AI is now able to do, but is it a different disruption than the introduction of the cotton gin or the automobile or any other technology disruption? Nah, it’s just got this kind of overlay of the robot apocalypse that makes it a little sexier to talk about. But, to me, it’s the same evolution we’ve always been going through as we build better and better tools to assist us with things. I’m not saying that’s not painful and I’m not saying that we won’t have displacement, but it’s not going to be a qualitatively different sort of shift in employment than we’ve seen before. I mean people have been predicting the end of employment because of automation for decades and decades. Future Shock, right? Alvin Toffler said that in the 60’s, and, AI is no different.
I think the other thing to say is we get into this hype-cycle because the vendors want you, as a journalist, to think it’s all really cool, then the journalists write about it and then there are more and more vendors, and we get really hyped about this, and I think it’s important to realize that we really are just in one-link AI right now—in terms of what’s widespread and what’s implemented and what’s useful, and where the hard implementation problems have been solved—so I would, sort of, tone down that side of things. From a jobs point of view, that means we’re not going to suddenly see this giant shift in jobs and automation, in fact I think AI is going to create many jobs. I wouldn’t say as many as we’ll lose, but I think there is a big opportunity for those fields. I hear about coal miners these days being retrained in IT, turns out that a lot of them seem to be really good, I’d love to train those other populations in how to be data scientists and machine learning people, I think there’s a great opportunity there.
Is there a shortage of talent in the field?
Absolutely, but, it’s not too hard to solve. The shortage of talent only comes when you think everybody has to understand these really complex PhD level frameworks. As the technology gets democratized, the ability to address the shortage of talent will become much easier. So we’re seeing one-click machine learning systems coming out, we’re seeing things like the AI labs that are coming out of places like Microsoft and Amazon. The technology is becoming something that lots of people can learn, as opposed to requiring this very esoteric, like, three computer science degrees like I have. And so, I think we’re going to start to see a decrease in that shortage in the near future.
All of the AI winters that happened in the past were all preceded by hype followed by unmet expectations, do you think we’re going to have another AI winter?
I think we’ll have an AI fall, but it won’t be a winter and here’s why—we’re seeing a level of substantive use cases for AI being deployed, especially in the enterprise, you know, widespread large businesses, at a level that never happened before. I was just talking to a guy earlier about the last AI hype cycle in the ’80s, where VLSI computer design by AI was this giant thing and the “fifth generation,” and the Japanese, and people were putting tens, hundreds of millions of dollars into these companies, and there was never any substance. There was no “there” there, right? Nobody ever had deployed systems. AI and law, same thing, there’s been this AI and law effort for years and years and years, and it really never produced any commercial systems, for like a decade, and now we’re starting to see some commercial solidity there.
So, in terms of that Gartner hype-cycle, we’re entering the mass majority, but we are still seeing some hype, so there’ll be a correction. And we’ll probably get to where we can’t say AI anymore, and we’ll have to come up with some new name that we’re allowed to say, because for years you couldn’t say AI, you had to say data mining, right? And then I had to call myself an analytics consultant, and now it’s kind of cool I can call myself an AI person again. So the language will change, but it’s not going to be the frozen winter we saw before.
I wonder what term we’ll replace it with? I mean I hear people who avoid it are using “cognitive systems” and all of that, but it sounds just, kind of, like synonym substitution.
It is, and that’s how it always goes. I’m evangelizing multi-link machine learning right now, I’m also testing decision intelligence. It’s kind of fun to be at the vanguard, where, as you’re inventing the new things, you get to name them, right? And you get to try to make everybody use that terminology. It’s in flux right now, there was a time when we didn’t call e-mail “e-mail,” right? It was “computer mail.” So, I don’t know, it hasn’t started to crystalize yet, it’s still at the stage of twenty different new terminologies.
Eventually it will become just “mail,” and the other will be, you know, “snail mail.” It happens a lot, like, corn on the cob used to just be corn, and then canned corn came along so now we say corn on the cob, or cloth diapers… Well, anyway, it happens.
Walk me through some of the misconceptions that you come across in your day-to-day?
Sure. I think that the biggest mistake that I see is people get lost in algorithms or lost in data. So lost in algorithms, let’s say you’re listening to this and you say, “Oh I’d like to be interested in AI,” and you go out and you google AI. The analogy, I think, is, imagine we’re the auto industry, and for the last thirty years, the only people in the auto industry had been inventing new kinds of engines, right? So you’re going to see the Wankel engine, and the four cylinder, you’re going to read about the carburetors, and it’s all been about the technology, right? And guess what, we don’t need five hundred different kinds of engines, right? So, if you go out and google it you’re going to be totally lost in hundreds of frameworks and engines and stuff. So the big misconception is that you somehow have to master engine building in order to drive the car, right? You don’t have to, but yet all the noise out there, I mean it’s not noise, it’s really great research, but from your point of view, someone who actually wants to use it for something valuable, it is kind of noise. So, I think one of the biggest mistakes people get into is they create a much higher barrier, they think they have to learn all this stuff in order to drive a car, which is not the case, it’s actually fairly simple technology to use. So, you need to talk to people like me who are, kind of, practitioners. Or, as you google, have a really discerning eye for the projects that worked and what the business value was, you know? And that applied side of things as opposed to the algorithm design.
Without naming company names or anything, tell me some projects that you worked on and how you looked at it and how you approached it and what was the outcome like, just walk me through a few use cases.
So I’ll rattle through a few of them and you can tell me which one to talk about, which one you think is the coolest—morphological hair comparison for the Colorado Bureau of Investigation, hazardous buried waste detection for the Department of Energy, DNA pattern recognition for the human genome project, stock price prediction, medical precision medicine prediction… It’s the coolest field, you get to do so much interesting work.
Well let’s start with the hair one.
Sure, so this was actually a few years back, it was during the OJ trials. The question was, you go out to a crime scene and there’s hairs and fibers that you pick up, the CSI guys, right? And then you also have hairs from your suspect. So you’ve got these two hairs, one from the crime scene, one from your suspect, and if they match, that’s going to be some evidence that your guy was at the scene, right? So how do you go about doing that? Well, you take a microphotograph of the two of them. The human eye is pretty good at, sort of, looking at the two hairs and seeing if they match, we actually use a microscope that shows us both at the same time. But, AI can take it a step further. So, just like AI is, kind of, the go-to technology for breast cancer prediction and pap smear analysis and all of this micro-photography stuff, this project that I was on used AI to recognize if these two hairs came from the same guy or not. It’s a pretty neat project.
And so that was in the 90’s?
Yeah it was a while back.
And that would have been using techniques we still have today, or using older techniques?
Both, actually, that was a back-propagation neural network, and I’m not allowed to say back propagation, nor am I really allowed to say neural network, but the hidden secret is that all the great AI stuff still uses back-propagation-like neural networks. So, it was the foundation of what we do today. Today we still use neural nets, they’re the main machine learning algorithm, but they’re deeper, they have more and more layers of artificial neurons. We still learn, we still change the weights of the simulated synapses on the networks, but we have a more sophisticated algorithm that does that. So, foundationally, it’s really the same thing, it hasn’t changed that much in so many years, we’re still artificial neural network centric in most of AI today.
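For readers who want to see the mechanics Lorien is gesturing at, here is a bare-bones back-propagation sketch: a forward pass, an error at the output, and gradient updates to the simulated synapse weights. It is a toy example with invented data, not anything from her projects.

```python
# Tiny two-layer network trained with back-propagation on the classic XOR problem.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights ("simulated synapses") and biases for a 2-8-1 network.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                   # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)                 # forward pass, output
    grad_out = out - y                         # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)   # back-propagate through the hidden layer
    W2 -= lr * (h.T @ grad_out)                # gradient-descent weight updates
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Modern deep learning stacks many more layers and uses more sophisticated optimizers, but the learn-by-adjusting-weights loop is the same idea.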
Now let’s go to hazardous waste.
Sure, so this was for the Department of Energy. Again it was an imaging project, but here, the question was, you’ve got these buried drums of leaking chemical nerve gas, that’ve been dumped into these Superfund sites, and it was really carelessly done. I mean, literally, trenches were dug and radioactive stuff was just dumped in them. And after a few years folks realized that wasn’t so smart, and so, then they took those sites and they passed these pretty cool sensors over them, like gravimeters, that detected micro-fluctuations in gravity, and ground-penetrating radar and other techniques that could sense what was underground—this was originally developed for the oil industry, actually, to find buried energy deposits—and you try to characterize where those things are. Where the neural net was good was in combining all those sensors from multiple modalities into a picture that was better than any one of the sensors.
And what technologies did that use?
Neural nets, same thing, back propagation.
At the beginning you made some references to some recent breakthroughs, but would you say that most of our techniques are things we’ve known about since the ’60s, we just didn’t have the computer horsepower to do it? Would that be fair to say or not?
It’s both, it’s the rocket engines plus the rocket fuel, right? I remember as a graduate student, I used to take over all the faculty’s computers at night when there was no security, I’d run my neural net training on forty different machines and then have them all RPC the data back to my machine. So, I had enough horsepower back then, but what we were missing was the modern deep-learning algorithms that allow us to get better performing systems out of that data, and out of those high-performance computing environments.
And now what about the human genome project, tell me about that project.
That was looking at DNA patterns, and trying to identify something called a ribosomal-binding site. If you saw that Star Trek episode where everybody turns into a lizard, there are these parts of our DNA that we don’t really know what they do between the parts that express themselves. This was a project nicely funded by a couple of funding agencies to detect these locations on a DNA strand.
Was that the one where everybody essentially accelerated their evolution and Picard was some kind of a nervous chimp of some kind, somebody else was a salamander?
Yes that’s right, remember it was Deanna Troi who turned into a salamander, I think. And she was expressing the introns, the stuff that was between the currently expressed genome. This was a project that tried to find the boundaries between the expressed and the unexpressed parts. Pretty neat science project, right?
Exactly. Tell me about the precision medicine one, was that a recent one?
Yeah, so the first three were kind of older. I’m Chief Scientist, also, at ehealthanalytics.net and they’ve taken on this medical trials project. It turns out that if you do a traditional medical trial, it’s very backward facing and you often have very homogenous data. In contrast, we’ve got a lot of medical devices that are spitting out data, like, I’m wearing my Fitbit right now and it’s got data about me, and, you know, we have more DNA information, and with all of that we can actually do better than traditional medical trials. So, that was a project I did for those guys. More recently we’re predicting failure in medical devices. That’s not as much precision medicine as precision analysis of medical devices, so that we can catch them in the field before they fail, and that’s obviously a really important thing to be able to do.
And so you’ve been at this for, you say, three decades.
Three decades, yeah. It was about 1984, when I built my first neural net.
Would you say that your job has changed over that time, or has it, in a way, not—you still look at the data, look at the approach, figure out what question you’re asking, figure out how to get an answer?
From that point of view, it’s really been the same. I think what has changed is the results. Back when I built my first neural nets, the accuracies and the false positives and the false negatives were kind of, eh, they weren’t really exciting results. Now, we see Microsoft, a couple of years ago, using neural network transfer, which was my big algorithm invention, to beat humans at visual pattern recognition. So, the error rates, just with the new deep learning algorithms, have plummeted, as I’m sure your other interviewees have told you about, but the process has been really the same.
And I’ll tell you what’s surprising, you’d think that things would have changed a lot, but there just hasn’t been a lot of people who drive the cars, right? Up until very recently, this field has really been dominated by people who build the engines. So, we’re just on the cusp. I look at SAP as a great example of this. SAP’s coming out with this big new Leonardo launch of its machine learning platform, and they’re not trying to build new algorithms, right? SAP is partnering with Google and NVIDIA, and what they recognize is that the next big innovation is in the ability of connecting the algorithms to the applied problems, and just churning out one use case after another, that drives value for their customers. I would’ve liked to have seen us progress further along those lines over the last few years, but I guess just the performance wasn’t there and the interest wasn’t there. That’s what I’m excited about with this current period of excitement in AI, that we’ll finally start to have a bunch of people who drive the cars, right? Who use this technology in valuable ways to get from here to there, to predict stock prices, to match people to the perfect job—that’s another project that I’m doing, for HR, human resources—all these very practical things that have so much value. But yeah, it hasn’t really changed that much, but I hope it does, I hope we get better at software engineering for AI, because that’s really what’s just starting right now.
So, you, maybe, will become more of a car-driver—to use your analogy—in the future. Even somebody as steeped in it as you, it sounds like you would prefer to use higher-level tools that are just that much easier to use.
Yeah, and the reason is, we have plenty of algorithms, we’re totally saturated with new algorithms. The big desperate need that everybody has is, again, to democratize this and to make it useful, and to drive business value. You know, a friend of mine who just finished an AI project said on a ten million dollar project, we just upped our revenue by eighteen percent from this AI thing. That’s typical, and that’s huge, right? But yet everybody was doing it for the very first time, and he’s at a fairly large company, so, that’s where the big excitement is. I mean, I know it’s not as sexy as artificial general intelligence, but it’s really important to the human race, and that’s why I keep coming back to it.
You made a passing reference to image recognition and the leap forward we have there, how do you think it is that people do such a good job, I mean is it just all transferred learning after a while, do we just sort of get used to it, or do you think people do it in a different way than we got machines to do it?
In computer vision, there was a paper that came out last year that Yann LeCun was sending around, that said that somebody was looking at the structure of the deep-learning vision networks and had found this really strong analogue to the multiple layers—what is it, the lateral geniculate nucleus; I’m not a human vision person, but there are these structures in the human vision system that are very analogous. So, it’s like this convergent evolution, where computers converge to the same way of recognizing images that, it turns out, the human brain uses.
Were we totally inspired by the human brain? Yes, to some extent. Back in the day when we’d go to the NIPS conference, half the people there were in neurophysiology, and half of us were computer modelers, more applied people, and so there was a tremendous amount of interplay between those two sides. But more recently, folks have just tried to get computers to see things, for self-driving cars and stuff, and we keep heading back to things that sort of look like the human vision system, I think that’s pretty interesting.
You know, I think the early optimism in AI—like the Dartmouth project, where they thought they could do a bunch of stuff if they worked really hard on it for one summer—stemmed from a hope that, just like in physics you have a few laws that explain everything, in electronics, in magnetism, it’s just a few laws. And the hope was that intelligence would just be three or four simple laws, we’ll figure them out and that’s all it’s going to be. I guess we’ve given up on that, or have we? We’re essentially brute-forcing our way to everything, right?
Yeah, it’s sort of an emergent property, right? Like Conway’s Game of Life, which has these very complex emergent epiphenomena from just a few simple rules. I, actually, haven’t given up on that, I just think we don’t quite have the substrate right yet. And again I keep going back to single-link learning versus multi-link. I think when we start to build multi-link systems that have complex dynamics that end up doing four-at-a-time simulation using piecewise backward machine learning based on historical data, I think we are going to see a bit of an explosion and start to see, kind of, this emergence happen. That’s the optimistic, non-practical side of me. I just think we’ve been focusing so much on certain low-hanging fruit problems, right? We had image recognition—because we had these great successes in medicine, even with the old algorithms, they were just so great at cancer recognition and images—and then Google was so smart with advertising, and then Netflix with the movies. But if you look at those successful use cases, there’s only like a dozen of them that have been super successful, and we’ve been really focused on these use cases that fit our hammer, we’ve been looking at nails, right? Because that’s the technology that we had. But I think multi-link systems will make a big difference going forward, and when we do that I think we might start to see this kind of explosion in what the systems can do, I’m still an optimist there.
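Conway's Game of Life makes that emergence point concrete: two counting rules, applied locally, produce gliders, oscillators, and other rich global behavior. A minimal sketch of one update step, with an example starting pattern:

```python
# Conway's Game of Life: complex global behavior from two simple local rules.
import numpy as np

def step(grid):
    # Count each cell's eight neighbours by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rule 1: a dead cell with exactly 3 live neighbours comes alive.
    # Rule 2: a live cell with 2 or 3 live neighbours survives; everything else dies.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider" that keeps travelling across the wrap-around grid.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1

for _ in range(4):
    grid = step(grid)
print(grid)
```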
There are those who think we really will have an explosion, literally, from it all.
Yeah, like the singularitists, yep.
It’s interesting that there are people, high profile individuals of unquestionable intelligence, who believe we are at the cusp of building something transformative, where do you think they err?
Well, I can really only speak to my own experience, I think there’s this hype thing, right? All the car companies want to show that they’re still relevant, so they hype the self-driving cars, and of course we’re not taking security, and other things into account, and we all kind of wanted to get jumping on that bandwagon. But, my experience is just very plebeian, you just got to do the work, you got to roll up your sleeves, you got to condition your data, you got to go around the data science loop and then you need to go forward. I think people are really caught up in this prediction task, like, “What can we predict, what will the AI tell us, what can we learn from the AI?” and I think we’re all caught up in the wrong question, that’s not the question. The question is, what can we do? What actions will we take that lead to which outcomes we care about, right? So, what should we do in this country, that’s struggling in conflict, to avoid the unintended consequences? What should we teach these students so that they have a good career? What actions can we take to mitigate against sea-level rise in our city?
Nobody is thinking in terms of actions that lead to outcomes, they’re thinking of data that leads to predictions. And again I think this comes from the very academic history of AI, where it was all about the idea factory and what can we conclude from this. And yeah, it’s great, that’s part of it, being able to say, here’s this image, here’s what we’re looking at, but to really be valuable for something it can’t be just recognizing an image, it has to be take some action that leads to some outcome. I think that’s what’s been missing and that’s what’s coming next.
Well that sounds like a great place to end our conversation.
Excellent.
I want to thank you so much, you’ve just given us such a good overview of what we can do today, and how to go about doing it, and I thank you for taking the time.
Thank you Byron, I appreciate the time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 28: A Conversation with Mark Stevenson

[voices_in_ai_byline]
In this episode, Byron and Mark discuss the future of jobs, energy and more.
[podcast_player name=”Episode 28 – A Conversation with Mark Stevenson” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-58-06)-mark-stevenson.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is “Voices in AI,” brought to you by Gigaom. I’m Byron Reese. Today I’m excited we have Mark Stevenson. Mark is a London-based British author, businessman, public speaker, futurologist and occasionally musician and comedian. He is also a fellow of The Royal Society for the Encouragement of Arts, Manufactures and Commerce. His first book, An Optimist’s Tour of the Future was released in 2011 and his second one, We Do Things Differently came out in 2017. He also co-founded and helps run the London-based League of Pragmatic Optimists. Welcome to the show, Mark! 
Mark Stevenson: Thank you for having me on, Byron! It’s a pleasure.
So, the subtitle of your Optimist’s Tour of the Future is, “One curious man sets out to answer what’s next.” Assuming you’re the curious man, what is next?
You can take “curious” in two ways, can’t you? Somebody is interested in new stuff, or somebody’s just a little bit odd, and I am probably a bit of both. Actually, I don’t conclude what’s next. I actually said the question is its own answer. My work is about getting people to be literate about the questions the future is asking them. What’s next will depend on how we collectively answer those questions.
What’s next could be a climate change, dystopian, highly unequal world; or what’s next could be a green-powered, prosperous, abundant, distributed economy for everybody. And each is likely. What’s next is what we decide to do about it, and that’s why I do the work I do, which is trying to educate people about the questions we’re being asked, and allowing them to imagine for themselves.
You said that’s why you do the work that you do. What do you do?
Well, I guess I am a professional irritant. I work with governments, corporations, universities helping them become literate about the questions the future is asking them. You’ll find that most organizations have a very narrow view of the world, because they are kind of governed by their particular marketplace or whatever, and same with governments and government departments.
So, I’ll give you an example, I was working with an insurance company recently who wanted me to come in and help them, and I just put up a picture of two cars having an accident and I said, “What happens if one or both of these is a driverless car?” and the head of insurance went, “I don’t know.” And I’m like, “Well, you should really be asking yourself that question because that question is coming.” And he said, “Mark, we insure drivers. If there aren’t any, it’s a real fucker on the balance sheet.”
It’s funny, but I used to work on old cars, and they were always junkers when I got them, and one time, I had one parked at the top of the hill and in the middle of the night, the brakes failed evidently and it rolled down the hill and hit another car. That scenario actually happened.
The other thing I said was, “What’s your biggest cost?” and he said, “Of course, it’s claims.” And ninety-seven percent or something of claims are because of human error, and it turns out driverless cars are way safer than cars with drivers in them; so maybe that’s good for him, because maybe it will reduce claims. My point was that I don’t know what he should do. He’s the expert in insurance, but my point is, you should be asking yourselves these questions.
Another example from insurance—I was working with the reinsurance industry, the insurers that insure the insurers. On the one hand, you’re being asked to underpin businesses that are insuring a coal-fired power plant. On the other hand, you’re being asked to insure businesses that are going to be absolutely decimated by climate risk.
And you can’t do both and it’s that lack of systems thinking, I suppose, I bring to my clients. And how the food system, the energy system, the government system, the education system, what’s happening in physics, what’s happening in the arts and culture, what’s happening in technology, what’s happening in economics, what’s happening in politics—how they all interrelate, and what questions they ask you?
And then what are you going to do about it, with the levers you have and the position you’re in, to make our world more sustainable, equitable, humane and just? And if you’re not doing that, why are you getting up in the morning and what is the point of you? That’s kind of my business.
When you deal with people, are they, generally speaking, optimistic, are they pessimistic, or are they agnostic on that, because they’re basically just looking at the future from a business standpoint?
That’s a really good question. They’re often quite optimistic about their own chances and often pessimistic about everybody else’s. [Laughter] If you ask people, “Are you optimistic about the future?” they’re going to go, “Yeah, I’m optimistic about the future.” Then, you go, “Are you optimistic about the future generally, like, for the human race?” And you hear, “Oh, no, it’s terrible.”
Of course, those two things are incompatible. People are convinced of their ability to prevail against the odds, but not for everybody else. And so, I often get hired by companies who are saying to me, “We want you to help us be more successful in the future,” and then, I’ll point out to them that actually there’s some existential threats to their business model that may mean they’ll be irrelevant in five years, which they haven’t even thought about.
A really good example of this from the past, which is quite famous, is what happened to Blockbuster. So Netflix went to Blockbuster—I think in 2006—and said, “You should invest in us. You should buy us. We’ll be your online distribution arm.” And the management at Blockbuster went, “I don’t know. I think people will always want to take a cassette home.” But also, Blockbuster made a large amount of their profits from late returns.
So they weren’t likely to embrace downloads, because that would kind of cannibalize one of their revenue streams. Of course, that was very short-sighted of them. And one of the things I say to a lot of my clients is, “Taking the future seriously is going to cost some people their jobs, and I am sorry about that, but not taking the future seriously is going to cost everybody their jobs. So it’s kind of your choice.”
Are your clients continental, British, American… primarily? 
All over. I’m under non-disclosure agreements with most of them.
Fair enough. My follow-up question is going to be, there’s of course a stereotype that Europeans overall are more pessimistic about the future and Americans are less so. Is that true or is it that there’s a grain of truth somewhere, but it’s not really material?
I think there is something in it, and I think it’s because certainly, people from the United States are very confident about the wonderfulness of the United States and how it will prevail. There’s that “American Dream” kind of culture, whereas Europe is in a lot of smaller nations that up until quite recently have been beating the crap out of each other. Perhaps we are a little bit more circumspect, but yeah, it’s a very slight skewing in one direction or the other.
You subtitle your book “What’s Next?” and then, you say, “The question is the answer,” kind of in this Zen fashion, but at some level you must have an opinion, like, it could go either way, but it will likely do what? What do you personally think?
 I don’t know. I feel it’s really up for grabs. If we carry on the way we’re going, it’s going to be terrible; there’s no doubt about that. I think it’s an ancient Chinese proverb that says, “If we don’t change the direction we’re going, we’re going to end up where we’re headed.” And where we’re heading to at the moment is a four-degree world, mass inequality, mass unemployment from the subject we’re going to get into a bit later, which is AI replacing a lot of middle-class jobs, etc. That’s certainly possible.
Then, on the other hand, because of the other work I do with Atlas of the Future, I’m constantly at the cutting-edge, finding people doing amazing stuff. There’s all sorts of people out there putting different futures on the table that make it imminently possible for us to have a humane and just and sustainable world. When you realize, for instance, that we’re installing half a million solar panels a day at the moment. Solar is doubling in capacity every two or three years, and it’s a sort of low starting point, but if it carries on like that, we’ll be completely on renewables within a generation.
And that’s not just good for the environment. Even if you don’t care about the environment, it’s really good for the economy, because the marginal cost of renewable energy is zero and the energy price is very, very stable, which is great when you want to invest long-term. Because one of the problems with the world’s economy is that the oil price keeps going up and down, and nobody knows what’s going to happen to their economy as a result.
You’ll remember—I don’t know how old you are, but certainly some of your listeners will remember—what happened after the Yom Kippur War, where the Arab nations, in protest of American support for Israel, just upped the oil price by about fivefold and suddenly, you had a fifty-five mile-per-hour speed limit, there were states that banned Christmas lights because it was a frivolous use of energy, there was gas rationing, etc. That’s a very extreme example of what’s wrong with relying on fossil fuels, just from an economic perspective, not even an environmental one.
So there are all sorts of great opportunities out there, and I think we really are on the dividing line at the moment. And I suppose I have just decided to put my shoulder against fighting for the side of sustainability and humanity and justice, rather than business as usual, and I don’t have a view. People call me an optimist because I fight, I suppose, for the optimistic side, but we could lose, and we could lose very badly.
Of course, you’re right that if we don’t change direction, you can see what’s going to happen. But there are other things that no force on heaven and earth could stop, like the trend toward automation, the trend toward computerization, the development of artificial intelligence, and those sorts of things.  
Those are known things that will happen. Let’s dive into that topic. Putting aside climate and energy and those topics for the moment, what do you think are just things that will certainly happen in the future?
This is really interesting. The problem with futurology as a profession—and I use that word “profession” very loosely—is that it’s associated with prediction, and predictions are usually wrong. As you said, there are some things you can definitely see happening, and it’s therefore very easy to predict what I would call the “first-order effects” of that.
A good example: When the internet arrived, it’s not hard to predict the rise of email, as you’ve got a network of computers with people sat behind them, typing on keyboards. Email is not a massive leap. So predicting the rise of email is not a problem, but does anybody predict the invention of social media? Does anybody predict the role of social media in spreading fake news or whatever? You can’t. These are second, third-order, fourth-order effects. So each technology is really not an answer, it’s just a question.
If you look at AI, we are looking very much at the automation of lots of jobs that previously we would’ve thought “un-automatable.” As already mentioned, driverless cars are one example of artificial intelligence. A great report came out last year from the Oxford Martin School listing literally hundreds of middle-class jobs that are on the brink of being replaced by automation—
Let me put a pin there, because that’s not actually what they say, they go to great pains to say just the opposite. What they say is that forty-seven percent of things people do in their jobs are potentially automatable. That’s why things on their list are things like pharmacist assistants or whatnot. So all they really say is, “We make no predictions whatsoever about what is going to happen in jobs.”
So if a futurologist does anything, the futurologist looks at the past, and says, “We know human nature is a constant, and we know things that have happened in the past, again and again and again. And we can look at that and say ‘Okay, that will probably happen again.’” So we know that for two hundred and fifty years, three hundred years since the Industrial Revolution in the West, unemployment has remained within this narrow band of five to ten percent.
Aside from the Depression, all over the West, even though you’ve had, arguably, more disruptive technologies—you’ve had the electrification of industry, the mechanization of industry, the end of animal power being a force of locomotion, coal grew from generating five percent of energy to eighty percent of energy in just twenty years—all these enormous disrupting things that, to use your exact words, “automated jobs that we would’ve thought were not automatable,” and yet, we never had a hiccup or a surge in unemployment from that. So wouldn’t it be incumbent on somebody saying something different is going to happen, to really go into a lot of detail about what’s different this time?
I absolutely agree with you there, and I am not worried about employment in the long run. Because if you look at what’s happened in employment, it’s what you call “non-routine things,” things that humans are good at, that have been hard to automate. A really good example is the beginning of the Industrial Revolution, lots of farm laborers, end of Industrial Revolution, not nearly as many farm laborers—I think five percent of the number—because we introduced automation to the farming industry, tractors, etcetera; now far fewer people can farm the same amount of land.
And by the same token, at the beginning of the Industrial Revolution, not so many accountants; by the end of it, stacks of accountants—thirty times more accountants. We usually end up creating these higher-value, more complex jobs. The problem is the transition. In my experience, not many farm laborers want to become accountants, and even if they did, there’s no transition route for them. So whole families, whole swathes of the populace can get blindsided by this change, because they’re not literate about it, or their education system isn’t thinking about it in a sensible way.
Let’s look at driverless technology again. There’s 3.5 million truck drivers in the United States, and it’s very likely that a large chunk of them will not have that job available to them in ten or fifteen years, and it’s not just them. Actually, if you go to the American Trucking Association, they will say that one in fifteen of the American workforce are somehow related to the trucking industry.
A lot of those jobs will be at threat. Other jobs may replace them, but my concern is what happens to the people who are currently truck drivers? What happens to an education system that doesn’t tell people that truck drivers won’t be existing in such numbers in ten or fifteen years’ time? What does the American Trucking Association do? What do logistics firms that employ those truckers do?
They’ve all got a responsibility to think about this problem in a systemic way, and they often don’t, which is where my work comes in, saying, “Look, Government, you have to think about an education that is very different, because AI is going to be creating a job market that’s entirely different from the one you’re currently educating your children into.”
Fair enough. I don’t think that anybody would argue that an industrial economy education system is going to make workers successful in this world of tomorrow, but that setup that you just gave strikes me as a bit disingenuous. Which is to say, well, let’s just take truck driving for example. The facts on the ground are that it will be gradual, because you’ve got, likely, ten years to replace all the truckers. So, fewer people are going to enter the field, people who might retire earlier are going to retire out of it. Technology seldom does it all that quickly.
But the thing that I think might be different is that, usually, what people say is, “We’re going to lose these lower-skill jobs and we’re going to make jobs for geneticists,” and those people who had these lower-skill jobs are going to become geneticists, and nobody actually ever says that that’s what happens.
The question is, “Can everybody already do a job a little harder than the one they presently have?” So, each person just goes up one layer, one notch in the food chain that doesn’t actually require that you take truck drivers and send them to graduate school for twelve years.
Indeed, and this is why having conversations like this is so important, because, as I said, my thing is about making people literate about the questions the future is asking them. And so, now, we’re having quite a literate conversation about that, and that’s really important. It’s why podcasts like this are important, it’s why the research you do is important. But in my experience, a lot of people, particularly in government, they would not even be having this conversation or asking this question. And the same for lots of people in business as well, because they’re very focused on a very narrow way of looking at things. So, I think I’m in violent agreement with you.
And I with you. I am just trying to dissect it and think it through, because one could also say that about the electrification of industry, all those things I just listed. Nobody said, “Electrification is coming.” We’ve always been reactive, and, luckily, change has come at a pace that our reactive skills have been able to keep up. Do you think this time is different? Are you saying there’s a better way to do it?
I just think it’s going to be faster this time. I think it’s an arguable truism in the work of futurism that technology waves speed up. If you look at, for instance, some figures I’ve got from the United States National Intelligence Council, it’s really interesting just to look at how long it took the United States population to adopt certain technologies. It took forty-six years for twenty-five percent of the United States population to bring electricity into their homes from its introduction to the market.
It took just seven for the World Wide Web, and there were two and a half times as many citizens there. And that makes sense, because each technology provides the platform and the tools to build the next one. You can’t have the World Wide Web until you have electricity. And so you see this speeding up because now you have more powerful tools than you had the last time to help you build the next one, and they distribute much more quickly as well.
So what we have—and this is what my third book is going to be about—is this problem between the speed of change of technology and also, the speed of change of thought and philosophy and new ideas about how we might organize ourselves, and the speed of our bureaucracies and our governments and our administration, which is still painfully slow. And it’s that mismatch of those gears that I think causes the most problems. The education system being a really good example. If your education system isn’t keeping up with those changes, isn’t in lockstep with them, then inevitably, you’re going to do a disservice to many of the students going through it.
Where do you think that goes? Because, if it took forty-six years for electricity and seven for the web, eventually, it’s like that movie Spaceballs, where they had that scene where the video hits the video store before they finish shooting it. At some point, there’s an actual physical limit to that, right? You don’t have a technology that comes out on Thursday and by Friday, half the world is using it. So what does that world look like?
Exactly, and all of these things move at slightly different speeds. If you look at what’s happening with energy at the moment, which is one of my favorite topics because I think it kind of underpins everything else, the speed at which the efficiency of solar panels is rising, the speed at which the price of solar is going down, the invention of energy Internet technology, based on ideas from Bob Metcalfe, is extraordinary.
I was at the EU Commission a few weeks ago, talking to them about their energy policy and looking at it and saying, “Look guys, you have a fantastic energy policy for 1994. What’s going on here? How come I am having to tell you about this stuff? Because actually, we should be moving to a decentralized, decarbonized, much more efficient, much cheaper energy system because that’s good for everybody, but you’re still writing energy policy as if it was the mid ‘90s.” And that really worries me. Energy is not going to move as fast as a new social networking application, because you do have to actually build stuff and stick it in the ground and connect to each other, but it is still moving way faster than the administration, and that is my major concern.
The focus of my work for the next two or three years is working out how we get those things moving at the same speed, or at least nearly enough at the same speed that they can usefully talk to each other. Because governments, at the moment, don’t talk to technology in any useful way. Data protection law is a good example: I was just talking to a lawyer yesterday, and he said, “I’m in the middle of this data protection case. I am dealing with data protection law that was written in 1985.”
Let’s spend one more minute on energy, because it obviously makes the world go around, literally. My question is, the promise of nuclear way back was that it would be too cheap to meter, or in theory it could’ve been, and it didn’t work out. There were all kinds of things that weren’t foreseen and whatnot. Energy is arguably the most abundant thing in the universe, so do you think we’ll get to a point where it’s too cheap to meter, it’s like radio waves, it’s like the water fountain at the department store that nobody makes you put a quarter in?
Yeah, I think we will, but I think that comes from a distributed system, rather than a centralized one. One of my pet tropes that I trot out quite regularly is this idea that we’re moving from economies of scale to economies of distribution. It used to be that the most efficient way to do things was to get everything in a centralized place and do it all there because it was cheaper that way, given the technology we had at that time. Whether it was schools where we get all the children into a room and teach at them, whether it was power stations where we dig up a bunch of coal, take it to a big factory or power station, burn it and then send it out through the wires. Even though in your average coal-fired power plant, you would lose sixty-seven percent of the energy through waste-heat, it was still the most efficient way to do things.
Now, we have these technologies that are distributed. Even though they might be slightly less efficient or not quite as cost-effective, in and of themselves, when you connect them all together and distribute them, you start to see the ability to do things that the centralized system can’t. Energy, I think, is a really good example of that.
All our energy is derived from the sun, and the sun’s energy doesn’t hit just power plants. It hits the entire planet, and there’s that very famous statistic, that there’s more energy that hits the Earth’s surface in an hour than the human race uses in a year, I think. The sun has been waving this massive energy paycheck in our face every second since it started burning, and we haven’t been able to bank it very well.
So we’ve been running into the savings account, which is fossil fuels. That’s sunshine that has been laid down for us very dutifully by Mother Nature over hundreds of millions of years, and we can dig it up, thank you very much. Thank you for the savings account, but now, we don’t need the savings account so much, because we can actually bank the stuff as it’s coming towards us with the improving renewable technologies that are out there. Couple that with an energy Internet, and you start to make your energy and your fuel where you are. I’m also an advisor to Richard Branson’s “Virgin Earth Challenge”, which is a twenty-five million dollar prize for taking carbon out of the atmosphere.
You have to be able to do that in an environmentally-sustainable way, and make a profit while you’re doing it. And I have to be very careful and say this is not the official view of the Virgin Earth Challenge, but I am fairly confident that we will award that prize in the next three to four years, because we’ve got finalists that are taking carbon directly out of the air and turning it into fuel, and they’re doing it at a price point that’s competitive with fossil fuels.
So if you distribute the production of liquid fuels and electricity and anybody can do it, that means you as a school can do it, you as a local business can do it. And what you find is when people do take control of the energy system, because they’re not so motivated by making a profit, the energy is cheaper, they maintain it better, and everybody’s happier.
There’s a town in the middle of Texas right now called Georgetown—65,000 people, mostly Trump voters, who I imagine are not that interested in the climate change threat, as conservatives generally don’t seem to think that is a problem—and they’re all moving over to renewables, because it’s just cheaper than using oil, and they are in the middle of central Texas. I think we’re definitely going in that direction.
You’re entirely right. I am going to pull these numbers from my head, so they could be off, but something like four million exajoules of sunlight falls on the planet every year, and humanity needs five hundred. That’s what it is right now. It’s like four million raining down, and we have to figure out how to pull five hundred of them out and harvest those economically. Maybe, if the Virgin Earth Prize works, there’s going to be a crisis in the future—there’s not enough carbon in the air! They’ve pulled it all out at a profit.
That would be a nice problem to have, because we’ve already proven to ourselves that we can put carbon in the air. That’s not going to be a problem if it’s getting too low.
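Editor’s note: taking Byron’s back-of-the-envelope figures at face value (he flags that they may be off), the arithmetic behind the comparison works out as follows.

```latex
\frac{5 \times 10^{2}\ \text{EJ (annual human energy use)}}
     {4 \times 10^{6}\ \text{EJ (annual solar energy reaching Earth)}}
\approx 1.25 \times 10^{-4} \approx 0.0125\%
```

On those figures, humanity would only need to capture roughly one part in eight thousand of the sunlight arriving each year.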
So let’s return to artificial intelligence for a moment. I want to throw a few things at you. Two different views of the world—I’d love to talk about each one by itself. One of them is that the time it takes for a computer to learn to do a task gets shorter and shorter as we learn how to do it better, and that there’s some point at which it is possible for the computer to learn to do everything a human can do, faster than a human can do it. And at that point there are literally no jobs, or could be literally no jobs, if we chose that view. I’m curious whether you think that or not, but assuming that it is true, what do you think happens?
I think we find new kinds of jobs. I really do. The thing is that the clue is in the name, “artificial intelligence.” We have planes; that’s artificial flying. We don’t fly the same way that birds fly. We’ve created an entire artificial way of doing it. And the intelligences that will come out of computers will not be the same as human intelligence.
They might be as intelligent, arguably, although I am not convinced of that yet, but they will be very different intelligences—in the same way that a dog’s intelligence is not the same as an ant’s intelligence, which is not the same as my Apple MacBook’s intelligence, if it has any, which is not the same as human intelligence. These intelligences will do different things.
They’ll be artificial intelligences and they’ll be very, very good at some things and very bad at other things. And the human intelligence will have certain abilities that I don’t think a machine will ever be able to replicate, in the same way that I don’t believe a wasp is ever going to be as good as me at playing the bass guitar and I am never going to be as good as it at flying.
So what would be one of those things that you would be dubious that artificial intelligence would be able to do?
I think it is the moral questions. It’s the actual philosophy of life—what are we here for, where are we going, why are we doing it, what’s the right thing to do, what do we value, and also the curiosity. I interviewed Hod Lipson at Columbia and he was very occupied with the idea of creating a computer that was curious, because I think curiosity is one of those things that sort of defines a human intelligence, that machines, to my knowledge, don’t have in any measurable sense.
So I think it would be those kind of very uniquely human things—the ability to abstract across ideas and ask moral, ethical questions and be curious about the world. Those are things that I don’t see machines doing very well at the moment, at all, and I am not convinced they’ll do them in the future. But it’s such a rapidly evolving field and I’m not a deep expert in AI, and I’m willing to be proved wrong.
So, you don’t think there will ever be a book One Curious Computer Sets Out To Answer What’s Next? 
Do you know what? I don’t, but I really wish there was because I’d love to go on stage and have that panel discussion with that computer.
Then, let’s push the scenario one step further. I would have to say it’s an overwhelming majority of people who work in the AI field who believe that we will someday—and interestingly, the estimates range from five to five hundred years—make a general intelligence. And it begins with the assumption that we, our brains and our minds, are machines and therefore, we can eventually build a mechanical one. It sounds like you do not hold that view.
It’s a nuanced view. Again, it’s interesting to discuss these things. What we’re really talking about here is consciousness, because if you want to build an “artificial general intelligence,” as they call it, what you’re talking about is building a conscious machine that can have the same kind of thoughts and reflections that we associate with our general intelligence. Now, there are two things I’d say.
The first is, to build a conscious machine, you’d have to know what consciousness is, and we don’t. And we’ve been arguing about it for two thousand years. I would also say that some of the most interesting work in that field is happening in AI, particularly in robotics, because in nature, there is no consciousness without a body. It may be that consciousness isn’t actually one thing but eight separate questions we have to answer, and once we’ve worked out what those eight are, we can answer them with technology. I think that might be a plausible route.
And clearly, as you point out, consciousness must be computable, because we are computing it right now. You and I are “just” DNA computer code being read, and that computer code generates proteins and lipids and all kinds of things to make us work, and we’re having this conversation as a result of these computer programs that are running in our cells. So clearly, consciousness is computable, but I am still very much to be convinced that we have any idea of what consciousness really is, or whether we’re even asking the right questions about it.
To your point, we’re way ahead of ourselves in one sense, but do you think that in the end, if you really did have a conscious computer, a conscious machine, does that in some way undermine human rights? In the sense that we think people have these rights by virtue of being conscious and by virtue of being sentient, being able to feel pain? Do you think that if all of a sudden, the refrigerator and everything in your house also made that claim, that we are somehow lessened by it, not that the machines are somehow ennobled by it?
I would hope not. George Church, the Harvard Medical School geneticist, said to me, “If you could show me a conscious machine, I wouldn’t be frightened by it. I’d be emboldened by it, I’d be curious about how that thing works, because then I’d be able to understand myself better.”
I was asked just recently by the people who are making “The Handmaid’s Tale,” the TV series based on the Margaret Atwood book, “What do you think AI is going to do for humanity?” Hopefully, one scenario is that it helps us understand ourselves better, because if we are able to create that machine that is conscious, we will have to answer the question, “What is consciousness?” as I said earlier, and when we’ve done that, we will also have unlocked some of the great secrets about ourselves, about our own motivations, about our emotions, why we fight, what’s good for us, what’s bad for us, how to handle depression. We might open a whole new toolbox on actually understanding ourselves better.
One interpretation of it is that actually creating artificial general intelligence is one of the best things that could happen to humanity, because it will help us understand ourselves better, which might help us achieve more and be better human beings.
At the beginning of our chat, you listed a litany of what you saw as the big challenges facing our planet. You mentioned income inequality. So, absent wide-scale redistribution, technology, in a sense, promotes that, doesn’t it?
Microsoft, Google and Facebook between them have generated 12 billionaires, so it’s evidently easier to make a billion dollars now—not me, but for some people to make billions now—than it would’ve been twenty years ago or five hundred years ago for that matter. Do you think that technology in itself, by multiplying the abilities of people and magnifying it ever-more, is a root cause of income inequality? Or do you think that comes from somewhere else?
I think income inequality comes from the way our capital markets and our property law work. If you look at democracy, for instance, there are several pillars to it. If you talk to a political philosopher, they’ll say, you know, a functioning democracy has several things that need to be working. You need to have universal suffrage, so everybody gets to vote; you need to have free and fair elections; you need to have a free press; you need to have a judiciary that isn’t influenced by the government, etcetera.
The other thing that’s mentioned but less talked about is working property rights. Working property rights say that you, as a citizen, have the right to own something, whether that’s some property or machinery or an idea, and you are allowed to generate an income from that and profit from it. Now that’s a great idea, and it’s part of entrepreneurship and going and creating something, but the problem is once you have a certain amount of property that you’ve profited from, you would then have more ability to go and buy some property from other people.
What’s happening is the property rights, whether they’re intellectual or physical, have concentrated themselves in fewer and fewer hands, because as you get rich, it’s easier to buy other stuff. And I know this from my own experience. I used to be a poor musician-student. Now, I’m doing pretty well and I find myself today buying some shares in a company that I thought was going to do really well… and they did. And I find myself just thinking, “Wow, that was easy.” It’s easy for me now because I have more property rights to acquire more property rights, and that’s what we’re seeing. There’s a fundamental problem there somewhere, and I am not quite sure how we deal with it.
After World War II, England toyed with incredibly high, sometimes over 100%, marginal taxes on unearned income, and I think The Beatles figured they needed to leave. What is your take on that? Did it work? Is that an experiment you would advocate repeating, or what did we learn from it?
I think we’ve learnt that’s a very bad way of doing it. Again, it comes back to how much things cost. If things are expensive and you’re running a state, you need to collect more taxes. We’re having this huge debate in the UK at the moment about the cost of the National Health Service, and how you fund that. To go back to some of our earlier conversation, if you suddenly reduce the cost of energy to very little, actually everything gets cheaper—healthcare, education, building roads.
If you have a whole bunch of machines that can do stuff for you cheaper than humans could do it, in one way, that’s really good, because now you can provide healthcare, education, road building, whatever… cheaper. The question is, “How does the job market change then? Where do human beings find value? Do we create these higher-valued jobs?” One radical idea that’s come out at the moment is this idea of universal basic income.
The state now has enough money coming in because the cost of energy has gone down, and it can build stuff much more cheaply. We’ll just get a salary from the state anyway, to follow our dreams. That’s one plausible scenario.
Moving on, I would love to hear more about the book that’s just come out. I’ve read what I could find online, I don’t have a copy of it yet. What made you write We Do Things Differently, and what are you hoping it accomplishes?
So with my first book, which is really an attempt to talk about the cutting edge of technology and what’s happening with the environment in an entertaining way for the layman, I got to the end of that book and it became very clear to me that we have all the technology we need to solve the world’s grand challenges, whether that’s the price of energy, or climate change, or problems with manufacturing.
We’re not short of technology. If we didn’t invent another thing from tomorrow onwards, we could deal with all the world’s grand challenges, we could distribute wealth better, we could do all of it. But it’s not technology that’s the problem. It’s the administration, it’s the way we organize ourselves, it’s the way our systems have been built, and how they’ve become kind of fossilized in the way they work.
What I wanted to do with this book is look at systems and look at five key human systems—energy, healthcare, food, education and governance—and say, “Is there a way to do these better?” It wasn’t about me saying, “Here’s my idea.” It was about me going around the world and finding people who’ve already done it better and prevailed and say, “What do these people tell us about the future?”
Do they give us a roadmap to and a window on a future that is better run, more sustainable, kinder to everybody, etcetera? And that’s what it is. It’s a collection of stories of people who’ve gone and looked at existing systems, challenged those systems, built something better, and they’ve succeeded and they’ve been there for a while—so you can’t say it was just like a six-month thing. They’re actually prevailing, and it’s those stories in education, healthcare, food, energy and governance.
I think the saddest fact I know, in all the litany of the things you run across, any time food comes up, it jumps to the front of my mind. There are a billion people, more or less—960-something million—who are hungry. You can go to the UN’s website and download a spreadsheet that lists them out by country.
The sad truth is that seventy-nine percent of hungry people in the world live in nations that are net food exporters. So, the food that’s made inside of the country can be sold on the world market for more than the local people can pay for it. The truth in the modern age is not that you starve to death if you have no food; it is that you starve to death if you have no money. What did you find?
 There’s an even worse fact that I can tell you, which is, the human race wastes between thirty and fifty percent of the food it makes, depending on where you are in the world, before it even reaches the market. It spoils or it rots or it gets wasted or damaged between the field and the supermarket shelf, and this is particularly prevalent in the global south, the hotter countries. And the reason is we simply don’t have enough refrigeration, we don’t have enough cold chains, as they’re called.
So one of the great pillars of civilization, which we kind of take for granted and don’t really think about, is refrigeration and cooling. In the UK, where I am, sixteen percent of our electricity is spent on cooling stuff, and it’s not just food. It’s medical tissues and medicines and all that kind of stuff. And if you look at sub-Saharan Africa, it’s disastrous, because the food they are growing, they are not even eating, because it spoils too quickly, because we don’t have a sustainable refrigeration system for them to use. And one of the things I look at in the book is a new sustainable refrigeration system that looks like it could solve that problem.
You also talk about education. What do you advocate there? What are your thoughts and findings?
I try not to advocate anything, because I think that’s generally vainglorious and I’m all about debate and getting people to ask the right questions. What I will do is sort of say, look, this person over here seems to have done something pretty extraordinary. What lessons can we draw from them?
So, I went to see a school in a very, very rough housing estate in Northern England. This is not an urban paradise; this is a tough neighborhood, lots of violence, drug dealing, etcetera, low levels of social cohesion, and in the middle of this housing estate there was a school that, I think, the government called the fifth worst school in the entire UK, and they were about to close it. A guy called Carl turns up as the new headmaster, and two years later, it’s considered one of the best schools in the world, and he’s done all that without changing any staff. He took the same staff everybody thought was rubbish, and two years later, they’re regarded as some of the best educators in the world.
And the way he did that is not rocket science. It was really about creating a collaborative learning environment. One of the things he said was, “Teachers don’t work in teams anymore. They don’t watch each other teach. They don’t learn about the latest of what’s happening in education. They kind of become atomized and just do their lessons, so I’m going to get them working as a team.”
He also said they lost any culture of aspiration about what they should be doing, so they were just trying to get to the end of the week, rather than saying, “Let’s create the greatest school in the world.” So he took some very simple management practices, which were about: “We’re going to aspire to be the best, and we’re going to start working together, and we’re going to start working with our kids.”
And he did the same with the kids, even though they were turning up at this school four years old, most of them still in nappies, most of them without language, even at four—by the time they were leaving, they were outperforming the national average, from this very rough working-class estate. By also working with the kids in the same way and saying, “Look, what’s your aspiration? How are we going to design this together collectively as a school—you the students, us the teachers?”
This is actually good management practice, but introduced into a school environment, and it worked very well. I am vastly trivializing the amount of sweat and emotional effort he had to put into that. But, again, talking about teamwork: rather than splitting the world up into subjects, which is what we tend to do in schools, he said, “Let’s pick things that the kids are really interested in, and we’ll teach the subjects along the way, because they’ll all be interrelated with each other.”
I walked into a classroom there and it’s decked out like NASA headquarters, because they picked the theme of space for this term for this particular class. But of course, as they talk about space and astronauts, they learn about the physics, the maths, they learn about the communications, they learn about history…
And I said to Carl, “Once they’re given this free environment, how do they feel when exams come along, which is a very constraining environment?” He said, “Oh, they love it.” I’m like, “You’re kidding me!” He said, “No, they can’t wait to prove how much they’ve learnt.”
None of this is rocket science, but it’s really interesting that education is one of those places where, when you try and do anything new, someone is going to try to kill you, because education is autobiography. Everybody’s been through it, and everybody has a very prejudiced view of what it should be like. So for any change, it’s always going to upset somebody.
You made the statement that even if we didn’t invent any new technology, we would know how to solve all of life’s greatest challenges. I would like to challenge that and say, we actually don’t know how to solve the single biggest challenge.
This sounds good.
Death.
Death! That’s an interesting question, whether you view it as a challenge or not.
I think most people, even if they don’t want to live indefinitely, would say that the power to choose the moment of your own demise is something many people aspire to—to live a full life and then choose the terms of their own ending. Do you think death is solvable? Or at least aging?
 I think aging is probably solvable. Again, I am not a high-ranking scientist in this area, but I know a number of them. I was working with the chief scientist at one of our big aging charities recently, and if you look at the research that’s coming out from places like Stanford and Harvard, there’s an incredible roadmap to humans living healthy lives in healthy bodies till one hundred and ten, one hundred and thirty. Stanford have been reversing human aging in certain human cell lines since 2014.
The problem is, of course, it turns out that what’s good for helping humans live longer is also often quite good for promoting cancer. And so that’s the big conundrum we have at the moment. Certainly, we are living longer and healthier anyway. Average life expectancy has been rising a quarter-year for every year, for the last hundred years. Technology is clearly doing something in that direction.
Well, what it seems to be doing is ending premature death, but the number of people who live to be supercentenarians, one hundred and ten and above, is about forty, and it doesn’t seem to be going up particularly.
Yeah, I think that’s true. But it depends what you call “premature death,” because actually, certainly the age at which we die is definitely creeping up. But if we can keep ourselves a bit younger, if we can, for instance, find a way to lengthen the telomeres in our cells without encouraging cancer, that’s a really good thing because most of the diseases we end up dying from are the diseases of aging—cardiovascular disease, stroke, etcetera.
We haven’t solved it yet. You asked me if I think it’s solvable. Like you, I think I am fairly optimistic about the human race’s ability to finally ask the right questions, and then find answers to them. But I think we still don’t really understand aging well enough yet to solve it, but I think we’re getting there much faster, I would say, than we are perhaps with an artificial general intelligence.
Talk about the “Atlas of the Future” project.
Ah, I love the Atlas. The Atlas is kind of the first instantiation of something from the Democratizing the Future society. What we’re trying to do is to say, “Look, if we want the world to progress in a way that’s good for everybody, it needs to involve everybody.” And therefore, you need to be literate about the questions the future asks you, and not just literate about threats, which is what we get from the media. The general media will just walk in and go, “It’s all going to be terrible, everyone’s trying to kill you.” They’ll drop that bomb and then just walk away, because that gets your attention.
We are trying to say, “Yeah, all those stories are worth paying attention to, and there are a whole other bunch of stories worth paying attention to, about what we can do with renewables, what we can do to improve healthcare, what we can do to improve social cohesion, what we can do to improve happiness, what we can do to improve nations understanding each other, what we can do to reduce partisan political divides, etcetera.” And we collect all that stuff. So it’s a huge media project.
If you go to “The Atlas of the Future,” you’ll find all these projects of people doing amazing stuff—some of them very big-picture stuff, some of it small-picture stuff. Subsequently, what we’re doing with that content is farming it out via TV series, the books I write, and a podcast—The Futurenauts, which is me and my friend Ed Gillespie—where we talk about the stuff on the Atlas and interview people.
So it’s about a way of creating a culture of the future that’s aspirational, because we kind of feel that, at the moment, we’re being asked to be fearful of the future and run away in the opposite direction. And we’d like to put on the table the idea that the future could be great, and we’d like to run towards that, and get involved in making it.
And then, what’s this third book you are working on?
The third book is just an idea at the moment, but it is about how we get our administration, our government, our bureaucracy to move at something like a similar pace to the pace of ideas and technology, because it seems to me that it’s that friction that causes so many of the problems—that we don’t move forward fast enough. The time it takes to approve a drug is stratospheric, and there are some good reasons for that; I am not against the work the FDA does, but when you’re looking at, sometimes, twelve or thirteen years for a drug to reach the market, that’s got to be too slow.
And so, if we can get those parts of the human experience—the technology, the philosophy and the bureaucracy—working at roughly the same clock speed, then I think things would be better for everybody. And that’s the idea I want to explore in the next book—how we go about doing that. Some of it, I think, will be blockchain technology, some of it might be the use of virtual reality, and a whole bunch of stuff I probably haven’t found out yet. I’m really just asking that question. If any of your listeners have any ideas about the technologies or approaches or philosophies that will help us solve that, I’d love to hear from them.
You mentioned a TV program earlier. In views of the future, science fiction movies, TV, books, all of that, what do you read or watch that you think, “Huh, that could happen. That is a possible outcome”? What do you think is done really well?
It’s interesting, because I have a sixteen-month old child, and I am trying to write a book and save the world, so I hardly watch anything. I think it’s very difficult to cite fiction as a good source. It’s an inspiration, it’s a question, but it never turns out how we imagine. So I take all those things with a pinch of salt, and just enjoy them for what they are.
I have no idea what the future is going to be like, but I have an idea that it could be great, and I’d like it to be so. And actually, there is no fiction really like that, because if you look at science fiction, generally, it’s dystopian, or it’s about conflict, and there’s a very good reason for that—which is that it’s entertaining. Nobody wants to watch a James Cameron movie where the robots do your gardening. That’s not entertaining to watch. Terminator 3: Gardening Day is nothing that anybody is going to the cinema to see.
I’m in full agreement with that. I authored a book called Infinite Progress, and, unlike you, I have a clearer idea of what I think the future is going to be. And I used to really be bothered by dystopian movies, mainly because I am required to go see them. Because everybody’s like, “Did you see Elysium?” So, I have to go see and read everything, because I’m in that space. And it used to bother me, until I read a quote, I think by Frank Robert—I apologize if it isn’t him—who said, “Sometimes, the job of science fiction is to warn you of something that could happen so that you have your guard up about it,” so you’re like, “A-ha! I’m not going to let that happen.” It kind of lets the cat out of the bag. And so I was able to kind of switch my view on it by keeping that in mind, that these are cautionary tales.
I think we also have to adopt that view with the media. The media leads on the stuff that is terrifying, because that will get our attention, and we are programmed as human beings to be cautious first and optimistic second. That makes perfect sense on the African savanna. If one of your tribe goes over the hill without checking for big cats, and gets eaten by a big cat, you’re pretty cynical about hills from that moment on. You’re nervous of them, you approach them carefully. That’s the way we’re kind of programmed to look at the world.
But of course, that kind of pessimism doesn’t move us forward very much. It keeps us where we are, and even worse than that is the cynicism. And of course, cynicism is just obedience to the status quo, so I think you can enjoy the entertainment, and enjoy the dystopia, enjoy us fighting the robots, all that kind of stuff. One thing you do see about all those movies is that eventually, we win, even if we are being attacked by aliens or whatever; we usually prevail. So whilst they are dystopian, there is this yearning amongst us, saying, “Actually, we will prevail, we will get somewhere.” And maybe it will be a rocky ride, but hopefully, we’ll end up in the sunshine.
An Optimist’s Tour of the Future is still available all over the world—I saw it was in, like, nine languages—and you can order that from your local book proprietor. And We Do Things Differently, is that out in the US? When will that be out in the US?
It’s out in the US early next year. We don’t have a publication date yet, but I am told by my lovely publishers that it will be sort of January-February next year. But you can buy the UK edition on Amazon.com and various other online stores, I’m sure.
If people want to follow you and follow what you do and whatnot, what’s the best way to do that? 
My Twitter handle is @Optimistontour. You can learn about me at my website, which is markstevenson.org, and check out “The Futurenauts” podcast at thefuturenauts.com where we do something similar to this, although we have more swearing and nakedness than your podcast. Also, get yourself down to “Atlas of the Future.” I think that would be the central place to go. It’s a great resource for everybody, and that’s not just about me—there’s a whole bunch of future, forward-thinking people on that. Future heroes. We should probably get you on there at some point, Byron.
I would be delighted. This was an amazing hour! There could be a Mark Stevenson show. It’s every topic under the sun. You’ve got wonderful insights, and thank you so much for taking the time to share them with us. Bye!
 Cheers! Bye!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 27: A Conversation with Adrian McDermott

[voices_in_ai_byline]
In this episode, Byron and Adrian discuss intelligence, consciousness, self-driving cars and more.
[podcast_player name=”Episode 27 – A Conversation with Adrian McDermott” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-15-(00-58-48)-adrian-mcdermott.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Adrian McDermott, he is Zendesk’s President of Products, where he works to build software for better customer relationships, including, of course, exploring how AI and machine learning impact the way customers engage with businesses. Adrian is a Yorkshireman, living in San Francisco, and he holds a Bachelor of Science in Computer Science from De Montfort University. Welcome to the show, Adrian!
Adrian McDermott: Thanks, Byron! Great to be here!
My first question is almost always: What is artificial intelligence?
When I think about artificial intelligence, I think about AI as a system that can interact with and learn from its environment in an independent manner. I think that’s where the intelligence comes from. AI systems have traditionally been optimized for achieving specific tasks. In computer science, we used to write programs using procedural languages, and we would tell them exactly what to do at every stage. With AI, it can actually learn and adapt from its environment and, you know, reason to a certain extent, and build the capabilities to do that. Narrowly, I think that’s what AI is, but societally, the term takes on a series of connotations—some scary, and some super interesting and exciting meanings and consequences—when we think about it and when we talk about it.
We’ll get to that in due course, but back to your narrow definition, “It learns from its environment,” that’s a pretty high bar, actually. By that measure, my dog food bowl that automatically refills when it runs out, even though it’s reacting to its environment, is not learning from its environment; whereas a Nest thermostat, you would say, is learning from its environment and therefore is AI. Did I call the ball right on both of those, kind of the way you see the world?
I think so. I mean, your dog bowl, perhaps, it learns, over time, how much food your dog needs every day, and it adapts to its environment, I don’t know. You could have an intelligent dog bowl, dog feeding system, hopefully one that understands the nature of most dogs is to keep eating until they choke. That would be an important governor on that system, let’s be honest, but I think in general that characterization is good.
We, as biological computational devices, learn from our environment and take in a series of inputs from those environments and then use those experiences, I think, to pattern match new stimuli and new situations that we encounter so that we know what to do, even though we’ve never seen that exact situation before.
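Editor’s note: purely as an illustration of the distinction being drawn here (not anything either speaker actually built), a minimal Python sketch might contrast a fixed-rule device with one that adapts a parameter from what it observes. All class and method names are hypothetical.

```python
class FixedRefillBowl:
    """Reacts to its environment but never changes its behavior."""

    def __init__(self, refill_threshold_grams=50):
        self.refill_threshold = refill_threshold_grams  # hard-coded rule

    def should_refill(self, current_weight_grams):
        # The same rule forever, no matter what the dog actually eats.
        return current_weight_grams < self.refill_threshold


class AdaptiveThermostat:
    """Reacts AND learns: nudges its setpoint toward observed user overrides."""

    def __init__(self, setpoint_celsius=20.0, learning_rate=0.1):
        self.setpoint = setpoint_celsius
        self.learning_rate = learning_rate

    def record_override(self, user_chosen_temperature):
        # Move the learned setpoint a little toward what the user keeps choosing.
        self.setpoint += self.learning_rate * (user_chosen_temperature - self.setpoint)

    def should_heat(self, current_temperature):
        return current_temperature < self.setpoint


# Toy usage: the thermostat's behavior drifts toward the household's habits,
# while the bowl's rule never changes.
thermostat = AdaptiveThermostat()
for observed in [22.0, 22.5, 22.0, 23.0]:
    thermostat.record_override(observed)
print(round(thermostat.setpoint, 2))  # has crept up from 20.0
```

The bowl only ever applies the rule it shipped with; the thermostat’s future behavior depends on what it has observed, which is roughly the line Adrian is drawing between a merely reactive device and one that learns.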
So, and not to put any words in your mouth, but it sounds like you think that humans react to our environment and that is the source of our intelligence, and a computer that reacts to its environment, it’s artificial intelligence, but it really is intelligent. It’s not artificial, it’s not faking it, it really is intelligent. Is that correct?
I think artificial intelligence is this ability to learn from the environment, and come up with new behaviors as a result of this learning. There is a tremendous number of examples of AI systems that have created new ways of doing things and have learned. I think one of the most famous is move thirty-seven by Google’s AlphaGo when it was playing the game Go against Lee Sedol, one of the greatest players in the world. It performed a move that was shocking to the Go community and the Go intelligentsia, because it had learned and it had evolved its thinking to a point where it created new ways of doing things that were not natural for us as humans. I think artificial intelligence, really, when it fulfills its promise, is able to create and learn in that way, but currently most systems do that within a very narrow problem domain.
With regard to an artificial general intelligence, do you think that the way we think of AI today eventually evolves into an AGI? In other words, are we on a path to create one? Or do you think a truly generalized intelligence will be built in a completely different way than how we are currently building AI systems today?
I mean, there are a series of characteristics of intelligence that we have, right, that we think about. One of them is the ability to think about a problem, think about a scenario, and run our head through different ways of handling that scenario and imagine different outcomes, and almost to self actualize in those situations. I think that modern deep-learning techniques actually are, you know, the construction is such that they are looking at different scenarios to come up with different outcomes. Ultimately, we don’t necessarily, I believe it’s true to say, understand a great deal about the nature of consciousness and the way that our brains work.
We know a lot about the physiology, not necessarily about the philosophy. It does seem like our brains are sort of neuron-based computation devices that take a whole bunch of inputs and process them based on stored experiences and learnings, and it does seem like that’s the kind of systems that we’re building with artificial-intelligence-based machines and computers.
Given that technology gets better every year, year over year, it seems like a natural conclusion that ultimately technology advancements will be such that we can reach the same point of general intelligence that our cerebral cortex reached hundreds of thousands of years ago. I think we have to assume that we will eventually get there. It seems like we’re building the systems in the same way that our brains function right now.
That’s fascinating, because that description of humans’ ability to imagine different scenarios is in fact some people’s theory as to how consciousness emerged. And, not to put you on the spot, because, as you said, we don’t really know, but is that plausible to you? That being able to, essentially, carry on that internal dialogue, “I wonder if I should go pull that tiger’s tail,” is that what you think made us conscious, or are you indifferent on that question?
I only have a layman’s opinion, but, you know, there’s a test—I don’t know if it’s in evolutionary biology or psychology—the mirror test where if you put a dog in front of a mirror it doesn’t recognize itself, but Asian elephants and dolphins do recognize themselves in the mirror. So, it’s an interesting question of that ability to self-actualize, to understand who you are, and to make plans and go forward. That is the nature of intelligence and from an evolutionary point of view you can imagine a number of ways in which that consciousness of self and that ability to make plans was essential for the species to thrive and move forward. You know we’re not the largest species on the planet, but we’ve become somewhat dominant as a result of our ability to plan and take actions.
I think certain behaviors that we manifest came from the advantageous nature of cooperation between members of our species, and the way that we act together and act independently and dream independently and move together. I think it seems clear that that is probably how consciousness evolved, it was an evolutionary advantage to be conscious, to be able to make plans, to think about oneself, and we seem to be on the path where we’re emulating those structures in artificial intelligence work.
Yeah, the mirror test is fascinating because only one bird passes it and that is the magpie.
The magpie?
Yeah, and there’s recent research, very recent, that suggests that ants pass it, which would be staggering. It looks like they’ve controlled for so many things, but it is unquestionably a fascinating thing. Of course, people disagree on what exactly it means.
Yeah, what does it mean? It’s interesting that ants pass because ants do form a multi-role complex society. So, is it one of the requirements of a multi-role complex society that you need to be able to pass the mirror test, and understand who you are and what your place is in that society?
Yeah, that is fascinating. I actually emailed Gallup and asked him, “Did you know ants passed the test?” And he’s like, “Really, I hadn’t heard that?” You know, because he’s the originator of it.
The argument against the test goes like this: If you put a red dot on a dog’s paw, the dog knows that’s its paw and it might lick it off its own paw, right? The dog has a sense of self, it knows that’s its foot. And so, maybe all the mirror test is doing is testing to see if the dog is smart enough to understand what a mirror is, which is a completely different thing.
Do you think, by extension, and again with your qualification that it’s a layman’s viewpoint, I asked you a question about AGI and you launched into a description of consciousness. Can I infer from your answer that you believe that an AGI will be conscious?
You can infer from my answer that I believe that to have a truly artificial general intelligence, I think that consciousness is a requirement, or some kind of ability to have freedom in thought direction. I think that is part of the nature of consciousness or one way of thinking about it.
I would tend to agree, but let me just… Everybody’s had that sensation where you’re driving and you kind of space out, right, and all of a sudden you snap to a minute later and you’re like, “Whoa, I don’t have any memory of driving to this spot,” and, in that moment, you merged traffic, you changed lanes, and all of that. So, you acted intelligently, but you were not, in a sense, conscious at that moment. Do you think that saying, “Oh, that’s an example of intelligence without consciousness,” is the problem? Like, “No, no, you really were conscious all that time,” or is it like, “No, no, you didn’t have, like, some new idea or anything, you just managed off rote”? Do you have a thought on that?
I think it’s true that so much of what we do as beings is managed off rote, but probably a lot of the reason we’re successful as a species is because we don’t just go off rote. Like, if someone had driven in front of you, or the phone had rung—if any of these things had happened—that would have created an event important enough to be stored in short-term memory while you were driving, and you would have moved into a different mode of consciousness. I think the human brain takes in a massive amount of input in some ways, but filters it down to just this, quote unquote, “stream of consciousness” of experiences that are important, or things that are happening. And it’s that filter of consciousness, or the filter of the brain, that puts you in the moment where you’re dealing with the most important thing. That, in some ways, characterizes us.
When we think about artificial intelligence and how machines experience the world, I mean, we have five sensory inputs falling into our brains and our memories, but a machine can have, yes, vision and sound, but also GPS, infrared, just some random event stream from another machine. There are all of these inputs that act as sensors for an artificially-intelligent machine and that are, in some ways, much richer and more diverse, or could be. And that governor, that thing that filters that down, figures out what the objective is for the artificial intelligence machine and takes the right inputs and does the right pattern matching and does the right thinking, is going to be incredibly important to achieving, I think, artificial general intelligence. Where it knows how to direct, if you like, its thoughts, and how to plan and how to do and how to act, how to think about solving problems.
This is fascinating to me, so I have just a few more questions about AGI, if you’ll just indulge me for another minute. The range of time that people think it’s going to take us to get it, by my reckoning, is five years on the soonest and five-hundred on the longest. Do you have any opinion of when we might develop an AGI?
I think I agree with five years on the soonest, but, you know, honestly one of the things I struggle with as we think about that is, who really knows? We have so little understanding of how the brain actually works to produce intelligence and sentience that it’s hard to know how rapidly we’re approaching that or replicating it. It could be that, as we build smarter and smarter non-general artificial intelligence, eventually we’ll just wander into a greater understanding of consciousness or sentience by accident just because we built a machine that emulates the brain. That’s, in some ways, a plausible outcome, like, we’ll get enough computation that eventually we’ll figure it out or it will become apparent. I think, if you were to ask me, I think that’s ten to fifteen years away.
Do you think we already have computers fast enough to do it, we just don’t know how to do it, or do you think we’re waiting on hardware improvements as well?
I think the primary improvements we’re waiting on are software, but software activities are often constrained by the power and limits of the hardware that we’re running it on. Until you see a more advanced machine, it’s hard to practically imagine or design a system that could run upon it. The two things improve in parallel, I think.
If you believe we’ll maybe have an AGI in fifteen years, that it could very easily be conscious, and that if it’s conscious it would presumably have a will, then are you one of the people who worries about that? The superintelligence scenario, where it has different goals and ambitions than we have?
I think that’s one of many scenarios that we need to worry about. In our current society, any great idea, it seems, is weaponizable in a very direct way, which is scary. The way that we’re set up, locally and globally, is intensely competitive, where any advantage one could eke out is then used to dominate, or take advantage of, or gain advantage from our position against our fellow man in this country and other countries, globally, etcetera.
There’s quite a bit of fear-mongering about artificial general intelligence, but, artificial intelligence does give the owner of those technologies, the inventor of those technologies, innate advantages in terms of taking and using those technologies to get great gain. I think there’s many stages along the way where someone can very competitively put those technologies to work without even achieving artificial general intelligence.
So, yes, the moment of singularity, when artificial general intelligence machines can invent machines that are considerably faster in ways that we can’t understand. That’s a scary thought, and technology may be out-thinking our moral and philosophical understanding of the implications of that, but at the same time some of the things that we’re building now—like you said, are just fifty percent better or seventy-seven percent smarter—could actually be, through weaponization or just through extreme mercantile advantage taking, those could have serious effects on the planet, humankind, etcetera. I do believe that we’re in an AI arms race and I do find that a little bit scary.
Vladimir Putin just said that he thinks the future is going to belong to whoever masters AI, and Elon Musk recently said, “World War Three will be fought over AI.” It sounds like you think that’s maybe a more real-world concern than the rogue AGI.
I think it is, because we’ve seen tremendous leaps in the capability of technology just in the last five years, certainly within the last five to ten years. More and more people are working in this problem domain; that number must be doubling every six months, or something ridiculous like that, in terms of the number of people who are starting to think about AI and the number of companies deploying some kind of technology. As a result, there are breakthroughs that are going to begin happening, either in public academia or, more likely, in private labs, that will be leverageable by the entities that create them in really meaningful ways.
I think by one count there are twenty different nations whose militaries are working on AI weapons. It’s hard to get a firm grip on it because: A, they wouldn’t necessarily say so, and, B, there’s not a lot of agreement on what the term AI means. In terms of machines that can make kill decisions, that’s probably a reasonable guess.
I think one shift that we’ve seen, and, you know, this is just anecdotal and my own opinion, is that so much of basic research in computer science and artificial intelligence has traditionally been done in academia: done publicly, published, and for the public good. But if you look at artificial intelligence now, the greatest minds of our generation are not necessarily working in the public sphere; they’re locked up, tied up in private companies, generally very, very large companies, or they’re working in the military-industrial complex. I think that’s a shift, I think that’s different from scientific discovery, medical research, all these things in the past.
The closed-door nature of this R&D effort, and the fact that it’s becoming almost a national or nationalistic concern, with very little… You know, there are weapons treaties, there are nuclear treaties, there are research weapons treaties, right? I think we’re only just beginning to talk about AI treaties and AI understanding, and we’re a long way from any resolution, because the potential gains for whoever goes first, or makes the biggest discovery first, makes the great breakthrough first, are tremendous. It’s a very competitive world, and it’s going on behind closed doors.
The thing about the atomic bomb is that it was hard to build; even if you knew how to build it, it was hard. AI won’t be that way. It’ll fit on a flash drive, or at least the core technology will, right?
I think building an AGI, some of these things require web-scale computational power that currently, based on today’s technology, that requires data centers not flash drives. So, there is a barrier to entry to some of these things, but, that said, the great breakthrough more than likely will be an algorithm or some great thinking, and that will, yes, indeed, fit on a modern flash drive without any problem.
What do you think of the open AI initiative which says, “Let’s make this all public and share it all. It’s going to happen, we might as well make sure everybody has access to it and not just one party.”
I work at a SaaS company; we build products to sell, and through open-source technologies and cloud platforms, we get to stand on the shoulders of giants and use amazing stuff and shorten our development cycles and do things that we would never be able to do as a small company founded in Copenhagen. I’m a huge believer in those initiatives. I think that part of the reason open source has been so successful on the problems of computer science and computer infrastructure is that, to a certain extent, there’s been a maturation of thought where not every company believes its ability to store and retrieve its data quickly is a defining characteristic for them. You know, I work at Zendesk and we’re in the business of customer service software; we build software that tries to help our customers have better relationships with their customers. It’s not clear that having the best cloud hosting engine or being able to use NoSQL technology is something that’s of tremendous commercial value to us.
We believe in open source, so we contribute back, and we contribute because there’s no perceived risk of commercial impairment in doing that. This isn’t our core IP; our core IP is around how we treat customers. So while I’m a huge believer in the open AI initiative, I don’t think that same belief is necessarily widespread among the parties that are making the big investments in AI research and are at the forefront of that thinking. For some of those entities, there’s a clear notion that they can gain tremendous advantage by keeping anything they invent inside the walled garden for as long as possible and using it to their advantage. I would dearly love that initiative to succeed. I don’t know that right now we have the environment in which it will truly succeed.
You’ve made a couple of references to artificial intelligence mirroring the human brain. Do you follow the Human Brain Project in Europe, which is taking that approach? They’re saying, “Why don’t we just try to replicate the thing that we know can think already?”
I don’t really. I’m delighted by the idea, but I haven’t read too much about it. What are they learning?
It’s expensive, and they’re behind schedule. But it’s been funded to the tune of one and a half billion dollars; I mean, it’s a really serious effort. The challenge is going to be if it turns out that a neuron is as complicated as a supercomputer, that things go on at the Planck level, that it is this incredible machine. Because I think the hope is that if you take it at face value, that is something maybe we can duplicate, but if there’s other stuff going on it might be more problematic.
As an AI researcher yourself, do you ever start with the question, “How do humans do that?” Is that how you do it when you’re thinking about how to solve a problem? Or do you not find a lot of corollaries, in your day to day, between how a human does something and how a computer would do it?
When we’re thinking about solving problems with AI, we’re at the basic level of directed AI technology, and what we’re thinking about is, “How can we remove these tasks that humans perform on a regular basis? How can we enrich the lives of, in our case, the person needing customer service or the person providing customer services?” It’s relatively simple, and so the standard approach for that is to, yes, look directly at the activities of a person, look at ways that you can automate and take advantage of the benefits that the AI is going to buy you. In customer service land, you can remember every interaction very easily that every customer has had with a particular brand, and then you can look at the outcomes that those interactions have had, good or bad, through the satisfaction, the success and the timing. And you can start to emulate those things, remove friction, replace the need for people whatsoever, and build out really interesting things to do.
The primary way to approach the problem is really to look at what humans are doing, and try and replace them, certainly where it’s not their cognitive ability that is necessarily to the fore or being used, and that’s something that we do a lot. But I think that misses the magic, because one of the things that happens with an AI system can be that it produces results that are, to use Arthur C. Clarke’s phrase, “sufficiently advanced to be indistinguishable from magic.” You can invent new things that were not possible because of the human brain’s limited bandwidth, because of our limited memories or other things. You can basically remember all experiences all at once and then use those to create new things.
In our own work, we realize that it’s incredibly difficult, with any accuracy, given an input from a customer, a question from a customer, to predict the ultimate customer satisfaction score, the CSAT score that you’ll get. But it’s an incredibly important number for customer service departments, and knowing ahead of time that you’re going to have a bad experience with this customer based on signals in the input is incredibly useful. So, one of the things we built was a satisfaction-prediction engine, using various models, that allows us to basically route tickets to experts and do other things. There’s no human who sits there and gives out predictions on how a ticket is going to go, how our experience with the customer is going to go; that’s something that we invented because only a machine can do that.
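To make that concrete, here is a minimal sketch of how a satisfaction-prediction model could be wired up, assuming Python and scikit-learn. The conversation only says “various models” were used, so nothing below should be read as Zendesk’s actual implementation; the ticket texts, labels, and the routing idea in the comments are invented placeholders.

```python
# A minimal sketch of a satisfaction-prediction model, assuming scikit-learn.
# The ticket texts and labels are invented placeholders, not real support data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: ticket text paired with the eventual CSAT outcome
# (1 = the customer later rated the interaction "good", 0 = "bad").
tickets = [
    "Where can I download my invoice for last month?",
    "This is the third time I am writing and nobody has replied!",
    "How do I reset my password?",
    "Your product broke again and I want a refund immediately.",
]
satisfied = [1, 0, 1, 0]

# Bag-of-words features feeding a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, satisfied)

# Predict the probability of a good outcome for a new ticket, so a low score
# could be used to route the ticket to an expert agent before things go wrong.
new_ticket = ["I have asked twice already and still have no answer."]
print(model.predict_proba(new_ticket)[0][1])
```

A production system would, of course, train on millions of rated interactions and fold in signals beyond raw text, such as channel, wait time, and customer history.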
So, yes, there is an approach to what we do which is, “How can we automate these human tasks?” But there’s also an approach of, “What is it that we can do that is impossible for humans that would be awesome to do?” Is there magic here that we can put in place?
In addition to there being a lot of concern about the things we talked about, about war and about AGI and all of that, in the narrow AI, in the here and now, of course, there’s a big debate about automation, and what these technologies are going to do for jobs. Just to, kind of, set the question up, there are three different narratives people offer. One is that automation is going to take all of the really low-skilled jobs, and there’ll be a group of people who are unable to compete against machines and we’ll have, kind of, permanent unemployment at the level of the Great Depression or something like that. Then there’s a second camp that says, “Oh, no, no, you don’t understand, it’s far worse than that, they’re going to take everybody’s job, everybody, because there’ll be a moment that the machine can learn something faster than a human.” Then there’s a third one that says, “No, with these technologies, people just take the technology and they use it to increase their own productivity, and they don’t actually ever cause unemployment.” Electricity and mechanization and all of that didn’t increase unemployment at all. Do you believe one of those three, or maybe a fourth one? What do you think about the effects of AI on employment?
I think the parallel that’s often drawn is a parallel to the Industrial Revolution. The Industrial Revolution brought us a way to transform energy from one form into another, and allowed us to mechanize manufacturing, which altered the nature of society from agrarian to industrial, which created cities, which had this big transformation. The Industrial Revolution took a long time. It took a long time for people to move from the farms to the factories, it took a long time to transform the landscape, comparatively. I think that one of the reasons that there’s trepidation and nervousness around artificial intelligence is it doesn’t seem like it will take that long. It’s almost fantastical science fiction to me that I get to see different vendors’ self-driving cars mapping San Francisco on a regular basis, and I see people driving around with no hands on the wheel. I mean, that’s extraordinary, I don’t think even five years ago I would believe that we would have self-driving cars on public roads, it didn’t seem like a thing, and now it seems like automated driving machines are not very far away.
If you think about the societal impacts of that, well, according to an NPR study in 2014, I think, truck driving is the number one job in twenty-nine states in America. There are literally millions of driving jobs, and I think it’s one of the fastest growing categories of jobs. Things like that will all disappear, or to a certain extent will disappear, and it will happen rapidly.
It’s really hard for me to subscribe to the… Yes, we’re improving customer service software here at Zendesk in such a way that we’re making agents more efficient and they’re getting to spend more time with customers and they’re upping the CSAT rating, and consequently those businesses have better Net Promoter Scores and they’re thriving. I believe that that’s what we’re doing and I believe that that’s what’s going to happen. If we can automatically answer ten percent of a customer’s tickets, that means you need ten percent fewer agents to answer those tickets, unless they’re going to invest more in customer service. The profit motive says that there needs to be a return-on-investment analysis between those two things. So, in my own industry I see this, and across society it’s hard not to believe that there will be a fairly large-scale disruption.
I don’t know that, as a society, we’re necessarily in a position to absorb that disruption yet. I know in Finland, they’re experimenting with a guaranteed minimum income to take away the stress of having to find work or qualify for unemployment benefits and all these things, so that people have a better quality of life and can hopefully find ways to be productive in society. Not many countries are as progressive as Finland. I would put myself in the “very nervous about the societal effects of large-scale removal of sources of employment” camp, because it’s not clear what the alternative structures are that are set up in society to find meaningful work and sustenance for people who are losing those jobs. We’ve been on a trajectory since, I think, the 1970s, of polarization in society and growing inequality. And I worry that the large-scale creation of an unemployed mass could be a tipping point. I take a very pessimistic view.
Let me give you a different narrative on that, and tell me what’s wrong with it, how the logic falls down on it. Let’s talk just about truck drivers. That would go like this, it would say, “That concern that you’re going to have, en masse, all these unemployed truck drivers is beyond ill-founded. To begin with, the technology’s not done, and it will still need to be worked out. Then the legislative hurdles will have to be worked out, and that’ll be done gradually state by state by state. Then, there’ll be a long period of time when law will require that there be a driver, and self-driving technology would kick in when it feels like the driver’s making a mistake, but there’ll be an override; just like we can fly airplanes without pilots now but we insist on having a pilot.
Then, the driving part of the job is actually not the whole job, and so like any other job, when you automate part of it, like the driving, that person takes on more things. Then, on top of that, the equipment’s not retrofitted for it, so you’re going to have to figure out how to retrofit all this stuff. Then, on top of that, having self-driving cars is going to open up all kinds of new employment, and because we talk about this all the time, there are probably fewer people going into truck driving, and there are people who retire from it every year. And that, just like every other thing, it’s just going to gradually work out as the economy reallocates resources. Why do you think truck driving is like this big tipping point thing?
I think driving jobs in general are a tipping point thing because, yes, there are challenges to rolling it out, and obviously there’s legislative challenges, but it’s not hard to see interstate trucking going first, and then drivers meeting those trucks and driving through urban areas, and various things like that happening. I think people are working on retrofit devices for trucks. What will happen is truck drivers who are not actually driving will be allowed to work more hours, so you’ll need fewer truck drivers. In general, as a society, we’re shifting from going and getting our stuff to having our stuff delivered to us. And so, the voracious appetite for more drivers, in my opinion, is not going to abate. Yeah, the last mile isn’t driven by trucks, it’s smaller delivery drivers or things that can be done by smarter robots, etcetera.
I think those challenges you communicated are going to be moderating forces of the disruption, but when something reaches the tipping point of acceptance and cost acceptability, change tends to be rapid if driven by the profit motive. I think that is what we’re going to see. The efficiency of Amazon, and the fact that every product is online in that marketplace is driving a tremendous change in the nature of retail. I think the delivery logistics of that need are going to go through a similar turnaround, and companies driving that are going to be very aggressive about it because the economics is so appealing.
Of course, again, the general answer to that is that when technology does lower the price of something dramatically—like you’re talking about the cost of delivery, self-driving cars would lower it—that that in turn increases demand. That lowering of cost means all of a sudden you can afford to deliver all kinds of things, and that ripple effect in turn creates those jobs. Like, people spend all their money, more or less, and if something becomes cheaper they turn around and spend that money on something else which, by definition, therefore creates downstream employment. I’m just having a hard time seeing this idea that somehow costs are going to fall and that money won’t be redeployed in other places that in turn creates employment, which is kind of two hundred and fifty years of history.
I wouldn’t necessarily say that as costs fall in industries all of those profits are generally returned to the consumer, right? Businesses in the logistics retail space, generally, retailers run at a two percent margin, right, and businesses in logistics run with low margins. So, there’s room for those people to kind of optimize their own businesses, and not necessarily pass down all those benefits to the consumer. Obviously, there’s room for disruption where someone will come in, shave back down the margins and pass on those benefits. But, in general, you know, online banking is more efficient because we prefer it, and so there are fewer people working in banking. Conversely, when banks shifted to ATMs banking became much more of a part of our lives, and more convenient, so we ended up with more bank tellers because personal service was a thing.
I think that there just are a lot of driving jobs out there that don’t necessarily need to be done by humans, but we’ll still be spending the same amount on getting driven around, so there’ll be more self-driving cars. Self-driving cars crash less, hopefully, and so there’s less need for auto repair shops. There’s a bunch of knock-on effects of using that technology, and for certain classes of jobs there’s clearly going to be a shift where those jobs disappear. There is a question of how readily the people doing those jobs are able to transfer their skills to other employment, and is there other employment out there for them.
Fair enough. Let’s talk about Zendesk for a moment. You’ve alluded to a couple of ways that you employ artificial intelligence, but can you just kind of give me an idea of, like, what gets you excited in the morning, when you wake up and you think, “I have this great new technology, artificial intelligence, that can do all these wondrous things, I want to use it to make people’s lives better who are in charge of customer relationships”? Entice me with some things that you’re thinking of doing, that you’re working on, that you’ve learned, and just kind of tell me about your day-to-day?
So many customer service inquiries begin with someone who has a thirst for knowledge, right? Seventy-six percent of people try to self-serve when trying to find the answer to a question, and many people who do get on the phone are online at the same time, trying to discover the answer to that problem. I think often there’s a challenge in terms of having enough context to know what someone is looking for, and having that context available to all of the systems that they’re interacting with. I think technology, and artificial intelligence in particular, can help us pinpoint the intention of users, because the goal of the software that we provide, and the customer service ethos that we have, is that we need to remove friction.
The thing that really generates bad experiences in customer service interactions isn’t that someone said no, or we didn’t get the outcome that we want, or we didn’t get our return processed or something like that, it’s that negative experiences tend to be generated from an excess of friction. It’s that I had to switch from one channel to another, it’s that I had to repeat myself over and over again because everyone I was talking to didn’t have context on my account or my experience as the customer and these things. I think that if you look at that sort of pile of problems, you see real opportunities to give people better experiences just by holding a lot more data at one time about that context, and then being able to process that data and make intelligent predictions and guesses and estimations about what it is they’re looking for and what is going to help them.
We recently launched a service we call “answer bot,” which uses deep learning to look at the data we have when an email comes in and figure out, quite simply, which knowledge base article is going to best serve that customer. It’s not driving a car down to the supermarket, this sounds very simple, but in another way these are millions and millions of experiences that can be optimized over time. Similarly, the people on the other side of that conversation generally don’t know what it is that customers are searching for or asking for, for which there is no answer. And so by applying that same analysis to the incoming queries that we have and to our knowledge bases, we can give them cues as to what content to write, and, sort of, direct them to build a better experience and improve their customer experience in that way.
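As a rough illustration of the matching step, here is a small sketch in Python using scikit-learn. The conversation describes answer bot as using deep learning; this toy example substitutes plain TF-IDF cosine similarity just to show the shape of the problem, and the article titles and email text are hypothetical.

```python
# A minimal sketch of suggesting a knowledge base article for an incoming email.
# Uses TF-IDF cosine similarity for illustration; articles and email are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base articles (title -> body).
articles = {
    "Resetting your password": "Use the forgot-password link on the sign-in page to reset your password.",
    "Understanding your invoice": "Invoices are issued monthly and can be downloaded from the billing page.",
    "Setting up two-factor authentication": "Enable 2FA from the security settings to protect your account.",
}

incoming_email = "Hi, I can't log in because I forgot my password. Can you help?"

# Vectorize the articles and the email together, then rank articles by similarity
# to the email text.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(articles.values()) + [incoming_email])
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()

# Pick the highest-scoring article as the suggested answer.
best_score, best_title = max(zip(scores, articles.keys()))
print(f"Suggested article: {best_title} (score {best_score:.2f})")
```

Whatever model sits underneath, the design question is the same: score every knowledge base article against the incoming message, surface the best candidates, and hand off to a human agent when nothing scores well.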
I think from an enterprise software builder’s point of view, artificial intelligence is a tool that you can use at so many points of interaction between brand and consumer, between the two parties basically on either side of any transaction inside of your knowledge base. It’s something that you can use to shave off little moments of pain, and remove friction, and apply intelligence, and just make the world seem frictionless and a little smarter. Our goal internally is basically to meander through our product in a directed way, finding those experiences and making them better. At the end of the day we want someone who’s deploying our stuff and giving a customer experience with it, and we want the consumers experiencing that brand, the people interacting with that brand, to be like, “I’m not sure why that was good, but I did really enjoy that customer service experience. I got what I wanted, it was quick. I don’t know how they quite did that, but I really enjoyed it.” We all have had those moments in service where someone just totally got what you were after and it was delightful because it was just smooth and efficient, good, and no drama—prescient almost.
I think what we are trying to do, what we would like to do, is adapt all of our software and the experiences that we have to be that anticipatory and smart and enjoyable. I think the enterprise software world—for all types of software like CRM, ERP, all these kinds of things—is filled with sharp edges, friction, and pain, you know, pieces of acquisitions glued together, and you’re using products that represent someone’s broken dreams acquired by someone else and shoehorned into other experiences. I think, generally, the consumer of enterprise software at this point is a little bit tired of the pain of form-filling and repetition and other things. Our approach to smoothing those edges, to grinding the stone and polishing the mirror, is to slowly but surely improve each of those experiences with intelligence.
It sounds like you have a broad charter to look at kind of all levels of the customer interaction and look for opportunity. I’m going to ask you a question that probably doesn’t have an answer but I’m going to try anyway: “Do you prefer to find places where there was an epic fail, where it was so bad it was just terrible and the person was angry and it was just awful, or would you rather fix ten instances of a minor annoyance, where somebody had entered data too many times?” I mean, are you working to cut the edges off the bad experiences, or just generally make the system phase shift up a little bit?
I think, to a certain extent, I like to think of that as a false dichotomy, because for the person who has a terrible experience and gets angry, chances are there wasn’t a momentary snap; there was a drip feed of annoyances that took them to that point. So, our goal, when we think about it, is to pick out the most impactful rough edges that cumulatively are going to engulf someone in the red mist of homicidal fury on the end of the phone, complaining about their broken widget. I think most people do not flip their anger bit over a tiny infraction, or even over a larger infraction; it’s over a period, it’s a lifetime of infractions, it’s a lifetime of inconveniences that gets you to that point, or the lifetime of that incident and that inquiry and how you got there. We’re generally, sort of, emotionally-rational beings who’ve been through many customer service experiences, so exhibiting that level of frustration, generally, requires a continued and sustained effort on the part of a brand to get us there.
I assume that you have good data to work off of. I mean, there are good metrics in your field and so you get to wade through a lot of data and say, “Wow, here’s a pattern of annoyances that we can fix.” Is that the case?
Yeah, we have an anonymized data set that encompasses billions of interactions. And the beauty of that data set is they’re rated, right? They’re rated either by the time it took to solve the problem, or they’re rated by an explicit rating, where someone said that was a good interaction, that was a bad interaction. When we did the CSAT prediction we were really leveraging the millions of scores that we have that tell us how customer service interactions went. In general, though, that’s the data asset that we have available to us, that we can use to train on, learn from, query, and analyze.
Last question, you quoted Arthur C. Clarke, so I have to ask you, is there any science fiction about AI that you enjoy or like or think that could happen? Like Her or Westworld or I, Robot or any of that, even books or whatnot?
I did find Westworld to be, probably, the most compelling thing I watched this year, and just truly delightful in its thinking about memory and everything else, although it was, obviously, pure fiction. I think Her was also, you know, a disturbing look at the way that we will be able to identify with inanimate machines and build relationships; it was all too believable. I think you quoted two of my favorite things, but Westworld was so awesome.
It, interestingly, had a different theory of consciousness from the bicameral mind, not to give anything away.
Well, let’s stop there. This was a magnificently interesting hour, I think we touched on so many fascinating topics, and I appreciate you taking the time!
Adrian McDermott: Thank you, Byron, it’s wonderful to chat too!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

How Artificial Intelligence Will Personalize How We Work

Artificial intelligence in the workplace is here to stay. However, as enterprise technologies continue to develop and evolve, we must understand how AI will affect our roles and responsibilities at work.
The unknowns about the impact of AI have led to the fear that this emerging technology could be a substitute for – or entirely eradicate – existing jobs. Depending on which stats you refer to, AI will replace over 40% of jobs by 2030, or 165 million Americans could be out of work before 2025.
Yet it is not all doom and gloom. Given the rate of new systems, processes, and data that we’re exposed to each day, AI can deliver tangible benefits by learning our skills, habits, and behaviors, upending how we use technology. When companies are spending over $3.5 trillion on IT and use an average of 831 cloud services, it’s no surprise that we forget 70% of what we learn in a day unless we immediately apply that knowledge in our workflows.
There are four tectonic shifts happening within businesses that are propelling the need for greater personalization and efficiency in how we use technology:
● Employee expectations and behaviors have shifted. Unlike their predecessors, Millennials and Gen Z employees are accustomed to digital technologies. While they’re resourceful and can easily access information, they aren’t necessarily able to retain it. Generally speaking, they expect consumer-level technologies, are highly distracted and change positions often – and thus expect technology to be quick, efficient and intuitive.
● Organizations are undergoing a sweeping digital transformation. One of the biggest buzzwords of 2017, “digital transformation” has been sweeping across all businesses as they look to modernize their activities, processes and models to become completely digitized.
● Decisions are fragmented between departments. As companies move to more digitalized systems, the decision to implement new technologies is increasingly driven by line-of-business heads. From HR systems and customer relationship management (CRM) tools to ERP solutions, procurement decisions are based on departmental needs, rather than the traditional approach of being mandated by the CIO or at the organizational level.
● Cloud technologies are creating a training challenge. Cloud-based technologies mean that systems undergo regular improvements and updates, creating a situation where employees must constantly adjust to changes that they need to learn and adopt quickly.
Based on these changes, AI is a critical component for tomorrow’s organizations. Coupled with deep analytics, AI can greatly affect individual user behavior, identifying barriers to technology adoption and contextually guiding users on how to use any new solution. In doing so, employees can ultimately become instant pros in using a system – even if they haven’t used the technology before.
This contextual, personalized, and just-in-time approach allows us to abandon traditional training and development methods, which can quickly become outdated as we continue to encounter new systems and interfaces. It doesn’t make sense to set up classroom-style training to familiarize your team with a new HR system, for example, when incremental product updates occur so frequently. When employees are stuck using a system, they’re more apt to ask a colleague for help, search online for the answer, or, worst of all, give up on using the system. All are ineffective uses of our time.
Rather than leaving us daunted by the onslaught of new systems we encounter, technology should learn about the user to improve their workflows. Creating systems that learn and automate tedious processes will be a major battleground for technology vendors in the next few years. It won’t be long before we can rely on AI to do all the “learning” for us – leading to a workplace where we train the software to adapt to our needs, rather than forcing us to adapt to the software.
Rephael Sweary is the cofounder and president of WalkMe, which pioneered the digital adoption platform. Previously, Rephael was the cofounder, CEO and then President of Jetro Platforms, which was acquired in 2007. Since then, he has funded and helped build a number of companies, both in his role as Entrepreneur-in-Residence at Ocean Assets and in a personal capacity.