Voices in AI – Episode 71: A Conversation with Paul Daugherty

About this Episode

Episode 71 of Voices in AI features host Byron Reese and Paul Daugherty discussing transfer learning, consciousness, and Paul’s book “Human + Machine: Reimagining Work in the Age of AI.” Paul Daugherty holds a degree in computer engineering from the University of Michigan and is currently the Chief Technology and Innovation Officer at Accenture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show Paul.

Paul Daugherty: It’s great to be here, Byron.

Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college, and that was a quarter of a century or more ago. Having seen the company grow over that time, what has that journey been like?

Thanks for dating me. Yeah, it’s actually been 32 years, so I guess I’m going on a third of a century. I joined Accenture back in 1986, and the company’s evolved in many ways since then. It’s been an amazing journey because the world has changed so much since then, and a lot of what’s fueled the change in the world around us has been what’s happened with technology. I think [in] 1986 the PC was brand new, and we went from that to networking and client server and the Internet, cloud computing, mobility, the Internet of Things, artificial intelligence and the things we’re working on today. So it’s been a really amazing journey fueled by the way the world’s changed, enabled by all this amazing technology.

So let’s talk about that, specifically artificial intelligence. I always like to get our bearings by asking you to define either artificial intelligence or if you’re really feeling bold, define intelligence.

I’ll start with artificial intelligence, which we define as technology that can sense, think, act and learn, and systems that can then do that. Sense: like vision in a self-driving car. Think: making decisions on what the car does next. Act: actually steering the car. And learn: continuously improving behavior. So that’s the working definition that we use for artificial intelligence, and I describe it more simply to people sometimes as fundamentally technology that has more human-like capability to approximate the things that we’re used to assuming and thinking that only humans can do: speech, vision, predictive capability and some things like that.

So that’s the way I define artificial intelligence. Intelligence I would define differently, and more broadly. I’m not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to both reason and comprehend, and then extrapolate and generalize across many different domains of knowledge. And that’s what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into. Because I think that with this body of work we call artificial intelligence, both the word ‘artificial’ and the word ‘intelligence’ lead to misleading perceptions of what we’re really doing.

So, expand that a little bit. You said that’s the way you think human intelligence is different than artificial: put a little flesh on those bones. In exactly what way do you think it is?

Well, you know, the techniques we’re really using today for artificial intelligence are generally from the branch of AI around machine learning: so machine learning, deep learning, neural nets, etc. And it’s a technology that’s very good at using patterns, and recognizing patterns in data, to learn from observed behavior, so to speak. Not necessarily intelligence in a broad sense; it’s the ability to learn from specific inputs. And you can think about that almost as idiot savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world’s Go master, but then that same program wouldn’t know how to generalize and play me in tic-tac-toe. And that ability, the intelligence ability to generalize, to extrapolate rather than interpolate, is what human intelligence is differentiated by. The thing that would bridge that would be artificial general intelligence, which we can get into a little bit, but we’re not at that point of having artificial general intelligence. We’re at a point of artificial intelligence, where it can mimic very specific, very specialised, very narrow human capabilities, but it’s not yet anywhere close to human-level intelligence.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

GitHub Actions: The Best Practice Game Changer

“GitHub? That’s a code repository, right?” said a friend, when I mentioned I was in San Francisco. GitHub Universe, the company’s annual conference, is small but perfectly formed — 1,500 delegates fill a hall but don’t overwhelm it. And yes, developers, engineers and managers are here because they are pulling files from, and pushing to, one of the largest stores of programming code on the planet.

GitHub representatives would nonetheless likely dispute the “just a code repo” label. I imagine they would point to the collaboration mechanisms and team management features on the one hand, and the 30-plus million developers on the other. “It’s an ecosystem,” they might say. I haven’t asked, because the past two days’ announcements may have made the question somewhat moot. Or one announcement in particular: GitHub Actions.

In a nutshell, GitHub Actions let you do something based on a triggering event: they can be strung together to run (say) a set of tests when code is committed to the repository, or to deploy to a target environment. The “doing something” bit runs in a container on GitHub’s servers; and a special command (whose name escapes me…wait: RepositoryDispatch) means external events can trigger actions.
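
To make that concrete, here is a minimal sketch of how an external system might fire such a dispatch event via GitHub’s REST API (the POST /repos/{owner}/{repo}/dispatches endpoint). The token, repository and event names are purely illustrative, and the exact shape of the Beta may differ:

    # Minimal sketch: an external system fires a repository dispatch event,
    # which a GitHub Actions workflow can use as its trigger. Assumes the
    # public REST endpoint POST /repos/{owner}/{repo}/dispatches and a
    # token with repo scope; owner, repo and event names are illustrative.
    import requests

    GITHUB_TOKEN = "ghp_example"              # hypothetical personal access token
    OWNER, REPO = "example-org", "example-repo"

    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/dispatches",
        headers={
            "Authorization": f"token {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "event_type": "deploy-requested",               # arbitrary label
            "client_payload": {"environment": "staging"},   # data for the workflow
        },
    )
    resp.raise_for_status()                   # GitHub returns 204 No Content on success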

That’s kind of it, so what makes GitHub Actions so special? Or, to put it another way, what is causing the sense of unadulterated glee across both the execs I have spoken to and those presenting from the main stage? “I can feel the hairs on the back of my neck as I talk about this,” I was told, not in some faux ’super-excited’ way but with genuine delight.

The answer lies in several converging factors. First, as tools mature, they frequently add rules-based capabilities — we saw it with enterprise management software two decades ago, and indeed ERP and CRM before that. Done right, event-driven automation is always a feature to be welcomed, increasing efficiency and productivity, enforcing policy, governance and all.

Second: what happens when you switch on such a feature for a user base as large, and as savvy, as the GitHub community? Automation is a common element of application lifecycle management tooling, and multiple vendors exist to deliver on this goal. But few if any have the ability to tell millions of open source developers, “let’s see what you got.”

Which brings us to a third point: right now, we are in one of those fan-out technology waves. In my report on DevOps, I name-checked 110 vendors; I left out many more. Choosing a best-of-breed set of tools for a pipeline, or indeed deciding the pipeline, involves a complex, uncertain and fraught set of decisions. And many enterprises will have built their own customisations on top.

As I wrote in the report’s introduction, “In the future, it is likely that a common set of practices and standards will emerge from DevOps; that the market landscape for tools will consolidate and simplify; and that infrastructure platforms will become increasingly automated.” The market desperately needs standardisation and simplification: every day, organisations reinvent and automate practices, which, frankly, is not a good use of their time.

For there to be a better way requires a forum — an ecosystem, if you will — within which practices can be created, shared and enhanced. While there may be a thousand ways to deploy a Ruby application, most organisations could probably make do with one or two, based on constraints which will be similar for their industry peers. With a clear day, a following wind and the right level of support, GitHub Actions could provide the platform for this activity.

Will this put other continuous automation and orchestration vendors out of business? Unlikely, as there’s always more to be done (and no organisation is going to switch off existing automations overnight). However, it could create a common language for others to adopt, catalysing standardisation still further; it also creates opportunities for broader tooling, for example helping select a workflow based on specific needs, or bringing in plugins for common actions.

It’s also notable that GitHub Actions is only being released as a Beta at this point (you can sign up here). Questions remain over how to authorise and authenticate access, what criteria GitHub will set over “acceptable” Action workloads, and indeed, how Actions will work within a GitHub enterprise installation. Cliché it may be, but the capability creates as many questions as it does answers — which is perhaps just as well.

Above all perhaps, the opportunity for GitHub Actions is defined by its lack of definition. Methodologists could set out workflows based on what they thought might be appropriate; but the bigger opportunity is to let the ecosystem decide what is going to be most useful, by creating Actions and seeing which are adopted. And yes, these will go way beyond the traditional dev-to-ops lifecycle.

One thing is for sure: the capability very much changes the raison d’être of its founding organisation. “Just a code repository” GitHub may have been, in the eyes of some; but a collaborative hub for best practice is what the organisation will undoubtedly become, with the adoption of GitHub Actions. No wonder the sense of suppressed glee.

This Much I Know: Byron Reese on Conscious Computers and the Future of Humanity

Recently Byron Reese sat down for a chat with Seedcamp’s Carlos Espinal on their podcast ‘This Much I Know.’ It’s an illuminating 80-minute conversation about the future of technology, the future of humanity, Star Trek, and much, much more.
You can listen to the podcast at Seedcamp or Soundcloud, or read the full transcript here.


Carlos Espinal: Hi everyone, welcome to ‘This Much I Know,’ the Seedcamp podcast with me, your host Carlos Espinal bringing you the inside story from founders, investors, and leading tech voices. Tune in to hear from the people who built businesses and products scaled globally, failed fantastically, and learned massively. Welcome everyone! On today’s podcast we have Byron Reese, the author of a new book called The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity. Not only is Byron an author, he’s also the CEO of publisher GigaOm, and he’s also been a founder of several high-tech companies, but I won’t steal his thunder by saying every great thing he’s done. I want to hear from the man himself. So welcome, Byron.
Byron Reese: Thank you so much for having me. I’m so glad to be here.
Excellent. Well, I think I mentioned this before: one of the key things that we like to do in this podcast is get to the origins of the person; in this case, the origins of the author. Where did you start your career and what did you study in college?
I grew up on a farm in east Texas, a small farm. And when I left high school I went to Rice University, which is in Houston. And I studied Economics and Business, a pretty standard general thing to study. When I graduated, I realized that it seemed to me that like every generation had… something that was ‘it’ at that time, the Zeitgeist of that time, and I knew I wanted to get into technology. I’d always been a tinkerer, I built my first computer, blah, blah, blah, all of that normal kind of nerdy stuff that I did.
But I knew I wanted to get into technology. So, I ended up moving out to the Bay Area, and that was in the early 90s, and I worked for a technology company and that one was successful, and we sold it and it was good. And I worked for another technology company, got an idea and spun out a company and raised the financing for that. And we sold that company. And then I started another one, and after 7 hard years we sold that one to a company, and it went public and so forth. So, from my mother’s perspective, I can’t seem to hold a job; but from another view, it’s kind of like the thing of our time. We’re in this industry that changes so rapidly that more opportunities always come along, and I find that whole feeling intoxicating.
That’s great. That’s a very illustrious career with that many companies having been built and sold. And now you’re running GigaOm.  Do you want to share a little bit for people who may not be as familiar with GigaOm and what it is and what you do?
Certainly. And I hasten to add that I’ve been fortunate that I’ve never had a failure in any of my companies, but they’ve all had hard times. They’ve always had these periods of, ‘Boy, I don’t know how we’re going to pull this through,’ and they always end up [okay]. I think tenacity is a great trait in the startup world, because they’re all very hard. And I don’t feel like I’ve figured it all out or anything. Every one is a struggle.
GigaOm is a technology research company. So, if you’re familiar with companies like Forrester or Gartner or those kinds of companies, what we are is a company that tries to help enterprises, help businesses, deal with all of the rapidly changing technology that happens. You can imagine, if you’re the CIO of a large company, there are so many technologies and it all moves so quickly; how does anybody keep up with all of that? And so, what we have are a bunch of analysts who are each subject matter experts in some area, and we produce reports that try to orient somebody in this world we’re in, and say ‘These kinds of solutions work here, and these work there’ and so forth.
And that’s GigaOm’s mission. It’s a big, big challenge, because you can never rest. Almost every day I find big new companies that I’ve never even heard of, and I think, ‘How did I miss this?’ and you have to dive into that. So it’s a relentless, nonstop effort to stay current on these technologies.
On that note, one of the things that describes you on your LinkedIn page is the word ‘futurist.’ Do you want to walk us through what that means in the context of a label and how does the futurist really look at industries and how they change?
Well, it’s a lower case ‘f’ futurist, so anybody who seriously thinks about how the future might unfold, is to one degree or another, a futurist. I think what makes it into a discipline is to try to understand how change itself happens, how does technology drive changes and to do that, you almost by definition, have to be a historian as well. And so, I think to be a futurist is to be deliberate and reflective on how it is that we came from where we were, in savagery and low tech and all of that, to this world we are in today and can you in fact look forward.
The interesting thing about the future is that it always progresses very neatly and linearly until it doesn’t, until something comes along so profound that it changes it. And that’s why you hear all of these things, like the 19th-century prediction that, by some year in the future, London would be unnavigable because of all the horse manure from the number of horses that would be needed to support the population. And that maybe would have happened, except you had the car, and like that. So, everything’s a straight line, until one day it isn’t. And I think the challenge of the futurist is to figure out ‘When does it [move in] a line and when is it a hockey stick?’
So, on that definition of line versus hockey stick, your background as having been CEO of various companies, a couple of which were media centric, what is it that drew you to artificial intelligence specifically to futurize on?
Well, that is a fantastic question. Artificial intelligence is first of all, a technology that people widely differ on its impact, and that’s usually like a marker that something may be going on there. There are people who think it’s just oversold hype. It’s just data mining, big data renamed. It’s just the tool for raising money better. Then there are people who say this is going to be the end of humanity, as we know it. And philosophically the idea that a machine can think, maybe, is a fantastically interesting one, because we know that when you can teach a machine to do something, you can usually double and double and double and double and double its ability to do that over time. And if you could ever get it to reason, and then it could double and double and double and double, well that could potentially be very interesting.
Humans evolve at the speed of life; it takes generations. Computers are able to evolve kind of at the speed of light; they just get better. And so, if a machine can think, a question famously posed by Alan Turing, then that could potentially be a game changer. Likewise, I have a similar fascination for robots, because a robot is a machine that can act, that can move and can interact physically in the world. And I got to thinking: what is a human in a world where machines can think better and act better? What are we? What is uniquely human at that point?
And so, when you start asking those kinds of questions about a technology, that gets very interesting. You can take something like air conditioning and you can say, wow, air conditioning. Think of the impact that had. It meant that in the evenings people wouldn’t… in warm areas, people don’t go out on their front porch anymore. They close the house up and air condition it, and therefore they have less interaction with their neighbors. And you can take some technology as simple as that and say that had all these ripples throughout the world.
The discovery of the New World effectively ended the Italian Renaissance, because it changed the focus of Europe in a whole different direction. So, when those sorts of things had those kinds of ripples through history, you can only imagine: what if the machine could think? That’s a big deal. Twenty-five years ago, we made the first browser, the Mosaic browser, and if you had an enormous amount of foresight and somebody said to you, ‘In 25 years, 2 billion people are going to be using this,’ what do you think’s going to happen?
If you had an enormous amount of foresight, you might’ve said, well, the Yellow Pages are going to have it rough and the newspapers are, and travel agents are, and stock brokers are going to have a hard time, and you would have been right about everything, but nobody would have guessed there would be Google, or eBay, or Etsy, or Airbnb, or Amazon, or $25 trillion worth of a million new companies.  And all that was, was computers being able to talk to each other. Imagine if they could think. That is a big question.
You’re right, and I think that there is… I was joking and I said ‘Tinder’ in the background, just because that’s a social transformation: not even a utility, but rather a change in the social expectation of where certain things happen. So, you’re right… and we’re going to get into some of those [new platforms] as we review your book. In order to do that, let’s go through the table of contents. So, for those of you that don’t have the book yet, because hopefully you will after this chat, the book is broken up into five parts, and in some ways these parts are arguably chronological in their stage of development.
The first one I would label as the historical, and it’s broken out into the four ages that we’ve had as humans: the first age being language and fire, the second one being agriculture and cities, the third one being writing and wheels, and the fourth one being the one that we’re currently in, which is robots and AI. And we’re left with three questions, which are: what is the composition of the universe, what are we, and what is the self? And those are big, deep philosophical ones that will manifest themselves in the book a little bit later as we get into consciousness.
Part two of the book is about narrow AI and robots. Arguably I would say this is where we are today, and Seedcamp as an investor in AI companies has broadly invested in narrow AI through different companies. And this is, I think, the cutting edge of AI as far as we understand it. Part three of the book covers artificial general intelligence, which is everything we’ve always wanted to see and which science fiction represents quite well, everything from that movie ‘AI,’ with the little robot boy, to ‘Bicentennial Man’ with Robin Williams, and the ethical implications of that.
Then part four of the book is computer consciousness, which is a huge debate, because as Byron articulates in the book, there’s a whole debate on what consciousness is, and there’s a distinction between a monist and a dualist and how they experience consciousness and how they define it. And hopefully Byron will walk us through that in more detail. And lastly, ‘the road from here’ is the future as far as we can see it. Parts three, four and five are all futurist portions of the book, but this last one is where I think, Byron, you go to the ‘nth’ degree possible, with a few exceptions. So maybe we can kick off with your commentary on why you have broken up the book into these five parts.
Well you’re right that they’re chronological, and you may have noticed each one opens with what you could call a parable, and the parables themselves are chronological as well. The first one is about Prometheus and it’s about technology, and about how the technology changed and all the rest. And like you said, that’s where you want to kind of lay the groundwork of the last 100,000 years and that’s why it’s named something like ‘the road to here,’ it’s like how we got to where we are today.
And then I think there are three big questions, and everywhere I go I hear one variant of them or another. The first one is around narrow AI, and like you said, it’s a real technology that’s going to impact us: what’s it going to do to jobs, what’s it going to do in warfare, what will it do to income? All of these things we are certainly going to deal with. And then we’re unfortunate with the term ‘artificial intelligence,’ because it can mean many different things. It can be narrow AI, a Nest thermostat that can adjust the temperature, but it can also be Commander Data of Star Trek. It can be C-3PO out of Star Wars. It can be something as versatile as a human, and unfortunately those two things share the same name, but they’re different technologies, so it has to kind of be drawn out on its own, to say, “Is this very different thing that shares the same name likely? Possible? What are its implications?” and whatnot.
Interestingly, the people who believe we’re going to build [an AGI] vary immensely on when: some say as soon as five years, and some say as long away as five hundred. And it’s very telling that these people have such wide viewpoints on when we’ll get it. And then for people who believe we’re going to build one, the question becomes, ‘Well, is it alive? Can it feel pain? Does it experience the world? And therefore, on that basis, does it have rights?’ And if it does, does that mean you can no longer order it to plunge your toilet when it gets stopped up, because all you’ve made is a sentient being that you can control? And is that possible?
And why is it that we don’t even know this? The only real thing any of us knows is our own consciousness, and we don’t even know where that comes about. And then finally, the book starts 100,000 years ago, and I wanted to look 100,000 years out, or something like that. I wanted to start thinking about: no matter how these other issues shake out, what is the long trajectory of the human race? How did we get here, and what does that tell us about where we’re going? Is human history a story of things getting better or things getting worse, and how do they get better or worse, and all of the rest. So that was the structure that I made for the book before I wrote a single word.
Yeah, and it makes sense. Maybe for the sake of not stealing the thunder of those that want to read it, we’ll skip a few of those, but before we go straight into questions about the book itself, maybe you can explain who you want this book to be read by. Who is the customer?
There are two customers for the book. The first is people who are in the orbit of technology one way or the other, like it’s their job, or their day to day, and these questions are things they deal with and think about constantly. The value of the book, the value prop of the book is that it never actually tells you what I think on any of these issues. Now, let me clarify that ever so slightly because the book isn’t just another guy with another opinion telling you what I think is going to happen. That isn’t what I was writing it for at all.
What I was really intrigued by is how people have so many different views on what’s going to happen. Like with the jobs question, which I’m sure we’ll come to: are we going to have universal unemployment, or are we going to have too few humans? These are very different outcomes, all predicted by very technically minded, informed people. So, what I’ve written, or tried to write, is a guidebook that says: I will help you get to the bottom of all the assumptions underlying these opinions, and do so in a way that you can take your own values, your own beliefs, and project them onto these issues and have a lot of clarity. So, it’s a book about how to get organized and understand why the debate exists about these things.
And then the second group are people who, they just see headlines every now and then where Elon Musk says, “Hey, I hope we’re not just the boot loaders for the AI, but it seems to be the case,” or “There’s very little chance we’re going to survive this.” And Stephen Hawking would say, “This may be the last invention we’re permitted to make.” Bill Gates says he’s worried about AI as well. And the people who see these headlines, they’re bound to think, “Wow, if Bill Gates and Elon Musk and Stephen Hawking are worried about this, then I guess I should be worried as well.” Just on the basis of that, there’s a lot of fear and angst about these technologies.
The book actually isn’t about technology. It’s about what you believe, and what that means for your beliefs about technology. And so, I think after reading the book, you may still be afraid of AI, you may not, but you will be able to say, ‘I know why Elon Musk, or whoever, thinks what they think. It isn’t that they know something I don’t know; they don’t have some special knowledge I don’t have. It’s that they believe something. They believe something very specific about what people are, what the brain is. They have a certain view of the world as completely mechanistic, and all these other things.’ You may agree with them, you may not, but I tried to get at all of the assumptions that live underneath those headlines you see. And so why would Stephen Hawking say that, why would he? Well, there are certain assumptions that you would have to believe to come to that same conclusion.
Do you believe that’s the main reason that very intelligent people disagree with respect to how optimistic they are about what artificial intelligence will do? You mentioned Elon Musk, who is pretty pessimistic about what AI might do, whereas there are others, like Mark Zuckerberg of Facebook, who are pretty optimistic, comparatively speaking. Do you think it’s this different account of what we are that explains the difference?
Absolutely. The basic rules that govern the universe and what our self is, what is that voice you hear in your head?
The three big questions.
Exactly. I think the answers to all these questions boil down to those three questions, which, as I pointed out, are very old questions. They go back as far as we have writing, and presumably therefore they go back before that, way beyond that.
So we’ll try to answer some of those questions and maybe I can prod you. I know that you’ve mentioned in the past that you’re not necessarily expressing your specific views, you’re just laying out the groundwork for people to have a debate, but maybe we can tease some of your opinions.
I make no effort to hide them. I have beliefs about all those questions as well, and I’m happy to share them, but the reason they don’t have a place in the book is: it doesn’t matter whether I think I’m a machine or not. Who cares whether I think I’m a machine? The reader already has an opinion of whether a human being is a machine. The fact that I’m just one more person who says ‘yay’ or ‘nay,’ that doesn’t have any bearing on the book.
True. Although, in all fairness, you are a highly qualified person to give an opinion.
I know, but to your point, if Elon Musk says one thing and Mark Zuckerberg says another, and they’re diametrically opposed, they are both eminently qualified to have an opinion and so these people who are eminently qualified to have opinions have no consensus, and that means something.
That does mean something. So, one thing I would like to comment on about the general spirit of your book is that I generally felt like the book was built from a position of optimism. Even towards the very end of the book, towards the 100,000 years in the future, there was always this underlying tone of: we will be better off because of this entire revolution, no matter how it plays out. And I think that maybe I can tease out of you the fact that you are telegraphing your view on ‘what are we?’ Effectively, are we a benevolent race in a benevolent existence, or are we something that’s more destructive in nature? So, I don’t know if you would agree with that statement about the spirit of the book or whether…
Absolutely. I am unequivocally, undeniably optimistic about the future, for a very simple reason, which is: there was a time in the past, maybe 70,000 years ago, when humans were down to something like maybe a thousand breeding pairs. We were an endangered species, one epidemic or one famine away from total annihilation, and somehow we got past that. And then 10,000 years ago, we got agriculture and we learned to regularly produce food, but it took 90 percent of our people, for 10,000 years, to make our food.
But then we learned a trick, and the trick is technology, because what technology does is multiply what you are able to do. And what we saw is that all of a sudden it didn’t take 90 percent: 80 percent, 70, 60, all the way down, in the West, to 2 percent. And furthermore, we learned all of these other tricks we could do with technology. It’s almost magic: what it does is multiply human ability. And we know of no upper limit to what technology can do, and therefore there is no end to how it can multiply what we can do.
And so, one has to ask the question, “Are we on balance going to use that for good or ill?” And the answer obviously is for good. I know maybe it doesn’t seem obvious if you caught the news this morning, but the simple fact of the matter is by any standard you choose today, life is better than it was in the past, by that same standard anywhere in the world. And so, we have an unending story of 10,000 years of human progress.
And what has marred humanity for the longest time is the concept of scarcity. There was never enough good stuff for everybody: not enough food, not enough medicine, not enough education, not enough leisure. Technology lets us overcome scarcity. And so, I think if you keep that at the core: on balance, there have been more people who wanted to build than destroy. We know that, because we have been building for 10,000 years. On balance, on net, we use technology for good, always, without fail.
I’d be interested to know the limits to your optimism there. Is your optimism probabilistic? Do you assign, say, a 90 percent chance to the idea that technology and AI will be, on balance, good for humans? Or do you think it’s pretty precarious: maybe a 10 or 20 percent chance that, if we fail to institute the right sort of arrangements, it might be bad? How would you describe your optimism in that sense?
I find it hard to find historic cases where a technology came along that magnified what people were able to do and that was bad for us. If in fact artificial intelligence makes everybody effectively smarter, it’s really hard to spin that as a bad thing. If you think that’s a bad thing, then you would have to advocate that it would be great if tomorrow everybody woke up with 10 fewer IQ points. I can’t construct that in my mind.
And what artificial intelligence is, is a collective memory of the planet. We take data from all these people’s life experiences and we learn from that data, and so to somehow say that’s going to end up badly is to say ignorance is better than knowledge. It’s to say that, yeah, now that we have a collective memory of the planet, things are going to get worse. If you believe that, then it would be great if everybody forgot everything they know tomorrow. And so, to me, the antithetical position, that somehow making everybody smarter, remembering our mistakes better, and all of these other things can somehow lead to a bad result… I think is, I shall politely say, unproven in the extreme.
You see, I believe that people are inherently… we have evolved to be, by default, extremely cautious. Somebody said it’s much better to mistake a rock for a bear and run away from it than it is to mistake a bear for a rock and just stand there. So, we are a skittish people, and our skittishness has served us well. But anytime you’re born with some bias, some cognitive bias, and I think we’re born with one of fear, it does one well to be aware of that and to say, “I know I’m born this way. I know that for 10,000 years things have gotten better, but tomorrow they might just be worse.” We come by that fear honestly; it served us well in the past, but that doesn’t mean it’s not wrong now.
All right, well, if we take that and use it as a sort of veneer for the rest of the conversation, let’s move into the narrow AI portion of your book. We can go into the whole range of views on whether robots are going to take all of our jobs, some of our jobs, or none of our jobs, and kind of explore that.
I know that you’ve covered that in other interviews, and one of the things that maybe we should also cover is how we train our AI systems in this narrow era: how we can inadvertently create issues for ourselves by having old datasets that represent social norms that have since changed, and that therefore skew things in the wrong way and create momentum for machines to draw wrong conclusions about us, even where we as humans would be able to filter that out through contextual relevance. Maybe you can just kick off that whole section with commentary on that.
So, that is certainly a real problem. You see, when you take a dataset, and let’s say the data is 100 percent accurate, and you come up with some conclusion about it, it takes on a halo of ‘well, that’s just the facts, that’s just how things are, that’s just the truth.’ And in a sense, it is just the truth, and AI is only going to come to conclusions based on, like you said, the data that it’s trained on. You see, the interesting thing about artificial intelligence is that it has a philosophical assumption behind it, which is that the future is like the past, and for many things that is true. A cat tomorrow looks like a cat today, and so you can take a bunch of cats from yesterday, or a week ago, or a month, or a year, and you can train it and it’s going to be correct. A cell phone tomorrow doesn’t look like a cell phone ten years ago, though, and so if you took a bunch of photos of cell phones from 10 years ago and trained an AI, it’s going to be fabulously wrong. And so, you hit the nail on the head.
The onus is on us to make sure that whatever we are teaching it is a truth that will be true tomorrow, and that is a real concern. There is no machine that can kind of ‘sanity check’ that for you; you tell the machine, “This is the truth, now tell me about tomorrow,” so people have to get very good at that. Luckily there’s a lot of awareness around this issue; people who assemble large datasets are aware that data has a ‘best-by’ date that varies widely. For how to play a game of chess, it’s hundreds of years. That hasn’t changed. If it’s what a cell phone looks like, it’s a year. So the trick is to just be very cognizant of the data you’re using.
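That ‘best-by’ check is easy to make concrete. Below is a minimal, hypothetical sketch (synthetic data, nothing from the book) of training on older examples and scoring on the most recent ones; a large gap between the two scores suggests the ‘future is like the past’ assumption has expired:

    # Sketch: estimate a dataset's "best-by" date by training on older
    # examples and evaluating on the newest ones. Synthetic data with
    # deliberate concept drift; all names and numbers are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))
    t = np.linspace(0, 1, n)                  # pseudo-time of each example
    # The true decision boundary rotates over time: concept drift.
    y = (X[:, 0] * (1 - t) + X[:, 1] * t > 0).astype(int)

    old, new = t < 0.5, t >= 0.5              # "historical" vs. "tomorrow's" data
    model = LogisticRegression().fit(X[old], y[old])

    print("accuracy on old data:", model.score(X[old], y[old]))
    print("accuracy on new data:", model.score(X[new], y[new]))  # expect a drop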
I find the people who are in this industry are very reflective about these kinds of things, and this gives me a lot of encouragement. There have been times in the past where people associated with a new technology had a keen sense that it was something very serious, like the Manhattan project in the United States in World War II, or the computers that were built in the United Kingdom in that same period.  They realized they were doing something of import, and they were very reflective about it, even in that time. And I find that to be the case with people in AI today.
I think that, generally speaking, a lot of the companies that we’ve invested in, in this sector and at this stage of effectively narrow AI, as you said, are going through and thinking through it. But what’s interesting is that I’ve noticed there is a limit to what we can teach as metadata on top of data, for machine learning algorithms to learn and evolve by themselves. So, the age-old argument is that you can’t build an artificial general intelligence; you have to grow it. You have to nurture it. And it’s done over time. And part of the challenge of nurturing or growing something is knowing what pieces of input to give it.
Now, if you use children as the best approximation of what we do, there are a lot of built-in features, including curiosity and a desire to self-preserve, and all these things then enable the acquisition of metadata, which then justifies and rewrites existing data as either valid or invalid, to use your cell phone example. How do you see us being able to tackle that when we’re inherently flawed in our ability to add metadata to existing data? Are we effectively never going to be able to make it to artificial general intelligence, because of our inability to add that additional color to data, so that it isn’t of a very tarnished and limited utility?
Well, yes, it could very easily be the case, and by the way, that’s an extremely minority view among people in AI; I will just say that up front. I’m not representing the majority of people in AI, but I think that could very well be the case. Let me just dive into that a little bit: how do people know what we know? How is it that we are generally intelligent, have general intelligence? If I asked, “Does it hurt your thumb when you hit it with a hammer?” you would say “yes,” and then I would say, “Have you ever done it?” “Yes.” And then I would say, “Well, when?” And you likely can’t remember. So you’re right: we have data that we somehow take learning from, and we store it, and we don’t know how we store it. There’s no place in your brain which is ‘hitting your thumb with a hammer hurts,’ such that if I somehow could cut that out, you would no longer know it. It doesn’t exist. We don’t know how we do that.
Then we do something really clever. We know how to take data we know in one area and apply it to another area.  I could draw a picture of a completely made up alien that is weird beyond imagination. And I could show that picture to you and then I could give you a bunch of photographs and say find that alien in these. And if the alien is upside down or underwater or covered in peanut butter, or half behind a tree or whatever, you’re like, “There it is. There it is. There it is. There it is.” We don’t know how we do that. So, we don’t know how to make computers do it.
And then if you think about it, if I were to ask you to imagine a trout swimming in a river, and imagine the same trout in a jar of formaldehyde and in a laboratory. “Do they weigh the same?” You would say, “yeah.” “Do they smell the same?” “Uh, no.” “Are they the same color?” “Probably not.” “Are they the same temperature?” “Definitely not.” And even though you have no experience with any of that, you instinctively know how to apply it. These are things that people do very naturally, and we don’t know how to make machines do them.
If you were to think of a question to ask a computer like, “Dr. Smith is having lunch at his favorite restaurant when he receives a phone call. Looking worried, he runs out the door, neglecting to pay his bill. Are the owners likely to call the police?”, a human would say no. Clearly, he’s a doctor. It’s his favorite restaurant; he must eat there a lot. He must’ve gotten an emergency call and run out the door forgetting to pay. We’ll just ask him to pay the next night he comes in. The amount of knowledge you had to have just to answer that question is complexity in the extreme.
I can’t even find a chatbot that can answer [the question:] “What’s bigger, a nickel or the sun?” And so, to answer a question that requires this nuance and all of this inference and understanding, I do not believe we know how to build that now. That would be, I believe, a statement within the consensus: I don’t believe we know how to build it. And even if you were to say, “Well, if you had enough data and enough computers, you could figure that out,” it may just literally be impossible to capture every instantiation of every possibility. We don’t know how we do it. It’s a great mystery, and it’s even hotly debated whether, even if we knew how we do it, we could build a machine to do it. I don’t even know that that’s the case.
I think that’s part of the thing that baffles me in your book; I’m jumping around a little bit here. You do talk about consciousness, and you talk about sentience and how we know what we know, who we are, what we are. You talk about the dot test on animals and how they identify themselves as themselves. And with any engineering problem, sometimes you can conceive of a solution before the method by which to get there is known. You can conceive the idea of flying; you just don’t know what combination of things that you are copying from birds, or copying from leaves, or whatever, will function in getting to that goal: flying.
The problem with this one is that, from an engineering point of view, this idea of having another human, or another human-like entity, that not only has consciousness but has free will and sentience as far as we can perceive it, [runs into the fact that] there are a lot of things you described in your chapter on consciousness that we don’t even know how to qualify. And that qualification is a huge catalyst in being able to create the metadata that structures data in a way that then gives the illusion and perception of consciousness. Maybe this is where you give me your personal opinion… do you think we’ll ever be able to create an answer to that engineering question, such that technology can be built around it? Because otherwise we might just be stuck on the formulation of the problem.
The logic that says we can build it is very straightforward and seemingly ironclad. The logic goes like this: if we figure out how a neuron works, we can build one, either physically build one or model it in a computer. And if you can model that neuron in a computer, then you learn how it talks to other neurons, and then you model a hundred billion of them in the computer, and all of a sudden you have a human mind. So that says we don’t have to understand the mind; we just have to understand the physics. The position just says: whatever a neuron does, it obeys the laws of physics, and if we can understand how those laws are interacting, then we will be able to build it. Case closed; there’s no question at all that it can be done.
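For what it’s worth, ‘modeling a neuron in a computer’ usually means something like the leaky integrate-and-fire model, the textbook simplification. A minimal sketch follows; every constant here is illustrative, and a real neuron is vastly richer than this:

    # Sketch: a leaky integrate-and-fire neuron, the simplest common way
    # to "model a neuron in a computer." All constants are illustrative.
    dt, tau = 0.1, 10.0                      # time step and membrane time constant (ms)
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # rest/threshold/reset potentials (mV)
    r_m, i_in = 10.0, 2.0                    # membrane resistance and input current

    v, spike_times = v_rest, []
    for step in range(1000):                 # simulate 100 ms
        # The potential leaks toward rest and is pushed up by the input.
        v += (dt / tau) * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:                    # threshold crossed: spike, then reset
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes in 100 ms")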
So I would say that’s the majority viewpoint. The other viewpoint says, “Well wait a minute, we have this brain that we don’t understand how it works. And then we have this mind, and a mind is a concept everybody uses and if you want a definition, it’s kind of everything your brain can do that an organ doesn’t seem like it would be able to. You have a sense of humor; your liver may not have a sense of humor.  You have emotions, your stomach may not have emotions, and so forth.” So somehow, we have a mind that we don’t know how it comes about. And then to your point, we are conscious and what that means is we experience the world. I feel warmth, [whereas] a computer measures temperature. Those are very different things and we not only don’t know how it is that we are conscious, we don’t even know how to ask the question in a scientific method, nor what the answer looks like.
And so, I would say my position to be perfectly clear is, we have brains we don’t understand, minds we don’t understand and consciousness we don’t understand.  And therefore, I am unconvinced that we can ever build something like this. And so I see no evidence that we can build it because the only example that we have is something that we don’t understand. I don’t think you have to appeal to spiritualism or anything like that, to come to that conclusion, although many people would disagree with me.
Yeah, it’s interesting. I think one thing underlying the pessimistic view is the belief that, while we may not have the technology now, or an idea of how we’re going to get there, the kinetics of an AI explosion (that’s what I think Nick Bostrom, the philosopher, has called it) may be pretty rapid, in the sense that once there is material success in developing these AI models, that will encourage researchers to pile on, bringing in more people to produce those models; and secondly, there may be advancements in self-improving AI models. So there’s a belief that we may get superintelligence pretty quickly, and that underlies the pessimism and the belief that we have to act now. What would be your thoughts on that?
Oh, well, I don’t agree. I think that’s the “Is that a bear or a rock?” kind of thing. The only evidence we really have for that scenario is movies, and they’re very compelling, and I’m not conspiratorial, and they’re entertaining. But what happens is you see that enough, and you do something that has a name, it’s called ‘reasoning from fictional evidence,’ and that’s what we do. You say, “Well, that could happen,” and then you see it again: “yeah, that could happen, that really could.” Again, and again, and again.
To put it in perspective, when I say we don’t understand how the brain works, let me be really clear about that. Your brain has 100 billion neurons, roughly the same number as there are stars in the Milky Way. You might say, “Well, we don’t understand it because there are so many.” This is not true. There’s a worm called the nematode worm. It’s about as long as a hair is thick, and its brain has 302 neurons. These are the most successful creatures on the planet, by the way; seventy percent of all animals are nematode worms. 302 neurons, that’s it, [about] the number of pieces of cereal in a bowl of cereal. So, for 20 years, a group of people in something called the OpenWorm project have been trying to model those 302 neurons in a computer, to get it to display some of the complex behavior that a nematode worm does. And not only have they not done it, there’s even a debate among them whether it is even possible to do that. So that’s the reality of the situation. We haven’t even gotten to the mind.
Again, how is it that we’re creative? And we haven’t even gotten to how it is that we experience the world. We’re just talking about how a brain works, and if modeling a brain with only 302 neurons, with a bunch of smart people working on it for 20 years, may not even be possible, then somehow to spin a narrative of, “Well, yeah, that all may be true, but what if there was a breakthrough, and then it sped up on itself, and sped up, and then it got smarter, and then it got so smart it had an IQ of 100, then a thousand, then a million, then 100 million, and then it doesn’t even see us anymore,” that’s as speculative as any other kind of scenario you want to come up with. It’s so removed from the facts on the ground that you can’t rebut it, because it is not based on any evidence that you can refute.
You know, the fun thing about chatting with you, Byron, is that the temptation is to sort of jump into all these theories and which ones are your favorites. So because I have the microphone, I will.  Let me just jump into one.  Best science fiction theory that you like. I think we’ve touched on a few of these things, but what is the best unified theory of everything, from science fiction that you feel like, ‘you know what, this might just explain it all’?
Star Trek.
Okay. Which variant of it?  Because there’s not…
Oh, I would take either… I’ll take ‘The Next Generation.’ So, what is that narrative? We use technology to overcome scarcity. We have bumps all along the way. We are insatiably curious, and we go out to explore the stars. As Captain Picard told the guy they thawed out from the 20th Century, the challenge in our time is to better yourself, to discover who you are. And what we found, interestingly, with the Internet, and sure, you can list all the nefarious uses you want, what we found is the minute you make blogs, 100 million people want to tell you what they think. The minute you make YouTube, millions of people want to upload video; the minute you make iTunes, music flourishes.
I think in my father’s generation, they didn’t write anything after they left college. We wake up in the morning and we write all day long; you send emails constantly. And so what we have found is that it isn’t that there were just a few people who wanted to create. In the Italian Renaissance, it wasn’t that only a few people wanted to paint or cared to paint; probably everybody did. Only there wasn’t enough of the good stuff, and so you got to paint only if you had extreme talent or extreme wealth.
Well, in the future, in the Star Trek variant of it, we’ve eliminated scarcity through technology, and everybody is empowered: every Dante to write their Inferno, every Marie Curie to discover radium, and all of the rest. And so that vision of the future… you know, Gene Roddenberry said in the future there will be no hunger and there will be no greed, and all the children will know how to read. That variant of the future is the one that’s most consistent with the past. That’s the one where you can say, “Yeah, to somebody in the 1400s looking at our life today, this would look like Star Trek. These people push a button and the temperature in the room gets cooler, and they have leisure time. They have hobbies.” That would’ve seemed like science fiction.
I think there are a couple of things that I want to tackle with the Star Trek analogy to get us warmed up on this, and I think Kyran’s waiting here at the top to ask some of them, but I think the most obvious one to ask, if we use that as a parable of the future, is about Lieutenant Commander Data. Lieutenant Commander Data is one of the characters starring in The Next Generation and is the closest attempt at artificial general intelligence, and yet he’s crippled from fully comprehending the human condition because he’s got an emotion chip that has to be turned off, because when it’s turned on, he goes nuts; and his brother is also nuts because he was overly emotional, and he ends up representing every negative quality of humanity. So to some extent, not only have I just shown off my knowledge of the Star Trek era…
Lore wasn’t overly emotional. He got the chip that was meant for Data, and it wasn’t designed for him. That was his backstory.
Oh, that’s right. I stand corrected, but maybe you can explore that.  In that future, walk us through why you think Gene had that level of limitation for Data, and whether or not that’s an implication of ultimately the limits of what we can expect from robots.
Well, obviously that story, that whole setup, is just not hard science, right? That whole setup is, like you said, embodying us; it’s the Pinocchio story of Data wanting to be a boy, and all of the rest. So, it’s just storytelling as far as I’m concerned. You know, it’s convenient that he has a positronic brain, and having removed part of his scalp, you just see all this light coursing through, but that’s not something that science is behind, like Warp 10 or the tricorder. You know, Uhura in the original series, she had a Bluetooth device in her ear all the time, right?
Yeah, but with the Data metaphor, what I’m asking is this: the limitations that prevented Data from being able to do some of the things that humans do, and therefore ultimately come around full circle into being a fully independent, conscious, free-willed, sentient being, were entirely because of some human elements he was lacking. The question, and you brought it up in your book, is whether or not we need those human elements to really drive that final conversion of a machine into some sort of entity that we can respect as an equivalent peer to us.
Yeah. Data is a tricky one, because he could not feel pain, so you would say he’s not sentient. And to be clear, ‘sentient’ is often misused to mean ‘smart’; that’s sapient. Sentient means you can experience pain. He didn’t, but as you said, at some point in the show he experienced emotional pain through that chip, and therefore he is sentient. They had a whole episode about “Does Data have a soul?” And you’re right, I think there are things that humans do that are hard to explain, unless you start with the assumption that everything in a human being is mechanistic, in physics, and that you’re a bag of chemicals with electrical impulses going through you.
If you start with that, then everything has to be mechanical; but most people don’t see themselves that way, I have found. And so if there is something else, something emergent or something else that’s going on, then yeah, I believe that has to be wrapped up in our intelligence. That being said, everybody, I think, has had this experience of driving along and kind of spacing [out], and then you kind of ‘come to’ and you’re like, “Holy cow, I’m three miles along. I don’t remember driving there.” Yet you behaved very intelligently. You navigated traffic and did all of that, but you weren’t really conscious. You weren’t experiencing the world, at least not much. That may be the limit of what we can do, because that person, during those three minutes of being spaced out, also didn’t write a new poem or do anything creative. They merely, mechanically, went through the motions of driving. That may be the limit. That may be that last little bit that makes us human.
The Star Trek view has two pieces to it. It has a technological optimism, which I don’t contest; I think I’m aligned with you in agreeing with that. There’s also an economic or social optimism there, and that’s about how that technology is owned: who owns the means of production, who owns the replicators. When it comes to that, how precarious do you think the Star Trek universe is? In the sense that if the replicators are only in the hands of a certain group of people, if they’re so expensive that only a few people own them, or only a few people own the robots, then it’s no longer such an optimistic scenario that we have. I’d just be interested in hearing your views there.
You’re right that the replicator is a little bit of a convenient… I don’t want to say it’s a cheat, but it’s a convenient way to get around scarcity, and they never really go into, well, how is it that anybody could go to the library and replicate whatever they wanted? How did they get that? I understand those arguments. We have [a world where] the ability of a person using technology to affect a lot of lives goes up, and that’s why we have more billionaires. We have more self-made billionaires now; a higher percentage of billionaires are self-made now than ever before. You know, Google and Facebook together made 12 billionaires. The ability to make a billion dollars gets easier and easier, at least for some people (not me), because technology allows them to multiply and affect more lives, and you’re right, that does tend to make more super, super, super rich people. But I think the income inequality debate maybe needs a slight bit of focus.
To my mind it doesn’t matter all that much how many super rich people there are. The question is how many poor people are there? How many people have a good life? How many people can have medical care and can, you know, if I could get everybody to that state, but I had to make a bunch of super rich people, it’s like, absolutely, we’ll take that. So I think, income inequality by itself is a distraction.
I think the question is how you raise the lot of everybody else, and what we know about technology is that it gets better over time and the prices fall over time, and that goes on ad infinitum. Who could have afforded an iPhone 20 years ago? Nobody. Who could have afforded a cell phone 30 years ago? Rich people. Who could have afforded any of this stuff all those years ago? Nobody but the very rich. And yet, because the rich buy these things first, the prices keep falling and everybody else benefits.
I don’t deny there are all kinds of issues. You have your Hepatitis C vaccine, costs $100,000 and there are a lot of people who need it and only a few people are going to [get it]. There’s all kinds of things like that, but I would just take some degree of comfort that if history has taught us anything, is that the price of anything related to technology falls over time. You probably have 100 computers in your house.  You certainly have dozens of them, and who from 1960 would have ever thought that ? Yet here they are here. Here we are in that future.
So I think you almost have to be conspiratorial to say: we're going to get these great new technologies, only a few people are going to control them, they're going to use them to increase their own wealth ad infinitum, and everybody else is going to get the short end of the stick. Again, I think that's playing on fear. Because if you just ask, "What are the facts on the ground? Are we better off than we were 50 years ago, 100 years ago, 200 years ago?" I think you can only say yes.
Those are all very good points. I'm actually tempted to jump around a little bit in your book and revisit a couple of ideas from the narrow AI section, but maybe what we can do is merge the question about robot-proofing jobs with some of what you talk about in the last part, the road from here.
One of the things you mentioned before is this general idea that the world is getting better no matter what; the things we just discussed about iPhones and computers becoming more and more accessible are examples of it. You talked about the 'murderous meerkats' section, where even things like crime are improving over time, and therefore there is no real reason for us to fear the future. But at the same time, I'm curious whether you think there is a decline in certain elements of society that we aren't factoring into that dataset of positivity.
For example, do we feel that there is a decline in the social values that developed in an earlier era, things like helping each other out and looking out for the collective versus the individual? Has that come and gone, and are we now starting to see the manifestations of that decline through social media and how it represents itself? I just wanted to get your thoughts on the road from here, and whether you would revisit them if somebody showed you sociological research on the decline of social values and how that might affect the kinds of jobs humans will have in the future versus robots.
So I’m an optimist about the future. I’m clear about that. Everything is hard. It’s like me talking about my companies. Everything’s a struggle to get from here to there. I’m not going to try to spin every single thing. I think these technologies have real implications on people’s privacy and they’re going to affect warfare and there are all these things that are real problems that we’re really going to have to have to think about.  The idea that somehow these technologies make us less empathetic, I don’t agree with. And you can just run through a list of examples like everybody kind of has a cause now. Everybody has some charity or thing that they support. Volunteerism, Go-Fund-Me’s are up…People can do something as simple as post a problem they have online and some stranger who will get nothing in return is going to give them a big, long answer.
People toil on a free encyclopedia, and they toil in anonymity; they get no credit whatsoever. We had the open source movement. Nobody saw that coming. Nobody said, "Yeah, programmers are going to work really hard, write really good stuff, and give it away." Nobody said we would have Creative Commons, where people create digital works and give them away. Nobody said, "Oh yeah, people are going to upload videos on YouTube and just let other people watch them for free." Everywhere you look, technology empowers us and our benevolence.
To take the other view is a "Kids these days!", cane-shaking, "Get off my lawn!" kind of view: things are bad now and getting worse, which is what people have said for as long as people have been reflecting on their own age. So I don't buy any of that. In terms of jobs specifically, I've tried hard to figure out what the half-life of a job is, and I think every 40 or 50 years, half of all the jobs vanish. Because what does technology do? It creates great new high-paying jobs, like geneticist, and it destroys low-paying, tedious jobs, like order taker at a fast food restaurant.
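As a rough check on that half-life claim, here is a minimal back-of-the-envelope sketch (assuming the 50-year figure) showing how modest the implied yearly churn is compared with ordinary labor-market turnover:

```python
# If half of all jobs turn over every 50 years, what fraction turns
# over in any single year? (Assumes a constant exponential decay.)
half_life_years = 50
annual_rate = 1 - 0.5 ** (1 / half_life_years)
print(f"about {annual_rate:.1%} of jobs replaced per year")  # ~1.4%
```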
And what people sometimes say is, "You really think that order taker is going to become a geneticist? They're not trained for these new jobs." And the answer is, "Well, no." What happens is that a college professor becomes a geneticist, a high school biology teacher gets the college job, and the substitute teacher gets hired into the high school job, all the way down. The question isn't, "Can the person who lost their job to automation get one of these great new jobs?" The question is, "Can everybody on the planet do a job a little harder than the job they have today?" If the answer is yes, then every time technology creates great new jobs, everybody down the line gets a promotion. And that is why, for 250 years, we have had full employment in the West: unemployment, other than during the Depression, has always been between 5 and 10 percent.
Why have we had full employment and rising wages for 250 years? Because even when something like the assembly line came out, or when we replaced all the animal power with steam, you never had bumps in unemployment; people just used those technologies to do more. So yes, in 40 or 50 years half the jobs are going to be gone. That's just how the economy works. The good news, though: when I think back to my K-12 education and ask, if I had known the whole future, what would I have taken then that would help me today, I can only think of one thing that I really missed out on. Can you guess, by the way?
Computer education?
No, because anything they taught me then would no longer be useful. Typing. I should have taken typing. Who would have thought that would be the skill I need most every day? But I didn't know that. So you have to say, "Wow, everything I do in my job today is not stuff I learned in school." What we all do now is hear a new term or concept, google it, click through to Wikipedia, follow the links, and then it's 3:00 AM and we wake up the next morning knowing something about it. That's what every single one of us does, what every single one of us has always done, and what every single one of us will continue to do. And that's how the workforce morphs. It isn't that we're facing some cataclysmic disconnect between our education system and our job market. It's that people are going to learn to do the new things, just as they learned to be web designers, and just as they learned every other thing they didn't learn in school.
Yeah, we’d love to dive into the economic arguments in a second, but just to bring it back to your point that technology is always empowering. I’m going to play devil’s advocate here and mention someone we had on the podcast about a year ago. Tristan Harris, who’s the leader of an initiative called ‘Time Well Spent’ and his arguments were that the effects of technology can be nefarious. Two days ago, there was a New York Times article, referring to a research paper on statistical analysis and anti-refugee violence in Germany, and one of the biggest correlating factors was time spent on social media, suggesting that it isn’t always like beneficial or benign for humans. Just to play devil’s advocate here, what is your take on that?
So, is your point that social media causes people to be violent? Or is the interpretation that people prone to violence are also prone to using social media?
Maybe one variant of that, and Kyran can provide his own, is that the good is getting better with technology and the bad is getting worse with it. You just hope that the bad side doesn't detonate something that is irreversible.
Well, I will not uniformly defend every application of technology in every single situation. I could rattle off all the nefarious uses of the Internet: bilking people, and so on. You know them all; you don't need me to list them. The question isn't, "Do any of those things happen?" The question is, "On balance, are more people using the Internet for good than for evil?" And we know the answer is 'good.'
It has to be, because if we were more evil than good as a species, we never would have survived this long. We're highly communal. We've only survived because we like to support each other. Granted, there are all the wars, all the problems, all the social strife. But in the end you're left with the question, "How did we make progress to begin with?" And we made progress because there are more people working for progress than there are people carrying torches and doing all the rest. It's that simple.
I guess I’m not qualified to make this statement, but I’m going to go ahead and do it anyway. Humans have those attributes because we’re inherently social animals, and as a consequence we’re driven to survive and forego being right at times, because we value the social structure more than we do our own selves; and we value the success of the social structure more than ourselves; and there’s always going to be deviations from that, but on average it then answers and  shows and represents itself in the way that you have articulated it. 
And that’s a theory that I have, but one of the things that if you accept that theory, well you can let me know or not, but let’s, for the sake of the question, let’s just assume that it’s correct, then how do you impart that onto a collection of artificial intelligences such that they mirror that? And as we start delegating more and more to those collective artificial intelligences, can we rely on them to have that same drive when they’re no longer as socially dependent on each other, the way that humans are for reproduction and defense and emotional validation?
That could well be the case, yes. We have to make sure we program them to reflect an ethical code, and that's an inherently hard thing to do, because people aren't great at articulating ethical codes, and even when they do articulate them, the codes are full of provisos and exceptions, and everybody's is different. But luckily there are certain broad concepts almost everybody agrees with: that life is better than death, that building is better than destroying. Those are the very high-level concepts we will need to take great pains to build into our AIs, and this is an old debate, even in AI.
There was a man named Joseph Weizenbaum who made a chatbot, ELIZA, in the sixties. It was simple. You would say, "I'm having a bad day today," and it would say, "Why are you having a bad day?" "I'm having a bad day because of my mother." "Why are you having a bad day because of your mother?" Back and forth. Super simple. Everybody knew it was a chatbot, and yet he saw people getting emotionally attached to it, and he kind of turned on the whole enterprise.
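To see how little machinery is behind that effect, here is a minimal sketch in the spirit of ELIZA. This is not Weizenbaum's actual script, just an illustration of the pattern-and-reflection trick such programs rely on:

```python
import re

# Ordered (pattern, response template) pairs; the first match wins.
RULES = [
    (r"i'?m having a bad day because of (.+)",
     "Why are you having a bad day because of {0}?"),
    (r"i'?m (.+)", "Why are you {0}?"),
    (r".+", "Tell me more."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see."

print(respond("I'm having a bad day today."))
# -> Why are you having a bad day today?
print(respond("I'm having a bad day because of my mother."))
# -> Why are you having a bad day because of my mother?
```

Everything such a program "understands" is a pattern match; there is no model of a mother, a day, or a feeling anywhere in it.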
When the computer says 'I understand,' it's just a lie: there is no 'I,' and there is no understanding. He came to believe we should never let computers do those kinds of things. They should never be the recipients of our emotions; we should never make them caregivers, and all these other things, because in the end they have no moral capacity at all. They have no empathy, only faked, simulated empathy. So I think there is something to that: there will simply be jobs we're not going to want them to do, because in the end those jobs are going to require a person.
You see, any job a computer or a robot could do, if you make a person do it, there's a word for that: dehumanizing. If a machine can, in theory, do a job, and you make a person do it anyway, you're not using anything about them that makes them a human being. You're using them as a stand-in for a machine, and those are the jobs machines should do.
But then there are all the other jobs, the ones that only people can do, and that's what I think people should do. There are going to be a lot of things like this that make us uncomfortable, and we still don't have any idea how to handle them. When you're on a chatbot, should you have to be told it's a chatbot? Should robotic voices on the phone actually sound somewhat robotic, so you know it's not a person? Think about R2-D2 or C-3PO, and then imagine if their names were Jack and Larry. That's a subtle difference in how we regard them, and we don't yet know how we're going to handle it. But you're entirely right: machines don't have any empathy, they can only fake it, and there are real questions about whether that's good or not.
Well, that’s a great way of looking at it, and one of the things that’s been really great during this chat is understanding the origin of some of these views and how you end up at this positive outcome at the end of the day on average. And the book does a really good job of leaving the reader with that thought in mind, but arms them to have these kinds of engaging conversations. So thanks for sharing the book with us and thanks for providing your opinion on different elements of the book.
However, you know, it’d be great to get some thoughts about things that you feel that inspired you or that you left out of the book. For example, which movies have most affected you in the vein of this particular book. What are your thoughts on a TV show like Westworld and how that illustrates the development of the mind of the artificial intelligence in the show? Maybe just share a little bit about how your thoughts have evolved.
Certainly, and I would also like to add that I do think there's one way it can all go south. There is one pessimistic future, and it comes about if people stop believing in a better tomorrow. I think pessimism is what will get us all killed. The reason optimism has been so successful is that there have always been people who get up and say, "Somebody needs to invent the blank. Somebody needs to find a cure for this. Somebody needs to do it. I will do it." You have enough people who believe, in one form or another, in a better tomorrow.
There’s a mentality of, don’t polish brass on a sinking ship. And that’s where you just say, “Well what’s the point? Why bother? Why bother?” And if enough people said “Why bother?” then we are going to have to build that world. We’re going to have to build that better world. And just like I said earlier with my companies, it’s going to be hard. Everybody’s got to work hard at it. And so, it’s not a gift, it’s not free.  We’ve clawed our way from savagery to civilization and we’ve got to keep clawing. But the interesting thing is, finally I think there is enough of the good stuff for everybody and you’re right, there are big distribution problems about that, and there are a lot of people who aren’t getting any of the good stuff, and those are all real things we’re going to have to deal with.
When it comes to movies and TV, I have to see them all, because everybody asks me about them on shows. And I used to loathe going to all the pessimistic movies, which have far and away dominated. In fact, thinking of Black Mirror, I even started writing out story ideas in my head for a show I call 'White Mirror.' Who's telling the stories about how everything can be good in the future? That doesn't mean they'd be bereft of drama; it just means a different setting in which to explore these issues.
I used to be so annoyed at having to go to all of these movies. I would go see some movie like Elysium and think: yeah, there are the 99 percent, poor and beaten down, covered in dirt; and the 1 percent, I bet they live someplace high up in the sky, pretty and clean. Yep, there it is. And then you see Metropolis, from almost a century ago, the most expensive movie ever made adjusted for inflation, and yeah, there are the 99 percent, covered in dirt (everybody forgets to bathe in the future), and I wonder where the... oh yeah, the one percent live in that tower up there, where everything is white and clean. Wow, isn't that something. And I have to sit through these things.
And then I read a quote by Frank Herbert, who said that sometimes the purpose of science fiction is to keep the future from happening. And I said, okay, these are cautionary tales. These are warnings. Now I view them all like that. So I think there are a lot of cautionary tales out there and very few things like Star Trek. You heard me answer that so quickly because there aren't many positive views of the future in science fiction. It just doesn't seem to be as rich a ground for telling stories, and even in that world you had to have the Ferengi, and the Klingons, and the Romulans, and so forth.
So, I’ve watched them all and you know, I enjoy Westworld, like the next person.  But I also realized those are people playing those androids and that nobody can build a machine that does any of that. And so it’s fiction. It’s not speculative in my mind. It’s pure fiction. It’s what they are and that doesn’t mean they’re any less enjoyable… When I ask people on my AI podcast what science fiction influenced you, they all, almost all say Star Trek. That was a show that inspired people, and so I really gravitate towards things that inspire me and inspire me in a vision of a better tomorrow.
For me, if I had to answer that question, I would say The Matrix. It brings up a lot of philosophical questions, even questions about reality. And it's dystopian in some ways, I guess, but in other ways it illustrates how we got there and how we can get out of it. It has a utopian conclusion of sorts, because it ends in liberation. But it is an interesting point you make.
It also makes me reflect back on all the movies I've seen, and it brings up another question: whether this is just representative of the times. If you look at art and literature over the years, in many ways they are inspired by what's going on in that era. You can see bouts of optimism after the resolution of some conflict, and then the brewing of social upheaval that ends up in another conflict, and you see that across the decades. And I guess that raises a moral responsibility for us not to generate the most intense set of innovations around artificial intelligence at a point when society is quite split, because we might inject unfortunate conclusions into AI systems just because of where we are in our geopolitical evolution.
Yeah. I call my airline of choice once a week, and it asks me to state my member number, which unfortunately has an A, an H, and an 8 in it, and it never gets it right. That's what people are actually trying to do with AI today: make a lot of really tedious stuff less tedious. (And use caller ID, by the way. I always call from the same number. But that's a different subject.)
And so most of the problems we try to solve with it are relatively mundane, and many of them, like how we stop disease, are very worthwhile. It's not a scary technology. It's: study the past, look for patterns in the data, project into the future. That's it. Anything around it that tries to make it terrifying is, I think, sensationalism. The responsibility is to tell the story of AI like that, without the fear, emphasizing all the good that can come out of this technology.
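That three-step description can be made literal in a few lines. A minimal sketch, with made-up numbers chosen only for illustration:

```python
import numpy as np

# Study the past: five historical observations (hypothetical figures).
years = np.array([2014, 2015, 2016, 2017, 2018], dtype=float)
units = np.array([1.0, 1.4, 1.9, 2.3, 2.8])  # e.g., millions sold

# Look for patterns in the data: fit a straight line.
slope, intercept = np.polyfit(years, units, deg=1)

# Project into the future: extrapolate one year out. That's it.
print(slope * 2019 + intercept)  # roughly 3.2
```

Much of deployed machine learning is a more elaborate version of exactly this loop, which is part of why the fear can outrun the reality.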
What do you think we’ll look upon 50 years from now and think, “Wow, why were we doing that?” How do you get away with that, the way that we look back today on slavery and think, “Why the hell did that happen?”
Well, I will give an answer to that, and it's not my own personal axe to grind. To be clear, I live in Austin, Texas; we have barbecue joints here in abundance. But I believe we will learn to grow meat in a laboratory, and it will be not only massively better environmentally, it will taste better and be cheaper and healthier and all the rest. So I think we're going to grow all of our meat, and maybe even all of our vegetables, by the way; why do you need sunlight and rain and all of that? But put that aside for a minute. I think we're going to grow all of our meat in the future, and I don't know, if you grow it from a cell, whether eating it still counts as veganism. Maybe it does, strictly speaking. But I think once the best steak you've ever had in your life costs 99 cents, everybody's just going to have that.
And then we’ll look back at how we treat animals with a sense of collective shame of that, because the question is, “Can they feel?” In the United States, up until the mid-90s, veterinarians were taught that animals couldn’t feel pain and so they didn’t anesthetize them. They also operated on babies at the same time because they couldn’t feel pain. Now I think people care whether the chicken that they’re eating was raised humanely. And so, I think that expansion of empathy to animals, who now I think most people believe they do feel pain, they do experience sadness or something that must feel like that, and the fact that we essentially keep them in abhorrent conditions and all of that.
And again, I’m not grinding my own axe here. This isn’t something that…I don’t think it’s going to come up with people, like overnight changing. I think what’s gonna happen is there’ll be an alternative. The alternative will be so much better, but then everybody would use it and look back and think, how in the world did we do that?
No, I agree with that. As a matter of fact, we've invested in a company that's trying to solve that problem. They're in stealth right now, but by the time this interview goes to print, hopefully we'll be able to talk about them, and I'll post it in the show notes. But yes, I agree with you entirely, and we've put our money behind it, so I'm looking forward to that being one of the problems that gets solved. Now another question: what's something you used to strongly believe that you now think you were fundamentally misguided about?
Oh, that happens all the time. I didn't set out by saying, "I will write a book that doesn't really say what I think; it'll just be a framework." I wrote the book to figure out what I think, because I kept hearing all these proclamations about these technologies and what they could do. I think I used to be much more in the AGI camp, believing this is something we're going to build and we're going to have those things, like on Westworld (this was before Westworld, though). And I stayed there until I wrote the book, which changed me. I can't say I disbelieve it now, that would be the wrong way to put it, but I see no evidence for it. I used to buy that narrative a lot more, and I didn't realize it was less a technological opinion and more a metaphysical one. Working through all of that, and understanding all the biases and all the debate, is very humbling, because these are big issues, and what I wanted to do, like I said, is make a book that helps other people work through them.
Well, it is a great book. I've really enjoyed reading it; thank you very much for writing it, and congratulations! This is also the longest podcast we've ever recorded, but it's a subject that is very dear to me, and one that is endlessly fascinating. We could continue on, but we're going to be respectful of your time, so thank you for joining us and for your thoughts.
Well, thank you. Anytime you want me back, I would love to continue the conversation.
Well, until next time guys. Bye. Thanks for listening. If you enjoyed the podcast, don’t forget to subscribe on iTunes and SoundCloud and leave us a review with your thoughts on our show.

Voices in AI – Episode 70: A Conversation with Jakob Uszkoreit

[voices_in_ai_byline]

About this Episode

Episode 70 of Voices in AI features host Byron Reese and Jakob Uszkoreit discuss machine learning, deep learning, AGI, and what this could mean for the future of humanity. Jakob holds a master's degree in Computer Science and Mathematics from Technische Universität Berlin and has worked at Google for the past 10 years, currently on deep learning research with Google Brain.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Jakob Uszkoreit, he is a researcher at Google Brain, and that’s kind of all you have to say at this point. Welcome to the show, Jakob.
Let’s start with my standard question which is: What is artificial intelligence, and what is intelligence, if you want to start there, and why is it artificial?
Jakob Uszkoreit: Hi, thanks for having me. Let's start with artificial intelligence specifically. I don't think I'm necessarily the best person to answer the question of what intelligence is in general, but for artificial intelligence, there are possibly two different kinds of ideas we might be referring to with that phrase.
One is the scientific sense: the group of directions of scientific research, including machine learning but also other related disciplines, that people commonly refer to with the term 'artificial intelligence.' But I think there's another, maybe more important use of the phrase, one that has become much more common in this age of the rise of AI, if you want to call it that, and that is what society interprets the term to mean. Largely, what society might think when they hear 'artificial intelligence' is actually automation, in a very general way, and maybe more specifically, automation where the process of automating something requires the machine or machines doing it to make decisions that are highly dynamic in response to their environment, decisions that, in our conceptualization of those processes, require something like human intelligence.
So, I really think it’s actually something that doesn’t necessarily, in the eyes of the public, have that much to do with intelligence, per se. It’s more the idea of automating things that at least so far, only humans could do, and the hypothesized reason for that is that only humans possess this ephemeral thing of intelligence.
Do you think it’s a problem that a cat food dish that refills itself when it’s empty, you could say has a rudimentary AI, and you can say Westworld is populated with AIs, and those things are so vastly different, and they’re not even really on a continuum, are they? A general intelligence isn’t just a better narrow intelligence, or is it?
So I think that’s a very interesting question. Whether basically improving and slowly generalizing or expanding the capabilities of narrow intelligences, will eventually get us there, and if I had to venture a guess, I would say that’s quite likely actually. That said, I’m definitely not the right person to answer that. I do think that guesses, that aspects of things are today still in the realms of philosophy and extremely hypothetical.
But the one trick we have gotten good at recently, the one that's given us things like AlphaZero, is machine learning, right? And it is itself a very narrow thing. It basically has one core assumption, which is that the future is like the past. And for many things it is: what a dog looks like in the future is what a dog looked like yesterday. But one has to ask, "How much of life is actually like that?" Do you have an opinion on that?
Yeah, so I think machine learning is actually evolving rapidly away from the initial, classic idea of trying to predict the future from the past, and not just the past, but the past as an encapsulated version of itself: a snapshot captured in a fixed, static data set. You expose the machine to that, you allow it to learn from it, train on it, whatever you want to call it, and then you evaluate how the resulting model or machine or network does in the wild, or on some evaluation tasks and tests that you've prepared for it.
It’s evolving from that classic definition towards something that is quite a bit more dynamic, that is starting to incorporate learning in situ, learning kind of “on the job,” learning from very different kinds of supervision, where some of it might be encapsulated by data sets, but some might be given to the machine through somewhat more high level interactions, maybe even through language. There is at least a bunch of lines of research attempting that. Also quite importantly, we’re starting slowly but surely to employ machine learning in ways where the machine’s actions actually have an impact on the world, from which the machine then keeps learning. I think that that’s actually something [for which] all of these parts are necessary ingredients, if we ever want to have narrow intelligences, that maybe have a chance of getting more general. Maybe then in the more distant future, might even be bolted together into somewhat more general artificial intelligence.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 69: A Conversation with Raj Minhas

[voices_in_ai_byline]

About this Episode

Episode 69 of Voices in AI features host Byron Reese and Dr. Raj Minhas talk about AI, AGI, and machine learning. They also delve into explainability and other quandaries AI is presenting. Raj Minhas holds a PhD and MS in Electrical and Computer Engineering from the University of Toronto and a BE from Delhi University. He is also the Vice President and Director of the Interactive and Analytics Laboratory at PARC.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m excited that our guest is Raj Minhas, who is Vice President and the Director of Interactive and Analytics Laboratory at PARC, which we used to call Xerox PARC. Raj earned his PhD and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. He has eight patents and six patent-pending applications. Welcome to the show, Raj!
Raj Minhas: Thank you for having me.
I like to start off, just asking a really simple question, or what seems like a very simple question: what is artificial intelligence?
Okay, I’ll try to give you two answers. One is a flip response, which is if you tell me what is intelligence, I’ll tell you what is artificial intelligence, but that’s not very useful, so I’ll try to give you my functional definition. I think of artificial intelligence as the ability to automate cognitive tasks that we humans do, so that includes the ability to process information, make decisions based on that, learn from that information, at a high level. That functional definition is useful enough for me.
Well I’ll engage on each of those, if you’ll just permit me. I think even given a definition of intelligence which everyone agreed on, which doesn’t exist, artificial is still ambiguous. Do you think of it as artificial in the sense that artificial turf really isn’t grass, so it’s not really intelligence, it just looks like intelligence? Or, is it simply artificial because we made it, but it really is intelligent?
It’s the latter. So if we can agree on what intelligence is, then artificial intelligence to me would be the classical definition of artificial intelligence, which is re-creating that outside the human body. So re-creating that by ourselves, it may not be re-created in the way it-is created in our minds, in the way humans or other animals do it, but, it’s re-created in that it achieves the same purpose, it’s able to reason in the same way, it’s able to perceive the world, it’s able to do problem solving in that way. So without getting necessarily bogged down by what is the mechanism by which we have intelligence, and does that mechanism need to be the same; artificial intelligence to me would be re-creating that – the ability of that.
Fair enough, so I’ll just ask you one more question along these lines. So, using your ability to automate cognitive tasks, let me give you four or five things, and you tell me if they’re AI. AlphaGo?
Yes.
And then a step down from that, a calculator?
Sure, a primitive form of AI.
A step down from that: an abacus?
An abacus, sure, though it involves humans in its operation, so maybe it's on that boundary where it's only partially automated. But yes.
What about an assembly line?
Sure, so I think…
And then I would say my last one, which is a cat food dish that refills itself when it's empty? And if you say yes to that…
All of those things to me are intelligent, but some of them are very rudimentary. For example, look at animals. On one end of the scale are humans, who can do a variety of tasks that other animals cannot, and on the other end of the spectrum you may have very simple, single-celled organisms. They may do things I would find intelligent, or they may simply be responding to stimuli, with that intelligence very much hard-coded. They may not have the ability to learn, so they may not have all aspects of intelligence. But I think this is where it gets really hard to say what intelligence is. Which was my flip response.
If you say "what is intelligence?", I can say I'm trying to automate that with artificial intelligence. So if you include in your definition of intelligence, which I do, that the ability to do math implies intelligence, then automating that with an abacus is a way of doing it artificially, right? You had been doing it in your head, using whatever mechanism is in there, and you're trying to do it artificially. So it is a very hard question that seems so simple, but at some point, in order to be logically consistent, you have to say yes: if that's what intelligence means, then that's what it means, even though the examples can get very trivial.
Well, I guess then, and this really is the last question along those lines: if everything falls under your definition, what's different now? What's changed? A word that means everything means nothing, right?
That is part of the problem, but I think what is becoming more and more different is the kinds of things we're able to do. We are now able to reason artificially in ways we were not able to before. Even if you take the narrower definition people tend to use, which is machine learning, we're able to use that to perceive the world in ways we could not before. So what is changing is the ability to do more and more of those things without relying on a person at the point of doing them. We still rely on people to build those systems and teach them how to do those things, but we are able to automate a lot of that.
Obviously, artificial intelligence to me is more than machine learning, where you show a system a lot of data and it learns a function, because it includes the ability to reason about things: to be able to say, "I want to create a system that does X; how do I do it?" Can you reason about models, and come to some way of putting them together and composing them to achieve that task?
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Could DevOps Exist Without Cloud Models?

The GigaOm DevOps market landscape report is complete, distilling conversations and briefings into a mere 8,500-word narrative. Yes, it's big, but it could have been bigger, for the simple reason that DevOps touches, and is therefore helped and hindered by, every aspect of IT and, indeed, the broader business. Security and governance, service-level delivery, customer experience, API and data management, deployment and orchestration, legacy migration and integration: they all affect DevOps success, or cause what we have termed DevOps Friction.
While the report is about DevOps, and not all these other things (the line had to be drawn somewhere), one aspect rings out like a bell. I go back to an early conversation I had with ex-analyst Michael Coté, who brings a hard-earned, yet homespun wisdom to technology conversations. I paraphrase but Coté’s point, pretty much, was, “The kids of today, they don’t know any other way of building things than using cloud-based architectures.”
With that, he lifted his rifle and shot a can off the fence. Okay, no he didn’t, he talked about the foolishness of caring about operating system versions rather than just using what’s offered by the cloud provider. It took me back to a software audit I undertook, many years ago, when the ultra-modern JBoss interface layer built onto a Progress back-end had been customized by a freelancer who promptly left, leaving the organization with a poorly documented, legacy open source deployment… but I digress.
Many, if not all, startups work on the basis of using what they are given, innovating on top rather than worrying about what's under the hood (or bonnet, as we say over here; I know, right?). They also then adopt some form of DevOps practice, as the faster they can add new features, the more quickly their organization will progress: the notion of the constant Beta has been replaced by measuring success in terms of delivery cycles.
Bluntly, the startup innovation approach wouldn't work without cloud. Providers such as AWS know this; they also know their job is to deliver as many new capabilities as they can, feeding the sausage machine of innovation, however much this complicates things for people trying to understand what is going on. AWS is more SkyMall than Sears, with its own business model based on the dynamism of new feature delivery.
This truth also applies to the toolsets around DevOps, which are geared up to help deploy to cloud-based resources, orchestrate services, deploy containers and spin up virtual machines. If a single cloud is your target, the DevOps pipeline is a sight simpler than if you are deploying to an in-house, hybrid and/or multi-cloud environment. Which, of course, reflects the vast majority of enterprises today.
The point, and the central notion behind the report, is that enterprises don’t have it easy: DevOps needs to roll with the punches, rather than sneering from the sidelines about how much easier everything could be. We are where we are: enterprises are complex, wrapped up in historical structures, governance and legacy, and need to be approached accordingly. They might want to adopt cloud wholesale, and may indeed do so at some point in the future, but getting there will be an evolution, not an overnight, flick the switch transformation.
DevOps Friction comes from this reality, and many providers are looking to do something about it. As per a recent conversation with my colleague Christina von Seherr-Thoss, such developments as VMware running on AWS, or indeed Kubernetes-VMware integration, help close the gap between the now-embedded models of the data center, and the capabilities of the cloud. This isn’t just about making things work together: it’s also transferring some of the weight of processing from internal, to external models.
And, by doing so, it’s helping organizations let go of the stuff that doesn’t matter. We’ve long talked about data gravity, in that most data now sits outside the organization, but an equally important notion is that processing gravity hasn’t followed, making enterprise DevOps harder as a result. I personally don’t care where things run: if you can run your own cloud, go for it. More important is whether you are locked into a mindset where you tinker with infrastructure, or whether you use what you are given and innovate on top.
Right now, enterprise organizations are looking to adopt DevOps as part of a bigger push, to become more innovative and adapt faster to evolving customer needs — that is, digital transformation. Enterprises are always going to struggle with the weight of complexity and size: as startups grow up, they hit the same challenges. But traditional organizations can do themselves a favor and shift to a model that breaks dependency with servers, storage and so on.
While we don’t deep-dive on infrastructure and cloud advances in our DevOps report, it is fundamental and inevitable that organizations which see technology infrastructure as an externally provided (and relatively fixed) platform will be able to innovate faster than those who see it as a primary focus. Breaking the link with infrastructure, minimizing dependencies, using the operating systems you are given and building on top, could be the most important thing your organization does.

Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi

[voices_in_ai_byline]

About this Episode

Episode 67 of Voices in AI features host Byron Reese and Amir Khosrowshahi talk about the explainability, privacy, and other implications of using AI for business. Amir Khosrowshahi is VP and CTO at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.
Amir Khosrowshahi: Thank you, thanks for having me.
I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a PhD in Computational Neuroscience, so, start off by just telling us what is Computational Neuroscience?
So neuroscience is the study of the brain, and it is a mostly biologically minded field. There are aspects of intelligence that are computational, and there are aspects of studying the brain that involve opening up the skull, peering inside, sticking needles into areas, and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning, and mathematical aspects to intelligence, and then the biology, where you are making an attempt to map equations from machine learning onto what is actually going on in the brain.
I have a theory, which I may not be qualified to have and you certainly are, and I would love to know your thoughts on it. I think it's very interesting that people are really good at being trained with a sample size of one: draw a made-up alien you've never seen before, and then I can show you a series of photographs, and even if that alien's upside down, underwater, behind a tree, whatever, you can spot it.
Further, I think it’s very interesting that people are so good at transfer learning, I could give you two objects like a trout swimming in a river, and that same trout in a jar of formaldehyde in a laboratory and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know, and yet, likewise, if you were to ask me if hitting your thumb with a hammer hurts, and I would say “yes,” and then somebody would say, “Well, have you ever done it?” And I’m like, “yeah,” and they would say, “when?” And it’s like, I don’t really remember, I know I have. Somehow we take data and throw it out, and remember metadata and yet the fact a hammer hurts your thumb is stored in some little part of your brain that you could cut it out and somehow forget that. And so when I think of all of those things that seem so different than computers to me, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?
Okay, those are very deep questions, and each of those items is actually a separate thread in the field of machine learning and artificial intelligence; there are lots of people working on them. The first thing you mentioned, I think, was one-shot learning, where you see something that's novel, you recognize it from the first time as something singular, and you retain that knowledge so you can identify it if it occurs again. For a child it would be a chair; for you, potentially an alien. So, how do you learn from single examples?
That’s an open problem in machine learning and is very actively studied because you want to be able to have a parsimonious strategy for learning and the current ways that—it’s a good problem to have—the current ways that we’re doing learning in, for example, online services that sort photos and recognize objects and images. It’s very computationally wasteful and it’s actually wasteful in usage of data. You have to see many examples of chairs to have an understanding of a chair, and it’s actually not clear if you actually have an understanding of a chair, because the models that we have today for chairs, they do make mistakes. When you peer into where the mistakes were made, it seems like there the machine learning model doesn’t actually have an understanding of a chair, it doesn’t have a semantic understanding of a scene or of grammar, or of languages that are translated, and we’re noticing these efficiencies and we’re trying to address them.
You mentioned some other things, such as how you transfer knowledge from one domain to the next. Humans are very good at generalizing: we see an example of something in one context, and it's amazing that we can extrapolate or transfer it to a completely different context. That's also something we're working on quite actively, and we have some initial success, in that we can take a statistical model trained on one set of data and then apply it to another set of data, using the previous experience as a warm start and then moving away from the old domain toward the new one. This is also possible to do in continuous time.
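Here is a minimal sketch of that warm-start idea, with synthetic data and scikit-learn's warm_start flag standing in for the much richer models being described; every name and number below is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Source domain: plenty of labeled data for a linear concept.
w_source = np.ones(20)
X_src = rng.normal(size=(2000, 20))
y_src = (X_src @ w_source > 0).astype(int)

# Target domain: a related concept, but only a handful of labels.
w_target = w_source + 0.3 * rng.normal(size=20)
X_tgt = rng.normal(size=(50, 20))
y_tgt = (X_tgt @ w_target > 0).astype(int)

# With warm_start=True, the second fit() begins from the source-domain
# solution instead of from scratch, then moves toward the new domain.
model = LogisticRegression(warm_start=True, max_iter=200)
model.fit(X_src, y_src)   # learn the source domain
model.fit(X_tgt, y_tgt)   # continue from that solution on the target
```

Starting from a related solution rather than from zero is the whole trick; with only 50 target labels, the hope is that the inherited structure does most of the work.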
Many of the things we experience in the real world are not stationary; their statistics change with time, so we need models that can also change. A human is very good at handling non-stationary statistics, and we need to build that into our models, be cognizant of it, and we're working on it. As for the other things you mentioned: intuition is very difficult, potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having a kind of hazy idea of once having done something bad to yourself with a hammer, I'm not actually sure where that falls in the various subdomains of machine learning.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 66: A Conversation with Steve Ritter

[voices_in_ai_byline]

About this Episode

Episode 66 of Voices in AI features host Byron Reese and Steve Ritter talk about the future of AGI and how AI will affect jobs, security, warfare, and privacy. Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show Steve.
Steve Ritter: Thanks a lot Byron, thanks for having me.
So tell me, what were you thinking way back in the ’80s when you said, “I’m going to study computers and brains”? What was going on in your teenage brain?
That’s a great question. So first off I started off with a Computer Science degree and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP etc., and at the same time the University of California, San Diego was opening up their very first department dedicated to cognitive science. So I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science into it as well, simply because I was just really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area that were applying it in many different ways, and I just found it fascinating, so I had to do it.
So, there’s human intelligence, or organic intelligence, or whatever you want to call it, there’s what we have, and then there’s artificial intelligence. In what ways are those things alike and in what ways are they not?
That’s a great question. I think it’s actually something that trips a lot of people up today when they hear about AI, and we might use the term, artificial basic intelligence, or general intelligence, as opposed to artificial intelligence. So a big difference is, on one hand we’re studying the brain and we’re trying to understand how the brain is organized to solve problems and from that derive architectures that we might use to solve other problems. It’s not necessarily the case that we’re trying to create a general intelligence or a consciousness, but we’re just trying to learn new ways to solve problems. So I really like the concept of neural inspired architectures, and that sort of thing. And that’s really the area that I’ve been focused on over the past 25 years, is really how can we apply these learning architectures to solve important business problems.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Disruptive Technologies: In Conversation with Byron Reese & Lauren Sallata

Byron Reese sits down with Lauren Sallata, Chief Marketing Officer & VP, Panasonic Corporation of North America, Inc., to discuss IoT devices, driverless cars, immersive entertainment and solar initiatives.

Byron Reese: Many people are excited about the possibilities that today's new technologies offer; they see a world made better through technology. Another group views the technology landscape completely differently and is concerned about the impact of technology on privacy, community, and employment. Why is there so much disagreement, and where do you come down in your view of the future?
Lauren Sallata: In the words of Mohandas Gandhi, "Honest disagreement is often a good sign of progress." You could say the same about disagreement over technology. Panasonic is involved in engineering entirely new and better experiences in cities and factories, in stores, offices and entertainment venues, and in automobiles, airports and on airplanes. The consultancy McKinsey identified 12 disruptive technologies that will account for trillions of dollars in economic value over the decade ahead, and Panasonic is deeply engaged in 10 of those 12. We see the positive impact of these technologies clearly already. For example, in renewable energy, our lithium-ion batteries are being used in the world's leading electric vehicles to reduce pollution. Sensors embedded in road systems send information to cars and road operators about hazardous conditions and traffic, using IoT to improve driving safety and reduce traffic jams. Other examples include wearable robotics designed to reduce injuries at work.
How do you think the widespread adoption of IoT devices will change our daily lives? What does Panasonic's vision of a hyper-connected world look like?
We see the “things” that make up the “Internet of Things” bringing us unparalleled data, information and convenience to change our in-home and working experiences. Voice technology will enable each interaction to be more personalized and seamless. We believe that voice is the technology that moves all other technologies forward. Why? Voice takes away the learning curve and gives both businesses and consumers more control over the way they use and interact with technology. Using our voices frees up our hands and our brains. When we pry our eyes away from screens and stop tapping on keypads, we can focus on what we’re doing right now. The factory worker is less likely to make errors …the car driver is less distracted…the ER nurse can focus more completely on his patients. Voice is already an auto sector mainstay. We’ve developed cutting-edge, voice-activated Infotainment systems for many of the world’s top automakers, like our new ELS system for Acura. We’re working with Amazon to help us take voice integration beyond just information and move toward fully-realized contextual understanding. These capabilities are giving auto drivers and passengers control over critical features such as heating and ventilation systems and audio and navigation functions. We’re also giving passengers the benefit of connecting to other smart devices to allow them to fully control their experience both in and out of the car. We’re also working with Google on similar projects in the voice space, to provide integration and information throughout their technology solutions.
Talk about driverless cars for a minute.  When do you think we might see the end of the human driver? What is Panasonic’s role in building that world?
We’ve estimated by 2030, 15% of new cars sold could be fully autonomous. We work with almost all the major automakers around the world, have for almost 60 years, and are doubling down on our ADAS and automation technology investments with partners. Autonomous Vehicles are going to have a huge impact on our society. Vehicle Electrification is going to have a similar impact on our planet…The combination of the two technologies will create a multiplier effect that will remake transportation. This will happen in stages. Stage one is the emergence of the connected vehicle, which lays the foundation. With EVs, we’re still at a price premium to internal combustion. By around 2022, we’ll be at parity. During this time, we’ll see elements of autonomous driving, such as autonomous braking, and EV autonomous vehicles for commercial and fleet start to go mainstream. Next, we see trucking fleets start to make the transition.  Then commercial ridesharing fleets come on-line, giving consumers the benefit of autonomous electric vehicle transportation. In the last stage, we’ll see the personal ownership market catch up with commercial.
Tell us about what’s going on along Interstate 70 in Colorado.
As cars become more computer than machine, they are capable of communicating with one another in real time, saving time and lives. Panasonic has partnered with the Colorado Department of Transportation (CDOT) to create a connected vehicle ecosystem that promises to drive a revolution in roadway safety and efficiency. On a 90-mile commuter stretch of Interstate 70 into Denver, this technology has been designed and will be deployed later this year to let CDOT share information on highway conditions, traffic alerts and other driving hazards. It’s the first production-grade connected vehicle system in the U.S. in which real-time data is shared across vehicles, infrastructure and people, with the goals of improving safety, lowering fuel consumption and reducing congestion. Estimates are that such a solution could reduce non-impaired traffic crashes by 80 percent and save drivers hours stuck in traffic each year.
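As an illustration of the kind of message flow such a vehicle-to-infrastructure system implies, here is a minimal sketch in Python. The `HazardAlert` fields and the lookahead filter are assumptions made for illustration, not CDOT’s or Panasonic’s actual data schema.

```python
# Hypothetical sketch of vehicle-to-infrastructure (V2I) hazard alerts.
# Field names and filtering logic are illustrative, not the real CDOT schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardAlert:
    road: str           # e.g. "I-70"
    mile_marker: float  # location of the hazard
    hazard: str         # "ice", "crash", "congestion", ...
    timestamp: float    # epoch seconds when the alert was issued

def should_warn(vehicle_mile: float, heading_up: bool, alert: HazardAlert,
                lookahead_miles: float = 5.0) -> bool:
    """Warn only drivers approaching the hazard within the lookahead window."""
    delta = alert.mile_marker - vehicle_mile
    if heading_up:                      # driving toward higher mile markers
        return 0 < delta <= lookahead_miles
    return 0 < -delta <= lookahead_miles

# Roadside infrastructure broadcasts the alert; each vehicle filters locally.
alert = HazardAlert(road="I-70", mile_marker=213.0, hazard="ice",
                    timestamp=1700000000.0)
payload = json.dumps(asdict(alert))             # what would go over the air
received = HazardAlert(**json.loads(payload))   # what a car's unit decodes
print(should_warn(vehicle_mile=210.5, heading_up=True, alert=received))  # True
print(should_warn(vehicle_mile=220.0, heading_up=True, alert=received))  # False
```

The design choice worth noting: the infrastructure broadcasts one alert, and each vehicle decides locally whether it is relevant, which keeps the system scalable as the number of cars grows.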
What is Panasonic doing in the world of immersive entertainment?
At iconic stadiums, beloved theme parks and worldwide special events like the Olympic Games, Panasonic technologies immerse fans in the action and create storytelling experiences that inspire and amaze: the world’s largest video displays, mesmerizingly sharp content, sophisticated projection mapping, seamless mobile integration, and innovations like an augmented reality skybox that gives fans control of stats and replays, projecting them right onto the glass inside stadium suites, all without obstructing their view of the field.

From racing through Radiator Springs at Disney California Adventure theme park to embarking on a frozen voyage through Arendelle in the Frozen Ever After attraction at Orlando’s Epcot, Panasonic’s technology has enhanced the experience for millions. Recently, Panasonic collaborated with Disney creative teams on an amazing experience inside Pandora – The World of Avatar at Disney’s Animal Kingdom. Our projection technology helped Disney bring the Na’vi River Journey attraction to life. Guests take a boating expedition down a mysterious river hidden within a bioluminescent rainforest, through a darkened network of caves illuminated by exotic glowing plants and the amazing creatures that call Pandora home. The journey culminates in an encounter with a Na’vi Shaman of Songs, who has a deep connection to Pandora’s life force and sends positive energy out into the forest through her music. Disney wanted the two worlds to work seamlessly with one another, and Panasonic’s projection system achieved that seamless connection through projection imaging with perfect color rendition, precise levels of brightness and robust systems.

Today’s fans, who use Instagram and rideshare as verbs, expect the same mobile connectivity and convenience from their ballpark as they do from their Lyft. The Atlanta Braves franchise understands this well, and with help from Panasonic technology it welcomes fans well before the opening pitch. Panasonic technologies at SunTrust Park and its adjacent mixed-use site, the Atlanta Battery, are all digitally connected, with more than 18 LED displays, monitors, projectors, digital signage kiosks and video security systems, all regulated from one central control room.

We also just conducted a study of CTOs and senior tech decision makers on how companies are using, or want to use, disruptive technologies in areas such as retail, sports, media and entertainment. The study reveals that four technologies sit at the top of their innovation agendas: artificial intelligence, robotics, 3D printing and energy storage. Four out of five respondents are poised to adopt AI to gain customer insights and predict behavior.
And talk a bit about your solar initiatives.
Panasonic has been a leader in the solar energy space for over 40 years. From electric vehicles to solar grids, Panasonic’s solutions are helping forward-thinking businesses and governments pursue a brighter, more eco-responsible future. To help solve the world’s growing energy needs, Panasonic is developing high-efficiency solar panels that make eco more economical, planning entire eco-sustainable communities, using sensor technology to regulate energy usage in offices, and building energy storage systems that allow for more efficient energy consumption. When it comes to solar panel technology, revolutionary materials and system design have led Panasonic to record-setting efficiencies. Panasonic’s heterojunction (HIT®) technology is designed with ultra-thin silicon layers that absorb and retain more sunlight, coupled with an ingenious bifacial cell design that captures light from both sides of the panel. By continuously innovating, we’re helping each generation of solar panels make better use of renewable resources and offering the industry greater cost savings.
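To illustrate why capturing light on both faces matters, here is a back-of-envelope estimate of the extra output a bifacial panel’s rear face can contribute. The numbers (front-side wattage, bifaciality factor, rear irradiance fraction) are hypothetical placeholders, not Panasonic’s published specifications.

```python
# Back-of-envelope bifacial gain estimate. All numbers are hypothetical
# placeholders, not Panasonic's published panel specifications.
front_power_w = 330.0        # nameplate front-side output under full sun
bifaciality = 0.90           # rear-side efficiency relative to the front
rear_irradiance_frac = 0.10  # fraction of full sun reflected onto the rear

rear_power_w = front_power_w * bifaciality * rear_irradiance_frac
total_w = front_power_w + rear_power_w
gain_pct = 100.0 * rear_power_w / front_power_w

print(f"Rear contribution: {rear_power_w:.1f} W")   # 29.7 W
print(f"Total output:      {total_w:.1f} W")        # 359.7 W
print(f"Bifacial gain:     {gain_pct:.1f} %")       # 9.0 %
```

Even a modest fraction of reflected light on the rear face translates into a meaningful single-digit percentage gain in output, which compounds across an entire array.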
How do we make sure that the benefits of all these technologies extend to everyone on the planet?
Over the last 100 years, Panasonic has taken pride in creating new and exciting solutions in many different realms.  By having expertise in so many strong areas, especially those identified as disruptive technologies, we hope to enhance the lives of as many people as possible.

About Lauren Sallata
Lauren Sallata is Chief Marketing Officer at Panasonic Corporation of North America, the principal North American subsidiary of Panasonic Corporation and the hub of Panasonic’s U.S. branding, marketing, sales, service and R&D operations. She leads the corporation’s digital, brand, content and advertising efforts, as well as Corporate Communications.