Voices in AI – Episode 72: A Conversation with Irving Wladawsky-Berger

[voices_in_ai_byline]

About this Episode

Episode 72 of Voices in AI features host Byron Reese and Irving Wladawsky-Berger discussing the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where else AI has taken us and could take us. Irving has a PhD in Physics from the University of Chicago, is a research affiliate with the MIT Sloan School of Management, is a guest columnist for the Wall Street Journal and CIO Journal, is an adjunct professor at Imperial College London, and is a fellow of the Center for Global Enterprise.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor of the Imperial College of London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.
Irving Wladawsky-Berger: Byron it’s a pleasure to be here with you.
So, that’s a lot of things you do. What do you spend most of your time doing?
Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.
So, you have an M.S. and a Ph.D. in Physics from the University of Chicago. Tell me… how does artificial intelligence play into the stuff you do on a regular basis?
Well, first of all, I got my Ph.D. in Physics in Chicago in 1970. I then joined IBM research in Computer Science. I switched fields from Physics to Computer Science because as I was getting my degree in the ‘60s, I spent most of my time computing.
And then you spent 37 years at IBM, right?
Yeah, then I spent 37 years at IBM working full time, and another three and a half years as a consultant. So, I joined IBM research in 1970, and then about four years later my first management job was to organize an AI group. Now, Byron, AI in 1974 was very very very different from AI in 2018. I’m sure you’re familiar with the whole history of AI. If not, I can just briefly tell you about the evolution. I’ve seen it, having been involved with it in one way or another for all these years.
So, back then did you ever have occasion to meet [John] McCarthy or any of the people at the Dartmouth [Summer Research Project]?
Yeah, yeah.
So, tell me about that. Tell me about the early early days in AI, before we jump into today.
I knew people at the MIT AI lab… Marvin Minsky, McCarthy, and there were a number of other people. You know, what’s interesting is at the time the approach to AI was to try to program intelligence, writing it in Lisp, which John McCarthy invented as a special programming language; writing in rules-based languages; writing in Prolog. At the time – remember this was years ago – they all thought that you could get AI done that way and it was just a matter of time before computers got fast enough for this to work. Clearly that approach toward artificial intelligence didn’t work at all. You couldn’t program something like intelligence when we didn’t understand at all how it worked…
Well, to pause right there for just a second… The reason they believed that – and it was a reasonable assumption – the reason they believed it is because they looked at things like Isaac Newton coming up with three laws that covered planetary motion, and Maxwell and different physical systems that were governed by only two or three simple laws, and they hoped intelligence was too. Do you think there’s any aspect of intelligence that’s really simple and we just haven’t stumbled across it, that you just iterate something over and over again? Any aspect of intelligence that’s like that?
I don’t think so, and in fact my analogy… and I’m glad you brought up Isaac Newton. This goes back to physics, which is what I got my degrees in. This is like comparing classical mechanics, which is deterministic. You know, you can tell precisely, based on classical mechanics, the motion of planets. If you throw a baseball, where is it going to go, etc. And as we know, classical mechanics does not work at the atomic and subatomic level.
We have something called quantum mechanics, and in quantum mechanics, nothing is deterministic. You can only tell what things are going to do based on something called a wave function, which gives you probability. I really believe that AI is like that, that it is so complicated, so emergent, so chaotic; etc., that the way to deal with AI is in a more probabilistic way. That has worked extremely well, and the previous approach where we try to write things down in a sort of deterministic way like classical mechanics, that just didn’t work.
Byron, imagine if I asked you to write down specifically how you learned to ride a bicycle. I bet you won’t be able to do it. I mean, you can write a poem about it. But if I say, “No, no, I want a computer program that tells me precisely…” If I say, “Byron I know you know how to recognize a cat. Tell me how you do it.” I don’t think you’ll be able to tell me, and that’s why that approach didn’t work.
And then, lo and behold, in the ‘90s we discovered that there was a whole different approach to AI based on getting lots and lots of data in very fast computers, analyzing the data, and then something like intelligence starts coming out of all that. I don’t know if it’s intelligence, but it doesn’t matter.
I really think that for a lot of people the real point where that hit home is when, in the late ‘90s, IBM’s Deep Blue supercomputer beat Garry Kasparov in a very famous [chess] match. I don’t know, Byron, if you remember that.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 71: A Conversation with Paul Daugherty

[voices_in_ai_byline]

About this Episode

Episode 71 of Voices in AI features host Byron Reese and Paul Daugherty discussing transfer learning, consciousness and Paul’s book “Human + Machine: Reimagining Work in the Age of AI.” Paul Daugherty holds a degree in computer engineering from the University of Michigan and is currently the Chief Technology and Innovation Officer at Accenture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show Paul.

Paul Daugherty: It’s great to be here, Byron.

Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college and that was a quarter of a century or more ago. Having seen the company grow… What has that journey been like?

Thanks for dating me. Yeah, it’s actually been 32 years, so I guess I’m going on a third of a century. I joined Accenture back in 1986, and the company’s evolved in many ways since then. It’s been an amazing journey because the world has changed so much since then, and a lot of what’s fueled the change in the world around us has been what’s happened with technology. I think [in] 1986 the PC was brand new, and we went from that to networking and client server and the Internet, cloud computing, mobility, the Internet of Things, artificial intelligence and the things we’re working on today. So it’s been a really amazing journey fueled by the way the world’s changed, enabled by all this amazing technology.

So let’s talk about that, specifically artificial intelligence. I always like to get our bearings by asking you to define either artificial intelligence or if you’re really feeling bold, define intelligence.

I’ll start with artificial intelligence, which we define as technology that can sense, think, act and learn; that is the way we describe it. And [it’s] systems that can then do that, so sense: like vision in a self-driving car; think: making decisions on what the car does next; act: in terms of actually steering the car; and then learn: continuously improving behavior. So that’s the working definition that we use for artificial intelligence, and I describe it more simply to people sometimes as fundamentally technology that has more human-like capability to approximate the things that we’re used to assuming and thinking that only humans can do: speech, vision, predictive capability and some things like that.
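To make that sense–think–act–learn framing concrete, here is a minimal, purely illustrative loop in Python. It is a hypothetical sketch, not Accenture’s framework; every function name and value in it is a placeholder.

```python
# A minimal, hypothetical sense-think-act-learn loop; all names are placeholders.

def sense(environment):
    """Read a raw observation from the environment (e.g., a camera reading)."""
    return environment["observation"]

def think(observation, model):
    """Use the current model to decide what to do next."""
    return "brake" if model(observation) > 0.5 else "accelerate"

def act(action, environment):
    """Apply the action and receive feedback (a reward signal) from the environment."""
    environment["last_action"] = action
    return environment["reward"]

def learn(model, reward, lr=0.01):
    """Nudge the model so that future decisions improve with experience."""
    return lambda obs: model(obs) + lr * reward

# One pass through the loop with a trivial model and a fake environment.
model = lambda obs: 0.4
env = {"observation": 0.9, "reward": -1.0, "last_action": None}

observation = sense(env)
action = think(observation, model)
reward = act(action, env)
model = learn(model, reward)
print(action, reward, model(observation))
```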

So that’s the way I define artificial intelligence. Intelligence I would define differently. Intelligence I would just define more broadly. I’m not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to both reason and comprehend and then extrapolate and generalize across many different domains of knowledge. And that’s what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into. Because I think the fact that we call this body of work that we’re doing artificial intelligence, both the word artificial and the word intelligence I think lead to misleading perceptions on what we’re really doing.

So, expand that a little bit. You said that’s the way you think human intelligence is different than artificial; put a little flesh on those bones. In exactly what way do you think it is different?

Well, you know the techniques we’re really using today for artificial intelligence, they’re generally from the branch of AI around machine learning, so machine learning, deep learning, neural nets etc. And it’s a technology that’s very good at using patterns and recognizing patterns in data to learn from observed behavior, so to speak. Not necessarily intelligence in a broad sense, it’s the ability to learn from specific inputs. And you can think about that almost as idiot savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world’s Go master, but then that same program wouldn’t know how to generalize and play me in tic-tac-toe. And that ability, the intelligence ability to generalize, to extrapolate rather than interpolate, is what human intelligence is differentiated by, and the thing that would bridge that would be artificial general intelligence, which we can get into a little bit. But we’re not at that point of having artificial general intelligence; we’re at a point of artificial intelligence, where it can mimic very specific, very specialized, very narrow human capabilities, but it’s not yet anywhere close to human-level intelligence.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 70: A Conversation with Jakob Uszkoreit

[voices_in_ai_byline]

About this Episode

Episode 70 of Voices in AI features host Byron Reese and Jakob Uszkoreit discussing machine learning, deep learning, AGI, and what this could mean for the future of humanity. Jakob has a master’s degree in Computer Science and Mathematics from Technische Universität Berlin and has worked at Google for the past 10 years, currently doing deep learning research with Google Brain.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today our guest is Jakob Uszkoreit, he is a researcher at Google Brain, and that’s kind of all you have to say at this point. Welcome to the show, Jakob.
Let’s start with my standard question which is: What is artificial intelligence, and what is intelligence, if you want to start there, and why is it artificial?
Jakob Uszkoreit: Hi, thanks for having me. Let’s start with artificial intelligence specifically. I don’t think I’m necessarily the best person to answer the question of what intelligence is in general, but I think for artificial intelligence, there are possibly two different kinds of ideas that we might be referring to with that phrase.
One is kind of the scientific or the group of directions of scientific research, including things like machine learning, but also other related disciplines that people commonly refer to with the term ‘artificial intelligence.’ But I think there’s this other maybe more important use of the phrase that has become much more common in this age of the rise of AI if you want to call it that, and that is what society interprets that term to mean. I think largely what society might think when they hear the term artificial intelligence, is actually automation, in a very general way, and maybe more specifically, automation where the process of automating [something] requires the machine or the machines doing so to make decisions that are highly dynamic in response to their environment and in our ideas or in our conceptualization of those processes, require something like human intelligence.
So, I really think it’s actually something that doesn’t necessarily, in the eyes of the public, have that much to do with intelligence, per se. It’s more the idea of automating things that at least so far, only humans could do, and the hypothesized reason for that is that only humans possess this ephemeral thing of intelligence.
Do you think it’s a problem that a cat food dish that refills itself when it’s empty, you could say has a rudimentary AI, and you can say Westworld is populated with AIs, and those things are so vastly different, and they’re not even really on a continuum, are they? A general intelligence isn’t just a better narrow intelligence, or is it?
So I think that’s a very interesting question: whether basically improving and slowly generalizing or expanding the capabilities of narrow intelligences will eventually get us there, and if I had to venture a guess, I would say that’s quite likely actually. That said, I’m definitely not the right person to answer that. I do think that such guesses, and those aspects of things, are today still in the realm of philosophy and extremely hypothetical.
But the one trick that we have gotten good at recently that’s given us things like AlphaZero, is machine learning, right? And it is itself a very narrow thing. It basically has one core assumption, which is the future is like the past. And for many things it is: what a dog looks like in the future, is what a dog looked like yesterday. But, one has to ask the question, “How much of life is actually like that?” Do you have an opinion on that?
Yeah, so I think that machine learning is actually evolving rapidly from the initial classic idea of basically trying to predict the future just from the past, and not even the past itself but a kind of encapsulated version of the past: basically a snapshot captured in a fixed, static data set. You expose machines to that, you allow them to learn from that, train on that, whatever you want to call it, and then you evaluate how the resulting model or machine or network does in the wild, or on some evaluation tasks and tests that you’ve prepared for it.
It’s evolving from that classic definition towards something that is quite a bit more dynamic, that is starting to incorporate learning in situ, learning kind of “on the job,” learning from very different kinds of supervision, where some of it might be encapsulated by data sets, but some might be given to the machine through somewhat more high-level interactions, maybe even through language. There are at least a bunch of lines of research attempting that. Also quite importantly, we’re starting slowly but surely to employ machine learning in ways where the machine’s actions actually have an impact on the world, from which the machine then keeps learning. I think that all of these parts are necessary ingredients if we ever want to have narrow intelligences that maybe have a chance of getting more general. Maybe then, in the more distant future, they might even be bolted together into somewhat more general artificial intelligence.
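To illustrate the contrast Jakob draws between learning from a static snapshot and learning “on the job,” here is a minimal Python sketch of a toy linear classifier trained both ways. It is an illustrative toy under those assumptions, not a description of Google Brain’s systems.

```python
import random

def train_offline(snapshot, epochs=50, lr=0.1):
    """Classic setting: learn once from a fixed, static snapshot of the past."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in snapshot:                    # y is 0 or 1
            pred = 1.0 if w * x + b > 0 else 0.0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

def update_online(w, b, x, y, lr=0.1):
    """Learning 'on the job': fold in each new observation as it arrives."""
    pred = 1.0 if w * x + b > 0 else 0.0
    w += lr * (y - pred) * x
    b += lr * (y - pred)
    return w, b

# Train on yesterday's snapshot, then keep adapting as new data streams in.
snapshot = [(random.uniform(1, 3), 1) for _ in range(50)] + \
           [(random.uniform(-3, -1), 0) for _ in range(50)]
w, b = train_offline(snapshot)

for x, y in [(0.5, 1), (-2.0, 0), (1.5, 1)]:     # observations arriving in the wild
    w, b = update_online(w, b, x, y)
print(w, b)
```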
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 69: A Conversation with Raj Minhas

[voices_in_ai_byline]

About this Episode

Episode 69 of Voices in AI features host Byron Reese and Dr. Raj Minhas talking about AI, AGI, and machine learning. They also delve into explainability and other quandaries AI is presenting. Raj Minhas has a PhD and MS in Electrical and Computer Engineering from the University of Toronto and a BE from Delhi University. Raj is also the Vice President and Director of the Interactive and Analytics Laboratory at PARC.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m excited that our guest is Raj Minhas, who is Vice President and the Director of Interactive and Analytics Laboratory at PARC, which we used to call Xerox PARC. Raj earned his PhD and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. He has eight patents and six patent-pending applications. Welcome to the show, Raj!
Raj Minhas: Thank you for having me.
I like to start off, just asking a really simple question, or what seems like a very simple question: what is artificial intelligence?
Okay, I’ll try to give you two answers. One is a flip response, which is if you tell me what is intelligence, I’ll tell you what is artificial intelligence, but that’s not very useful, so I’ll try to give you my functional definition. I think of artificial intelligence as the ability to automate cognitive tasks that we humans do, so that includes the ability to process information, make decisions based on that, learn from that information, at a high level. That functional definition is useful enough for me.
Well I’ll engage on each of those, if you’ll just permit me. I think even given a definition of intelligence which everyone agreed on, which doesn’t exist, artificial is still ambiguous. Do you think of it as artificial in the sense that artificial turf really isn’t grass, so it’s not really intelligence, it just looks like intelligence? Or, is it simply artificial because we made it, but it really is intelligent?
It’s the latter. So if we can agree on what intelligence is, then artificial intelligence to me would be the classical definition of artificial intelligence, which is re-creating that outside the human body. So re-creating that by ourselves: it may not be re-created in the way it is created in our minds, in the way humans or other animals do it, but it’s re-created in that it achieves the same purpose. It’s able to reason in the same way, it’s able to perceive the world, it’s able to do problem solving in that way. So without getting necessarily bogged down by what is the mechanism by which we have intelligence, and whether that mechanism needs to be the same, artificial intelligence to me would be re-creating that – the ability of that.
Fair enough, so I’ll just ask you one more question along these lines. So, using your ability to automate cognitive tasks, let me give you four or five things, and you tell me if they’re AI. AlphaGo?
Yes.
And then a step down from that, a calculator?
Sure, a primitive form of AI.
A step down from that: an abacus?
Abacus, sure, but it involves humans in the operation of it, but maybe it’s on that boundary where it’s partially automated, but yes.
What about an assembly line?
Sure, so I think…
And then I would say my last one which is a cat food dish that refills itself when it’s empty? And if you say yes to that…
All of those things to me are intelligent, but some of those are very rudimentary and some are not. So, for example, look at animals. On one end of the scale are humans; they can do a variety of tasks that other animals cannot. On the other end of the spectrum, you may have very simple organisms, single-celled ones or small mammals; they may do things that I would find intelligent, or they may be simply responding to stimuli, and that intelligence may be very much encoded. They may not have the ability to learn, so they may not have all aspects of intelligence, but I think this is where it gets really hard to say what is intelligence. Which is my flip response.
If you say: what is intelligence? I can say I’m trying to automate that with artificial intelligence. So, if you were to include in your definition of intelligence, which I do, that the ability to do math implies intelligence, then automating that with an abacus is a way of doing it artificially, right? You had been doing it in your head using whatever mechanism is in there; now you’re trying to do that artificially. So it is a very hard question that seems so simple, but at some point, in order to be logically consistent, you have to say yes, if that’s what I mean, that’s what I mean, even though the examples can get very trivial.
Well I guess then, and this really is the last question along those lines: if everything falls under your definition, then what’s different now? What’s changed? I mean a word that means everything means nothing, right?
That is part of the problem, but I think what is becoming more and more different is the kinds of things you’re able to do, right? So we are able to reason now artificially in ways that we were not able to before. Even if you take the narrower definition that people tend to use, which is around machine learning, we’re able to use that to perceive the world in ways in which we were not able to before, and so what is changing is that ability to do more and more of those things without relying on a person necessarily at the point of doing them. We still rely on people to build those systems and teach them how to do those things, but we are able to automate a lot of that.
Obviously artificial intelligence to me is more than machine learning, where you show something a lot of data and it learns just a function, because it includes the ability to reason about things, to be able to say, “I want to create a system that does X, and how do I do it?” So can you reason about models, and come to some way of putting them together and composing them to achieve that task?
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 66: A Conversation with Steve Ritter

[voices_in_ai_byline]

About this Episode

Episode 66 of Voices in AI features host Byron Reese and Steve Ritter talking about the future of AGI and how AI will affect jobs, security, warfare, and privacy. Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show Steve.
Steve Ritter: Thanks a lot Byron, thanks for having me.
So tell me, what were you thinking way back in the ’80s when you said, “I’m going to study computers and brains”? What was going on in your teenage brain?
That’s a great question. So first off I started off with a Computer Science degree and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP etc., and at the same time the University of California, San Diego was opening up their very first department dedicated to cognitive science. So I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science into it as well, simply because I was just really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area that were applying it in many different ways, and I just found it fascinating, so I had to do it.
So, there’s human intelligence, or organic intelligence, or whatever you want to call it, there’s what we have, and then there’s artificial intelligence. In what ways are those things alike and in what ways are they not?
That’s a great question. I think it’s actually something that trips a lot of people up today when they hear about AI, and we might use the term, artificial basic intelligence, or general intelligence, as opposed to artificial intelligence. So a big difference is, on one hand we’re studying the brain and we’re trying to understand how the brain is organized to solve problems and from that derive architectures that we might use to solve other problems. It’s not necessarily the case that we’re trying to create a general intelligence or a consciousness, but we’re just trying to learn new ways to solve problems. So I really like the concept of neural inspired architectures, and that sort of thing. And that’s really the area that I’ve been focused on over the past 25 years, is really how can we apply these learning architectures to solve important business problems.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

5 Common Misconceptions about AI

In recent years I have run into a number of misconceptions regarding AI, and sometimes when discussing AI with people from outside the field, I feel like we are talking about two different topics. This article is an attempt at clarifying what AI practitioners mean by AI, and where it is in its current state.
The first misconception has to do with Artificial General Intelligence, or AGI:

  1. Applied AI systems are just limited versions of AGI

Despite what many think, the state of the art in AI is still far behind human intelligence. Artificial General Intelligence, i.e. AGI, has been the motivating fuel for all AI scientists from Turing to today. Somewhat analogous to alchemy, the eternal quest for AGI that replicates and exceeds human intelligence has resulted in the creation of many techniques and scientific breakthroughs. AGI has helped us understand facets of human and natural intelligence, and as a result, we’ve built effective algorithms inspired by our understanding and models of them.
However, when it comes to practical applications of AI, AI practitioners do not necessarily restrict themselves to pure models of human decision making, learning, and problem solving. Rather, in the interest of solving the problem and achieving acceptable performance, AI practitioners often do what it takes to build practical systems. At the heart of the algorithmic breakthroughs that resulted in Deep Learning systems, for instance, is a technique called back-propagation. This technique, however, is not how the brain builds models of the world. This brings us to the next misconception:

  2. There is a one-size-fits-all AI solution.

A common misconception is that AI can be used to solve every problem out there–i.e. that the state of the art AI has reached a level such that minor configurations of ‘the AI’ allow us to tackle different problems. I’ve even heard people assume that moving from one problem to the next makes the AI system smarter, as if the same AI system is now solving both problems at the same time. The reality is much different: AI systems need to be engineered, sometimes heavily, and require specifically trained models in order to be applied to a problem. And while similar tasks, especially those involving sensing the world (e.g., speech recognition, image or video processing), now have a library of available reference models, these models need to be specifically engineered to meet deployment requirements and may not be useful out of the box. Furthermore, AI systems are seldom the only component of AI-based solutions. It often takes many tailor-made, classically programmed components to come together to augment one or more AI techniques used within a system. And yes, there are a multitude of different AI techniques out there, used alone or in hybrid solutions in conjunction with others; therefore it is incorrect to say:

  3. AI is the same as Deep Learning

Back in the day, we thought the term artificial neural networks (ANNs) was really cool. Until, that is, the initial euphoria around its potential backfired due to its failure to scale and its propensity for over-fitting. Now that those problems have, for the most part, been resolved, we’ve avoided the stigma of the old name by “rebranding” artificial neural networks as “Deep Learning”. Deep Learning or Deep Networks are ANNs at scale, and the ‘deep’ refers not to deep thinking, but to the number of hidden layers we can now afford within our ANNs (previously it was a handful at most, and now they can be in the hundreds). Deep Learning is used to generate models off of labeled data sets. The ‘learning’ in Deep Learning methods refers to the generation of the models, not to the models being able to learn in real time as new data becomes available. The ‘learning’ phase of Deep Learning models actually happens offline, needs many iterations, is time and process intensive, and is difficult to parallelize.
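As a rough illustration of those two points (‘deep’ meaning many hidden layers, and ‘learning’ meaning an offline fitting phase), here is a minimal sketch assuming scikit-learn is available; the data and layer sizes are made up.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# A labeled training snapshot (made-up data for illustration).
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

# 'Deep' = many hidden layers: here, eight hidden layers of 64 units each.
model = MLPClassifier(hidden_layer_sizes=(64,) * 8, max_iter=300)

# The 'learning' is this offline, iterative fitting phase.
model.fit(X, y)

# Afterwards the model is static: predictions just apply the frozen weights.
print(model.predict(X[:5]))
```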
Recently, Deep Learning models have been used in online learning applications. The online learning in such systems is achieved using different AI techniques such as Reinforcement Learning or online neuro-evolution. A limitation of such systems is the fact that the contribution from the Deep Learning model can only be achieved if the domain of use can be mostly experienced during the offline learning period. Once the model is generated, it remains static and not entirely robust to changes in the application domain. A good example of this is in ecommerce applications–seasonal changes or short sales periods on ecommerce websites would require a deep learning model to be taken offline and retrained on sale items or new stock. However, now with platforms like Sentient Ascend that use evolutionary algorithms to power website optimization, large amounts of historical data are no longer needed to be effective; rather, the platform uses neuro-evolution to shift and adjust the website in real time based on the site’s current environment.
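For intuition only, here is a toy (1+1) evolutionary loop in Python that keeps whichever variant of a design performs better on live feedback. It is a generic sketch of the evolutionary idea, not Sentient Ascend’s implementation, and measure_conversion_rate is a hypothetical stand-in for real user metrics.

```python
import random

def measure_conversion_rate(layout):
    """Hypothetical stand-in for live user feedback on a candidate page design."""
    ideal = [0.3, 0.7, 0.5]                       # unknown 'best' design, for this toy only
    return -sum((a - b) ** 2 for a, b in zip(layout, ideal))

def mutate(layout, scale=0.05):
    """Produce a slightly perturbed variant of the current design."""
    return [min(1.0, max(0.0, v + random.gauss(0, scale))) for v in layout]

# (1+1) evolution: keep whichever variant performs better on current traffic.
# No large historical dataset is required; feedback arrives online.
layout = [0.5, 0.5, 0.5]
best = measure_conversion_rate(layout)
for _ in range(200):
    candidate = mutate(layout)
    score = measure_conversion_rate(candidate)
    if score > best:
        layout, best = candidate, score
print(layout, best)
```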
For the most part, though, Deep Learning systems are fueled by large data sets, and so the prospect of new and useful models being generated from large and unique datasets has fueled the misconception that…

  4. It’s all about BIG data

It’s not. It’s actually about good data. Large, imbalanced datasets can be deceptive, especially if they only partially capture the data most relevant to the domain. Furthermore, in many domains, historical data can become irrelevant quickly. In high-frequency trading on the New York Stock Exchange, for instance, recent data is of much more relevance and value than, for example, data from before 2001, when the exchange had not yet adopted decimalization.
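A quick worked example of why a large but imbalanced dataset can be deceptive: with made-up numbers, a model that always predicts the majority class looks highly accurate while learning nothing about the cases that matter.

```python
# 1,000,000 records sounds 'big', but only 1% belong to the class we care about.
n_total, n_positive = 1_000_000, 10_000

# A useless model that always predicts the majority (negative) class
# still scores 99% accuracy on this dataset...
accuracy = (n_total - n_positive) / n_total
print(f"accuracy of the do-nothing model: {accuracy:.2%}")   # 99.00%

# ...while its recall on the rare class, the part that actually matters, is zero.
recall = 0 / n_positive
print(f"recall on the rare class: {recall:.0%}")             # 0%
```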
Finally, a general misconception I run into quite often:

  5. If a system solves a problem that we think requires intelligence, that means it is using AI

This one is a bit philosophical in nature, and it does depend on your definition of intelligence. Indeed, Turing’s definition would not refute this. However, as far as mainstream AI is concerned, a fully engineered system, say to enable self-driving cars, which does not use any AI techniques, is not considered an AI system. If the behavior of the system is not the result of the emergent behavior of AI techniques used under the hood, if programmers write the code from start to finish, in a deterministic and engineered fashion, then the system is not considered an AI-based system, even if it seems so.
AI paves the way for a better future
Despite the common misconceptions around AI, the one correct assumption is that AI is here to stay and is indeed the window to the future. AI still has a long way to go before it can be used to solve every problem out there and be industrialized for wide-scale use. Deep Learning models, for instance, take many expert PhD-hours to design effectively, often requiring elaborately engineered parameter settings and architectural choices depending on the use case. Currently, AI scientists are hard at work on simplifying this task and are even using other AI techniques such as reinforcement learning and population-based or evolutionary architecture search to reduce this effort. The next big step for AI is to make it creative and adaptive, while at the same time powerful enough to exceed human capacity to build models.
by Babak Hodjat, co-founder & CEO Sentient Technologies

Master Data Management Joins the Machine Learning Party

In a normal master data management (MDM) project, a current state business process flow is built, followed by a future state business process flow that incorporates master data management. The current state is usually ugly as it has been built piecemeal over time and represents something so onerous that the company is finally willing to do something about it and inject master data management into the process. Many obvious improvements to process come out of this exercise and the future state is usually quite streamlined, which is one of the benefits of MDM.
I present today that these future state processes are seldom as optimized as they could be.
Consider the following snippet of a process flow, supposedly part of an optimized future state.

This leaves four people in the process to manually look at the product, do their (unspecified) thing and (hopefully) pass it along, but possibly send it backwards to an upstream participant based on nothing evident in particular.
The challenge for MDM is to optimize the flow. I suggest that many of the “approval jails” in business process workflow are ripe for reengineering. What criteria are used? They are probably based on data that will now be in MDM. If training data for machine learning (ML) is available, not only can we recreate past decisions to automate future decisions, we can also look at the outcomes of those decisions, work out the decisions that should have been made, and actually make them in the process, speeding up the flow and improving the quality by an order of magnitude.
This concept of thinking ahead and automating decisions extends to other kinds of steps in a business flow that involve data entry, including survivorship determination. As with acceptance and rejection, data entry is also highly predictable, whether it is a selection from a drop-down or free-form entry. Again, with training data and backtesting, probable contributions at that step can be manifested and either automatically entered or provided as a default for approval. The latter approach can be used while growing a comfort level.
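As a minimal sketch of that idea, assuming scikit-learn and a small table of historical approve/reject decisions (the features, labels, and threshold below are illustrative only), a model can auto-apply the confident calls and hand the rest to a steward as suggested defaults:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical steward decisions (hypothetical features: match score, number of sources).
X_hist = np.array([[0.10, 3], [0.90, 1], [0.20, 4], [0.85, 2], [0.15, 1], [0.95, 3]])
y_hist = np.array([0, 1, 0, 1, 0, 1])            # 0 = rejected, 1 = approved

model = LogisticRegression().fit(X_hist, y_hist)

def route(record, threshold=0.9):
    """Auto-apply confident decisions; otherwise suggest a default for a steward."""
    p_approve = model.predict_proba([record])[0, 1]
    if p_approve >= threshold:
        return "auto-approve"
    if p_approve <= 1 - threshold:
        return "auto-reject"
    return f"suggest default ({'approve' if p_approve > 0.5 else 'reject'}), send to steward"

print(route([0.88, 2]))
print(route([0.50, 2]))
```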
Manual, human-scale processes are ripe for the picking, and it’s really a dereliction of duty to “do” MDM without significantly streamlining processes, much of which is done by eliminating the manual steps. As data volumes mount, it is often the only way to keep process time from growing. At the least, prioritizing stewardship activities or routing activities to specific stewards based on an ML interpretation of past results (quality, quantity) is required. This approach is essential to having timely, data-infused processes.
As a modular and scalable trusted analytics foundational element, the IBM Unified Governance & Integration platform incorporates advanced machine learning capabilities into MDM processes, simplifying the user experience and adding cognitive capabilities.
Machine learning can also discover master data by looking at actual usage patterns. ML can source, suggest or utilize external data that would aid the goals of business processes. Another important part of MDM is data quality (DQ). ML’s ability to recommend and/or apply DQ to data, in or out of MDM, is coming on strong. Name-identity reconciliation is a specific example, but more generally ML can look downstream of processes to see the chaos created by data lacking full DQ and start applying the rules to the data upstream.
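To make the name-identity reconciliation example concrete, here is a minimal sketch using only Python’s standard-library difflib. Real MDM matching engines use far richer features and learned models; the 0.8 threshold below is arbitrary.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity as a stand-in for a learned matching model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

incoming = "John Smith"
candidates = ["Jon Smith", "Jonathan Smith", "Joan Smyth", "J. Smith Inc."]

for name in candidates:
    score = name_similarity(incoming, name)
    verdict = "likely same identity" if score > 0.8 else "route to steward"
    print(f"{name:>15}  {score:.2f}  {verdict}")
```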
IBM InfoSphere Master Data Management utilizes machine learning to speed the data discovery, mapping, quality and import processes.
In the last post, I postulated that blockchain would impact MDM tremendously. In this post, it’s machine learning affecting MDM. (Don’t get me started on graph technology). Welcome to the new center of the data universe. MDM is about to undergo a revolution. Products will look much different in 5 years. Make sure your vendor is committed to the MDM journey with machine learning.

Voices in AI – Episode 43: A Conversation with Markus Noga

[voices_in_ai_byline]
In this episode, Byron and Markus discuss machine learning and automation.
[podcast_player name=”Episode 43: A Conversation with Markus Noga” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-05-22-(00-58-23)-markus-noga.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices In AI brought to you by GigaOm, I’m Byron Reese. Today, my guest is Markus Noga. He’s the VP of Machine Learning over at SAP. He holds a Ph.D. in computer science from the Karlsruhe Institute of Technology, and prior to that spent seven years over at Booz Allen Hamilton helping businesses adopt technology and transform themselves through IT. Welcome to the show, Markus.
Markus Noga: Thank you Byron and it’s a pleasure to be here today.
Let’s start off with a question I have yet to have two people answer the same way. What is artificial intelligence?
That’s a great one, and it’s sure something that few people can agree on. I think the textbook definition mostly defines it by analogy with human intelligence, and human intelligence is also notoriously tricky and hard to define. I define human intelligence as the ability to deal with the unknown, bring structure to the unstructured, and answer novel questions in a surprisingly resourceful and mindful way. Artificial intelligence in itself is the thing, rather more playfully, that is always three to five years out of reach. We love to focus on what can be done today—what we call machine learning and deep learning—that can deliver tremendous value for businesses and for individuals already today.
But, in what sense is it artificial? Is it artificial in the way artificial turf is: it isn’t really turf, it just looks like it? Or is it just artificial in the sense that we made it? Or put another way, is artificial intelligence actually intelligent? Or does it just behave intelligently?
You’re going very deep here into things like Searle’s Chinese room paradox, about the guy in the room with a handbook of definitions for how to transcribe Chinese symbols to have an intelligent conversation. The question being: who or what is having the intelligent conversation? Is it the book? Certainly not. Is it the guy mindlessly transcribing these symbols? Certainly not. Is it maybe the system of the guy in the room, the book, and the room itself that generates these intelligent-seeming responses? I guess I’m coming down on the output-oriented side here. I try not to think too hard about the inner states or qualia, or the question of whether the neural networks we’re building have a sentient experience or experience these qualia. For me, what counts is whether we can solve real-world problems in a way that’s compatible with intelligence. Its place among intelligent behavior and everything else I would leave to the philosophers, Byron.
We’ll get to that part where we can talk about the effects of automation and what we can expect and all of that. But, don’t you think at some level, understanding that question, doesn’t it to some degree inform you as to what’s possible? What kinds of problems should we point this technology at? Or do you think it’s entirely academic that it has no real-world implications?
I think it’s extremely profound and it could unlock a whole new curve of value creation. It’s also something that, in dealing with real-world problems today, we may not have to answer—and this is maybe also something specific to our approach. You’ve seen all these studies that say that X percent of activities can be automated with today’s machine learning, and Y percent could be automated if there are better natural language speech processing capabilities and so on, and so forth. There’s such tremendous value to be had by going after all these low-hanging fruits and sort of doing applied engineering by bringing ML and deep learning into an application context. Then we can bide our time until there is a full answer to strong AI, and some of the deeper philosophical questions. But what is available now is already delivering tremendous value, and will continue to do so over the next three to five years. That’s my business hat on—what I focus on together with the teams that I’m working with. The other question is one that I find tremendously interesting for my weekend and unique conversations.
Let me ask you a different one. You started off by saying artificial intelligence, and you dealt with that in terms of human intelligence. When you’re thinking of a problem that you’re going to try to use machine intelligence to solve, are you inspired in any way by how the brain works or is that just a completely different way of doing it? Or do we learn how intelligence, with the capital I, works by studying the brain?
I think that’s a multi-level answer, because clearly the architectures that do really well in machine learning today are to a large degree neurally inspired. Think of having multi-layered deep networks—having them with a local connection structure, having them with these things we call convolutions that people use in computer vision so successfully—that closely resembles some of the structures that you see in the visual cortex, with vertical columns for example. There’s a strong argument that both these structures and the self-referential recurrent networks that people use a lot for video processing and text processing these days are very, very deeply neurally inspired. On the other hand, we’re also seeing that a lot of the approaches that make ML very successful today are about as far from neurally-inspired learning as you can get.
Example one: we struggled as a discipline with neurally-inspired transfer functions—that were all nice, and biological, and smooth—and we couldn’t really train deep networks with them because they would saturate. One of the key enablers for modern deep learning was to step away from the biological analogy of smooth signals and go to something like the rectified linear unit, the ReLU function, as an activation, and that has been a key part in being able to train very deep networks. Another example: when a human learns or an animal learns, we don’t tend to give them 15 million cleanly labeled training examples and expect them to go over these training examples 10 times in a row to arrive at something. We’re much closer to one-shot learning, and to being able to recognize the person with a cylinder hat on their head just on the basis of one description or one image that shows us something similar.
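A small numeric illustration of that saturation point (a generic Python sketch, not SAP’s code): the sigmoid’s gradient vanishes for large inputs, while the ReLU’s gradient stays at 1 on the active side, which is part of why very deep networks became trainable.

```python
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)            # approaches 0 for large |x|: the unit saturates

def relu_grad(x):
    return 1.0 if x > 0 else 0.0    # constant gradient on the active side

for x in (0.0, 5.0, 20.0):
    print(f"x={x:5.1f}  sigmoid'={sigmoid_grad(x):.2e}  relu'={relu_grad(x):.1f}")

# At x=20 the sigmoid gradient is ~2e-09, so error signals shrink away as they
# are propagated back through many layers; ReLU keeps them alive.
```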
So clearly, the approaches that are most successful today share some deep neural inspiration as a basis, but also represent a departure into computationally tractable and very, very different kinds of implementations than the networks that we see in our brains. I think that both of these themes are important in advancing the state of the art in ML, and there’s a lot going on. In areas like one-shot learning, for example, right now we’re trying to mimic more of the way the human brain—with an active working memory and these rich associations—is able to process new information, and there’s almost no resemblance to what convolutional networks and recurrent networks do today.
Let’s go with that example. If you take a small statue of a falcon, and you put it in a hundred photos—and sometimes it’s upside down, and sometimes it’s laying on its side, sometimes it’s half in water, sometimes it’s obscured, sometimes it’s in shadows—a person just goes “boom boom boom boom boom” and picks them out, right and left with no effort, you know, one-shot learning. What do you think a human is doing? It is an instance of some kind of transfer learning, but what do you think is really going on in the human brain, and how do you map that to computers? How do you deal with that?
This is an invitation to speculate on the topic of falcons, so let me try. I think that, clearly, our brains have built a representation of the real world around us, because we’re able to create that representation even though the visual and other sensory stimuli that reach us are not in fact as continuous as they seem. Standing in the room here having the conversation with you, my mind creates the illusion of a continuous space around me, but in fact I’m getting distinct feedback from the eyes as they saccade and jump around the room. The illusion of a continuous presence, the continuous sharp resolution of the room, is just that; it’s an illusion, because our mind has built very, very effective mental models of the world around us that condense highly complex information and make it tractable on an abstract level.
Some of the things that are going on in research right now [are] trying to exploit these notions, and trying to use a lot of unsupervised training with some very simple assumptions behind it: basically, the mind doesn’t like to be surprised and would therefore like to predict what’s next, [by] leveraging very, very powerful unsupervised training approaches where you can use any kind of data that’s available and you don’t need to label it to come up with these unsupervised representation learning approaches. They seem to be very successful, and they’re beating a lot of the traditional approaches, because you have access to way larger corpora of unlabeled information, which means you can train better models.
Now, is that a direct analogy to what the human brain does? I don’t know. But certainly it’s an engineering strategy that results in world-leading performance on a number of very popular benchmarks right now, and it is, broadly speaking, neurally inspired. So, I guess bringing together what our brains do and what we can do in engineering is always a dance between the abstract inspiration that we can get from how biology works, and the very hard math and engineering of getting solutions to train on large-scale computers with hundreds of teraflops in compute capacity and large matrix multiplications in the middle. It’s advances on both sides of the house that make ML advance rapidly today.
Then take a similar problem, or tell me if this is a similar problem: when you’re doing voice recognition and there’s somebody outside with a jackhammer, you know, it’s annoying, but a human can separate those two things. They can hear what you’re saying just fine, but for a machine, that’s a really difficult challenge. Now my question to you is, is that the same problem? Is it one trick humans have, like that, that we apply in a number of ways? Or is that a completely different thing that’s going on in that example?
I think it’s similar, and you’re hitting on something, because in the listening example there are some active and some passive components going on. We’re all familiar with the phenomenon of selective hearing when we’re at a dinner party and there are 200 conversations going on in parallel. If we focus our attention on a certain speaker or a certain part of the conversation, we can make them stand out over the din and the noise, because our own minds have some prior assumptions as to what constitutes a conversation, and we can exploit these priors in our minds in order to selectively listen in to parts of the conversation. This has partly a physical characteristic, maybe hearing in stereo. Our ears have certain directional characteristics in the way they pick up certain frequencies, and by turning our head the right way and inclining it the right way we can do a lot already [with] stereo separation, whereas if you have a single microphone—and that’s all the signal you get—all these avenues would be closed to you.
But I think the main story is one about signals superimposed with noise—whether that’s camera distortions, or fog, or poor lighting in the case of the statue that we are trying to recognize, or whether it’s ambient noise or intermittent outages in the audio signal that you’re listening to. The two most popular neurally-inspired architectures on the market right now [are] the convolutional networks, for a lot of things in the image and also natural text space, and the recurrent networks, for a lot of things in the audio and time-series signal space, but also in the text space. Both share the characteristic that they are vastly more resilient to noise than any hard-coded or programmed approach. I guess the underlying problem is one that, five years ago, would have been considered probably unsolvable; whereas today, with these modern techniques, we’re able to train models that can adequately deal with these challenges as long as the information is in the signal.
Well, what do you think is happening when the human hears a conversation at the party, to go with that example, and goes, “Oh, I want to listen to that”? I hear what you say, that there’s one aspect where you make a physical modification to the situation, but what you’ve also done is introduce this idea of consciousness, that a person can selectively change their focus, where it’s like, “Oh, wait a minute.” Is that aspect of what the brain is doing maybe something that’s hard to implement on a machine, or is that not the case at all?
If you take that idea, and I think in the ML research and engineering communities this is currently most popular under the label of attention, or attention-based mechanisms, then certainly this is all over leading approaches right now—whether it’s the computer vision papers from CVPR just last week or the text processing architectures that return state-of-the-art results right now. They all start to include some kind of attention mechanism, allowing you both to weigh outputs by the center of attention and to trace results back to centers of attention, which has two very nice properties. On the one hand, attention mechanisms, nascent as they are today, help improve the accuracy of what models can deliver. On the other hand, the ability to trace the outcome of a machine learning model back to centers and regions of attention in the input can do wonders for the explainability of ML and AI results, which is something that users and customers are increasingly looking for. Don’t just give me any result which is as good as my current process, or hopefully a couple of percentage points better, but also help me build confidence in it by explaining why things are being classified or categorized or translated or extracted the way they are. To gain human trust in an operating system of humans and machines working together, explainability is a big feature.
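For readers who want to see the mechanism, here is a minimal numpy sketch of dot-product attention, showing how the same weights that pool the inputs also double as a trace of which inputs mattered. It is a generic illustration, not the production models discussed here.

```python
import numpy as np

def attention(query, keys, values):
    """Dot-product attention: weigh the values by how well each key matches the query."""
    scores = keys @ query                          # one relevance score per input position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over positions
    return weights @ values, weights               # pooled output + attention trace

keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
values = np.array([[0.2], [0.9], [0.5]])
query = np.array([0.0, 1.0])

output, weights = attention(query, keys, values)
print("output:", output)
print("attention weights (a trace of which inputs mattered):", weights.round(3))
```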
One of the peculiar things to me, with regard to strong AI—general intelligence—is that when you ask folks, “When will we get a general intelligence?” the soonest you ever hear is five years. There are very famous people who believe we’re going to have something very soon. Then at the other extreme you get about 500 years, and that worrying about it is like worrying about overpopulation on Mars. My question to you is, why do you think there’s such a wide range in terms of our idea of when we may make such a breakthrough?
I think it’s because one vexing property of humans and machines is that the things that are easiest for us humans tend to be the things that are hardest for machines, and vice versa. If you look at that today, nobody would dream of having “computer” as a job description. That’s a machine. If you think back 60-70 years, “computer” was the job description of people actually doing manual calculations. “Printer” was a job description, and a lot of other things that we would never dream of doing manually today were being done manually. Think of spreadsheets, potentially the greatest simple invention in computing; think of databases; think of things like the enterprise resource planning systems that SAP does, and business networks connecting them, or any kind of cloud-based solutions—what they deliver is tremendous and it’s very easy for machines to do, but it tends to be the things that are very hard for humans. Now at the same time, things that are very easy for humans to do, like seeing a doggie and shouting “doggie,” or seeing a cat and saying “meow,” are things that toddlers can do, but until very, very recently, the best and most sophisticated algorithms haven’t been able to do that part.
I think part of the excitement around ML and deep learning right now is that a lot of these things have fallen, and we’re seeing superhuman performance on image classification tasks. We’re seeing superhuman performance on things like switchboard voice-to-text transcription tasks, and many other tasks that used to be very easy for humans but impossible for machines are now falling to machines. This is something that generates a lot of excitement right now. I think where we have to be careful is [letting] this guide our expectations on the speed of progress in the following years. Human intuition about what is easy and what is hard is traditionally a very, very poor guide to the ease of implementation with computers and with ML.
Example: my son was asking me yesterday, “Dad, how come the car can know where it is and tell us where to drive?” And I was like, “Son, that’s fairly straightforward. There are all these satellites flying around, and they’re shouting at us, ‘It’s currently 2 o’clock and 30 seconds,’ and we’re just measuring the time between their shouts to figure out where we are today, and then that gives us that position on the planet. It’s not a great invention; it’s the GPS system—it’s mathematically super hard to do for a human with a slide rule; it’s very easy to do for the machine.” And my son said, “Yeah, but that’s not what I wanted to know. How come the machine is talking to us with a human voice? This is what I find amazing, and I would like to understand how that is built.” And I think that our intuition about what’s easy and what’s hard is historically a very poor guide for figuring out what the next step and the future of ML and artificial intelligence look like. This is why you’re getting those very broad bands of predictions.
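As an aside for readers, the “measuring the time between their shouts” idea can be sketched in a few lines of Python. This is a one-dimensional toy with made-up numbers; real GPS solves for a 3-D position plus the receiver’s clock bias, which is why it needs at least four satellites.

```python
C = 299_792_458.0       # speed of light, metres per second
d = 40_000_000.0        # toy distance between two satellites, metres
true_x = 15_000_000.0   # where the receiver actually sits along the line, metres

# Each satellite 'shouts' the time it sent its signal; the receiver notes the delay.
t_arrive_a = true_x / C
t_arrive_b = (d - true_x) / C

# Delay times the speed of light gives the range to each satellite.
r_a = t_arrive_a * C
r_b = t_arrive_b * C

# Intersect the two ranges to recover the position.
x = (r_a**2 - r_b**2 + d**2) / (2 * d)
print(f"recovered receiver position: {x / 1000:,.0f} km")   # 15,000 km
```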
Well, do you think that the difference between the narrow or weak AI we have now and strong AI is evolutionary? Are we on the path [where], when machines get somewhat faster, and we get more data, and we get better algorithms, we’re going to gradually get a general intelligence? Or is a general intelligence something very different, like a whole different problem than the kinds of problems we’re working on today?
That’s a tough one. I think that, taking the brain analogy, we’re today doing the equivalent of very simple sensory circuits, which maybe duplicate the first couple of dozen or maybe a hundred layers of the way the visual cortex works. We’re starting to make progress in some things like one-shot learning; it’s very nascent, early-stage research right now. We’re starting to make much more progress in directions like reinforcement learning, but overall it’s very hard to say which, if any, additional mechanisms are there in the large. If you look at the biological system of the brain, there’s a molecular level that’s interesting. There’s a cellular level that’s interesting. There is a simple interconnection level that’s interesting. There is a micro-interconnection level that’s interesting. I think we’re still far from a complete understanding of how the brain works. I think right now we have tremendous momentum and a very exciting trajectory with what our artificial neural networks can do, at least for the next three to five years. There seems to be pretty much limitless potential to bring them out into real-world businesses, into real-world situations and contexts, and to create amazing new solutions. Do I think that really will deliver strong AI? I don’t know. I’m an agnostic, so I always fall back to the position that I don’t know enough.
Only one more question about strong AI and then let’s talk about the shorter-term future. The question is this: human DNA converted to code is something like 700 MB, give or take. But the amount that’s uniquely human, compared to say a chimp or something like that, is only about a 1% difference—only 7 or 8 or 9 MB of code—and that is what gives us a general intelligence. Does that tell us anything about how to build something that can become generally intelligent? Does that imply to you that general intelligence is actually simple, straightforward? That we can look at nature and say, it’s really a small amount of code, and therefore we really should be looking for simple, elegant solutions to general intelligence? Or do those two things just not map at all?
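As a back-of-the-envelope check of the figures in that question, the sketch below assumes roughly 3 billion base pairs encoded at 2 bits per base; that is only an approximation, but the numbers do land in the range mentioned.

```python
# Rough arithmetic behind the "700 MB genome, ~1% difference" figures.
base_pairs = 3_000_000_000
bits = base_pairs * 2                 # 2 bits encode one of four letters (A/C/G/T)
megabytes = bits / 8 / 1_000_000      # bits -> bytes -> MB

print(f"Whole genome: ~{megabytes:.0f} MB")                        # ~750 MB
print(f"~1% human/chimp difference: ~{megabytes * 0.01:.0f} MB")   # ~7-8 MB
```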
Certainly, what we’re seeing today is that deep learning approaches to problems like image classification, image object detection, image segmentation, video annotation, audio transcription—all these things tend to be orders of magnitude smaller problems than what we dealt with when we handcrafted things. The core of most deep learning solutions to these things, if you really look at the core model and the model structure, tends to be maybe 500 lines of code, maybe 1,000. And that’s within the reach of an individual putting this together over a weekend, so the huge democratization that deep learning based on big data brings is that a lot of these models that do amazing things are very, very small code artifacts. The weight matrices and the binary models that they generate then tend to be as large as or larger than traditional programs compiled into executables, sometimes orders of magnitude larger again. The thing is, they are very hard to interpret, and we’re only at the beginning of explainability of what the different weights and the different excitations mean. I think there are some nice early visualizations on this. There are also some nice visualizations that explain what’s going on with attention mechanisms in the artificial networks.
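To illustrate how small such a core model can be, here is a generic convolutional image classifier in PyTorch; it is a minimal sketch of the kind of model being described, not any particular production system, and the layer sizes are arbitrary.

```python
# Illustrative only: the "core model" of a typical image classifier really can
# fit in a few dozen lines, while its learned weights can be much larger.
import torch
import torch.nn as nn

class SmallImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallImageClassifier()
dummy = torch.randn(1, 3, 32, 32)     # one fake 32x32 RGB image
print(model(dummy).shape)             # torch.Size([1, 10])
```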
As to explainability of the real network in the brain, I think that is very nascent. I’ve seen some great papers and results on things like spatial representations in the visual cortex, where surprisingly you find triangular grid patterns, or attempts to reconstruct the image hitting the retina based on reading, with fMRI scans, the excitations in lower levels of the visual cortex. They show that we’re getting closer to understanding the first few layers. I think that even with the 7 MB difference or so that you allude to between chimps and humans spelled out for us, there is a whole set of layers of abstraction between the DNA code and the RNA representation, the protein representation, the excitation of these with methylation and other mechanisms that control activation of genes, and the interplay of the proteins across a living, breathing human brain. All of this adds orders of magnitude of complexity on top of that few-megabyte difference in A’s, and C’s, and T’s, and G’s. We live in super exciting times. We live in times where a new record, and a new development, and a new capability that was unthinkable a year ago, let alone a decade ago, is becoming commonplace, and it’s an invigorating and exciting time to be alive. I still struggle to make a prediction of the year we get to general AI based on a straight-line trend.
As exciting as AI is, though, there’s some fear wrapped up in it as well. The fear is about the effect of automation on employment. I mean, you know this, of course; it’s covered so much. There are kind of three schools of thought: One says that we’re going to automate certain tasks and that there will be a group of individuals who do not have the training to add economic value. They will be pushed out of the labor market, and we’ll have perpetual unemployment, like a big depression that never goes away. Then there’s another group that says, “No, no, no, you don’t understand. Everybody is replaceable. Every single job we have, machines can do all of it.” And then there’s a third school of thought that says, “No, none of that’s going to happen. The history of 250 years of the Industrial Revolution is that people take these new technologies, even profound ones like electricity, and engines, and steam, and they just use them to increase their own productivity and to drive wages up. We’re not going to have any unemployment from this, any permanent unemployment.” Which of those three camps, or a fourth, do you fall into?
I think that there’s a lot of historical precedent for how technology gets adopted, and there are also numbers on the adoption of technologies in our own day and age that sort of serve as reference points here. For example, one of the things that surprised me, truly, is that the amount of e-commerce—as a percentage of overall retail market share—is still in the mid to high single-digit percentage points, according to surveys that I’ve seen. That totally does not match my personal experience of basically doing all my non-grocery shopping entirely online. But it shows that in the 20-25 years of the Internet Revolution, a tremendous amount of value has been created—and the convenience of having all kinds of stuff at your doorstep with just a single click—and yet all of that has still only captured a single-digit percentage of the overall retail market. And this was one of the most rapid uptakes in the history of new technology, one with groundbreaking value from decoupling atoms and bits, and it’s been playing out over the past 20-25 years in front of all of us.
So, I think while there is tremendous potential for machine learning and AI to drive another Industrial Revolution, we’re also in the middle of all these curves from other revolutions that are ongoing. We’ve had a mobile revolution that unshackled computers and gave everybody what used to be a supercomputer in their pocket, which was an incredible revolution in its own right. Before that, we had the client-server revolution and the personal computing revolution—all of these building on prior revolutions like electricity, or the internal combustion engine, or earlier breakthroughs like the printing press. They certainly have a tendency to show accelerating technology cycles. But on the other hand, for something like e-commerce or even mobile, the actual adoption speed has been one that is none too frightening. So for all the tremendous potential that ML and AI bring, I would be hard-pressed to come up with a completely disruptive scenario here. I think we are seeing a technology with tremendous potential for rapid adoption. We’re seeing the potential to both create new value and do new things, and to automate existing activities, which continues past trends. Nobody has “computer” or “printer” as their job description today, and job descriptions like social-media influencer, or blogger, or web designer did not exist 25 years ago. This is Schumpeterian creative destruction going on all over, in every industry, in every geography, with every new technology curve that comes in here.
I would say fears in this space are greatly overblown today. But fear is real the moment you feel it, and therefore institutions—like The Partnership on Artificial Intelligence, with the leading technology companies, as well as the leading NGOs, think tanks, and research institutes—are coming together to discuss the implications of AI, and the ethics of AI, and safety and guiding principles. All of these things are tremendously important to make sure that we can adopt this technology with confidence. Just remember that when cars were new, Great Britain had a law that a person with a red flag had to walk in front of the car in order to warn all pedestrians of the danger that was approaching. That was certainly an instance of fear about technology that, on the one hand, was real at that point in time, but that also went away with a better understanding of how it works and of its tremendous value to the economy.
What do you think of these efforts to require that when an artificial intelligence makes a ruling or a decision about you that you have a right to know why it made that decision? Is that a manifestation of the red flag in front of the car as well, and is that something that would, if that became the norm, actually constrain the development of artificial intelligence?
I think you’re referring to the implicit right to explanation that is part of the European Union privacy regulation coming in 2018. Let me start by saying that the privacy regulation we’re seeing is a tremendous step forward, because the simple act of harmonizing the rules and creating one digital playing field across the hundreds of millions of European citizens, and countries, and nationalities is a tremendous step forward. We used to have a different data protection regime for each federal state in Germany, so anything that is unified and harmonized is a huge step forward. I also think that the quest for an explanation is something that is very human. At the core of us is the need to continue to ask “why” and “how.” That is innate to us. When we apply for a job with a company and we get rejected, we want to know why. And when we apply for a mortgage and we are offered a rate that seems high to us, we want to understand why. That’s a natural question, it’s a human question, and it’s an information need that has to be served if we don’t want to end up in a Kafka-esque future where people don’t have a say about their destiny. Certainly, that is hugely important on the one hand.
On the other hand, we also need to be sure that we don’t hold ML and AI to a stricter standard than we hold humans to today, because that could become an inhibitor to innovation. So, if you ask a company, “Why didn’t I get accepted for that job?” they will probably say, “Dear Sir or Madam, thank you for your letter. Due to the unusually strong field of candidates for this particular posting, we regret to inform you that certain others were stronger, and we wish you all the best for your continued professional future.” This is what almost every rejection letter reads like today. Are we asking the same kind of explainability from an AI system that is delivering a recommendation today that we apply to a system of humans and computers working together to create a letter like that? Or are we holding it to a much, much higher standard? If it is the first thing, absolutely essential. If it’s the second thing, we’ve got to watch whether we’re throwing out the baby with the bathwater on this one. This is something where we, I think, need to work together to find the appropriate levels and standards for things like explainability in AI, to fill very abstract phrases like “right to an explanation” with life, so that they can be implemented, can be delivered, and can provide satisfactory answers, while at the same time not unduly inhibiting progress. With a lot of players focused on explainability today, this is an area where we will certainly see significant advances going forward.
If you’re a business owner, and you read all of this stuff about artificial intelligence, and neural nets, and machine learning, and you say, “I want to apply some of this great technology in my company,” how do people spot problems in a business that might be good candidates for an AI solution?
I would turn that around and ask, “What’s keeping you awake at night? What are the three big things that make you worried? What are the things that make up the largest part of your uncertainty, or of your cost structure, or of the value that you’re trying to create?” Looking at end-to-end processes, it’s usually fairly straightforward to identify cases where AI and ML might be able to help and to deliver tremendous value. The use-case identification tends to be the fairly easy part of the game. Where it gets tricky is in selecting and prioritizing these cases, figuring out the right things to build, and finding the data that you need in order to make the solution real, because unlike traditional software engineering, this is about learning from data. Without data, you basically can’t start, or at least you have to build some very small simulators in order to create the data that you’re looking for.
You mentioned that that’s the beginning of the game, but what makes the news all the time is when AI beats a person at a game. In 1997 you had chess, then you had Ken Jennings in Jeopardy!, then you had AlphaGo and Lee Sedol, and you had AI beating humans at poker. Is it a valid approach to say, “Look around your business and look for things that look like games”? Because games have constrained rules, and they have points, and winners, and losers. Is that a useful way to think about it? Or are the game milestones more like publicity for AI, a PR campaign, and not really a useful metaphor for business problems?
I think that these very publicized showcases are extremely important to raise awareness and to demonstrate stunning new capabilities. What we see in building business solutions is that I don’t necessarily have to beat the human world champion at something in order to deliver value, because a lot of business is about processes. It is about people following flowcharts together with software systems, trying to deliver a repeatable process for things like customer service, or IT incident handling, or incoming invoice screening and matching, or other repetitive, recurring tasks in the enterprise. And already by addressing, let’s say, 60-80% of these, we can create tremendous value for enterprises by making processes run faster, by making people more productive, and by relieving them of the parts of activities that they regard as repetitive and mind-numbing, and not particularly enjoyable.
The good thing is that in a modern enterprise today, people tend to have IT systems in place where all these activities leave a digital exhaust stream of data, and locking into that digital exhaust stream and learning from it is the key way to make ML solutions for the enterprise feasible today. This is one of the things where I’m really proud to be working for SAP, because 76% of all business transactions, as measured by value, anywhere on the globe, are on an SAP system today. So if you want to learn models on the digital information that touches the enterprise, chances are it’s either in an SAP system or in a surrounding system already today. Looking for these opportunities, and sort of doing the intersection between what’s attractive—because I can serve core business processes with faster speed, greater agility, lower cost, more flexibility, or bigger value—and crossing that with the feasibility aspect of “do I have the digital information that I can learn from to build business-relevant functionality today?,” is our overriding approach to identifying the things we build in order to make all our SAP enterprise applications intelligent.
Let’s talk about that for a minute. What sorts of things are you working on right now? What sorts of things have the organization’s attention in machine learning?
It’s really end-to-end digital intelligence on processes, and let me give you an example. If you look at the finance space, which SAP is well known for, there are these huge end-to-end processes—like record to report, or things like invoice to record—which really deal, end to end, with what an enterprise needs to do in order to buy stuff, and pay for it, and receive it, or to sell stuff and get paid for it. These are huge machines with dozens and dozens of process steps, and many individuals in shared-service environments performing the delivery of these services. A document like an invoice, for example, is just the tip of the iceberg of a complex orchestration needed to deal with it. We’re taking these end-to-end processes, and we’re making them intelligent every step of the way.
When an invoice hits the enterprise, the first question is: what’s in it? And today most of the units in shared-service environments extract the relevant information via SAP systems. The next question is, “Do I know this supplier?” If they have merged, or changed names, or opened a new branch, I might not have them in my database. That’s a fuzzy lookup. The next step might be, “Have I ordered something like this?” That’s a significant question, because in some industries up to one-third of spending actually doesn’t have a purchase order. Finding people who have ordered this stuff, or related stuff from this supplier or similar suppliers in the past, can be the key to figuring out whether we should approve it or not. Then there’s the question of, “Did we receive the goods and services that this invoice is for?” That’s about going through lists and lists of stuff, and figuring out whether the bill of lading for the truck that arrived really contains all the things that were on the truck and all the things that were on the invoice, but no other things. That’s about list matching and list comparison, and document matching, and recommending classifications. It goes on and on like that until the point where we actually put through the payment, and the supplier gets paid for the invoice that came in.
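As a hypothetical illustration of the “fuzzy lookup” step, the sketch below matches an invoice’s supplier name against a list of known suppliers using only the Python standard library; the names and the threshold are invented, and this is not SAP’s actual implementation.

```python
# Toy fuzzy supplier lookup: find the closest known supplier name, or defer
# to a human reviewer if nothing is close enough.
from difflib import SequenceMatcher

known_suppliers = ["Acme Industrial GmbH", "Northwind Traders", "Contoso Ltd."]

def best_supplier_match(name_on_invoice: str, threshold: float = 0.75):
    """Return the closest known supplier name, or None if nothing is close."""
    scored = [
        (SequenceMatcher(None, name_on_invoice.lower(), s.lower()).ratio(), s)
        for s in known_suppliers
    ]
    score, supplier = max(scored)
    return supplier if score >= threshold else None

print(best_supplier_match("ACME Industrial"))     # matches "Acme Industrial GmbH"
print(best_supplier_match("Totally New Vendor"))  # None -> route to a human
```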
What you see is a digital process that is enabled by IT systems, very sophisticated IT systems, with routine workflows between many human participants today. What we do is take the digital exhaust of all the process participants to learn what they’ve been doing, and then put the common, the repetitive, the mind-numbing parts of the process on autopilot—gaining speed, reducing cost, making people more satisfied with their work day, because they can focus on the challenging, and the interesting, and the stimulating cases, and increasing customer satisfaction, or in this case supplier satisfaction, because they get paid faster. This end-to-end approach is how we look at business processes, and when my ML and AI group does that, we see an order recommender, an entity extractor, or some kind of translation mechanism at every step of the process. We work hard to turn these capabilities into scalable APIs on our cloud platform that integrate seamlessly with these standard applications, and that’s really our approach to problem-solving. It ties to the underlying data repository of how business operates and how processes flow.
Do you find that your customers are clear about how this technology can be used, and they’re coming to you and saying, “We want this kind of functionality, and we want to apply it this way,” and they’re very clear about their goals and objectives? Or are you finding that people are still finding their sea legs and figuring out ways to apply artificial intelligence in the business, and you’re more having to lead them and say, “Here’s a great thing you could do that you maybe didn’t know was possible”?
I think it’s like everywhere: you’ve got early adopters, and innovation promoters, and leaders who actively come with use cases of their own. You have more conservative enterprises looking to see how things play out and what the results for early adopters are. You have others who have legitimate reasons to focus on the burning parts of their house right now, for whom this is not yet a priority. What I can say is that the amount of interest in ML and AI that we’re seeing from customers and partners is tremendous and almost unprecedented, because they all see the potential to take business processes and the way business executes to a completely new level. The key challenge is working with customers early enough, and at the same time working with enough customers in a given setting, to make sure that this is not a one-off that is highly specific, and to make sure that we’re really rethinking the process with digital intelligence instead of simply automating the status quo. I think this is maybe the biggest risk. We have a tremendous opportunity to transform how business is done today if we truly see this through end to end and if we are looking to build the automobile. If we’re only trying to build isolated instances of faster horses, the value won’t be there. This is why we take such an active interest in the end-to-end and integration perspective.
Alright, well, I guess just two final questions. The first is, overall it sounds like you’re optimistic about the transformative power of artificial intelligence and what it can do—
Absolutely, Byron.
But I would put to you the question that you put to businesses. What keeps you awake at night? What are the three things that worry you? They don’t have to be big things, but what are the challenges right now that you’re facing or thinking about, like, “Oh, I just wish I had better data,” or, “If we could just solve this one problem”?
I think the biggest thing keeping me awake right now is the luxury problem of being able to grow as fast as demand and the market want us to. That has all the aspects of organizational scaling and of scaling the product portfolio that we enable with intelligence. Fortunately, we’re not a small start-up with limited resources. We are the leading enterprise software company, and scaling inside such an environment is substantially easier than it would be on the outside. Still, we’ve been doubling every year, and we look set to continue in that vein. That’s certainly the biggest strain and the biggest worry that I face. It’s very old-fashioned things, like leadership development, that I tend to focus a lot of my time on. I wish I had more time to play with models, and to play with the technology, and to actually build and ship a great product. What keeps me awake is these more old-fashioned things, like leadership development, that matter the most for where we are right now.
You talked at the very beginning about how during the week you’re all about applying these technologies to businesses, and then on the weekend you think about some of these fun problems. I’m curious if you consume science fiction, like books or movies or TV, and if so, is there any view of the future, anything you’ve read or seen or experienced, that made you think, “Ah, I could see that happening,” or, “Wow, that really made me think”? Or do you not consume science fiction?
Byron, you caught me out here. The last thing I consumed was actually Valerian and the City of a Thousand Planets, just last night, in the movie theater in Karlsruhe that I went to all the time when I was a student. While not per se occupied with artificial intelligence, it was certainly stunning, and I do consume a lot of this stuff, for the fun of it and because it provides a view of plausible futures. Most of the things I tend to read are more focused on things like space, oddly enough. So things like The Three-Body Problem, and the fantastic trilogy that that became, really aroused my interest and really made me think. There are others that offer very credible trajectories. I was a big fan of the book called Accelerando, which paints a credible trajectory from today’s world of information technology to an upload culture of digital minds and humans colonizing the solar system and beyond. I think that these escapes are critical to clear the head from day-to-day business, and the pressures of delivering product under given budgets and deadlines. Indulging in them allows me to return relaxed, and refreshed, and energized every Monday morning.
Alright, well, that’s a great place to leave it, Markus. I want to thank you so much for your time. It sounds like you’re doing fantastically interesting work, and I wish you the best.
Did I mention that we’re hiring? There’s a lot of fantastically interesting work here, and we would love to have more people engaging in it. Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 40: A Conversation with Dennis Laudick

[voices_in_ai_byline]
In this episode Byron and Dennis discuss machine learning.
[podcast_player name=”Episode 40: A Conversation with Dennis Laudick” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-04-05-(00-49-36)-dennis-laudick.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card-1.jpg”]
[voices_in_ai_byline]
Byron Reese: This is “Voices in AI,” brought to you by GigaOm. I’m Byron Reese. Today my guest is Dennis Laudick. He is the VP of Marketing for Machine Learning at ARM. ARM is—well, let’s just start off by saying, you certainly have several of their products. They make processors, and they have between 90% and 95% market share of mobile devices. They’ve shipped 125 billion processors and are shipping at a rate of about 20 billion a year. That’s, what, three per person per year. Welcome to the show, Dennis.
Dennis Laudick: Great. Thank you very much. Pleased to be here.
So picking up on that thread, three per person. So, anybody who owns any electronics, they probably have four or five of your chips this year, where would they find those? Like, walk me around the house and office, what all might they be in?
Yeah, so we are kind of one of the best-kept secrets out in the market at the moment; we’re pervasive, certainly. So, I mean, ARM is responsible for the designs of processors, so the CPUs that are, ironically given this topic, the “brains,” as a lot of people call them, that go into the computer chips and that power our devices. So, behind your smartphone, obviously, there is a processor which is doing all of the things that you are seeing, as well as a lot in the background. Just looking around you: TVs; I am speaking into a phone now, and it probably has a processing chip in the background doing something—those consumer electronic devices, the majority of them are probably being powered by a processor which was designed by ARM.
We do things that range from tiny sensors and watches and things like that, clear up to much larger-scale processing. So yeah, just looking around, battery-powered devices or powered consumer electronic devices around you in your home or your office, there is a good chance that the majority of those are running a processor designed by ARM, which is quite an exciting place to be.
I can only imagine. What was that movie that was out, the Kingsman movie, where once they got their chips in all the devices, they took over the world? So I assume that’s kind of the long-term plan.
I am not particularly aware of any nefarious plans, but we’ve certainly got that kind of reach.
I like that you didn’t deny it. You just said, you are not in the loop. I am with that. So let’s start at the top of the equation. What is artificial intelligence?
So it’s a good question. I think the definitions around it are a bit unsettled at the moment. I mean certainly from my perspective, I tend to view things pretty broadly, and I think I probably best describe it as “a machine trying to mimic parts of what we consider to be human intelligence.” So it’s a machine mimicking either a part, or several parts of what humans considered to be intelligent. Not exactly a concrete term but probably is—
I think it’s a great definition, except for problems with the word “artificial” and problems with the word “intelligence.” Other than that, I have no problem. In one talk I heard, you said that old tic-tac-toe programs would therefore be AI. I am with that, but that definition is so broad. The sprinkler system that comes on when my grass is dry; that’s AI. A calculator adds 2+2, which is something a person does; that’s AI. An abacus therefore would be AI; it’s a machine that’s doing what humans do. I mean, is that definition so broad that it’s meaningless, or what meaning do you tease out of it?
Yeah. That’s a good question, and certainly it’s a context-driven type of question and answer. I tend to view artificial intelligence, and intelligence itself, as kind of a continuum of ideas. So I think the challenge is to sit there and go, “Right, let’s nail down exactly what artificial intelligence is,” and that naturally leads you to saying, “Right, let’s nail down exactly what intelligence is.” I don’t think we’re to the point where that’s actually a practical possibility. You would have to start from the principle that human beings have completely fathomed what the human being is capable of, and I don’t think we’re there yet. If we’ve learned everything there is to be learned about ourselves, then I would be very surprised.
So if you start from the concept that intelligence itself isn’t completely well understood, then you naturally fall back to the concept that artificial intelligence isn’t something that you can completely nail down. So, from a more philosophical standpoint which is quite fun, it’s not something that’s concrete that you can just say, this is the denotation of it. And, again, from my perspective, it’s much more useful if you want to look at it in a broad sense to look at it as a scale or a spectrum of concepts. So, in that context, then yeah, going back to tic-tac-toe, it was an attempt at a machine trying to mimic human intelligence.
I certainly spent a lot of my earlier years playing games like chess and so forth, where I was amazed by the fact that a computer could make these kind of assessments. And, yes, you could go back to an abacus. And you could go forward to things like, okay, we have a lot of immediate connotations around artificial intelligence, around robots and what we consider quasi-autonomous thinking machines, but that then leaves the questions around things like feelings, things like imagination, things like intuition. What exactly falls into the realm of intelligence?
It’s a pretty subjective and non-concrete domain, but I think the important thing is that, although I like to look at it as a very broad continuum of ideas, you do have to drive it on a context-sensitive basis. So from a practical standpoint, as technologists, we look at different problem spaces and we look at different technologies which can be applied to those problem spaces, and although it’s not always clear, there is usually some very contextually driven understanding between the person or the people talking about AI or intelligence itself.
So, when you think of different approaches to artificial intelligence, we’ve been able to make a good deal of advances lately for a few reasons. One is the kinds of processors that do parallel processing, like you guys make, which have become better and better and cheaper and cheaper, and we use more and more of them; and then we are getting better at applying machine learning, which is of course your domain, to broader problem sets.
Yeah.
Do you have an opinion? You are bound to look at a problem like, “Oh, my car is routing me somewhere strange,” is that a machine learning problem? And machine learning, at its core, is studying the past—a bunch of data from the past—and projecting that into the future.
What do you think are the strengths of that approach, and, I am very interested, what are the limits of it? Do you think, for instance, creativity, like what Banksy does, is fundamentally a machine learning problem? You give it enough cultural references and it will eventually be graffiti-ing on the wall?
Yeah.
Where do you think machine learning rocks, and where is it not able to add anything?
Yeah. That’s a really interesting question. So, I think a lot of times I get asked a question about artificial intelligence and machine learning, and the two get interchanged with each other. I think a lot of people—because of the fact that in our childhood, we all heard stories from science fiction that were labeled under artificial intelligence and went off in various different directions—hear of a step forward in terms of what computers can do and quickly extrapolate to far-reaching elements of artificial intelligence that are still somewhere in the domain of science fiction.
So it is interesting to get involved in those discussions, but there are some practicalities in terms of what the technology is actually capable of doing. So, from my perspective, I think this is actually a really important wave that’s happening at the moment, the machine learning wave as you might call it. For years and years, we’ve been developing more and more complex classical computing methodologies, and we’ve progressively become more complex in what we can produce, and therefore we’ve become increasingly sophisticated in what we could achieve relative to human expectations.
Simple examples that I use with people who aren’t necessarily technical are: we started out with programs that said, if the temperature is greater than 21°C, then turn on the air conditioner, and if it’s less than 21°C, turn off the air conditioner. What you ended up with was a thermostat that was constantly flickering the air conditioning on and off. Then we became a little more sophisticated and we introduced hysteresis, and we said, I tell you what, if the temperature goes above 22°C, turn on the air conditioner, and if the temperature goes below 19°C, turn it off. You can take that example and extrapolate it over time, and that’s kind of what’s been happening in computing technology: we’ve been introducing more and more layers of complexity to allow more sophistication and more naturalness in our interactions with things, and in the way that things made quasi-decisions. And that’s all been well and fine, but the methodologies were becoming incredibly complex, and it was increasingly difficult to make those next steps in progression and sophistication.
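The two control strategies in that example can be written out in a few lines; this is a minimal sketch with invented temperature readings.

```python
# A naive single-threshold rule flickers around the setpoint, while a
# hysteresis band (on above 22 C, off below 19 C) stays stable in between.
def naive_control(temp_c: float) -> bool:
    """Air conditioner on iff temperature exceeds 21 degrees C."""
    return temp_c > 21.0

def hysteresis_control(temp_c: float, currently_on: bool) -> bool:
    if temp_c > 22.0:
        return True        # definitely too warm: switch on
    if temp_c < 19.0:
        return False       # definitely cool enough: switch off
    return currently_on    # in between: keep doing whatever we were doing

ac_on = False
for temp in [20.9, 21.1, 20.9, 21.2, 23.0, 21.5, 20.0, 18.5]:
    ac_on = hysteresis_control(temp, ac_on)
    print(f"{temp:4.1f} C  naive={naive_control(temp)!s:5}  hysteresis={ac_on}")
```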
ImageNet, which is a bit of a cornerstone in modern ML, was a great example of what happened: the classic approaches were becoming more and more sophisticated, but it was difficult to really move the output and the capabilities on. And the application of machine learning, and neural networks in particular, just really blew the doors open in terms of moving to the next level. You know, when I try to de-complicate what’s happened, I tend to express it as: we’ve gone from a world where we had a very deterministic approach and we were trying to mimic fuzziness, an approximation, to where we now have a computing approach which very naturally approximates, and it does patterns and it does approximation. And it just turns out that, lo and behold, when you look at the world, a lot of things are patterns. And suddenly, the ability to understand patterns, as opposed to trying to break them up into very deterministic principles, becomes very useful. It so happens that humans do a huge amount of approximation, and that suddenly moves us much further forward in terms of what we can achieve with computing. So, the ability to do pattern matching and the ability to do approximation, it doesn’t follow the linear progression of more and more determinism, and more complex determinism. It moves us into a more fuzzy space, and it just so happens that that fuzzy space is a huge leap forward in terms of getting fundamentally deterministic machines to do something that feels more natural to human beings. So that’s a massive shift forward in terms of what we can do with computers.
Now, the thing to keep in mind there: when I am trying to explain what’s happening with machine learning to people who aren’t technologists or aren’t into the theory behind machine learning, one way I try to simplify it is, I say, “Well listen, don’t get too worried in terms of building the next Terminator. What we’ve, in essence, managed to do is we’ve taught computers to be much, much better at identifying cats.” There’s still a problem about, okay, what should the machine do once it’s identified a cat. So it’s not a complete shift in all of what we can do with computing. It’s a complete shift in the capabilities, but we still have a long way to go in terms of something like AGI and so forth. But don’t get me wrong, it’s a massive wave. I think this is a new era in terms of what we can get our machines to do. So it’s pretty exciting from that perspective, but there is still a long way to go.
So you mentioned that we take these deterministic machines and get them to do approximations, but, in the end, they are still at their core deterministic and digital. No matter how much you try to obfuscate that in the application of the technology, is there still an inherent limit to how closely that can mimic human behavior?
That’s, again, a very good question. So you are right, at its fundamental level, a computer is basically 1s and 0s. It all breaks down to that. What we’ve managed to do over time is produce machines which are increasingly more capable and we’ve created increasing layers of sophistication and platforms that can support that. That’s nothing to be laughed at. In the technology I work with in ARM, the leaps forward in the last few years have been quite incredible in terms of what you can do. But, yeah, it always breaks down to 1s and 0s. But it’s important not to let the fundamentals of the technology form a constraint about its potential because, if anything, what we have learned is that we can create increasing levels of sophistication to get these 1s and 0s to do more and more things and to act more and more natural in terms of our interactions and the way that they act.
So yes, you are absolutely right, and it’s interesting to see the journey from 1s and 0s to being able to do something like natural language processing and things like that. So, as fascinating as that is, we do see one end of it, in that the beginning is 1s and 0s, but it’s really difficult to understand where it’s going to finish in terms of what it’s capable of. What I do think is, we are still quite a long journey from something like AGI, depending on where you draw your limits in terms of what AGI is. We’ve undoubtedly taken a step forward with the machine learning principles, and the research that’s going on around that is still uncovering significant steps forward.
So the world has changed in that sense. And to say that that’s the end of it, I don’t think anyone would buy into that. How far can it go, the fundamentals of your question? I don’t think we are anywhere close to reaching the end of that yet. There are probably more waves to come, and we’ve yet to explore where the limits of the new wave of machine learning are going to take us.
So I want to give you a question that I’ve been mulling lately and maybe you can help me with this. The setup goes like this: You mentioned we can teach your computer to identify cats and it turns out we needed a million cats, actually more than a million cats and a million dogs to get the computer to tell the difference reliably, right?
The interesting thing is that a human can be trained on a sample size of one. You get a human, show them a stuffed animal of an imaginary creature, and say find that creature in all these photos. And even if it’s frozen in a block of ice, or upside down, or half obscured by a tree, or has mold growing on it, or whatever, a human goes, “Yep, there, there, there, there.” Then the normal reply to that is, well, that’s transfer learning, and that’s something humans do really well. And the way we do it well is we have a whole lifetime of experience of seeing things in different settings, and because we can do that, we can extend it. And I used to be fine with that, but now I am not.
Now I think, you know, you can take a little kid who does not have a lifetime of learning. You can show them half a dozen photos of a cat, ten photos of a cat, whatever, a very, very small number, and then you go for a walk and a Manx walks by and they say, “Oh look, a cat with no tail.” How did they do that, when nobody told them sometimes cats don’t have tails? And yet the Manx had enough cat-ness about it that they still said, “Oh, that’s still category cat, sans tail, which is worth noting.” So how did the child who is five years old, in my example, who does not have a lifetime of “Oh, there’s a dog, and a dog without a tail. There’s an elephant, and an elephant without a tail.” Where did they learn to do that?
Yeah. So that’s interesting, and I think it actually breaks down into a couple of different components, one of them being the fundamentals of the processor, so to speak, the technology—not to dehumanize humans—but the platform under which the learning is occurring, and then the other one is the process of learning. I mean, one thing I would say is that, you know, to go into the more psychological, biological side of it, by the age you reach five, you’ve actually done an awful lot of experimental learning. And I know from my own experience, I spent far too many hours bent over my two-year old trying to keep them from doing something silly that they’d already probably done before and there’s actually a lot of experiments that have been run around this.
I mean I am not a behavioral psychologist, but one I remember is that they ran some experiments around a grasshopper in a glass of milk with children, and it turned out in this particular experiment that up until about the age of three, children were quite happy to drink the glass of milk, they didn’t mind. But it was around the age of three that the children started deciding that, no actually I don’t want to drink milk with a grasshopper in it, that’s disgusting. And the principle behind this research, from what I understood, was that disgusting is a learned behavior and it tends to kick in around the age of three.
So, the rate at which humans build up information and knowledge and extensible understanding is just massive in the early years. So, being able to identify a dog or a cat or a piece of pie with a huge bite out of it, even by age two, you’ve got a huge amount of data that’s already gone into that. Now, behind that is the question of whether or not the human brain and the machine have the same capacity or the same capability. I think that’s a much more significant question, and that kind of gets down to the fundamentals of the machine versus the human. And it actually reaches back a lot to the question of what is intelligence, and that’s again where I see a continuum of things.
So in terms of being able to identify objects, from a personal perspective, I think what we are seeing now in machine learning is really just the tip of the iceberg. We are working in a space where models are very static and where they do, as you say, typically involve a vast amount of data in order to train. Even more so than that, they often involve very particular setups in terms of the models that they are trained against. So, at the moment, it’s kind of a static world in machine learning. I would expect that it’s only a matter of time until that space around static machine learning is well understood, and the natural place to go from there is into a domain of more general-purpose, or more dynamic, or more versatile machine learning algorithms.
So, models which can not only deal with identification of particular classes of objects, but can actually be extended to do recognition of orthogonal types of things, to models which can dynamically update and learn as they experience. So I think, in terms of what we can do with machine learning, I really do think it’s got a long way to go, a long way towards what human beings appear to do, which is to be able to assimilate data that isn’t obviously alike and to form useful conclusions that are more general purpose. I think the technology, or the wave we are on at the moment, has the legs to get there. But whether this is the technology that’s going to take us into other aspects of human intelligence, such as the ability to imagine, the ability to feel or intuit, it’s not obvious at the moment that it lends itself to that at all.
If anything, technology continues to surprise us, and surprise me. I like Arthur C. Clarke’s quote about “any sufficiently advanced technology being indistinguishable from magic,” and I certainly believe that’s true. We’ve seen again and again that what we think is possible is simply a matter of time. A colleague of mine was on a flight with me and said they had watched the original Space Odyssey and were amazed by how much of what seemed like the future, and inconceivable at the time, is now just a technical practicality. So I think there is a long way to go with the current wave around machine learning, but I am not sure it’s the right harness to take us into the domains of some of the further-out aspects of human intelligence. But that falls in line with the fact that this is a pretty exciting wave that is going to change things, but it’s probably not the last wave.
So, if I can rephrase that, it sounds to me like you are saying that the narrow AI we have today is still nascent and we are still going to do amazing things with it, but it may have nothing whatsoever in common with a general intelligence other than they happen to share the word “intelligence.” That may be a completely different technology, quantum-based or who knows what, that we haven’t even started building. Is that true?
Correct. Yeah, that’s certainly my opinion, and I have been proved wrong repeatedly in my life, and we will see where the technology takes us. The space of machine learning, it’s a new capability for machines which is not to be underestimated at all; it’s pretty amazing. But it does lend itself to certain types of things, and it doesn’t lend itself to other types of things. I am not clear on where its limits are going to be found, but I don’t think this is the tool that’s going to solve all problems. It’s a tool that can impact everything in a positive way, but it’s not going to take us to the ends of the earth.
So, assuming that’s true, I want to get back to my five-year-old again, because it sounds like you think the kinds of things I was just marveling that the five-year-old did, like the cat with no tail, seem like they’re squarely in your bucket of things narrow AI can do. And so I would put the question to you slightly differently. A computer should be able to do five years’ worth of living, maybe not in five minutes, but certainly in five days or five weeks.
Even if you built a sensor on a computer that a kid could wear around their neck 24 hours a day, and you let them loose in the world at age five, right now the kid would still know a whole lot more than that device would know. Is that in your mind a software problem or a hardware problem? Do we not have the chip that can do it, or do we not have the software, or do we not have the sensors, or do we not have embodiment, which we may need in order for it to teach itself? What is it that you think we are missing that would at least allow that narrow AI to track with the development of that growing child?
Yeah. So, my answer is roughly all of those. So, I think it’s important to bear in mind that the human brain is an amazing thing. What we do in my company is, we spend a lot of time thinking about power efficiency, and, you know, part of our DNA is to try to push the boundaries in terms of processing capability but to make sure that we are doing it in a very, very energy-efficient way. With that goal in mind, we are always looking for a beacon. And the beacon in terms of raw processing capability and efficiency, for us in many ways, is the human brain.
The human brain’s ability to process information, I don’t have the exact numbers at hand, but there have been estimates as to the rough digital equivalent, and the sheer bandwidth at which we can digest information is just massive. So I think we would be arrogant in the extreme to say that we’ve got a processor which is capable of supporting the same amount of information processing as a human brain. We’ve certainly made great strides forward in the last couple of decades, but the human brain is still the gold standard in terms of what can be achieved, and the software kind of flows on from that. So I think there’s still a long way to go. That said, I have yet to see the limits in terms of what can be achieved on both the hardware and the software side of things; the pace at which they’ve been progressing has accelerated, if anything. So, still a long way to go to be able to match a five-year-old, or even a two-year-old, but it’s definitely increasing over time.
Yeah. It’s funny, because you’ve got this brain, and it’s a marvel in itself, and then you say, what are its power requirements, and it’s 20 watts. Wow, how are we going to duplicate that in 20 watts, you know, because everything we do right now is more energy-intensive. So, some of the techniques in machine learning are, of course, fitting things to a linear regression, or doing classification—is that an A or a B or a C or a dog or a cat or whatever—and then there is clustering, where the machine is fed a lot of data and it finds clouds in this n-dimensional space, where, you know, it says something in that cloud has some likelihood of being such and such. So, if you basically said, “Here is a credit card transaction: is it fraudulent?” then the AI is going to say, “Well, how much was it, and where was it purchased, and what is the item, and what time of day,” and who knows how many different things, and then it says, “This is maybe fraud and this isn’t.”
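As a rough illustration of the “clouds in n-dimensional space” idea, the sketch below clusters made-up historical transactions and scores a new one by its distance to the nearest cluster center; the features, the values, and the choice of KMeans are all assumptions made just for the example.

```python
# Toy fraud-style scoring: cluster normal transactions, then treat distance
# from every cluster center as a crude "how unusual is this?" score.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake historical transactions: [amount in dollars, hour of day]
normal = np.vstack([
    rng.normal([20, 12], [5, 2], size=(200, 2)),    # small daytime purchases
    rng.normal([300, 19], [50, 1], size=(100, 2)),  # larger evening purchases
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(normal)

def anomaly_score(transaction: np.ndarray) -> float:
    """Distance to the nearest cluster center; higher means more unusual."""
    return float(np.min(np.linalg.norm(model.cluster_centers_ - transaction, axis=1)))

print(anomaly_score(np.array([25, 13])))    # near the daytime cloud: small
print(anomaly_score(np.array([5000, 3])))   # far from both clouds: large
```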
You know, there is a sentiment, and a legal reality, that if an AI makes a decision that affects your life, you have the right to know why it made that decision. So my question to you is: is that inherently going to limit what we are able to do with it? Because in an n-dimensional space of clustering, it would be really hard to say why, because the short answer is, you were in the cloud and this other person wasn’t in the cloud. If you were to go to Google and say, “I rank #5 for such and such a search, and my competitor ranks #1. Why?” they might very well say, “We don’t know.” So, how do you thread that needle?
So that’s a fascinating question. You are absolutely right. There’s kind of been a trend in society around, well, we think we understand what computers are capable of—we do understand what computers are capable of—and we try to build a human world around this, which is enjoyable or meets our social norms. And that has been, to date, largely based around the fact that computers are deterministic and they work in the classical deterministic algorithms and that those were reproducible, and so forth and so on. We kind of, as human beings, molded our world around those principles and it’s a progressive society and we continually mold our expectations and the rules of social norms to make us comfortable in that space.
Now, you are very right in the fact that when you get into the domain of machine learning, you are dealing with a technology which is largely irreproducible. So the traceability and the determinism of the decisions become a problem, or at least a shift in terms of what’s possible. From my perspective, this plays out across a range of different domain spaces. I mean, some of the places where they are talking about this are around automobiles, for example: machine learning moves the capabilities of computing and opens up a huge range of benefits that can be delivered into the automotive space. A lot of accidents and fatalities are caused by human error, and being able to hand more and more support to the driver, or do many things for them on a machine basis, potentially has the capability to save a lot of lives and a lot of distress. So, that’s fantastic, but at the same time, it’s a heavily regulated industry that’s become used to determinism. And suddenly you have this thing where we can produce a huge amount of benefit for humankind, but it doesn’t follow the social norms that we’ve constructed around us to date.
I think this is causing a quandary in a lot of different spaces and even at some government levels. From my perspective, it’s interesting because a lot of the discussion today has been around what needs to be put in place around the technology, what are the constraints around the technology, how do we mold our views of the world today to get this new technology to fit into it. Personally I think that’s a very wrong way to look at it, because what we’ve had with machine learning and what we’ve currently got in front of us is a huge shift in what we can achieve with machines. And, as I said, it’s a principle which is now established which is only really getting started in terms of what it’s capable of and what it can be applied to.
And you know, there’s a lot of debate around is it good, is it bad, and you can find examples that are inherently good or inherently bad, but if you abstract far enough away from it, there’s a couple of principles I think that are important. One of them is, technology, in and of itself, is effectively inert. It’s not a question of it being good or bad. It can be used for positive or it can be used for negative. It doesn’t really inherently have a view on that. It’s about how the human beings normalize it in society, and you know, you can look at examples like speech synthesis.
So machine learning brings speech recognition to a level where it can be used for security purposes. It’s also capable of synthesizing speech from limited samples to be able to circumvent security. So, that’s a good example of a zero-sum game. From my perspective, the real question around machine learning isn’t how we get this technology to mold into our society. It’s about recognizing the fact that what we can achieve has suddenly changed, and getting society and human beings to move with that: to remold their world around these new capabilities and rebuild the social norms so that they can harness the huge benefits that this technology can bring, while at the same time making sure that the social norms are in place so that these capabilities don’t become the equivalent of chemical weapons, where, similar to chemical weapons, we say as a society, that’s not allowed, we are not going to tolerate that.
So, I think that the question around the technology, around machine learning, really is about human beings and societies needing to recognize that this is a shift in capabilities. And we need to look at these capabilities and reconstruct our social norms so that we are again happy with the positives that we can get, and we can benefit from those, but at the same time we put the barriers up against what could be done negatively, and that’s something that has to happen with any technology advancement. I do think the focus really needs to be on society. And around the reproducibility of a particular decision, I think we can view that in terms of our existing social norms: we can look at it again as human beings and say, right, what do we consider to be acceptable. I am pretty confident that we will be able to reach those social norms; it’s just a question of the approach we take and how quickly we get there. Personally, I feel that it starts from just embracing the technology and appreciating that it’s here; let’s understand it and mold it into something that’s positive for us.
So let’s talk a little bit about IoT devices. You know that there’s been this struggle for 2,500 years between code makers and code breakers and there’s a longstanding unsettled debate as to who has the easier job. And then in computers, you had the same thing where you have people who make viruses and Trojan horses, then people who try to detect and prevent them. And they largely stay in check because when one makes an advance, the other one figures out how to counter it and then they patch the software and then they find the hole in that and then there’s another patch and we muddle through. I had a security person on the show and I said, you know, what’s your biggest concern about the future and he said, oh, you know that we are connecting billions of devices to the internet that we do not have the capability of upgrading and therefore if vulnerabilities are found in them, we don’t have a way to fix them.
So if somebody finds a way to turn on a toaster oven that’s connected to the internet, there is not really a way to fix that. What are your thoughts on that? Is that a real concern and is it intractable, is there a solution, what would you say?
Yeah. I think it’s definitely something to be taken seriously. It’s something that I know we are certainly very active around and we have been for quite some time. What I am pleased about is the fact that it’s become very topical. So, to kind of go back to your question, it is a concern. It is a genuine concern. We are attaching more and more devices to the internet. We’ve seen early examples where someone was able to gain control of a camera in a casino and people were able to launch denial of service attacks because they’d taken over a class of devices in the home. So I think the examples are there to move beyond the question of whether or not this is something we will need to be concerned about. And we are connecting more and more devices at a huge rate and these devices are more and more intelligent, and it just flows from that that we really do need to take security quite seriously.
I think there's been a range of events, particularly over the course of the last couple of years, whether it's a retail credit card machine being compromised or, as I said, cameras and so forth, and that has really woken everyone up. And it's kind of interesting: within the sort of fundamental technologies we work on, it's something we've taken very, very seriously for a long time, but it wasn't really something that everybody took seriously, and to some extent we felt like we were banging a drum that no one was marching to. But events have driven it to the forefront of people's thinking a lot more lately, and that's been a positive thing to see.
One of the things that we view from, again, a platform technology perspective is that you really can't think of security as an afterthought. Many years ago, a lot of people would build devices the way they always had and then say, oh, hold on, somebody says I have to have some security, why don't we bolt some on at the end? It doesn't really work that way. You can achieve a certain level that way, but it's easy to get around in many ways. So you really need to think about it at the very fundamentals. It needs to be as integral to the design as the 1s and 0s you start with to build it up. If you do that, then that's the right approach. Will we ever get to perfection? Probably not, but it certainly needs to be taken with the level of seriousness and gravitas it deserves, and I think people are starting to do that. We've seen people start to look at the security aspects of a device from the very beginning, at its very inception, and carry that through to the end, thinking about things like the fact that we need to be able to manage and update these devices over their lifetimes.
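As one small illustration of designing security in rather than bolting it on, here is a minimal sketch of a device that refuses any firmware image it cannot authenticate. It is a sketch under stated assumptions, not anyone's actual product: the `DEVICE_KEY`, the function names, and the use of a shared-secret HMAC are illustrative stand-ins, since real devices would typically rely on public-key signatures, secure boot, and hardware key storage.

```python
# Minimal sketch: a device applies a firmware update only if it can verify
# the update's authenticity. HMAC over a provisioned secret stands in for the
# public-key signature schemes real devices would normally use.
import hmac
import hashlib

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical per-device secret

def is_update_authentic(firmware_image: bytes, received_tag: bytes) -> bool:
    """Recompute the authentication tag and compare it in constant time."""
    expected = hmac.new(DEVICE_KEY, firmware_image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

def apply_update(firmware_image: bytes, received_tag: bytes) -> None:
    """Reject unauthenticated images before they ever touch storage."""
    if not is_update_authentic(firmware_image, received_tag):
        raise ValueError("rejecting unauthenticated firmware")
    # ...write the image to flash, set the boot flag, reboot (device-specific)...
```

The design point the sketch tries to capture is the one made above: the check sits at the front of the update path from the device's inception, rather than being added after the fact.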
So yes, I think it's a genuine concern. It's something we do need to take very seriously, and like I said, the examples are already there. The positive note is that, in the trends we're seeing from the low levels of design on up, people are actually taking it very seriously, and that's global. Up until last summer I lived in China for five years, and over that time I saw it taken much more seriously over there. So yeah, I think we are headed in the right direction. We've still got further to go, and again, I think there's lots of room for innovation in terms of what people can do around security. Will we ever get out of the cat-and-mouse chase? I am not sure we ever will, but it's incumbent on the people with the white hats to do the best they can from the very beginning, and that seems to be the direction people are heading.
My final question is, when the media gets hold of these topics about artificial intelligence and machine learning, automation, the effect on jobs, security, privacy, all of them, there is often kind of a dystopian narrative that is put forward. So I just want to ask you flat out: are you optimistic about the future, especially with regard to these technologies? Do you think they are the empowering, wealth-generating, information-freeing, cognitive-skill-enhancing technologies that are going to transform the world into a better place? Or is the jury still out and we don't know? Or is there always going to be this kind of dystopian narrative looking over our shoulder?
Yeah. So I think it's a little bit of both, to be honest with you. At the moment there is certainly a huge amount of dialogue around dystopian views of the future. To some degree you see these whenever there's an element of the unknown. It's very easy to paint the worst, and in some ways that's probably healthy, because it means you try to build the new world in such a way that it's safe and keeps you out of those kinds of situations. So I am not saying it's a bad thing, but I do think it's fueled a lot by the unknown. We've had a significant jump forward in terms of what the technology can do, and where it's going to lead us is almost impossible to say. I think those of us in the technology space have a sense of its limits, and that it's far from those dystopian domains, or even the AGI-type domains, but we can't see the end, so we can't deterministically say where the capabilities stop. That leaves the world outside the technology sphere with a huge amount of uncertainty and fear, and I think that's what's generating a lot of the dialogue in the market, and those discussions are healthy. Thinking about what we find acceptable and unacceptable in the future is a perfectly sensible discussion to be having.
So is it actually going to produce a dystopian world? I am an optimist. I see what machine learning can do and the positives it can bring; just what it's doing in the medical space alone is incredible in terms of improving human health, and what it can do in automotive and so forth is, again, quite incredible. Projecting into the future, there are a lot of questions around what's going to happen with jobs and the ethics and so forth. I am not going to sit here and claim I have a crystal ball that makes it any clearer to me than to anyone else, but I have an inherent belief in human society. I do think we are going to have some disruption while we reconstruct our social norms around what's allowed and what's not allowed. But, as I said earlier, the technology is inert, so it really comes down to how we decide as human beings to manifest the technology's capabilities. Although there may be individuals who have a particularly dark, nefarious side to them, history would suggest that as a collective and as a whole, we tend to build our social norms of what can and can't be done in a positive direction.
So I have a tremendous amount of faith that this is all ultimately happening within the apparatus of human society, and that we will drive the capabilities, and what actually gets achieved, in a largely positive direction. From my standpoint, there is a massive amount of benefit to be had from machine learning, and I am personally very excited about what it might produce, even if it is as simple as not having to worry about losing the remote for my TV. Sure, there is potential to abuse and misuse it and bring about negative consequences, as there is with any new technology, and to some degree almost any classical technology, but I have great faith in society's guidance and where that's ultimately going to end up being manifested.
Well, that’s a wonderful place to leave it. I want to thank you for an exciting hour of challenging and interesting thoughts.
Likewise, it’s been very interesting.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]