Voices in AI – Episode 50: A Conversation with Steve Pratt

[voices_in_ai_byline]
In this episode, Byron and Steve discuss the present and future impact of AI on businesses.
[podcast_player name="Episode 50: A Conversation with Steve Pratt" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-06-14-(00-56-12)-stephen-pratt.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-3.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today, our guest is Steve Pratt. He is the Chief Executive Officer over at Noodle AI, the enterprise artificial intelligence company. Prior to Noodle, he was responsible for all Watson implementations worldwide, for IBM Global Business Services. He was also the founder and CEO of Infosys Consulting, a Senior Partner at Deloitte Consulting, and a Technology and Strategy Consultant at Booz Allen Hamilton. Consulting Magazine has twice selected him as one of the top 25 consultants in the world. He has a Bachelor’s and a Master’s in Electrical Engineering from Northwestern University and George Washington University. Welcome to the show, Steve.
Steve Pratt: Thank you. Great to be here, Byron.
Let’s start with the basics. What is artificial intelligence, and why is it artificial?
Artificial intelligence is basically any form of learning algorithm; that’s the way we think of things. We actually think there’s a raging religious debate [about] the differences between artificial intelligence and machine learning, and data science, and cognitive computing, and all of that. But we like to get down to basics, and basically say that they are algorithms that learn from data, improve over time, and are probabilistic in nature. Basically, it’s anything that learns from data, and improves over time.
So, kind of by definition, the way that you’re thinking of it is it models the future, solely based on the past. Correct?
Yes. Generally, it models the future and sometimes makes recommendations, or it will sometimes just explain things more clearly. It typically uses four categories of data: both internal data and external data, and both structured and unstructured data. So, you can think of it as a quadrant. We think the best AI algorithms incorporate all four categories, because especially in the enterprise, where we’re focused, most of the business value is in the structured data. But usually unstructured data can add a lot of predictive capability, and a lot of signal, to come up with better predictions and recommendations.
How about the unstructured stuff? Talk about that for a minute. How close do you think we are? When do you think we’ll have real, true unstructured learning, that you can kind of just point at something and say, “I’m going to Barbados. You figure it all out, computer.”
I think we have versions of that right now. I am an anti-fan of things like chatbots. I think that chatbots are very, very difficult to do, technically. They don’t work very well. They’re generally very expensive to build. Humans just love to mess around with chatbots. I would say that in the scoring of business value against something that’s affordable and easy to do, chatbots are in the worst quadrant there.
I think there is a vast array of other things that actually add business value to companies, but if you want to build an intelligent agent using natural language processing, you can do some very basic things. But I wouldn’t start there.
Let me try my question slightly differently, then. Right now, the way we use machine learning is we say, “We have this problem that we want to solve. How do you do X?” And we have this data that we believe we can tease the answer out of. We ask the machine to analyze the data, and figure out how to do that. The inherent limit of that, though, is that it’s all sequential in nature. There’s no element of transfer learning in it, where I grow exponentially what I’m able to do. I can just do: “Yes. Another thing. Yes. Another. Yes. Another.” So, do you think this strict definition of machine learning, thinking of AI that way – is that a path to a general intelligence? Or is general intelligence like, “No, that’s something way different than what we’re trying to do. We’re just trying to drive a car, without hitting somebody”?
General intelligence, I think, is way off in the future. I think we’re going to have to come up with some tremendous breakthroughs to get there. I think you can duct-tape together a lot of narrow intelligence, and sort of approximate general intelligence, but there are some fundamental skills that computers just can’t do right now. For instance, if I give a human the question, “Will the guinea pig population in Peru be relevant to predicting demand for tires in the U.S.?” a human would say, “No, that’s silly. Of course not.” A computer would not know that. A computer would actually have to go through all of the calculations, and we don’t have an answer to that question, yet. So, I think generalized intelligence is a way off, but I think there are some tremendously exciting things that are happening right now, in narrow intelligence, that are making the world a better place.
Absolutely. I do want to spend the bulk of our time in there, in that world. But just to explore what you were saying, because there’s a lot of stuff to mine in what you just said. That example you gave about the guinea pigs is sort of a common-sense problem, right? As it’s usually referred to. “Am I heavier than the Statue of Liberty?” How do you think humans are so good at that stuff? How is it that if I said, “Hey, what would an Oscar statue look like, smeared with peanut butter?” you can conjure that up, even though you’ve never even thought of that before, or seen it covered, or seen anything covered with peanut butter? Why are we so good at that kind of stuff, and machines seem amazingly ill-equipped at it?
I think humans have constant access to an incredibly diverse array of datasets. Through time, they have figured out patterns from all of those diverse datasets. So, we are constantly absorbing new datasets. In machines, it’s a very deliberate and narrow process right now. When you’re growing up, you’re just seeing all kinds of things. And as we go through our life, we develop these – you could think of them as regressions and classifications in our brains, for those vast arrays of datasets.
As of right now, machine learning and AI are given very specific datasets, crunch the data, and then make a conclusion. So, it’s somewhere in there. We’re not exactly sure, yet.
All right, last question on general intelligence, and we’ll come back to the here and now. When I ask people about it, the range of answers I get is 5 to 500 years. I won’t pin you down to a time, but it sounds like you’re saying, “Yeah, it’s way off.” Yet, people who say that usually say, “We don’t know how to do it, and it’s going to be a long time before we get it.”
But there’s always the implicit confidence that we can do it, that it is a possible thing. We don’t know how to do it. We don’t know how we’re intelligent. We don’t know the mechanism by which we are conscious, or the mechanism by which we have a mind, or how the brain fundamentally functions, and all of that. But we have a basic belief that it’s all mechanistic, so we’re going to eventually be able to build it. Do you believe that, or is it possible that a general intelligence is impossible?
No. I don’t think it’s impossible, but we just don’t know how to do it, yet. I think transfer learning, there’s a clue in there, somewhere. I think you’re going to need a lot more memory, and a lot more processing power, to have a lot more datasets in general intelligence. But I think it’s way off. I think there will be stage gates, and there will be clues of when it’s starting to happen. That’s when you can take an algorithm that’s trained for one thing, and have it do another – if you can take AlphaGo, and then the next day, it’s pretty good at chess. And the next day, it’s really good at Parcheesi, and the next day, it’s really good at solving mazes, then we’re on track. But that’s a long way off.
Let’s talk about this narrow AI world. Let’s specifically talk about the enterprise. Somebody listening today is at, let’s say a company of 200 people, and they do something. They make something, they ship it, they have an accounting department, and all of that. Should they be thinking about artificial intelligence now? And if so, how? How should they think about applying it to their business?
A company that small, it’s actually really tough, because artificial intelligence really comes into play when it’s beyond the complexity that a human can fit in their mind.
Okay. Let’s up it to 20,000 people.
20,000? Okay, perfect. 20,000 people – there are many, many places in the organization where they absolutely should be using learning algorithms to improve their decision-making. Specifically, we have 5 applications that focus on the supply side of the company: materials, production, distribution, logistics and inventory.
And then, on the demand side, we also have 5 areas: customer, product, price, promotion and sales force. All of those things are incredibly complex, and they are highly interactive. Within each application area, we basically have applications that almost treat it like a game, although it’s much more complicated than a game, even though games like Go are very complex.
Each of our applications does, really, 4 things: it senses, it proposes, it predicts, and then it scores. So, basically it senses the current environment, it proposes a set of actions that you could take, it predicts the outcome of each of those actions – like the moves on a chessboard – and then it scores it. It says, “Did it improve?” There are two levels of that, two levels of sophistication. One is “Did it improve locally? Did it improve your production environment, or your logistics environment, or your materials environment?” And then, there is one that is more complex, which says, “If you look at that across the enterprise, did it improve across the enterprise?” These are very, very complex mathematical challenges. The difference is dramatic, from the way decisions are made today, which is basically people getting in meetings with imperfect data on spreadsheets and PowerPoint slides, and having arguments.
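To make that four-step loop concrete, here is a minimal sketch in Python. Everything in it – the function names, the action structure, the toy pricing example – is illustrative, not Noodle AI’s actual product code:

```python
# Minimal sketch of a sense -> propose -> predict -> score decision loop.
# All names and structures here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    params: dict

def decide(sense: Callable[[], dict],
           propose: Callable[[dict], List[Action]],
           predict: Callable[[dict, Action], dict],
           score: Callable[[dict], float]) -> Action:
    """Pick the proposed action whose predicted outcome scores best."""
    state = sense()                                           # 1. sense the environment
    candidates = propose(state)                               # 2. propose possible actions
    outcomes = [(a, predict(state, a)) for a in candidates]   # 3. predict each outcome
    best, _ = max(outcomes, key=lambda pair: score(pair[1]))  # 4. score and choose
    return best

best = decide(
    sense=lambda: {"demand": 0.8},
    propose=lambda s: [Action("raise_price", {"pct": 5}), Action("hold", {})],
    predict=lambda s, a: {"margin": s["demand"] * (1.05 if a.name == "raise_price" else 1.0)},
    score=lambda outcome: outcome["margin"],
)
print(best.name)  # -> raise_price
```

The enterprise-wide version described here would simply swap in a score function computed across business units rather than a local one.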
So, pick a department, and just walk me through a hypothetical or real use case where you have seen the technology applied, and have measurable results.
Sure. I can take the work we’re doing at XOJET, which is the largest private aviation company in the U.S. If you want to charter a jet, XOJET is the leading company to do that. The way they were doing pricing before we got there was basically old, static rules that they had developed several years earlier. That’s how they were doing pricing. What we did is we worked with them to take into account where all of their jets currently were, where all of their competitors’ jets were, what the demand was going to be, based on a lot of internal and external data; like what events were happening in what locations, what was the weather forecast, what [were] the economic conditions, what were historic prices and results? And then, we basically came up with all of the different pricing options, and made a recommendation on what the price should be. As soon as they put in our application, which was in Q4 of 2016, the EBITDA of the company – which is basically the net margin; not quite, but close – went up 5%.
The next thing we did for them was to develop an application that looked at the balance in their fleet, which is: “Do you have the right jets in the right place, at the right time?” This takes into account having to look at the next day. Where is the demand going to be the next day? So, you make sure you don’t have too many jets in low demand locations, or not enough jets in high demand locations. We actually adjusted the prices, to create an economic incentive to drive the jets to the right place at the right time.
We also, again, looked at competitive position, which is through Federal Aviation Administration data. You can track the tail numbers of all of their jets, and all of the competitor jets, so you could calculate competitive position. Then, based on that algorithm, the length of haul, which is the amount of hours flown per jet, went up 11%.
This was really dramatic, and dramatically reduced the number of “deadheads” they were flying, which is the number of empty flights they were making to reposition their jets. I think that’s a great success story. There’s tremendous leadership at that company, very innovative, and I think that’s really transformed their business.
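As a toy illustration of the kind of pricing decision described here: pick the quote that maximizes expected contribution, i.e. price times the predicted probability of winning the booking. The demand model, feature names, and numbers below are invented stand-ins, not XOJET’s model:

```python
# Toy dynamic-pricing sketch: choose the price that maximizes
# expected revenue = price * P(booking | price, context).
import numpy as np

def p_booking(price: float, context: dict) -> float:
    # Placeholder demand curve: booking probability falls as price rises,
    # shifted by a demand signal (events, weather, competitor position, ...).
    base = context.get("demand_signal", 1.0)
    return float(np.clip(base * np.exp(-price / 20_000.0), 0.0, 1.0))

def recommend_price(context: dict,
                    candidates=range(5_000, 40_001, 1_000)) -> int:
    expected = {p: p * p_booking(p, context) for p in candidates}
    return max(expected, key=expected.get)  # price with best expected revenue

print(recommend_price({"demand_signal": 1.3}))  # e.g. a high-demand weekend
```

A production system would replace the placeholder curve with a model trained on historic quotes, events, weather, and competitor positions.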
That’s kind of a classic load-balancing problem, right? I’ve got all of these things, and I want to kind of distribute it, and make sure I have plenty of what I need, where. That sounds like a pretty general problem. You could apply it to package delivery or taxicab distribution, or any number of other things. How generalizable is any given solution, like from that, to other industries?
That’s a great question. There are a lot of components in that, that are generalizable. In fact, we’ve done that. We have componentized the code and the thinking, and can rapidly reproduce applications for another client, based on that. There’s a lot of stuff that’s very specific to the client, and of course, the end application is trained on the client’s data. So, it’s not applicable to anybody else. The models are specifically trained on the client data. We’re doing other projects in airline pricing, but the end result is very different, because the circumstances are different.
But you hit on a key question, which is “Are things generalizable?” One of the other approaches we’re taking is around transfer learning, especially when you’re using deep learning technologies. You can think of it as: the top layers of a neural net can be trained on general pricing techniques, and just the deeper layers are trained on pricing specific to that company.
That’s one of the other generalization techniques, because AI problems in the enterprise generally have sparser datasets than if you’re trying to separate cat pictures from dog pictures. Data sparsity is a constant challenge, and I think transfer learning is one of the key strategies for dealing with it.
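One common way to realize the layer-splitting idea described above, sketched here in PyTorch: pretrain a network on pooled, general pricing data, then freeze the shared layers and fine-tune only the final layers on one client’s sparser data. Layer sizes and names are illustrative assumptions:

```python
# Sketch of transfer learning by freezing shared layers (PyTorch).
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),   # layers learned on
                      nn.Linear(64, 64), nn.ReLU())   # general pricing data
head = nn.Linear(64, 1)                               # client-specific layer
model = nn.Sequential(trunk, head)

# ... pretrain `model` on pooled "general pricing" data here ...

for p in trunk.parameters():
    p.requires_grad = False          # freeze the general layers

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def finetune_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One fine-tuning step on client-specific examples."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                  # gradients flow only into the head
    optimizer.step()
    return loss.item()

print(finetune_step(torch.randn(8, 32), torch.randn(8, 1)))
```

Because only the head is trained, far fewer client examples are needed, which is exactly the point when enterprise data is sparse.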
You mentioned in passing, looking at things like games. I’ve often thought that was kind of a good litmus test for figuring out where to apply the technology, because games have points, and they have winners, and they have turns, and they have losers. They have structure to them. If that case study you just gave us was a game, what was the point in that? Was it a dollar of profit? Because you were like, “Well, the plane could stay put, or it could fly here, where it might have a better chance to get somebody. But that’s got this cost. It wears out the plane, so the plane has to be depreciated accordingly.” What is the game it’s playing? How do you win the game it’s playing?
That’s a really great question. For XOJET, we actually created a tree of metrics, but at the top of the tree is something called fleet contribution, which is “What’s the profit generated per period of time, for the entire fleet?” Then, you can decompose that down to how many jets are flying, the length of haul, and the yield, which is the amount of dollars per hour flown. There’s also, obviously, a customer relationship component to it. You want to make sure that you get really good customers, and that you can serve them well. But there are very big differences between games and real-life business. Games have a finite number of moves, and the rules are well-defined. If you look at Deep Blue or AlphaGo, or Arthur Samuel’s checkers player, or even Libratus, all of these were two-player games. In the enterprise, you have typically tens, sometimes hundreds of players in the game, with undefined sets of moves. So, in one sense, it’s a lot more complicated. The idea is, how do you reduce it, so it is game-like? That’s a very good question.
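The metric tree bottoms out in simple arithmetic. A toy decomposition of fleet contribution, with all numbers invented for illustration:

```python
# Toy decomposition of the "fleet contribution" metric tree.
jets_flying = 40          # jets generating revenue in the period
hours_per_jet = 3.2       # length of haul: hours flown per jet
yield_per_hour = 4_500.0  # dollars earned per flight hour
operating_cost = 380_000.0

revenue = jets_flying * hours_per_jet * yield_per_hour
fleet_contribution = revenue - operating_cost
print(f"revenue=${revenue:,.0f}, contribution=${fleet_contribution:,.0f}")
# revenue=$576,000, contribution=$196,000
```

An 11% gain in hours flown per jet, as in the XOJET result above, moves the top of this tree directly.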
So, do you find that most people come to you with a defined business problem, and they’re not really even thinking about “I want some of this AI stuff. I just want my planes to be where they need to be.” What does that look like in the organization that brings people to you, or brings people to considering an artificial intelligence solution to a problem?
Typically, clients will see our success in one area, and then want to talk to us. For instance, we have a really great relationship with a steel company in Arkansas, called Big River Steel. With Big River Steel, we’re building the world’s first learning steel mill, which will learn from their sensors and be able to do all kinds of predictions and recommendations. It goes through that same sense, propose, predict and score loop. So, when people heard that story, we got a lot of calls from steel mills. Now, we’re kind of deluged with calls from steel mills all over the world, saying, “How did you do that, and how do we get some of it?”
Typically, people hear about us because of AI. We’re a product company, with applications, so we generally don’t go in from a consulting point of view, and say, “Hey, what’s your business problem?” We will generally go in and say, “Here are the ten areas where we have expertise and technology to improve business operations,” and then we’ll qualify a company, whether it applies or not. One other thing is that AI follows the scientific method, so it’s all about hypothesis, test, hypothesis, test. So it is possible that an AI application that works for one company will not work for another company. Sometimes, it’s the datasets. Sometimes, it’s just a different circumstance. So, I would encourage companies to be launching lots of hypotheses, using AI.
Your website has a statement quite prominently: “AI is not magic. It’s data.” While I wouldn’t dispute it, I’m curious. What were you hearing from people that caused you to… or maybe hypothetically – you may not have been in on it – but what do you think is the source of that statement?
I think there’s a tremendous amount of hype and B.S. out there right now about AI. People anthropomorphize AI. You see robots with scary eyes, or you see crystal balls, or you see things that suggest it’s all magic. So, we’re trying to be explainers-in-chief, and to kind of de-mystify this, and basically say it’s just data and math, and supercomputers, and business expertise. It’s all of those four things, coming together.
We just happen to be at the right place in history, where there are breakthroughs in those areas. If you look at computing power, I would single that out as the thing that’s made a huge difference. In April of last year, NVIDIA released the DGX-1, which is their AI supercomputer. We have one of those in our data center – in our platform we affectionately call it “the beast” – and it has a petaflop of computing power.
To put that into perspective: the fastest supercomputer in the world in the year 2000 was ASCI Red, which had one teraflop of computing power. There was only one in the world, and no company in the world had access to it.
Now, with the supercomputing that’s out there, the beast has 1,000 times more computing power than the ASCI Red did. So, I think that’s a tremendous breakthrough. It’s not magic. It’s just good technology. The math behind artificial intelligence still relies largely on mathematical breakthroughs that happened in the ‘50s and ‘60s. And of course, Thomas Bayes, with Bayes’ Theorem, who was a philosopher in the 1700s.
There’s been a lot of good work recently around different variations on neural nets. We’re particularly interested in long short-term memory networks, and convolutional neural nets. But a lot of the math has been around for a while. In fact, it’s why I don’t think we’re going to hit general intelligence any time soon. Because it is true that we have had exponential growth in computing power, and exponential growth in data. But it’s been a very linear growth in mathematics, right? If we start seeing AI algorithms coming up with breakthroughs in mathematics that we simply don’t understand, then I think the antennas can go up.
So, if you have your DGX-1, at a petaflop, and in five years, you get something that’s an exaflop – it’s 1,000 times faster than that – could you actually put that to use? Or is it at some point, the jet company only has so much data. There are only so many different ways to crunch it. We don’t really need more – we have, at the moment, all of the processor power we need. Is that the case? Or would you still pay dearly to get a massively faster machine?
We could always use more computing power, even with the DGX-1. For instance, we’re working with a distribution company where we’re generating 500,000 models a day for them, crunching on massive amounts of data. If you have massive datasets for your processing, it takes a while. I can tell you, life is a lot better now. In the ‘90s, we were working on a neural net for the Coast Guard, to try to determine which ships off of the west coast were bad guys. They were very simple neural nets. You would hit return, and it would usually crash. It would run for days and days and days, be very, very expensive, and it just didn’t work.
Even if it came up with an answer, the ships were already gone. So, we could always use more computing power. I think right now, the limitation is more on the data side, and related to the fact that companies are throwing out data they shouldn’t be throwing out. Take customer relationship management systems: typically, when you have an update to a customer, it overwrites the old data. That is really, really important data. I think coming up with a proper data strategy, and understanding the value of data, is really, really important.
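A minimal sketch of the fix being pointed at here: store customer updates as versioned rows (a slowly-changing-dimension pattern) instead of overwriting the old record. The field names are hypothetical:

```python
# Append-only customer history: never destroy the previous version.
from datetime import datetime, timezone

history: list[dict] = []   # append-only store of customer record versions

def update_customer(customer_id: str, fields: dict) -> None:
    """Close out the current version and append a new one."""
    now = datetime.now(timezone.utc)
    for row in history:
        if row["customer_id"] == customer_id and row["valid_to"] is None:
            row["valid_to"] = now            # retire the old version
    history.append({"customer_id": customer_id,
                    "valid_from": now, "valid_to": None, **fields})

update_customer("c42", {"segment": "smb"})
update_customer("c42", {"segment": "enterprise"})  # old row kept, not lost
print(len(history))  # -> 2: the full history survives for model training
```

The retired rows are precisely the training signal – how customers change over time – that an overwrite-in-place CRM throws away.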
What do you think, on this theme of AI is not magic, it’s data; when you go into an organization, and you’re discussing their business problems with them, what do you think are some of the misconceptions you hear about AI, in general? You said it’s overhyped, and glowing-eyed robots and all of that. From an enterprise standpoint, what is it that you think people are often getting wrong?
I think there are a couple of fundamental things that people are getting wrong. One is a tremendous over-reliance on, and over-focus on, unstructured data: people are falling in love with natural language processing, and thinking that that’s artificial intelligence. While it is true that NLP can help with judging things like consumer sentiment or customer feedback, or trend analysis on social media, generally those are pretty weak signals. I would say, don’t follow the shiny object. I think the reason people see that is the success of Siri and Alexa, and people see that as AI. It is true that those are learning algorithms, and those are effective in certain circumstances.
I think they’re much less effective when you start getting into dialogue. Doing dialogue management with humans is extraordinarily difficult. Training the corpus of those systems is very, very difficult. So, I would say stay away from chatbots, and focus mostly on structured data, rather than unstructured data. I think that’s a really big one. I also think that focusing on the supply side of a company is actually a much more fruitful area than focusing on the demand side, other than sales forecasting. The reason I say that is that the interactions between inbound materials and production, and distribution, are more easily modeled and can actually make a much bigger difference. It’s much harder to model things like the effect of a promotion on demand, although it’s possible to do a lot better than they’re doing now. Or, things like customer loyalty; like the effect of general advertising on customer loyalty. I think those are probably two of the big areas.
When you see large companies being kind of serious about machine learning initiatives, how are they structuring those in the organization? Is there an AI department, or is it in IT? Who kind of “owns” it? How are its resources allocated? Are there a set of best practices, that you’ve gleaned from it?
Yes. I would say there are different levels of maturity. Obviously, the vast majority of companies have no organization around this, and it is individuals taking initiatives, and experimenting by themselves. IT in general has not taken a leadership role in this area. I think, fundamentally, that’s because IT departments are poorly designed. The CIO job needs to be two jobs: there needs to be a Chief Infrastructure Officer and a Chief Innovation Officer. One of those jobs is to make sure that the networks are working, the data center is working, and people have computers. The other job is, “How are advances in technology helping the company?” There are some companies that have Chief Data Officers. I think that’s also caused a problem, because they’re focusing more on big data, and less on what you actually do with that data.
I think the most advanced companies – I would say, first of all, it’s interesting, because it’s following the same trajectory as information technology organizations follow, in companies. First, it’s kind of anarchy. Then, there’s the centralized group. Then, it goes to a distributed group. Then, it goes to a federated group, federated meaning there’s a central authority which basically sets standards and direction. But each individual business unit has their representatives. So, I think we’re going to go through a whole bunch of gyrations in companies, until we end up where most technology organizations are today, which is; there is a centralized IT function, but each business unit also has IT people in it. I think that’s where we’re going.
And then, the last question along these lines: Do you feel that either: A) machine learning is doing such remarkable things, and it’s only going to gain speed, and grow from here, or B) machine learning is over-hyped to a degree that there are unrealistic expectations, and when disappointment sets in, you’re going to get a little mini AI winter again. Which one of those has more truth?
Certainly, there is a lot of hype about it. But I think if you look at the reality of how many companies have actually implemented learning algorithms – AI, ML, data science – across the operations of their company, we’re at the very, very beginning. If you look at it as a sigmoid, or an s-curve, we’re just approaching the first inflection point. I don’t know of any company that has fully deployed AI across all parts of their operations. I think ultimately, executives in the 21st century will have many, many learning algorithms to support them in making complex business decisions.
I think the company that clearly has exhibited the strongest commitment to this, and is furthest along, is Amazon. If you wonder how Amazon can deliver something to your door in one hour, it’s because there are probably 100 learning algorithms that made that happen, like where should the distribution center be? What should be in the distribution center? Which customers are likely to order what? How many drivers do we need? What’s the route the driver should take? All of those things are powered by learning algorithms. And you see the difference, you feel the difference, in a company that has deployed learning algorithms. I also think if you look back, from a societal point of view, that if we’re going to have ten billion people on the planet, we had better get a lot more efficient at the consumption of natural resources. We had better get a lot more efficient at production.
I think that means moving away from static business rules that were written years ago, that are only marginally relevant to learning algorithms that are constantly optimizing. And then, we’ll have a chance to get rid of what Hackett Group says is an extra trillion dollars of working capital, basically inventory, sitting in companies. And we’ll be able to serve customers better.
You seem like a measured person, not prone to wild exaggeration. So, let me run a question by you. If you had asked people in 1995, if you had said this, “Hey, you know what? If you take a bunch of computers, just PCs, like everybody has, and you connected them together, and you got them to communicate with hypertext protocol of some kind, that’s going to create trillions and trillions and trillions and trillions and trillions of dollars of wealth.” “It’s going to create Amazon and Google and Uber and eBay and Etsy and Baidu and Alibaba, and millions of jobs that nobody could have ever imagined. And thousands of companies. All of that, just because we’re snapping together a bunch of computers in a way that lets them talk to each other.” That would have seemed preposterous. So, I ask you the question; is artificial intelligence, even in the form that you believe is very real, and what you were just talking about, is it an order of magnitude bigger than that? Or is it that big, again? Or is it like “Oh, no. Just snapping together, a bunch of computers, pales to what we are about to do.” How would you put your anticipated return on this technology, compared to the asymmetrical impact that this seemingly very simple thing had on the world?
I don’t know. It’s really hard to say. I know it’s going to be huge. Right? It is fundamentally going to make companies much more efficient. It’s going to allow them to serve their customers better. It’s going to help them develop better products. The Amazon of today is going to feel like the baseline of tomorrow. And there’s going to be a lot of companies that – I mean, we run into a lot of companies right now that just simply resist it. They’re going to go away. The shareholders will not tolerate companies that are not performing up to competitive standards.
The competitive standards are going to accelerate dramatically, so you’re going to have companies that can do more with less, and it’s going to fundamentally transform business. You’ll be able to anticipate customer needs. You’ll be able to say, “Where should the products be? What kind of products should they be? What’s the right product for the right customer? What’s the right price? What’s the right inventory level? How do we make sure that we don’t have warehouses full of billions and billions of dollars worth of inventory?”
It’s very exciting. I’m generally really bad at guessing years, but I know it’s happening now, I know we’re at the beginning, and I know it’s accelerating. If you forced me to guess, I would say, “10 years from now, the Amazon of today will be the baseline.” It might even be shorter than that. If you’re not deploying hundreds of algorithms across your company that are constantly optimizing your operations, then you’re going to be trailing behind everybody, and you might be out of business.
And yet my hypothetical 200-person company shouldn’t do anything today. When is the technology going to be accessible enough that it’s sort of in everything? It’s in their copier, and it’s in their routing software. When is it going to filter down, so that it really permeates kind of everything in business?
The 200-person company will use AI, but it will be in things like, I think database design will change fundamentally. There is some exciting research right now, actually using predictive algorithms to fundamentally redesign database structures, so that you’re not actually searching the entire database; you’re just searching most likely things first. Companies will use AI-enabled databases, they’ll use AI in navigation, they’ll use AI in route optimization. They’ll do things like that. But when it comes down to it, for it to be a good candidate for AI, in helping make complex decisions, the answer needs to be non-obvious. Generally with a 200-person company, having run a company that went from 2 people to 20 people, to 200 people, to 2,000 people, to 20,000 people, I’ve seen all of the stages.
A 200-person company, you can kind of brute force. You know everybody. You’ve just crossed Dunbar’s number, so you kind of know everything that’s going on, and you have a good feel for things. But like you said, I think using other people’s technologies that are driven by AI, for the things that I talked about, will probably apply to a 200-person company.
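The database research mentioned above is in the spirit of learned-index work, where a model predicts roughly where a key lives so you only search a narrow window. A toy sketch, assuming a simple linear model over sorted integer keys:

```python
# Toy learned index: predict a key's position, then search only locally.
import bisect
import numpy as np

keys = np.sort(np.random.default_rng(0).integers(0, 1_000_000, 10_000))
pos = np.arange(len(keys))
slope, intercept = np.polyfit(keys, pos, 1)   # tiny "model" of the index
max_err = int(np.max(np.abs(slope * keys + intercept - pos))) + 1

def lookup(k: int) -> int:
    guess = int(slope * k + intercept)        # most likely position first
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), k)  # narrow scan only
    return i if i < len(keys) and keys[i] == k else -1

print(lookup(int(keys[1234])))  # found without touching the whole array
```

Real learned indexes use staged models and handle inserts, but the principle is the same: search the most likely place first.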
With your jet company, you did a project, and EBITDA went up 5%, and that was a big win. That was just one business problem you were working on. You weren’t working on where they buy jet fuel, or where they print. Nothing like that. So presumably, over the long haul, the technology could be applied in that organization, in a number of different ways. If we have a $70 trillion economy in the world, what percent is – 5% is easy – what percentage improvement do you think we’re looking at? Like just growing that economy dramatically, just by the efficiencies that machine learning can provide?
Wow. The way to do that is to look at an individual company, and then extrapolate. I look at it in terms of shareholder value, which is made up of revenue, margins and capital efficiency. I think that revenue growth could take off – it could probably double from what it is now. On margins, it will have a dramatic impact. If you look at all of the different things you could do within the company, and you had fully deployed learning algorithms, and gotten away from making decisions on yardsticks and averages, a typical company could, I’ll say, double its margins.
But the home run is in capital efficiency, which not too many people pay attention to, and is one of the key drivers of return on invested capital, which is the driver of general value. This is where you can reduce things 30%, things like that, and get rid of warehouses of stuff. That allows you to be a lot more innovative, because then you don’t have obsolescence. You don’t have to push products that don’t work. You can develop more innovative products. There are a lot of good benefits. Then, you start compounding that year over year, and pretty soon, you’ve made a big difference.
Right, because doubling margins alone doubles the value of all of the companies, right?
It would, if you projected it out over time. Yes. All else being equal.
Which it seldom is. It’s funny, you mentioned Amazon earlier. I just assumed they had a truck with a bunch of stuff on it, that kept circling my house, because it’s like every time I want something, they’re just there, knocking on the door. I thought it was just me!
Yeah. Amazon Prime Now came out, was it last year? In the Bay Area? My daughter ordered a pint of ice cream and a tiara. An hour later, a guy was standing at the front door with a pint of ice cream, and a tiara. It’s like, wow!
What a brave new world, that has such wonders in it!
Exactly!
As we’re closing up on time here, there are a number of people that are concerned about this technology. Not in the killer robot scenario. They’re concerned about automation; they’re concerned about – you know it all. Would you say that all of this technology and all of this growth, and all of that, is good for workers and jobs? Or it’s bad, or it’s disruptive in the short term, not in the long term? How do you size that up for somebody who is concerned about their job?
Moving sort of big picture to small picture: first of all, this is necessary for society, unless we stop having babies. We need to do this, because we have finite resources, and we need to figure out how to do more with less. I think the impact on jobs will be profound. I think it will make a lot of jobs a lot better. In AI, we say it’s augment, amplify and automate. Right now, the things we’re doing at XOJET really help make the people in revenue management a lot more powerful, and I think, enjoy their jobs a lot more, doing a lot less routine research and grunt work. So, they actually become more powerful; it’s like they have superpowers.
I think that there will also be a lot of automation. There are some tasks that AI will just automate, and just do, without human interaction. But a lot of decisions, in fact most decisions, are better if they’re made with an algorithm and a human, to bring out the best of both. I do think there’s going to be a lot of dislocation. I think it’s going to be very similar to what happened in the automotive industry, and you’re going to have pockets of dislocation that are going to cause issues. Obviously, the one that’s talked about the most is the driverless car. If you look at all of the truck drivers – I think probably within a decade, for most cross-country trucks, there’s going to be some person sitting in their house, in their pajamas, with nine screens in front of them, and they’re going to be driving nine trucks simultaneously, just monitoring them. And driving trucks is the number one job of adult males in the U.S. So, we’re going to have a lot of displacement. I think we need to take that very seriously, and get ahead of it, as opposed to chasing it, this time. But I think overall, this is also going to create a lot more jobs, because it’s going to make more successful companies. Successful companies hire people and expand, and I think there are going to be better jobs.
You’re saying it all eventually comes out in the wash; that we’re going to have more, better jobs, and a bigger economy, and that’s broadly good for everyone. But there are going to be bumps in the road along the way. Is that what I’m getting from you?
Yes. I think it will actually be a net positive. I think it will be a net significant positive. But it is a little bit of, as economists would say, “creative destruction.” As you go from agricultural to industrial, to knowledge workers, toward sort of an analytics-driven economy, there are always massive disruptions. I think one of the things that we really need to focus on is education, and also on trade schools. There is going to be a lot larger need for plumbers and carpenters and those kinds of things. Also, if I were to recommend what someone should study in school, I would say study mathematics. That’s going to be the core of the breakthroughs, in the future.
That’s interesting. Mark Cuban was asked that question, also. He says the first trillionaires are going to be in AI. And he said philosophy. Because in the end, what you’re going to need is what only people know how to do. Only people can impute value, and only people can do all of that.
Wow! I would also say behavioral economics; understanding what humans are good at doing, and what humans are not good at doing. We’re big fans of Kahneman and Tversky, and more recently, Thaler. When it comes down to how humans make decisions, and understanding what skills humans have, and what skills algorithms have, it’s very important to understand that, and to optimize that over time.
All right. That sounds like a good place to leave it. I want to thank you so much for a wide-ranging show, with a lot of practical stuff, and a lot of excitement about the future. Thanks for being on the show.
My pleasure. I enjoyed it. Thanks, Byron.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 44: A Conversation with Gaurav Kataria

[voices_in_ai_byline]
In this episode, Byron and Gaurav discuss machine learning, jobs, and security.
[podcast_player name="Episode 44: A Conversation with Gaurav Kataria" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-05-24-(00-57-17)-gaurav-kataria.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card-1.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Gaurav Kataria. He is the VP of Product over at Entelo. He is also a guest lecturer at Stanford. Up until last month, he was the head of data science and growth at Google Cloud. He holds a Ph.D. in computer security risk management from Carnegie Mellon University. Welcome to the show Gaurav!
Gaurav Kataria: Hi Byron, thank you for inviting me. This is wonderful. I really appreciate being on your show and having this opportunity to talk to your listeners.
So let’s start with definitions. What is artificial intelligence?
Artificial intelligence, as the word suggests, starts with “artificial,” and at this stage, we are in this mode of creating an impression of intelligence, and that’s why we call it artificial. What artificial intelligence does is learn from past patterns. You keep showing the patterns to the machine, to a computer, and then it will start to understand those patterns, and it can say, every time this happens I need to switch off the light, every time this happens I need to open the door, and things of this nature. So you can train the machine to spot these patterns and then take action based on those patterns. A lot of it is right now being talked about in the context of self-driving cars. When you’re developing an artificial intelligence technology, you need a lot of training data for that technology, so that it can learn the patterns in a very diverse and broad set of circumstances, to create a more complete picture of what to expect in the future. Then whenever it sees that same pattern in the future, it knows from its past what to do, and it will do that.
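A toy version of that pattern-then-action idea: record which action followed each observed pattern, then replay the most common one. The patterns and actions here are invented for illustration:

```python
# Learn "every time this happens, do that" from observed history.
from collections import Counter, defaultdict

history = [("dusk", "light_on"), ("dawn", "light_off"), ("dusk", "light_on")]

counts = defaultdict(Counter)
for pattern, action in history:
    counts[pattern][action] += 1      # training: tally pattern -> action

def act(pattern: str) -> str:
    seen = counts.get(pattern)
    return seen.most_common(1)[0][0] if seen else "no_op"  # majority action

print(act("dusk"))  # -> light_on, learned purely from past patterns
```

It only ever replays what it has seen, which is exactly the line drawn later in this conversation between learning and creating.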
So…
Artificial intelligence is not built…sorry, go ahead.
So, that definition or the way you are thinking of it seems to preclude other methodologies in the past which would have been considered AI. It precludes expert systems which aren’t trained off datasets. It precludes classic AI, where you try to build a model. Your definition really is about what is machine learning, is that true? Do you see those as synonymous?
I do see a lot of similarity between artificial intelligence and machine learning. You are absolutely right that artificial intelligence is a much broader term than just machine learning. You could create an artificially intelligent system without machine learning by just writing some heuristics, and we can call it like an expert system. In today’s world, right now, there is a lot of intersection happening in the field of AI, artificial intelligence, and machine learning and the consensus or an opinion of a lot of people in this space today is that techniques in machine learning are the ones that will drive the artificial intelligence forward. However, we will continue to have many other forms of artificial intelligence.
Just to be really clear, let me ask you a different question. What you just said is kind of interesting. You say we’ve happened on machine learning and it’s kind of our path forward. Do you believe that something like a general intelligence is an evolutionary development along the line of what we are doing now? That is, are we going to get a little better with our techniques, a little better, a little better, a little better, and then one day we’ll have a general intelligence? Or do you think general intelligence is something completely different, and will require a completely different way of thinking?
Thanks for that question. I would say today we understand artificial intelligence as a way of extrapolating from the past. We see something in the past, and we draw a conclusion for future based on what pattern we have seen in the past. The notion of general intelligence assumes or presupposes that you can make decisions in the future without having seen those circumstances or those situations in the past. Today, most of what’s going on in the field of artificial intelligence and in the field of machine learning is primarily based on training the machine based on data that already exists. In [the] future, I can foresee a world where we will have generalized intelligence, but today we are very far from it. And to my knowledge most of the work that I have seen and I have interacted [with] and the research that I have read speaks mostly in the context of training the systems based on current data—current information so that it can respond for similar situations in the future—but not anything outside of that.
So, humans do that really well, right? Like, we are really good at transfer learning. You can train a human with a dataset of one thing. You know, say “this is an alien, a grog,” and show them a drawing, and they could pick out a photograph of one, they could pick out one hiding behind a tree, they could pick out one standing on its head… How do you think we do that? I know it’s a big question. How do you think we do it? Is that machine learning? Is that something that you can train a machine eventually to do solely with data, or are we doing something there that’s different?
Yeah, so you asked about transfer learning. So [in] transfer learning we train the machine or train the system for one set of circumstances or one set of conditions and then it is able to transfer that knowledge or apply that knowledge in another area. It can still kind of act based on that learning, but the assumption there is that there is still training in one setup and then you transfer that learning to another new area. So when it goes to the new area it feels like there was no training and the machine is just acting without any training with all general intelligence. But that’s not true because the knowledge was transferred from another dataset or another condition where there was training data. So I would say transfer learning does start to feel like or mimic the generalized intelligence, but it’s not generalized because it’s still learning from one setup and then trying to just extrapolate it to a newer or a different setup.
So how do you think humans do it? Let me try the question in a different way. Is everything you know how to do, everything a human knows how to do by age 20, something we learned from seeing examples of data? Could you machine-learn it – could a human be thought of as a really sophisticated machine learning algorithm?
That’s a very good point. I would like to think of humans, all of us, as doing two things. One is learning: we learn from our experiences, and as you said, going from birth to 20 years of age, we do a lot of learning. We learn to speak, we learn the language, we learn the grammar, and we learn the social rules and protocols. In addition to learning, or let me say separate from learning, humans also do another thing, which is create, where there was no learning or repetition of what was taught to them. They create something new – as the expression goes, “create from scratch.” This creating something from scratch, or creating something out of nothing, is what we call human creativity or innovation. So humans do two things: they are very good learners – they can learn from even very little data – but in addition to being good learners, humans are also innovators, and humans are also creators, and humans are also thinkers. The second aspect is where I think artificial intelligence and machine learning really don’t do much. On the first aspect, you’re absolutely right; I mean, humans could be thought of as a very advanced machine learning system. You could give it some data, and it will pick [it] up very quickly.
In fact, one of the biggest challenges in machine learning today, or in the context of AI, is that it needs a lot of training data. If you want to make a self-driving car, experts have said it could take billions of miles of driving data to train that car to be able to do that. The point being, with a lot of training data you can create an intelligent system. But humans can learn with less training data. When you start learning to drive at the age of sixteen, you don’t need a million miles before you learn how to drive, but machines will need millions and millions of miles of driving experience before they can learn. So humans are better learners, and there is something going on in the human brain that’s more advanced than typical machine learning and AI models today. I’m sure the state of artificial intelligence and machine learning will advance to where machines can probably learn as fast as a human, and will not require as much training data as they require today. But the second aspect of what a human does – which is create something out of nothing, from scratch; the pure thinking, the pure imagination – there I think there is a difference between what a human does and what a machine does.
By all means! Go explain that because I have an enormous number of guests on the show who aren’t particularly impressed by human creativity. They think that it’s kind of a party trick. It’s just kind of a hack. There’s nothing really at all that interesting about it that we just like to think it is. So I’d love to talk to somebody who thinks otherwise, who thinks there’s something positively quite interesting about human creativity. Where do you think it comes from?
Sure! I would like to consider a thought experiment. Imagine that a human baby was taken away from civilization – from [the] middle of San Francisco or Austin, a big city – and put on an island all by herself: just one human child, all by herself, on an island. That child will grow over time, will learn to do a lot of things, and will learn to create a lot of things on her own. That’s where I am trying to take your imagination. Consider what that one individual, without having learned anything else from any other human, could be capable of doing. Could she be capable of creating a little bit of shelter for herself? Could she be capable of finding food for herself? There may be a lot of things that humans may be able to do, and we know [that] from the history of our civilization and the history of mankind.
Humans have invented a lot of things, from basic things like creating fire and the wheel, to much more advanced things like sending rocket ships into space. So I do feel that humans do things that are just not learned from the behavior of other humans. Humans do create completely new and novel things, independent of what was done by anybody before them who lived on this planet. So I definitely have a view here: I am a believer in human creativity and human ingenuity and intuition. Humans do create a lot of things; it is these humans [who] are creating all the artificial intelligence systems and machine learning systems. I would never count out human creativity.
So, somebody arguing on the other side of that would say: well, no, she’s on this island, it’s raining, and she sees a spot under a tree that didn’t get wet, or she sees a fox going into a hole when it starts raining, and therefore that’s a data point she was trained on. She sees birds flying down, grabbing berries and eating them, so it’s just training data from another source; it’s just not from other humans. We saw rocks roll down the hill, and we generalized that to how round things roll – a round rock rolls. I mean, it’s just all training data from the environment; it doesn’t have to be specifically human data. So what would you say to that?
No, absolutely! I think you’re giving very good counterexamples, and there is certainly a lot of training and learning. But if you think about sending a rocket to the moon, can you say, okay, we just saw some training data around us, and created a rocket, and sent it to the moon? There it starts to become harder to say that it’s a one-to-one connection from training data to sending a rocket to the moon. There are much more advanced and complicated things that humans have accomplished than finding shelter under a tree or learning from rolling rocks. So humans definitely go way further in their imagination, [and] any simple example that I could give would illustrate that point.
Fair enough! So, and we’ll move on to another issue here in just a minute, but I find this fascinating. Is your contention that the brain is not a Turing machine? That the brain behaves in fundamentally different ways than a computer?
I’m not an expert on how [the] human brain or how any mammal’s brain actually behave[s], so I can’t comment on all the technical aspects on how does a human brain function. I can say from observation that humans do a lot of things that machines don’t do and it’s because humans do come up with things completely from scratch. They come up with ideas out of nowhere, whereas machines don’t come up with ideas out of nowhere. They either learn very directly from the data or as you pointed out, they learn through transfer learning. So they learn from one situation, and then they transfer that learning to another situation.
So, I often ask people on the show when they think we will get a general intelligence, and the answers I get range between five and five hundred years. It sounds like, not putting any words into your mouth, you’re on the further-out side of that range. You think we’re pretty far away, is that true?
I do feel that it will be further out on that dimension. In fact, what I’m most fascinated by – and I would love your listeners to also think about this – is that we talk a lot about human consciousness. We talk about how humans become creative, and what that moment is of getting a new idea, or thinking through a problem where you’re not just repeating something that you have seen in the past. That consciousness is a very key topic that we all think about very, very deeply, and we try to come up with good definitions for what that consciousness is. If we ever create a system which we believe can mimic or show human-consciousness-level behavior, then at the very least we would have understood what consciousness is. Today we don’t even understand it. We try to describe it in words, but we don’t have perfect words for it. With more advances in this field, maybe we will come up with a much crisper definition for consciousness. That’s my belief, and that’s my hope, that we should continue to work in this area. Many, many researchers are putting a lot of effort and thinking into this space, and as they make progress, whether it is five years or five hundred years, we will certainly learn a lot more about ourselves in that time period.
To be clear though, there is widespread agreement on what consciousness is. The definition itself is not an issue. The definition is the experience of the world. It’s qualia. It’s the difference [between] a computer sensing, measuring temperature and a person feeling heat. And so the question becomes how could a computer ever, you know, feel pain? Could a computer feel pain? If it could, then you can argue that that’s a level of consciousness. What people don’t know is how it comes about, and they don’t even know, I think to your point, what that question looks like scientifically. So, trying to parse your words out here, do you believe we will build machines that don’t just measure the world but actually experience the world?
Yeah, I think when we say experience it is still a lower level kind of feeling where you are still trying to describe the world through almost like sensors—sensing things, sensing temperatures, sensing light. If you could imagine where all our senses were turned off, so you are not getting external stimuli and everything was coming from within. Could you still come up with an idea on your own without any stimulus? That’s a much harder thing that I’m trying to understand. As humans, we do try to strive to get to that point where you can come up with an idea without a stimulus or without any external stimuli. For machines, that’s not the bar we are holding for them. We are just holding the bar to say if there is a stimulus, will they respond to that stimulus?
So just one more question along these lines. At the very beginning when I asked you about the definition of artificial intelligence, you replied about machine learning, and you said that the computer comes to understand, and I wrote down the word “understand” on my notepad here, something. And I was going to ask you about that because you don’t actually think the computer understands anything. That’s a colloquialism, right?
Correct!
So, do you believe that someday a computer can understand something?
I think for now I will say computers just learn. Understanding, as you said, has a much deeper meaning. Learning is much more straightforward: you have seen some pattern, and you have learned from that pattern. Whether you understand it or not is a much deeper concept, but learning is straightforward, and today, with most of our machine learning systems, all we are expecting them to do is learn.
Do you think there is a, quote, “master algorithm”? Do you think there is a machine learning technique, in theory, that we haven’t discovered yet, that can do unsupervised learning? You could just point it at the internet, and it could crawl it and end up figuring it all out, understanding it all. Do you think there is an algorithm like that? Or do you think intelligence will turn out to be very kludgy, and we are going to have certain techniques to do this, and then this, and then this? What do you think that looks like?
I see it as a version of your previous question: is there going to be generalized intelligence, and is that five years or five hundred years out? Where we are today is the more kludgy version. We do have machines that can scan the entire web and find patterns, and they can repeat those patterns, but they can do nothing more than repeat them. It’s more like a question-and-answer machine, a machine that completes sentences. There is nothing more than that. There is no sense of understanding, only a sense of repeating patterns that it has seen in the past.
So if you’re walking along the beach and you find a genie lamp, and you rub it, and a genie comes out, and the genie says I will give you one wish: I will give you vastly faster computers, vastly more data or vastly better algorithms. What would you pick? What would advance the science the most?
I think you nailed it: those are the three things we need to improve machine learning, namely more and better data, more computing power, and better algorithms. In the state of the world as I experience it today in machine learning and data science, the biggest bottleneck, the biggest hurdle, is usually data. We would certainly love more computational power, and we would certainly take better and faster algorithms. But if I could ask for only one thing, I would ask for more training data.
So there is a big debate going on about the impact these technologies are going to have on employment. I mean, you know the whole setup, as do the listeners. What’s your take on that?
I think our economy as a whole is moving toward much more specialized jobs, where humans do something specialized rather than something repetitive and very general or simple. Machine learning systems are certainly taking over a lot of repetitive tasks. If a human repeats a task a hundred times a day, those simpler tasks are definitely getting automated. But humans, coming back to our earlier discussion, show a lot of creativity and ingenuity and intuition, and a lot of jobs are moving in the direction of relying on that creativity. So for the economy as a whole, and for everybody around us, I feel the future is pretty bright. We now have an opportunity to apply ourselves to more creative things, and machines will do the repetitive things for us. Doing creative work brings more joy, happiness, satisfaction, and fulfillment to every human than repetitive tasks, which become mundane and not very exciting.
You know, Vladimir Putin famously said, and I’m going to paraphrase here, that whoever dominates in AI will dominate the world. There is this view from some who want to weaponize the technology, who see it strategically in this great geopolitical world we live in. Do you worry about that? Or would you say you could make that claim about every technology? Take metallurgy: whoever controls metallurgy controls the future. Or do you think AI is something different, that it will really reshape the geopolitical landscape of the world?
So, as you said, every technology definitely gets weaponized, and we have seen many examples of that, not just in the past few decades but for thousands of years: a new technology comes up, and as humans we get very creative in weaponizing it. I do expect that machine learning and AI will be used for these purposes. But like any other technology in the past, no single technology has destroyed the world. As humans we come up with interesting ways to reach an equilibrium, to still reach a world of peace and happiness. So while there will be challenges, and AI will create problems for us in the field of weapons technology, I would still bet that humans will find a way to create equilibrium out of this disruptive technology. This is not the end of the world, certainly not.
You’re no doubt familiar with the European initiatives saying that when an artificial intelligence makes a decision that affects you, say it doesn’t give you a home mortgage or something like that, you have a right to know why it did that. It seems you’re an advocate of the idea that this is both possible and desirable. Can you speak to that? Why do you think it’s possible?
So, if I understand the intent of your question: the European Union, and probably many jurisdictions around the world, have put a lot of thought into, a) protecting human privacy, and b) making that information transparent and available to everyone. I think that is truly the intent of the European regulation, as well as similar regulation in many other parts of the world: we want to make sure we protect privacy, and we give people an opportunity to either opt out or understand how their data is being used. That’s definitely the right direction. So if I understand your question, that’s what Entelo as a company is looking at. Every company in the space of AI and machine learning is also looking at creating that respectful experience, where if any person’s data is used, it’s done in a privacy-sensitive manner, and the information is very transparent.
Well, I think I might be asking something slightly different, and perhaps rather poorly. Let me use Google as an example. Say I have a company that sells widgets, and I have a competitor who sells widgets, and there are ten thousand other companies that sell widgets. If you search for “widget” in Google and my competitor comes up first and I come up second, and I say to Google, “Why am I second and they are first?”, I kind of expect Google to say, “What are you talking about? Who knows? There are so many things, so many factors.” And yet that’s a decision an AI made that affected my business. There’s a big difference between being number one and number two in the widget business. So if you say that for every decision it makes, you’ve got to be able to explain why it made that decision, it feels like that puts shackles on the progress of the industry. Any comment?
Right. Now I think I understand your question better. That burden is on all of us, I think, because it is a slippery slope: as artificial intelligence and machine learning algorithms become more and more complex, it becomes harder to explain them. That’s a burden we all carry, meaning anybody who is using artificial intelligence, and nowadays that’s pretty much all of us. Think about it: which company is not using AI and ML? It is a responsibility for everybody in this field to make sure they have a good understanding of their machine learning and artificial intelligence models, so that they can start to understand what triggers certain behavior. Every company that I know of (and I can’t speak for everybody, but based on my knowledge) is certainly thinking about this, because you don’t want to put a machine learning algorithm out there if you can’t even explain how it works. We may not have a perfect understanding of every machine learning algorithm, but we certainly strive to understand it as best we can and explain it as clearly as we can. That’s a burden we all carry.
You know, I’m really interested in the notion of embodying these artificial intelligences. One of the use cases is that someday we’ll have robots that can be caregivers for elderly people. We can talk to them, and over time they learn to laugh at our jokes, learn to tell jokes like the ones we tell, and emote when we’re telling some story about the past: “Oh, what a beautiful story,” and all of that. Do you think that’s a good thing or a bad thing, to build that kind of technology that blurs the line between a system that, as we were talking about earlier, truly understands, and a system that just learns how to, let’s say, manipulate the person?
Yeah, I think right now my understanding is more in the field of learning than full understanding, so I’ll speak from my area of knowledge and expertise, where our focus is primarily on learning. Understanding is something that we as a community of researchers will definitely look at. But as far as most of the systems that exist today, and most of the systems I can foresee in the near future: they are learning systems, not understanding systems.
But take even a really simple case. You know I have the device from Amazon that, if I say its name right now, will start talking to me, right? When my kids come into the studio and ask it a question, once they get the answer and can tell it’s not what they were looking for, they just tell it to be quiet. I have to say, it somehow doesn’t sit right with me to hear them cut off something that sounds human like that, something that would be rude in any other context. So, does that worry you? Am I just an old fuddy-duddy at this point? Or does that somehow numb their empathy with real people, so they’d really be more inclined to say that to a real person now?
I think you are asking a very deep question here: do we as humans change our behavior and become different as we interact with technology? And I think some of that is true!
Yeah!
Some of that is true for sure. Think about SMS when it came out as a technology, like 25 years ago, and we started texting each other. The way we wrote texts was different from how we wrote handwritten letters. By the standards of, let’s say, 30 years ago, texts were very impolite: they had all kinds of spelling mistakes, they didn’t address people properly, and they didn’t end with proper punctuation and things like that. But the technology evolved, it is still useful to us, and we as humans are comfortable adapting to it. Every new technology, whether it’s a smart speaker or texting on cell phones, introduces new forms of communication and new forms of interaction. But a lot of human decency and respect comes from more than how we interact with a speaker or on a text pad. A lot of it comes from much deeper-rooted beliefs than just an interface. So I do feel that while we’ll adapt to new and different interfaces, a lot of human decency will come from a much deeper place than the interface of the technology.
So you hold a Ph.D. in computer security and risk management. When I have a guest on the show, sometimes I ask them, “What is your biggest worry? Is security really an issue?” And they all say yes. They’re like, okay, we’re plugging in 25 billion IoT devices, none of which, by the way, can we upgrade the software on, so you’re basically cementing in whatever security vulnerabilities you have. And you know of all the hacks that get reported in the news, the stories of election interference, all this other stuff. Do you believe that the concern for security around these technologies is, in the popular media, overstated, understated, or just about right?
I would say it’s just about right. This is a very serious issue. As more and more data is out there, and more and more devices are out there (as you mention, a lot of IoT devices as well), the importance of this area has only grown over time and will continue to grow. So it deserves due attention in this conversation, in our conversation, in any conversation. Bringing it into the limelight, drawing attention to the topic, and making everybody think deeply and carefully about it is the right thing, and I believe we are certainly not fearmongering. All of these are justified concerns, and we are spending our time and energy on them in the right way.
So, just talking about the United States for a moment, because I’m sure all of these problems are addressed differently at the national level in each country: how do you think we’ll solve it? Do we just keep the spotlight on it and hope that businesses see they have an incentive to make their devices secure? Or should the government regulate it? How would you solve the problem if you were in charge?
Sure! First of all, I am not in charge. But I do feel there are three constituents here. First are the creators of technology: when you are creating an IoT device, or any kind of software system, the responsibility is on you to think about the security of the system you are creating. The second constituent is the users: the general public and the customers of that technology. They put pressure on the creator to make the technology and the system safe. If you don’t create a safe system, you will have no buyers and users for it, so people will vote with their feet and hold the company, the creators of the technology, accountable. And as you mentioned, there is a third constituent, the government or the regulator. All three have to play a role. It’s not any one stakeholder that decides whether the technology is safe and good enough; it’s an interplay between the three. The creators of technology, whether a company, a research lab, or an academic institution, have to think very deeply about security. The users of technology hold the creators accountable. And the regulators play an important role in keeping the overall system safe. So I would say it’s not any one person or entity that can make the world safe. The responsibility is on all three.
So let me ask Gaurav the person a question. You got this Ph.D. in computer security and risk management. What are some things you personally do because of your concerns about security? For instance, do you have a piece of tape over your webcam? Are you like, “I would never hook up a webcam”? Or, “I never use the same password twice”? What are some of the things you do in your online life to protect your security?
So, you mention all the good things, like not reusing passwords. But one thing I have always mentioned to my friends and colleagues, and I would love to share it with your listeners, is: think about two-factor authentication. Two-factor authentication means that in addition to a password, you use a second means of authentication. So for a banking website, a brokerage website, or for that matter even your email system, it’s a good practice to have two-factor authentication: you enter your password, but in addition, the system requires a second factor. For example, it sends a text message with a code to your phone, and you have to enter that code into the website or the software. Two-factor authentication is many, many times more secure than one-factor authentication, where you just enter a password, and the password can get stolen, breached, or hacked. Two-factor is a very good security practice, and almost all companies and most creators of technology now support it, so the world can move in that direction.
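A minimal sketch of how that second factor can be generated and checked, assuming the time-based one-time password (TOTP) scheme that authenticator apps use; the secret below is a made-up example, not a real credential:

```python
# Sketch of a time-based one-time password (TOTP, RFC 6238), the kind
# of six-digit code an authenticator app shows. The secret is a
# well-known test value, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # current 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                      # changes every 30 seconds
```

Both the server and the phone hold the shared secret, so each can derive the same code for the current time window; a stolen password alone is then not enough to log in.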
So, up until November you were the head of data science and growth for Google Cloud, and now you are the VP of Product at Entelo. Two questions: one, in your personal journey and life, why did you decide now was the time to go do something different? And two, what about Entelo got you excited? Tell us the Entelo story and what that’s all about.
Thanks for asking that. Entelo is in the space of recruiting automation. The idea is that recruiting candidates has always been a challenge; it’s hard to find the right fit for your company. Long ago we would put classified ads in the newspaper. Then technology came along, and we could post jobs on our website and on job boards, and that certainly helped broadcast your message to a lot of people so they could apply for your job. But when you are recruiting, people who apply for your job are only one source of good people for your company. You sometimes also have to reach out to candidates who are not looking for a job, who are not applying on your website or on a job board; they’re happily employed somewhere else. But they are so good for the role you have that you have to go tap them on the shoulder and say, “Would you be interested in this new role, this new career opportunity?” Entelo creates that experience. It automates the whole recruiting process, and it helps you find the right candidates who may not apply on your website or a job board, who are not even looking for a job. It helps you identify those candidates and engage with them: reach out to them, tell them about your role, see if they are interested, and then bring them further into the recruiting process. All of this is powered by a lot of data, a lot of AI, and, as we discussed earlier, a lot of machine learning.
And so, I’ve often thought about what you’re describing. AI has done really well at playing games, because you’ve got rules, and points, and winners, and all of that. Is that how you think of this? You have successful candidates at your company and unsuccessful candidates, and those are the good points and bad points, so you’re looking for people who look more like your successful candidates? On an abstract, conceptual level, how do you solve that problem?
I think you’re describing the idea that not everybody is a good fit for your company, and some people are. So the question is: how do you find the good fit? How do you learn who is a good fit and who is not? Traditionally, recruiters have combed through lots and lots of resumes. Think back a few decades: a recruiter would have a hundred or a thousand resumes stacked on their desk, and they would go through each one to say whether it was a fit or not. Then, about 20 years ago, keyword search engines were developed, so a human didn’t have to read a thousand resumes. Just do a keyword search: if a resume has the word, it’s a good resume; if it doesn’t, it’s not. That was a good innovation for scoring and finding resumes, but it’s very imperfect, because it’s susceptible to many problems. It’s susceptible to resumes getting stuffed with keywords. And it’s susceptible to the problem that there is more to a person, and more to a resume, than keywords.
Even today, the technology for identifying the right candidate is barely more than keyword search on almost every recruiting platform. What a recruiter does is say, “I can’t look through a thousand or a million resumes, so let me just do a keyword search.” Entelo is taking a very different approach. Entelo is saying: let’s not think about keyword search; let’s think about who is the right fit for a job. When you as a human look at a resume, you don’t do a keyword search; computers do keyword searches. In fact, if I put a resume in front of you for an office manager you’re hiring for your office, you would scan that resume with some heuristics in mind, look through the information, and then say whether it’s a good resume or not. I can bet you are not going to do a keyword search on that resume and say, “Oh, it has the word office, and it has the word manager, and it has the word furniture in it, so it’s a good resume for me.”
There is a lot that happens in the mind of a recruiter as they think through whether a person is a good fit for a role. We are trying to learn from that recruiter experience, so they don’t have to look through hundreds of thousands of resumes or do keyword searches. We learn, from their decisions about which resumes are good for a role and which are not, to find that pattern and surface the right candidates. And we take it a step further: we reach out to those candidates and engage them, so the recruiter only sees the candidates who are interested. They don’t have to think about doing a keyword search over a million resumes and reaching out to a million candidates. All of that is automated by the system we have built at Entelo, and the system we are continuing to develop.
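To make that contrast concrete, here is a toy sketch, with invented resumes and labels, of a keyword baseline next to a model that learns fit from past recruiter decisions; a real system would train on thousands of decisions and far richer signals:

```python
# Toy contrast: keyword matching vs. a model that learns "fit" from
# past recruiter decisions. Resumes and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "office manager handled vendor contracts and scheduling",
    "managed front office operations for a busy clinic",
    "office furniture sales manager exceeding quota",    # keyword-stuffed, poor fit
    "registered nurse in an emergency department",
]
accepted = [1, 1, 0, 0]   # past recruiter decisions for this role

# Keyword baseline: count matching words, blind to context.
query = {"office", "manager"}
for text in resumes:
    print(len(query & set(text.split())), "|", text)

# Learned model: weighs terms by how well they predict acceptance.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, accepted)
new_resume = ["experienced office administrator, vendor management"]
print(model.predict_proba(vectorizer.transform(new_resume))[:, 1])
```

Note that the keyword count ranks the keyword-stuffed resume highly, while the learned model scores candidates by the terms that actually separated accepted from rejected resumes.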
So at what level is it trained? For instance, if you have Bob’s House of Plumbing across the street from Jill’s House of Plumbing, and both are looking for an office manager, and both have 27 employees, do you say their pools are exactly the same? Or is there something about Jill and her 27 employees that’s different from Bob and his 27 employees, so that they don’t necessarily get exactly the same candidates?
Yeah. Historically, most systems were built with no contextual information about fit and no personalization: whether Bob did the search or Jill did the search, they would get the exact same results. Now we are moving in the direction of really understanding the fit for Bob’s company and the fit for Jill’s company, so that each gets the right candidates for them, because one candidate is not right for everybody, and one job is not right for every candidate. It is the matching between the candidate and the job.
Another reason why using a system is sometimes better than relying on one person’s opinion: if one recruiter alone decided who’s a good fit for Bob’s company or Jill’s company, that recruiter may have their own bias, and whether we like it or not, all of us tend to have unconscious bias. This is where the system tends to perform much better than a human, because it learns across many humans rather than from only one. If you learn by copying one human, you pick up all of their biases; if you learn across many humans, you tend to average those biases out, or at least you are far less skewed than you would be by one recruiter’s point of view. So that’s another reason the system performs better than relying on Bob’s or Jill’s individual judgment.
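As a toy illustration of that averaging effect, here is a simulation with made-up numbers, where each recruiter scores candidates as true quality plus a personal bias; averaging across the panel shrinks the idiosyncratic biases, though it cannot remove a bias that every recruiter shares:

```python
# Simulation: each recruiter's score = true quality + personal bias.
# Averaging across many recruiters shrinks the idiosyncratic biases;
# a bias shared by all recruiters would remain.
import numpy as np

rng = np.random.default_rng(0)
true_quality = rng.normal(size=1000)            # 1000 candidates
biases = rng.normal(scale=0.5, size=50)         # 50 recruiters, personal offsets
scores = true_quality + biases[:, None]         # each recruiter rates everyone

single = np.abs(scores[0] - true_quality).mean()           # one recruiter's error
panel = np.abs(scores.mean(axis=0) - true_quality).mean()  # averaged panel's error
print(single, panel)                            # the panel's error is far smaller
```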
It’s interesting; it sounds like a really challenging thing. As you were telling the story about looking for an office manager, I was thinking that the things you scan for are most often some form of abstraction. If my company needs an office manager for an emergency room, I’m looking for people who have been in high-stress situations before. If my company is a law firm, I’m looking for people with a background in things that are very secure, where privacy is super important. If it’s a daycare, I maybe want somebody with a background dealing with kids. So the signals are always one level of abstraction away, and I bet it’s really hard to extract that knowledge. I could tell you I need somebody who can handle the pace at which we move around here, but for the system to learn that sounds like a real challenge. Not beyond machine learning or anything, but a challenge. Is it?
Yes, you’re absolutely right. It is a challenge, and we have just recently launched a product called Entelo Envoy that is trying to learn what’s good for your situation. Entelo Envoy will find the right candidates for your job posting or job description, send them to you, and then learn from you as you accept or reject certain candidates: you said this candidate is overqualified, or comes from a different industry. As you categorize candidates as fit and non-fit, it learns, and over time it starts sending you candidates who are much more finely tuned to your needs. The premise of the system is, first, to find information that’s relevant for you: if you are looking for office managers, you should get office manager resumes, not nurses or doctors. The second element is to remove bias: if humans say, well, we want only males or only females, let’s have the system be unbiased in finding the right candidate. And then at the third level, if we have more contextual information, say, as you pointed out, we are looking for experience in high-stress situations, or for people with expertise in childcare because the office happens to be the office for a daycare, then there is a third degree of personalization, a third level of tuning to do at the system level. Entelo Envoy allows you to do that third level of tuning. It sends you candidates, and as you approve and reject them, it learns from your behavior and fine-tunes itself to find the perfect match for the position you are looking for.
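A minimal sketch of that feedback loop, assuming stand-in features and invented accept/reject labels; the point is only the shape of the loop: propose, get feedback, update incrementally:

```python
# Sketch of an Envoy-style feedback loop: propose candidates, record
# accept/reject decisions, update the model incrementally.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # online logistic regression
classes = np.array([0, 1])               # 0 = reject, 1 = accept

def featurize(candidate: dict) -> np.ndarray:
    # Stand-in features: years of experience, high-stress background,
    # childcare experience. A real system would use far richer signals.
    return np.array([[candidate["years"],
                      candidate["high_stress"],
                      candidate["childcare"]]], dtype=float)

feedback = [
    ({"years": 6, "high_stress": 1, "childcare": 1}, 1),   # accepted
    ({"years": 2, "high_stress": 0, "childcare": 0}, 0),   # rejected
    ({"years": 15, "high_stress": 0, "childcare": 0}, 0),  # "overqualified"
]
for candidate, label in feedback:
    model.partial_fit(featurize(candidate), [label], classes=classes)

# Later candidates are ranked by the updated model.
candidate = {"years": 5, "high_stress": 1, "childcare": 1}
print(model.predict_proba(featurize(candidate))[:, 1])
```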
You know, this is a bit of a tangent, but when I talk to folks on the show about whether there really is this huge shortage of people with technical skills and machine learning backgrounds, they all say, “Oh yeah, it’s a real problem.” I assume for them it’s like, “I want somebody with a machine learning background, and they need to have a pulse; other than that, I’m fine.” So is that your experience: that people with these skills are, right now, in incredibly high demand?
You’re absolutely right, there is high demand for people with machine learning skills. But I have been building products for many years now, and I know that to make any good product, you need a good team. It’s not about one person. Intuitively, we have all known this: whether in machine learning, finance, or healthcare, it takes a team to accomplish a job. When you are working on a patient in an operating theater, it’s not only the doctor that matters; it’s the team of people that makes an operation successful. The same goes for machine learning systems. When you are building a machine learning system, it’s a team of people working together, not one engineer or one data scientist who makes it all possible. So you need to create the right team, one that works well together, respects each other, and builds on each other’s strengths; with a team that’s constantly fighting, you will never accomplish anything. So yes, there is high demand for people in machine learning and data science. But every company and every project requires a good team, and you want the right fit of people for that team, rather than just individually good people.
So, in a sense, Entelo may invert the setup you started with, where you post the job and get a thousand resumes. Somebody like a machine learning guru might get a thousand companies that want them. Will that happen? Do you think people with high-demand skills will get heavily recruited by these systems in that kind of outreach way?
I think it comes back to this: if all we were doing was keyword search, then you’d be right. One resume would look good to everyone because it has all the right keywords. But we don’t do that. When we hire people onto our teams, we are not just doing a keyword search. We want the person who is the right fit for the team: a person with the right skills, attributes, and understanding. It may be that you want someone experienced in your industry, or someone who has worked on a small team, or someone who has worked at a startup before. There are many, many dimensions along which candidates are found by companies and a good match happens. So I don’t think it will be one candidate surfaced to a thousand companies with a thousand job offers. It’s usually that every candidate has the right fit, every role has the right need for the right candidate, and it’s that matching of candidate and role that creates a win-win situation for both sides.
Well, I do want to say, you’re right that this is one of those areas where we still largely do it the old-fashioned way: somebody looks at a bunch of people and makes a gut call. So I think you’re right that it’s an area where technology can be deployed to really increase efficiency, and what better place to increase efficiency than building your team, as you said. So I guess that’s it! We are running out of time here. I’d like to thank you so much for being on the show, and wish you well in your endeavor.
Thank you, Byron. Thanks for inviting me and thank you to your listeners for humoring us.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 43: A Conversation with Markus Noga

[voices_in_ai_byline]
In this episode, Byron and Markus discuss machine learning and automation.
[podcast_player name=”Episode 43: A Conversation with Markus Noga” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-05-22-(00-58-23)-markus-noga.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2018/05/voices-headshot-card.jpg”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today, my guest is Markus Noga. He’s the VP of Machine Learning over at SAP. He holds a Ph.D. in computer science from the Karlsruhe Institute of Technology, and prior to that spent seven years at Booz Allen Hamilton helping businesses adopt technology and transform themselves through IT. Welcome to the show, Markus.
Markus Noga: Thank you, Byron. It’s a pleasure to be here today.
Let’s start off with a question I have yet to have two people answer the same way. What is artificial intelligence?
That’s a great one, and it’s certainly something few people agree on. The textbook definition mostly defines it by analogy with human intelligence, and human intelligence is also notoriously tricky and hard to define. I define human intelligence as the ability to deal with the unknown, bring structure to the unstructured, and answer novel questions in a surprisingly resourceful and mindful way. Artificial intelligence itself, rather more playfully, is the thing that is always three to five years out of reach. We love to focus on what can be done today, what we call machine learning and deep learning, which can deliver tremendous value for businesses and for individuals already today.
But in what sense is it artificial? Is it artificial in the way artificial turf is: not really turf, it just looks like it? Or is it just artificial in the sense that we made it? Put another way: is artificial intelligence actually intelligent, or does it just behave intelligently?
You’re going very deep here, into things like Searle’s Chinese room paradox: the guy in the room with a book of rules for how to transcribe Chinese symbols so as to have an intelligent conversation. The question being, who or what is having the intelligent conversation? Is it the book? Certainly not. Is it the guy mindlessly transcribing the symbols? Certainly not. Is it maybe the system of the guy, the book, and the room itself that generates these intelligent-seeming responses? I guess I’m coming down on the output-oriented side here. I try not to think too hard about the inner states or qualia, or the question of whether the neural networks we’re building have a sentient experience. For me, what counts is whether we can solve real-world problems in a way that’s consistent with intelligence. Whether that is intelligence or merely intelligent behavior, and everything else, I would leave to the philosophers, Byron.
We’ll get to the part where we talk about the effects of automation and what we can expect and all of that. But don’t you think that, at some level, understanding that question informs what’s possible, what kinds of problems we should point this technology at? Or do you think it’s entirely academic and has no real-world implications?
I think it’s extremely profound, and it could unlock a whole new curve of value creation. It’s also something that, in dealing with real-world problems today, we may not have to answer, and this is maybe specific to our approach. You’ve seen all these studies saying that X percent of activities can be automated with today’s machine learning, and Y percent could be automated if there were better natural language and speech processing capabilities, and so on, and so forth. There’s such tremendous value to be had by going after these low-hanging fruits, doing applied engineering, bringing ML and deep learning into an application context. We can bide our time until there is a full answer to strong AI and some of the deeper philosophical questions; what is available now already delivers tremendous value, and will continue to do so over the next three to five years. That’s my business hat on: it’s what I focus on together with the teams I work with. The other question is one I find tremendously interesting for my weekends and private conversations.
Let me ask you a different one. You started by defining artificial intelligence in terms of human intelligence. When you’re thinking of a problem you’re going to try to use machine intelligence to solve, are you inspired in any way by how the brain works? Or is that just a completely different way of doing it? Do we learn how intelligence, with a capital I, works by studying the brain?
I think that’s a multi-level answer, because clearly the architectures that do really well in deep learning today are to a large degree neurally inspired. Multi-layered deep networks with a local connection structure, with these things we call convolutions that people use so successfully in computer vision, closely resemble some of the structures you see in the visual cortex, with cortical columns, for example. And the self-referential, recurrent networks that people use a lot for video and text processing these days are also very deeply neurally inspired. On the other hand, we’re seeing that a lot of the approaches that make ML very successful today are about as far from neurally-inspired learning as you can get.
Example one: as a discipline, we struggled with neurally-inspired transfer functions, which were all nice, and biological, and smooth, and we couldn’t really train deep networks with them because they would saturate. One of the key enablers for modern deep learning was to step away from the biological analogy of smooth signals and go to something like the rectified linear unit, the ReLU function, as an activation, and that has been a key part of being able to train very deep networks. Another example: when a human or an animal learns, we don’t give them 15 million cleanly labeled training examples and expect them to go over those examples ten times in a row to arrive at something. We’re much closer to one-shot learning, able to recognize the person with the top hat on their head on the basis of one description, or one image that shows us something similar.
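A quick way to see why saturation matters, in a toy sketch with made-up pre-activations: the sigmoid’s derivative never exceeds 0.25, so gradients shrink multiplicatively with depth, while ReLU passes a gradient of exactly 1 wherever its input is positive:

```python
# Gradients through a 50-layer chain, one unit per layer.
# Sigmoid derivatives (<= 0.25) multiply to nearly nothing;
# ReLU passes gradient 1 along any path of positive inputs.
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)          # peaks at 0.25 when x == 0

def relu_grad(x):
    return (x > 0).astype(float)  # exactly 1 for positive inputs, else 0

rng = np.random.default_rng(1)
pre_activations = rng.normal(size=50)   # a single path through 50 layers

print("sigmoid:", np.prod(sigmoid_grad(pre_activations)))  # vanishes toward 0
print("relu:   ", np.prod(relu_grad(pre_activations)))     # 1 if path active, else 0
```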
So clearly, the approaches that are most successful today share some deep neural inspiration as a basis, but they also depart into computationally tractable, and very, very different, kinds of implementations from the networks we see in our brains. I think both of these themes are important in advancing the state of the art in ML, and there’s a lot going on. In areas like one-shot learning, for example, people are right now trying to mimic more of the way the human brain, with an active working memory and rich associations, is able to process new information, and there’s almost no resemblance to what convolutional and recurrent networks do today.
Let’s go with that example. If you take a small statue of a falcon and you put it in a hundred photos, and sometimes it’s upside down, sometimes it’s lying on its side, sometimes it’s half in water, sometimes obscured, sometimes in shadow, a person just goes “boom, boom, boom” and picks them out left and right with no effort. You know, one-shot learning. What do you think a human is doing? It’s an instance of some kind of transfer learning, but what do you think is really going on in the human brain, and how do you map that to computers? How do you deal with that?
This is an invitation to speculate on the topic of falcons, so let me try. Clearly, our brains have built a representation of the real world around us, because we’re able to create that representation even though the visual and other sensory stimuli that reach us are not in fact as continuous as they seem. Standing in the room here, having this conversation with you, my mind creates the illusion of a continuous space around me; but in fact I’m getting discrete feedback from my eyes as they saccade and jump around the room. The illusion of a continuous presence, the continuous sharp resolution of the room, is just that: an illusion, because our minds have built very, very effective mental models of the world around us that compress incoming information and make it tractable at an abstract level.
Some of the things going on in research right now are trying to exploit these notions, using a lot of unsupervised training with some very simple assumptions behind it: basically, the mind doesn’t like to be surprised, and would therefore like to predict what comes next. That lets you leverage very, very powerful unsupervised training approaches, where you can use any kind of data that’s available and don’t need labels to come up with these unsupervised representation-learning approaches. They seem to be very successful, and they’re beating a lot of the traditional approaches, because you have access to far larger corpora of unlabeled information, which means you can train better models.
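A minimal instance of that “predict what’s next” idea, using a stand-in corpus: a character bigram model trained on raw, unlabeled text, where the next character serves as its own supervision:

```python
# "Predict what's next" from raw, unlabeled text: a character bigram
# model. No labels are needed; the next character supervises itself.
from collections import Counter, defaultdict

corpus = "the mind does not like to be surprised. the mind predicts."
counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1    # tally what follows each character

def predict_next(char: str) -> str:
    return counts[char].most_common(1)[0][0]

print(predict_next("t"))               # 'h', learned from the raw text alone
```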
Now, is that a direct analogy to what the human brain does? I don’t know. But certainly it’s an engineering strategy that delivers world-leading performance on a number of very popular benchmarks right now, and it is, broadly speaking, neurally inspired. So bringing together what our brains do and what we can do in engineering is always a dance between the abstract inspiration we can get from how biology works, and the very hard math and engineering of getting solutions to train on large-scale computers, with hundreds of teraflops of compute capacity and large matrix multiplications in the middle. It’s advances on both sides of the house that make ML advance rapidly today.
Then take a similar problem, or tell me if this is a similar problem. When you’re doing voice recognition and there’s somebody outside with a jackhammer, it’s annoying, but a human can separate those two things and hear what you’re saying just fine, while for a machine that’s a really difficult challenge. Now my question to you: is that the same problem? Is it one trick humans have that we apply in a number of ways? Or is something completely different going on in that example?
I think it’s similar, and you’re hitting on something, because in the listening example there are both active and passive components. We’re all familiar with the phenomenon of selective hearing at a dinner party where 200 conversations are going on in parallel. If we focus our attention on a certain speaker or a certain part of the conversation, we can make them stand out over the din and the noise, because our own mind has prior assumptions about what constitutes a conversation, and we can exploit those priors to selectively listen in. This has partly a physical characteristic, hearing in stereo: our ears have certain directional characteristics in the way they pick up certain frequencies, and by turning and inclining our heads the right way we can already do a lot of stereo separation; whereas if you have a single microphone, and that’s all the signal you get, all those avenues are closed to you.
But I think the main story is one of signal superimposed with noise, whether that’s camera distortion, fog, or poor lighting in the case of the statue we’re trying to recognize, or ambient noise and intermittent dropouts in the audio signal you’re listening to. The two most popular neurally-inspired architectures on the market right now are the convolutional networks, for a lot of things in the image and natural-text space, and the recurrent networks, for a lot of things in the audio and time-series space, but also in text. Both share the characteristic that they are vastly more resilient to noise than any hard-coded or programmed approach. The underlying problem is one that, five years ago, would probably have been considered unsolvable; today, with these modern techniques, we’re able to train models that can adequately deal with these challenges, as long as the information is in the signal at all.
Well, what about when a human hears a conversation at the party, to go with that example, and thinks, “Oh, I want to listen to that”? I hear what you said: there’s one aspect where you make a physical modification to the situation. But you’ve also introduced this idea of consciousness, that a person can selectively change their focus, like, “Oh, wait a minute.” Is that aspect of what the brain is doing hard to implement on a machine, or is that not the case at all?
If you take that idea, and in the ML research and engineering communities this is currently most popular under the label of attention, or attention-based mechanisms, then certainly this is all over the leading approaches right now, whether it’s the computer vision papers from CVPR just last week or the text processing architectures that return state-of-the-art results. They all start to include some kind of attention mechanism, allowing you both to weight outputs by the center of attention and to trace results back to centers of attention, which has two very nice properties. On the one hand, attention mechanisms, nascent as they are today, help improve the accuracy of what models can deliver. On the other hand, the ability to trace the outcome of a machine learning model back to centers and regions of attention in the input can do wonders for the explainability of ML and AI results, which is something users and customers are increasingly looking for. Don’t just give me a result that is as good as my current process, or hopefully a couple of percentage points better; also help me build confidence in it by explaining why things are being classified or categorized or translated or extracted the way they are. To gain human trust in an operating system of humans and machines working together, explainability is going to be big.
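For readers who want the mechanism itself, here is a minimal sketch of scaled dot-product attention over a toy sequence; the `weights` matrix is exactly the piece you can trace back for explainability, since it records how much each output position attended to each input:

```python
# Minimal scaled dot-product attention. Row i of `weights` says how
# much output position i attended to each input position.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key similarity
    weights = softmax(scores, axis=-1)        # attention distribution per query
    return weights @ V, weights               # weighted sum of values, plus trace

rng = np.random.default_rng(0)
seq_len, dim = 4, 8
Q, K, V = (rng.normal(size=(seq_len, dim)) for _ in range(3))

output, weights = attention(Q, K, V)
print(weights.round(2))    # each row sums to 1: where each position "looked"
```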
One of the peculiar things to me, with regard to strong AI, general intelligence, is this: when you ask people, “When will we get a general intelligence?”, the soonest you ever hear is five years. There are very famous people who believe we’re going to have something very soon. Then, at the other extreme, you hear about 500 years, and that worrying about it is like worrying about overpopulation on Mars. My question to you is: why do you think there’s such a wide range in our ideas of when we may make such a breakthrough?
I think it’s because of one vexing property of humans and machines: the things that are easiest for us humans tend to be the things that are hardest for machines, and vice versa. Today, nobody would dream of having “computer” as a job description; that’s a machine. But if you think back 60 or 70 years, computer was the job description of people doing manual calculations. “Printer” was a job description, and a lot of other things we would never dream of doing manually today were being done manually. Think of spreadsheets, potentially the greatest simple invention in computing; think of databases; think of things like the enterprise resource planning systems that SAP does, and the business networks connecting them, or any kind of cloud-based solution. What they deliver is tremendous, and it’s very easy for machines to do, but it tends to be the kind of thing that is very hard for humans. At the same time, things that are very easy for humans, like seeing a doggie and shouting “doggie,” or seeing a cat and saying “meow,” are things toddlers can do, but until very, very recently the best and most sophisticated algorithms couldn’t do them.
I think part of the excitement around ML and deep learning right now is that a lot of these barriers have fallen. We’re seeing superhuman performance on image classification tasks, superhuman performance on things like Switchboard voice-to-text transcription, and many other tasks that used to be easy for humans but impossible for machines are now falling to machines. That generates a lot of excitement. Where we have to be careful is in letting this guide our expectations about the speed of progress in the following years. Human intuition about what is easy and what is hard is traditionally a very, very poor guide to the ease of implementation with computers and with ML.
For example, my son asked me yesterday, “Dad, how come the car knows where it is and can tell us where to drive?” And I said, “Son, that’s fairly straightforward. There are all these satellites flying around, and they’re shouting at us, ‘It’s currently 2 o’clock and 30 seconds,’ and we’re just measuring the time between their shouts to figure out where we are on the planet. It’s not a new invention; it’s the GPS system. It’s mathematically super hard for a human with a slide rule and very easy for the machine.” And my son said, “Yeah, but that’s not what I wanted to know. How come the machine is talking to us with a human voice? That is what I find amazing, and I would like to understand how that is built.” I think our intuition about what’s easy and what’s hard is historically a very poor guide to what the next steps and the future of ML and artificial intelligence look like. This is why you’re getting those very broad bands of predictions.
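The father’s explanation translates almost directly into code. Here is a two-dimensional toy with perfect clocks and invented satellite positions; real GPS is three-dimensional and must also solve for the receiver’s clock error:

```python
# Satellites broadcast "it is now t"; the receiver turns arrival
# delays into distances (d = c * dt) and solves a small linear
# system for its own position.
import numpy as np

C = 299_792_458.0                                  # speed of light, m/s
sats = np.array([[0.0, 0.0], [10_000.0, 0.0], [0.0, 10_000.0]])
receiver = np.array([3_000.0, 4_000.0])            # unknown in real life

delays = np.linalg.norm(sats - receiver, axis=1) / C   # measured travel times
d = C * delays                                         # back to distances

# Subtracting the first range equation from the others makes it linear.
A = 2 * (sats[0] - sats[1:])
b = (d[1:] ** 2 - d[0] ** 2
     - (sats[1:] ** 2).sum(axis=1) + (sats[0] ** 2).sum())
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)                                    # ~[3000. 4000.]
```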
Well, do you think the difference between the narrow or weak AI we have now and strong AI is evolutionary? Are we on a path where, as machines get somewhat faster and we get more data and better algorithms, we’ll gradually get to a general intelligence? Or is general intelligence something very different, a whole different problem from the kinds of problems we’re working on today?
That’s a tough one. Taking the brain analogy: today we’re doing the equivalent of very simple sensory circuits, which can maybe replicate the first couple of dozen, or maybe a hundred, layers of the way the visual cortex works. We’re starting to make progress on things like one-shot learning; that’s very nascent, early-stage research right now. We’re making much more progress in directions like reinforcement learning. But overall it’s very hard to say which, if any, additional mechanisms are there in the large. If you look at the biological system of the brain, there’s a molecular level that’s interesting, a cellular level that’s interesting, a local interconnection level that’s interesting, and a macro-interconnection level that’s interesting. I think we’re still far from a complete understanding of how the brain works. Right now we have tremendous momentum and a very exciting trajectory with what artificial neural networks can do, at least for the next three to five years, and there seems to be pretty much limitless potential to bring them into real-world businesses, real-world situations and contexts, and to create amazing new solutions. Do I think that will really deliver strong AI? I don’t know. I’m an agnostic, so I always fall back on the position that I don’t know enough.
Only one more question about strong AI, and then let’s talk about the shorter-term future. Human DNA, converted to code, is something like 700 MB, give or take. But the part that’s uniquely human, compared to, say, a chimp, is only about a 1% difference, only 7 or 8 or 9 MB of code, and that is what gives us general intelligence. Does that tell us anything about how to build something that can become generally intelligent? Does it imply to you that general intelligence is actually simple and straightforward, that we can look at nature and say it’s really a small amount of code, and therefore we should be looking for simple, elegant solutions to general intelligence? Or do those two things just not map at all?
Certainly, what we’re seeing today is that deep learning approaches to problems like image classification, object detection, image segmentation, video annotation, and audio transcription tend to be orders of magnitude smaller, as code, than what we dealt with when we handcrafted solutions. The core of most deep learning solutions, if you really look at the core model and the model structure, tends to be maybe 500 or 1,000 lines of code. That’s within reach of an individual putting it together over a weekend, so one huge democratization that deep learning based on big data brings is that a lot of the models doing amazing things are very, very small code artifacts. The weight matrices and binary models they generate, on the other hand, tend to be as large as, or larger than, traditional programs compiled into executables, sometimes orders of magnitude larger again. The thing is, they are very hard to interpret, and we’re only at the beginning of explainability for what the different weights and activations mean. There are some nice early visualizations of this, and some nice visualizations explaining what’s going on with attention mechanisms in artificial networks.
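The “core model” point in miniature: a complete convolutional classifier definition fits in a few dozen lines (the layer sizes here are illustrative), while the trained weights already outnumber the source lines by orders of magnitude:

```python
# A complete convolutional classifier definition. The trained weight
# file would dwarf this source code by orders of magnitude.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 16x16 -> 8x8
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SmallClassifier()
print(sum(p.numel() for p in model.parameters()))    # parameters >> lines of code
print(model(torch.randn(1, 3, 32, 32)).shape)        # torch.Size([1, 10])
```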
As for the explainability of the real network in the brain, that is very nascent. I’ve seen some great papers and results on things like spatial representations in the visual cortex, and intriguing attempts to reconstruct the image hitting the retina by reading, with fMRI scans, the excitations in lower levels of the visual cortex. They show that we’re getting closer to understanding the first few layers. But even with the roughly 7 MB difference you allude to between chimps and humans spelled out for us, there is a whole set of layers of abstraction between the DNA code, the RNA representation, the protein representation, the activation of genes through methylation and other mechanisms, and the interplay of proteins across a living, breathing human brain. All of that adds magnitudes of complexity on top of the seven-megabyte difference in A’s, C’s, T’s, and G’s. We live in super exciting times, times where a new record, a new development, a new capability that was unthinkable a year ago, let alone a decade ago, becomes commonplace. It’s an invigorating and exciting time to be alive. But I still struggle to predict the year we get general AI from a straight-line trend.
There’s some fear wrapped up in it, though, as exciting as AI is. The fear is about the effect of automation on employment. You know this, of course; it’s covered so much. There are kind of three schools of thought. One says we’re going to automate certain tasks, and there will be a group of individuals who don’t have the training to add economic value; they will be pushed out of the labor market, and we’ll have perpetual unemployment, like a big depression that never goes away. Another group says, “No, no, no, you don’t understand: everybody is replaceable. Every single job we have, machines can do.” And then a third school says, “None of that’s going to happen. The history of 250 years of the Industrial Revolution is that people take these new technologies, even profound ones like electricity and engines and steam, and just use them to increase their own productivity and drive wages up. We’re not going to have any permanent unemployment from this.” Which of those three camps, or a fourth, do you fall into?
I think there’s a lot of historical precedent for how technology gets adopted, and there are also numbers on the adoption of technologies in our own day and age that serve as reference points here. For example, one of the things that has truly surprised me is the amount of e-commerce, as a percentage of overall retail market share: it’s still in the mid-to-high single digits, according to surveys I’ve seen. That totally does not match my personal experience of doing basically all my non-grocery shopping online. But it shows that in the 20-25 years of the Internet revolution, tremendous value has been created, with the convenience of having all kinds of stuff at your doorstep with a single click, and yet the transformation we’ve seen has captured only a single-digit percentage of the overall retail market. And this was one of the most rapid uptakes in history of a new technology with groundbreaking value, decoupling atoms and bits, playing out over the past 20-25 years in front of all of us.
So, I think while there is tremendous potential for machine learning and AI to drive another industrial revolution, we're also in the middle of all these curves from other revolutions that are ongoing. We've had a mobile revolution that unshackled computers and gave everybody what used to be a supercomputer in their pocket. Before that, we had the client-server revolution, and the personal computing revolution in its own right—all of these building on prior revolutions like electricity, or the internal combustion engine, or methods like the printing press. They certainly show a tendency toward accelerating technology cycles. But on the other hand, for something like e-commerce or even mobile, the actual adoption speed has been none too frightening. So for all the tremendous potential that ML and AI bring, I would be hard-pressed to come up with a completely disruptive scenario here. I think we are seeing a technology with tremendous potential for rapid adoption. We're seeing the potential both to create new value and do new things, and to automate existing activities, which continues past trends. Nobody has "computer" or "printer" as their job description today, and job descriptions like social-media influencer, or blogger, or web designer did not exist 25 years ago. This is an evolution of Schumpeterian creative destruction that is going on in every industry, in every geography, with every new technology curve that comes along.
I would say fears in this space are greatly overblown today. But fear is real the moment you feel it, which is why institutions—like the Partnership on AI, with the leading technology companies as well as the leading NGOs, think tanks, and research institutes—are coming together to discuss the implications of AI, the ethics of AI, and safety and guiding principles. All of these things are tremendously important to make sure that we can adopt this technology with confidence. Just remember that when cars were new, Great Britain had a law that a person with a red flag had to walk in front of the car in order to warn all pedestrians of the danger that was approaching. That was certainly an instance of fear about technology that, on the one hand, was real at that point in time, but that also went away with a better understanding of how the technology works and of its tremendous value to the economy.
What do you think of these efforts to require that when an artificial intelligence makes a ruling or a decision about you that you have a right to know why it made that decision? Is that a manifestation of the red flag in front of the car as well, and is that something that would, if that became the norm, actually constrain the development of artificial intelligence?
I think you’re referring to the implicit right to explanation on this part of the European Union privacy novella for 2018. Let me start by saying that the privacy novella we’re seeing is a tremendous step forward because the simple act of harmonizing the rules and creating one digital playing field across the hundreds of millions of European citizens, and countries, and nationalities, is a tremendous step forward. We used to have one different data protection regime for each federal state in Germany, so anything that is required and harmonized is a huge step forward. I also think that the quest for an explanation is something that is very human. At the core of us is to continue to ask “why” and “how.” That is something that is innate to ourselves when we apply for a job with the company, and we get rejected. We want to know why. And when we apply for a mortgage and we can offer a rate that seems high to us and we want to understand why. That’s a natural question, it’s a human question, and it’s an information need that needs to be served if we don’t want to end up in a Kafka-esque future where people don’t have a say about their destiny. Certainly, that is hugely important on the one hand.
On the other hand, we also need to be sure that we don't hold ML and AI to a stricter standard than we hold humans to today, because that could become an inhibitor to innovation. So if you ask a company why you didn't get accepted for that job, they will probably say, "Dear Sir or Madam, thank you for your letter. Due to the unusually strong field of candidates for this particular posting, we regret to inform you that certain others were stronger, and we wish you all the best for your continued professional future." That is what almost every rejection letter reads like today. Are we asking the same kind of explainability from an AI system that is delivering a recommendation as we apply to the system of humans and computers working together to create a letter like that? Or are we holding it to a much, much higher standard? If it's the first, absolutely essential. If it's the second, we've got to watch whether we're throwing out the baby with the bathwater on this one. This is something where we need to work together to find the appropriate levels and standards for explainability in AI—to fill very abstract phrases like "right to an explanation" with life, so they can be implemented, can be delivered, and can provide satisfactory answers, while at the same time not unduly inhibiting progress. With a lot of players focused on explainability today, this is an area where we will certainly see significant advances going forward.
If you’re a business owner, and you read all of this stuff about artificial intelligence, and neural nets, and machine learning, and you say, “I want to apply some of this great technology in my company,” how do people spot problems in a business that might be good candidates for an AI solution?
I can best answer that by turning it around and asking, "What's keeping you awake at night? What are the three big things that make you worried? What are the things that make up the largest part of your uncertainty, or of your cost structure, or of the value that you're trying to create?" Looking at end-to-end processes, it's usually fairly straightforward to identify cases where AI and ML might be able to help and to deliver tremendous value. The use-case identification tends to be the easy part of the game. Where it gets tricky is in selecting and prioritizing these cases, figuring out the right things to build, and finding the data that you need in order to make the solution real, because unlike traditional software engineering, this is about learning from data. Without data, you basically can't start, or at least you have to build some very small simulators in order to create the data that you're looking for.
You mentioned that that's the beginning of the game, but what makes the news all the time is when AI beats a person at a game. In 1997 you had chess, then you had Ken Jennings in Jeopardy!, then you had AlphaGo and Lee Sedol, and you had AI beating poker. Is it a valid approach to say, "Look around your business and look for things that look like games"? Because games have constrained rules, and they have points, and winners, and losers. Is that a useful way to think about it? Or are the games more like publicity for AI, a PR campaign, and not really a useful metaphor for business problems?
I think these very publicized showcases are extremely important to raise awareness and to demonstrate stunning new capabilities. What we see in building business solutions is that you don't necessarily have to beat the human world champion at something in order to deliver value, because a lot of business is about processes—about people following flowcharts together with software systems, trying to deliver a repeatable process for things like customer service, or IT incident handling, or incoming invoice screening and matching, or other repetitive, recurring tasks in the enterprise. And already by addressing, say, 60-80% of these, we can create tremendous value for enterprises: making processes run faster, making people more productive, and relieving them of the parts of activities that they regard as repetitive, mind-numbing, and not particularly enjoyable.
The good thing is that in a modern enterprise today, people tend to have IT systems in place where all these activities leave a digital exhaust stream of data, and tapping into that digital exhaust stream and learning from it is the key to making ML solutions for the enterprise feasible today. This is one of the things where I'm really proud to be working for SAP, because 76% of all business transactions, as measured by value, anywhere on the globe touch an SAP system today. So if you want to learn models from digital information that touches the enterprise, chances are it's either in an SAP system or in a surrounding system already. Looking for these opportunities—doing the intersection between what's attractive, because I can serve core business processes with faster speed, greater agility, lower cost, more flexibility, or bigger value, and what's feasible, because I have the digital information to learn from to build business-relevant functionality today—is our overriding approach to identifying the things we build in order to make all our SAP enterprise applications intelligent.
Let’s talk about that for a minute. What sorts of things are you working on right now? What sorts of things have the organization’s attention in machine learning?
It’s really end-to-end digital intelligence on processes, and let me give you an example. If you look at the finance space, which SAP is well-known for, these huge end-to-end processes—like record to report, or things like invoice to record—which really deal end-to-end with what an enterprise needs to do in order to buy stuff and pay for it, and receive it, or to sell stuff, and get paid for it. These are huge machines with dozens and dozens of process steps, and many individuals in shared service environments that otherwise perform the delivering of these services. They see a document like an invoice, for example, it’s just the tip of the iceberg for a complex orchestration and things to deal with that. We’re taking these end-to-end processes, and we’re making them intelligent every step of the way.
When an invoice hits the enterprise, the first question is: what's in it? Today, most of the teams in shared-service environments extract the relevant information manually into SAP systems. The next question is, "Do I know this supplier?" If they have merged, or changed names, or opened a new branch, I might not have them in my database. That's a fuzzy lookup. The next step might be, "Have I ordered something like this?"—a significant question, because in some industries up to one-third of spending doesn't have a purchase order. Finding who has ordered this stuff, or related stuff, from this supplier or similar suppliers in the past can be the key to figuring out whether we should approve it or not. Then there's the question, "Did we receive the goods and services that this invoice is for?" That's about going through lists and lists of stuff, and figuring out whether the bill of lading for the truck that arrived really covers all the things that were on the truck and all the things that were on the invoice, but no other things. That's about list matching and list comparison, document matching, and recommendation and classification systems. It goes on and on like that until the point where we actually put through the payment and the supplier gets paid for the invoice.
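To make the "fuzzy lookup" step concrete, here is a minimal sketch in Python—purely illustrative, and not SAP's implementation—of matching a supplier name on an incoming invoice against a vendor master list using the standard library's difflib. The vendor names and the similarity threshold are invented for the example.

```python
from difflib import SequenceMatcher

# Hypothetical vendor master data; a real system would query a database.
VENDORS = ["Acme Industrial GmbH", "Acme Industrial Holdings", "Bolt & Nut AG"]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two strings are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def fuzzy_lookup(invoice_name: str, threshold: float = 0.8):
    """Return the best-matching known vendor, or None if nothing is close enough."""
    best = max(VENDORS, key=lambda v: similarity(invoice_name, v))
    return best if similarity(invoice_name, best) >= threshold else None

# A renamed or slightly different supplier string still resolves
# to the existing master record.
print(fuzzy_lookup("ACME Industrial GmbH & Co."))  # -> Acme Industrial GmbH
```

A production system would of course learn from far richer signals—addresses, bank details, tax IDs—but the shape of the matching problem is the same.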
What you see is a digital process enabled by very sophisticated IT systems, with routine workflows among many human participants. What we can do is take the digital exhaust of all the process participants to learn what they've been doing, and then put the common, the repetitive, the mind-numbing parts of the process on autopilot—gaining speed, reducing cost, making people more satisfied with their work day because they can focus on the challenging, the interesting, and the stimulating cases, and increasing customer satisfaction—or, in this case, supplier satisfaction, because they get paid faster. This end-to-end approach is how we look at business processes, and when my ML and AI group does that, we see a recommender, an entity extractor, or some kind of translation mechanism at every step of the process. We work hard to turn these capabilities into scalable APIs on our cloud platform that integrate seamlessly with the standard applications, and that's really our approach to problem-solving. It ties into the underlying data repository of how business operates and how processes flow.
Do you find that your customers are clear about how this technology can be used, coming to you and saying, "We want this kind of functionality, and we want to apply it this way," with very clear goals and objectives? Or are you finding that people are still finding their sea legs and figuring out ways to apply artificial intelligence in the business, so that you're more having to lead them and say, "Here's a great thing you could do that you maybe didn't know was possible"?
I think it’s like everywhere, you’ve got early adopters, and innovation promoters, and dealers who actively come with these cases of their own. You have more conservative enterprises looking to see how things play out and what the results for early adopters are. You have others who have legitimate reasons to focus on burning parts of their house right now, for whom this, right now is not yet a priority. What I can say is that the amount of interest in ML and AI that we’re seeing from customers and partners is tremendous and almost unprecedented, because they all see the potential to tag business processes and the way business executes to a complete new level. The key challenge is working with customers early enough, and at the same time working with enough customers in a given setting to make sure that this is not a one-off that is highly specific, and to make sure that we’re really rethinking the process with digital intelligence instead of simply automating the status quo. I think this is maybe the biggest risk. We have tremendous opportunity to transform how business is done today if we truly see this through end-to-end and if we are looking to build out the robots. If we’re only trying to build isolated instances of faster horses, the value won’t be there. This is why we take such an active interest in the end-to-end and integration perspective.
Alright, well, I guess just two final questions. The first is: overall, it sounds like you're optimistic about the transformative power of artificial intelligence and what it can do—
Absolutely, Byron.
But I would put that question to you that you put to businesses. What keeps you awake at night? What are the three things that worry you? They don’t have to be big things, but what are the challenges right now that you’re facing or thinking about like, “Oh, I just wish I had better data or if we could just solve this one problem?”
I think the biggest thing keeping me awake right now is the luxury problem of being able to grow as fast as demand and the market want us to. That has all the aspects of organizational scaling, and of scaling the product portfolio that we enable with intelligence. Fortunately, we're not a small start-up with limited resources. We are the leading enterprise software company, and scaling inside such an environment is substantially easier than it would be on the outside. Still, we've been doubling every year, and we look set to continue in that vein. That's certainly the biggest strain and the biggest worry that I face. I wish I had more time to play with models and with the technology, and to actually build and ship a great product. What keeps me awake are the more old-fashioned things, like leadership development, that matter the most for where we're at right now.
At the very beginning you said that during the week you're all about applying these technologies to businesses, and then on the weekend you think about some of these fun problems. I'm curious whether you consume science fiction—books, or movies, or TV—and if so, is there any view of the future, anything you've read or seen or experienced, that made you think, "Ah, I could see that happening," or "Wow, that really made me think"? Or do you not consume science fiction?
Byron, you caught me out here. The last thing I consumed was actually Valerian and the City of a Thousand Planets, just last night, in the movie theater in Karlsruhe that I went to all the time when I was a student. While not per se occupied with artificial intelligence, it was certainly stunning, and I do consume a lot of this stuff for the escape of it and for the view it provides of plausible futures. Most of the things I tend to read are more focused on space, oddly enough. Things like The Three-Body Problem, and the fantastic trilogy that that became, really aroused my interest and really made me think. There are others that offer very credible trajectories. I was a big fan of the book Accelerando, which paints a credible trajectory from today's world of information technology to an upload culture of digital minds and humans colonizing the solar system and beyond. I think these escapes are critical to clear the head from day-to-day business and the pressures of delivering product under given budgets and deadlines. Indulging in them allows me to return relaxed, refreshed, and energized every Monday morning.
Alright, well that’s a great place to leave it, Markus. I’m want to thank you so much for your time. It sounds like you’re doing fantastically interesting work, and I wish you the best.
Did I mention that we’re hiring? There’s a lot of fantastically interesting work here, and we would love to have more people engaging in it. Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Expectation Versus Reality: Are boardrooms blocking digital revolutions?

There’s a heck of a lot of new technology available in the market. From artificial intelligence to blockchain, companies are inundated with new tools and solutions that promise to revolutionize aspects of their business they didn’t even know could be (or needed to be) improved. For executives facing intense pressure to keep up with the latest technology trends to remain competitive, figuring out what the true value is, versus more noise, is a daunting task. And unfortunately, this is leaving many companies to decide to stick with the status quo – so they’re falling behind.
A new survey by Gartner found that 91 percent of companies still haven't reached a "transformational" level of maturity in data and analytics, despite this having been the number one priority for CIOs in recent years. As most businesses have not yet been able to fully implement and reap ROI from data analytics, which is the foundation of popular technologies like AI and machine learning, it's clear these new tools still have a long way to go before they exit the hype cycle and enter operational reality.
But while the board may evangelize these major technology initiatives, what they need to realize is that major digital disruptions are a long-term strategy that requires ongoing thought, planning and incremental tech investments. Simply having an end goal of making AI a reality in your business won't necessarily get you there. Today, there are smaller tech trends that are fully operational and promise a bridge to the future. One such example is automation.
While not as sexy or headline-grabbing as AI, software robots today can perform a lot of the repetitive and time-consuming business tasks across departments with greater speed, accuracy, and ROI – directly benefiting the bottom line.
But any business automation rollout has to start from the top. It requires careful planning and backing from the board in order for the C-suite to correctly navigate the changes it brings – operationally, culturally and technologically.
Here are three ways that the boardroom can break out of old habits and bring on the digital revolution.
Remove the bottlenecks
It’s clear that automation is at or near the top of the priority list, and the C-suite is beginning to reflect this.  According to a survey by KPMG, 25 percent of enterprises worldwide now have a Chief Digital Officer to lead this change.
However, the CDO has a long road ahead of them. A recent survey revealed that in 74 percent of organizations, automation is only being implemented by the IT department. Unfortunately, that’s a recipe for failure. On average, 25 percent of technology projects fail, and many more show little return on investment or need significant alteration to be successful. Often, it’s because IT projects are simply that: IT projects.
Automation isn’t just an IT function; it’s a function of the entire business, which means that a top-down leadership approach is critical to success. For IT leaders, getting C-suite buy-in from the very beginning not only establishes overarching business goals, it cements the project scope and removes potential bottlenecks or silos.
Be a champion
As the technology revolution continues, more and more business leaders are finding themselves boasting a new title: digital champion. A recent survey found that 68 percent of executives believe their CEOs are “digital champions,” up from 33 percent just ten years ago. It is clear organizations have come a long way, but there’s still a ways to go.
Today, those in senior positions must take the lead in the robotic revolution, and not just on the project scope. To spur true change, leaders must foster a culture that not only understands automation technology, but openly accepts it as necessary to carry out business functions. When business leaders evangelize the benefits on both an executive and employee level from the very beginning, it removes the fear of the unknown, allowing for open dialogue and communication across all departments.
Fan it out
With recent news reporting that one-third of jobs will be automated by 2030, a common concern for the human workforce is that robots are coming to steal their jobs. However, that's simply not the case. Automation isn't a threat; it's an enabler. And, for employees who are mired in manual work, it will be a breath of fresh air. With effective leadership, employees can recognize the opportunity and shift attitudes towards incoming technology.
As the need for automation increases, business leaders can't make decisions in a vacuum. Instead of simply swapping humans for robots, the C-suite must solicit feedback from the employees who will be affected by automation and look for ways to retrain or repurpose roles and duties. By refocusing employees on the high-level strategic activities that require empathy and communication, and giving them a say in designing their new responsibilities, leaders let employees bring real value to the business while feeling safe and secure in the midst of change.
The business world is transforming, and technology is driving business objectives faster than ever before. There are a number of benefits to implementing automation, but it’s up to the C-suite to design a plan that allows the business to maximize return on investment. As with any new deployment, success starts in the boardroom.
by Dennis Walsh, President, Americas & APAC, Redwood Software

Dennis Walsh is responsible for operations of Redwood Software in North America, LATAM, South America as well as Asia Pacific. Walsh combines his business background and years in the software and services industry to successfully solve some of the most challenging IT and business automation issues.

Google Inbox Smart Reply: Cognition Meets Communication

How hot is work chat in the enterprise? So trendy that it’s now being used to improve the very thing it’s supposedly killing off – email.
Google announced yesterday a new feature for its Inbox application, which is an alternate Gmail interface designed for use on mobile devices. That feature, called Smart Reply, lets Inbox users reply to an incoming email with a short message (one phrase or sentence) that has been suggested by the application.
Google’s blog post announcing the new feature highlights the natural language processing, artificial intelligence, and machine learning technologies that work behind the scenes. Inbox uses these technologies to formulate three possible short responses for each incoming message. The user taps on the most appropriate one to embed it in her response and has the option to manually type or verbally dictate additional text. (Click on the image below to enlarge it and see this in action.)

 
 
 
 
 
 
 
 

How Smart Reply Works

The three auto-generated, suggested responses are based on the content of the incoming email message and on the responses that were generated and selected for previous, similar messages. Smart Reply uses two neural networks that work in tandem: one network reads and makes sense of every word in the incoming text, and the second predicts best-fit responses and synthesizes them into grammatically correct replies. (See this post on the Google Research Blog for more technical details on Smart Reply.)
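As a rough illustration of that second, response-selection stage, here is a toy sketch in Python. It is emphatically not Google's implementation—Smart Reply generates text with neural networks, while this sketch merely ranks a fixed pool of short replies by word overlap—but it shows the basic read-then-rank shape. All messages and candidate replies are invented.

```python
# Toy stand-in for reply suggestion: rank a fixed pool of short replies
# by cosine similarity of bag-of-words vectors. (Smart Reply itself
# generates replies with neural networks; this shows only the shape.)
import re
from collections import Counter
from math import sqrt

CANDIDATES = [
    "Sure, see you then.",
    "Sorry, I can't make it.",
    "Thanks, got it!",
    "Let me check and get back to you.",
]

def bow(text: str) -> Counter:
    """Lowercased bag-of-words vector for a snippet of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(incoming: str, k: int = 3):
    """Return the k candidate replies most similar to the incoming message."""
    query = bow(incoming)
    return sorted(CANDIDATES, key=lambda c: cosine(query, bow(c)), reverse=True)[:k]

print(suggest("Can you make it to the 3pm sync tomorrow?"))
```

Everything hard—generating fluent text rather than picking from a fixed list, and doing it at Gmail scale—is precisely what the neural networks are for.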
The principles and even some of the technologies behind Smart Reply are not new. The Autonomy IDOL technology that Hewlett-Packard infamously acquired four years ago is used to auto-classify digital documents based on their content. Once classified, the documents can more easily be searched for, used in workflows, and archived.
Just last week IBM announced its new intelligent data capture solution, IBM Datacap Insight Edition, which uses cognitive computing capabilities to read document-based content and auto-classify it. Like Google’s and H-P’s technologies, IBM’s software must initially be trained with a set of control documents and then continues to learn as it reviews documents in a production environment.

Smart Reply and Chatbots

Inbox’s new ability to read text and formulate multiple, viable short responses doesn’t quite turn email into real-time messaging, but it does help individuals respond more quickly to the incoming messages in their email queue. Pair that with instant notifications of new incoming email messages on a mobile device and email becomes much more like instant messaging and other forms of work chat.
Google has taken a significant first step toward creating an intelligent bot that replies to email messages for you based on their content. Contrast this with the current practice of employing prebuilt, user-defined rules to reply with a canned response depending on who sent the incoming email or based on the recipient's schedule (think vacation autoreplies).
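That rules-based status quo fits in a few lines—a hypothetical sketch with invented senders and messages, shown only to make the contrast with learned replies concrete:

```python
# Prebuilt, user-defined rules: canned replies keyed on the sender or
# the recipient's schedule. Nothing is learned from message content.
ON_VACATION = True
VIP_SENDERS = {"boss@example.com"}

def canned_reply(sender: str) -> str:
    if sender in VIP_SENDERS:
        return "Traveling this week; reach me by phone if urgent."
    if ON_VACATION:
        return "I am out of the office until Monday."
    return ""  # no rule fired, so no automatic reply

print(canned_reply("colleague@example.com"))  # -> out-of-office reply
```

Smart Reply inverts this: the suggested response is conditioned on what the message says, not on who sent it or when.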
If Google were to apply its deep neural network technology in Hangouts, it would move closer to Slack, HipChat, Telegram and other work chat tools that use bots to reply to user-generated queries and act as intermediaries between users and integrated third-party applications. In fact, Hangouts would have a differentiated advantage – the ability to parse not only incoming messages and suggest appropriate responses, but to do the same with text-based documents that are attached to chat messages.
It is likely that Google will go beyond applying this new technology in apps like Inbox and Hangouts. Imagine the power of having Smart Reply baked into Android, so it could be deployed on watches, in cars and as part of other emerging hardware-based platforms that run on that operating system. Tap on the watch’s or car’s display and quickly choose and send a response to an incoming message.
Some more advanced variant of Smart Reply might be used to semi-automate communication between nodes in mixed networks of machines and humans – Networks of Everything. Take as an example the current generation of software (the machine) that listens to social media and discerns trending topics related to a company’s customer-facing operations. This type of software could be enhanced with cognitive capabilities so that it would be able to suggest appropriate Twitter-length responses to an individual tasked with responding to relevant incoming social content. Eventually, the software might be able to respond, without human intervention, directly to someone expressing their opinion on social media.
The possibilities are numerous and mind-boggling. For now, Google has taken an important step toward a computing future in which real-time communication at work is increasingly semi- or fully-automated.

Think you are ready to build a cloud? Think again.

There is plenty of banter about what it truly takes to play in cloud: Do you think you are ready to jump into the deep end? Is your team ready? What about your processes? And is the technology itself ready to take the leap? Read on before you answer.

Two months ago I wrote “8 Reasons Not to Move to Cloud” to address the common reasons why organizations are hesitant to move to cloud. This post addresses the level at which one must play to build one's own cloud. Having built cloud services myself, I can say from experience that it is not for the faint of heart.

Automation and agility

Traditional corporate infrastructure is typically not automated or agile. In other words, a change in the requirements or demands may constitute a change in architecture, which in turn requires a manual change in the configurations. All of this takes time, which works against the real-time expectations of cloud provisioning.

Cloud-based solutions must have some form of automation and agility to address the changing demands coming from customers. Customers expect real-time provisioning of their resources. Speed is key here, and only possible with automation. And a prerequisite for automation is standardization.

Standardization is key

The need for standardization is key when building cloud-based solutions. In order to truly enable automation, there must be a level of assumption around hardware configurations, architecture, and the logical network. Even relatively small things such as BIOS version, NIC model, and patch level can wreak havoc on cloud automation. From a corporate perspective, even the same model of server hardware could have different versions of BIOS, NIC, and patches.
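A quick way to see the problem is to fingerprint each server by its (BIOS, NIC, patch) tuple and group the fleet: a supposedly homogeneous pool that yields more than one group has drift that automation will trip over. The sketch below is hypothetical—invented inventory data in Python, standing in for whatever CMDB or out-of-band management tool actually holds the facts:

```python
from collections import defaultdict

# Invented inventory; a real pipeline would pull this from a CMDB
# or from out-of-band management interfaces.
SERVERS = [
    {"host": "web01", "bios": "2.4.1", "nic": "X540", "patch": "p12"},
    {"host": "web02", "bios": "2.4.1", "nic": "X540", "patch": "p12"},
    {"host": "web03", "bios": "2.3.9", "nic": "X540", "patch": "p09"},
]

# Group hosts by their hardware/firmware fingerprint.
groups = defaultdict(list)
for s in SERVERS:
    groups[(s["bios"], s["nic"], s["patch"])].append(s["host"])

# More than one group means the "identical" fleet is not identical.
if len(groups) > 1:
    print("Configuration drift detected:")
    for fingerprint, hosts in sorted(groups.items()):
        print(f"  {fingerprint}: {hosts}")
```
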

Add logical configurations such as network topology, and the complexities start to mount. Where are the switches? How is the topology configured? Which protocols are in play for which sections of the network? One can quickly see how a very small hiccup can throw things out of whack.

For the average corporate environment, managing physical and logical configurations at this level is challenging. Even for those operating at scale it is a challenge. This is one reason why those at scale build their own systems: so they can control the details.

The scale problem

At scale, however, the challenge is more than just numbers. Managing at scale requires a different mode of thinking. In a traditional corporate environment, when a server fails, an alert goes off to dispatch someone to replace the failed component. In parallel, the impacted workload is moved or changed to limit the impact to users. These issues can range from a small fire to a three-alarm inferno.

At scale those processes simply collapse under the stress. This is where operating at scale requires a different mode of thinking. Manual intervention for every issue is not an option.
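Here is a minimal sketch of what "no manual intervention" means in practice, with an invented fleet and stubbed actions rather than any particular vendor's tooling: detect the failure, drain the node, reschedule the work, and queue the hardware for bulk repair instead of paging a technician.

```python
# Remediation at scale: detect, drain, reschedule -- no pager, no dispatch.
FLEET = [
    {"host": "node01", "healthy": True,  "workloads": ["api"]},
    {"host": "node02", "healthy": False, "workloads": ["db-replica", "cache"]},
]

def reschedule(workload, fleet):
    """Stub: place the workload on the first healthy node."""
    target = next(n for n in fleet if n["healthy"])
    target["workloads"].append(workload)
    print(f"moved {workload} -> {target['host']}")

for node in FLEET:
    if not node["healthy"]:
        for w in node["workloads"][:]:  # iterate over a copy; we mutate below
            node["workloads"].remove(w)
            reschedule(w, FLEET)
        print(f"{node['host']} queued for bulk repair, not an emergency ticket")
```
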

The operations math problem

Cloud architecture must endure multiple hardware failures. Single points of failure must come out of the equation as much as possible. I wrote about this back in 2011 with my post “Clouds, Failure and Other Things That Go Bump in the Night.” This is where we revert to probability and statistics. There will be hardware failures. Entire data centers will fail, even. The challenge is to change our operational thinking to assume failure. I detail this a bit further in my post “Is the cloud unstable and what can we do about it?”
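To put rough numbers on "assume failure": if each server independently has probability p of failing in a given month, the chance that at least one of N servers fails is 1 − (1 − p)^N, and the expected number of failures is N × p. A quick illustrative calculation, with an assumed 2% monthly per-server failure rate:

```python
# Failure math at scale: P(at least one failure) = 1 - (1 - p)**N.
p = 0.02  # assumed 2% chance a given server fails this month

for n in (10, 100, 1000):
    at_least_one = 1 - (1 - p) ** n
    print(f"N={n:4d}: P(>=1 failure) = {at_least_one:.3f}, expected failures = {n * p:.1f}")
```

At N=1000, a failure in any given month is a near-certainty and roughly twenty boxes die on schedule; failure stops being an event and becomes a rate that the architecture, not a technician, has to absorb.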

Discipline, discipline, discipline

All of this leads to a required change in discipline. No longer is one able to simply run into the data center to fix something. Humans are no longer capable of fixing everything manually. In fact, the level of discipline goes way up with cloud. Even the smallest mistake can have cataclysmic consequences. Refer to the November Microsoft Azure outage that was caused by a “performance update.” Process, operations, configurations and architectures must all raise their level of discipline.

Consider the consequences

Going full-circle, the question is, Should an enterprise or corporate entity consider building private clouds? For the vast majority of organizations, the answer should be no. But there are exceptions. Refer to my post way back in 2009 on the “Importance of Private Clouds.” Internal private clouds may present challenges for some, but hosted private clouds provide an elegant alternative.

In the end, building clouds is hard and complicated. Is it plausible for an enterprise to build its own cloud? Yes. Typically, it comes as a specific solution to a specific problem. But the hurdle is pretty high . . . and getting higher every day. Consider the consequences before making the leap.