Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi

[voices_in_ai_byline]

About this Episode

Episode 67 of Voices in AI features host Byron Reese and Amir Khosrowshahi talking about the explainability, privacy, and other implications of using AI for business. Amir Khosrowshahi is VP and CTO of AI Products at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.
Amir Khosrowshahi: Thank you, thanks for having me.
I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a PhD in Computational Neuroscience. So, start off by just telling us: what is computational neuroscience?
So neuroscience is a field, the study of the brain, and it is mostly a biologically minded field. Of course there are aspects of the brain that are computational, and there are aspects that involve opening up the skull and peering inside and sticking needles into areas and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning, and mathematical aspects to intelligence, and then there’s biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.
I have a theory which I may not be qualified to have and you certainly are, and I would love to know your thoughts on it. I think it’s very interesting that people are really good at getting trained with a sample size of one: I could draw a made-up alien you’ve never seen before, then show you a series of photographs, and even if that alien’s upside down, underwater, behind a tree, whatever, you can spot it.
Further, I think it’s very interesting that people are so good at transfer learning. I could give you two objects, like a trout swimming in a river and that same trout in a jar of formaldehyde in a laboratory, and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know. And yet, likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say “yes,” and then somebody would say, “Well, have you ever done it?” And I’m like, “yeah,” and they would say, “when?” And it’s like, I don’t really remember, I know I have. Somehow we take data and throw it out, and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain that you could cut out and somehow forget. And so when I think of all of those things that seem so different than computers to me, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?
Okay, those are very deep questions, and actually each one of those items is a separate thread in the field of machine learning and artificial intelligence. There are lots of people working on these things. The first thing you mentioned, I think, was one-shot learning, where you see something that’s novel. From the first time you see it, you recognize it as something that’s singular, and you retain that knowledge to then identify it if it occurs again—such as, for a child it would be a chair; for you, it’s potentially an alien. So, how do you learn from single examples?
That’s an open problem in machine learning and is very actively studied, because you want to be able to have a parsimonious strategy for learning. The current ways that we’re doing learning—in, for example, online services that sort photos and recognize objects in images—are very computationally wasteful, and actually wasteful in their usage of data. You have to see many examples of chairs to have an understanding of a chair, and it’s actually not clear that you then have an understanding of a chair, because the models that we have today for chairs do make mistakes. When you peer into where the mistakes were made, it seems like the machine learning model doesn’t actually have an understanding of a chair; it doesn’t have a semantic understanding of a scene, or of grammar, or of the languages being translated. We’re noticing these inefficiencies and we’re trying to address them.
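To make the one-shot idea concrete: one common approach is to skip per-class retraining entirely and instead compare a new image against a single stored example in a learned embedding space. Below is a minimal sketch in Python; the embed function is a hypothetical stand-in for a pretrained feature extractor, and the images are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a pretrained feature extractor,
    e.g. the penultimate layer of an image network."""
    v = image.flatten()
    return v / np.linalg.norm(v)

# One-shot "training": store a single embedded example per class.
alien_photo = rng.random((32, 32))   # placeholder images
chair_photo = rng.random((32, 32))
prototypes = {"alien": embed(alien_photo), "chair": embed(chair_photo)}

def classify(image: np.ndarray) -> str:
    # Nearest prototype by cosine similarity; no gradient updates needed.
    v = embed(image)
    return max(prototypes, key=lambda c: float(prototypes[c] @ v))

# A distorted view of the alien still lands nearest the alien prototype.
print(classify(alien_photo + 0.05 * rng.random((32, 32))))  # -> "alien"
```

Metric-learning methods such as siamese networks exist to train the embedding so this nearest-prototype comparison works; the sketch shows only the inference step.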
You mentioned some other things, such as how you transfer knowledge from one domain to the next. Humans are very good at generalizing. We see an example of something in one context, and it’s amazing that we can extrapolate or transfer it to a completely different context. That’s also something that we’re working on quite actively, and we have some initial success: we can take a statistical model that was trained on one set of data and then apply it to another set of data, using that previous experience as a warm start, and then moving away from the old domain to the new domain. This is also possible to do in continuous time.
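In practice, the “warm start” described here is usually implemented as fine-tuning: reuse weights trained on the old domain and continue training on the new one. A minimal PyTorch-style sketch, assuming a torchvision ResNet as the old-domain model and a new task with 10 classes (both arbitrary placeholders):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Warm start: begin from weights learned on the old domain (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the generic feature extractor so old-domain knowledge is kept...
for p in model.parameters():
    p.requires_grad = False

# ...and replace the classification head for the new domain
# (10 classes is an arbitrary placeholder).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is trained at first; unfreezing deeper layers with a
# small learning rate is how the model "moves away from the old domain."
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```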
Many of the things we experience in the real world are not stationary—that is, their statistics change with time—so we need to have models that can also change. For a human it’s easy to do that; a human is very good at handling non-stationary statistics. We need to build that into our models and be cognizant of it, and we’re working on it. And then [for] other things you mentioned—intuition is very difficult. It’s potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having kind of a hazy idea of having done something bad to yourself with a hammer—I’m not actually sure where that falls into the various subdomains of machine learning.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 66: A Conversation with Steve Ritter

[voices_in_ai_byline]

About this Episode

Episode 66 of Voices in AI features host Byron Reese and Steve Ritter talking about the future of AGI and how AI will affect jobs, security, warfare, and privacy. Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show Steve.
Steve Ritter: Thanks a lot Byron, thanks for having me.
So tell me, what were you thinking way back in the ’80s when you said, “I’m going to study computers and brains”? What was going on in your teenage brain?
That’s a great question. So first off I started off with a Computer Science degree and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP etc., and at the same time the University of California, San Diego was opening up their very first department dedicated to cognitive science. So I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science into it as well, simply because I was just really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area that were applying it in many different ways, and I just found it fascinating, so I had to do it.
So, there’s human intelligence, or organic intelligence, or whatever you want to call it, there’s what we have, and then there’s artificial intelligence. In what ways are those things alike and in what ways are they not?
That’s a great question. I think it’s actually something that trips a lot of people up today when they hear about AI, and we might use the term “artificial general intelligence,” as opposed to “artificial intelligence.” So a big difference is, on one hand we’re studying the brain, trying to understand how the brain is organized to solve problems, and from that deriving architectures that we might use to solve other problems. It’s not necessarily the case that we’re trying to create a general intelligence or a consciousness; we’re just trying to learn new ways to solve problems. So I really like the concept of neural-inspired architectures, and that sort of thing. And that’s really the area I’ve been focused on over the past 25 years: how can we apply these learning architectures to solve important business problems.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 54: A Conversation with Ahmad Abdulkader

[voices_in_ai_byline]

About this Episode

Episode 54 of Voices in AI features host Byron Reese and Ahmad Abdulkader talking about the brain, learning, and education as well as privacy and AI policy. Ahmad Abdulkader is the CTO of Voicera. Before that he was the technical lead of Facebook’s DeepText, an AI text understanding engine. Prior to that he developed OCR engines, machine learning systems, and computer vision systems at Google.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Ahmad Abdulkader. He is the CTO of Voicera. Before that he was the lead architect for Facebook’s applied AI efforts, producing DeepText, which is a text understanding engine. Prior to that he worked at Google building OCR engines, machine learning systems, and computer vision systems. He holds a Bachelor of Science in Electrical Engineering from Cairo University and a Masters in Computer Science from the University of Washington. Welcome to the show.
Ahmad Abdulkader: Thank you, thanks Byron, thanks for having me.
I always like to start out by just asking people to define artificial intelligence because I have never had two people define it the same way before.
Yeah, I can imagine. I am not aware of a formal definition. So, to me AI is the ability of machines to do or perform cognitive tasks that humans can do, or rather can learn to do—and eventually learn to do in a seamless way.
Is the calculator therefore artificial intelligence?
No, the calculator is not performing a cognitive task. By a cognitive task I mean vision, speech understanding, understanding text, and such. In fact, the brain is actually lousy at multiplying two six-digit numbers, which is what the calculator is good at. But the calculator is really bad at doing a cognitive task.
I see, well actually, that is a really interesting definition because you’re defining it not by some kind of an abstract notion of what it means to be intelligent, but you’ve got a really kind of narrow set of skills that once something can do those, it’s an AI. Do I understand you correctly?
Right, right. I have a sort of yardstick: a set of tasks a human can do in a seamless, easy way without even knowing how they do it, and we want to have machines mimic that to some degree. And there is a very specific set of tasks, some of them more important than others, and so far we haven’t been able to build machines that actually get even close to human beings on these tasks.
Help me understand how you are seeing the world that way, and I don’t want to get caught up on definitions, but this is really interesting.
Right.
So, if a computer couldn’t read, couldn’t recognize objects, and couldn’t do all those things you just said, but let’s say it was creative and it could write novels. Is that an AI?
First of all, this is hypothetical, so I wouldn’t know. I wouldn’t call it AI. It goes back to the definition of intelligence: there’s the natural intelligence that humans exhibit, and then there is the artificial intelligence that machines will attempt to exhibit. The most important of these, the ones we actually use almost every second of the day, are vision and speech understanding, or language understanding, and creativity is one of them. So if a machine were to do that, I would say it performed a subset of AI, but it hasn’t exhibited the behavior to show that it’s good at the most important ones, being vision, speech and such.
When you say vision and speech are the most important ones, nobody’s ever really looked at the problem this way, so I really want to understand how you’re saying that, because it would seem to me those aren’t really the most important by a long shot. I mean, if I had an AI that could diagnose any disease, tell us how to generate unlimited energy, fix all the environmental woes, tell us how to do faster than light travel, all of those things, like, feed the hungry, and alleviate poverty and all of those things, but it couldn’t tell a tuna fish from a Land Rover, I would say that’s pretty important. I would take that, hands down, over what you’re calling the more important stuff.
I think “important” is an overloaded word. I think you’re talking about utility, right? You’re imagining a hypothetical situation where we’re able to build computers that will do the diagnosis, or solve poverty, and stuff like that. These would be way more useful for us, or that’s what we think, or that’s the hypothesis. But to actually do the tasks you’re talking about most probably implies that you have solved, to a great degree, vision. It’s hard to imagine that you would be doing diagnosis without actually solving vision. These are sort of the basic tasks that humans can do and babies learn, and we see babies or children learn this as they grow up. So, perhaps the utility of what you talked about would be much greater for us, but if you were to define importance as the basic skills that you could build upon, I would say vision would be the most important one. Language understanding perhaps would be the second most important one. And I think doing well in these basic cognitive skills would enable us to solve the problems that you’re talking about.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 24: A Conversation with Deep Varma

[voices_in_ai_byline]
In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
[podcast_player name="Episode 24: A Conversation with Deep Varma" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-12-04-(00-55-19)-deep-varma.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/12/voices-headshot-card_preview-1.jpeg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma, he is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor of Science in Computer Science. He has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define it, artificial intelligence is intelligence created by machines, based on human wisdom, to augment a human’s lifestyle and help them make smarter choices. So that’s how I define artificial intelligence, in very simple and layman terms.
But you just kind of used the word, “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think with the intelligence part, what we need to understand is, when you think about human beings, most of the time they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes we don’t see, is—I still remember in the old days I used to have this conventional thermostat at my home, which turns on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. Now as soon as I put the Nest there, it’s an intelligence. It is sensing whether someone is there in the home or not, so there’s motion sensing. Then it is seeing what kind of temperature I like during summertime, during wintertime. And so, artificially, the software, which is the brain that we have put on this device, is doing this intelligence, and saying, “great, this is what I’m going to do.” So, in one way it augmented my lifestyle—rather than me making those decisions, it is helping me make the smart choices. That’s what I meant by the intelligence piece here.
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors there on those devices—think about the central nervous system, what human beings have, it is a small piece of a software which is embedded within that device, which is making decisions for you—so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step when you want to do computer vision or translation? Do you start by saying, “Okay, how do I do it?” Or do you start by saying, “Forget how a human does it, what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because the way human brains, or the central nervous system, the speed that they process the data, machines are still not there at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
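Mechanically, the “feeding” described above is supervised learning: many labeled examples go in, and a model that can label unseen examples comes out. A toy sketch with scikit-learn, where synthetic feature vectors stand in for the photos:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "Here is a photo of a cat, here is a photo of a cat, too": many labeled
# examples, faked here as feature vectors drawn from two distributions.
cats = rng.normal(loc=1.0, size=(200, 16))
dogs = rng.normal(loc=-1.0, size=(200, 16))
X = np.vstack([cats, dogs])
y = np.array([1] * 200 + [0] * 200)  # 1 = cat, 0 = dog

clf = LogisticRegression().fit(X, y)

# Later, shown an unseen image, the machine says "I found out that this
# is a cat" without being told.
new_image = rng.normal(loc=1.0, size=(1, 16))
print("cat" if clf.predict(new_image)[0] == 1 else "dog")
```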
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings fall more under general intelligence, and machines are still falling under the artificial intelligence framework.
And the example you were giving—I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines: you keep feeding them big data with the hope that they will learn, and they will say, “This is basically a picture of a mouse, or this is a picture of a cat.”
This is where, I think, this artificial general intelligence is shaping up—where we are going to abstract a level up, and start conditioning—but I feel we haven’t cracked the code one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about that. That being said, everybody kind of likes to think about an AGI. 
AI is, what, six decades old, and we’ve been making progress. Do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or is an AGI something completely different? It’s not just a better narrow AI, it’s not just a bunch of narrow AIs bolted together, it’s a completely different thing. What do you say?
Yes, so what I will say is, it is like in the software development of computer systems—we call something an object, and then we do inheritance of a couple of objects, and the encapsulation of objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building computer vision for real estate. There are companies investing in building computer vision for cars, and all those things. We are in this state where all these disassociated, siloed investments are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree is, I believe AI is complementing us and AGI is replicating us. And this is where I tend to believe that the day AGI comes—meaning a singularity where machines reach the wisdom or the processing power of human beings—that, to me, seems like doomsday, right? Because those machines are going to be smarter than us, and they will control us.
And the reason I believe that—there is a scientific reason for my belief—is because we know that in the central nervous system the core unit is the neuron, and we know neurons carry two signals, chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate these sensory signals—you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen; I’m close to confident. Thinking machines are going to come—IBM Watson, as an example—so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side—basically, I call it a chemical soup—is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves—we recently saw what happened with Facebook, where Facebook machines were talking to each other and they started inventing their own language over a period of time—but I believe the chemical mix of humans is next to impossible to reproduce.
I mean—and I don’t want to take a hard stand, because we have seen, over the decades, that what people used to believe in the seventies has been proven to be right—I think the day we are able to find the chemical soup, it means we have found Nirvana: we will have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to be another Deep Varma?” And if yes, then where is this emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings to the next level?
Well, I mean, there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straightforward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions and complementing you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments go.
I read a paper of yours where you made a passing reference to Westworld.
Right.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers—take just the first episode: do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting.
I saw that Elon Musk spoke at the governors’ convention and said something, and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk.
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that what human beings evolved over millions of years can immediately be attained by AGI, and the other camp is more concerned with controlling that, asking: are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there are generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work, and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or a fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes to my mind is, last year I did a remodel of my house. And when I did the remodeling, there were electrical wires, and there were these water pipelines going inside my house that we had to replace with copper pipelines, and I was thinking, can machines replace those jobs? I keep coming back to the answer that those skilled jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and our productivity, to make the smarter choices.
So, you would say that there are, in most jobs, there are elements that automation cannot replace, but it can augment, like a plumber, or so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about the example of business consultants, right? Companies hire business consultants to come in, collect all the data, and then prepare PowerPoints on what you should do and what you should not do. I think those are the areas where artificial intelligence is going to come in, and if you have tons of data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale you to the next level. So, in the example I’ve just given, the business consultants: if they are doing an audit of a company’s financial books, look into the tools that help, so that an audit that used to take thirty days now takes ten days. Improve how fast and how accurately you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering in, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it: you just start using it to increase your own productivity.
Yeah.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and know about customs in other countries and all of that, so that actually being a translator you get more business out of this, not less, so do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like whether you get a loan, you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) that it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers want to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will send me, “Deep, your house is worth $1.5 million.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sales data.
And that, at the end of the day, gives confidence back to the consumer, and it also shows that the decision wasn’t made because the appraiser who came to my home didn’t like me for XYZ reason and ended up giving me something wrong. So, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and machines can take the bias out and bring more unbiased decision making.
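Whether the “why” of a decision can be reported depends heavily on the model family. For a linear scoring model, the explanation falls out directly, since each factor’s contribution is just its weight times its value. A toy sketch, with loan features and weights invented purely for illustration:

```python
# Invented features and learned weights for an illustrative loan model.
features = {"income": 85_000, "debt_ratio": 0.42, "late_payments": 3}
weights = {"income": 0.00002, "debt_ratio": -4.0, "late_payments": -0.8}
bias = 1.0

# Each factor's contribution to the score is weight * value, so the
# "why" of the decision can be reported factor by factor.
contributions = {k: weights[k] * features[k] for k in features}
score = bias + sum(contributions.values())

for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:>14}: {value:+.2f}")
print(f"decision: {'approve' if score > 0 else 'decline'} (score {score:+.2f})")
```

Deep neural models do not decompose this cleanly, which is exactly the tension Byron raises next.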
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take every bit of information about who paid their loan off, and you just pour it all into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t it the case that it’s weighing hundreds of factors with various weights, and how do you tease out, “Oh, it was this”? Life isn’t quite that simple, is it?
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example of how this can go wrong. When we were training our computer vision system, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can look at any image and know this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window in the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into play, where small data can make all these predictions go completely wrong.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, I’m being positive. But realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is most important is that we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something they don’t have to worry about. Over a period of time, artificial intelligence is going to become more and more a part of business, and those who are not able to catch up with it are going to see the unemployment rate increase. They’re going to see company losses increase, because they’re not making some of their decisions in the right way.
You’re going to see companies, like Lehman Brothers, who were making all these data decisions for their clients not by using machines but by relying on humans, and these big companies fail because of that. So, I think that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get into the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data physically stored in systems is getting hacked. And now we’ll have all this data wirelessly transmitting, machines talking to each other, IoT devices talking to each other—how are we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, and trying to make assumptions, and entering into my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble gum you had better start using AI”?
I will say every industry. I think every industry needs to worry about it. Some industries may adopt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that those businesses will be blindsided—be it small businesses, mom and pop shops, or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that, as the remedy, we are building artificial intelligence.
But, this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like, phishing, which is a big problem. There are so many companies who are trying to solve the phishing problem of the emails, but we have seen technologies not able to solve it. So, I think AI is a remedy, but if we stay just focused on the big data, that’s, I think, completely wrong, because my fear is, a small data set can completely destroy the predictions built by a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision just collapsed with one photo of a dog. And this is where I feel that if we are emphasizing the big data set so much, are there smaller data sets which we also need to worry about, to make sure we are bridging the gap enough that our security is not compromised?
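The failure described here, where one crafted image collapses a model trained on big data, is essentially an adversarial example. A minimal sketch of a fast-gradient-sign-style attack on a toy linear classifier (not any production system) shows why a tiny per-pixel “overlay” can flip a confident prediction:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096                                # high dimension is what makes this work

w = rng.normal(size=n) / np.sqrt(n)     # toy linear "cat" scorer: score > 0 = cat
x = rng.normal(size=n)                  # a "natural" image with pixels of size ~1
x += w * (2.0 - w @ x) / (w @ w)        # nudge it so the model is confident: score = 2

print(f"clean score:    {w @ x:+.2f}")  # +2.00 -> "this is a cat"

# Adversarial overlay: each pixel moves by only 0.1, but every move points
# against the weights, so the tiny changes add up across all 4096 pixels.
eps = 0.1
x_adv = x - eps * np.sign(w)
print(f"attacked score: {w @ x_adv:+.2f}")  # strongly negative -> "that's a dog"
```

Because the per-pixel changes all align against the model’s weights, their effect accumulates across thousands of pixels, which is why one visually small overlay can outweigh everything the big data set taught the model.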
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day, when we have built massive infrastructures that are reliant on computers—our generation and supply of power, and telecommunications—when suddenly there is a whole outage which can bring the world to a standstill, because there is a small hole which we never thought about? That, to me, is the bigger threat than the standalone individual things which are happening now.
That’s a hard problem to solve, though—a small hole in the internet that we’ve not thought about, that can bring the whole thing down, would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring the artificial general intelligence back into a picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driving cars which are shaping up around us, and people were asking me when we will see those self-driving cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day.” Accidents happen with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time, and they are not able to get one signal, they can kill human beings much faster compared to what human beings are killing—so that’s the rationale which I’m trying to put here.
So, one of the questions I was going to ask you is: do you think AI is a mania? It’s everywhere, but it seems like—well, you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less. Is that true?
That’s true.
There was a man in the ‘60s named Weizenbaum who made a program called ELIZA, a simple program: you would say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and it would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their hearts to it, even though they knew it was a program. It really disturbed him that people developed an emotional attachment to ELIZA, and he said that when a computer says, “I understand,” it’s a lie, that there’s no “I,” there’s nothing that understands anything.
Do you worry that if we build machines that can imitate human emotions, maybe the care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, that’s a very great question, and I think you also picked a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in the kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what does the temperature look like today?” Alexa says, “The temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying, “Hey, don’t be rude, just say, ‘Alexa, stop.’” You see that connection? The connection is you’ve already started treating this machine as a device deserving respect, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends have been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, but not meeting individuals, just seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silos. There are too many silos. Like in my home, I have Alexa, I have a Nest, and those plug-ins, and I love, you know, that Alexa is talking to Nest: “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is those devices communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open—close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel where we are heading is more connectivity among those devices, which will help, again, complement us and make the smart choices, and our reliance on those assistants is going to increase.
Another example here: I get up in the morning and the first thing I do is come to the kitchen and say, “Alexa, put on the music,” and, “Alexa, what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine: my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to the office because it’s a working day, Deep just came to the kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in the next few years. All these movies that we used to watch where people were sitting there, watching everything happen in real time—that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate about this—it brings me home. So, in artificial intelligence, as you said, there are two aspects to it: one is for the consumer and one is internal. For us, AI helps us better understand what our consumers are looking for in a home, and how we can help move them faster in their search—that’s the consumer-facing side. An example is, “Byron is looking at two-bedroom, two-bath houses in a quiet neighborhood, in a good school district,” and basically, using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer-facing.
Now when it comes to the internal facing, internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal-facing way we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I talk about data-driven companies. There are two aspects of being data-driven: one is data-driven decision making—this is more for the analyst, and that’s the internal piece, to your point—and the external piece is the consumer-facing, data-driven product company, which focuses on how we understand your unique criteria and unique intent as a buyer. That’s how we use artificial intelligence across the spectrum at Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative—do you swing for the fences and miss a lot? Or do you look for easy incremental wins? Or are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better, and in a much more accurate way—that’s how science helps you. And then, basically, there’s trial and error, or what we call an “A/B testing” framework, which helps you validate whether what the science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is, we have invested in our computer vision, which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we have even trained them so they can say, “This is a kitchen with a wide granite countertop.” Now we have built this massive database. When a consumer comes to the Trulia site, they share their intent—they say, “I want two bedrooms in Noe Valley”—and the first thing they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, and times when those images did not match up with the intent of the consumer. So, with our computer vision we invested in something called “the most attractive image,” which basically takes three attributes—the quality of an image, the appropriateness of an image, and the relevancy of an image—and based on these three things we use our convolutional neural network models to rank the images, and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing, we show the most attractive photo first, and that way the consumer gets more engaged with that listing. And what we have seen—using the science, which is machine learning, deep learning, and CNN models, and doing the A/B testing—is that this project increased our enquiries for the listings by double digits. So that’s one of the examples which I just wanted to share with you.
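As a rough illustration of the ranking step described above: the three attribute scores can be combined into a single attractiveness score per photo. In this sketch the scores are assumed to come from upstream vision models, and both the values and the weights are invented; in a real system the weights would be learned and the resulting ranking validated with A/B tests, as described:

```python
# Scores per image from upstream models (invented values in [0, 1]).
listing_photos = [
    {"id": "img_1", "quality": 0.9, "appropriateness": 0.8, "relevancy": 0.4},
    {"id": "img_2", "quality": 0.6, "appropriateness": 0.9, "relevancy": 0.9},
    {"id": "img_3", "quality": 0.3, "appropriateness": 0.5, "relevancy": 0.7},
]

# Hypothetical weights; in practice these would be learned, not hand-set.
WEIGHTS = {"quality": 0.4, "appropriateness": 0.3, "relevancy": 0.3}

def attractiveness(photo: dict) -> float:
    # Weighted sum of the three attribute scores.
    return sum(WEIGHTS[k] * photo[k] for k in WEIGHTS)

# Show the most attractive photo first on the listing page.
ranked = sorted(listing_photos, key=attractiveness, reverse=True)
print([p["id"] for p in ranked])  # -> ['img_2', 'img_1', 'img_3']
```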
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here, and I will use just a minute to tell you: we have built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria. We have built machine learning systems—computer vision, recommender systems, and a user engagement prediction model—and I think our next challenge will be to keep optimizing on the consumer’s intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking for?” If Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean he wants to buy a home there? Or if Byron lives here in San Francisco, how do we understand the difference?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge, because we have already built it; it is the optimization—and make sure that our consumers get what they’re searching for, and keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example: Deep, basically, is looking into Noe Valley or San Francisco, and email and push notifications are the two channels where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, this is where we are going to keep optimizing on our consumers’ intent, and keep giving them the right content.
Alright, well that is fantastic. You write on these topics, so if people want to keep up with you, Deep, how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well I was just referring to your blog like I was reading some of your posts.
Yeah, so we have our tech blog, http://www.trulia.com/tech, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me, to be very candid—my data scientist team, and all those things. We write our blogs there, so I definitely ask people to follow us on that blog. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels people can follow. We also host data science meetups here at Trulia in San Francisco, on the seventh floor of our building—that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Voices in AI – Episode 22: A Conversation with Rudina Seseri

[voices_in_ai_byline]
In this episode, Byron and Rudina talk about the AI talent pool, cyber security, the future of learning, and privacy.
[podcast_player name="Episode 22: A Conversation with Rudina Seseri" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-11-20-(01-05-05)-rudina-seseri.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/11/voices-headshot-card-3-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI brought to you by Gigaom. I’m Byron Reese. Today, our guest is Rudina Seseri. She is the founding and managing partner over at Glasswing Ventures. She’s also an entrepreneur in residence at Harvard Business School, and she holds an MBA from that same institution. Welcome to the show, Rudina.
Rudina Seseri: Hello Byron. Thank you for having me.
You wrote a really good piece for Gigaom, as a matter of fact; it was your advice to startups—don’t say you’re doing AI just to have the buzzwords on the side, you better be able to say what you’re really doing.  
What is your operational definition of artificial intelligence, and can you expand on that theme? Because I think it’s really good advice.
Sure, happy to. AI—and I think of it as the wave of disruption—has become such a popular term, and I think there are definitional challenges in the market. From my perspective, and at the very highest level, AI is technology, largely computers and software, that possesses or has some level of intelligence that mirrors that of humans. It’s as basic as one would imagine it to be by the very name artificial intelligence.
Where I think we are in the AI maturity curve, if one wants to express it in such a form, is really the early days of AI and the impact it is having and will have going forward. It’s really, what I would call, “narrow AI” in that we’re not at a point where machines, in general, can operate at the same level of diversity and complexity as the human mind. But for narrow purposes, or in a narrow function—for a number of areas across enterprise and consumer businesses—AI can be really transformational, even narrow AI.
Expressed differently, we think of AI as anything—such as visual recognition, social cognition, speech recognition—underpinned with a level of machine learning, with a particular interest around deep learning. I hope that helps.
That’s wonderful. You’re an investor so you get pitches all the time and you’re bound to see ones where the term AI is used, and it’s really just in there to play “buzzword bingo” and all of that… Because, your definition that it’s, “doing things humans would normally do” kind of takes me back to my cat food bowl that fills itself up when it’s empty. It’s weighing and measuring it so that I don’t have to. I used to do it, and now a computer does it. Surely, if you saw that in a business case, like, “We have an AI cat food bowl,” that really isn’t AI, or is it? And then you’ve got things like the Nest, which is a learning system. It learns as you do it, and yours is eventually going to be different than mine—I think that is clearly in the AI camp. What would be a case of something that you would see in a business case and just roll your eyes?
To address your examples and give you a few illustrations, I think in your example of the cat food plate or whatnot, I think you’re describing automation much more than AI. And you can automate it because it’s very prescriptive—if A takes place, then do B; if C takes place, then do D. I think that’s very different than AI.
I think when technologies and products are leveraging artificial intelligence, you are really looking for a learning capability. Although, to be perfectly honest, even within the world of artificial intelligence, researchers don’t agree on whether learning, in and of itself, qualifies as AI. But, coming back to everyday applications, I think, much like the human mind learns, in artificial intelligence, whatever facet of it, we are looking for some level of learning. For sure, that’s a differentiator.
To then address your question head on, my goodness, we’re seeing AI disrupt all facets—from cyber security and martech to IT and HR to new robotics platforms—it’s running the whole gamut. Why don’t I give you a perfect example, that’s a real example, and I can give you the name of a portfolio company so we make it even more practical and less hypothetical?
One of my recent investments is a company called Talla. Talla is taking advantage of natural language processing capabilities for the HR and IT organizations in particular, where they’re automating lower level tickets, Q&A for issues that an employee may have—maybe an outage of email or some other question around an HR benefit—and instead of having a human address the question, it is actually the bot that’s addressing the question. The bot is initially augmenting, so if the question is too complex and the bot can only take the answer so far and can’t fully address the particular question, then the human becomes involved. But the bot is learning, so when a second person has a similar question, the bot can actually address it fully.
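A minimal sketch of that augment-then-automate loop might look like the code below; the intent classifier, confidence threshold, and knowledge base are invented for illustration and are not Talla’s actual design.

```python
# Illustrative sketch only: answer a ticket when confident, otherwise
# escalate to a human and learn the answer for next time.
ESCALATION_THRESHOLD = 0.75
knowledge_base = {"email outage": "Email is back up; see the status page."}

def classify(question: str) -> tuple[str, float]:
    """Stand-in for an NLP intent model returning (intent, confidence)."""
    if "email" in question.lower():
        return "email outage", 0.9
    return "unknown", 0.2

def escalate_to_human(question: str) -> str:
    """Stand-in for routing the ticket to a support agent."""
    return "A support agent will follow up shortly."

def answer_ticket(question: str) -> str:
    intent, confidence = classify(question)
    if confidence >= ESCALATION_THRESHOLD and intent in knowledge_base:
        return knowledge_base[intent]      # the bot resolves it fully
    human_answer = escalate_to_human(question)
    knowledge_base[intent] = human_answer  # (simplified) learn for next time
    return human_answer
```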
In that instance, you have both natural language processing and a lot of learning, because no two humans ask the very same question. And even if we are asking the same question, we do not ask it in the same manner. That’s the beauty of our species. So, there’s a lot of learning that goes on in that regard. And, of course, it’s also the case that it’s driving productivity and augmentation. Does that address your question, Byron?
Absolutely. That’s Rob May’s company, isn’t it?
Yes, it is.
I know Rob; he’s a brilliant guy.
Phenomenal.
Specifically, with that concept, as we are able to automate more things at a human level, like customer service inquiries, how important do you think it is that the end user knows that they’re talking to a bot of some kind, as opposed to a person?
When you say “know,” are you trying to get at the societal norm of what… Is this a normative question?
Exactly. If I ask where is your FAQ and “Julia”—in air quotes—says, “Here. Our FAQs are located here,” and there was no human involved, how important is it that I, as an end user, know that it’s called “Julia Bot” not “Julia”?
I think disclosure is always best. There’s nothing to be hidden, there’s nothing that’s occurring that’s untoward. In that regard, I would personally advocate for erring on the side of disclosure rather than not, especially if there is learning involved, which means observing, on the part of the bot. I think it would be important. I also think that we’re in the early days of this type of technology being adopted and becoming pervasive that the best practices and norms have yet to be established.
Where I suspect you will see both, is what I call the “New York Times risk”—where we’ll have a lot more discussion around what’s an acceptable norm and what’s right and wrong in this emerging paradigm—when we read a story where something went the wrong way. Then we will all weigh in, and the bodies will come together and establish norms. But, I think, fundamentally, erring on the side of disclosure serves a company well at all times.
You’re an investor. You see all kinds of businesses coming along. Do you have an investment thesis like, “I am really interested in artificial intelligence applied to enterprises”? What is your thesis?
We refer to our thesis as—not only do we have a thesis, but I think we have a good name to capture it—“Intelligent, Connect and Protect,” wherein our firm’s strategy is to invest in startups that are really disrupting, in a positive manner, and revolutionizing the enterprise—from sales tech and martech, to pure IT and data; around platforms, be those software platforms or robotics and the like; as well as cyber security and infrastructure.
So that first part, around enterprise and platforms, is the “Connect” world and then the cyber security and the infrastructure is the protection of that ecosystem. The reason why we don’t just call it “Connect and Protect” is because with every single startup that we invest in, core to our strategy is the utilization, or taking advantage, of artificial intelligence, so that is the “Intelligent” part in describing, or in capturing our thesis.
Said differently, we fundamentally believe that if a technology startup, in this day and age, is not leveraging some form of machine learning, some facet of AI, it’s putting itself at a disadvantage from day one. Put more directly, it becomes legacy from the get-go, because from a performance point of view those legacy products, or products without any kind of learning and AI, just won’t be able to keep up and outperform their peers that do.
You’re based in Boston. Are you doing most of your investing on the East Coast?
For the most part, correct. Yes. East Coast, and in other pockets of opportunity where our strategy holds. There are some interesting things in areas like Atlanta with security, even in certain parts of Europe like London, Berlin, Munich, etcetera, but yes.
Is AI being used for different things on the East Coast than what we think of in Silicon Valley? Can you go into that a little more? Where do you see pockets that are doing different things?
I think AI is a massive wave, and I think we would be in our own bubble if we thought that it was divided by certain coasts. Where I think it manifests itself differently, however—and I think it’s impacting at a global level, to be honest, rather than in our own microcosms—is where you see a difference in the concentration of the talent pool around AI, and especially deep learning. Because, keep in mind, the notion of specializing in machine learning or visual cognition (deep learning is the best example) didn’t really exist before 2012. We talk a lot about data scientists, but true data scientists and machine learning experts are very, very hard to come by, because the field is, in many ways, driven by the explosion in data, and by the maturity the whole deep learning field is achieving, to be commercializable and for the techniques to be used in real products. It’s all very new, only existing in the last five to—if you want to be generous—ten years.
From that perspective, where talent is concentrated makes a difference. To come back to how, maybe, the East Coast compares, I think we will see AI companies across the board. I’m very optimistic, in that I think we have quite a bit of concentration of AI through the universities on the East Coast. I think of MIT, Carnegie Mellon, and Cornell; and what we’re seeing come out of Harvard and BU on the NLP side.
Across the universities, there are very, very deep pockets of talent, and I think that manifests itself both in the number and high quality of AI-enabled products and startups that we’re seeing get launched, and also among what one would call the “incumbents,” such as Facebook, Amazon, Google, Uber, and the list goes on. If you look closely at where their AI teams are—even though almost all the companies I just mentioned are headquartered in the Valley and, in the case of Amazon, in Seattle—their AI talent is concentrated on the East Coast; most notably, Facebook’s AI group is headquartered in New York. So, combine that talent concentration with the market that we, in particular, focus on with our strategy—the enterprise—where the East Coast has always had, and continues to have, an advantage, and I think it’s an interesting moment in time.
I assume with the concentration of government on the East Coast and finance on the East Coast, that you see more technologies like security and those sorts of things. Specifically, with security, there’s been this game that’s gone back and forth for thousands of years between people who make codes, and people who break them. And nobody’s ever really come to an agreement about who has the harder job. Can you make an unbreakable code, and can it be broken? Do you think AI helps those who want to violate security, or those who want to defend against those violations, right now?
I think AI will play an important role in defending and securing the ecosystem. The reason I say that is because, in this day and age, the exploding number of devices and pervasive connectivity everywhere—translated into cybersecurity lingo, an increase in the number of endpoints and areas of vulnerability, whether at the network and device level or at the data and identity level—has made us a lot more vulnerable. That is the paradigm we live in.
Where I think AI and machine learning can be true differentiators is that not only can they be leveraged, again, for the various software solutions to continuously learn, but on the predictive side they can point out where an attack is predicted before it actually takes place. There are certain patterns that help the enterprise home in on the vulnerability—from assessment, to the time of attack, at or during the attack, and then post attack. I do think that AI is a really meaningful differentiator for cyber security.
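As a toy illustration of that kind of predictive flagging, one could fit an off-the-shelf anomaly detector on normal endpoint behavior and score new activity against it; the features and numbers below are made up, and real systems use far richer signals.

```python
# Illustrative sketch only: flag anomalous endpoint behavior with an
# off-the-shelf anomaly detector; features and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows per endpoint: [logins_per_hour, megabytes_out, failed_auths]
normal_traffic = np.array([[3, 20, 0], [4, 25, 1], [2, 18, 0], [5, 30, 1]])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)

suspect = np.array([[40, 900, 25]])  # sudden spike: possible exfiltration
print(model.predict(suspect))        # -1 means the model flags it as anomalous
```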
You alluded, just a moment ago, to the lack of talent; there just aren’t enough people who are well-versed in a lot of these topics. How does that shake out? Do you think that we will use artificial intelligence to make up for shortage of people with the skills? Or, do you think that universities are going to produce a surge of new talent coming in? How do we solve that? Because you look out your window, and almost everything you see, you could figure out how we could use data to study that and make it better. It’s kind of a blue ocean. What do you think is going to happen in the talent marketplace to solve for that?
AI eventually will be a layer, you’re absolutely right. From that perspective, I cannot come up with an area where AI will not play a role, broadly put, for the foreseeable future and for a long time in the future.
In terms of the talent challenge, let me address your question in two parts. The talent shortage we have right now stems from the fact that it’s a relatively new field, or the resurgence of a field, and it’s the ability to now actually deploy it in the real world and commercialize it that is driving demand. The demand has spurred it, and of course, for that adjustment to take place, the supply requires talent, if I can think of it in that manner, and it’s not there. It’s a bit of a matter of market timing, at one level. For sure, we will see many more students enter the field, and many more students specialize and get trained in machine learning.
Then the real question becomes will part of their functions be automated? Will we need fewer humans to perform the same functions, which I think was the second part of your question if I understood it correctly?
Yes.
I think we’re in a phase of augmentation. And we’ve seen this in the past. Think about this, Byron: how did developers code, going back ten to fifteen years ago? In different languages, but largely from the ground up. How do they code today? I don’t know of any developer who doesn’t use the tools available to get a quick spin-up, and to ramp up quickly.
AI and machine learning are no different. Not every company is going to build their own neural net. Quite the opposite. A lot of them will use what’s open source and available out there in the market, or what’s commercialized for their needs. They might do some customization on top, and then they will focus on the product they’re building.
The fact that you will see part of the machine learning function that’s being performed by the data scientists be somewhat automated should come as no surprise, and that has nothing to do with AI. That has to do with driving efficiencies and getting tools and having access to open source support, if you will.
I think down the road—where AI plays a role both in augmentation and in automation—we will see definitional changes to what it means to be in a certain profession. For example, I think a medical doctor of the future might look, from a day-to-day activity point of view, very different from what we perceive a doctor’s role to be—from interaction to what they’re trained at. The fact that a machine learning expert and a data scientist—which, by the way, are not the same thing, but for the sake of argument I’m using them interchangeably—are going to use tools, and not start from scratch but leverage some level of automation and AI learning, is par for the course.
When I give talks on these topics, especially on artificial intelligence, I always get asked the question, “What should I, or what should my children, study to remain employable in the future?”—and we’ll talk about that in a minute, about how AI kind of shakes up all of that.
There are two kind of extreme ends on this. One school of thought says everyone in school should learn how to code, everyone. It’s just like one of the three R’s, but it starts with a C. Everyone should learn to code. And then Mark Cuban, at South by Southwest here in Austin, said that the first trillionaires are going to be from AI companies because it offers the ability to make better decisions, right? And he said if he were coming up today, he would study philosophy, because it’s going to be that kind of thinking that allows you to use these technologies, and to understand how to apply them and whatnot.
On that spectrum, from “everyone should code” to “no, we might just be making a glut of coders, when what we really need are people to think about how to use these technologies,” what would you say?
I have a 4-year-old daughter, so you better believe that I think about this topic quite a bit. My view is that AI is an enabler. It’s a tool for us as a society to augment and automate the mundane, and give us more ability and more room for creativity and different thinking. I would hope to God that the students of the future will study philosophy, they will study math, they will study the arts, they will study all the sciences that we know, and then some. Creativity of thinking, and diversity of thinking will remain the most precious asset we have, in my view.
I do think that, much like children today study the core hard sciences of math and chemistry and biology as well as literature, part of the core curriculum in the future will probably be some form of advanced statistics, introductory machine learning, or some level of computer science. We will see some technology training that becomes core, but I think that is a very, very, very different discussion than, “Everybody should study computer science,” or, looking forward, “Everybody should be a roboticist or machine learning expert or AI expert.” We need all the differentiation in thinking that we can get. Philosophy does matter, because what we do today shapes the present and society in the future.
Back to the talent question, to your point about someone who is well-versed in machine learning—which is different than data science, as you were saying—do you think those jobs are very difficult, and we’re always going to have a shortage of them because they’re just really hard? Or, do you think it’s just a case that we haven’t really taught them that much and they’re not any harder than coding in C or something? Which of those two things do you think it is?
I think it’s a bit more the latter than the former, that it’s a relatively new field. Yes, math and quants matter in this area, but it’s a new field. It will be talent that has certain predisposition around, like I said, math and quants, yes for sure. But, I do think that the shortage that we experience has a lot more to do with the newness of the field rather than the lack of interest or the lack of qualified talent or lack of aptitude.
When people say, “How can I spot a place to use artificial intelligence in my enterprise?” one thing I say is: find things that look like games. Because every time AI wins in chess, or beats Ken Jennings in Jeopardy or Lee Sedol in Go—the games are really neat because they are these very constrained universes with definable rules and clear objectives.
So, for example, you mentioned HR in your list of all the things it was going to affect, so I’ll use that one. When you have a bunch of resumes, and you’ve hired some people that get great performance reviews, and some people that don’t—you can think of them as points, or whatever—you can then look at it as a big game and try to predict, you know? You can go into each part of the enterprise and say, “What looks like a game here?” Do you have a rule like that, or just a guiding metaphor in your own mind? Because you see all these business plans, right? Is there something like that, that you’re looking for?
There were several questions embedded in this. Let me see if I can decouple a couple of them. I think any facet of the enterprise that is data-driven, or where there is information, can leverage learning and narrow AI for predictives, so you used some of the keywords. Are there opportunities for optimization? Are there areas where analytics are involved, where you can move away from basic statistical models and start leveraging AI? I think where there is room for efficiency and automation, you can leverage it. It’s hard not to find an area where you can leverage it. The question is where can you create the most value?
For example, if you are on the forefront of an enterprise on the sales side, can you leverage AI? Of course you can—not all prospective customers are created equal, there are better funnels, you can leverage predictives; the more and better data you have, the better the outcomes. At the end of the day, your neural net will perform only as well as the data you put in: junk in, junk out. That’s one facet.
If you’re looking at the marketing and technology side, think about how one can leverage machine learning and predictives around advertising, particularly on the programmatic side, so that you’re personalizing your engagement, in whichever capacity, with your consumer or your buyer. We can go down the list, Byron. The better question is, “What are the lower-hanging fruits where I can start taking advantage of AI right away, and which ones will I wait on?” rather than, “Do I have any areas at all?” If a particular manager or business person can’t find any areas, I think they’re missing the big picture and the day-to-day execution.
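The sales-side lead scoring mentioned above can be sketched with a simple learned model; the features, data, and model choice here are purely illustrative, and the “junk in, junk out” caveat applies to whatever data actually feeds it.

```python
# Illustrative sketch only: score sales leads by predicted conversion
# probability; features and training data are invented.
from sklearn.linear_model import LogisticRegression

# Features per lead: [company_size, pages_visited, demo_requested]
X = [[10, 2, 0], [500, 15, 1], [50, 8, 1], [5, 1, 0]]
y = [0, 1, 1, 0]  # historical outcome: did the lead convert?

model = LogisticRegression().fit(X, y)
new_lead = [[200, 12, 1]]
score = model.predict_proba(new_lead)[0][1]
print(score)  # work the highest-probability leads first
```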
I remember in the ‘90s when the consumer web became a big thing, and companies had a web department and they had a web strategy, and now that’s not really a thing, because the internet is part of your business. Do you think we’re like that with artificial intelligence, where it’s siloed now, but eventually, we won’t talk about it the way we’re talking about it now?
I do think so. I often get asked the very same question, “How do I think AI will shape up?” and I think AI will be a layer much like the internet has become a layer. I absolutely do. I think we will see tools and capabilities that will be ever pervasive.
Since AIs are only as good as the data you train them on, does it seem monopolistic to you that certain companies are in a place where they can constantly get more and more and more data, which they can therefore use to make their businesses stronger and stronger and stronger, and it’s hard for new entrants to come in because they don’t have access to the data? Do you think that data monopolies will become kind of a thing, and we’ll have to think about how to regulate them or how to make them available, or is that not likely?
I think the possession of data is, for sure, a barrier to entry in the market, and I do think that the current incumbents, probably more than we’ve ever seen before, have built this barrier to entry by amalgamating the data. How it will shake out… First of all, two thoughts: one, even though they have amassed huge amounts of data with this whole pervasive connectivity, and devices that stay connected all the time, even the large incumbents are only scratching the surface of the data we are generating, and the growth that we’ll continue to see on the data side. So, even though it feels oligarchy-like, maybe—not quite monopolistic—that the big players have so much data, I think we’re generating even more data going forward. So that’s sort of at the highest level.
I do think that, particularly on the consumer side, something needs to be done around customers taking control of their data. I think brands and advertisers have been squatting on consumer data with very little in return for us. I think, again, one can leverage AI in predictives, in that regard, to compensate—whether it’s through an experience or in some other form—consumers for their personal private data being used. And, we probably need some form of regulation, and I don’t know if it’s at the industry standard level, or with more regulatory bodies involved.
Not sure if you follow Sir Timothy Berners-Lee, who invented the web, but he talks a lot about data decentralization. I think there is something quite substantive in his statements around decentralizing the web and the data, and giving consumers a say. I think we’re seeing a bit of a groundswell in that regard. How it will manifest itself, I’m not quite sure, but I do think that the discussion around data will remain very relevant and become even more important as the amount of data increases, and as data becomes a critical barrier to entry for future businesses.
With regard to privacy in AI, do you think that we are just in a post-privacy world? Because so much of what you do is recorded one way or the other that data just exists and we’ll eventually get used to that. Or do you think people are always going to insist on the protections that you’re talking about, and ways to guarantee their anonymity; and that the technology will actually be used to help promote privacy, not to wear it down?
I think we haven’t given up on privacy. I think the definition of privacy might have changed, especially with the millennials and the social norms that they have been driving, which the rest of the population has largely adopted. I’d say we have a redefinition of privacy, but for sure we haven’t given up on it—even the younger generations, who often get accused of doing so. And you don’t need to take my word for it; look at what happened with Snap. In the early days, it was really almost tweens, but let’s say it was teenagers, who were on Snapchat, and what they were doing was “borderline misbehavior,” because it was going to go away; it wouldn’t leave a footprint. The value prop was that it disappears, so your privacy, your behavior, does not become exposed to the broader world. It mattered, and, in my view, it was a critical factor in the growth that the company saw.
I think you’d be hard pressed to find people, I’m sure they exist but I think they are in the minority, that would say, “Oh, I don’t care. Put all of my data, 24/7, let the world know what I’m up to.” Even on the exhibitionist side, I think there’s a limit to that. We care about privacy. How we define it today, I suspect, is very different than how we defined it in the past and that is something that’s still a bit more nebulous.
I completely agree with that. My experience with young people is they are onto it, they understand it better and they are all about it. Anyway, I completely agree with all of that.
So, what about European efforts with regard to the “right to know why”? If an artificial intelligence makes a decision that impacts your life—like gives you a loan or doesn’t—you have the right to know how that conclusion was made. How does that work in a world of neural nets where there may not be a why that’s understandable, kind of, in plain English? Do you think that that is going to hold up the development of black box systems, or that that’s a passing fad? What are your thoughts on that?
I think Europe has always been on the side of protecting consumers. We were just talking about privacy, and look at what they are doing with GDPR, and what’s coming to market from the data point of view on the topic we were just wrapping up. I think, as we gain a better understanding of AI and as the field matures, if we hide behind “We don’t quite know how the decision was made” (and we may not fully comprehend it), or behind “Oh, it’s hard to explain and people can’t understand it,” I think at some point that becomes a cop-out. I don’t think we need to educate everyone on how neural nets and deep learning work, but I think you can talk about the fundamentals of what the drivers are, how they interact with each other, and, at a minimum, you can give the consumer some basic level of understanding as to where they probably outperformed or underperformed.
It reminds me, in tech, we used to use acronyms in talking to each other, and making everybody feel like they were less intelligent than the rest of the world. I don’t think we need to go into the science of artificial intelligence machine learning to help consumers understand how decisions were made. Because guess what? If we can’t explain it to the consumer, the person on the other side that’s managing the relationship will not understand it themselves.
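One way to talk about the fundamental drivers without teaching the math is to surface which inputs weighed most heavily in a decision. A toy sketch for a loan decision follows; the features, data, and model are hypothetical, and the scaling step is there so the learned weights are comparable across features.

```python
# Illustrative sketch only: rank the drivers behind a loan decision by
# the magnitude of learned weights; features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[620, 0.45, 2], [760, 0.20, 8], [700, 0.35, 5], [580, 0.60, 1]])
y = np.array([0, 1, 1, 0])  # loan approved?
features = ["credit_score", "debt_to_income", "years_employed"]

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
weights = model.named_steps["logisticregression"].coef_[0]

# A consumer-facing explanation can name the top drivers without
# walking through the model mechanics.
for name, w in sorted(zip(features, weights), key=lambda p: abs(p[1]), reverse=True):
    print(f"{name}: {w:+.2f}")
```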
I think you’re right, but, if you ask Google, “Why did this page come number one for this search?” the answer, “We don’t know,” is perfectly understandable. It’s six hundred different algorithms that go into how they rank pages—or whatever the number is, it’s big. So, how can they know why this is page number one and that is page number two?
They may not know fully, or it may take some effort to drill in specifically as to why, but at some level they can tell you what some of the underlying drivers were behind the ranking or how the ranking algorithms took place etcetera, etcetera. I think, Byron, what you and I are going back and forth on is, in my view, it’s a level of granularity question, rather than can they or can they not. It’s not a yes or a no, it’s a granularity question.
There’s a lot of fear in the world around the effect that artificial intelligence is going to have on people, and one of the fear areas is the effect on jobs. As you know, there are, kind of, three narratives. One narrative is that there are some people who don’t have a lot of training in things that machines can’t do, and the machines are eventually going to take their jobs, and we’ll have some portion of the population that’s permanently unemployed, like a permanent Great Depression.
Then there’s a school of thought that says, “No, no, no. Everybody’s replaceable by a machine, that eventually, they’re going to get to a point where they can learn something new faster than a human, and then we’re all out of work.”
And then there’s a third group that says, “No, no, no, we’re not going to have any unemployment because we’ve had disruptive technologies: electricity, replacing animals with machines, and steam; all these really disruptive technologies, and unemployment never spiked because of those. All that happens is people learned to use those tools to increase their own productivity.”
My question to you is, which of those three narratives, or is there a fourth one, do you identify with?
I would say I identify only in part with the last narrative. I do think we will see job displacement. I do think we will see job displacement in categories of workers that we would have normally considered highly skilled. In my view, what’s different about the paradigm we are in vis-à-vis, let’s say, the Industrial Revolution, is that it is not only the lowest-trained or the highly specialized workers—if you think about artisanal-type workers back in the day—who get displaced from their roles: through automation, replaced by machines in the Industrial Revolution, or here by technology in the AI paradigm.
I think what’s tricky with the current paradigm is that the middle class and the upper middle class get impacted as much as the less-trained, low-skilled workers. There will be medical doctors, there will be attorneys, there will be highly educated parts of the workforce whose jobs—some of the jobs may be done away with—will in large part be redefined. And, very analogous to the discussion we were just having about the shortage of machine learning experts, we’ll see older generations who are still seeking to be active members of the workforce be put out of the labor market, or no longer qualified and in need of new training, and it will be a challenge for them to gain the training to perform as well as someone who has been learning the particular skill, say in medicine, in an AI paradigm from the get-go.
I think we’ll see a shift in job definitions, and a displacement of meaningful chunks of the highly trained workforce, and that will have significant societal consequences as well as economic consequences. Which is why I think a form of guaranteed basic income is a worthy discussion, at least until that generation of workers gets settled and the new labor force that’s highly trained in an AI-type paradigm comes into play.
I also think there will be many, many, many new jobs and professions created that we have yet to think about or even imagine. I do not think that AI is a net negative in terms of creating overall unemployment or lower employment. It’s not a net negative. I think—McKinsey and many, many others have done studies on this—in the long term, we’ll probably see more employment than not created as a result of AI. But, at any point in time, as we look at the AI disruption and adoption over the next few decades, I think we will see moments of pain, and meaningful pain.
That’s really interesting because, in the United States, as an example, since the Industrial Revolution, unemployment has been between five and nine percent, without fail, except the Great Depression, which nobody said was caused by technology. If you think about an assembly line, an assembly line is AI. If you were making cars one at a time in a garage, and then all of a sudden Henry Ford shows up and makes them a hundred at a time and sells them for a tenth the price, and they’re better, that has got to be like, “Oh my gosh, this AI, this technology just really upset this enormous number of people,” and yet you never see unemployment go above nine percent in this country.
I will leave the predictions of the magnitude of the impact to the macroeconomists; I will focus on startups. But let me stick with that example: you have artisanal shops and sewing by hand, and then the machine comes along with the factory line, and now it’s all automated, and you and others are displaced. So, for every ten of you who were working, one is now on the factory line and nine find themselves out of a position. That was the paradigm I was describing a minute ago with doctors and lawyers and other professions: a lot of their function will become automated or replaced by AI. But then, it’s also the case that their children or their grandchildren are studying outer space, or going into astronomy and other fields that we might have, at a folklore level, thought about but never expected we’d get to; so, new fields emerge.
The pain will be felt, though. What do you do with the nine out of ten who are, right there and then, out of a position? In the long term, in an AI paradigm, we’ll see many, many more professions get created. It’s just about where you get caught in the cycle.
It’s true. In ’95, you never would have thought, “If you just connect a bunch of computers together with a common protocol and make the web, you’re going to have Google and eBay and Etsy.”
Let’s talk about startups for a minute. You see a lot of proposals, and then you make investments, and then you help companies along. What would you say are the most common mistakes that you’re seeing startups make, and do you have general advice for portfolio companies?
Well, my portfolio companies get the advice in real time, but I think, especially for AI companies—to go back to how you opened this discussion, which was referencing a byline I had done for Gigaom—if a company truly does have artificial intelligence, show it. And it’s pretty easy to show. You show how your product leverages various learning techniques, you show who the people on your team are that are focusing on machine learning, but you also show how you, the founder, whether you are a technical founder or not, understand the underpinnings of AI and of machine learning. I think that’s critical.
So many companies are calling themselves something-something-dot-AI, and it’s very, very similar and analogous to what we saw with big data. If you remember, seven to ten years ago, every company was big data. Every company is now AI, because it’s the hot buzzword. So, rising above the noise while taking advantage of the wave is important, but meaningfully so: because it’s valuable to your business, and because, from the get-go, you’re taking advantage of machine learning and AI, not because it’s the buzzword of the day that you think might get you money. The fact of the matter is, for those of us who live and breathe AI and startups, we’ll cut through the noise fairly quickly; pattern recognition and the number of deals we see in any given week are such that the true AI capabilities will stand out. That’s one piece.
I do think, also, that for the companies and founders that truly are leveraging neural nets, truly are getting the software or hardware—whatever their product might be—to outperform, the dynamics within the companies have changed. Because we don’t just have the technology team, consisting of the developers, with the link to the product people; we now have this third leg, the machine learning or data science people. So, how is the product roadmap being driven? Is it the product people driving it? Is the machine learning talent coming up with models to help support it, or are they driving it, with product turning it into a roadmap and the developers implementing it? It’s a whole new dynamic among these various groups.
There’s a school of thought, in fact, that says, “Machine learning experts, who’s that? It’s the developers who will have machine learning expertise, they will be the same people.” I don’t share the view. I think developers will have some level of fluency in machine learning AI, but I think we will have distinct talent around it. So, getting the culture right amongst those groups makes a very, very big difference to the outcome. I think it’s still in the making, to be honest.
This may be an unanswerable question, because it’s too vague.
Lucky me.
I know.
Go ahead.
Two business plans come across your desk, and one of them is a company that says, “We have access to data that nobody else has, and we can use this data to learn how to do something really well,” and the other one says, “We have algorithms that are so awesome that they can do stuff that nobody else knows how to do.” Which of those do you pick up and read first?
Let’s merge them. Ideally, you’d like to have both the algorithms, or the neural nets, and the data. If you really force me to pick one, I’ll pick the data. I think there are enough tools out there, enough TensorFlows or whatnot in the market and in open source, that you could probably work with those and build on top of them. Data becomes the big differentiator.
I think of data, Byron, today as we used to think of patents back in the day. The role of patents is an interesting topic because, with execution, they’ve taken a second or third seat as a barrier to entry. But, back ten, fifteen years ago, patents mattered a lot more. I think data can give you that kind of barrier to entry, and even more so. So, I pick data. It is an answerable question; I’ll pick the data.
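Her point about commodity tooling versus proprietary data can be made concrete: the model below is a few lines of the kind of open-source TensorFlow code anyone can copy, while the training data is the part a competitor cannot download. Everything here is a stand-in, not a real system.

```python
# Illustrative sketch only: the modeling code is commodity; the data is
# the differentiator. Random arrays stand in for a proprietary dataset.
import numpy as np
import tensorflow as tf

X_proprietary = np.random.rand(1000, 8)          # stand-in for your unique data
y = (X_proprietary.sum(axis=1) > 4).astype(int)  # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_proprietary, y, epochs=5, verbose=0)
# A competitor can copy these few lines; they cannot copy X_proprietary.
```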
Actually, my very next question was about the role of patents in this world. Because doesn’t the world change so quickly, and don’t you have to disclose so much? Would you advise people to keep them as trade secrets? Or, just, how do you think companies who develop a technology should protect and utilize it?
I think your question depends a bit on what facet of technology we’re talking about. In the life sciences, they still matter quite a bit, which is an area that I don’t know as much about, for sure. I think, in technology, their role has diminished, although it is still relevant. I cannot think of a company that became big and a market leader because it had patents. I think they are an important facet, but they are not the make-or-break must-have. In my view, they are a nice-to-have.
I think where one pauses is if their immediate competitor has a healthy body of patents; then you think a bit more about that. As far as the tradeoff between patents and trade secrets, I think there is a moment in time when one files a patent, especially if secrecy matters. At the end of the day, though—and this may be ironic given that we’re talking about artificial intelligence startups—much like any other facet of our lives, what matters is excellence of execution, and people. People can make or break you.
So, when you ask me about the various startups that I see, and talk about the business plans, I never think of them as “the business plan.” I always think of them in the context of, “Who are the founders? Who are the team members, the management team?” So, team first. Then, market timing for what they are going after, because you could have the right execution or the right product but the wrong market timing. And then, of course, the question of what problem they are solving, and how they are taking advantage of AI. But people matter. To come back to your question, patents are one more area where a startup can build defensibility, but they are not the end-all and be-all by any stretch, and they have a diminished role, in fact.
How do you think startups have changed in the last five or ten years? Are they able to do more early? Or, are they demographically different—are they younger or older? How do you think the ecosystem evolves in a world where we have all these amazing platforms that you can access for free?
I think we’ve seen a shift. Earlier, you referenced the web, and with the emergence of the web, back in 1989, we saw digital and e-commerce and martech; and entire new markets get created. In that world—what I’ll call not just pure technology businesses, but tech-enabled businesses—we saw a shift both in younger demographics and startups founded by younger entrepreneurs, but also more diversity in terms of gender and background as well, in that not everybody needed to have a computer science degree or an engineering degree to be able to launch a tech or a tech-enabled company.
I think that became even more prevalent and emphasized in the more recent wave that we’re just on the completion side of with social-mobile. I mean, the apps, that universe and ecosystem, it’s two twenty-year-olds, right? It’s not the gray-headed three-time entrepreneur. So, we absolutely saw a demographic shift. In this AI paradigm, I think we’ll see a healthy mixture. We’ll see the researcher and the true machine learning expert who’s not quite twenty but not quite forty either, so, a bit more maturity. And then we’ll see the very young cofounder or the very experienced cofounder. I think we’ll see a mix of demographics and age groups, which is the best. Again, we’re in a business of diversity of thought and creativity. We’re looking for that person who’s taking advantage of the tools and innovation and what’s out there to reimagine the world and deliver a new experience or product.
I was thinking it’s a great time to be a university professor in these topics because, all of a sudden, they are finding themselves courted right and left because they have long-term deep knowledge in what everyone is trying to catch up on.
I would agree, but keep in mind that there is quite a bit of a chasm between teaching a topic and actually commercializing it. So I think the professors who are able to cross the chasm—not to sound too Geoffrey Moore-ish—are the ones that, yes, are in the right field at the right moment in time. Otherwise, it’s their students, the talent that is knowledgeable enough, those PhDs who don’t go into academia but are actually going into commercialization, execution, and implementation; that’s the talent that’s in high demand.
My last question is, kind of, how big can this be? If you’re a salesperson, and you have a bunch of leads, you can just use your gut and pick one and work that one, or you have data that informs you and makes you better. If you’re an HR person, you hire people more suited to the job than you would have before. If you’re a CEO, you make better decisions about something. If you’re a driver, you can get to the place quicker. I mean, when you add all of that up across an entire world of inefficiency… So, you kind of imagine this world where, on one end of the spectrum, we all just kind of stumble through life like drunken sailors on shore leave, randomly making decisions based on how we feel; and then you think of this other world where we have all of this data, and it’s all informed, and we make the best decisions all the time. Where do you think we are? Are we way over at the wandering-around end, and this is going to get us over to the other side? How big of an impact is this? Could artificial intelligence double GNP in the United States? How would you say how big it can be?
Fortunately, or unfortunately, I don’t know, but I don’t think we live in a binary world. I think, like everything else, it’s going to be a matter of shades. I think we’ve driven productivity and efficiency, historically, to entirely new levels, but I don’t think we have any more free time, because we find other ways to occupy ourselves even in our roles. We have mobile phones now, we have—from a legacy perspective—laptops, computers, and whatnot; yet, somehow, I don’t find myself vacationing on the beach. Quite the contrary, I’m more swamped than ever.
I think we have to be careful about—if I understood your question correctly—transplanting technology into, “Oh, it will take care of everything and we’ll just kind of float around a bit dumber, a bit freer, and whatnot.” I think we’ll find different ways to reshape societal norms, not in a bad way, but in a, “What constitutes work?” way, and possibly explore new areas that we didn’t think were possible before.
I think it’s not necessarily about gaining efficiency; I think we will use that time, not in an unproductive or leisurely way, but to explore other markets, other facets of life that we may or may not have imagined. I’m sorry for giving you such a high-level answer and not making it more concrete. Productivity from technology has been, as you well know, very hard to measure. We know, anecdotally, that it’s had an impact on measured activity, but there are entire groups of macroeconomists who not only can’t measure it, but don’t believe it has improved productivity.
It will have a fundamental transformative impact; whether we’re able to measure it—I know you defined it as GNP, but I’m defining it from a productivity point of view—remains to be seen. Some would argue that it’s not productive, but I would throw the thought out there that traditional methodologies of measuring productivity do not account for technological impact. Maybe we need to look at how we’re defining productivity. I don’t know if I answered your question.
That’s good. The idea that technology hasn’t increased our standard of living, I don’t think… I live a much more leisurely life than my great grandparents, not because I work any harder than them, but because I have technology in my life, and because I use that technology to make me more productive. I know the stuff you’re referring to where it’s like, “We’ve got all these computers in the office and worker productivity doesn’t seem to just be shooting through the roof.” I don’t know. Let’s leave it there.
Actually, I do have a final question. You said you have a four-year-old daughter, are you optimistic overall about the world she’s going to grow up in with these technologies?
My gosh! We’re going into a shrink session.
No, I mean are you an optimist or a pessimist about the future?
Apparently, I’ve just learned—in the spirit of sharing information with you and all your listeners—that my age group falls into something called the Xennial where we are very cynical like Generation X, but also optimists like the Millennials. I’m not sure what to make of that. I would call it an interesting hybrid.
I am very optimistic about my daughter’s future, though. I think of it as, today’s twentysomethings are digital natives, and today’s ten-year-olds and later are mobile natives. My daughter is going to be an AI native, and what an amazing moment in time for her to be living in this world. The opportunities she will have and the world she will explore on this planet and beyond, I think, will be fascinating. I do hope that somewhere in the process, we manage to find a bit more peace, and not destroy each other. But, short of that, I think I’m quite optimistic about the future that lies ahead.
Alrighty, well let’s leave it at that. I want to thank you for an absolutely fascinating hour. We touched on so many things and I just thank you for taking the time.
My pleasure. Thanks again for having me.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]

Four Questions For: Ryan Calo

How do you draw the line between prosecuting a robot that does harm and its creator? Who bears the burden of the crime or wrongdoing?
I recently got the chance to respond to a short story by a science fiction writer I admire. The author, Paolo Bacigalupi, imagines a detective investigating the “murder” of a man by his artificial companion. The robot insists it killed its owner intentionally in retaliation for abuse and demands a lawyer.
Today’s robots are not likely to be held legally responsible for their actions. The interesting question is whether anyone will be. If a driverless car crashes, we can treat the car like a defective product and sue the manufacturer. But where a robot causes a truly unexpected harm, the law will struggle. Criminal law looks for mens rea, meaning intent. And tort law looks for foreseeability.
If a robot behaves in a way no one intended or foresaw, we might have a victim with no perpetrator. This could happen more and more as robots gain greater sophistication and autonomy.
Do tricky problems in cyber law and robotics law keep you awake at night?
Yes: intermediary liability. Personal computers and smart phones are useful precisely because developers other than the manufacturer can write apps for them. Neither Apple nor Google developed Pokemon Go. But who should be responsible if an app steals your data or a person on Facebook defames you? Courts and lawmakers decided early on that the intermediary—the Apple or Facebook—would not be liable for what people did with the platform.
The same may not be true for robots. Personal robotics, like personal computers, is likely to rise or fall on the ingenuity of third party developers. But when bones instead of bits are on the line—when the software you download can touch you—courts are likely to strike a different balance. Assuming, as I do, that the future of robotics involves robot app stores, I am quite concerned that the people that make robots will not open them up to innovation due to the uncertainty of whether they will be held responsible if someone gets hurt.
Would prosecution of someone who harms a robot be different from prosecution of someone who harms a non-thinking or non-intelligent piece of machinery?
It could be. The link between animal abuse and child abuse, for instance, is so strong that many jurisdictions require authorities responding to an animal abuse allegation to alert child protective services if kids are in the house. Robots elicit very strong social reactions. There are reports of soldiers risking their lives on the battlefield to rescue a robot. In Japan, people have funerals for robotic dogs. We might wonder about a person who abuses a machine that feels like a person or a pet. And, eventually, we might decide to enhance penalties for destroying or defacing a robot beyond what we usually levy for vandalism. Kate Darling has an interesting paper on this.
Should citizens be concerned about robotic devices in their homes compromising their privacy, or about hackers attacking their medical devices? How legitimate or illegitimate are people’s fears about the rise of technology?
People should be concerned about robots and artificial intelligence but not necessarily for the reasons they read about in the press. Kate Crawford of Microsoft Research and I have been thinking through how society’s emphasis on the possibility of the Singularity or a Terminator distorts the debate surrounding the social impact of AI. Some think that superintelligent AI could be humankind’s “last invention.” Many serious computer scientists working in the field scoff at this, pointing out that AI and robotics are technologies still in their infancy. But despite AI’s limits, these same experts advocate introducing AI into some of our most sensitive social contexts such as criminal justice, finance, and healthcare. As my colleague Pedro Domingos puts it: The problem isn’t that AI is too smart and will take over the world. It’s that it is too stupid and already has.
Ryan Calo
Ryan Calo is a law professor at the University of Washington and faculty co-director of the Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Department of Computer Science and Engineering. Calo holds courtesy appointments at the University of Washington Information School and the Oregon State University School of Mechanical, Industrial, and Manufacturing Engineering. He has testified before the U.S. Senate and German Parliament and been called one of the most important people in robotics by Business Insider. This summer, he helped the White House organize a series of workshops on artificial intelligence.
@rcalo on Twitter
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972
http://www.slate.com/articles/technology/future_tense/2016/04/a_robotics_law_expert_on_paolo_bacigalupi_s_mika_model.html

US Judge confuses privacy and security, concludes that you should have neither

Senior U.S. District Judge Henry Coke Morgan Jr., a federal judge for the Eastern District of Virginia, has ruled that the user of any computer which connects to the Internet should not have an expectation of privacy, because computer security is ineffectual at stopping hackers.
The ruling, made on June 23rd, was reached in one of the many cases resulting from the FBI’s infiltration of PlayPen, a hidden child exploitation site on the Tor network. After taking control of the site, the FBI kept it up and running, using it to plant malware on visitors’ computers and gather identifying information that was used to enable prosecution.
JCM ruled that the FBI’s actions in hacking visitors’ computers did not violate Fourth Amendment protections and did not require a warrant, stating that the “Defendant here should have been aware that by going [on-line] to access Playpen, he diminished his expectation of privacy.”
JCM offered as an analogy a previous case (Minnesota v. Carter, 525 U.S. 83 (1998)), which held that a police officer looking through broken window blinds does not violate anyone’s Fourth Amendment rights; by analogy, hacking a computer does not either.

“Just as the area into which the officer in Carter peered - an apartment - usually is afforded Fourth Amendment protection, a computer afforded Fourth Amendment protection in other circumstances is not protected from Government actors who take advantage of an easily broken system to peer into a user's computer. People who traverse the Internet ordinarily understand the risk associated with doing so.”

JCM notes that in 2007 the Ninth Circuit found that connecting to a network did not eliminate the reasonable expectation of privacy in one’s computer, but takes the position that things have changed enough in the intervening nine years to render that holding outdated.

“Now, it seems unreasonable to think that a computer connected to the Web is immune from invasion. Indeed, the opposite holds true: in today's digital world, it appears to be a virtual certainty that computers accessing the Internet can - and eventually will - be hacked.”

As justification for this opinion, JCM cites the Ashley Madison hack and a Pew Research Center study on privacy and information sharing as evidence of the acceptance that hacking is inevitable. The Pew study looked at Americans’ attitudes toward sharing personal information in return for receiving something of perceived value. Although the focus of the Pew report was on privacy rather than security, it did report that focus group participants “worried about hackers.” However, these concerns were expressed exclusively in terms of a hacker’s ability to gain access to personal data from compromised business computer systems, not personal systems in the home.
Judge Coke Morgan’s level of technical understanding appears to be highly selective. The same judge who ruled on a patent case between Vir2us, Inc. and Invincea, Inc. over competing claims covering advanced anti-malware products fails to acknowledge that anti-malware products continue to advance. In offering that “Terrorists no longer can rely on Apple to protect their electronically stored private data, as it has been publicly reported that the Government can find alternative ways to unlock Apple users’ iPhones,” he ignores the level of expertise needed to identify the exploit that was used to access the phone used by one of the San Bernardino attackers, and the fact that the hack in question applied only to the now-superseded iPhone 5C.

While it may be possible to unlock older iPhones running back-level OS releases that lack the most up-to-date security features, Apple continues to develop new hardware-based security features and to fix security vulnerabilities as it finds them. It is reasonable to claim that many computers are vulnerable to attack, but in suggesting that this makes it “a virtual certainty that [all] computers accessing the Internet can – and eventually will – be hacked,” or that nothing can be done to mitigate this risk, JCM is either being deliberately disingenuous or is failing in his analysis.
Describing the ruling as “dangerously flawed,” EFF Senior Staff Attorney Mark Rumold wrote: “The implications for the decision, if upheld, are staggering: law enforcement would be free to remotely search and seize information from your computer, without a warrant, without probable cause, or without any suspicion at all.” He adds, however, that the ruling is “incorrect as a matter of law, and we expect there is little chance it would hold up on appeal.”

Snapchat’s effort to make its policies more readable backfires

Snapchat has attempted to ease the fear, uncertainty, and doubt that spread like wildfire after it updated its terms of service and privacy policy last week.
The company said in a blog post today that it continues to delete users’ photos from its servers after they are viewed or have expired. This means it “could not — and do[es] not — share [private images] with advertisers or business partners,” according to the company. Content shared via Snapchat is just as ephemeral as it was before the updates.
Snapchat explained in the post that it changed the policies to be more readable, to allow for in-app purchases like the counterintuitive Replays, and to make users aware of the information they have given the service. These were all routine updates tech companies make to their policies semi-regularly.
The reaction to these updates was also routine. Just look at what happened when Instagram updated its policies to make it clear that it planned to use photos shared to its service in advertisements. People started to lose their minds, but as The Verge’s Nilay Patel explained, the problem didn’t lie with the policies themselves. It lay with Instagram’s inability to explain them and a lack of trust in Facebook.
Snapchat could have learned from the Instagram debacle. Instead of posting something on its blog when people started to freak out, it could’ve published the same exact blog post when it first made the changes. That might’ve helped people understand exactly what the company intended with its new policies.
There might be another problem: Making the policies readable to humans sounds good in theory, but in practice things could be just a little bit messier.
Nobody can be expected to read through all the terms of service and privacy policies for everything they use. That would require far more time than anyone wants to spend when they’re setting up their iPhone, for example, or signing up for the newest social tool for teens who want to indicate their down-ness.
So we click the “agree” button without knowing what’s happening, content to keep ourselves from drowning in a flood of legalese. Even if we did read many of these policies, it would be hard to tell exactly what companies are allowed to do, mostly because the vast majority of us aren’t familiar with applicable laws.
This is an obvious problem. Making policies and agreements easier to read is admirable. But when people realize exactly what they’re agreeing to, especially if those terms aren’t broken down like they are in Snapchat’s blog post, they’re likely to respond with the fear Snapchat’s users showed after these updates.
It would be easier for tech companies to keep the legalese and prevent their users from ever understanding what they’ve agreed to until scandal breaks out. The companies are damned if their policies are inscrutable to normal people, and damned if they make them more readable but people misinterpret them.
Snapchat has learned this the hard way. Some of the blame lies with the company — as I said, it could’ve saved itself a headache by publishing yesterday’s blog post earlier — but a lot of it lies with us for not knowing what we’ve agreed to in the past. Welcome to the wonders of modern technology.

Snowden revelations threaten U.S.-EU data transfer deal

A data-sharing agreement between the European Union and the United States should be invalidated following Edward Snowden’s 2013 revelations of mass surveillance programs, according to Yves Bot, Advocate General at the EU Court of Justice.
The agreement to which Bot refers is the Safe Harbor decision from 2000. It allows US companies to self-certify that they comply with EU rules governing the transfer of data related to European citizens to other countries, like the US.
“The access enjoyed by the United States intelligence services to the transferred data constitutes an interference with the right to respect for private life and the right to protection of personal data,” Bot stated in an opinion published this morning. He concluded that Safe Harbor is “no longer adequate” and that “the decision adopted in 2000 was no longer adapted to the reality of the situation.”
The opinion was published in response to a complaint brought against Facebook by privacy advocate Max Schrems, who says the personal data of European citizens has been made available to U.S. intelligence agencies via the social network.
Schrems has welcomed Bot’s recommendation, saying in response that “This finding, if confirmed by the court, would be a major step in limiting the legal options for US authorities to conduct mass surveillance on data held by EU companies, including EU subsidiaries of US companies.” He also argues that invalidating Safe Harbor would level the playing field:

Self-certification under safe harbor gives US companies an extremely unfair advantage over all other players on the European market that have to stick to much stricter EU law. Removing ‘safe harbor’ would mainly mean that US companies have to play by rules that are equal to those their competitors already play by and that they cannot aid US mass surveillance.

It’s important to note that Bot’s opinion is non-binding, though the court is said to often side with the advocate general. Facebook wouldn’t be the only company affected by the invalidation of Safe Harbor, either; it would affect all companies that transfer data about European citizens to servers located in the US. The BBC reports that a decision like this could affect an estimated 4,000 companies.
In response to a request for comment, a Facebook spokesperson said the company “operates in compliance with EU Data Protection law. Like the thousands of other companies who operate data transfers across the [A]tlantic we await the full judgement.” And, in response to complaints that transferred data is given to US intelligence agencies through surveillance programs:

We have repeatedly said that we do not provide ‘backdoor’ access to Facebook servers and data to intelligence agencies or governments.  As Mark said in June 2013, we had never heard of PRISM before it was reported by the press and we have never participated in any such scheme.

The court’s judges are expected to make their own ruling later this year.