Voices in AI – Episode 69: A Conversation with Raj Minhas


About this Episode

Episode 69 of Voices in AI features host Byron Reese and Dr. Raj Minhas talking about AI, AGI, and machine learning. They also delve into explainability and other quandaries AI is presenting. Raj Minhas holds a PhD and an MS in Electrical and Computer Engineering from the University of Toronto, and a BE from Delhi University. He is also Vice President and Director of the Interactive and Analytics Laboratory at PARC.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m excited that our guest is Raj Minhas, who is Vice President and the Director of Interactive and Analytics Laboratory at PARC, which we used to call Xerox PARC. Raj earned his PhD and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. He has eight patents and six patent-pending applications. Welcome to the show, Raj!
Raj Minhas: Thank you for having me.
I like to start off just by asking a really simple question, or what seems like a very simple question: what is artificial intelligence?
Okay, I’ll try to give you two answers. One is a flip response, which is: if you tell me what intelligence is, I’ll tell you what artificial intelligence is. But that’s not very useful, so I’ll try to give you my functional definition. I think of artificial intelligence as the ability to automate cognitive tasks that we humans do, so that includes the ability to process information, make decisions based on it, and learn from it, at a high level. That functional definition is useful enough for me.
Well I’ll engage on each of those, if you’ll just permit me. I think even given a definition of intelligence which everyone agreed on, which doesn’t exist, artificial is still ambiguous. Do you think of it as artificial in the sense that artificial turf really isn’t grass, so it’s not really intelligence, it just looks like intelligence? Or, is it simply artificial because we made it, but it really is intelligent?
It’s the latter. So if we can agree on what intelligence is, then artificial intelligence to me would be the classical definition: re-creating that outside the human body. It may not be re-created the way it is created in our minds, the way humans or other animals do it, but it’s re-created in that it achieves the same purpose: it’s able to reason in the same way, to perceive the world, to do problem solving in that way. So without getting bogged down in what the mechanism is by which we have intelligence, and whether that mechanism needs to be the same, artificial intelligence to me would be re-creating that ability.
Fair enough, so I’ll just ask you one more question along these lines. Using your definition, the ability to automate cognitive tasks, let me give you four or five things, and you tell me if they’re AI. AlphaGo?
Yes.
And then a step down from that, a calculator?
Sure, a primitive form of AI.
A step down from that: an abacus?
An abacus, sure, though it involves humans in its operation. Maybe it’s on that boundary where it’s only partially automated, but yes.
What about an assembly line?
Sure, so I think…
And then my last one, which is a cat food dish that refills itself when it’s empty? And if you say yes to that…
All of those things to me are intelligent, but some of them are very rudimentary. For example, look at animals. On one end of the scale are humans, who can do a variety of tasks that other animals cannot; on the other end of the spectrum you may have very simple organisms, single-celled ones, that do things that look intelligent but may simply be responding to stimuli, with that intelligence very much encoded. They may not have the ability to learn, so they may not have all aspects of intelligence. But I think this is where it gets really hard to say what intelligence is, which is my flip response.
If you say, “What is intelligence?” I can say I’m trying to automate that with artificial intelligence. So if you include in your definition of intelligence, which I do, that the ability to do math implies intelligence, then automating that with an abacus is a way of doing it artificially, right? You had been doing it in your head using whatever mechanism is in there, and now you’re trying to do it artificially. So it is a very hard question that seems so simple, but at some point, in order to be logically consistent, you have to say yes: if that’s what I mean by intelligence, then that’s what I mean, even though the examples can get very trivial.
Well I guess then, and this really is the last question along those lines: if everything falls under your definition, then what’s different now? What’s changed? I mean, a word that means everything means nothing, right?
That is part of the problem, but I think what is becoming more and more different is the kinds of things you’re able to do, right? We are able to reason artificially now in ways that we were not able to before. Even if you take the narrower definition that people tend to use, which is around machine learning, we’re able to use that to perceive the world in ways we were not able to before. So what is changing is the ability to do more and more of those things without relying on a person at the point of doing them. We still rely on people to build those systems and teach them how to do those things, but we are able to automate a lot of that.
Obviously, artificial intelligence to me is more than machine learning, where you show something a lot of data and it learns a function, because it includes the ability to reason about things, to be able to say, “I want to create a system that does X; how do I do it?” So can you reason about models, and come to some way of putting them together, composing them to achieve that task?
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi


About this Episode

Episode 67 of Voices in AI features host Byron Reese and Amir Khosrowshahi talking about explainability, privacy, and other implications of using AI for business. Amir Khosrowshahi is VP and CTO of AI products at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.
Amir Khosrowshahi: Thank you, thanks for having me.
I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a PhD in Computational Neuroscience. So start off by just telling us: what is computational neuroscience?
So neuroscience is the study of the brain, and it is a mostly biologically minded field. There are aspects of the brain that are computational, and there are aspects of studying the brain that involve opening up the skull, peering inside, sticking needles into areas, and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning, and mathematical aspects to intelligence, and then the biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.
I have a theory which I may not be qualified to have and you certainly are, and I would love to know your thoughts on it. I think it’s very interesting that people are really good at getting trained with a sample size of one: draw a made-up alien you’ve never seen before, and then I can show you a series of photographs, and even if that alien’s upside down, underwater, behind a tree, whatever, you can spot it.
Further, I think it’s very interesting that people are so good at transfer learning. I could give you two objects, like a trout swimming in a river and that same trout in a jar of formaldehyde in a laboratory, and I could ask you a series of questions: do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know. And yet, likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say “yes,” and then somebody would say, “Well, have you ever done it?” And I’m like, “Yeah,” and they would say, “When?” And it’s like, I don’t really remember, but I know I have. Somehow we take data and throw it out and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain that you could cut out and somehow forget. And so when I think of all of those things that seem so different from computers, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?
Okay, those are very deep questions, and actually each one of those items is a separate thread in the field of machine learning and artificial intelligence. There are lots of people working on these things. The first thing you mentioned, I think, was one-shot learning, where you see something that’s novel, and from the first time you see it you recognize it as something singular, and you retain that knowledge to identify it if it occurs again. For a child it would be something like a chair; for you it’s potentially an alien. So, how do you learn from single examples?
That’s an open problem in machine learning and is very actively studied (it’s a good problem to have), because you want a parsimonious strategy for learning. The current ways we’re doing learning in, for example, online services that sort photos and recognize objects in images are very computationally wasteful, and actually wasteful in their usage of data. You have to see many examples of chairs to have an understanding of a chair, and it’s not even clear that you then have an understanding of a chair, because the models we have today for chairs do make mistakes. When you peer into where the mistakes were made, it seems the machine learning model doesn’t actually have an understanding of a chair; it doesn’t have a semantic understanding of a scene, or of grammar, or of the languages it translates. We’re noticing these inefficiencies and trying to address them.
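To make the one-shot idea concrete, here is a minimal sketch in Python (an illustration under stated assumptions, not anything described in the episode): store a single embedding per novel class and classify new images by similarity to those stored examples. The embed() function is a hypothetical stand-in for a pretrained feature extractor.

```python
# One-shot recognition sketch: one stored embedding per class, nearest
# prototype wins. embed() is a hypothetical stand-in for a pretrained
# feature extractor; all names here are illustrative assumptions.
import numpy as np

def embed(image):
    # Stand-in embedding: flatten the pixels and L2-normalize. A real
    # system would use a pretrained network's penultimate-layer features.
    v = np.asarray(image, dtype=float).reshape(-1)
    return v / (np.linalg.norm(v) + 1e-9)

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> the single stored embedding

    def learn_from_one(self, label, image):
        # One example is all we store for a novel class (a chair, an alien).
        self.prototypes[label] = embed(image)

    def classify(self, image):
        q = embed(image)
        # Cosine similarity to each prototype (vectors are normalized).
        return max(self.prototypes, key=lambda lbl: float(self.prototypes[lbl] @ q))
```

In practice everything hinges on the quality of the embedding, which is exactly where the data-hungry training described above comes back in.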
You mentioned some other things, such as how you transfer knowledge from one domain to the next. Humans are very good at generalizing: we see an example of something in one context, and it’s amazing that we can extrapolate or transfer it to a completely different context. That’s also something we’re working on quite actively, and we have had some initial success: we can take a statistical model that was trained on one set of data and apply it to another set of data, using that previous experience as a warm start and then moving away from the old domain to the new domain. This is also possible to do in continuous time.
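As an illustration of that warm-start idea, here is a hedged sketch using PyTorch; the layer sizes, the data, and the choice to freeze the transferred layer are all assumptions made up for the example, not details from the interview.

```python
# Warm-start transfer sketch: reuse feature weights trained on a source
# domain as the starting point for a target domain with new classes.
import torch
import torch.nn as nn

# Model assumed already trained on the source domain (10 classes).
source_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Target model: same feature layer, fresh head for 3 new classes.
target_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
# The warm start: copy the shared feature-extractor weights across.
target_model[0].load_state_dict(source_model[0].state_dict())

# Optionally freeze the transferred layer and train only the new head.
for p in target_model[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in target_model.parameters() if p.requires_grad], lr=0.01
)
loss_fn = nn.CrossEntropyLoss()

# Stand-in target-domain batch.
x, y = torch.randn(8, 32), torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss_fn(target_model(x), y).backward()
optimizer.step()
```

Unfreezing the copied layer and continuing to train it at a small learning rate would be the “moving away from the old domain to the new domain” step.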
Many of the things we experience in the real world are not stationary; their statistics change with time. We need models that can also change. That’s easy for a human, who is good at handling non-stationary statistics, so we need to build it into our models and be cognizant of it; we’re working on that. And then [for] other things you mentioned: intuition is very difficult, potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having a kind of hazy idea of having done something bad to yourself with a hammer: I’m not actually sure where that falls into the various subdomains of machine learning.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 63: A Conversation with Hillery Hunter


About this Episode

Episode 63 of Voices in AI features host Byron Reese and Hillery Hunter discussing AI, deep learning, power efficiency, and understanding the complexity of what AI does with the data it is fed. Hillery Hunter is an IBM Fellow and holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, our guest is Hillery Hunter. She is an IBM Fellow, and she holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign. Welcome to the show, Hillery.
Hillery Hunter: Thank you, it’s such a pleasure to be here today. Looking forward to this discussion, Byron.
So, I always like to start off with my Rorschach test question, which is: what is artificial intelligence, and why is it artificial?
You know, that’s a great question. My background is in hardware and in systems, in the actual compute substrate for AI. So one of the things I like to do is demystify what AI is. There are certainly a lot of definitions out there, but I like to take people to the math that’s actually happening in the background. When we talk about AI today, especially in the popular press, and people talk about the things AI is doing, be it understanding medical scans, labelling people’s pictures on a social media platform, understanding speech, or translating language, all those things considered core functions of AI today are actually deep learning, which means using many-layered neural networks to solve a problem.
There are also other parts of AI, though, that are much less discussed in the popular press, which include knowledge and reasoning and creativity and all these other aspects. The reality of where we are today with AI is that we’re seeing a lot of productivity from the deep learning space, and ultimately those are big math equations solved with lots of matrix math: we’re basically creating a big equation whose parameters are fitted to the set of data it was fed.
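As a minimal sketch of that point (purely illustrative: random weights and arbitrary shapes, not any production model), a forward pass through a many-layered network really is just repeated matrix multiplies with a nonlinearity in between; training is the process of fitting the W and b parameters to data.

```python
# Deep learning as matrix math: a three-layer forward pass in NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 784))        # e.g., one flattened 28x28 image

# Layer parameters; training would fit these to data.
W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 64)), np.zeros(64)
W3, b3 = rng.normal(size=(64, 10)), np.zeros(10)

h1 = np.maximum(0, x @ W1 + b1)      # matrix multiply + ReLU
h2 = np.maximum(0, h1 @ W2 + b2)     # another matrix multiply + ReLU
logits = h2 @ W3 + b3                # scores for 10 classes
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the scores
```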
So, would you say, though, that it is actually intelligent, or that it is emulating intelligence, or would you say there’s no difference between those two things?
Yeah, so I’m really quite pragmatic, as you just heard from me saying, “Okay, let’s go talk about the math that’s happening,” and right now where we’re at with AI is relatively narrow capabilities. AI is good at doing things like classification, or answering yes-or-no kinds of questions on data that it was fed. So in some sense it’s mimicking intelligence in that it is taking in the sort of sensory data a human takes in. What I mean by that is it can take in visual data or auditory data, and people are even working on other kinds of sensory data and things like that. Basically, a computer can now take in things that we would consider human-processed data, visual things and auditory things, and make determinations as to what it thinks they are, but it’s certainly far from something that’s actually thinking and reasoning and showing intelligence.
Well, staying squarely in the practical realm: that approach, which is basically “let’s look at the past and make guesses about the future,” what is the limit of what it can do? For instance, is that approach going to master natural language? Can you just feed a machine enough printed material and have it be able to converse? What are some things that model may not actually be able to do?
Yeah, you know, it’s interesting, because there’s a lot of debate: what are we doing today that’s different from analytics? We had the big data era, and we talked about doing analytics on the data. What’s new, what’s different, and why are we calling it AI now? To come at your question from that direction, one of the things that AI models do, be it anything from a deep learning model to something more in the knowledge-reasoning area, is that they’re much better interpolators: they’re much better able to predict on things they’ve never seen before.
Classical, rigid models that people programmed into computers could answer, “Oh, I’ve seen that thing before.” With deep learning and with more modern AI techniques, we are pushing forward into computers and models being able to guess on things they haven’t exactly seen before, so in that sense there’s a good amount of interpolation at work. Whether and how AI pushes into forecasting on things well outside the bounds of what it has seen before, and moving AI models to be effective on types of data very different from what they’ve seen before, is the type of advancement people are really pushing for at this point.
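A toy illustration of that distinction (an editor’s example, not from the interview): a model fit on a narrow range of inputs predicts well between its training points but can be wildly wrong far outside them.

```python
# Interpolation vs. extrapolation: fit a polynomial on x in [0, 5],
# then query it inside and far outside that range.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)

coeffs = np.polyfit(x_train, y_train, 5)   # degree-5 polynomial fit

inside = np.polyval(coeffs, 2.5)    # interpolation: close to sin(2.5)
outside = np.polyval(coeffs, 12.0)  # extrapolation: typically far off
print(f"interpolated {inside:.2f} vs true {np.sin(2.5):.2f}")
print(f"extrapolated {outside:.2f} vs true {np.sin(12.0):.2f}")
```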
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.