Voices in AI – Episode 66: A Conversation with Steve Ritter


About this Episode

Episode 66 of Voices in AI features host Byron Reese and Steve Ritter discussing the future of AGI and how AI will affect jobs, security, warfare, and privacy. Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show Steve.
Steve Ritter: Thanks a lot Byron, thanks for having me.
So tell me, what were you thinking way back in the ’80s when you said, “I’m going to study computers and brains”? What was going on in your teenage brain?
That’s a great question. So first off I started off with a Computer Science degree and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP etc., and at the same time the University of California, San Diego was opening up their very first department dedicated to cognitive science. So I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science into it as well, simply because I was just really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area that were applying it in many different ways, and I just found it fascinating, so I had to do it.
So, there’s human intelligence, or organic intelligence, or whatever you want to call it, there’s what we have, and then there’s artificial intelligence. In what ways are those things alike and in what ways are they not?
That’s a great question. I think it’s actually something that trips a lot of people up today when they hear about AI, and we might use the term artificial general intelligence, as opposed to artificial intelligence. So a big difference is, on one hand we’re studying the brain and we’re trying to understand how the brain is organized to solve problems, and from that derive architectures that we might use to solve other problems. It’s not necessarily the case that we’re trying to create a general intelligence or a consciousness; we’re just trying to learn new ways to solve problems. So I really like the concept of neural-inspired architectures, and that sort of thing. And that’s really the area that I’ve been focused on over the past 25 years: how can we apply these learning architectures to solve important business problems.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 65: A Conversation with Luciano Floridi


About this Episode

Episode 65 of Voices in AI features host Byron Reese and Luciano Floridi discussing ethics, information, AI and government monitoring. They also dig into Luciano’s new book “The Fourth Revolution” and ponder how technology will disrupt the job market in the days to come. Luciano Floridi holds multiple degrees, including a PhD in philosophy and logic from the University of Warwick. He is currently a professor of philosophy and ethics of information, as well as the director of the Digital Ethics Lab, at the University of Oxford. He is also the chair of the Data Ethics Group at the Alan Turing Institute.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today our guest is Luciano Floridi. He is a professor of philosophy and ethics of information, and the director of the Digital Ethics Lab, at the University of Oxford. In addition to that, he is the chair of the Data Ethics Group at the Alan Turing Institute. Among multiple degrees, he holds a Doctor of Philosophy in philosophy and logic from the University of Warwick. Welcome to the show, Luciano.
Luciano Floridi: Thank you for having me over.
I’d like to start with a simple question which is: what is intelligence, and by extension, what is artificial intelligence?
Well this is a great question, and I think one way of getting away with a decent answer is to try to understand what the lack of intelligence is, so that you recognize intelligence by spotting when it isn’t around.
So, imagine you are, say, nailing something on the wall and all of a sudden you hit your finger. Well, that was stupid, that was a lack of intelligence; it would have been intelligent not to do that. Or, imagine that you get all the way to the supermarket and you forgot your wallet, so you can’t buy anything. Well, that was also stupid; you would need intelligence to take your wallet. You can multiply that by, shall we say, a million cases, so there are a million cases in which you can be, or, just to be more personal, I can be stupid, and therefore I can be intelligent the other way around.
So intelligence is a way of, shall we say, sometimes, coping with the world in a way that is effective, successful, but it also can be so many other things. It would be intelligent not to talk to your friend about the wrong topic, because that’s not the right day. It is intelligent to make sure that at the party you organize, you don’t invite Mary and Peter, because they can’t stand each other.
The truth is that we don’t have a definition for intelligence, or vice versa, for the lack of it. But at this point, I can sort of recycle an old joke by one of the judges in the Supreme Court; I’m sure everyone listening to or reading this knows it very well. When asked for a definition of pornography, he said, “I don’t have one, but I recognize it when I see it.” I think that that sounds good—we know when we’re talking to someone intelligent on a particular topic, we know when we are doing something stupid in a particular circumstance, and I think that that’s the best that we can do.
Now, let me just add one last point, in case someone says, “Oh, well isn’t that funny that we don’t have a definition for such a fundamental concept?” No it isn’t. In fact, most of the fundamental concepts that we use, or experiences we have, don’t have a definition. Think about friendship, love, hate, politics, war, on and on. You start getting a sense of, okay, I know what we’re talking about, but this is not like water equal to H2O; it’s not like a triangle is a plane figure with three sides and three angles, because we’re not talking about simple objects that we can define in terms of necessary and sufficient conditions. We’re talking about having criteria to identify what it looks like to be intelligent, what it means to behave intelligently. So, if I really have to go out of my way and provide a definition: intelligence is nothing, everything is about behaving intelligently. So, let’s get an adverb instead of a noun.
I’m fine with that. I completely agree that we do have all these words: “life” doesn’t have a consensus definition, and “death” doesn’t have a consensus definition, and so forth, so I’m fine with leaving it in a gray area. That being said, I do think it’s fair to ask how big of a deal it is: is it a hard and difficult thing, with only a little bit of it around, or is it everywhere? If your definition is about coping with the world, then plants are highly intelligent, right? They will grow towards light, they’ll extend their roots towards water; they really cope with the world quite well. And if plants are intelligent, you’re setting a really low bar, which is fine, but I just want to kind of think about it: with a bar that low, intelligence permeates everything around us.
That’s true. I mean, you can even say, well look the way the river goes from that point to that point, and reaches the sea through the shortest possible path, well, that looks intelligent. I mean, remember that there was a past when we thought that precisely because of this reason, and many others, plants were some kinds of gods, and the river was a kind of god, that it was intelligent, purposeful, meaningful, goal-oriented, sort of activity there, and not simply a good adaptation, some mechanism, cause and effect. So what I wanted to detach here, so to speak, is our perception of what it looks like, and what it actually is.
Suppose I go back home, and I find that the dishes have been cleaned. Well, do I know whether the dishes have been cleaned by the dishwasher or by, say, my friend Mary? Well, looking at the dishes, I cannot. They’re all clean, so the output looks pretty much the same, but of course the two processes have been very different. One requires some intelligence on Mary’s side, otherwise she would break things, waste soap, and so on. And the other is a simple dishwashing machine, so zero intelligence as far as I’m concerned, of the kind that we’ve been discussing, you know, that goes back to the gray area, the pornography example, and so on.
I think what we can do here is to say, look, we’re really not quite sure about what intelligence means. It has a thousand different meanings we can apply to this and that, if you really want to be inclusive, even a river’s intelligence, why not? The truth is that when we talk about our intelligence, well then we have some kind of a meter, like a criteria to measure, and we can say, “Look, this thing is intelligent, because had it been done by a human being, it would have required intelligence.” So, they say, “Oh, that was a smart way of doing things,” for example, because had that been left to a human being, well I would have been forced to be pretty smart.
I mean, chess is a great example today: my iPhone is as idiotic as my grandmother’s fridge, you know, zero intelligence of the sort we’ve been discussing here, and yet it plays better chess than almost anyone I can possibly imagine. Meaning? Meaning that we have managed to detach the ability to pursue a particular goal, and to be successful in implementing a process, from the need to be intelligent. It doesn’t have to be intelligent to be successful.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Voices in AI – Episode 64: A Conversation with Eli David


About this Episode

Episode 64 of Voices in AI features host Byron Reese and Dr. Eli David discussing evolutionary computation, deep learning and neural networks, as well as AI’s role in improving cyber-security. Dr. David is the CTO and co-founder of Deep Instinct and has published multiple papers on deep learning and genetic algorithms in leading AI journals.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. And today, our guest is Dr. Eli David. He is the CTO and the co-founder of Deep Instinct. He’s an expert in the field of computational intelligence, specializing in deep learning and evolutionary computation. He’s published more than 30 papers in leading AI journals and conferences, mostly focusing on applications of deep learning and genetic algorithms in various real-world domains. Welcome to the show, Eli.
Eli David: Thank you very much. Great to be here.
So bring us up to date, or let everybody know, what do we mean by evolutionary computation, and deep learning, and neural networks? Because all three of those are things that, let’s just say, aren’t necessarily crystal clear in everybody’s minds. So let’s begin by defining your terms. Explain those three concepts to us.
Sure, definitely. Now, both neural networks and evolutionary computation take inspiration from intelligence in nature. If instead of trying to come up with smart mathematical ways of creating intelligence, we just look at nature to see how intelligence works there, we can reach two very obvious conclusions. First, consider the algorithm that is in charge of creating intelligence: we started from single-cell organisms billions of years ago, and now we are intelligent organisms, and the main algorithm, or maybe the only algorithm, in charge of that was evolution. So evolutionary computation takes inspiration from the evolutionary process in nature and tries to evolve computer programs so that, from one generation to the next, they become smarter and smarter, and the smarter they are, the more they breed, the more children they have, and so, hopefully, the smart genes improve from one generation to the next.
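The generational loop Eli describes, select the fittest, let them breed with mutation, repeat, can be sketched in a few lines of Python. This is a toy illustration (evolving a bit string toward all ones), not code from Deep Instinct:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TARGET = [1] * 20  # toy goal: evolve an all-ones bit string

def fitness(genome):
    # Fitness = number of bits matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def breed(a, b, mutation_rate=0.05):
    # One-point crossover followed by per-bit mutation
    point = random.randrange(len(a))
    child = a[:point] + b[point:]
    return [1 - g if random.random() < mutation_rate else g for g in child]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population gets to breed
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        population = [breed(random.choice(parents), random.choice(parents))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the maximum of 20
```

The parameter values (population 50, mutation rate 0.05) are arbitrary choices for the demo; real evolutionary computation tunes them per problem.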
The other thing that we notice when we observe nature is brains. Nearly all the intelligence in humans, other mammals, or the other intelligent animals is due to a network of neurons, which we refer to as a brain: many small processing units connected to each other via what we call synapses. In our brains, for example, we have many tens of billions of such neurons, each one, on average, connected to about ten thousand other neurons, and these small processing units connected to each other create the brain; they create all our intelligence. So the two fields of evolutionary computation and artificial neural networks, nowadays referred to as deep learning, and we will shortly dwell on the difference as well, take direct inspiration from nature.
Now, what is the difference between deep learning, deep neural networks, traditional neural networks, etc.? Neural networks are not a new field; already in the 1980s, we had most of the concepts that we have today. But the main difference is that during the past several years, we have had several major breakthroughs, while until then, we could train only shallow artificial neural networks: just a few layers of neurons, just a few thousand synapses, or connectors. A few years ago, we managed to make these neural networks deep, so instead of a few layers, we have many tens of layers; instead of a few thousand connectors, we now have hundreds of millions, or billions, of connectors. So instead of having shallow neural networks, nowadays we have deep neural networks, also known as deep learning. So deep learning and deep neural networks are synonyms.
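The jump from "a few thousand connectors" to "hundreds of millions" is easy to see by counting connections in fully connected networks. The layer sizes below are invented for illustration, not taken from any particular model:

```python
def count_synapses(layer_sizes):
    # Fully connected layers: every neuron links to every neuron in
    # the next layer, so synapses = product of adjacent layer sizes
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A 1980s-style shallow network: input, one hidden layer, output
shallow = [64, 32, 10]
# A deep network: same input and output, but 20 wide hidden layers
deep = [64] + [512] * 20 + [10]

print(count_synapses(shallow))  # 2368 connectors
print(count_synapses(deep))     # 5018624 connectors, about 5 million
```

Depth multiplies the connector count rapidly, which is one reason training deep networks only became practical with recent hardware and algorithmic advances.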
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Voices in AI – Episode 63: A Conversation with Hillery Hunter


About this Episode

Episode 63 of Voices in AI features host Byron Reese and Hillery Hunter discuss AI, deep learning, power efficiency, and understanding the complexity of what AI does with the data it is fed. Hillery Hunter is an IBM Fellow and holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, our guest is Hillery Hunter. She is an IBM Fellow, and she holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign. Welcome to the show, Hillery.
Hillery Hunter: Thank you, it’s such a pleasure to be here today; looking forward to this discussion, Byron.
So, I always like to start off with my Rorschach test question, which is: what is artificial intelligence, and why is it artificial?
You know that’s a great question. My background is in hardware and in systems and in the actual compute substrate for AI. So one of the things I like to do is sort of demystify what AI is. There are certainly a lot of definitions out there, but I like to take people to the math that’s actually happening in the background. So when we talk about AI today, especially in the popular press, and people talk about the things that AI is doing, be it understanding medical scans, or labelling people’s pictures on a social media platform, or understanding speech, or translating language, all those things that are considered core functions of AI today are actually deep learning, which means using many-layered neural networks to solve a problem.
There are also other parts of AI, though, that are much less discussed in the popular press, which include knowledge and reasoning and creativity and all these other aspects. And you know, the reality is, where we are today with AI, we’re seeing a lot of productivity from the deep learning space, and ultimately those are big math equations that are solved with lots of matrix math; we’re basically creating a big equation whose parameters are fitted to the set of data it was fed.
So, would you say though that it is actually intelligent, or that it is emulating intelligence, or would you say there’s no difference between those two things?
Yeah, so I’m really quite pragmatic as you just heard from me saying, “Okay, let’s go talk about what the math is that’s happening,” and right now where we’re at with AI is relatively narrow capabilities. AI is good at doing things like classification or answering yes and no kind of questions on data that it was fed and so in some sense, it’s mimicking intelligence in that it is taking in sort of human sensory data a computer can take in. What I mean by that is it can take in visual data or auditory data, people are even working on sensory data and things like that. But basically a computer can now take in things that we would consider sort of human process data, so visual things and auditory things, and make determinations as to what it thinks it is, but certainly far from something that’s actually thinking and reasoning and showing intelligence.
Well, staying squarely in the practical realm, that approach, which is basically, let’s look at the past and make guesses about the future, what is the limit of what that can do? I mean, is that approach going to master natural language, for instance? Can you just feed a machine enough printed material and have it be able to converse? What are some things that model may not actually be able to do?
Yeah, you know it’s interesting because there’s a lot of debate. What are we doing today that’s different from analytics? We had the big data era, and we talked about doing analytics on the data. What’s new and what’s different and why are we calling it AI now? To refer to your question from that direction, one of the things that AI models do, be it anything from a deep learning model to something that’s more in the knowledge reasoning area, is that they’re much better interpolators, they’re much better able to predict on things that they’ve never seen before.
Classical rigid models that people programmed into computers could answer, “Oh, I’ve seen that thing before.” With deep learning and with more modern AI techniques, we are pushing forward into computers and models being able to guess on things that they haven’t exactly seen before. And so in that sense there’s a good amount of interpolation in flux. Whether, and how, AI pushes into forecasting on things well outside the bounds of what it’s seen before, and moving AI models to be effective on types of data that are very different from what they’ve seen before, is the type of advancement that people are really pushing for at this point.
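Hillery’s distinction between interpolating near the training data and forecasting far outside it can be seen with a deliberately crude model: a straight line fitted to a curved relationship. The numbers here are a made-up toy, not any IBM workload:

```python
# Fit a straight line (least squares) to y = x^2 on x = 0..10, then
# compare the line's error inside the training range (interpolation)
# with its error far outside it (extrapolation).
xs = list(range(11))
ys = [x * x for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x  # fitted line: y = 10x - 15

def predict(x):
    return intercept + slope * x

print(abs(predict(5) - 5 ** 2))    # interpolation error at x=5: 10.0
print(abs(predict(20) - 20 ** 2))  # extrapolation error at x=20: 215.0
```

Inside the training range the fitted line is a tolerable guess; twice as far out, its error has blown up by more than an order of magnitude, which is the narrow sense in which today’s models interpolate better than they extrapolate.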
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Voices in AI – Episode 62: A Conversation with Atif Kureishy


About this Episode

Episode 62 of Voices in AI features host Byron Reese and Atif Kureishy discussing AI, deep learning, and their practical examples and implications in the business market and beyond. Atif Kureishy is the Global VP of Emerging Practices at Think Big, a Teradata company. He holds a B.S. in physics and math from the University of Maryland as well as an MS in distributed computing from Johns Hopkins University.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Atif Kureishy. He is the Global VP of Emerging Practices, which is AI and deep learning, at Think Big, a Teradata company. He holds a BS in Physics and Math from the University of Maryland, Baltimore County, and an MS in distributed computing from the Johns Hopkins University. Welcome to the show Atif.
Atif Kureishy: Welcome, thank you, appreciate it.
So I always like to start off by just asking you to define artificial intelligence.
Yeah, definitely an important definition, one that unfortunately is overused and stretched in many different ways. Here at Think Big we actually have a very specific definition within the enterprise. But before I give that, for me in particular, when I think of intelligence, that conjures up the ability to understand, the ability to reason, the ability to learn, and we usually equate that to biological systems, or living entities. Now, with the rise of what is probably more appropriately called machine intelligence, we’re applying the term ‘artificial’ to it, and the rationale is probably that machines aren’t living and they’re not biological systems.
So with that, the way we’ve defined AI in particular is: leveraging machine and deep learning to drive towards a specific business outcome. And it’s about giving leverage to human workers, to enable higher degrees of assistance and higher degrees of automation. And when we define AI in that way, we actually give it three characteristics. Those three characteristics are: the ability to sense and learn, and so that’s being able to understand massive amounts of data and demonstrate continuous learning, detecting patterns and signals within the noise, if you will. And the second is being able to reason and infer, and that is driving intuition and inference with increasing accuracy, again to maximize a business outcome or a business decision. And then ultimately it’s about deciding and acting, so actioning or automating a decision based on everything that’s understood, to drive towards more informed activities that are based on corporate intelligence. So that’s kind of how we view AI in particular.
Well I applaud you for having given it so much thought, and there’s a lot there to unpack. You talked about intelligence being about understanding and reasoning and learning, and that was even in your three areas. Do you believe machines can reason?
You know, over time, we’re going to start to apply algorithms and specific models to the concept of reasoning, and so the ability to understand, the ability to learn, are things that we’re going to express in mathematical terms no doubt. Does it give it human lifelike characteristics? That’s still something to be determined.
Well I don’t mean to be difficult with the definition because, as you point out, most people aren’t particularly rigorous when it comes to it. But if it’s to drive an outcome, take a cat food dish that refills itself when it’s low: it can sense, it can reason that it should put more food in, and then it can act, releasing a mechanism that refills the dish. Is that AI, in your understanding, and if not, why isn’t it AI?
Yeah, I mean I think in some sense it checks a lot of the boxes, but the reality is that being able to adapt and understand what’s occurring, for instance, if that cat is coming out during certain times of the day, ensuring that meals are prepared in the right way and that they don’t sit out and become stale or spoiled, those are signs of a more intelligent type of capability, one that is learning behaviors and anticipating how best to respond given the specific outcome it’s driving towards.
Got you. So now, to take that definition, your company is Think Big. What do you think big about? What is Think Big and what do you do?
So looking back in history a little bit, Think Big was actually an acquisition that Teradata had done several years ago, in the big data space, and particularly around open source and consulting. And over time, Teradata had made several acquisitions and now we’ve unified all of those various acquisitions into a unified group, called Think Big Analytics. And so what we’re particularly focused on is how do we drive business outcomes using advanced analytics and data science. And we do that through a blend of approaches and techniques and technology frankly.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Voices in AI – Episode 61: A Conversation with Dr. Louis Rosenberg


About this Episode

Episode 61 of Voices in AI features host Byron Reese and Dr. Louis Rosenberg talking about AI and swarm intelligence. Dr. Rosenberg is the CEO of Unanimous AI. He also holds a B.S., M.S., and a PhD in Engineering from Stanford.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese and today I’m excited that our guest is Louis Rosenberg. He is the CEO at Unanimous A.I. He holds a B.S. in Engineering, an M.S. in Engineering and a PhD in Engineering all from Stanford. Welcome to the show, Louis.
Dr. Louis Rosenberg: Yeah, thanks for having me.
So tell me a little bit about why do you have a company? Why are you CEO of a company called Unanimous A.I.? What is the unanimous aspect of it?
Sure. So, what we do at Unanimous A.I. is we use artificial intelligence to amplify the intelligence of groups rather than using A.I. to replace people. And so instead of replacing human intelligence, we are amplifying human intelligence by connecting people together using A.I. algorithms. So in layman’s terms, you would say we build hive minds. In scientific terms, we would say we build artificial swarm intelligence by connecting people together into systems.
What is swarm intelligence?
So swarm intelligence is a biological phenomenon that biologists have been studying since the 1950s. And it is basically the reason why birds flock and fish school and bees swarm: they are smarter together than they would be on their own. And the way they become smarter together is not the way people do it. They don’t take polls, they don’t conduct surveys; there’s no SurveyMonkey in nature. The way that groups of organisms get smarter together is by forming systems, real-time systems with feedback loops, so that they can essentially think together as an emergent intelligence that is smarter as a unified system than the individual participants would be on their own. And so the way I like to think of an artificial swarm intelligence, or a hive mind, is as a brain of brains. And that’s essentially what we focus on at Unanimous A.I.: figuring out how to do that among people, even though nature has figured out how to do that among birds and bees and fish, and has demonstrated over hundreds of millions of years how powerful it can be.
So before we talk about artificial swarm intelligence, let’s just spend a little time really trying to understand what it is that the animals are doing. So the thesis is, your average ant isn’t very smart and even the smartest ant isn’t very smart and yet collectively they exhibit behavior that’s quite intelligent. They can do all kinds of things and forage and do this and that, and build a home and protect themselves from a flood and all of that. So how does that happen?
Yeah, so it’s an amazing process, and it’s worth taking one little step back and just asking ourselves: how do we define the term intelligence? And then we can talk about how we can build a swarm intelligence. And so, in my mind, the word intelligence could be defined as a system that takes in noisy input about the world, processes that input, and uses it to make decisions, to have opinions, to solve problems, and, ideally, it does it creatively and by learning over time. And so if that’s intelligence, then there’s lots of ways we can think about building an artificial intelligence, which I would say is basically creating a system, involving technology, that does some or all of these things: takes in noisy input and uses it to make decisions, have opinions, solve problems, and does it creatively while learning over time.
Now, in nature, there have really been two paths by which nature has figured out how to do these things, how to create intelligence. One path is the path we’re very, very familiar with, which is by building up systems of neurons. And so, over hundreds of millions and billions of years, nature figured out that if you build these systems of neurons, which we call brains, you can take in information about the world and you can use it to make decisions and have opinions and solve problems and do it creatively and learn over time. But what nature has also shown is that in many organisms—particularly social organisms—once they’ve built that brain and they have an individual organism that can do this on its own, many social organisms then evolve the ability to connect the brains together into systems. So if a brain is a network of neurons where intelligence emerges, a swarm in nature is a network of brains that are connected deeply enough that a superintelligence emerges. And by superintelligence, we mean that the brain of brains is smarter together than those individual brains would be on their own. And as you described, it happens in ants, it happens in bees, it happens in birds and fish.
And let me talk about bees, because that happens to be the type of swarm intelligence that’s been studied the longest in nature. And so, if you think about the evolution of bees, they first developed their individual brains, which allowed them to process information, but at some point their brains could not get any larger, presumably because they fly; bees fly around, so their brains have to be very tiny to allow them to do that. In fact, a honeybee has a brain that has less than a million neurons in it, and it’s smaller than a grain of sand. And I know a million neurons sounds like a lot, but a human has 85 billion neurons. So however smart you are, divide that by 85,000 and that’s a honeybee. So a single honeybee is a very, very simple organism, and yet they have very difficult problems that they need to solve, just like humans have difficult problems.
And so the type of problem that is actually studied the most in honeybees is picking a new home to move into. And by new home, I mean, you have a colony of 10,000 bees, and every year they need to find a new home because they’ve outgrown their previous home, and that home could be a hole in a hollow log, it could be a hole in the side of a building, it could be a hole, if you’re unlucky, in your garage, which happened to me. And so a swarm of bees is going to need to find a new home to move into. And, again, it sounds like a pretty simple decision, but actually it’s a life-or-death decision for honeybees. And so, over the evolution of bees, the better the decision they can make when picking a new home, the better the survival of their species. And so, to solve this problem, what colonies of honeybees do is they form a hive mind, or a swarm intelligence, and the first step is that they need to collect information about their world. So they send hundreds of scout bees out into the world to search 30 square miles to find potential sites, candidate sites that they can move into. So that’s data collection. Once those scouts have searched for different potential homes, they bring that information back to the colony, and now they have the difficult part of it: they need to make a decision; they need to pick the best possible site of the dozens of possible sites that they have discovered. Now, again, this sounds simple, but honeybees are very discriminating house-hunters. They need to find a new home that satisfies a whole bunch of competing constraints. That new home has to be large enough to store the honey they need for the winter. It needs to be ventilated well enough so they can keep it cool in the summer. It needs to be insulated well enough so it can stay warm on cold nights. It needs to be protected from the rain, but also near good sources of water.
And also, of course, it needs to be well-located, near good sources of pollen.
And so it’s a complex multi-variable problem. This is a problem that a single honeybee, with a brain smaller than a grain of sand, could not possibly solve. In fact, a human looking at that data would find it very difficult to use a human brain to find the best possible solution to this multi-variable optimization problem. And a human faced with a similar challenge, like finding the perfect location for a new factory, the perfect features of a new product, or the perfect location for a new store, would find it very difficult to arrive at a perfect solution. And yet, rigorous studies by biologists have shown that honeybees pick the best solution from all the available options about 80% of the time. And when they don’t pick the best possible solution, they pick the next best possible solution. And so it’s remarkable: by working together as a swarm intelligence, they are able to make a decision that is optimized in a way that a human brain, which is 85,000 times more powerful, would struggle to match.
And so how do they do this? Well, they form a real-time system where they can process the data together and converge on the optimal solution. Now, they’re honeybees, so how do they process the data? Well, nature came up with an amazing way: they do it by vibrating their bodies. Biologists call this a “waggle dance” because when people first started looking into hives, they saw bees vibrating their bodies in a way that looked like dancing, but really they were generating vibrations, signals that represent their support for the various home sites under consideration. By having hundreds and hundreds of bees vibrating their bodies at the same time, they’re basically engaging in a multi-directional tug of war, pushing and pulling on a decision, exploring all the different options until they converge together in real time on the one solution that they can best agree upon, and it’s almost always the optimal solution. And when it’s not the optimal solution, it’s the next best solution. So basically they’re forming this real-time system, this brain of brains, that can converge on an optimal solution and solve problems that they couldn’t solve on their own. That’s the most well-known example of what a swarm intelligence is. We see it in honeybees, but we also see the same process in flocks of birds and schools of fish, allowing them to be smarter together than alone.
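The house-hunting dynamics described above (quality-weighted dancing, recruitment, and convergence toward a quorum) can be sketched as a toy simulation. Everything concrete below, including the candidate sites, their quality scores, and the recruitment rule, is invented for illustration rather than taken from the biology:

```python
import random

# Each candidate site scored on competing constraints (invented numbers).
SITES = {
    "hollow_log":  {"volume": 0.9, "insulation": 0.5, "dryness": 0.6, "forage": 0.7},
    "wall_cavity": {"volume": 0.8, "insulation": 0.9, "dryness": 0.9, "forage": 0.7},
    "garage_hole": {"volume": 0.5, "insulation": 0.4, "dryness": 0.8, "forage": 0.5},
}

def site_quality(site):
    # Combine the competing constraints into one score (equal weights here).
    return sum(SITES[site].values()) / len(SITES[site])

def simulate_swarm(n_scouts=100, rounds=50, quorum=0.8, seed=0):
    """Toy 'waggle dance' dynamics: each round, some scouts abandon
    their site and follow a dance instead. A site's total advertising
    is (supporters x quality), so better sites recruit faster and a
    positive-feedback tug of war converges on one choice."""
    rng = random.Random(seed)
    sites = list(SITES)
    commitments = [rng.choice(sites) for _ in range(n_scouts)]
    for _ in range(rounds):
        # Total dance signal per site this round.
        support = {s: commitments.count(s) * site_quality(s) for s in sites}
        total = sum(support.values())
        for i in range(n_scouts):
            if rng.random() < 0.2:  # 20% of scouts re-decide each round
                r = rng.random() * total
                for s in sites:  # pick a site proportional to its signal
                    r -= support[s]
                    if r <= 0:
                        commitments[i] = s
                        break
        leader = max(sites, key=commitments.count)
        if commitments.count(leader) >= quorum * n_scouts:
            return leader  # quorum reached: the swarm has decided
    return max(sites, key=commitments.count)
```

With these invented numbers the swarm converges on the highest-quality site even though no individual scout ever compares all the options, which is the point of the waggle-dance tug of war.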
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 60: A Conversation with Robin Hanson

[voices_in_ai_byline]

About this Episode

Episode 60 of Voices in AI features host Byron Reese and Robin Hanson talking about AI and the “Age of Ems,” brain emulations. Robin Hanson is an author, research associate at the Future of Humanity Institute of Oxford University, the Chief Scientist at Consensus Point, and an associate professor of Economics at George Mason University.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today my guest is Robin Hanson. He is an author, and he is also the Chief Scientist over at Consensus Point. He’s an associate professor of Economics at George Mason University. He holds a BS in Physics, an MS in Physics, and he’s got an MA in conceptual foundations of science from the University of Chicago, he’s got a PhD in Social Science from Caltech, and I’m sure there are other ones as well. Welcome to the show Robin.
Robin Hanson: It’s great to be here.
I’m really fascinated by your books. Let’s start there. Tell me about the new book, what is it called?
My latest book is co-authored with Kevin Simler, and it’s called “The Elephant in the Brain: Hidden Motives in Everyday Life,” and that subtitle is the key. We are just wrong about why we do lots of things. For most everything we do, we have a story. If I were to stop you at any one moment and ask you, “Why are you doing that?” you’ll almost always have a story, and you’ll be pretty confident about it, but you don’t realize that it’s just wrong a lot. Your stories about why you do things are not that accurate.
So is it the case that we do everything, essentially unconsciously, and then the conscious mind follows along behind it and tries to rationalize, “Oh, I did that because of ‘blank,'” and then the brain fools us by switching the order of those two things, is that kind of what you’re getting at?
That’s part of it, yes. Your conscious mind is not the king or president of your mind, it’s the secretary. It’s the creepy guy who stands behind the king saying, “a judicious choice, sir.” Its job isn’t to know why you do things or to make decisions; its job is to make up good explanations for them.
And there’s some really interesting research that bears that out, with split-brain patients and the like. How do we know that about the brain? Tell me a little bit about that.
Well, we know that in many circumstances when people don’t actually know why they do things, they still make up confident explanations. So, we know that you’re just the sort of creature who will always have a confident story about why you do things, even when you’re wrong. Now that by itself doesn’t say that you’re wrong, it just says that you might well be wrong. In order to show that you are wrong a lot in specific situations, there’s really no substitute for looking at the things you do, and trying to come up with a theory about why you do them. And that’s what most of our book is about.
So the first third of the book reviews all the literature we have on why people might plausibly not be aware of their motives; why it might make sense for evolution to create a creature who isn’t aware, who wants to make up another story. But we really can’t convince you that you are wrong in detail unless we go to specific things. So that’s why the last two thirds of the book go over 10 particular areas of life, and then [for] each area of life it says, “Here is your standard story about why you do things, and here are all these details of people’s behavior that just don’t make much sense from the usual story’s point of view.”
And then we say: “Here’s another theory that makes a lot more sense in the details, that’s a better story about why you do things.” And isn’t it interesting that you’re not aware of that, you’re not saying that’s why you’re doing things, you’re doing the other thing.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 

Voices in AI – Episode 59: A Conversation with Tiger Tyagarajan

[voices_in_ai_byline]

About this Episode

Episode 59 of Voices in AI features host Byron Reese and Tiger Tyagarajan talking about AI, augmented intelligence, and its use in the enterprise. Tiger Tyagarajan is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m so excited my guest is Tiger Tyagarajan, he is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA. Welcome to the show Tiger.
Tiger Tyagarajan: Byron, great to be on the show, thank you.
So let’s start, tell me about GenPact, what your mission is and how it came about.
Our mission continues to be, Byron, to work with global enterprises in a variety of industries, to actually help them become more competitive in the markets they are in. We do that by actually helping them undertake change agendas—transformation agendas to drive value for them—either by helping them drive growth or better pricing, or better risk management or lower fraud, better working capital, better cash flow etc.
Our history goes back to when we were set up as an enterprise and 100% subsidiary of the General Electric company (GE) in the late 90s. Then in 2005, seven years into our existence, we spun off into a separate company so that we could serve other clients. Today we are about $3 billion in revenue, serving 700 clients across the globe. GE continues to be a big relationship of ours, but accounts for less than 10% of our revenue, with everyone else accounting for the balance of more than 90%.
And tell me, you’re using artificial intelligence to achieve that mission in some cases. Can you talk about that, like what you’re doing?
So Byron, early days, I would say about 5+ years back, we came to the conclusion that digital is going to pretty dramatically change the way work gets done along many dimensions. We picked 12 different digital technologies to actually bring into the company, build capabilities, and change the way a lot of our services get delivered, and a lot of the way work gets done by our clients, and one of them we picked was artificial intelligence. Within the family of AI, we picked computer vision, we picked computational linguistics, we picked machine learning, three examples that are very relevant to the kind of services we offer. We’ve gone down the path of building those capabilities, acquiring those capabilities, partnering with other companies in the ecosystem on these capabilities, so that we can change the way work gets done and services will get delivered, in, I would say, a dramatic fashion that I would suspect some of us could not have imagined.
Well, don’t just leave it there, give me an example of something dramatic that’s happened.
I’ll give you a couple. Some of the clients that we deal with are banks, so think about a bank that is in the business of small and medium business lending: half-million-dollar leases or loans for equipment to a mid-market company that is actually manufacturing a product somewhere in Ohio, etc. And the way the small business lending world works is that the customer gives the salesperson a bunch of documents: financial statements of the company, cash flows of the company, etc. A lot of those documents are produced by these companies in their own way; they are audited by a small audit firm somewhere in the vicinity, and therefore they are written up in different ways, with different accounting standards and so on.
Now when a bank receives it, typically they would have to change it to actually match their understanding of cash flow the way they define it. They have to recast all the numbers, they have to read the footnotes, and then after a few days, they have 5 questions to ask, so they go back to the customer, ask those questions, and finally [it] takes about 15 days, 20 days in some cases to say, “hey customer, I’ve given an approval for half a million dollars, go buy our equipment.”
Now, in today’s world, that is way too long. Now if you bring in a combination of being able to read those documents, read unstructured data, read the language in the footnotes, and interpret it using computational linguistics that then converts it into a specific standard financial statement in the way that particular bank understands financial statements, the way their definitions work… you could actually argue that the bank could take a decision in 30 minutes.
So think about the ability to tell a customer that your application for a loan to buy your equipment is approved in 30 minutes versus 3 weeks. I mean that makes a huge difference to the small/medium enterprise, that makes a huge difference to their business, their ability to grow, and if you think about the U.S. and if you think about small/medium enterprises in the U.S., that is the backbone of this economy, we’re beginning to see the use of this in a number of banking relationships.
I would say it’s still early days, and I would say it could make a huge difference to the top line of the banks, to the pricing power of the banks, and to the ability to satisfy your customer dramatically. I think that is a great example of how the service changes, versus a human being spending a lot of their time actually parsing the data before they take a decision. In the end the decision, by the way, is still taken by the human being, who brings their expertise, which is why, when we think about AI, it’s always a combination of man + machine.
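The “recasting” step Tiger describes, turning statements written to different accounting conventions into the bank’s own standard definitions, can be sketched very loosely. The category names, label aliases, and mapping below are all invented for illustration; a production system would use computational linguistics on unstructured text rather than a lookup table:

```python
# Loose sketch of the recasting step: line items extracted from a
# borrower's statement (labels vary by audit firm) are normalized
# into the bank's standard categories. All labels are invented.
BANK_CATEGORIES = {
    "revenue":        {"revenue", "sales", "net sales", "turnover"},
    "operating_cost": {"operating expenses", "opex", "cost of operations"},
    "net_cash_flow":  {"net cash flow", "cash generated", "net change in cash"},
}

def recast(extracted_items):
    """Map {raw_label: amount} pairs onto the bank's categories,
    flagging anything unrecognized for human review."""
    standard, unmatched = {}, {}
    for raw_label, amount in extracted_items.items():
        key = raw_label.strip().lower()
        for category, aliases in BANK_CATEGORIES.items():
            if key in aliases:
                standard[category] = standard.get(category, 0.0) + amount
                break
        else:
            unmatched[raw_label] = amount  # goes back to the analyst
    return standard, unmatched
```

The point of the `unmatched` bucket is the “man + machine” combination above: the system normalizes what it recognizes, and the human expert handles the rest.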
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 

Voices in AI – Episode 57: A Conversation with Akshay Sabhikhi

[voices_in_ai_byline]

About this Episode

Episode 57 of Voices in AI features host Byron Reese and Akshay Sabhikhi talking about how AI augments and informs human intelligence. Akshay Sabhikhi is the CEO and Co-founder of CognitiveScale. He’s got more than 18 years of entrepreneurial leadership, product development and management experience with growth stage venture backed companies and high growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Akshay Sabhikhi. He is the CEO and Co-founder of CognitiveScale. He’s got more than 18 years of entrepreneurial leadership, product development and management experience with growth stage venture backed companies and high growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship. Welcome to the show, Akshay.
Akshay Sabhikhi: Thank you Byron, great to be here.
Why is artificial intelligence working so well now? I mean like, my gosh, what has changed in the last 5-10 years?
You know, everyone knows artificial intelligence has been around for decades, but the big difference this time, as I like to say, is that there’s a whole supporting cast of characters that’s making AI really come into its own. And it all starts with the fact that it’s delivering real value to clients, so let’s dig into that.
Firstly, data is fuel for AI, and we all know about the amount of information we’re surrounded with; we certainly hear about big data all over the place. It’s the amount and the volume of the information, but it’s also systems that are able to interpret that information. The type of information I’m talking about is not just your classic databases with nicely packaged, structured information; it is highly unstructured and messy information that includes, you know, audio, video, certainly different formats of text, images, right? And our ability to really bring that data in and reason over that data is a huge difference.
A second big supporting cast member here is the prominence of social, and I say social because this is the amount of data that’s available through social media, where we can see in real time how consumers behave. Then there’s mobile, and the fact that you have devices now in the hands of every consumer, so you have touch points where insights can be pushed out. Those are the different supporting cast members that are now there, which didn’t exist before, and that’s one of the biggest changes behind the prominence and the true sort of value people are seeing with AI.
And so give us some examples, I mean you’re at the forefront of this with CognitiveScale. What are some of the things that you see that are working that wouldn’t have worked 5-10 years ago?
Well, so let’s take some examples. We use an analogy: we’ve all sort of used Waze as an application to get from point A to point B, right? When you look at Waze, it’s a great consumer tool that tells you exactly what’s ahead of you: a cop, traffic, debris on the road and so on, and it guides you through your journey, right? Well, imagine applying a Waze-like analogy to the enterprise, where you have a patient, and I’ll use a patient as an example because that’s how we started the company. You’re largely unmanaged: all you do is show up to your appointments, you get prescriptions, you’re told about your health condition, but then once you leave that appointment, you’re pretty much on your own, right? But think about everything that’s happening around you. Think about social determinants, for example: the city you live in, whether you live in the suburbs or downtown, the weather patterns, the air quality, such as the pollen counts, the allergens that affect you, or the specific zip code within the city that tells us about the food choices that exist around you.
There are a lot of determinants that go well beyond the purely structured information that comes from an electronic medical record. If you bring all of those pieces of data together, an AI system is able to look at that information in the context of the consumer, in this case the patient, which is the biggest difference here, and surface unique insights to them, but it doesn’t stop right there. What an AI system does is take it a step or two further by saying, “I’m going to push insights based on what I’ve learned from the data that surrounds you, and hopefully they make sense to you. And I will give you the mechanisms to provide a thumbs up/thumbs down or specific feedback that I can then incorporate back into the system to learn from it.” So that’s a real-life example of an AI system that we’ve stood up for many of our clients, using various kinds of structured and unstructured information brought together.
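The feedback loop described here, where thumbs-up/thumbs-down signals are folded back into the system, can be sketched minimally. The insight types and the update rule below are invented for illustration; a real system would use richer models, but the shape of the loop is the same:

```python
# Minimal sketch of the feedback loop: each insight type keeps a
# running relevance estimate, nudged by thumbs-up / thumbs-down
# signals. Insight names and the update rule are invented.
class InsightFeedback:
    def __init__(self, insight_types, prior=0.5, lr=0.1):
        self.scores = {t: prior for t in insight_types}
        self.lr = lr  # how strongly one piece of feedback moves the score

    def record(self, insight_type, thumbs_up):
        # Move the score a fraction of the way toward 1 (up) or 0 (down).
        target = 1.0 if thumbs_up else 0.0
        s = self.scores[insight_type]
        self.scores[insight_type] = s + self.lr * (target - s)

    def ranked(self):
        # Surface the insight types the user has found most useful first.
        return sorted(self.scores, key=self.scores.get, reverse=True)
```

For example, a patient who upvotes air-quality alerts and downvotes diet tips would see air-quality insights ranked first on the next visit, which is the “learn from it” step in the quote above.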
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 

Voices in AI – Episode 56: A Conversation with Babak Hodjat

[voices_in_ai_byline]

About this Episode

Episode 56 of Voices in AI features host Byron Reese and Babak Hodjat talking about genetic algorithms, cyber agriculture, and sentience. Babak Hodjat is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Babak Hodjat, he is the founder and CEO of Sentient Technologies. He holds a PhD in the study of machine intelligence. Welcome to the show, Babak.
Babak Hodjat: Great to be here, thank you.
Let’s start off with my normal intro question, which is, what is artificial intelligence?
Yes, what a question. Well we know what artificial is, I think mainly the crux of this question is, “What is intelligence?”
Well actually no, there are two different senses in which it’s artificial. One is that it’s not really intelligence, it’s like artificial turf isn’t really grass, that it just looks like intelligence, but it’s not really. And the other one is, oh no it’s really intelligent it just happens to be something we made.
Yeah, it’s the latter definition that I think is the consensus. I’m saying this partly because there was a movement to call it machine intelligence, and there were other names for it as well, but with artificial intelligence, certainly the emphasis is on the fact that, as humans, we’ve been able to construct something that gives us a sense of intelligence. The main question then is, “What is this thing called intelligence?” And depending on how you answer that question, actual manifestations of AI have differed through the years.
There was a period in which AI was considered: If it tricks you into believing that it is intelligent, then it’s intelligent. So, if that’s the definition, then everything is fair game. You can cram this system with a whole bunch of rules, and back then we called them expert systems, and when you interact with these rule sets that are quite rigid, it might give you a sense of intelligence.
Then there was a movement around actually building intelligent systems through machine learning, mimicking how nature creates intelligence. Neural networks, genetic algorithms, and reinforcement learning in its early form were some of the approaches, amongst many others that were proposed and suggested, but they would not scale. So the problem there was that they did actually show some very interesting properties of intelligence, namely learning, but they didn’t quite scale, for a number of different reasons: partly because we didn’t quite have the algorithms down yet, and also the algorithms could not make use of scalable compute, and compute and memory storage were expensive.
Then we switched to a redefinition in which we said, “Well, intelligence is about these smaller problem areas,” and that was the mid-to-late ’90s, where there was more interest in agenthood and agent-based systems, agent-oriented systems where the agent was tasked with a simplified environment to solve. And intelligence was extracted into: if we were tasked with a reduced set of tools to interact with the world, and our world were much simpler than it is right now, how would we operate? That would be the definition of intelligence, and those are agent-based systems.
We’ve kind of swung back to machine learning based systems, partly because there have been some breakthroughs in the past, I would say 10-15 years, in neural networks in learning how to scale this technology, and an awesome rebranding of neural networks—calling them deep learning—the field has flourished on the back of that. Of course it doesn’t hurt that we have cheap compute and storage and lots and lots of data to feed these systems.
You know, one of the earlier things you said is that we try to mimic how nature creates intelligence, and you listed three examples: neural nets, genetic algorithms (how we evolve things), and reinforcement learning. I would probably agree with evolutionary algorithms, but do you really think… I’ve always thought neural nets, like you said, don’t really act like neurons. It’s a convenient metaphor I guess, but do you really consider neural nets to be derived from biology, or is it just an analogy from biology?
Well, it was very much inspired by biology, very much so. I mean, the models we had of how we thought neurons, the synapses between neurons, and the chemistry of the brain operate fueled this field, absolutely. But these are very simplified versions of what the brain actually does, and every day there’s more learning about how brain cells operate. I was just reading an article yesterday about how RNA can capture memory, and how the basal ganglia also have a learning type of function—it’s not just the pre-frontal cortex. There’s a lot of complexity and depth in how the brain operates that is completely lost when you simplify it. So absolutely we were inspired, definitely, but this is not a model of the brain by any stretch of the imagination.
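To make the simplification concrete: the “neuron” in a standard neural network reduces to a weighted sum passed through a squashing function, with all of the neurochemistry Babak mentions abstracted away. The weights and inputs in the example below are arbitrary illustration values:

```python
import math

# The entire "neuron" of a standard neural network: a weighted sum of
# inputs plus a bias, squashed by a sigmoid into a "firing rate" in
# (0, 1). Compare this to real neurons with RNA-based memory, basal
# ganglia learning, and neurochemistry, none of which appear here.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid nonlinearity
```

Stacking many of these units in layers gives a deep network, but each unit stays this simple, which is why it is an inspiration from biology rather than a model of the brain.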
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com 
[voices_in_ai_link_back]
 