Voices in AI – Episode 15: A Conversation with Daniel H. Wilson

In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse, and its sequel, Robogenesis, as well as other books, including How to Survive a Robot Uprising, A Boy and His Bot, and Amped. He earned a PhD in Robotics from Carnegie Mellon University, and a master's degree in AI and robotics, as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot of stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written somewhere. Someone reported that they had visited, and that was true. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bobo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? Think about Galatea, and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel is that—it’s really fascinating to me—our concept of the scientific method, and the idea of the world as a machine, and that we can pick up the pieces and build new things. And we can figure out underlying physical principles, and things like that. That’s a relatively new viewpoint, which human beings haven’t really had for that long.
Looking at automatons, I saw that there’s this sort of pattern, in that the longer we build these things, they really are living embodiments of the world as the machine, right? If you start to look at the automatons being built during the Middle Ages, the medieval times, and then up through to the beginning of the Industrial Revolution, you see that people like Descartes, and philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works, and the way that science works—they were inspired by automatons, because they showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.'” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about, specifically, medieval times is the notions of how this stuff works were very set down, but they were also very magical. There were different types of magic, that’s what I really loved in my research. Finding that whenever you see something like an aqueduct functioning, they would think of that as a natural kind of magic, whereas if you had some geometry, or pure math, they would think of that as a celestial type of magic.
But underneath all of it there were always angels or demons, and always there were suspicions of a necromantic art, that this lifelike thing is animated by a spirit of the dead. There's so much magic and mystery that was laced into science at the time that I think it really hindered the ability to develop iterative scientific advancements.
So picking up on that a bit, late eighteenth century, you’ve got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or a lifelike automaton? Yeah, I really can't. I can think of lots of examples of stuff like the Golem, or something like that, and they are all kind of created by magic, or by deities. I'm trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, they were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word “robot.”
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the ’20s in which, you know, a scientist creates a robot, and a race of robots. And of course, what do they do, they rise up and overthrow humanity and they kill every single one of us. It’s attributed as being the place where the term “robot” was coined, and yeah, it plays out in the way that a lot of the stories about robots have played out, ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot a fully-formed image in the way we think of them now? What would have been different about that view of robots?
Well, no. Those robots, they just looked like people, but I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal with lights and things on them—that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
The reason why I ask you if you thought they were fully modern: let me just read you this quote from the play, and tell me what it sounds like to you. This is Harry Domin, he’s one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play—we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything.
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now—”The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately, honestly. I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We're getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter are choosing what they think would be most interesting to us, and I think this is very similar to what they're describing in R.U.R.: this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the “candy age” of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don't know how to get around in the cities where we live, because we depend on a mapping algorithm. We don't know the viewpoints that our neighbors have, because we've never actually read an article that doesn't tell us exactly what our worldview already is; there are a million examples. Talking to Alexa, I don't have to say "please" or "thank you." I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E leaning back in floating chairs, doing nothing and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, the quote that I read though, he said that “everybody will spend their time perfecting themselves.” And I assume you’ve seen Star Trek before?
Sure, yes.
There’s an episode where the Enterprise thaws some people out from the twentieth century, and one of the guys—his name is Offenhouse—he’s talking about what’s the challenge in a world where there are no material needs or hunger, and all of that? And Picard said, the challenge is to become a better person, and make the most of it. So that’s also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built that—oh God, and mine’s talking to me right now because I keep saying her name—is this AI that sits in your house and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s not going to be mutually exclusive—where the machines take over everything and you are free to be by yourself—because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves.
I think another example you can find easily is just looking at athletes. You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it for just a minute. Just to set this up for the listener, in the 1960s, there was a man named Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess we would think of it now—and you would say something like, “I’m having a bad day,” and it would say, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it would say, “What about your boyfriend is making you have a bad day?”
It’s really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their heart out to it, they would cry. And he turned on AI, as it were. He deleted ELIZA and said, when the computer says, “I understand,” it’s just a lie, because there’s no “I” and no understanding.
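As a concrete illustration of what "a few linguistic rules" can mean, here is a minimal Python sketch of an ELIZA-style responder. The patterns and reflections below are illustrative stand-ins, not Weizenbaum's original DOCTOR script, and the sample dialogue is hypothetical.

```python
# Minimal ELIZA-style responder: match a pattern, then echo part of the
# user's sentence back with first- and second-person words swapped.
import re

# Swap pronouns so the echoed fragment reads naturally from the bot's side.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "mine": "yours"}

# Ordered (pattern, response template) pairs; illustrative rules only.
RULES = [
    (r"i am having (.*) because of (.*)", "What about {1} is making you have {0}?"),
    (r"i am having (.*)", "Why are you having {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)", "Please, tell me more."),
]

def reflect(fragment: str) -> str:
    # Replace each word that appears in the reflection table.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?").replace("i'm", "i am")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, tell me more."

print(respond("I'm having a bad day"))
# -> Why are you having a bad day?
print(respond("I'm having a bad day because of my boyfriend"))
# -> What about your boyfriend is making you have a bad day?
```

Rules this shallow carry no understanding at all, which was exactly Weizenbaum's point: the attachment people felt came from them, not from the program.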
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology. Every minute of our lives has got technology. At that time, you could say, "Oh, let's erase the program, this is sick, this is messed up." Well guess what, man, that's not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex, and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That's kind of a new thing, right? We didn't use to have to worry about consumer products—are people going to fall in love with a toaster, are people going to get upset when the toaster goes kaput, are people going to curse at the toasters and become worse versions of themselves? That wasn't an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it's a fascinating problem, and I think it's something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice; one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—he looks this woman in the eyes, and he says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, "You're the most beautiful woman in the world," but his opinion doesn't mean anything, right? But then you think about it for another second, and you realize, he means it. He means that with every fiber of his being, and there's probably no human alive who could look at that exact woman, at that exact moment, and say, "You're the most beautiful woman alive," and really mean it. So, there's value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question. Do you remember that original Twilight Zone episode about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. It’s the part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you,” but I’m not interested in replacing human relationships that I have.
I don’t know how many friends you have, but I have a couple of really good friends. That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. Having written novels and comic books and screenplays and the occasional videogame, I can’t wait to interact with these types of agents in a storytelling setting, where the game, the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back. Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent—man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines? And by asking you the question, I am kind of implicitly asking you to define “life.”
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in—will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive—which would probably mean that they need rights, and things like that—then I think the proof is just in the comparison.
I’m making the assumption that every other human is conscious. I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don't have a definition of "life" that everyone agrees on, period. So, throwing robots into that milieu is just… I don't know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious? And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of “alive.”
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So from a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave, what’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else, but there’s a baby on the left, and an elderly person on the right, what are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider, and we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My feeling is that I hope that by the time we get to a point where we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here. And hopefully, we will have already figured out exactly how we do want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what “aliveness” is.
The reason I ask the question… Up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, "That dog really looks like it's hurting." Therefore, we would be intensely curious to know whether it is. And of course we call that sentience, the ability to sense something, generally pain, and we base our laws on it.
Human rights arrived, in part, because we are sentient. And animal cruelty law arrived because animals are sentient. And yet, we don't get in trouble for using antibiotics on bacteria because they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, "Ouch, that hurt." And either it didn't, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically” is not necessarily what I’m asking—I’m asking how will we know when that moment changes? It sounds like you are saying, we should just assume, if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there are going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to. I wouldn’t consider the CPU as feeling pain. My point is, every iteration that we have, until we reach that perfectly lifelike human robot that’s standing in front of us and saying, “You hurt my feelings, you should apologize,” is that the emotions that these things exhibit are only meaningful insomuch as they affect the human beings that are around them.
So I’m saying, to program a machine that says, “Ouch you hurt my feelings, apologize to me,” is very important, as long as it looks like a person. And there is some probability that by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then, towards the end of the call I figure out that this isn’t even a person; I will have felt tricked, and like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly the Tesla, we don’t care what decision it makes. We’re not worried about the impact on the Tesla. The Tesla would obviously kill a dog. If it was a human being who had a choice to kill a robot dog or a real dog, we would obviously choose the robot dog, because it would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum's unease—not necessarily to the same extent—but some unease that if we start blurring the lines between what's human and what's not, that doesn't necessarily ennoble the machine. It may actually be to our own detriment. We've had to go through thousands of years of civilization to get something we call human rights, and we have them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think it diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what the near-future holds. So, what does the near future hold?
Well, yeah, that's kind of what I was ranting about before. Exactly what you were saying; I really agree with you strongly that these interactions, and what happens with us and our machines, put a lot of power in the hands of the people who make this technology. Like this dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that's really good for corporations. That's not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But if once we start understanding what good interactions look like… maybe as consumers, we can force these people to make these products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone’s always looking for something totally new, some sort of crazy app that’s going to come out of nowhere and blow our minds. It’s highly doubtful that anything is going to happen within the next five years, because science is incredibly iterative. Where you often see real breakthroughs is not some atomic thing being created completely new, that blows everybody away… But often, when you get connections between two things that already exist, and then you suddenly realize, “Oh wow! Peanut butter and jelly! Here we go, it’s a whole new world!”
This Alexa thing, the smart assistants that are now physically manifesting themselves in our homes, in the places where we spend most of our time socially—in our kitchens, in my office, where I’m at right now—they have established a beachhead in our homes now.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads, slowly, and they add more ability to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to become a dialogue… I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked about what’s the best present to buy for my ten-year-old daughter with Alexa, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, and take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this called “Iterations.” I joined the XPRIZE Science Fiction Advisory Council, and they’re really focused on optimistic futures. They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, Utopias… Let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no…if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not any first-hand knowledge—but would you have any speculation as to why that would be the case? I mean, I think Alexa, it's got a hard "x" and it's a reference to the Library of Alexandria.
Yeah, that’s interesting. Well, also you literally want to choose a series of phonemes that are not high frequency, because you don’t want to constantly be waking the thing up. What’s also interesting about Alexa, is that it’s a “le” sound, which is difficult for children to make, so kids can’t actually use Alexa—I know this from extreme experience. Most of them can’t say “Alexa,” they say “Awexa” when they’re little, and so she doesn’t respond to little kids, which is crucial because little kids are the worst, and they’re always telling her to play these stupid songs that I don’t want to hear.
Can’t you change the trigger word, actually?
I think you can, but I think you’re pretty limited. I think you can change it to Echo.
Right.
I’m not sure why exactly Google would make that decision—I’m sure that it was a serious decision. It’s not the decision that every other company made—but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an “it.”
If I was defining “it” for a dictionary or something, I would obviously define the entity Alexa as an “it,” but she’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact, that’s what we have been evolved to do. So I am more than happy to bend to her wishes and utilize my interaction with her as naturally as I can, because she’s clearly trying to present herself as a female voice, living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here, you talked about optimism, and about different ways the future may unfold, and how it may be hard to call whether a given outcome is good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools—that’s the most fundamental thing about people—and that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the composition, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books really quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, so it’s out. It’s been out a couple weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. So the idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other part is set in the present. In the past, it imagines a race of humanlike machines built from automatons that are serving the great empires of antiquity, and they’re blending in with humanity, and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States; and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well I want to thank you so much for taking an hour, to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope that we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Artificial Intelligence: It’s Not Man vs. Machine. It’s Man And Machine

At the Gigaom Change conference in Austin, Texas, on September 21-23, 2016, Manoj Saxena (Chairman of CognitiveScale), Josh Sutton (Head of Data & Artificial Intelligence at Publicis Sapient) and Rob High (CTO for IBM Watson) talked with moderator and market strategist, Patricia Baumhart, about the next frontier in artificial intelligence and how the race to win in AI will soon reshape our world.
Artificial intelligence is a field with a long history starting as early as 1956, but today what we’re beginning to see emerge is a new convergence of 6 major technologies: AI, cloud, mobile, social, big data and blockchain. Each of the panelists agreed that as we enter into the next digital frontier, AI will be woven into each of these areas causing a “super-convergence” of capabilities.
Saxena predicts that “this age of the Internet is going to look small by comparison to what’s happening in AI.” It’s true. The proliferation of AI creates a new world of application and computation design, including embodied cognition in concierge-style robots that help when we need assistance.
Cloud will become “cognitive cloud,” a ubiquitous virtual data repository powered by a “digital brain” that understands human needs to help us engage with information seamlessly in work and life. Big data will evolve from being about understanding trends to understanding and predicting outcomes. In combination these developments will disrupt enterprise IT and other business models across the world.
But as we move from a “mobile first to an AI first” landscape, how do we differentiate the winners from the losers? And how can investors know where to place their bets?
Trust and transparency are going to be the two most critical pieces of winning applications. Imagine a hedge fund manager using AI algorithms to develop a financial strategy for their portfolio. Before placing millions of dollars at risk, that manager will need an explanation of why the AI chose a particular solution.
We’re seeing companies like Waze do this already. Beyond being a great way to navigate, Waze is a contextually aware, predictive computing platform that anticipates what information you need next based on your location and route. More applications in different industries — from healthcare, to retail, to personal finance — will soon act like Waze, using cognitive computing and context to constantly learn and anticipate what we need.
The businesses that will win are the ones that apply AI capabilities not just to automate their processes, but to run their business in a fundamentally different way.
First, we have to understand the areas that AI can best be applied. The challenge in cognitive computing is interpreting and understanding the oftentimes imprecise language we use as humans. As High pointed out in the panel, “our true meaning is often hidden in our context.” AI needs to be able to learn from these conditions to gain meaning.
It’s not a question of who has the best technology, but who has the best understanding and appreciation of what the technology can unlock. The people who will gain the most from AI are the ones who are rethinking their business processes, not just running their existing businesses better.
As more of our lives are aided by intelligent systems in our homes, at work, and in our cars, other questions arise. Will AI get so smart that it replaces us? Sutton, High and Saxena all agree "no," but they say that some tasks will certainly become automated. They believe the more important change will be the creation of a new class of jobs. According to Forrester, 25% of all job tasks will be offloaded to software robots, physical robots, or customer self-service automation — in other words, all of us will be impacted in some way. But while that may sound discouraging, the same study states that 13.6 million jobs will be created using AI tools over the next decade.
The nature of work will change dramatically with AI. We’ll have technology that augments our skills and abilities — perhaps something like a “JARVIS suit” that allows us to be superhuman. We’ll work alongside robotic colleagues that help us with our most challenging tasks. In terms of cognitive computing, we’re talking about amplifying human cognition, not replacing the human mind. There is so much to be gained when we uncover ideas and solutions we wouldn’t have been able to do on our own.
Today, 2.5 exabytes of data are being produced every day, and the world's accumulated data is expected to grow to 44 zettabytes by 2020. Like an actual brain — a super-complex network of biological components that learns and grows with experience — these interconnected data points, along with the machine learning algorithms that learn and act upon them on our behalf, are the building blocks of our AI-powered future.
By Royal Frasier, Gryphon Agency for Gigaom Change 2016

Bob Metcalfe to Keynote at Gigaom Change in Austin

One of the nice things about the Internet Age being relatively new is that many of its earliest pioneers are not only still around, but still doing interesting new work. Among these titans, few loom as large as Bob Metcalfe. Inventor of Ethernet. Coiner of Metcalfe's Law. Founder of 3Com.
Bob was there in the early days at PARC, and today you can find him at the University of Texas promoting entrepreneurship and startups, and keeping his eyes open for the next big thing.
When considering keynote speakers for Gigaom Change, an event about the present disruption of business through new technology such as AI and robots, I wanted to find someone who had seen a new technology arrive at the very beginning and then ushered it through to commercial success, and finally helped to make it impact the entire world.
I had a short list of candidates and Bob was at the top. Luckily, he said yes.
I caught up with him Monday, April 25, and all but ambushed him with a series of questions about the kinds of changes he expects technology to bring about next.
Byron Reese: So I’ll ask you the question that Alan Turing posed way back: “Can a machine think?”
Bob Metcalfe: Yes, I mean, if human beings can think then machines can think.
And so, you believe we’ll develop an AI.
Yes, absolutely. The brain consists of these little machines, and eventually we’ll be able to build little machines and then they’ll be able to think.
Do you have an opinion on what consciousness is?
It has something to do with attention. That is, focusing the activities of the thinking machine; focusing them in on a certain set of inputs, and that’s sort of what consciousness is.
Do you think we’ll make conscious machines?
Yes. An interesting case of consciousness is when the selected inputs, that is, the ones selected for attention, are internal; that is self-consciousness: being able to look down on our own thoughts, which also seems to be possible with some version of a neural net.
Would a conscious machine have inalienable rights?
Whoa! Do human beings have inalienable rights, I’m not sure.
We claim we have a right to life, and it's generally regarded that there are things called universal human rights.
That’s a conflict of interest because we’re declaring that we have our own rights. Actually, it worries me a little how in modern day life, the list of things that are ‘rights’ are getting longer and longer.
Why does that worry you?
It just seems to be more of a conflict of interest. Sort of a failure to recognize that we live in a reality that requires effort and responsibility, and "rights" somehow are a short-cut, as in we have a "right" to stuff as opposed to having to work for it.
Do you believe that robots and AI will be able to do all the jobs that humans can do?
I think so, I think that's inevitably the case. The big issue, as you well know, is whether it's man-versus-the-machine or man-and-the-machine, and I tend to come down on the "man-and-the-machine" side of things; that is, humans will be enhanced by their robots, not replaced by their robots.
So, some kind of human-machine synthesis like augmented memory and all of those sorts of things.
Well, we have that already. I have the entire Google world at my disposal, and it's now part of my habit when something comes up that can't be remembered, I quickly take out my iPhone and I know what it is within a minute. You know, like, "Who was Attila the Hun," that came up the other day, and I can read the entire life of Attila the Hun within a minute. Although the interface between Google and my thought process is awkward between typing and reading. I can imagine eventually that we'll have Google inserted in our head more efficiently. And then it won't take 10 years to learn French, it'll take just a few minutes to learn French because you'll just "plug it in."
What do you think people will do in the future, if machines and AIs are doing all the things that have to be done?
I don’t know. I guess, you know, a hundred years ago everybody knew how to milk cows—well, 40 percent of the population knew how to milk a cow. And now, you know, the percentage of people who know how to milk a cow is pretty small and there are robots doing it. And somehow all of those people managed to get employed in something else, and now they’re UX/UI engineers, or they’re bloggers or they’re data scientists. Somehow all those people stopped milking cows and they started doing something at a higher-level in Maslow’s hierarchy.
There are two potential problems with that, though. One is if the disruption comes too quickly to be absorbed without social instability. The second is that, in the past, we always found things to do because there were things we could do better than machines. But what if there’s nothing we can do better than a machine can? Are there things only people can do?
You’ve wandered out of my area of expertise. Although, on the ‘happened too quickly’ front, as we’re seeing in Austin this week, the status quo can slow things down, like the Uber-Lyft slow-down initiative here in Austin. We like taxis here rather than Uber and Lyft, apparently because they’re safer.
What are you working on? Enough about the big issues, how do you spend your days?
I spend my days on the big issues, and the big issue is innovation as a driver of freedom and prosperity; and the tool of innovation that I’ve settled on perfecting and promoting and professing is startups. Startups as vehicles—as innovation vehicles—and mostly coming out of research universities. So most of what I do is focused on that view of the world.
Why did you choose startups as the mechanism of innovation?
Because startups, in my experience, have been the most effective way to innovate. Everyone loves innovation as long as they’re not being innovated upon, and then as soon as they’re innovated upon they become the status quo, which is resourceful and nasty and mean. And, so the most effective tools in my experience against the status quo have been these startups, which at their core are champions of innovation. I got the word champion from Bob Langer at MIT; he believes these new technologies need champions, which is why he likes startups. A startup is a place where champions go and gather resources, coordinate their efforts and scale up. So, I guess it’s their effectiveness in having real impact with innovations that causes me to admire and profess startups.
It’s interesting though that as much as what you call the status quo can slow down innovation, nothing can really ever be stopped can it? I mean, big whale oil didn’t stop kerosene and big kerosene didn’t stop electricity.
The rate of advance can be slowed. The internet is old now, it started running in ’69. Just think how many years have passed, 50 years, to get where we are today. Is that fast or slow, by the way?
I would say that’s very fast. We’ve had recorded history, and by that I mean writing, for 5,000 years. We have therefore had the Internet for only 1% of recorded history. Are you overall optimistic about the future that all these new technologies and startups are going to usher in? Do you think it’s going to be a better future, or not?
I’m a better-future believer, an optimist, and enthusiast. I think cynics are often right but they never get anything done. Just as a matter of choice, without assessment, I choose to be optimistic.
Last question: Aren’t startups fundamentally irrational, in the sense that the likelihood of success is so small and the risk so high that one has to be somewhat self-deluded to undertake one? I ask this, of course, as someone who has done several.
Maybe that circles us back to your big question from before. Maybe that’s what makes us human: we need to delude ourselves in order to make progress. Maybe robots won’t do startups because they’re too rational.

Why you can’t program intelligent robots, but you can train them

If it feels like we’re in the midst of a robot renaissance right now, perhaps it’s because we are. There is a new crop of robots under development that we’ll soon be able to buy and install in our factories or interact with in our homes. And while they might look like robots past on the outside, their brains are actually much different.

Today’s robots aren’t rigid automatons built by a manufacturer solely to perform a single task faster and more cheaply than humans and, ideally, without much input from them. Rather, today’s robots can be remarkably adaptable machines that not only learn from their experiences, but can even be designed to work hand in hand with human colleagues. Commercially available (or soon-to-be-available) technologies such as Jibo, Baxter and Amazon Echo are three well-known examples of what’s now possible, but they’re also just the beginning.

Different technological advances have spurred the development of smarter robots depending on where you look, although they all boil down to training. “It’s not that difficult to build the body of the robot,” said Eugene Izhikevich, founder and CEO of robotics startup Brain Corporation, “but the reason we don’t have that many robots in our homes taking care of us is it’s very difficult to program the robots.”

Essentially, we want robots that can perform more than one function, or perform one function very well. And it’s difficult to program a robot to do multiple things, or at least the things that users might want, and it’s especially difficult to program to do these things in different settings. My house is different than your house, my factory is different than your factory.

A collection of RoboBrain concepts.

“The ability to handle variations is what enables these robots to go out into the world and actually be useful,” said Ashutosh Saxena, a Stanford University visiting professor and head of the RoboBrain project. (Saxena will be presenting on this topic at Gigaom’s Structure Data conference March 18 and 19 in New York, along with Julie Shah of MIT’s Interactive Robotics Group. Our Structure Intelligence conference, which focuses on the cutting edge in artificial intelligence, takes place in September in San Francisco.)

That’s where training comes into play. In some cases, particularly projects residing within universities and research centers, the internet has arguably been a driving force behind advances in creating robots that learn. That’s the case with RoboBrain, a collaboration among Stanford, Cornell and a few other universities that crawls the web with the goal of building a web-accessible knowledge graph for robots. RoboBrain’s researchers aren’t building robots, but rather a database of sorts (technically, more of a representation of concepts — what an egg looks like, how to make coffee or how to speak to humans, for example) that contains information robots might need in order to function within a home, factory or elsewhere.

RoboBrain encompasses a handful of different projects addressing different contexts and different types of knowledge, and the web provides an endless store of pictures, YouTube videos and other content that can teach RoboBrain what’s what and what’s possible. The “brain” is trained with examples of things it should recognize and tasks it should understand, as well as with reinforcement in the form of thumbs up and down when it posits a fact it has learned.
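
To make that training loop concrete, here is a minimal sketch in Python of a web-mined fact store whose entries gain or lose confidence as reviewers give a thumbs up or down. The class and method names are invented for illustration; this is not RoboBrain’s actual code or data model.

```python
# Minimal sketch of a RoboBrain-style fact store with thumbs-up/down feedback.
# All names are illustrative and not taken from the real RoboBrain project.

class ConceptStore:
    def __init__(self):
        # Each fact maps to a confidence score between 0 and 1.
        self.facts = {}

    def learn(self, fact, confidence=0.5):
        """Add a fact mined from the web with a neutral starting confidence."""
        self.facts[fact] = confidence

    def feedback(self, fact, thumbs_up, step=0.1):
        """Nudge a fact's confidence up or down based on human feedback."""
        current = self.facts.get(fact, 0.5)
        delta = step if thumbs_up else -step
        self.facts[fact] = min(1.0, max(0.0, current + delta))

    def query(self, keyword, threshold=0.6):
        """Return facts mentioning the keyword that the crowd has vetted."""
        return [f for f, c in self.facts.items() if keyword in f and c >= threshold]


brain = ConceptStore()
brain.learn("a mug holds liquid and has a handle")
brain.learn("coffee is made by pouring hot water over grounds")
brain.feedback("a mug holds liquid and has a handle", thumbs_up=True)
print(brain.query("mug"))  # -> ['a mug holds liquid and has a handle']
```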

For example, one of its flagship projects, which Saxena started at Cornell, is called Tell Me Dave. In that project, researchers and crowdsourced helpers across the web train a robot to perform certain tasks by walking it through the necessary steps for tasks such as cooking ramen noodles.  In order for it to complete a task, the robot needs to know quite a bit: what each object it sees in the kitchen is, what functions it performs, how it operates and at which step it’s used in any given process. In the real world, it would need to be able to surface this knowledge upon, presumably, a user request spoken in natural language — “Make me ramen noodles.”
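
The general shape of that pipeline, greatly simplified, is a mapping from a natural-language request to a stored sequence of steps. The sketch below is only a toy in that spirit: the task library and matching rule are invented, and the real Tell Me Dave system grounds each step in perception and planning rather than a lookup table.

```python
# Toy illustration of turning a spoken request into task steps,
# in the spirit of Tell Me Dave. The task library below is invented.

TASK_LIBRARY = {
    "make ramen noodles": [
        "locate pot", "fill pot with water", "place pot on stove",
        "turn on burner", "wait for boil", "add noodles", "wait 3 minutes",
        "turn off burner", "pour noodles into bowl",
    ],
    "make coffee": [
        "locate mug", "add grounds to machine", "add water",
        "start machine", "pour coffee into mug",
    ],
}

def plan(request):
    """Match the request to the closest known task by simple word overlap."""
    words = set(request.lower().split())
    best_task = max(TASK_LIBRARY, key=lambda task: len(words & set(task.split())))
    return TASK_LIBRARY[best_task]

for step in plan("Make me ramen noodles"):
    print(step)
```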

The Tell Me Dave workflow.

Multiply that by any number of tasks someone might actually want a robot to perform, and it’s easy to see why RoboBrain exists. Tell Me Dave can only learn so much, but because it’s accessing that collective knowledge base or “brain,” it should theoretically know things it hasn’t specifically trained on. Maybe how to paint a wall, for example, or that it should give human beings in the same room at least 18 inches of clearance.

There are now plenty of other examples of robots learning by example, often in lab environments or, in the case of some recent DARPA research using the aforementioned Baxter robot, watching YouTube videos about cooking (pictured above).

Advances in deep learning — the artificial intelligence technique du jour for machine-perception tasks such as computer vision, speech recognition and language understanding — also stand to expedite the training of robots. Deep learning algorithms trained on publicly available images, video and other media content can help robots recognize the objects they’re seeing or the words they’re hearing; Saxena said RoboBrain uses deep learning to train robots on proper techniques for moving and grasping objects.
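
As a rough illustration of what that looks like in practice, the short sketch below labels a single camera frame with a pretrained image classifier. It is a generic example, not anything specific to RoboBrain: it assumes PyTorch, torchvision and Pillow are installed, and the file names ("kitchen.jpg", "imagenet_classes.txt") are placeholders.

```python
# Sketch: labeling what a robot's camera sees with a pretrained classifier.
# Assumes torch, torchvision and Pillow are installed; file names are placeholders.

import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)  # older torchvision API; newer versions use weights=
model.eval()

image = Image.open("kitchen.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)    # add a batch dimension

with torch.no_grad():
    scores = model(batch)
probabilities = torch.nn.functional.softmax(scores[0], dim=0)

with open("imagenet_classes.txt") as f:
    labels = [line.strip() for line in f]

top = torch.topk(probabilities, 3)
for prob, idx in zip(top.values, top.indices):
    print(f"{labels[idx]}: {prob:.2f}")
```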

The Brain Corporation platform.

However, there’s a different school of thought that says robots needn’t necessarily be as smart as RoboBrain wants to make them, so long as they can at least be trained to know right from wrong. That’s what Izhikevich and his aforementioned startup, Brain Corporation, are out to prove. It has built a specialized hardware and software platform, based on the idea of spiking neurons, that Izhikevich says can go inside any robot and “you can train your robot on different behaviors like you can train an animal.”

That is to say, for example, that a vacuum robot powered by the company’s operating system (called BrainOS) won’t be able to recognize that a cat is a cat, but it will be able to learn from its training that that object — whatever it is — is something to avoid while vacuuming. Conceivably, as long as they’re trained well enough on what’s normal in a given situation or what’s off limits, BrainOS-powered robots could be trained to follow certain objects or detect new objects or do lots of other things.
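
To show the flavor of training-by-feedback rather than programming, here is a toy sketch: the robot never learns the word “cat,” it only learns that a particular perceptual signature earned a “bad” signal during training and should be avoided. Everything here, including the class and signature names, is invented and has nothing to do with BrainOS internals.

```python
# Toy "train it like an animal" sketch: a robot learns which perceptual
# signatures to avoid purely from good/bad feedback during training runs.
# Purely illustrative; unrelated to the real BrainOS.

from collections import defaultdict

class TrainableVacuum:
    def __init__(self):
        # Score per signature: negative means "avoid", non-negative means "proceed".
        self.scores = defaultdict(float)

    def train(self, signature, approved):
        """Operator presses 'good' or 'bad' while the robot approaches something."""
        self.scores[signature] += 1.0 if approved else -1.0

    def decide(self, signature):
        """At run time, steer around anything that was corrected during training."""
        return "avoid" if self.scores[signature] < 0 else "proceed"


robot = TrainableVacuum()
robot.train("small warm moving blob", approved=False)  # the cat, whatever it is
robot.train("flat dark rectangle", approved=True)      # the doormat
print(robot.decide("small warm moving blob"))  # -> avoid
print(robot.decide("flat dark rectangle"))     # -> proceed
```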

If there’s one big challenge to the notion of training robots versus just programming them, it’s that consumers or companies that use the robots will probably have to do a little work themselves. Izhikevich noted that the easiest model might be for BrainOS robots to be trained in the lab, and then have that knowledge turned into code that’s preinstalled in commercial versions. But if users want to personalize robots for their specific environments or uses, they’re probably going to have to do that training themselves.

Part of the training process with Canary. The next step is telling the camera what it’s seeing.

As the internet of things and smart devices in general catch on, consumers are already getting used to the idea — sometimes begrudgingly. Even when it’s something as simple as pressing a few buttons in an app, like training a Nest thermostat or a Canary security camera, training our devices can get tiresome. Even those of us who understand how the algorithms work can get annoyed.

“For most applications, I don’t think consumers want to do anything,” Izhikevich said. “You want to press the ‘on’ button and the robot does everything autonomously.”

But maybe three years from now, by which time Izhikevich predicts robots powered by Brain Corporation’s platform will be commercially available, consumers will have accepted one inherent tradeoff in this new era of artificial intelligence — that smart machines are, to use Izhikevich’s comparison, kind of like animals. Specifically, dogs: They can all bark and lick, but turning them into seeing eye dogs or K-9 cops, much less Lassie, is going to take a little work.

Meet your new driveway-clearing masters: Robot snowplows

They won’t arrive in time for this week’s “historic snowstorm Juno,” but a spate of prototype autonomous snow-clearing devices (aka snow-shoveling robots) might spell relief for future events.

Eight college design teams are in St. Paul, Minnesota, with devices they designed to navigate and clear two snow fields in a set amount of time. Entries in the Fifth Annual ION Autonomous Snowplow Competition cost from $4,000 to $12,000 to build, according to this report from local CBS affiliate WCCO. (Video here.)

Sponsors for the event included the Institute of Navigation, SpaceX, [company]Lockheed Martin[/company], [company]Honeywell[/company], [company]John Deere[/company], and [company]Toro[/company].

The point of all this is to create a:

 “snowplow vehicle that will autonomously remove snow from a pre-defined path. The competition invites and challenges teams in the area of high-performance autonomous vehicle guidance, navigation, and control. The competition is also designed to encourage student interest in the areas of mathematics, science, and engineering.”

One of the devices relies on a magnetic track embedded in the path that it can follow. Stay tuned for the results.

Given that Juno is supposed to dump two to three feet of snow on parts of the Northeast, I sort of doubt that these prototypes will make a dent, but hey, they’re a step in the right direction.

Contestants in the 2013 Autonomous Snowplow Competition

Robotics funding is off to a hot start in 2015

Robotics hardware startups have raised more than $51.9 million so far in 2015, bolstered by home robotics startup Jibo’s $25.3 million Series A round Tuesday.

That’s chump change for a lot of industries, but not for robotics companies, which have traditionally seen much lower investment rates. Google X engineer Travis Deyle’s annual semi-scientific tally put venture funding for robot companies at around $341.3 million in 2014. That’s up significantly from $250.7 million in 2013.

While home robots like Jibo are playing a part in the trend, drones are also a major factor, drawing in $105 million in 2014 by Deyle’s count. They haven’t slowed down in 2015, either; Skydio and Galileo grabbed $3 million and an undisclosed amount, respectively, in their seed rounds this month.

That $51.9 million figure also includes Rethink Robotics, which raised a $26.6 million Series D. Unlike Jibo’s home assistant bot, Rethink Robotics’ Baxter robot is best known for its work in labs and factories, where it can be quickly trained to take over repetitive tasks from humans.

The increased interest in consumer-level robots might be due in part to crowdfunding sites, where novel hardware often turns into a blockbuster campaign. Personal Robot, a home assistant much like Jibo, is in the middle of a campaign that has raised well over $100,000. And there is no question it will be another big year for crowdfunded drones.

Arduino-compatible Quirkbot lets kids build robots out of straws

One year ago, a simple and very cool construction kit for children came out; called Strawbees, it lets kids develop their inner engineer by making all kinds of structures out of ordinary drinking straws and cardboard. Now, a spinoff project has emerged: a “toy to make toys” called Quirkbot.

Quirkbot is a small 8MHz microcontroller with an Arduino-compatible bootloader that can be made part of a Strawbees creation without any need for soldering or breadboarding. It has light, distance and sound sensors and can basically be used to create moving, drinking-straw-based robots called “Qreatures.” Squeeze-on electronics can add sounds and lights to the mix.

Bot & Roll concert

It’s even possible to make a game controller using the thing. Quirkbot has a microUSB port for charging and for loading programs, which kids can create through a browser-based visual programming interface that allows for the sharing of projects.

This is a really nice educational idea – the Strawbees-compatible system makes it easy to quickly try out new ideas. The Swedish Quirkbot team’s Kickstarter campaign launched on Tuesday, with a package including the Quirkbot microcontroller, a Strawbees Maker Kit, light sensors and a motor costing $69, or $59 for the first 99 early birds.

Pricier kits come with features such as MIDI out, speakers and LED lights, and with distance and sound sensors. The estimated ship date is August of this year.

[youtube https://www.youtube.com/watch?v=a2LIR4TEiaI&w=560&h=315]