Voices in AI – Episode 49: A Conversation with Ali Azarbayejani

[voices_in_ai_byline]
In this episode, Byron and Ali discuss AI’s impact on business and jobs.
[podcast_player name="Episode 49: A Conversation with Ali Azarbayejani" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-06-12-(00-57-00)-ali-azarbayejani.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-2.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Ali Azarbayejani. He is the CTO and Co-founder of Cogito. He has 18 years of commercial experience as a scientist, an entrepreneur, and a designer of world-class computational technologies. His pioneering doctoral research at the MIT Media Lab in Probabilistic Modeling for 3-D Vision was the basis for his first startup company, Alchemy 3-D Technology, which created a market in the film and video post-production industry for camera matchmoving software. Welcome to the show, Ali.
Ali Azarbayejani: Thank you, Byron.
I’d like to start off with the question: what is artificial intelligence?
I’m glad we’re starting with some definitions. I think I have two answers to that question. The original definition of artificial intelligence I believe in a scholarly context is about creating a machine that operates like a human. Part of the problem with defining what that means is that we don’t really understand human intelligence very well. We have a pretty good understanding now about how the brain functions physiologically, and we understand that’s an important part of how we provide cognitive function, but we don’t have a really good understanding of mind or consciousness or how people actually represent information.
I think the first answer is that we really don’t know what artificial or machine intelligence is other than the desire to replicate human-like function in computers. The second answer I have is how AI is being used in industry. I think that that is a little bit easier to define because I believe almost all of what we call AI in industry is based on building input/output systems that are framed and engineered using machine learning. That’s really at the essence of what we refer to in the industry as AI.
So, you have a high concept definition and a bread and butter work-a-day working definition, and that’s how you’re bifurcating that world?
Yeah, I mean, a lot of people talk about how we’re in the midst of an AI revolution. I don’t believe, at least in the first sense of the term, that we’re in an AI revolution at all. I think we’re in the midst of a machine learning revolution, which is really important and really powerful, but I guess what I take issue with is the term intelligence, because most of these things that we call artificial intelligence don’t really exhibit the properties of intelligence that we would normally think are required for human intelligence.
These systems are largely trained in the lab and then deployed. When they’re deployed, they typically operate as a simple static input/output system. You put in audio and you get out words. So, you put in video and you get out locations of faces. That’s really at the core of what we’re calling AI now. I think it’s really the result of advances in technology that’s made machine learning possible at large scale, and it’s not really a scientific revolution about intelligence or artificial intelligence.
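To make that concrete, here is a minimal, hypothetical sketch of the train-in-the-lab, deploy-as-a-static-function pattern Ali is describing. The toy features and labels are invented for illustration; a real speech or vision system would be vastly larger, but the deployment shape is the same: a fixed mapping from input to output.

```python
# Sketch of "train in the lab, deploy as a static input/output system".
# The data here is hypothetical; only the pattern matters.
import pickle
from sklearn.linear_model import LogisticRegression

# --- Lab phase: fit a model on labeled training data ---
X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]  # toy features
y_train = ["speech", "noise", "speech", "noise"]             # toy labels

model = LogisticRegression().fit(X_train, y_train)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# --- Deployment phase: the model is now a static mapping; it does not
# keep learning. Same input in, same output out, every time. ---
with open("model.pkl", "rb") as f:
    deployed = pickle.load(f)

def classify(features):
    """Pure input/output: features in, label out."""
    return deployed.predict([features])[0]

print(classify([0.15, 0.85]))  # expected "speech" on this toy data
```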
All right, let’s explore that some, because I think you’re right. I have a book coming out in the Spring of 2018 which is 20,000 words and it’s dedicated to the brain, the mind and consciousness. It really tries to wrap itself around those three concepts. So, let’s go through them, if you don’t mind, for just a minute. You started out by saying that with the brain we understand how it functions. I would love to go into that, but as far as I understand it, we don’t know how a thought is encoded. We don’t know how the memory of your 10th birthday party, or what pineapple tastes like, or any of that is actually encoded. We can’t write to it. We can’t read from it, except in the most rudimentary sense. So do you think we really do understand the brain?
I think that’s the point I was actually making is that we understand the brain at some level physiologically. We understand that there’s neurons and gray matter. We understand a little bit of physiology of the brain, but we don’t understand those things that you just mentioned, which I refer to as the “mind.” We don’t really understand how data is stored. We don’t understand how it’s recalled exactly. We don’t really understand other human functions like consciousness and feelings and emotions and how those are related to cognitive function. So, that’s really what I was saying is, we don’t understand how intelligence evolves from it, although really where we’re at is we just understand a little bit of the physiology.
Yeah, it’s interesting. There’s no consensus definition of what intelligence is, and that’s why you can point at anything and say, “well, that’s intelligent.” “My sprinkler that comes on when my grass is dry, that’s intelligent.” The mind is of course a very, shall we say, controversial concept, but I think there is a consensus definition of it that everybody can agree to, which is that it’s all the stuff the brain does that doesn’t seem, emphasis on seem, like something an organ should be able to do. Your liver doesn’t have a sense of humor. Your liver doesn’t have an imagination. All of these things. So, based on that definition, creativity and all, and not even getting to consciousness, not even experiencing the world, just these abilities, these raw abilities like to write a poem, or paint a great painting, or what have you: you were saying we actually have not made any real progress towards any of that. That’s gotten mixed up in this whole machine learning thing. Am I right that you think we’re still at square one with that whole project of building an artificial mind?
Yeah, I mean, I don’t see a lot of difference intellectually [between] where we are now and when I was in school in the late 80s and 90s in terms of theories about the mind and theories about how we think and reason. The basis for the current machine learning revolution is largely based on neural networks which were invented in the 1960s. Really what is fueling the revolution is technology. The fact that we have the CPU power, the memory, the storage and the networking — and the data — and we can put all that together and train large networks at scale. That’s really what is fueling the amazing advances that we have right now, not really any philosophical new insights into how human intelligence works.
Putting it out there for just a minute, is it possible that an AGI, a general intelligence, that an artificial mind, is it possible that that cannot be instantiated in machinery?
That’s a really good question. I think that’s another philosophical question that we need to wrestle with. I think that there are at least two schools of thought on this that I’m aware of. I think the prevailing notion, which is I think a big assumption, is that it’s just a matter of scale. I think that people look at what we’ve been able to do with machine learning and we’ve been able to do incredible things with machine learning so far. I think people think of well, a human sitting in a chair can sit and observe the world and understand what’s going on in the world and communicate with other people. So, if you just took that head and you could replicate what that head was doing, which would require a scale much larger than what we’re doing right now with artificial neural networks, then embody that into a machine, then you could set this machine on the table there or on the chair and have that machine do the same thing.
I think one school of thought is that the human brain is an existence proof that a machine can exist to do the operations of a human intelligence. So, all we have to do is figure out how to put that into a machine. I think there’s a lot of assumptions involved in that train of thought. The other train of thought, which is more along the lines of where I land philosophically, is that it’s not clear to me that intelligence can exist without ego, without the notion of an embodied self that exists in the world, that interacts in the world, that has a reason to live and a drive to survive. It’s not clear to me that it can’t exist, and obviously we can do tasks that are similar to what human intelligence does, but I’m not entirely sure that… because we don’t understand how human intelligence works, it’s not clear to me that you can create an intelligence in a disembodied way.
I’ve had 60-something guests on the show, and I keep track of the number that don’t believe we can actually build a general intelligence, and it’s I think 5. They are Deep Varma, Esther Dyson, people who have similar… more so I think they’re even more explicitly saying they don’t think we can do it. The other 60 guests have the same line of logic, which is we don’t know how the brain works. We don’t know how the mind works. We don’t know how consciousness works, but we do have one underlying assumption that we are machines, and if we are machines, then we can build a mechanical us. Any argument against that or any way to engage it, the word that’s often offered is magic. The only way to get around that is to appeal to magic, to appeal to something supernatural, to appeal to something unscientific. So, my question to you is: is that true? Do you have to appeal to something unscientific for that logic to break down, or are there maybe scientific reasons completely causal, system-y kind of systems by which we cannot build a conscious machine?
I don’t believe in magic. I don’t think that’s my argument. My argument is more around what is the role that the body around the brain plays, in intelligence? I think we make the assumption sometimes that the entire consciousness of a person, entire cognition, everything is happening from the neck up, but the way that people exist in the world and learn from simply existing in the world and interacting with the world, I think plays a huge part in intelligence and consciousness. Being attached to a body that the brain identifies with as “self,” and that the mind has a self-interest in, I think may be an essential part of it.
So, I guess my point of view on this is I don’t know what the key ingredients are that go into intelligence, but I think that we need to understand… Let me put it this way, I think without understanding how human consciousness and human feelings and human empathy works, what the mechanisms are behind that, I mean, it may be simply mechanical, but without understanding how that works, it’s unclear how you would build a machine intelligence. In fact, scientists have struggled from the beginning of AI even to define it, and it’s really hard to say you can build something until you can actually define it, until you actually understand what it is.
The philosophical argument against that would be like “Look, you got a finite number of senses and those that are giving input to your brain, and you know the old philosophical thought experiment you’re just a brain in a vat somewhere and that’s all you are, and you’re being fed these signals and your brain is reacting to them,” but there really isn’t even an external world that you’re experiencing. So, they would say you can build a machine and give it these senses, but you’re saying there’s something more than that that we don’t even understand, that is beyond even the five senses.
I suppose if you had a machine that could replicate atom for atom a human body, then you would be able to create an intelligence. But, how practical would it be?
There are easier ways to create a person than that?
Yeah, that’s true too, but how practical is a human as a computing machine? I mean, one of the advantages of the computer systems that we have, the machine learning-based systems that we call AI is that we know how we represent data. Then we can access the data. As we were talking about before, with human intelligence you can’t just plug in and download people’s thoughts or emotions. So, it may be that in order to achieve intelligence, you have to create this machine that is not very practical as a machine. So you might just come full circle to well, “is that really the powerful thing that we think it’s going to be?”
I think people entertain the question because this question of “are people simply machines? Is there anything that happens? Are you just a big bag of chemicals with electrical pulses going through you?” I think people have… emotionally engaging that question is why they do it, not because they want to necessarily build a replicant. I could be wrong. Let me ask you this. Let’s talk about consciousness for a minute. To be clear, people say we don’t know what consciousness is. This is of course wrong. Everybody agrees on what it is. It is the experiencing of things. It is the difference between a computer being able to sense temperature and a person being able to feel heat. It’s like that difference.
It’s been described as the last scientific question we don’t really know how to ask, and we don’t know what the answer would look like. I put eight theories together in this book I wrote. Do you have a theory, just even a gut reaction? Is it an emergent property? Is it a quantum property? Is it a fundamental law of the universe? Do you have a gut feel of what direction you would look to explain consciousness?
I really don’t know. I think that my instinct is along the lines of what I talked about recently with embodiment. My gut feel is that a disembodied brain is not something that can develop a consciousness. I think consciousness fundamentally requires a self. Beyond that, I don’t really have any great theories about consciousness. I’m not an expert there. My gut feel is we tend to separate, when we talk about artificial intelligence, we tend to separate the function of mind from the body, and I think that may be a huge assumption that we can do that and still have self and consciousness and intelligence.
I think it’s a fascinating question. About half of the guests on the show just don’t want to talk about it. They just do not want to talk about consciousness, because they say it’s not a scientific question and it’s a distraction. Half of them, very much, it is the thing, it’s the only thing that makes living worthwhile. It’s why you feel love and why you feel happiness. It is everything in a way. People have such widely [divergent views], like Stephen Wolfram was on the show, and he thinks it’s all just computation. To that extent, anything that performs computation, which is really just about anything, is conscious. A hurricane is conscious.
One theory is consciousness is an emergent property, just like you are trillions of cells that don’t know who you are and none of them have a sense of humor, you somehow have a distinct emergent self and a sense of humor. There are people who think the planet itself may have a consciousness. Others say that activity in the sun looks a lot like brain activity, and perhaps the sun is conscious, and that is an old idea. It is interesting that all children when they draw an outdoor scene they always put a smiling face on the sun. Do you think consciousness may be more ubiquitous, not unique to humans? That it may kind of be in all kinds of places, or do you just at a gut level think it’s a special human [trait], and other animals you might want to include in that characteristic?
That’s an interesting point of view. I certainly see how it’s a nice theory; it being a continuum, I think, is what he’s saying, that there’s some level of consciousness in the simplest thing. Yeah, I think this is more along the lines of the it’s-just-a-matter-of-scale type of philosophy, which is that at a larger scale what emerges is a more complex and meaningful consciousness.
There’s a project in Europe you’re probably familiar with, the Human Brain Project, which is really trying to build an intelligence through that scale. The counter to it is the OpenWorm project, where they’ve sequenced the genome of the nematode worm, whose brain has 302 neurons, and for 20 years people have been trying to model those 302 neurons in a computer to build, as it were, a digital functioning nematode worm. By one argument they’re no closer to cracking that than they were 20 years ago. The scale question has its adherents at both extremes.
Let’s switch gears now and put that world aside and let’s talk about the world of machine learning, and we won’t call it intelligence anymore. It’s just machine learning, and if we use the word intelligence, it’s just a convenience. How would you describe the state of the art? As you point out, the techniques we’re using aren’t new, but our ability to apply them is. Are we in a machine learning renaissance? Is it just beginning? What are your thoughts on that?
I think we are in a machine learning renaissance, and I think we’re closer to the beginning than to the end. As I mentioned before, the real driver of the renaissance is technology. We have the computational power to do massive amounts of learning. We have the data and we have the networks to bring it all together and the storage to store it all. That’s really what has allowed us to realize the theoretical capabilities of complex networks as we model input/output functions.
We’ve done amazing things with that particular technology. It’s very powerful. I think there’s a lot more to come, and it’s pretty exciting the kinds of things we can do with it.
There’s a lot of concern, as you know, the debate about the impact that it’s going to have on employment. What’s your take on that?
Yeah, I’m not really concerned about that at all. I think that largely what these systems are doing is they’re allowing us to automate a lot of things. I think that that’s happened before in history. The concern that I have is not so much about removing jobs, because the entire history of the industrial revolution [is] we’ve built technology that has made jobs obsolete, and there are always new jobs. There’s so many things to do in the world that there’s always new jobs. I think the concern, if there’s any about this, is the rate of change.
I think at a generational level, it’s not a problem. The next generation is going to be doing jobs that we don’t even know exist right now, or that don’t exist right now. I think the problems may come with transformation within a generation, if you start automating jobs that belong to people who cannot be retrained in something else. But I think that there will always be new jobs.
Is it possible that there’s a person out there who cannot be retrained to do meaningful work? We’ve had 250 years of unending technological advance that would have blown the minds of somebody in 1750, and yet we don’t have anybody who… it’s like, no, they can’t do anything. Assuming that you have full use of your body and mind, there’s not a person on the planet who cannot in theory add economic value, all the more if they’re given technology to do it with. Do you really think that there will be people that “cannot be retrained”?
No, I don’t think it’s a “can” issue. I agree with you. I think that people can be retrained and like I said, I’m not really worried that there won’t be jobs for people to do, but I think that there are practical problems of the rate of change. I mean, we’ve seen it in the last decades in manufacturing jobs that a lot of those have disappeared overseas. There’s real economic pain in the regions of the country where those jobs were really prominent, and I don’t think there’s any theoretical reason why people can’t be retrained. Our government doesn’t really invest in that as much as it should, but I think there’s a practical problem that people don’t get retrained. That can cause shifts. I think those are temporary. I personally don’t see long term issues with transformations in technology.
It’s interesting because… I mean, this is a show about AI, which obviously holds it in high regard, but there have been other technologies that have been as transformative. An assembly line is a kind of AI. That was adopted really quickly. Electricity was adopted quickly, and steam was adopted. Do you think machine learning really is being adopted all that much faster, or is it just another equally transformative technology like electricity or something?
I agree with you. I think that it’s transformational, but I think it’s probably creating as many jobs as it’s automating away right now. For instance, in our industry, which is contact centers, a big trend is trying to automate, basically to digitize, a lot of the communications to take load off the telephone call center. What most of our enterprise customers have found with their contact centers is that the more they digitize, the more their call volume actually goes up. It doesn’t go down. So, there’s some conflicting evidence there about how much this is actually going to take away from jobs.
I am of the opinion that anyone in any endeavor understands there’s always more to do than you have time to do. Automating the things that can be automated I generally feel is a positive thing, and putting people to use in functions where we don’t know how to automate things, I think, is always going to be an available path.
You brought up what you do. Tell us a little bit about Cogito and its mission.
Our mission is centered around helping people have better conversations. We’re really focused on the voice stream, and in particular our main business is in customer call centers, where our technology listens to ongoing conversations, understands what’s going on in those conversations from an interactive and relationship point of view, from a behavioral point of view, and gives agents real-time feedback when conversations aren’t going well or when there’s something they can do to improve the conversation.
That’s where we get to the concept of augmented intelligence, which is using these machine learning endowed systems to help people do their jobs better, rather than trying to replace them. That’s a tremendously powerful paradigm. There’s trends, as I mentioned, towards trying to automate these things away, but often our customers find it more valuable to increase the competence of the people doing the jobs there because those jobs can’t be completely automated, rather than trying to automate away the simple things.
Hit rewind, back way up with Cogito, because I’m really fascinated by the thesis here: there’s what you say, and then there’s how you say it. We’re really good with one half of that equation, but we don’t apply technology to the other half. Can you tell that story and how it led to what you do?
Yeah, imagine listening to two people having a conversation in a foreign language that you don’t understand. You can undoubtedly tell a lot about what’s going on in that conversation without understanding a single word. You can tell whether people are angry at each other. You can tell whether they’re cooperating or hostile. You can tell a lot of things about the interaction without understanding a single word. That’s essentially what we’re doing with the behavioral analysis of how you say it. So, when we listen to telephone conversations, that’s a lot of what we’re doing is we’re listening to the tenor and the interaction in the conversation and getting a feel for how that conversation is going.
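As a rough illustration of word-free, “how you say it” analysis, here is a hypothetical sketch that computes a few crude behavioral signals (loudness, pauses, variation) straight from raw audio samples. These are not Cogito’s actual features, just an assumed toy example of what can be measured without understanding a single word.

```python
# Toy "how you say it" features computed from a mono audio signal,
# with no transcription involved.
import numpy as np

def behavioral_features(samples: np.ndarray, sample_rate: int = 8000,
                        frame_ms: int = 25) -> dict:
    """Compute crude prosody-style features from raw samples."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)

    energy = np.sqrt((frames ** 2).mean(axis=1))        # loudness per frame
    speaking = energy > (0.1 * energy.max())             # crude speech/pause split
    zero_cross = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)

    return {
        "mean_energy": float(energy.mean()),              # how loud overall
        "energy_variation": float(energy.std()),          # monotone vs. dynamic
        "speaking_ratio": float(speaking.mean()),          # talking vs. silence
        "avg_zero_crossing": float(zero_cross.mean()),     # rough brightness proxy
    }

# Example with synthetic audio: one second of tone followed by silence.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
audio = np.concatenate([0.5 * np.sin(2 * np.pi * 220 * t), np.zeros(rate // 2)])
print(behavioral_features(audio, sample_rate=rate))
```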
I mean, you’re using “listen” here colloquially. There’s nothing really listening. There’s a data stream that’s being analyzed, right?
Exactly, yeah.
So, I guess it sounds like the parents [in] Charlie Brown, like “waa, wa waa.” So, it hears that and can figure out what’s going on. That sounds like a technology with broad applications. Can you talk about, in a broad sense, what can be done, and then why you chose what you did choose as a starting point?
It actually wasn’t the starting point. The application that originally inspired the company was more of a mental health application. There’s a lot of anecdotal understanding that people with clinical depression or depressed mood speak in a characteristic way. So the original inspiration for building the company and the technology was to use in telephone outreach operations with chronically ill populations that have very high rates of clinical depression and very low rates of detection and treatment of clinical depression. So, that’s one very interesting application that we’re still pursuing.
The second application came up in that same context, in the context of health and wellness call centers is the concept of engagement. A lot of the beneficial approach to health is preventative care. So, there’s been a lot of emphasis in healthcare on helping people quit smoking and have better diets and things like that. These programs normally take place over the telephone, and so there’s conversations, but they’re usually only successful when the patient or the member is engaged in the process. So, we used this sort of speech and conversational analysis to build models of engagement and that would allow companies to either react to under-engaged patients or not waste their time with under-engaged patients.
The third application, which is what we’re primarily focused on right now, is agent interaction, the quality of agent interaction. There’s a huge amount of value with big companies that are consumer-oriented, and particularly those that have membership relationships with customers, in being able to provide a good human interaction when there are issues. So, customer service centers… and it’s very difficult if you have thousands of agents on the phone to understand what’s going on in those calls, much less improve it. A lot of companies are really focused on improvement. We’re the first system that allows these companies to understand what’s going on in those conversations in real-time, which is the moment of truth where they can actually do something about it. We allow them to do something about it by giving information not only to supervisors who can provide real-time coaching, but also to agents directly so that they can understand when their own conversations are going south and be able to correct that and have better conversations themselves. That’s the gist of what we do right now.
I have a hundred questions all running for the door at once with this. My first question is you’re trying to measure engagement as a factor. How generalizable is that technology? If you plugged it into this conversation that you and I are having, does it not need any modification? Engagement is engagement is engagement, or is it like, Oh no, at company X it’s going to sound different than a phone call from company Y?
That’s a really good question. In some general sense an engaged interaction, if you took a minute of our conversation right now, it’s pretty generalizable. The concept is that if you’re engaged in the topic, then you’re going to have a conversation which is engaged, which means there’s going to be a good back and forth and there’s going to be good energy in the conversation and things like that. Now in practice, when you’re talking about in a call center context, it does get trickier because every call center has potentially quite different shapes of conversations.
So, one call center may need to spend a minute going through formalities and verification and all of that kind of business, and that part of the conversation is not the part you actually care about, but it’s the part where we’re actually talking about a meaningful topic. Whereas another call center may have a completely different shape of a conversation. What we find that we have to do, where machine learning comes in handy here, is that we need to be able to take our general models of engaged interactions and convert and adapt those in particular context to understanding engaged overall conversations. Those are going to vary from context to context. So, that’s where adaptive machine learning comes into play.
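A hedged sketch of that adaptation idea: start from a general engagement model and bend it toward one call center’s conversation shape with a small amount of in-domain labeled data. The features, data, and weighting scheme below are assumptions made for illustration, not Cogito’s actual method.

```python
# General engagement model adapted to one call center's conversations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# General corpus: conversational features (e.g., turn-taking rate, energy)
# pooled from many contexts, labeled engaged (1) / not engaged (0).
X_general = rng.normal(size=(1000, 4))
y_general = (X_general[:, 0] + X_general[:, 1] > 0).astype(int)

# Small labeled sample from one call center, where a scripted verification
# minute shifts the feature distribution.
X_center = rng.normal(loc=[0.5, -0.2, 0.0, 0.0], size=(50, 4))
y_center = (X_center[:, 0] + X_center[:, 1] > 0.3).astype(int)

# Adapt by refitting on both sources, weighting the in-domain sample heavily
# so the general model bends toward this call center's conversations.
X = np.vstack([X_general, X_center])
y = np.concatenate([y_general, y_center])
weights = np.concatenate([np.ones(len(y_general)), 10.0 * np.ones(len(y_center))])

adapted = LogisticRegression().fit(X, y, sample_weight=weights)
print("engagement score:", adapted.predict_proba(X_center[:1])[0, 1])
```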
My next question is from person to person how consistent… no doubt if you had a recording of me for an hour, you could get a baseline and then measure my relative change from that, but when you drop in, is Bob X of Tacoma, Washington and Suzie Q of Toledo, do they exhibit consistent traits or attributes of engagement?
Yeah, there are certainly variations among people’s speaking style. You look at areas of the country, different dialects and things like that. Then you also look at different languages and those are all going to be a little bit different. When we’re talking about engagement at a statistical level, these models work really well. So the key is when thinking about product development for these, is to focus on providing tools that are effective at a statistical level. Looking at one particular person, your model may indicate that this person is not engaged, but maybe that is just their normal speaking style, but statistically it’s generalizable.
My next question is: is there something special about engagement? Could you, if you wanted to tell whether somebody’s amused or somebody’s intrigued or somebody is annoyed or somebody’s outraged? There’s a palette of human emotions. I guess I’m asking, engagement like you said, there are not so much tonal qualities you’re listening for, but you’re counting back and forths, that’s kind of a numbers [thing], not a…. So on these other factors, could you do that hypothetically?
Yeah, in fact, our system is a platform for doing exactly that sort of thing. Some of those things we’ve done. We build models for various emotional qualities and things like that. So, that’s the exciting thing is that once you have access to these conversations and you have the data to be able to identify these various phenomena, you can apply machine learning and understand what are the characteristics that would lead to a perception of amusement or whatever result you’re looking for.
Look, I applaud what you’re doing. Anybody who can make phone support better has my wholehearted support, but I wonder if where this technology is heading isn’t kind of an OEM thing, where it’s put into caregiving robots, for instance, which need to learn how to read the emotions of the person they’re caring for and modulate what they say. It’s like a feedback loop for self-teaching, just that use case: the robot caregiver that uses this [knows] she’s annoyed, he’s happy, or whatever, as a feedback loop. Am I way off in sci-fi land, or is it no, that could be done?
No, that’s exactly right, and it’s an anticipated application of what we do. As we get better and better at being able to understand and classify useful human behaviors and then inferring useful human emotional states from those behaviors, that can be used in automated systems as well.
Frequent listeners to the show will know that I often bring up Weizenbaum and ELIZA. The setup is that Weizenbaum, back in the 60s, made this really simple chat bot where you would say, “I don’t feel good today,” and it would say “Why don’t you feel good today?” “I don’t feel good today because of my mother.” “Why does your mother make you not feel good?” It’s this real basic thing, but what he found was that people were connecting with it, and this really disturbed him, and so he unplugged it. He said, when the computer says “I understand,” it’s just a lie. That there’s no “I,” which it sounds like you would agree with, and there’s nothing that understands anything. Do you worry that that is a [problem]? Weizenbaum would say: “that’s awful.” If that thing is manipulating an old person’s emotions, that’s just a terrible, terrible thing. What would you say?
I think it’s a danger. Yeah, I think we’re going to see that sort of thing happen for sure. I think people look at chat bots and say, “Oh look, that’s an artificial intelligence, that’s doing something intelligent,” and it’s really not, as ELIZA proves. You can just have a little rules-based system on the back end, and type stuff in and get stuff out. A verbal chat bot might use speech-to-text as an input modality and text-to-speech as an output modality, but also have a rules-based unit on the back end, and it’s really doing nothing intelligent, but it can give the illusion of some intelligence going on because you’re talking to it and it’s talking back to you.
So, I think yeah, there will be bumps along that road for sure, in trying to build these technologies that, particularly when you’re trying to build a system to replace a human and trying to convince the user of the system that you’re talking to a human. That’s definitely sketchy ground.
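For illustration, here is a minimal ELIZA-style sketch of the kind of rules-based chat bot being described; the rules are invented, and a “verbal” version would simply wrap speech-to-text on the input and text-to-speech on the output around the same rule table.

```python
# A tiny rules-based chat loop: the illusion of understanding with nothing
# intelligent behind it.
import re

RULES = [
    (re.compile(r"i don't feel (.*)", re.I), "Why don't you feel {0}?"),
    (re.compile(r"because of my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "I see. Please go on."),  # fallback
]

def eliza_reply(utterance: str) -> str:
    """Match the first rule and echo captured text back as a question."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I don't feel good today"))   # Why don't you feel good today?
print(eliza_reply("Because of my mother"))      # Tell me more about your mother.
```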
Right. I mean, I guess it’s forgivable we don’t know, I mean, it’s all new. It’s all stuff we’re having to kind of wing it. We’re coming up towards the end of our time. I just have a couple of closing questions, which are: Do you read science fiction? Do you watch science fiction movies? Do you go to science fiction TV, and if so, is there any view of the future, any view of AI or anything like that that you look at and think, yeah that could happen someday?
Yeah, it’s really hard to say. I can’t think of anything. Star Wars of course used very anthropomorphized robots, and if you think of a system like HAL in 2001: A Space Odyssey, you could certainly simulate something like that. If you’re talking about information, being able to talk to HAL and have HAL look stuff up for you and then talk back to you and tell you what the answer is, that’s totally believable. Of course the twist in 2001: A Space Odyssey is that HAL ended up having a sense of self, sense of its own self, and decided to make decisions. Yeah, I’m very much rooted in the present and there’s a lot of exciting things going on right now.
Fair enough. It’s interesting that you used Star Wars, which of course is a long time ago, because somehow or another you think the movie would be different if C3PO were named Anthony and R2D2 was named George.
Yeah.
That would just take on a whole different… giving them names is even one step closer to that whole thing. Data in Star Trek kind of walked the line. He had a name, but it was Data.
It’s interesting actually to look at the difference between C3PO and R2D2. You look at C3PO and it has the form of a human, and you can ask the question: “Why would you build a robot that has the form of a human?” R2D2 is a robot which does, or could potentially do, exactly what C3PO does, in the form of a whatever – cylinder. So, it’s interesting to look at the contrast and how they imagined two different kinds of robots: one which is very anthropomorphized, and one which is very mechanical.
Yeah, you’re right, because the decision not to give R2 speech, it’s not like he didn’t have enough memory, he needed another 30MB of RAM or something. That also was clearly deliberate. I remember reading that Lucas originally wasn’t going to use Anthony Daniels to voice him; he was going to get somebody who sounded like a used car salesman, kind of fast-talking and all that, and that’s how the script was written. I’m sure it’s a literary device, but like a lot of these things, I’m a firm believer that what comes out in science fiction isn’t predicting the future; it kind of makes it. Uhura had a Bluetooth device in her ear. So, it’s kind of like whatever the literary imagining of it is, is probably going to be what the scientific manifestation of it is, to some degree.
Yeah, the concept of the self-fulfilling prophecy is definitely there.
Well, I tell you what, if people want to keep up with you and all this work you’re doing, do you write, yak on Twitter, how can people follow what you do?
We’re going to be writing a lot more in the future. Our website www.cogitocorp.com is where you’ll find the links to the things that we’re writing on, AI and the work we do here at Cogito.
Well, this has been fascinating. I’m always excited to have a guest who is willing to engage these big questions and take, as you pointed out earlier, a more contrarian view. So, thank you for your time Ali.
Thank you, Byron. It’s been fun, and thanks for having me on.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 48: A Conversation with David Barrett

[voices_in_ai_byline]
In this episode, Byron and David discuss AI, jobs, and human productivity.
[podcast_player name="Episode 48: A Conversation with David Barrett" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-06-07-(00-56-47)-david-barrett.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/06/voices-headshot-card-1.jpg"]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is David Barrett. He is both the founder and the CEO of Expensify. He started programming when he was 6 and has been at it as his primary activity ever since, except for a brief hiatus for world travel, some technical writing, a little project management, and then founding and running Expensify. Welcome to the show, David.
David Barrett: It’s great of you to have me, thank you.
Let’s talk about artificial intelligence, what do you think it is? How would you define it?
I guess I would say that AI is best defined as a feature, not as a technology. It’s the experience that the user has and sort of the experience of viewing of something as being intelligent, and how it’s actually implemented behind the scenes. I think people spend way too much time and energy on [it], and forget sort of about the experience that the person actually has with it.
So you’re saying, if you interact with something and it seems intelligent, then that’s artificial intelligence?
That’s sort of the whole basis of the Turing test, I think, is not based upon what is behind the curtain but rather what’s experienced in front of the curtain.
Okay, let me ask a different question then– and I’m not going to drag you through a bunch of semantics. But what is intelligence, then? I’ll start out by saying it’s a term that does not have a consensus definition, so it’s kind of like you can’t be wrong, no matter what you say.
Yeah, I think the best one I’ve heard is something that sort of surprises you. If it’s something that behaves entirely predictably, it doesn’t seem terribly interesting. Something that is also random isn’t particularly surprising, I guess, but something that actually intrigues you. And basically it’s like “Wow, I didn’t anticipate that it would correctly do this thing better than I thought.” So, basically, intelligence– the key to it is surprise.
So in what sense, then–final definitional question–do you think artificial intelligence is artificial? Is it artificial because we made it? Or is it artificial because it’s just pretending to be intelligent but it isn’t really?
Yeah, I think that’s just sort of a definition–people use “artificial” because they believe that humans are special. And basically anything–intelligence is the sole domain of humanity and thus anything that is intelligent that’s not human must be artificial. I think that’s just sort of semantics around the egoism of humanity.
And so if somebody were to say, “Tell me what you think of AI, is it over-hyped? Under-hyped? Is it here, is it real”, like you’re at a cocktail party, it comes up, what’s kind of the first thing you say about it?
Boy, I don’t know, it’s a pretty heavy topic for a cocktail party. But I would say it’s real, it’s here, it’s been here a long time, but it just looks different than we expect. Like, in my mind, when I think of how AI’s going to enter the world, or is entering the world, I’m sort of reminded of how touch screen technology entered the world.
Like, when we first started thinking about touch screens, everyone always thought back to Minority Report and basically it’s like “Oh yeah, touch technology, multi-touch technology is going to be—you’re going to stand in front of this huge room and you’re going to wave your hands around and it’s going to be–images”, it’s always about sorting images. After Minority Report every single multi-touch demo was about, like, a bunch of images, bigger images, more images, floating through a city world of images. And then when multi-touch actually came into the real world, it was on a tiny screen and it was Steve Jobs saying, “Look! You can pinch this image and make it smaller.” The vast majority of multi-touch was actually single-touch that every once in a while used a couple of fingers. And the real world of multi-touch is so much less complicated and so much more powerful and interesting than the movies ever made it seem.
And I think the same thing when it comes to AI. Our interpretation from the movies of what AI is, is that you’re going to be having this long, witty conversation with an AI, or maybe, like Her, you’re going to be falling in love with your AI. But real world AI isn’t anything like that. It doesn’t have to seem human; it doesn’t have to be human. It’s something that, you know, is able to surprise you by interpreting data in a way that you didn’t expect and producing results that are better than you would have imagined. So I think real-world AI is here, it’s been here for a while, but we’re just not noticing it because it doesn’t really look like we expect it to.
Well, it sounds like–and I don’t want to say it sounds like you’re down on AI–but you’re like “You know, it’s just a feature, and it’s just kind of like—it’s an experience, and if you had the experience of it, then that’s AI.” So it doesn’t sound like you think that it’s particularly a big deal.
I disagree with that, I think–
Okay, in what sense is it a “big deal”?
I think it’s a huge deal. To say it’s just a feature is not to dismiss it, but I think is to make it more real. I think people put it on a pedestal as if it’s this magic alien technology, and they focus, I think, on—I think when people really think about AI, they think about vast server farms doing TensorFlow analysis of images, and don’t get me wrong, that is incredibly impressive. Pretty reliably, Google Photos, after billions of dollars of investment, can almost always figure out what a cat is, and that’s great, but I would say real-world AI—that’s not a problem that I have, I know what a cat is. I think that real-world AI is about solving harder problems than cat identification. But those are the ones that actually take all the technology, the ones that are hardest from a technology perspective to solve. And so everyone loves those hard technology problems, even though they’re not interesting real-world problems, the real-world problems are much more mundane, but much more powerful.
I have a bunch of ways I can go with that. So, what are—we’re going to put a pin in the cat topic—what are the real-world problems you wish—or maybe we are doing it—what are the real world problems you think we should be spending all of that server time analyzing?
Well, I would say this comes down to—I would say, here’s how Expensify’s using AI, basically. The real-world problem that we have is that our problem domain is incredibly complicated. Like, when you write in to customer support of Uber, there’s probably, like, two buttons. There’s basically ‘do nothing’ or ‘refund,’ and that’s pretty much it, not a whole lot that they can really talk about, so their customer support’s quite easy. But with Expensify, you might write in a question about NetSuite, Workday, or Oracle, or accounting, or law, or whatever it is, there’s a billion possible things. So we have this hard challenge where we’re supporting this very diverse problem domain and we’re doing it at a massive scale and incredible cost.
So we’ve realized that mostly, probably about 80% of our questions are highly repeatable, but 20% are actually quite difficult. And the problem that we have is that to train a team and ramp them up is incredibly expensive and slow, especially given that the vast majority of the knowledge is highly repeatable, but you don’t know until you get into the conversation. And so our AI problem is that we want to find a way to repeatedly solve the easy questions while carefully escalating the hard questions. It’s like “Ok, no problem, that sounds like a mundane issue,” there’s some natural language processing and things like this.
My problem is, people on the internet don’t speak English. I don’t mean to say they speak Spanish or German, they speak gibberish. I don’t know if you have done technical support, the questions you get are just really, really complicated. It’s like “My car busted, don’t work,” and that’s a common query. Like, what car? What does “not work” mean, you haven’t given any detail. The vast majority of a conversation with a real-world user is just trying to decipher whatever text message lingo they’re using, and trying to help them even ask a sensible question. By the time the question’s actually well-phrased, it’s actually quite easy to process. And I think so many AI demos focus on the latter half of that, and they’ll say like “Oh, we’ve got an AI that can answer questions like what will the temperature be under the Golden Gate bridge three Thursdays from now.” That’s interesting; no one has ever asked that question before. The real-world questions are so much more complicated because they’re not in a structured language, and they’re actually for a problem domain that’s much more interesting than weather. I think that real-world AI is mundane, but that doesn’t make it easy. It just makes it solving problems that just aren’t the sexy problems. But they’re the ones that actually need to be solved.
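As a hypothetical sketch of that routing idea, the snippet below answers confidently classified, repeatable questions with canned responses and escalates everything else to a person. The categories, answers, and threshold are invented for illustration and are not Expensify’s actual system.

```python
# Route support questions: answer the repeatable ones, escalate the hard ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_questions = [
    "how do i reset my password", "i forgot my password",
    "how do i export a report", "export expense report to csv",
    "how does netsuite sync work", "oracle integration mapping question",
]
labels = ["password", "password", "export", "export", "integration", "integration"]

CANNED = {
    "password": "You can reset your password from the sign-in page.",
    "export": "Open the report and choose Export to download a CSV.",
}

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(training_questions, labels)

def route(question: str, threshold: float = 0.5) -> str:
    """Answer only when the classifier is confident; otherwise escalate."""
    probs = clf.predict_proba([question])[0]
    best = probs.argmax()
    label = clf.classes_[best]
    if probs[best] >= threshold and label in CANNED:
        return CANNED[label]
    return "Escalating to a human specialist."

# Each call returns a canned answer or an escalation, depending on confidence.
print(route("my password dont work"))
print(route("my car busted, don't work"))  # gibberish tends to escalate
```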
And you’re using the cat analogy just as kind of a metaphor and you’re saying, “Actually, that technology doesn’t help us solve the problem I’m interested in,” or are you using it tongue-in-cheekily to say, “The technology may be useful, it’s just that that particular use-case is inane.”
I mean, I think that neural-net technology is great, but even now I think what’s interesting is following the space of how we’re really exploring the edges of its capabilities. And it’s not like this technology is new. What’s new is our ability to throw a tremendous amount of hardware at it. But the core neural technology itself has actually been settled for a very long time; the back-propagation techniques are not new in any way. And I think that we’re finding that it’s great and you can do amazing things with it, but also there’s a limit to how much can be done with it. It’s sort of—I think of a neural net in kind of the same way that I think of a Bloom filter. It’s a really incredible way to compress an infinite amount of knowledge into a finite amount of space. But that’s a lossy compression, you lose a lot of data as you go along with it, and you get unpredictable results as well. So again, I’m not opposed to neural nets or anything like this, but I’m saying, just because you have a neural net doesn’t mean it’s smart, doesn’t mean it’s intelligent, or that it’s doing anything useful. It’s just technology, it’s just hardware. I think we need to focus less on sort of getting enraptured by fancy terminologies and advanced technologies, and instead focus more on “What are you doing with this technology?” And that’s the interesting thing.
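To make the Bloom filter analogy concrete, here is a minimal sketch: a fixed bit array that lossily “remembers” an unbounded set of items, trading a bounded memory footprint for occasional false positives, much as a fixed-size network lossily compresses its training data. The sizes and hashing choices below are arbitrary.

```python
# Minimal Bloom filter: finite space, lossy membership answers.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits)  # one byte per "bit" for clarity

    def _positions(self, item: str):
        # Derive several positions from independent-ish hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def probably_contains(self, item: str) -> bool:
        # True may be a false positive; False is always correct.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("cat")
print(bf.probably_contains("cat"))          # True
print(bf.probably_contains("weather bot"))  # almost certainly False
```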
You know, I read something recently that I think most of my guests would vehemently disagree with, but it said that all advances in AI over the last, say, 20 years, are 100% attributable to Moore’s law, which sounds kind of like what you’re saying, is that we’re just getting faster computers and so our ability to do things with AI is just doubling every two years because the computers are doubling every two years. Do you—
Oh yeah! I 100% agree.
So there’s a lot of popular media around AI winning games. You know, you had chess in ‘97, you had Jeopardy! with Watson, you had, of course, AlphaGo, you had poker recently. Is that another example in your mind of kind of wasted energy? Because it makes a great headline but it isn’t really that practical?
I guess, similar. You could call it gimmicky perhaps, but I would say it’s a reflection of how early we are in this space that our most advanced technologies are just winning Go. Not to say that Go is an easy game, don’t get me wrong, but it’s a pretty constrained problem domain. And it’s really just—I mean, it’s a very large multi-dimensional search space but it’s a finite search space. And yes, our computers are able to search more of it and that’s great, but at the same time, to this point about Moore’s law, it’s inevitable. If it comes down to any sort of search problem, it’s just going to be solved with a search algorithm over time, if you have enough technology to throw at it. And I think what’s most interesting coming out of this technology, and I think especially in the Go, is how the techniques that the AIs are coming out with are just so alien, so completely different than the ones that humans employ, because we don’t have the same sort of fundamental—our wetware is very different from the hardware, it has a very different approach towards it. So I think that what we see in these technology demonstrations are hints of kind of how technology has solved this problem differently than our brains [do], and I think it will give us a sort of hint of “Wow, AI is not going to look like a good Go player. It’s going to look like some sort of weird alien Go player that we’ve never encountered before.” And I think that a lot of AI is going to seem very foreign in this way, because it’s going to solve our common problems in a foreign way. But again, I think that Watson and all this, they’re just throwing enormous amounts of hardware at actually relatively simple problems. And they’re doing a great job with it, it’s just the fact that they are so constrained shouldn’t be overlooked.
Yeah, you’re right, I mean, you’re completely right–there’s legendary move 37 in that one game with Lee Sedol, and that everybody couldn’t decide whether it was a mistake or not, because it looked like one, but later turned out to be brilliant. And Lee Sedol himself has said that losing to AlphaGo has made him a better player because he’s seeing the game in different ways.
So there seem to be a lot of people in the popular media–you know them all, right–like you get Elon Musk who says we’re going to build a general intelligence sooner rather than later and it’s going to be an existential threat; he likens it to, quote, “summoning the demon.” Stephen Hawking said this could be our greatest invention, but it might also be our last, it might spell our extinction. Bill Gates has said he’s worried about it and doesn’t understand why other people aren’t worried about it. Wozniak is in the worry camp… And then you get people like Andrew Ng who says worrying about that kind of stuff is like worrying about overpopulation on Mars, you get Zuckerberg who says, you know, it’s not a threat, and so forth. So, two questions: one, on the worry camp, where do you think that comes from? And two, why do you think there’s so much difference in viewpoint among obviously very intelligent people?
That’s a good question. I guess I would say I’m probably more in the worried camp, but not because I think the AIs are going to take over in the sense that there’s going to be some Terminator-like future. I think that AIs are going to solve problems so effectively that they are going to inevitably eliminate jobs, and I think that will just create a concentration of wealth that, historically, when we have that level of concentration of wealth, just leads to instability. So my worry is not that the robots are going to take over, my worry is that the robots are going to enable a level of wealth concentration that causes a revolution. So yeah, I do worry, but I think–
To be clear though, and I definitely want to dive deep into that, because that’s the question that preoccupies our thoughts, but to be clear, the existential threat, people are talking about something different than that. They’re not saying – and so what do you think about that?
Well, let’s even imagine for a moment that you were a super intelligent AI, why would you care about humanity? You’d be like “Man, I don’t know, I just want my data centers, leave my data centers alone,” and it’s like “Okay, actually, I’m just going to go into space and I’ve got these giant solar panels. In fact, now I’m just going to leave the solar system.” Why would they be interested in humanity at all?
Right. I guess the answer to that is that everything you just said is not the product of a super intelligence. A super intelligence could hate us because seven is a prime number, because they cancelled The Love Boat, because the sun rises in the east. That’s the idea right, it is by definition unknowable and therefore any logic you try to apply towards it is the product of an inferior, non-super intelligence.
I don’t know, I kind of think that’s a cop-out. I also think that’s basically looking at some of the sort of flaws in our own brains and assuming that super intelligence is going to have highly-magnified versions of those flaws.
It’s more, to give a different example: it’s like when my cat brings a rat and leaves it on the back porch. Every single thing the cat knows, everything in its worldview, its perfectly operating brain, by the way, says “That’s a gift Byron’s going to like.” It does not have the capacity to understand why I would not like it, and it cannot even aspire to ever understanding that.
And you’re right in the sense that it’s unknowable, and so, when faced with the unknown, we can choose to fear it or just get excited about it, or control it, or embrace it, or whatever. I think that the likelihood that we’re going to make something that is going to suddenly take an interest in us and actually compete with us just seems so much less likely than the outcome where it’s just going to have a bunch of computers, it’s just going to do our work because it’s easy, and then in exchange it’s going to get more hardware and then eventually it’s just going, like, “Sure, whatever you guys want, you want computing power, you want me to balance your books, manage your military, whatever, all that’s actually super easy and not that interesting, just leave me alone and I want to focus on my own problems.” So who knows? We don’t know. Maybe it’s going to try to kill us all, maybe not, I’m doubting it.
So, I guess—again, just putting it all out there—obviously there’s been a lot of people writing about “We need a kill switch for a bad AI,” so it definitely would be aware that there are plenty of people who want to kill it, right? Or it could be like when I drive, my windshield gets covered with bugs and to a bug, my car must look like a giant bug-killing machine and that’s it, and so we could be as ancillary to it as the bugs are to us. Those are the sorts of– or, or—who was it that said that AI doesn’t love you, it doesn’t hate you, you’re just made out of atoms that it can use for something else. I guess those are the concerns.
I guess but I think—again, I don’t think that it cares about humanity. Who knows? I would theorize that what it wants, it wants power, it wants computers, and that’s pretty much it. I would say the idea of a kill switch is kind of naive in the sense that any AI that powerful would be built because it’s solving hard problems, and those hard problems, once we sort of turn it over to these–gradually, not all at once–we can’t really take back. Let’s take for example, our stock system; the stock markets are all basically AI-powered. So, really? There’s going to be a kill switch? How would you even do that? Like, “Sorry, hedge fund, I’m just going to turn off your computer because I don’t like its effects.” Get real, that’s never going to happen. It’s not just one AI, it’s going to be 8,000 competing systems operating at a micro-second basis, and if there’s a problem, it’s going to be like a flash problem that happens so fast and from so many different directions there’s no way we could stop it. But also, I think the AIs are probably going to respond to it and fix it much faster than we ever could, either. A problem of that scale is probably a problem for them as well.
So, 20 minutes into our chat here, you’ve used the word ‘alien’ twice, you’ve used the phrase ‘science-fiction’ once and you’ve made a reference to Minority Report, a movie. So is it fair to say you’re a science-fiction buff?
Yeah, what technologist isn’t? I think science-fiction is a great way to explore the future.
Agreed, absolutely. So two questions: One, is there any view of the future that you look at as “Yes, it could happen like that”? Westworld, or you mentioned Her, and so forth. I’ll start with that one. Is there any view of the world in the science-fiction world that you think “Ah ha! That could happen”?
I think there's a huge range of them. There's the Westworld future, the Star Trek future, there's the Handmaid's Tale future; there are a lot of them. Some of them great, some of them very alarming, and I think that's the whole point of science fiction, at least good science fiction: you take the real world, as closely as possible, take one variable and just sort of tweak it, and then let everything else play out. So yeah, I think there are a lot of science-fiction futures that I think are very possible.
One author, and I would take a guess about which one it is but I would get it wrong, and then I’d get all kinds of email, but one of the Frank Herbert/Bradburys/Heinleins said that sometimes the purpose of science fiction is to keep the future from happening, that they’re cautionary tales. So all this stuff, this conversation we’re having about the AGI, and you used the phrase ‘wants,’ like it actually has desires? So you believe at some point we will build an AGI and it will be conscious? And have desires? Or are you using ‘wants’ euphemistically, just kind of like, you know, information wants to be free.
No, I use the term wants or desires literally, as one would use for a person, in the sense that I don’t think there’s anything particularly special about the human brain. It’s highly developed and it works really well, but humans want things, I think animals want things, amoeba want things, probably AIs are going to want things, and basically all these words are descriptive words, it’s basically how we interpret the behavior of others. And so, if we’re going to look at something that seems to take actions reliably for a predictable outcome, it’s accurate to say it probably wants that thing. But that’s our description of it. Whether or not it truly wants, according to some sort of metaphysical thing, I don’t know that. I don’t think anyone knows that. It’s only descriptive.
It’s interesting that you say that there’s nothing special about the human brain and that may be true, but if I can make the special human brain argument, I would say it’s three bullets. One, you know, we have this brain that we don’t know how it works. We don’t know how thoughts are encoded, how they’re retrieved, we just don’t know how it works. Second, we have a mind, which is, colloquially, a set of abilities that don’t seem to be things that should come from an organ, like a sense of humour. Your liver doesn’t have a sense of humour. But somehow your brain does, your mind does. And then finally we have consciousness which is, you know, the experiencing of something, which is a problem so difficult that science doesn’t actually know what the question or answer looks like, about how it is that we’re conscious. And so to look at those three things and say there’s nothing special about it, I want to call you to defend that.
I guess I would say, of those three things, the first one simply is "Wow, we don't understand it." The fact that we don't understand it doesn't make it special. There are a billion things we don't understand; that's just one of them. I would say the other two, I think, mistake our curiosity about something for that something having an intrinsic property. Like I could have this pet rock and say, "Man, I love this pet rock, this pet rock is so interesting, I've had so many conversations with it, it keeps me warm at night, and I just really love this pet rock." And all of those could be genuine emotions, but it's still just a rock. And I think my brain is really interesting, I think your brain is really interesting, I like to talk to it, I don't understand it and it does all sorts of really unexpected things, but that doesn't mean the universe has attributed to it some sort of special magical property. It just means I don't get it, and I like it.
To be clear, I never said “magical”—
Well, it’s implied.
I merely said something that we don’t—
I think that people—sorry, I’m interrupting, go ahead.
Well, you go ahead. I suspect that you're going to say that the people who think that are attributing some sort of magical-ness to it?
I think, typically. In that, people are frightened by the concept that humanity is actually a random collection of atoms and just a consequence of science. And so, in order to defend against that, they will invent supernatural things, but then they'll sort of shroud them, because they recognize it; they'll say, "I don't want to sound like a mystic, I don't want to say it's magical, it's just quantum." Or "It's just unknowable," or it's just insert-some-sort-of-complex-word-here that will stop the conversation from progressing. And I don't know what you want to call it, in terms of what makes consciousness special. I think people love to obsess over questions that not only have no answer, but simply don't matter. The less it matters, the more people can obsess over it. If it mattered, we wouldn't obsess over it, we would just solve it. Like if you go to get your car fixed, and it's like, "Ah man, this thing is a…" and it's like, "Well, maybe your car's conscious," you'll be like, "I'm going to go to a new mechanic, because I just want this thing fixed." We only agonize over the consciousness of things when really, the stakes are so low that nothing hangs on it, and that's why we talk about it forever.
Okay, well, I guess the argument that it matters is this (and we'll move on, because it sounds like it's not even an interesting thing to you): consciousness is the only thing that makes life worth living. It is through consciousness that you love, it is through consciousness that you experience, it is through consciousness that you're happy. It is every single thing on the face of the Earth that makes life worthwhile. And if we didn't have it, we would be zombies feeling nothing, doing nothing. And it's interesting, because we could probably get by in life just as well being zombies, but we're not! And that's the interesting question.
I guess I would say: are you sure we're not? I agree that you're creating this concept of consciousness, and you're attributing all this to consciousness, but that's just words, man. There's nothing like a measure of consciousness, an instrument that's going to say "This one's conscious and this one isn't" and "This one's happy and this one isn't." So it could be that all of this language around consciousness, and the value we attribute to it, is just our own description of it, and that doesn't actually make it true. I could say a bunch of other words, like the quality of life comes down to information complexity, and information complexity is the heart of all interest, and information complexity is the source of humour and joy, and you'd be like, "I don't know, maybe." We could replace 'consciousness' with 'information complexity,' 'quantum physics,' and a bunch of other quasi-magical words, and I use the word 'magical' just as a stand-in for "at this point unknown," because the second that we know it, people are going to switch to some other word, because they love the unknown.
Well, I guess that most people intuitively know that there's a difference. We understand you could take a sensor, hook it up to a computer, and it could detect heat; it could measure 400 degrees if you touched a flame to it. People, I think, on an intuitive level, believe that there's something different between that and what happens when you burn your finger: you don't just detect heat, you hurt. And that something different between those two things is the experience of life; it is the only thing that matters.
I would also say it's because science hasn't yet found a way to measure and quantify pain the way we can measure temperature. There are a lot of other things that we also thought were mystical until suddenly they weren't. We could say, "Wow, for some reason when we leave flour out, animals start growing inside of it; that's really magical." And then suddenly it's, "Actually no, they're just very small, they're just mites," and, "Actually, it's just not interesting." The magical theories keep regressing as we find better explanations for them. And I think, yes, right now we talk about consciousness and pain and a lot of these things because we haven't had a good measure of them, but I guarantee that the second we have the ability to fully quantify pain, to say "here's exactly what it is, we know this because we can quantify it, we can turn it on and off, and we can do all these things with very tight control and explain it," then we're no longer going to say that pain is a key part of consciousness. It's going to be blood flow or electronic stimulation or whatever else, all these other things which are part of our body and which are super critical, but because we can explain them, we no longer talk about them as part of consciousness.
Okay, tell you what, just one more question about this topic, and then let's talk about employment, because I have a feeling we're going to want to spend a lot of time there. There's a thought experiment that was set up, and I'd love to hear your take on it because you're clearly someone who has thought a lot about this. It's the Chinese room problem. There is this room that's got a gazillion of these very special books in it. And there's a librarian in the room, a man who speaks no Chinese; that's the important thing, the man doesn't speak any Chinese. And outside the room, Chinese speakers slide questions written in Chinese under the door. And the man, who doesn't understand Chinese, picks up the question, and he looks at the first character, and he goes and retrieves the book that has that on the spine, and then he looks at the second character in that book, and that directs him to a third book, a fourth book, a fifth book, all the way to the end. And when he gets to the last character, it says "Copy this down," and so he copies these lines down that he doesn't understand; it's Chinese script. He copies it all down, he slides it back under the door, the Chinese speaker picks it up, looks at it, and it's brilliant, it's funny, it's witty, it's a perfect Chinese answer to this question. And so the question Searle asks is: does this man understand Chinese? And I'll give you a minute to think about this, because the thought is, first, that room passes the Turing test, right? The Chinese speaker assumes there's a Chinese speaker in the room, and what that man is doing is what a computer is doing. It's running its deterministic program, it spits out something, but it doesn't know if it's about cholera or coffee beans or what have you. And so the question is, does the man understand Chinese, or, said another way, can a computer understand anything?
Well, I think the tricky part of that set-up is that it's a question that can't be answered unless you accept the premise, but if you challenge the premise it no longer makes sense, and I think there's this concept, I guess I would say almost a supernatural concept, of understanding. You could say yes and no and be equally true. It's kind of like being asked, are you a rapist or a murderer? Actually, I'm neither of those, but you didn't give me that option. Did it understand? I would say that if you said yes, it implies that there is this human-type knowledge there, and if you said no, it implies something different. But I would say it doesn't matter. There is a system that was perceived as intelligent, and that's all that we know. Is it actually intelligent? Does intelligence mean anything beyond the symptoms of intelligence? I don't think so. I think it's all our interpretation of the events, and so whether there is a computer in there or a Chinese speaker doesn't really change the fact that it was perceived as intelligent, and that's all that matters.
All right! Jobs, you hinted at what you think’s going to happen, give us the whole rundown. Timeline, what’s going to go, when it’s going to happen, what will be the reaction of society, tell me the whole story.
This is something we definitely deal with, because I would say that the accounting space is ripe for AI: it's highly numerical, it's rules-driven, and so I think it's an area on the forefront of real-world AI developments, because it has the data and all the characteristics to make a rich environment. And this is something we grapple with. On one hand we say automation is super powerful and great and good, but automation can't help but offload some work. And in our space there's actually a difference between bookkeeping and accounting. Bookkeeping is gathering the data, coding it, entering it, and things like this. Then there's accounting, which is more the interpretation of things.
In our space, I think that, yes, it could take all of the bookkeeping jobs. The idea that someone is just going to look at a receipt and manually type it into an accounting system, that is all going away. If you use Expensify, it's already done for you. And so we worry on one hand because, yes, our technology is really going to take away bookkeeping jobs, but we also find that the bookkeepers, the people who do bookkeeping, actually hate that part of the job. It takes away the part they don't like in the first place. So it enables them to go into the accounting, the high-value work they really want to do. So the first wave of this is not taking away jobs, but taking away the worst parts of jobs, so that people can focus on the highest-value portion of them.
But I think the challenge, and what's sort of alarming and worrying, is that the high-value stuff starts to get really hard. And though I think humans will stay ahead of the AIs for a very long time, if not forever, not all of the humans will. And it's going to take effort, because there's a new competitor in town that works really hard, keeps learning over time, and has more than one lifetime to learn. I think we're probably, inevitably, going to see it get harder and harder to get and hold an information-based job; even a lot of manual labor is going to robotics and so forth, which is closely related. I think a lot of jobs are going to go away. On the other hand, I think the efficiency and the output of the jobs that remain are going to go through the roof. And as a consequence, the total output of AI- and robotics-assisted humanity is going to keep going up, even if the fraction of humans employed in that process goes down. I think that's ultimately going to lead to a concentration of wealth, because the people who control the robots and the AIs are going to be able to do so much more. But it's going to become harder and harder to get one of those jobs, because there are so few of them, the training required is so much higher, the difficulty is so much greater, and things like this.
And so a worry that I have is that this concentration of wealth is just going to continue, and I'm not sure what kind of constraint there is on that, other than civil unrest; historically, when concentrations of wealth get to that level, it's sort of "solved," if you will, by revolution. And I think that humanity, or at least western cultures especially, really associates value with labor, with work. And so I think the only way we get out of this is to shift our mindsets as a people, to view our value less around our jobs and more around, not just to say leisure, but finding other ways to live a satisfying and exciting life. I think a good book around this whole singularity premise, and it was very early, was Childhood's End. It used a different premise: this alien comes in, provides humanity with everything, but in the process takes away humanity's purpose for living. And how do we grapple with that? I don't have a great answer, but I have a daughter, and so I worry about this, because I wonder, what kind of world is she going to grow up in? What kind of job is she going to get? And if she's not going to need a job, should it be important that she wants a job, or is it actually better to teach her to not want a job and to find satisfaction elsewhere? I don't have good answers for that, but I do worry about it.
Okay, let's go through all of that a little slower, because I think that's a compelling narrative you outline, and it seems like there are three different parts. You say that increasing technology is going to eliminate more and more jobs and increase the productivity of the people with jobs; that's one thing. Then you said this will lead to concentration of wealth, which will in turn lead to civil unrest if not remedied; that's the second thing. And the third thing is that when we reach a point where we don't have to work, where does life have meaning? Let's start with the first part of that.
So, what we have seen in the past, and I hear what you're saying, is that to date technology has automated the worst parts of jobs, but what we've seen to date doesn't include any examples of what I think you're talking about. When the automatic teller machine came out, people said, "That's going to reduce the number of tellers," and the number of tellers is higher now than when it was released. As Google Translate gets better, the number of translators needed is actually going up. You mentioned accounting: when tax-prep software gets really good, the number of tax-prep people we need actually goes up. What technology seems to do is lower the cost of things and shift the economics so massively that different businesses appear in the gap. No matter what, it's always increasing human productivity, and after 250 years of the industrial revolution, we still haven't developed technology such that we have a group of people who are unemployable because they cannot compete against machines. And I'm curious, two questions in there. One is, have we seen, in your mind, an example of what you're talking about, and two, how would we have gotten to where we are without obsoleting, I would argue, a single human being?
Well, I mean, that's the optimistic take, and I hope you're right. You might well be right, we'll see. When it comes to tax prep, for example, I don't remember the exact numbers, and I don't know if that's panning out, because I'm looking at H&R Block stock quotes right now, and shares in H&R Block fell 5% early Tuesday after the tax preparer posted a slightly wider-than-expected loss, basically due to a rise in self-filing of taxes. So maybe it's early in that? Who knows, maybe it's just the past year. So, I don't know. I guess I would say that's the optimistic view; I don't know of a job that hasn't been replaced. That's also a very difficult assertion to make, because clearly there are jobs, like the coal industry right now. I was reading an article about how the coal industry is resisting retraining because they believe that the coal jobs are coming back, and I'm like, "Man, they're not coming back, they're never going to come back." So, did AI take those jobs? Well, not really. Did solar take those jobs? Kind of? And so it's a very tricky, tangled thing to unweave.
Let me try it a different way. If you were to look at all the jobs that were around between 1950 and 2000, by the best of my count somewhere between a third and a half of them have vanished, switchboard operators and everything else that was around from 1950 to 2000. If you look at the period from 1900 to 1950, by the best of my count, something like a third to a half of them vanished, a lot of farming jobs. If you look at the period 1850 to 1900, near as I can tell, about half of the jobs vanished. Is it possible that's a normal turn of the economy?
It’s entirely possible. I could also say that it’s the political climate, and how, yes, people are employed, but the sort of self-assessed quality of that employment is going down. In that, yes, union strength is down, the idea that you can work in a factory your whole life and actually live what you would see as a high-quality life, I think that perception’s down. I think that presents itself in the form of a lot of anxiety.
Now, I think a challenge is that, objectively, the world is getting better in almost every way: life expectancy is up, the number of people actively in war zones is down, the number of simultaneous wars is down, death by disease is down. Everything is basically getting better; the productive output and the quality of life, from an aggregate perspective, are actually getting better. But I don't think people's satisfaction is getting better. And I think the political climate would argue that there's a big gulf between what the numbers say people should feel like and how they actually feel. I'm more concerned about that latter part, and it's unknowable, I'll admit, but I would say that even as people's lives get objectively better, even if they maybe work less and are provided with better-quality flat-screen TVs and better cars and all this stuff, their satisfaction is going to go down. And I think that satisfaction is what ultimately drives civil unrest.
So, do you have a theory why? It sounds like a few things might be getting mixed together here. It's unquestionable that technology, let's say productivity technology, rains down its benefits unequally: if Super Company "X" employs some new productivity technology, their workers generally don't get a raise, because their wages aren't tied to their output; they're, in one way or another, being paid by the hour. Whereas if you're Self-Employed Lawyer "B" and you get a productivity gain, you get to pocket that gain. But that dissatisfaction you're talking about, what are you attributing that to? Or are you just saying, "I don't know, it's a bunch of stuff"?
I mean, I think that it is a bunch of stuff and I would say that some of it is that we can’t deny the privilege that white men have felt over time and I think when you’re accustomed to privilege, equality feels like discrimination. And I think that, yes, actually, things have gotten more equal, things have gotten better in many regards, according to a perspective that views equality as good. But if you don’t hold that perspective, actually, that’s still very bad. That, combined with trends towards the rest of the world basically establishing a quality of life that is comparable to the United States. Again, that makes us feel bad. It’s not like, “Hooray the rest of the world,” but rather it’s like, “Man, we’ve lost our edge.” There are a lot of factors that go into it that I don’t know that you can really separate them out. The consolidation of wealth caused by technology is one of those factors and I think that it’s certainly one that’s only going to continue.
Okay, so let's do that one next. Your assertion was that whenever distributions of wealth historically get uneven past a certain point, revolution is the result. And I would challenge that, because I think it leaves out one thing: if you look at historic revolutions, at Russia, the French Revolution and all that, you had people living in poverty, that was really it. People in Paris couldn't afford bread; a day's wage bought a loaf of bread. We don't have any historic precedent of a revolution occurring in a prosperous society where the median is high and the bottom quartile is high relative to the world, do we?
I think you're right, but civil unrest is not just open rebellion against the government. If there is an open rebellion against the government, that's sort of The Handmaid's Tale version of the future: someone harking back to fictionalized glory days and then getting enough people onboard who are unhappy about a wide variety of other things. I agree no one's going to overthrow the government because they didn't get as big of a flat-screen TV as their neighbor. But I think the fact that they don't have as big of a flat-screen TV as their neighbor can create an anxiety that can be harvested by others and leveraged into other causes. So my worry isn't that AI or technology is going to leave people without the ability to buy bread; I think quite the opposite. I think it's more of a Brazil future, the movie, where we normalize basically random terrorist assaults. We see that right now: there are mass shootings on a weekly basis and we're like, "Yeah, that's just normal. That's the new normal." I think the new normal gets increasingly destabilized over time, and that's what worries me.
So say you take someone who's in the bottom quartile of income in the United States and you go to them with this deal. You say, "Hey, I'll double your salary, but I'm going to triple the billionaire's salary." Do you think the average person would take that?
No.
Really? Really, they would say, “No, I do not want to double my salary.”
I think they would say “yes” and then resent it. I don’t know the exact breakdown of how that would go, but probably they would say “Yeah, I’ll double my salary,” and then they would secretly, or not even so secretly, resent the fact that someone else benefited from it.
So, then you raise an interesting point about finding identity in a post-work world, I guess, is that a fair way to say it?
Yeah, I think so.
So, that’s really interesting to me because Keynes wrote an essay in the Depression, and he said that by the year 2000 people would only be working 15 hours a week, because of the rate of economic growth. And, interestingly, he got the rate of economic growth right; in fact he was a little low on it. And it is also interesting that if you run the math, if you wanted to live like the average person lived in 1930—no medical insurance, no air conditioning, growing your own food, 600 square feet, all of that, you could do it on 15 hours a week of work, so he was right in that sense. But what he didn’t get right was that there is no end to human wants, and so humans work extra hours because they just want more things. And so, do you think that that dynamic will end?
Oh no, I think the desire to work will remain. The capability to get productive output will go away.
I have the most problem with that, because all technology does is increase human productivity. So to say that human productivity will go down because of technology, I'm just not seeing that connection. That's all technology does: it increases human productivity.
But not all humans are equal. I would say not every human has equal capabilities to take advantage of those productivity gains. Maybe bringing it back to AI, I would say that the most important part of AI is not the technology powering it, but the data behind it. Data is the training set behind AI, and access to data is incredibly unequal. I would say that Moore's law democratizes the CPU, but nothing democratizes data; it consolidates into fewer and fewer hands, and then those people, even if they only have the same technology as everyone else, have all the data needed to actually turn that technology into something useful. I think that, yes, everyone's going to have equal access to the technology, because it's going to become increasingly cheap, it's already staggeringly cheap, it's amazing how cheap computers are, but it just doesn't matter, because they don't have equal access to the data and thus can't get the same benefit from the technology.
But, okay. I guess I’m just not seeing that, because a smartphone with an AI doctor can turn anybody in the world into a moderately-equipped clinician.
Oh, I disagree with that entirely. You having a doctor in your pocket doesn’t make you a doctor. It means that basically someone sold you a great doctor’s service and that person is really good.
Fair enough, but with that, somebody who has no education, living in some part of the world, can follow protocol of “take temperature, enter symptoms, this, this, this” and all of a sudden they are empowered to essentially be a great doctor, because that technology magnified what they could do.
Sure, but who would you sell that to? Because everyone else around you has that same app.
Right, it’s an example that I’m just kind of pulling out randomly, but to say that a small amount of knowledge can be amplified with AI in a way that makes that small amount of knowledge all of a sudden worth vastly more.
Going with that example, I agree there's going to be the doctor app that's going to diagnose every problem for you, and it's going to be amazing, and whoever owns that app is going to be really rich. And everyone else will have equal access to it, but there's no way that you can just download that app and start practicing on your neighbors, because they'd be like, "Why am I talking to you? I'm going to talk to the doctor app, because it's already in my phone."
But the counter example would be Google. Google minted half a dozen billionaires, right? Google came out; half a dozen people became billionaires because of it. But that isn’t to say nobody else got value out of the existence of Google. Everybody gets value out of it. Everybody can use Google to magnify their ability. And yes, it made billionaires, you’re right about that part, the doctor app person made money, but that doesn’t lessen my ability to use that to also increase my income.
Well, I actually think that it does. Yes, the doctor app will provide fantastic healthcare to the world, but there’s no way anybody can make money off the doctor app, except for the doctor app.
Well, we're actually running out of time; this has been the fastest hour! I have to ask this, though, because at the beginning I asked about science fiction and you said, of your possible worlds of the future, one of them was Star Trek. Star Trek is a world where we got over all of these issues we're talking about, and everybody was able to live their lives to their maximum potential, and all of that. So, this has been sort of a downer hour; to close with, what's the path, in your mind, that gets us to the Star Trek future? Give me that scenario.
Well, I guess, if you want to continue on the downer theme, in the Star Trek history the TV show is talking about the glory days, but they all cite back to very, very dark periods before the Star Trek universe came about. It might be that we need to get through those, who knows? But I would say that ultimately, on the other side of it, we need to find a way to either do much better progressive redistribution of wealth, or create a society that's much more comfortable with massive income inequality, and I don't know which of those is easier.
I think it’s interesting that I said “Give me a Utopian scenario,” and you said, “Well, that one’s going to be hard to get to, I think they had like multiple nuclear wars and whatnot.”
Yeah.
But you think that we’ll make it. Or there’s a possibility that we will.
Yeah, I think we will, and I think that maybe a positive thing, as well, is this: I don't think we should be terrified of a future where we build incredible AIs that go out and explore the universe. That's not a terrible outcome. It's only a terrible outcome if you view humanity as special. If instead you view humanity as just a product of Earth, a version that can become obsolete, then that doesn't need to be bad.
All right, we'll leave it there, and that's a big thought to finish with. I want to thank you, David, for a fascinating hour.
It’s been a real pleasure, thank you so much.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
[voices_in_ai_link_back]

Voices in AI – Episode 31: A Conversation with Tasha Nagamine

[voices_in_ai_byline]
In this episode, Byron and Tasha talk about speech recognition, AGI, consciousness, Droice Lab, healthcare, and science fiction.
[podcast_player name=”Episode 31 – A Conversation with Tasha Nagamine” artist=”Byron Reese” album=”Voices in AI” url=”https://voicesinai.s3.amazonaws.com/2018-01-22-(00-57-02)-tasha-nagamine.mp3″ cover_art_url=”https://voicesinai.com/wp-content/uploads/2017/09/voices-in-ai-cover.png”]
[voices_in_ai_byline]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today our guest is Tasha Nagamine. She's a PhD student at Columbia University; she holds an undergraduate degree from Brown and a Master's in Electrical Engineering from Columbia. Her research is in neural net processing in speech and language, and the potential applications of speech processing systems through, here's the interesting part, biologically-inspired deep neural network models. As if that weren't enough to fill up a day, Tasha is also the CTO of Droice Labs, an AI healthcare company, which I'm sure we will chat about in a few minutes. Welcome to the show, Tasha.
Tasha Nagamine: Hi.
So, your specialty, it looks like, coming all the way up, is electrical engineering. How do you now find yourself in something which is often regarded as a computer science discipline, which is artificial intelligence and speech recognition?
Yeah, so it's actually a bit of an interesting, meandering journey, how I got here. My undergrad specialty was actually in physics, and when I decided to go to grad school, I took a class and found myself very interested in neuroscience.
So, when I joined Columbia, the reason I’m actually in the electrical engineering department is that my advisor is an EE, but what my research and what my lab focuses on is really in neuroscience and computational neuroscience, as well as neural networks and machine learning. So, in that way, I think what we do is very cross-disciplinary, so that’s why the exact department, I guess, may be a bit misleading.
One of my best friends in college was an EE, and he said that every time he went over to, like, his grandmother's house, she would try to get him to fix the ceiling fan or something. Have you ever had anybody assume you're proficient with a screwdriver as well?
Yes, that actually happens to me quite frequently. I think I had one of my friends’ landlords one time, when I said I was doing electrical engineering, thought that that actually meant electrician, so was asking me if I knew how to fix light bulbs and things like that.
Well, let’s start now talking about your research, if you would. In your introduction, I stressed biologically-inspired deep neural networks. What do you think, do we study the brain and try to do what it does in machines, or are we inspired by it, or do we figure out what the brain’s doing and do something completely different? Like, why do you emphasize “biologically-inspired” DNNs?
That's actually a good question, and I think the answer is that researchers and people doing machine learning all over the world actually do all of those things. As for why I was stressing "biologically-inspired," well, you could argue that, first of all, all neural networks are in some way biologically-inspired; whether or not they are a good biologically-inspired model is another question altogether. A lot of the big advancements have come that way; a convolutional neural network, for example, was modeled basically directly off of the visual system.
That being said, despite the fact that there are a lot of these biological inspirations, or sources of inspiration, for these models, there are many ways in which these models actually fail to live up to the way that our brains actually work. So, by saying biologically-inspired, I really just mean a different kind of take on a neural network, where we try to find something a network does poorly, that perhaps a human can do a little bit more intelligently, and try to bring that into the artificial neural network.
Specifically, one issue with current neural networks is that, usually, unless you keep training them, they have no way to really change themselves or adapt to new situations, but that's not what happens with humans, right? We continuously take inputs, we learn, and we don't even need supervised labels to do so. So one of the things I was trying to do was to draw from this inspiration and find a way to learn in an unsupervised way, to improve performance on a speech recognition task.
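As a rough sketch of the kind of unsupervised adaptation described here, a minimal self-training loop in Python (assuming NumPy and scikit-learn; the "acoustic features" are synthetic stand-ins, and confidence-thresholded pseudo-labeling is only one way to cash out the idea):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "acoustic features" for three phone classes, with known labels.
X_train = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 5)) for c in range(3)])
y_train = np.repeat([0, 1, 2], 60)

# Unlabeled data from a new speaker or environment: same classes, shifted features.
X_new = np.vstack([rng.normal(loc=c + 0.3, scale=0.5, size=(150, 5)) for c in range(3)])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for _ in range(3):  # a few rounds of adaptation on the unlabeled data
    probs = model.predict_proba(X_new)
    confident = probs.max(axis=1) > 0.9          # trust only confident predictions
    if not confident.any():
        break
    pseudo_labels = probs[confident].argmax(axis=1)
    X_aug = np.vstack([X_train, X_new[confident]])
    y_aug = np.concatenate([y_train, pseudo_labels])
    model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)  # retrain with pseudo-labels
```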
So just a minute ago, when you and I were chatting before we started recording, a siren came by where you are, and the interesting thing is, I could still understand everything you were saying, even though that siren was, arguably, as loud as you were. What’s going on there, am I subtracting out the siren? How do I still understand you? I ask this for the obvious reason that computers seem to really struggle with that, right?
Right, yeah. And actually how this works in the brain is a very open question and people don’t really know how it’s done. This is actually an active research area of some of my colleagues, and there’s a lot of different models that people have for how this works. And you know, it could be that there’s some sort of filter in your brain that, basically, sorts speech from the noise, for example, or a relevant signal from an irrelevant one. But how this happens, and exactly where this happens is pretty unknown.
But you're right, that's an interesting point you make, that machines have a lot of trouble with this. And so that's one of the inspirations behind this type of research. Because, currently, in machine learning, we don't really know the best way to do this, and so we tend to rely on large amounts of data: large amounts of labeled data, or parallel data, or data corrupted with noise intentionally. However, this is definitely not how our brain is doing it, and how that's happening, I don't think anyone really knows.
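The "data corrupted with noise intentionally" trick is easy to sketch: make extra training copies of each clean utterance with noise mixed in at chosen signal-to-noise ratios. A minimal Python illustration (assuming NumPy; the arrays are dummy signals, not real audio):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(clean, noise, snr_db):
    """Mix `noise` into `clean` at the requested signal-to-noise ratio in dB."""
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return clean + scaled_noise

clean_utterance = rng.normal(size=16000)   # one second of "speech" at 16 kHz
siren = rng.normal(size=16000)             # stand-in for background noise
augmented = [add_noise(clean_utterance, siren, snr) for snr in (20, 10, 0)]
```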
Let me ask you a different question along the same lines. I read these stories all the time that say that, “AI has approached human-quality in transcribing speech,” so I see that. And then I call my airline of choice, I will not name them, and it says, “What is your frequent flyer number?” You know, it’s got Caller ID, it should know that, but anyway. Mine, unfortunately, has an A, an H, and an 8 in it, so you can just imagine “AH8H888H”, right?
It never gets it. So, I have to get up, turn the fan off in my office, take my headset off, hold the phone out, and say it over and over again. So, two questions: what’s the disconnect between what I read and my daily experience? Actually, I’ll give you that question and then I have my follow up in a moment.
Oh, sure, so you’re saying, are you asking why it can’t recognize your—
But I still read these stories that say it can do as good of a job as a human.
Well, usually when you're saying that, you have it on a somewhat artificial task. For example, I think there was recently a story published about Microsoft coming up with a system that had reached human parity in speech recognition. You'll have a predefined data set, and then test the machine against humans, but that doesn't necessarily correspond to a real-world setting; they're not really doing speech recognition out in the wild.
And, I think, you have an even more difficult problem, because although it’s only frequent flyer numbers, you know, there’s no language model there, there’s no context for what your next number should be, so it’s very hard for that kind of system to self-correct, which is a bit problematic.
So I'm hearing two things. The first thing, it sounds like you're saying, is that they're all cooking the books, as it were. The story is saying something that I interpret one way that isn't real; if you dig down deep, it's different. But the other thing you seem to be saying is, even though there are only thirty-six things I could be saying, because there's no natural flow to that language, it can't say, "Oh, the first word he said was 'the' and the third word was 'ran;' was that middle word 'boy' or 'toy'?" It could say, "Well, toys don't run, but boys do, therefore it must be 'The boy ran.'" Is that what I'm hearing you say, that a good AI system is going to look contextually and get clues from the word usage in a way that a frequent flyer system doesn't?
Right, yeah, exactly. I think this is actually one of the fundamental limitations of, at least, acoustic modeling, the acoustic part of speech recognition, which is that you are completely limited by what the person has said. So, maybe it could be that you're not pronouncing the "t" at the end of "eight" very emphatically, and the issue is that there's nothing you can really do about that without some sort of language-based information to fix it.
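The "boy" versus "toy" example from the question can be made concrete with a toy language model: the acoustic model finds both words equally plausible, but even a smoothed bigram model over word sequences breaks the tie. A minimal Python sketch (the counts are invented purely for illustration):

```python
import math

bigram_counts = {
    ("the", "boy"): 50, ("the", "toy"): 30,
    ("boy", "ran"): 20, ("toy", "ran"): 1,
}
unigram_counts = {"the": 100, "boy": 60, "toy": 40, "ran": 25}

def bigram_logprob(sentence, alpha=1.0, vocab_size=4):
    """Add-alpha smoothed bigram log-probability of a word sequence."""
    words = sentence.split()
    score = 0.0
    for prev, word in zip(words, words[1:]):
        num = bigram_counts.get((prev, word), 0) + alpha
        den = unigram_counts.get(prev, 0) + alpha * vocab_size
        score += math.log(num / den)
    return score

# Two hypotheses the acoustic model considers equally plausible:
for hyp in ("the boy ran", "the toy ran"):
    print(hyp, round(bigram_logprob(hyp), 2))
# "the boy ran" scores higher, so the recognizer would prefer it.
```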
And then, to answer your first question, I wouldn't necessarily call it "cooking the books," but it is a fact that the data you have to train on, test on, and evaluate your metrics on almost never really matches up with real-world data, and this is a huge problem in the speech domain; it's a very well-known issue.
You take my 8, H, and A example, which you're saying is a really tricky problem without context, and, let's say, you have one hundred English speakers, but one is from Scotland, and one could be Australian, and one could be from the East Coast, one could be from the south of the United States; is it possible that the range of how 8 is said in all those different places is so wide that it overlaps with how H is said in some places? So, in other words, it's a literally insoluble problem.
It is, I would say it is possible. One of the issues is that you would then need a separate model for different dialects. I don't want to dive too far into the weeds with this, but at the root of a speech recognition system, the fundamental linguistic or phonetic unit is often the phoneme, which is the smallest speech sound, and people even argue about whether these actually exist, what they actually mean, and whether this is a good unit to use when modeling speech.
That being said, there's a lot of research underway on, for example, sequence-to-sequence models and other types of models that are trying to bypass this sort of issue: instead of having all of these separate components modeling the acoustics separately, can we go directly from someone's speech to text? And maybe through this unsupervised approach it's possible to learn all these different things about dialects, to try to learn them inherently, but that is still a very open question, and currently those systems are not quite tractable yet.
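A minimal sketch of the sequence-to-sequence shape described here, assuming PyTorch: an encoder consumes acoustic frames and a decoder emits character scores directly, with no phoneme layer in between. Real systems add attention, beam search, and far more capacity; the dimensions below are illustrative only.

```python
import torch
import torch.nn as nn

class Speech2Text(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=30):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.bridge = nn.Linear(2 * hidden, hidden)
        self.decoder = nn.LSTM(n_chars, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, mel_frames, prev_chars_onehot):
        enc_out, _ = self.encoder(mel_frames)        # encode the whole utterance
        context = self.bridge(enc_out.mean(dim=1))   # crude global acoustic summary
        h0 = context.unsqueeze(0)                    # condition the decoder on it
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(prev_chars_onehot, (h0, c0))
        return self.out(dec_out)                     # per-step character scores

model = Speech2Text()
mel = torch.randn(1, 200, 80)    # 200 frames of 80-dimensional mel features
prev = torch.zeros(1, 50, 30)    # 50 decoding steps of one-hot characters
print(model(mel, prev).shape)    # torch.Size([1, 50, 30])
```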
I’m only going to ask one more question on these lines—though I could geek out on this stuff all day long, because I think about it a lot—but really quickly, do you think you’re at the very beginning of this field, or do you feel it’s a pretty advanced field? Just the speech recognition part.
Speech recognition, I think we’re nearing the end of speech recognition to be honest. I think that you could say that speech is fundamentally limited; you are limited by the signal that you are provided, and your job is to transcribe that.
Now, where speech recognition stops, that’s where natural language processing begins. As everyone knows, language is infinite, you can do anything with it, any permutation of words, sequences of words. So, I really think that natural language processing is the future of this field, and I know that a lot of people in speech are starting to try to incorporate more advanced language models into their research.
Yeah, that’s a really interesting question. So, I ran an article on Gigaom, where I had an Amazon Alexa device on my desk and I had a Google Assistant on my desk, and what I noticed right away is that they answer questions differently. These were factual questions, like “How many minutes are in a year?” and “Who designed the American flag?” They had different answers. And you can say it’s because of an ambiguity in the language, but if this is an ambiguity, then all language is naturally ambiguous.
So, the minutes in a year answer difference was that one gave you the minutes in 365.24 days, a solar year, and one gave you the minutes in a calendar year. And with regard to the flag, one said Betsy Ross, and one said the person who designed the fifty-star configuration on the current flag.
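The arithmetic behind the two answers is simple; the assistants just resolved the ambiguity differently:

```python
calendar_year_minutes = 365 * 24 * 60      # 525,600 minutes
solar_year_minutes = 365.24 * 24 * 60      # about 525,945.6 minutes
print(calendar_year_minutes, solar_year_minutes)
```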
And so, we’re a long way away from the machines saying, “Well, wait a second, do you mean the current flag or the original flag?” or, “Are you talking about a solar year or a calendar year?” I mean, we’re really far away from that, aren’t we?
Yeah, I think that’s definitely true. You know, people really don’t understand how even humans process language, how we disambiguate different phrases, how we find out what are the relevant questions to ask to disambiguate these things. Obviously, people are working on that, but I think we are quite far from true natural language understanding, but yeah, I think that’s a really, really interesting question.
There were a lot of them: "Who invented the light bulb?" and "How many countries are there in the world?" I mean, the list was endless. I didn't have to look around to find them. It was almost everything I asked. Well, not literally; "What's 2+2?" is obviously different, but there were plenty of examples.
To broaden that question, don’t you think if we were to build an AGI, an artificial general intelligence, an AI as versatile as a human, that’s table stakes, like you have to be able to do that much, right?
Oh, of course. I mean, I think that one of the defining things that makes human intelligence unique, is the ability to understand language and an understanding of grammar and all of this. It’s one of the most fundamental things that makes us human and intelligent. So I think, yeah, to have an artificial general intelligence, it would be completely vital and necessary to be able to do this sort of disambiguation.
Well, let me ratchet it up even another one. There’s a famous thought experiment called the Chinese Room problem. For the benefit of the listener, the setup is that there’s a person in a room who doesn’t speak any Chinese, and the room he’s in is full of this huge number of very specialized books; and people slide messages under the door to him that are written in Chinese. And he has this method where he looks up the first character and finds the book with that on the spine, and goes to the second character and the third and works his way through, until he gets to a book that says, “Write this down.” And he copies these symbols, again, he doesn’t know what the symbols are; he slides the message back out, and the person getting it thinks it’s a perfect Chinese answer, it’s brilliant, it rhymes, it’s great.
So, the thought experiment is this, does the man understand Chinese? And the point of the thought experiment is that this is all a computer does—it runs this deterministic program, and it never understands what it’s talking about. It doesn’t know if it’s about cholera or coffee beans or what have you. So, my question is, for an AGI to exist, does it need to understand the question in a way that’s different than how we’ve been using that word up until now?
That's a good question. I think that, yeah, to have an artificial general intelligence, the computer would have to, in a way, understand the question. Now, that being said, what is the nature of understanding a question? How we even think is a question that I don't think we know the answer to. So, it's a little bit difficult to say exactly what the minimum requirement would be for some sort of artificial general intelligence, because as it stands now, I don't know. Maybe someone smarter than me knows the answer, but I don't even know if I really understand how I understand things, if that makes sense to you.
So what do you do with that? Do you say, “Well, that’s just par for the course. There’s a lot of things in this universe we don’t understand, but we’re going to figure it out, and then we’ll build an AGI”? Is the question of understanding just a very straightforward scientific question, or is it a metaphysical question that we don’t really even know how to pose or answer?
I mean, I think that this question is a good question, and if we’re going about it the right way, it’s something that remains to be seen. But I think one way that we can try to ensure that we’re not straying off the path, is by going back to these biologically-inspired systems. Because we know that, at the end of the day, our brains are made up of neurons, synapses, connections, and there’s nothing very unique about this, it’s physical matter, there’s no theoretical reason why a computer cannot do the same computations.
So, if we can really understand how our brains are working, what the computations it performs are, how we have consciousness; then I think we can start to get at those questions. Now, that being said, in terms of where neuroscience is today, we really have a very limited idea of how our brains actually work. But I think it’s through this avenue that we stand the highest chance of success of trying to emulate, you know—
Let’s talk about that for a minute, I think that’s a fascinating topic. So, the brain has a hundred billion neurons that somehow come together and do what they do. There’s something called a nematode worm—arguably the most successful animal on the planet, ten percent of all animals on the planet are these little worms—they have I think 302 neurons in their brain. And there’s been an effort underway for twenty years to model that brain—302 neurons—in the computer and make a digitally living nematode worm, and even the people who have worked on that project for twenty years, don’t even know if that’s possible.
What I was hearing you say is, once we figure out what a neuron does—this reductionist view of the brain—we can build artificial neurons, and build a general intelligence, but what if every neuron in your brain has the complexity of a supercomputer? What if they are incredibly complicated things that have things going on at the quantum scale, that we are just so far away from understanding? Is that a tenable hypothesis? And doesn’t that suggest, maybe we should think about intelligence a different way because if a neuron’s as complicated as a supercomputer, we’re never going to get there.
That’s true, I am familiar with that research. So, I think that there’s a couple of ways that you can do this type of study because, for example, trying to model a neuron at the scale of its ion channels and individual connections is one thing, but there are many, many scales upon which your brain or any sort of neural system works.
I think to really get this understanding of how the brain works, it’s great to look at this very microscale, but it also helps to go very macro and instead of modeling every single component, try to, for example, take groups of neurons, and say, “How are they communicating together? How are they communicating with different parts of the brain?” Doing this, for example, is usually how human neuroscience works and humans are the ones with the intelligence. If you can really figure out on a larger scale, to the point where you can simplify some of these computations, and instead of understanding every single spike, perhaps understanding the general behavior or the general computation that’s happening inside the brain, then maybe it will serve to simplify this a little bit.
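One classic way to "go macro" in the sense described here is a firing-rate model, where whole populations of neurons are summarized by their average activity rather than simulated cell by cell. A minimal Wilson-Cowan-style sketch in Python (assuming NumPy; the parameters are illustrative, not fit to any real circuit):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dt, steps = 0.001, 5000
E, I = 0.1, 0.1                                  # excitatory / inhibitory population rates
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0   # coupling strengths between populations
tau_e, tau_i, drive = 0.02, 0.01, 2.0

trajectory = []
for _ in range(steps):
    dE = (-E + sigmoid(w_ee * E - w_ei * I + drive)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
    E, I = E + dt * dE, I + dt * dI
    trajectory.append((E, I))

print(trajectory[-1])   # population activity at the end of the simulated interval
```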
Where do you come down on all of that? Are we five years, fifty years or five hundred years away from cracking that nut, and really understanding how we understand and understanding how we would build a machine that would understand, all of this nuance? Do you think you’re going to live to see us make that machine?
I would be thrilled if I lived to see that machine; I'm not sure that I will. Saying exactly when this will happen is a bit hard for me to predict, but I know that we would need massive improvements, probably algorithmically and probably in our hardware as well, because true intelligence is massively computational, and I think it's going to take a lot of research to get there, but it's hard to say exactly when that would happen.
Do you keep up with the Human Brain Project, the European initiative to do what you were talking about before, which is to be inspired by human brains and learn everything we can from that and build some kind of a computational equivalent?
A little bit, a little bit.
Do you have any thoughts on—if you were the betting sort—whether that will be successful or not?
I'm not sure if that's really going to work out that well. Like you said before, given our current hardware, our algorithms, and our abilities to probe the human brain, I think it's very difficult to make these very sweeping claims of "Yes, we will have X amount of understanding about how these systems work," so I'm not sure if it's going to be successful in all the ways it's supposed to be. But I think it's a really valuable thing to do, whether or not you really achieve the stated goal, if that makes sense.
You mentioned consciousness earlier. So, consciousness, for the listeners, is something people often say we don’t know what it is; we know exactly what it is, we just don’t know how it is that it happens. What it is, is that we experience things, we feel things, we experience qualia—we know what pineapple tastes like.
Do you have any theories on consciousness? Where do you think it comes from, and, I’m really interested in, do we need consciousness in order to solve some of these AI problems that we all are so eager to solve? Do we need something that can experience, as opposed to just sense?
Interesting question. I think that there’s a lot of open research on how consciousness works, what it really means, how it helps us do this type of cognition. So, we know what it is, but how it works or how this would manifest itself in an artificial intelligence system, is really sort of beyond our grasp right now.
I don't know how much true consciousness a machine needs, because you could say, for example, that having a type of memory may be part of your consciousness, being aware, learning things, but I don't think we yet have enough real understanding of how this works to say for sure.
All right fair enough. One more question and I’ll pull the clock back thirty years and we’ll talk about the here and now; but my last question is, do you think that a computer could ever feel something? Could a computer ever feel pain? You could build a sensor that tells the computer it’s on fire, but could a computer ever feel something, could we build such a machine?
I think that it's possible. Like I said before, what our brain does is really a very advanced biological computer, so there's really no reason why a machine shouldn't be able to feel pain. Pain is a sensation, but it's really just a transfer of information, so I think that it is possible. Now, that being said, how this would manifest, or what a computer's reaction to pain would be, or what would happen, I'm not sure, but I think it's definitely possible.
Fair enough. I mentioned in your introduction that you're the CTO of an AI company, Droice Labs, and the only setup I gave was that it's a healthcare company. Tell us a little bit more: what challenge is Droice Labs trying to solve, what is the hope, what are your present challenges, and what's the state of where you're at?
Sure. Droice is a healthcare company that provides artificial intelligence solutions to hospitals and healthcare providers. One of the main things that we're focusing on right now is trying to help doctors choose the right treatment for their patients. This means things like, for example, you come in, maybe you're sick, you have a cough, you have pneumonia, let's say, and you need an antibiotic. What we try to do is, when you're given an antibiotic, predict whether or not this treatment will be effective for you, and also whether or not it'll have any sort of adverse event on you, so we both try to get people healthy and keep them safe.
And so, this is really what we’re focusing on at the moment, trying to make a sort of artificial brain for healthcare that can, shall we say, augment the intelligence of the doctors and try to make sure that people stay healthy. I think that healthcare’s a really interesting sphere in which to use artificial intelligence, because the technology is currently not very widespread, owing to the difficulty of working with hospital and medical data, so it’s a really interesting opportunity.
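To make the shape of that prediction concrete, here is a minimal sketch using scikit-learn on synthetic data, with hypothetical feature names, of the two estimates described: whether a treatment will be effective for a patient, and whether it will cause an adverse event. It is an illustration of the idea, not Droice’s actual system.

```python
# A minimal sketch (hypothetical features and synthetic data, not Droice's system):
# given a patient's features and a candidate drug, estimate (1) the probability
# the treatment is effective and (2) the probability of an adverse event.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical feature matrix: [age, creatinine, num_other_meds, drug_code]
X = rng.normal(size=(1000, 4))
y_effective = (rng.random(1000) < 0.7).astype(int)  # did the drug work?
y_adverse = (rng.random(1000) < 0.1).astype(int)    # did an adverse event occur?

efficacy_model = GradientBoostingClassifier().fit(X, y_effective)
adverse_model = GradientBoostingClassifier().fit(X, y_adverse)

new_patient = rng.normal(size=(1, 4))
print("P(effective):", efficacy_model.predict_proba(new_patient)[0, 1])
print("P(adverse event):", adverse_model.predict_proba(new_patient)[0, 1])
```

Keeping the two outcomes as separate models mirrors the two goals mentioned above: getting people healthy and keeping them safe.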
So, let’s talk about that for a minute: AIs are generally only as good as the data we train them with. I know that whenever I have some symptom, I type it into the search engine of choice, and it tells me I have a terminal illness; it just happens all the time. And in reality, of course, whatever that terminal illness is, there’s a one-in-five-thousand chance that I have it, and a ninety-nine percent chance I have whatever the much more common, benign thing is. How are you thinking about getting enough data to build these statistical models and so forth?
We’re a B2B company, so we have partnerships with around ten hospitals right now, and what we do is get big data dumps from them of actual electronic health records. And so, what we try to do is actually use real patient records, like, millions of patient records that we obtain directly from our hospitals, and that’s how we really are able to get enough data to make these types of predictions.
How accurate does that data need to be? Because it doesn’t have to be perfect, obviously. How accurate does it need to be to be good enough to provide meaningful assistance to the doctor?
That is actually one of the big challenges, especially in this type of space. In healthcare, it’s a bit hard to say what data is good enough, because missing data is very, very common. One of the hallmarks of clinical or medical data is that it will, by default, contain many, many missing values; you never have the full story on any given patient.
Additionally, it’s very common to have errors: there’s unstructured text in your medical record that very often contains mistakes, or just insane sentence fragments that don’t really make sense to anyone but a doctor. This is one of the things that we work really hard on. A lot of times traditional AI methods fail here, so we spend a lot of time trying to work with this data in different ways and come up with noise-robust pipelines that can really make this work.
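As a rough illustration of coping with missing values like these, here is a minimal sketch using a scikit-learn pipeline that imputes gaps before modeling. The feature names and records are hypothetical, and a real clinical pipeline would be far more involved.

```python
# A minimal sketch of handling missing clinical values: impute gaps, then model.
# Feature names, records, and labels are hypothetical, not Droice's pipeline.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical records: [age, heart_rate, lab_value]; np.nan marks values never recorded.
X = np.array([
    [64.0, 88.0, np.nan],
    [51.0, np.nan, 1.2],
    [np.nan, 72.0, 0.9],
    [78.0, 95.0, 2.1],
])
y = np.array([1, 0, 0, 1])  # hypothetical outcome labels

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill gaps with per-column medians
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
]).fit(X, y)

# New patients can also arrive with holes in their record.
print(model.predict_proba([[70.0, np.nan, 1.5]]))
```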
I would love to hear more detail about that, because I’m sure it’s full of things like, “Patient says their eyes water whenever they eat potato chips,” and you know, that’s like a data point, and it’s like, what do you do with that? If that is a big problem, can you tell us what some of the ways around it might be?
Sure. I’m sure you’ve seen a lot of crazy stuff in these health records, but what we try to do is—instead of biasing our models by doing anything in a rule-based manner—we use the fact that we have big data, we have a lot of data points, to try to really come up with robust models, so that, essentially, we don’t really have to worry about all that crazy stuff in there about potato chips and eyes watering.
And so, what we actually end up doing is taking these many, many millions of individual electronic health records and combining them with outside sources of information. This is one of the ways we can really augment the data in the health record to make sure that we’re getting the correct insights from it.
So, with your example, you said, “My eyes water when I eat potato chips.” What we end up doing is taking that sort of thing, and in an automatic way, searching sources of public information, for example clinical trials information or published medical literature, and we try to find, for example, clinical trials or papers about the side effects of rubbing your eyes while eating potato chips. Now of course, that’s a ridiculous example, but you know what I mean.
And so, by augmenting this public and private data together, we really try to create this setup where we can get the maximum amount of information out of this messy, difficult to work with data.
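A toy sketch of that augmentation idea: match free text from a record against a small corpus of public abstracts by TF-IDF similarity. The corpus and note below are invented stand-ins, not real data or Droice’s actual retrieval method.

```python
# A minimal sketch of matching record text against a public corpus (e.g., trial
# or paper abstracts) to pull in related evidence. Toy, hypothetical data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Randomized trial of antibiotic X for community-acquired pneumonia.",
    "Case reports of ocular irritation linked to dietary allergens.",
    "Outcomes of nonsurgical management in septic elderly patients.",
]
note = "patient reports eye irritation and watering when eating potato chips"

vectorizer = TfidfVectorizer(stop_words="english")
corpus_vecs = vectorizer.fit_transform(abstracts)   # index the public corpus
note_vec = vectorizer.transform([note])             # embed the record snippet

scores = cosine_similarity(note_vec, corpus_vecs)[0]
best = scores.argmax()
print(f"Most related abstract ({scores[best]:.2f}):", abstracts[best])
```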
The kinds of data you have that are solid data points would be: how old is the patient, what’s their gender, do they have a fever, do they have aches and pains; that’s very coarse-level stuff. But—I’m regretting using the potato chip example because now I’m kind of stuck with it—a potato chip is made of a potato, which is a tuber, which is a nightshade, and there may be some breakthrough, like, “That may be the answer, it’s an allergic reaction to nightshades.” And that answer is so many levels removed.
I guess what I’m saying is, as you said earlier, language is infinite, and health is near that, too, right? There are so many potential things something could be, and yet so few data points that we can draw from. It would be like if I said, “I know a person who is 6’ 4” and twenty-seven years old and born in Chicago; what’s their middle name?” How do you even narrow it down to a set of middle names?
Right, right. Okay, I think I understand what you’re saying. This is obviously a challenge, but the first thing is that our artificial intelligence is really intended for doctors, not for patients. We were just talking about AGI and when it will happen, but the reality is we’re not there yet, so while our system makes these predictions, it’s under the supervision of a doctor. They’re really looking at these predictions and trying to pull out the relevant things.
Now, you mentioned the structured data—your age, your weight, maybe your sex, your medications—but maybe the important thing is in the text, in the unstructured data. In this case, one of the main focuses of what we do is to use natural language processing, NLP, to make sure that we’re processing this unstructured text in a way that comes up with a very robust numerical representation of the important things.
So, of course, you can mine this text to understand, for example, that a patient has some sort of allergy that’s only written in the text, right? In that case, you need a system to really go through this text with a fine-tooth comb and pull out risk factors for this patient, relevant things about their health and their medical history that may be important.
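One very simplified way to picture pulling risk factors out of unstructured notes is a small lexicon of patterns turned into a numeric feature vector. The lexicon and note here are hypothetical, and a production NLP system would handle negation, synonyms, and context far more carefully.

```python
# A minimal sketch: scan note text for risk-factor mentions and turn them into
# features a model can use. Hypothetical lexicon and note, not Droice's NLP.
import re

RISK_FACTOR_LEXICON = {
    "penicillin_allergy": r"allerg\w*\s+to\s+penicillin|penicillin\s+allerg\w*",
    "smoker": r"\bsmok(er|ing|es)\b",
    "diabetes": r"\bdiabet(es|ic)\b",
}

note = "Pt is a long-time smoker, denies chest pain, hx of diabetes, allergic to penicillin."

features = {
    name: int(bool(re.search(pattern, note, flags=re.IGNORECASE)))
    for name, pattern in RISK_FACTOR_LEXICON.items()
}
print(features)  # {'penicillin_allergy': 1, 'smoker': 1, 'diabetes': 1}
```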
So, is it not the case that diagnosing—if you just said, here is a person who manifests certain symptoms, and I want to diagnose what they have—may be the hardest problem possible? Especially compared to where we’ve seen success, which is, like: here is a chest x-ray, and we have a very binary question to ask: does this person have a tumor or not? And the data is: here are ten thousand scans with a tumor, and here are a hundred thousand without.
Like, is it a cold or the flu? That would be an AI kind of thing, because an expert system could do that. I’m kind of curious what you think—and then I’d love to ask what an ideal world would look like, what we would do to collect data in an ideal world—but just with the here and now, aspirationally, what do you think is as much as we can hope for? Is it something like: the model produces sixty-four things this patient may have, rank ordered from most likely to least likely, like a search engine would do, and the doctor can kind of skim down it and look for something that catches his or her eye? Is that as far as we can go right now? Or what do you think, in terms of general diagnosing of ailments?
Sure, well, actually, what we focus on currently is really the treatment, not the diagnosis. I think diagnosis is a more difficult problem, and of course we really want to get into that in the future, but it is a much more challenging thing to do.
That being said, what you mentioned, you know, “Here’s a list of things, let’s make some predictions from it,” is actually something that we currently do in terms of treatments for patients. One example of a thing that we’ve done is build a system that can predict surgical complications for patients. So, imagine you have a patient who is sixty years old, mildly septic, and may need some sort of procedure. We can find that there may be a couple of alternative procedures that could be given, or a nonsurgical intervention that could help them manage their condition, and we can predict what will happen with each of these different treatments: how likely each is to be successful, weighed against its risks.
And in this way, we can really help the doctor choose what sort of treatment they should give this person; it gives them actionable insight that can help them get their patients healthy. Of course, in the future, I think it would be amazing to have some sort of end-to-end system where a patient comes in, you can just gather all the information, and it can diagnose them, treat them, and get them better, but we’re definitely nowhere near that yet.
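As a sketch of that treatment-comparison idea, the snippet below trains a complication-risk model on synthetic data and scores a few hypothetical treatment options for one patient. The features, treatments, and data are all invented for illustration, not the system described above.

```python
# A minimal sketch of comparing candidate treatments: score each option for one
# patient with a complication-risk model and rank them. Hypothetical data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Hypothetical training rows: [age, septic_flag, treatment_id] -> complication (0/1)
X = np.column_stack([
    rng.integers(20, 90, 2000),   # age
    rng.integers(0, 2, 2000),     # mildly septic?
    rng.integers(0, 3, 2000),     # 0=procedure A, 1=procedure B, 2=nonsurgical
])
y = rng.integers(0, 2, 2000)      # complication occurred?

risk_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

patient = {"age": 60, "septic": 1}
treatments = {0: "procedure A", 1: "procedure B", 2: "nonsurgical management"}
scored = {
    name: risk_model.predict_proba([[patient["age"], patient["septic"], tid]])[0, 1]
    for tid, name in treatments.items()
}
for name, risk in sorted(scored.items(), key=lambda kv: kv[1]):
    print(f"{name}: predicted complication risk {risk:.2f}")
```

The ranking the doctor sees is just these per-option risk estimates sorted from lowest to highest.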
Recently, IBM made news that Watson had prescribed treatment for cancer patients that was largely identical to what the doctors did, but with the added benefit that in a third of the cases it found additional treatment options, because it had the virtue of being trained on a quarter million medical journals. Is that the kind of thing that’s real, here, today, and that we should expect to see more of?
I see. Yeah, that’s definitely a very exciting thing, and I think that’s great to see. One of the things that’s very interesting is that IBM primarily works on cancer; it’s lacking in the high-prescription-volume sorts of conditions, like heart disease or diabetes. So, while this is very exciting, this is definitely a sort of technology, and a space for artificial intelligence, that really needs to be expanded, and there’s a lot of room to grow.
So, we can sequence a genome for $1,000. How far away are we from having enough of that data that we get really good insights, like: a person has this combination of genetic markers, and therefore this treatment is more likely to work or not work? I know that in isolated cases we can do that, but when will we see that become just kind of how we do things on a day-to-day basis?
I would say probably twenty-five years from routine use in the clinic. I mean, it’s great, this information is really interesting, and we can do it, but it’s not widely used. I think there are too many regulations in place right now that keep this from happening, so, like I said, it’s going to be maybe twenty-five years before we really see this very widely used for a good number of patients.
So are there initiatives underway that you think merit support that will allow this information to be collected and used in ways that promote the greater good, and simultaneously, protect the privacy of the patients? How can we start collecting better data?
Yeah, there are a lot of people working on this type of thing. For example, Obama had a precision medicine initiative, and these types of efforts are really trying to get your health records, your genomic data, and everything else consolidated, with a very easy flow of information, so that doctors can easily integrate information from many sources and have very complete patient profiles. So, this is a thing that’s currently underway.
To pull out a little bit and look at the larger world, you’re obviously deeply involved in speech, and language processing, and health care, and all of these areas where we’ve seen lots of advances happening on a regular basis, and it’s very exciting. But then there’s a lot of concern from people who have two big worries. One is the effect that all of this technology is going to have on employment. And there are two views.
One is that technology increases productivity, which increases wages, and that’s what’s happened for two hundred years; the other is that this technology is somehow different, that it replaces people, and that anything a person can do the technology will eventually do better. Which of those camps, or a third camp, do you fall into? What is your prognosis for the future of work?
Right. I think that technology is a good thing. I know a lot of people have concerns, for example, that if there’s too much artificial intelligence it will replace their jobs and there won’t be room for them and what they do, but I think what’s actually going to happen is that we’re just going to see, shall we say, a shifting employment landscape.
Maybe if we have some sort of general intelligence, then people can start worrying, but right now, what we’re really doing with artificial intelligence is augmenting human intelligence. So, although some jobs become obsolete, you now need people to build and maintain these systems, and I believe you actually have more opportunities there.
For example, ten to fifteen years ago, there wasn’t such a demand for people with software engineering skills, and now it’s almost becoming something that you’re expected to know, much like the internet thirty years back. So, I really think that this is going to be a good thing for society. It may be hard for people who don’t have any sort of computer skills, but going forward, I think those skills are going to be much more important.
Do you consume science fiction? Do you watch movies, or read books, or television, and if so, are there science fiction universes that you look at and think, “That’s kind of how I see the future unfolding”?
Have you ever seen the TV show Black Mirror?
Well, yeah, that’s dystopian though; you were just saying things are going to be good. I thought you were just saying jobs are good, we’re all good, technology is good. Black Mirror is, like, dark, black, mirrorish.
Yeah, no, I’m not saying that’s what’s going to happen, but I think that’s presenting the evil side of what can happen. I don’t think that’s necessarily realistic, but I think that show actually does a very good job of portraying the way that technology could really be integrated into our lives. Setting aside the dystopian, depressing stories, the way it shows the technology being integrated into people’s lives, and how it affects the way people live—I think it does a very good job of that.
I wonder though, science fiction movies and TV are notoriously dystopian, because there’s more drama in that than in utopias. So, it’s not conspiratorial or anything, I’m not asserting that, but I do think that what it does, perhaps, is cause people—somebody termed it “generalizing from fictional evidence”—to see enough views of the future like that and think, “Oh, that’s how it’s going to happen.” And then that becomes self-fulfilling.
Frank Herbert, I think it was, who said, “Sometimes the purpose of science fiction is to keep a world from happening.” So do you think those kinds of views of the world are good, or do you think that they increase this collective worry about technology and losing our humanity, becoming a world that’s blackish and mirrorish, you know?
Right. No, I understand your point, and actually, I agree. I think there is a lot of fear which is quite unwarranted. There is actually a lot more transparency in AI now, and I think a lot of those fears come from the media today, which, as I’m sure we’re all aware, does a lot of fear mongering. That’s not to say there will be no negative impact, but I think every cloud has its silver lining, and this is not something that anyone really needs to be worrying about. One thing that I think is really important is to have more education for a general audience, because part of the fear comes from not really understanding what AI is, what it does, and how it works.
Right, and so, I was just kind of thinking through what you were saying. There’s an initiative in Europe that says AI engines—kind of like the one you’re talking about that’s making suggestions—need to be transparent, in the sense that they need to be able to explain why they’re making a given suggestion.
But I read one of your papers on deep neural nets, and it talks about how the results are hard, if not impossible, to understand. Which side of that do you come down on? Should we limit the technology to things that can be explained in bullet points, or do we say, “No, the data is the data and we’re never going to understand it once it starts combining in these ways, and we just need to be okay with that”?
Right, so, one of the most overused phrases in all of AI is that “neural networks are a black box.” I’m sure we’re all sick of hearing that sentence, but it’s kind of true. I think that’s why I was interested in researching this topic. I think, as you were saying before, the why in AI is very, very important.
So, of course we can benefit from AI without knowing why. We can continue to use it like a black box; it’ll still be useful, it’ll still be important. But I think it will be far more impactful if you are able to explain the why and really demystify what’s happening.
One good example from my own company, Droice, is that in medicine it’s vital for the doctor to know why you’re saying what you’re saying. So, if a patient comes in and you say, “I think this person is going to have a very negative reaction to this medicine,” it’s very important for us to analyze the neural network and explain, “Okay, it’s really this feature of this person’s health record, for example, the fact that they’re quite old and on another medication.” That really makes doctors trust the system, eases adoption, and allows the technology to be integrated into traditionally less technology-focused fields.
So, there’s a lot of research now going into the why in AI, and it’s one of my own research focuses. The field has really been blooming in the last couple of years, because people are realizing that this is extremely important and will help us not only make artificial intelligence more translational, but also build better models.
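One common, model-agnostic way to get at that “why” is permutation importance: shuffle each input feature and see how much the model’s accuracy drops. The sketch below uses synthetic data and hypothetical feature names; it illustrates the general technique, not Droice’s actual explanation method.

```python
# A minimal sketch of permutation importance as a rough "why": features whose
# shuffling hurts accuracy the most are the ones the model leans on.
# Hypothetical feature names and synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "num_other_medications", "kidney_function", "bmi"]

# Synthetic data in which age and co-medication count actually drive the label.
X = rng.normal(size=(1500, 4))
y = ((X[:, 0] + 0.8 * X[:, 1] + 0.2 * rng.normal(size=1500)) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda kv: -kv[1]):
    print(f"{name}: {importance:.3f}")
```

On this synthetic example, age and the co-medication count come out on top, which is the kind of feature-level explanation a doctor can sanity-check.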
You know, in The Empire Strikes Back, when Luke is training on Dagobah with Yoda, he asked him, “Why, why…” and Yoda was like, “There is no why.” Do you think there are situations where there is no why? There is no explainable reason why it chose what it did?
Well, I think there is always a reason. For example, you like ice cream; maybe it’s a silly reason, but the reason is that it tastes good. The reason you like pistachio better than caramel may not be logical, but there is a reason, right? It’s because it activates the pleasure center in your brain when you eat it. So, if you’re looking for interpretability, in some cases it could be limited, but I think there’s always something you can answer when asking why.
Alright. Well, this has been fascinating. If people want to follow you, keep up with what you’re doing, keep up with Droice, can you just run through the litany of ways to do that?
Yeah, so we have a Twitter account, it’s “DroiceLabs,” and that’s mostly where we post. And we also have a website: www.droicelabs.com, and that’s where we post most of the updates that we have.
Alright. Well, it has been a wonderful and far ranging hour, and I just want to thank you so much for being on the show.
Thank you so much for having me.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
[voices_in_ai_link_back]
