Voices in AI – Episode 30: A Conversation with Robert Mittendorff and Mudit Garg

In this episode, Byron, Robert and Mudit talk about Qventus, healthcare, machine learning, AGI, consciousness, and medical AI.
[podcast_player name="Episode 30 – A Conversation with Robert Mittendorff and Mudit Garg" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-01-22-(00-58-58)-garg-and-mittendorf.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/01/voices-headshot-card-3.jpg"]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today is a first for Voices in AI, we have two guests. The first one is from Qventus; his name is Mudit Garg. He’s here with Robert Mittendorff, who’s with Norwest Venture Partners, who also serves on Qventus’ board. Mudit Garg is the co-founder and CEO of Qventus, and they are a company that offers artificial-intelligence-based software designed to simplify hospital operations. He’s founded multiple technology companies before Qventus, including Hive, a group messaging platform. He spent two years as a consultant with Seattle-based McKinsey & Company, focusing, I think, on hospital operations.
Robert, from Norwest Venture Partners, was previously VP of Marketing and Business Development at Hansen Medical, a publicly traded NASDAQ company. He's also a board-certified emergency physician who completed his residency training at Stanford. He received his MD from Harvard Medical School, his MBA from Harvard Business School, and he has a BS in Biomedical Engineering from Johns Hopkins University. Welcome to the show, gentlemen.
Mudit Garg: Thank you. Good morning. Thank you for having us.
Robert Mittendorff: Thank you, Byron.
Byron: Mudit, I'll start with you. Tell us about Qventus and its mission. Get us all oriented with why we're here today.
Mudit: Absolutely. The best way to think of Qventus: our customers often describe us as air traffic control. Much like air traffic control for airports, which allows many more flights to land, and to land much more safely, than if they were uncoordinated, we do the same for healthcare and hospitals.
For me—as, kind of, boring and uncool as a world of operations and processes might be—I had a chance to see that firsthand working in hospitals when I was at McKinsey & Company, and really just felt that we were letting all of our clinicians down. If you think about the US healthcare system, we have the best clinicians in the world, we have great therapies, great equipment, but we fail at providing great medicine. Much of that was being held back by the complex operations that surround the delivery of care.
I got really excited about using data and using AI to help support these frontline clinicians in improving the core delivery of care in the operation. Things like, as a patient sitting in an emergency department, you might wonder what’s going on and why you aren’t being taken care of faster. On the flip side, there’s a set of clinicians who are putting in heroic efforts trying to do that, but they are managing so many different variables and processes simultaneously that it’s almost humanly impossible to do that.
So, our system observes and anticipates problems: it's the Monday after Thanksgiving, it's really cold outside, Dr. Smith is working, he tends to order more labs, our labs are slow—all these factors that would be hard for someone to keep in front of them all the time. When it realizes we might run out of capacity three or four hours in advance, it will look for the bottleneck and create a discussion on how to fix it. We do things like that at about forty to fifty hospitals across the country, and have seen good outcomes. That's what we do, and that's been my focus in the application of AI.
Byron: And Robert, how did you get involved with Qventus?
Robert: Well, Qventus was a company that fit within a theme we had been looking at for quite some time: artificial intelligence and machine learning as they apply to healthcare. Within that search we found this amazing company, founded by a brilliant team of engineers and business leaders who had a particular set of insights from their work with hospitals at McKinsey, and who had identified a problem set that is very tractable for machine learning and narrow AI, which we'll get into. So, within that context in the Bay Area, we found Qventus, and we were just delighted to meet the team and their customers, and really find a way to make a bet in this space.
Byron: We're always interested in case studies. We're really interested in how people are applying artificial intelligence. Today, in the here and now, put a little flesh on the bones of it: what are you doing, what's real and here, how did you build it, what technology are you using, what did you learn? Just give us a little bit of that kind of perspective.
Mudit: Absolutely. I'll first start with the kinds of things that we are doing, and then we'll go into how we built it, and some of the lessons along the way as well. I just gave you one example of running an emergency department. In today's world, there is a charge nurse who is responsible for managing the flow of patients through that emergency department, constantly trying to stay ahead of it. The example I gave was one where, instead, the system is observing, learning from what it sees, and then creating a discussion among folks about how to change it.
We have many different things—we call them recipes internally—many different recipes that the system keeps looking for. It looks for, "Hey, here's a female patient who is younger, who is waiting with four other people around her, and who is in acute pain." She is much more likely than other patients to get up and leave without being seen by a doctor, so you might nudge a greeter to go up and talk to her. We have many recipes and examples like these. I won't go into specific examples for each of them, but we do that in different areas of the delivery of healthcare.
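To make the idea of a "recipe" concrete, here is a minimal, hypothetical sketch of the kind of rule Mudit describes: score a waiting patient's risk of leaving without being seen, and nudge a greeter when the risk crosses a threshold. Every field name, weight, and threshold below is invented for illustration; a production system like Qventus's would learn these from historical ED data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class WaitingPatient:
    minutes_waiting: float
    others_waiting_nearby: int
    acute_pain: bool
    age: int

def lwbs_risk(p: WaitingPatient) -> float:
    """Crude linear "left without being seen" risk score in [0, 1].
    The weights here are illustrative guesses, not learned values."""
    score = 0.0
    score += min(p.minutes_waiting / 120.0, 1.0) * 0.4       # long waits raise risk
    score += min(p.others_waiting_nearby / 10.0, 1.0) * 0.2  # crowded waiting room
    score += 0.3 if p.acute_pain else 0.0                    # pain raises impatience
    score += 0.1 if p.age < 40 else 0.0                      # younger patients leave more often
    return min(score, 1.0)

def recipe_greet_at_risk(patients, threshold=0.6):
    """Return the patients a greeter should be nudged to go talk to."""
    return [p for p in patients if lwbs_risk(p) >= threshold]
```

A patient ninety minutes into a wait, in pain, with a crowded room around her would score well above the threshold, while a recent, low-acuity arrival would not, which is exactly the triage-of-attention behavior the recipe is meant to produce.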
So, patient flow—having patients move through the health system in ways that don't require adding resources but still allow it to provide the same care—is one big category. You do that in the emergency department, in inpatient units of the hospital, and in the operating room. More recently, we're starting to do that in pharmacy operations, as pharmacy costs have started rising. What are the things that today require a human to manually notice, follow up on, escalate, and manage, and how can AI help with that process? We've seen really good results with that.
You asked about case studies: on the emergency department side alone, one of our customers treated three thousand more patients in that ED this year than last, without adding resources. They saved almost a million minutes of patient wait time in that single ED alone, and that's been fascinating. What's been even more amazing is hearing from the nurse manager there how the staff feel they have the ability to shape events, versus always being behind and trying to solve the problem after the fact. They've seen some reductions in turnover, and that ability of AI to, in some ways, make health care more human for the people who help us, the caregivers, is what's extremely exciting in this work for me.
Byron: Just to visualize that for a moment, if I looked at it from thirty thousand feet—people come into a hospital in all different ways, with all the different characteristics you would normally think of, and then there are a number of routings through the hospital experience, right? Rush them straight into here, or there, or this. So it's kind of a routing problem, a resource allocation problem, right? What does all of that look like? This is not a rhetorical question: what is all that similar to outside of the hospital? Where is that approach broadly and generally applicable? Is it a traffic routing problem, an inventory management problem—are there any corollaries you can think of?
Mudit: Yeah. In many ways there are similarities to anywhere where there are high fixed asset businesses and there’s a distributed workforce, there’s lots of similarities. I mean, logistics is a good example of it. Thinking about how different deliveries are routed and how they are organized in a way that you meet the SLAs for different folks, but your cost of delivery is not too high. It has similarities to it.
I think hospitals are, in many ways, one of the most complex businesses, and given the variability is much, much higher, traditional methods have failed. In many of the other such logistical and management problems you could use your optimization techniques, and you could do fairly well with them. But given the level of variability is much, much higher in healthcare—because the patients that walk in are different, you might have a ton walk in one day and very few walk in the next, the types of resources they need can vary quite a bit—that makes the traditional methods alone much, much harder to apply. In many ways, the problems are similar, right? How do you place the most product in a warehouse to make sure that deliveries are happening as fast as possible? How do you make sure you route flights and cancel flights in a way that causes minimum disruption but still maximize the benefit of the entirety of the system? How do you manage the delivery of packages across a busy holiday season? Those problems have very similar elements to them and the importance of doing those well is probably similar in some ways, but the techniques needed are different.
Byron: Robert, I want to get to you in just a minute, and talk about how you as a physician see this, but I have a couple more technical questions. There's an emergency room near my house that has a big billboard and it has on there the number of minutes of wait time to get into the ER. And I don't know, I've always wondered is the idea that people drive by and think, "Oh, only a four-minute wait, I'll go to the ER." But, in any case, two questions, one, you said that there's somebody who's in acute pain and they've got four people, and they might get up and leave, and we should send a greeter over… In that example, how is that data acquired about that person? Is that done with cameras, or is that a human entering the information—how is data acquisition happening? And then, second, what was your training set to use AI on this process, how did you get an initial training set?
Mudit: Both great questions. Much of this is part of the first-mile problem for AI in healthcare, that much of that data is actually already generated. About six or seven years ago a mass wave of digitization started in healthcare, and most of the digitization was taking existing paper-based processes and having them run through electronic medical record systems.
So, what happens is when you walk into the emergency department, let’s say, Byron, you walk in, someone would say, “Okay, what’s your name? What are you here for?” They type your name in, and a timestamp is stored alongside that, and we can use that timestamp to realize a person’s walked in. We know that they walked in for this reason. When you got assigned a room or assigned a doctor then I can, again, get a sense of, okay, at this time they got assigned a room, at this time they got assigned a doctor, at this time their blood was drawn. All of that is getting stored in existing systems of record already, and we take the data from the systems of record, learn historically—so before we start we are able to learn historically—and then in the moment, we’re able to intervene when a change needs to take place.
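The "first mile" Mudit describes—turning the timestamps already sitting in an electronic medical record into something a model can learn from—can be sketched roughly as below. The milestone names and the single-visit schema are assumptions for illustration, not Qventus's actual data model.

```python
from datetime import datetime

def visit_intervals(events: dict) -> dict:
    """Given one ED visit's milestone timestamps (name -> datetime),
    derive the interval features a model could train on."""
    def minutes(a, b):
        return (events[b] - events[a]).total_seconds() / 60.0
    return {
        "door_to_room_min": minutes("arrival", "room_assigned"),
        "room_to_doctor_min": minutes("room_assigned", "doctor_assigned"),
        "doctor_to_labs_min": minutes("doctor_assigned", "blood_drawn"),
    }

# Hypothetical visit reconstructed from system-of-record timestamps
events = {
    "arrival": datetime(2018, 1, 22, 9, 0),
    "room_assigned": datetime(2018, 1, 22, 9, 40),
    "doctor_assigned": datetime(2018, 1, 22, 10, 5),
    "blood_drawn": datetime(2018, 1, 22, 10, 20),
}
print(visit_intervals(events))
# {'door_to_room_min': 40.0, 'room_to_doctor_min': 25.0, 'doctor_to_labs_min': 15.0}
```

The point of the sketch is that no new sensors are needed: the historical training set falls out of timestamps that registration, bed assignment, and lab systems were already recording.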
Byron: And then the data acquisition part of the acute patient's pain?
Mudit: The pain in that example actually comes from what the patient complained about when they checked in.
Byron: I see, perfect.
Mudit: So, we're looking at the types of patients who complain about similar things, and what their likelihood of leaving is versus others'. That's what the system learns from.
Byron: Robert, I have to ask you before we dive into this, I'm just really intensely curious about your personal journey, because I'm guessing you began planning to be a medical practitioner, and then somewhere along the way you decided to get an MBA, and then somewhere along the way you decided to invest in technology companies and be on their boards. How did all of that happen? What was your progressive realization that took you from place to place to place?
Robert: I’ll spend just a couple of minutes on it, but not exactly. I would say in my heart I am an engineer. I started out as an engineer. I did biomedical electrical engineering and then I spent time at MIT when I was a medical student. I was in a very technical program between Harvard and MIT as a medical student. In my heart, I’m an engineer which means I try to reduce reality to systems of practice and methods. And coupled with that is my interest in mission-driven organizations that also make money, so that’s where healthcare and engineering intersect.
Not to go into too much detail on a podcast about myself, I think the next step in my career was to try to figure out how I could deeply understand the needs of healthcare, so that I could help others and myself bring to bear technology to solve and address those needs. The choice to become a practitioner was partially because I do enjoy solving problems in the emergency department, but also because it gave me a broad understanding of opportunities in healthcare at the ground level and above in this way.
I'll just give you an example. When I first saw what Mudit and his team had done in the most amazing way at Qventus, I really understood the hospital as an airport with fifty percent of the planes landing on schedule. So, to go back to your emergency department example, imagine if you were responsible for safety and efficiency at SFO, San Francisco airport, without a tower, knowing only the scheduled landing times for half of the jets, where each jet is a patient. Of the volume of patients who spend the night in the hospital, about half come through the ED, and when I show up for a shift, the first, second, and third patient can be a stroke, a heart attack, a broken leg, shortness of breath, a skin rash, etcetera. The level of complexity in health care to operationalize improvements the way Mudit has is incredibly high. We're just at the beginning—they are clearly the leader here—but what I saw in my personal journey with this company is the usage of significant technology to address key throughput needs in healthcare.
Byron: When one stack-ranks what we hope artificial intelligence will do for the world, right up there at the very top of most people's lists is its impact on health. Do you think that's overly hyped—because, you know, we have an unending series of wishes that we hope artificial intelligence can fulfill? Or do you think it's possible that it eventually delivers on all of that, that it really is a transformative technology that materially alters human health at a global level?
Robert: Absolutely and wholeheartedly. My background as a researcher in neuroscience was using neural networks to model brain function in various animal models, and I would tell you that the variety of ways machine learning and AI—which are the terms we use now for these technologies—will affect human health is massive. I would say that within the Gartner hype cycle we are early: we are overhyping the value of this technology in the short term. We are not overhyping its value over the next ten, twenty, or thirty years. I believe that AI is the driver of our Industrial Revolution—this will be looked back on as an industrial revolution of sorts. I think there are huge benefits that are going to accrue to healthcare providers and patients through the usage of these technologies.
Byron: Talk about that a little more, paint a picture of the world in thirty years, assuming all goes well. Assuming all goes well, what would our health experience look like in that world?
Robert: Yeah, well, hopefully your health experience, and I think Mudit’s done a great job describing this, will return to a human experience between a patient and a physician, or provider. I think in the backroom, or when you’re at home interacting with that practice, I think you’re going to see a lot more AI.
Let me give you one example. We have a company that went public, a digital health company called iRhythm, that uses machine learning to read EKG data—cardiac electrical activity data. A typical human would take eight hours to read a single study on a patient, but by using machine learning they get it down to five to ten minutes. The human is still there, over-reading what the machine-learning software is producing, and what that allows us to do is reach a lot more patients at a lower cost than you could achieve with human labor alone. You'll see this in radiology. You'll see this in coaching patients. And you'll see this where I think Mudit has really innovated, which is that he has created a platform that is enabling.
In the case I gave you, with humans being augmented by what I call the automation or semi-automation of a human task, that's one thing, but what Mudit is doing is truly enabling AI. Humans cannot do what he does at the time and scale that he does it. That is what's really exciting—machines that can do things that humans cannot do. Just to visualize that system: there are some things that are not easily understood today, but I think you will see radiology improve with semi-automation. I think patients will be coached with smart AI to improve their well-being, and that's already being seen today. Human providers will have leverage because the computer, the machine, will help prioritize their day: which patient to talk to, about what, when, how, and why. So, I think you'll see a more human experience.
The concern is that we will see a more manufactured experience. I don't think that's the case at all. The design we'll probably see succeed is one where the human becomes front and center again, where physicians are no longer looking at screens typing in data—they'll be communicating face to face with a human, with an AI helping out, advising, and taking on those tedious tasks the human shouldn't be burdened with, to allow the relationship between the patient and physician to return.
Byron: So, Mudit, when you think of applying artificial intelligence to this particular problem, where do you go from there? Is the plan to take that learning and, obviously, scale it out to more hospitals, but also add depth to it—to be able to say, "Okay, we can land all the planes safely now, next we want to refuel them faster, or…"? I don't know, the analogy breaks down at some point. Where would you go from here?
Mudit: Our customers are already starting to see the results of this approach in one area. We've started expanding already and have a lot more expansion coming down the line as well. If you think about it, at the end of the day so much of healthcare delivery is heavily process driven, right? Anywhere from how your bills get generated to when you get calls. I've had times when I've gotten a call from a health system saying they have a ten-dollar bill they are about to send to collections, even though I had paid all my bills. Things like that are constantly happening—breakdowns in processes across the delivery of care, across the board.
We started, as I said, four or five years ago, very specifically focused on the emergency department. From there we went into the surgery area, where operating rooms can cost upwards of hundreds of dollars a minute—so how do you manage that complex an operation, and its logistics, to deliver the best value?—and we've seen really good results there, and in managing the entirety of all the units in the hospital. More recently, as I was saying, we are now starting to work with Sutter Health across twenty-six of their hospital pharmacies, looking at the key pieces of pharmacy operations that, again, are manually holding people back from delivering the best care. These are the different pieces across the board where we are already starting to see results.
The common thread I find across all of these is that we have amazing, incredible clinicians today who, if they had all the time and energy in the world to focus on anticipating these problems and delivering the best care, would do a great job—but we cannot afford to keep throwing more people at these problems. There are significant margin pressures across healthcare, and the same people who were able to do these things before have to-do lists growing faster than they can ever keep up with. The job of AI really is to act as their assistant, watch those decisions on their behalf, and make them really, really easy—to take all of the boring, mundane logistics out of their hands so they can focus on what they do best, which is deliver care to their patients. So, right now, as I said, we started on the flow side; pharmacies are a new area; and outpatient clinics and imaging centers are another area we are working on with a few select customers. There's some really, really exciting stuff there in increasing access to care—when you might call a physician to get access—while reducing the burden on that physician.
Another really exciting piece for me is that, in many ways, the US healthcare system is unique, but in this complexity of logistics and operations it is not. We have already signed hospitals globally, and recently started working with our first international customer, and the same problems exist everywhere. There was an article from the BBC, I think a week or two ago, about long surgery waiting lists in the UK, and how they are struggling to get those patients seen due to a lack of efficiency in these logistics. So that's the other piece I'm really excited about: not only the breadth of these problems wherever there's complexity of process, but also their global applicability.
Byron: The exciting thing to me about this episode of Voices is that I have two people who are engineers, who understand AI, and who have a deep knowledge of health. I have several questions that sit at the intersection of all of that, which I would love to throw at you.
My first one is this: the human genome is however many billions of base pairs, which works out to something like 762MB of data. But if you look at what makes us different from, say, chimps, it may be one percent of that. So something like 7MB or 8MB of data is the code you need to build an intelligent brain, a person. Does that imply to you that artificial intelligence might have a breakthrough coming—that there might be a relatively straightforward and simple thing about intelligence that we're going to learn, that will supercharge it? Or is your view that, no, unfortunately, something like a general intelligence is going to be, you know, hunks of spaghetti code that kind of work together and pull off this AGI thing? Mudit, I'll ask you first.
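Byron's back-of-the-envelope numbers check out, assuming roughly 3.2 billion base pairs and two bits per base (four possible bases: A, C, G, T):

```python
base_pairs = 3.2e9            # approximate size of the human genome
bits = base_pairs * 2         # 4 possible bases -> 2 bits per base
mib = bits / 8 / 2**20        # bits -> bytes -> mebibytes
print(round(mib))             # -> 763, close to the "762MB" figure
print(round(mib * 0.01, 1))   # -> 7.6, the ~1% human/chimp difference
```

So the "762MB" and "7MB or 8MB" figures in the question are consistent with each other under this simple encoding.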
Mudit: Yeah—boy, that's a tough question. I will do my best in answering it. Do I believe that we'll be able to get a general-purpose AI with, like, 7MB or 8MB of code? There's a part of me that does believe in that simplicity, and does want to believe that's the answer. If you look at a lot of machine learning code, it's not the learning code itself that's actually that complex; it's the first mile and the last mile around it that end up taking the vast majority of the code. How do you get the training sets in, and how do you get the output out—that is what takes the majority of AI code today.
The fundamental learning code isn't that big today. I don't know if we'll solve general-purpose AI anytime soon—I'm certainly not holding my breath for it—but there's a part of me that feels and hopes that the fundamental concepts of learning and intelligence will not be that complicated at an individual, micro scale. Much like with ourselves, we'll be able to understand them, and there will be some beauty and harmony and symphony in how they all come together. It won't look complex in hindsight, but it will be extremely complex to figure out the first time around. That's purely speculative, but that would be my belief and my hunch right now.
Byron: Robert, do you want to add anything to that, or let that answer stand?
Robert: I'd be happy to. I think it's an interesting analogy to make. There are some parts of it that break down, and parts that parallel, between the human genome's complexity and utility and the human brain's. You know, when we think about the genome, you're right: it's several billion base pairs, but we only have about twenty thousand genes, a small minority of which actually code for protein, and a minority of those—a thousand to two thousand genes—that we understand to affect the human in a diseased way. There are a lot of base pairs we don't understand, which could be related to the structure the genome needs to do what it does in the human body, in the cell.
On the brain side, though, I think I would go with your latter response, which is that if you look at the human brain—and I've had the privilege of working with animal models and looking at human data—the brain is segmented into various functional units. For example, the auditory cortex is responsible for taking information from the ear and converting it to signals that are then pattern-recognized into, say, language, where the symbols of the words we're speaking are then processed by other parts of the cortex. Similarly, the hippocampus, which sits in, kind of, the oldest part of the brain, is responsible for learning. It is able to take various inputs from the visual, auditory, and other cortices, and then upload them from short-term to long-term memory. So the brain is functionally segmented and physically segmented.
I believe that a general-purpose AI will have the same kind of structure. It’s funny we have this thing called the AI effect where when we solve a problem with code or with machinery, it’s no longer AI. So, for example, natural language processing, some would consider now not part of AI because we’ve somewhat solved it, or speech recognition used to be AI, but now it’s an input to the AI, because the AI is thinking about more understanding than interpretation of audio signals and converting them into words. I would say what we’re going to see, which is similar to the human body encoded by these twenty thousand genes, is you will have functional expertise with, presumably, code that is used for segmenting the problem of creating a general AI.
Byron: A second question, then. You, Robert, waxed earlier about how big the possibilities are for using artificial intelligence in health. Of course, we know that the number of people living to one hundred keeps going up and up. The number of people who become supercentenarians—who've made it to one hundred and ten—is in the dozens. The number of people who have lived to one hundred and twenty-five is stubbornly fixed at zero. Do you believe—not even getting aspirational about "curing death"—that what's most likely to happen is that more of us make it to one hundred healthily? Or do you think one hundred and twenty-five is a barrier we'll break, and maybe somebody will live to one hundred and fifty? What do you think about that?
Robert: That's a really hard question. I would say that if I look at the trajectory of gains—from public health, primarily, with things like treated water, through to medicine—we've seen a dramatic increase in human longevity in the developed world, from reducing the number of children dying in childbirth, deaths which obviously lower the average, to extending life in the later years. And those gains have enormous effects on society. For example, when Social Security was invented, a minority of individuals would live to the age at which they would start accruing significant benefits; obviously, that's no longer the case.
So, to answer your question: there is no theoretical reason I can come up with why someone couldn't make it to one hundred and twenty-five. One hundred and fifty is obviously harder to imagine. But we understand the human cell at a certain level, and the genome, and the machinery of the human body, and we've been able to thwart the body's efforts to fatigue and expire a number of times now, whether it's cardiovascular disease or cancer. And we've studied longevity—"we" meaning the field, not myself. So I don't see any reason to say we will not have individuals reach one hundred and twenty-five, or even one hundred and fifty.
Now, what is the time course of that? Do we want that to happen and what are the implications for society? Those are big questions to answer. But science will continue to push the limits of understanding human function at the cellular and the physiologic level to extend the human life. And I don’t see a limit to that currently.
Byron: So, there is this worm called the nematode worm—a little bitty fella, about as long as a hair is wide—the most successful animal on the planet. Something like seventy percent of all animals are nematode worms. The brain of the nematode worm has 302 neurons, and for twenty years or so people have been trying to model those 302 neurons in a computer: the OpenWorm project. Even today they don't know if they can do it. That's how little we understand. It's not that we don't understand the human brain because it's so complex; we don't even understand how neurons themselves work.
Do you think that, one, we need to understand how our brains work—or how the nematode brain works, for that matter—to make strides towards an AGI? And, two, is it possible that a neuron has stuff going on down at the Planck level, so that it's as complicated as a supercomputer, making intelligence acquired that way incredibly difficult? Do either of you want to comment on that?
Mudit: It's funny you mention that. When I was at Stanford doing some work in engineering, one of the professors used to say that our study of the human brain is sort of like someone having a supercomputer and two electrodes, poking the electrodes in different places and trying to figure out how it works. I can't imagine ever figuring out how a computer works outside-in by just having two electrodes and seeing the different voltages coming out of it. So, I do see the complexity of it.
Is it necessary for us to understand how the neuron works? I'm not sure it is. But if you wanted to build a system that's resilient, redundant, and simple, and that can deliver that level of intelligence—there are hundreds of thousands of years of evolution that have helped us get to that solution, so it would, I think, be a critical input.
Without that, I see a different approach, which is what we are taking today—inspired by the brain, likely, but not the same. In our brain, when neurons fire—yes, we now have a similar transfer function in many of our neural networks for how a neuron fires—but for any kind of meaningful signal to come out, we have a population of neurons firing, which makes the signal more continuous, very redundant, and very resilient. It wouldn't fail even if some portion of those neurons stopped working. But that's not how our models work; that's not how our math works today. In finding the most optimized, elegant, and resilient way of doing it, I think it would be remiss not to take inspiration from what has evolved over a long, long period of time into perhaps one of the most efficient ways of having general-purpose AI. So, at least my belief is that we will have to learn from it. Our understanding is still largely simplistic, and I would hope and believe that we'll learn a lot more—and find out that each of those neurons perhaps communicates more, or does it in a way that brings the system to the optimal solution a lot faster than we would imagine.
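Mudit's point about population coding can be sketched with a toy simulation: a large group of noisy "neurons" all encoding the same value stays readable even when a sizable fraction of them fail outright, which a single unit would not. The neuron model here (a value plus Gaussian noise, with random failures) is deliberately simplistic and purely illustrative.

```python
import random

def population_readout(signal, n_neurons=1000, noise=0.5, fail_rate=0.3, seed=0):
    """Estimate `signal` by averaging the responses of the surviving neurons.
    Each neuron either fails entirely or reports the signal plus noise."""
    rng = random.Random(seed)
    responses = []
    for _ in range(n_neurons):
        if rng.random() < fail_rate:
            continue                          # this neuron has failed; no response
        responses.append(signal + rng.gauss(0.0, noise))
    return sum(responses) / len(responses)

# Even with ~30% of neurons dead and heavy per-neuron noise,
# the population average stays close to the true signal of 1.0.
estimate = population_readout(1.0)
print(estimate)
```

Averaging over the surviving population shrinks the noise roughly by the square root of the population size, which is the redundancy-and-resilience property Mudit contrasts with today's single-transfer-function models.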
Robert: Just to add to that, I agree with everything Mudit said. Do we need to study the neuron and neural networks in vivo, in animals? The answer to that is, as humans, we do. I believe we have an innate curiosity to understand ourselves. Whether it’s funded or not, the curiosity to understand who we are, where we came from, and how we work will drive that, just like it has driven fields as diverse as astronomy and aviation.
But do we need to understand it at the level of detail you’re describing? For example, what exactly happens at the synapse stochastically, where neurotransmitters find the receptors that open ion channels that change the resting potential of a neuron, such that the signal propagates down the axon and, at the end of that neuron, another neurotransmitter is released? I don’t think so. We learn a lot, as Mudit said, from understanding how these highly developed and trained systems we call animals and humans work, but they were molded over large periods of time for specific survival tasks, to live in the environment that they live in.
The systems we’re building, or Mudit’s building, and others, are designed for other uses, and so we can take, as he said, inspiration from them, but we don’t need to model how a nematode thinks to help a hospital work more effectively. In the same way, there are two ways, for example, someone could fly from here in San Francisco, where I’m sitting, to, let’s say, Los Angeles. You could be a bird, which is a highly evolved flying creature that has sensors and, clearly, neural networks that are able to control wing movement, and effectively the wing surface area, to create lift, etcetera. Or, you could build a metal tube with jets on it that gets you there as well. They have different use cases and different criteria.
The airplane is inspired by birds. The cross-section of an airplane wing is designed like a bird’s wing, in that one pathway is longer than the other, which changes the pressure above and below the wing and allows flight to occur. But clearly, the rest of it is very different. The inspiration drove aviation to a solution that shares many parts with what birds have, but it’s incredibly different, because the solution was to the problem of transporting humans.
Mudit, earlier you said we’re not going to have an AGI anytime soon. I have two questions to follow up on that thought. The first is that among people in the tech space there’s a range of something like five to five hundred years as to when we might get a general intelligence. I’m curious, one, why do you think there’s such a range? And, two, I’m curious, from both of you: if you were going to throw a dart at that dartboard, where would you place your bet, to mix a metaphor?
Mudit: I think in the dart metaphor, chances of being right are pretty low, but we’ll give it a shot. I think part of it, at least I ask myself, is the bar we hold for AGI too high? At what point do we start feeling that a collection of special-purpose AIs that are welded together can start feeling like an AGI and is that good enough? I don’t know the answer to that question and I think that’s part of what makes the answer harder. Similar to what Robert was saying where the more problems we solve, the more we see them as algorithmic and less as AI.
But I do think there’s a point, at least in my mind, where I can see an AI starting to question the constraints of the problem and the goal it’s trying to maximize. That’s where true creativity for humans comes from: when we break rules and when we don’t follow the rules we were given. And that’s also where the scary part of AI comes from, because it can then do that at scale. I don’t see us close to that today. If I had to guess, on this exponential curve, I’m probably not going to pick out the right point, but four to five decades is when we start seeing enough of the framework and, maybe, we can see some tangible general-purpose AI come to form.
Robert, do you want to weigh in, or will you take a pass on that one?
Robert: I’ll weigh in quickly. We see this in all of investing, actually, whether it’s augmented reality, virtual reality, or stenting and robotics in medicine: we as investors have to work hard not to overestimate the effect of technology now, and not to underestimate the effect of technology in the long run. This came from, I believe, a Stanford professor, Roy Amara, who unfortunately passed away a while ago. That idea of saying, “Let’s not overhype it, but it’s going to be much more profound than we can even imagine today,” puts my estimate, and it depends how you define general AI, which is probably not worth doing, at within fifteen to twenty years.
We have this brain, the only general intelligence that we know of. Then we have the mind, a definition of which I think everybody can agree to: the mind is a set of abilities that don’t seem, at first glance, to be something an organ could do, like creativity or a sense of humor. And then we have consciousness; we actually experience the world. A computer can measure temperature, but we can burn our finger and feel it. My questions are these: we would expect an AGI to have a “mind,” we would expect it to be creative, but do you think, one, that consciousness is required for general intelligence, and, to follow up on that, do you believe computers can become conscious? That they can experience the world as opposed to just measuring it?
Mudit: That’s a really hard one too. In my mind, what’s most important, and there’s kind of a grey line between the two, is creativity, the element of surprise. The more an AI can surprise you, the more you feel like it is truly intelligent. So, that creativity is extremely important. But the reason I said there’s kind of a path from one to the other is, and this is a very philosophical question of how to define consciousness, that in many ways it’s when we take a specific task that is given to us but start asking about the larger objective, the larger purpose; that’s what truly distinguishes a being or a person as conscious.
Until AIs are able to be creative and break the bounds of the specific rules, or the specific expected behavior they’re programmed to follow, the path to consciousness is very, very hard. So, I feel like creativity and surprising us is probably the first piece, which is also the one that honestly scares us as humans the most, because that’s when we feel a sense of losing control over the AI. I don’t think true consciousness is necessary, but the two might evolve simultaneously, and they might go hand in hand.
Robert: I would just add one other thought there. I spent many hours in college having this debate of what consciousness is, you know, where is the seat of consciousness? Anatomists for centuries have dissected and dissected: is it this gland, or is it that place, or is it an organized effect of the structure and function of all of these parts? I think that’s why we need to study the brain, to be fair.
One of the underlying efforts there is to understand consciousness. What is it that makes a physical entity able to do what you said, to experience what you said? More than just experiencing a location, experiencing things like love. How could a human do that if they were a machine? Can a machine be capable of empathy?
But beyond that, thinking practically as an investor and as a physician, I frankly don’t know if I care whether the machine is conscious or not. I care more about who I assign responsibility to for the actions and thoughts of that entity. So, for example, if it makes a decision that harms someone, or makes the wrong diagnosis, what recourse do I have? With consciousness in human beings, well, we believe in free will, and that’s where all of our notions of human justice come from. But if the machine is deterministic, then a higher power, maybe the human that designed it, is ultimately responsible. For me, it’s a big question about responsibility with respect to these AIs, and less about whether they’re conscious or not. If they’re conscious, then we might be able to assign responsibility to the machine, but then how do we penalize it, financially or otherwise? If they’re not conscious, then we probably need to assign responsibility to the owner, or the person that configured the machine.
I asked earlier why there is such a range of beliefs about when we might get a general intelligence, but the other interesting thing, which you’re kind of touching on, is that there’s a wide range of belief about whether we would want one. You’ve got the Elon Musk camp of summoning the demon, Professor Hawking saying it’s an existential threat, Bill Gates saying, “I don’t understand why more people aren’t worried about it,” and so forth. On the other end, you have people like Andrew Ng, who said, “That’s like worrying about overpopulation on Mars,” and Rodney Brooks the roboticist, and so forth, who dismiss those worries, almost with an eye-roll. What are the core assumptions of those two groups, and why are they so different from each other in their regard for this technology?
Mudit: To me, it boils down to this: the same things that make me excited about the large-scale potential of general-purpose AI are the things that make me scared. Going back to creativity for a second: creativity will come from whether an AI that is told to maximize an objective function under constraints is allowed to question the constraints and the problem itself. If it is allowed to do that, that’s where true creativity would come from, right? That’s what a human would do. I might give someone a task or a problem, but then they might come back and question it, and that’s where true creativity comes from. But the minute we allow an AI to do that is also when we lose that sense of control. We don’t have that sense of control over humans today either, but what freaks us out about AI is that AI can do that at very, very rapid scale, at a pace that we may not, even as a society, catch up to, recognize, and be able to control or regulate, which we can in the case of humans. I think that’s both the exciting part and the fear; they really go hand in hand.
The pace at which AI can bring about change once those constraints are loosened is something we haven’t seen before. And we already see, in today’s environment, our inability as a society to keep pace with how fast technology is changing, from a regulation and framework standpoint. Once that happens, this will be called into question even more. That’s probably why, for many in the camp of Elon Musk, Sam Altman, and others, the part of their ask that resonates with me is that we probably should start thinking about how we will tackle the problem and what framework we should have in place earlier, so we have time as a society to wrestle with it before it’s right in our face.
Robert: I would add to that with four things. There are four areas that I think define the concern a bit, and a couple of them were mentioned by Mudit: speed, that is, the speed of computation and of affecting the world the machine is in; scalability; the fact that it can affect the physical environment; and the fact that machines, as we currently conceive them, do not have morals or ethics, however you define those. Something that’s super fast, that’s highly scaled, that can affect the physical world with no ethics or morality, is a scary thing, right? That is a truck on 101 with a robotic driver that is going to go 100 MPH and doesn’t care what it hits. That’s the scary part of it. But there’s a lot of technology that looks like that. If you are able to design it properly and constrain it, it can be incredibly powerful. It’s just that the combination of those four areas could be very detrimental to us.
So, to pull the conversation back closer to the here and now, I want to ask each of you: what’s a breakthrough in artificial intelligence in the medical profession that we may not have heard about, because there are so many of them? And then tell me something, and I’ll put both of you on the spot on this, that you think we’re going to see in, like, two or three years; something on a time horizon where we can be very confident we’re going to see it. Mudit, why don’t you start: what is something we may not know about, and what is something that will happen pretty soon, do you think, in AI and medicine?
Mudit: I think, and this goes back to what I was saying, the breakthrough is less in the machine learning itself than in the operationalization of it. The ability to learn exists, if we have the first mile and the last mile solved. But in the real, complex world of high emotions and messy human-generated data, the ability not only to predict but, in the moment, to prescribe and persuade people to take action is what I’m most excited about and am starting to see happen today. I think it’s going to be transformative in the ability of existing machine learning prowess to actually impact our health and our healthcare system. It may not be, Byron, exactly what you’re looking for in terms of a breakthrough, but I think it’s a breakthrough of a different type. It’s not an algorithmic breakthrough, but an operationalization breakthrough, which I’m super excited about.
As for what I think we could start doing in two to three years that we perhaps don’t do as well now: one very clear case is places where there are high volumes of structured data that we require humans to pore through, and I know Robert spent a lot of time on this, so I’ll leave it to him, around radiology, around EKG data, around these huge quantities of structured data that are just impossible to monitor. There are so many poor-quality outcomes, deaths, and bad events like that which could be caught if it were humanly feasible to monitor all that data. I believe we are two to three years away from starting to meaningfully bend that, both process-wise, logistically, and from a diagnosis standpoint. And it will be basic stuff, stuff that we have known for a long time that we should do. But, you know, as the classic saying goes, it takes seventeen years in healthcare to go from knowing something should be done to doing it at scale; I think this is where we will start rapidly shortening that cycle time and seeing vast effects of it in the healthcare system.
Robert: I’ll give you my two, briefly. It’s hard to come up with something that you may not have heard about, Byron, with your background, so I’ll think more about the general audience. First of all, I agree with Mudit: in the two to three year time frame, what’s obvious is that any signal processing in healthcare that is being done by humans is going to be rapidly moved to a computer. iRhythm, as an example, is a company trading at over a billion dollars a little over a year out from its IPO that does this for cardiology data, EKG data acquired through a patch. There are over forty companies that we have tracked in the radiology space that are prereading, or in some sense providing a pre-diagnostic read of, CTs, MRIs, and x-rays for human radiology overreads for diagnosis. That is absolutely going to happen in the next two to five years. Companies like GE and Philips are leading it, and there are lots of startups doing work there.
I think the area that might not be so visible to the general public is the use of machine learning on human conversation. Imagine therapy, for example; therapy is moving to teletherapy, telemedicine. Those are digitized conversations; they can be recorded and translated into language symbols, which can then be evaluated. Computational technology is being developed, and is available today, that can look at those conversations to decipher whether, for example, someone is anxious today, or depressed, needs more attention, or may need a cognitive behavioral therapy intervention that is compatible with their state. And that allows not only the scaling of signal processing, but the scaling of the human labor that provides psychological therapy to these patients. We’re starting to look at conversations this way; it’s already being done in the management of sales forces, with companies using AI to monitor sales calls and coach sales reps on how to position things to more effectively convert a sale, and we’re seeing that in healthcare as well.
All right, well, that is all very promising; it kind of lifts up our day to know that there’s stuff coming and it’s going to be here relatively soon. I think that’s probably a good place to leave it. As I look at our timer, we are out of time, but I want to thank both of you for taking the time out of, I’m sure, your very busy days, to have this conversation with us and let us in on a little bit of what you’re thinking and what you’re working on. Thank you.
Mudit: Thank you very much, thanks, Byron.
Robert: You’re welcome.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.