Five Questions for Melissa Kramer of Live UTI Free

While the notion of healthcare technology may be in the spotlight with AI, blockchain and all that, the coalface of care requires building an understanding of patient needs and responding in an appropriate way. Today, in many cases, even some of the most common conditions are subject to a dearth of information, or worse, misinformation that results in poor diagnosis and treatment. I learned this when working with a London hospital on care pathways for DVT; I was naturally interested in the work of Live UTI Free, which offers a clear information resource for patients, practitioners and indeed, researchers.
Read on to learn from Melissa Kramer, founder, how technological innovation need not chase buzzwords or jump on bandwagons, and what lessons can be learned across healthcare diagnostics and beyond.

1. Let’s set some context — what’s the purpose behind Live UTI Free?

We founded Live UTI Free to address a gap in the sharing of evidence-based information with sufferers of recurrent and chronic urinary tract infection (UTI).
To provide some context for why closing that gap is important, 1 in 2 females will experience a UTI in their lifetime, and of those, up to 44% will suffer a recurrence. With each recurrence, the chance of another increases. For many, recurrent UTI is debilitating, and the impact extends to the economy, with billions spent each year on UTI alone.
Despite how common UTI is, there has never been an accurate, standard method of UTI testing.
Although the impact of this issue is significant on many levels, UTI remains an area of women’s health that suffers from steadfastly held misinformation on both sides of the patient/practitioner relationship.
We aim to act as a conduit of information between researchers and patients, bridging gaps in knowledge where possible and shedding light on potential avenues for better diagnosis and treatment. Ultimately, our goal is to use our insights to advance research and development in this space.

2. How do you go about collating and delivering information, or is it ‘simply’ that even the most straightforward info is difficult to find today?

We created our platform because we identified how difficult it was for patients to find straightforward information online, and we wanted to fix this. In order to do so, we first had to collect information from patients themselves, to discover what it was they were looking for and how.
We spent more than 6 months interviewing patients and learning about their online behavior before we put a single piece of information online. This activity alone meant we had collected more patient-perspective data on the subject than most recent studies.
Once we understood the typical patient journey, and where the glitches were, we started to collate scientific evidence and to interpret it into everyday language. We do this with the help of researchers, but the process is hardly straightforward.
If we relied on peer-reviewed studies alone, there would be little we could offer our audience in terms of new diagnosis and treatment options. Instead, we’ve developed our offering via a combination of studies, and direct input from practitioners, researchers and pharmacists.
This requires a continuous loop of interviews, academic research, and amendments to the information we provide. And on top of that is another layer of patient feedback that directly shapes what we offer on our site.
Long story short: straightforward info, particularly on health topics, is difficult to find. But once you do find it, you also have to make sure it’s useful to whoever it’s intended for.

3. What mechanisms do you have to do this, beyond the online site and do you think your user-centric approach has been worthwhile?

Aside from the patient interviews mentioned in the last question, we also launched a patient quiz at the same time as our site. The quiz has served two purposes:

  1. First, it has allowed us to help direct users to the most pertinent information, based on their current knowledge and experience.
  2. At the same time, we have collected thousands of data points that, when aggregated, provide incredible insight into patient experience, why people use our site, and what we can do better. Our approach has resulted in extremely fast growth in traffic to the site, and daily positive feedback.

Beyond the online site, we have developed a network of scientists, practitioners and other medical professionals.
We’re also in regular contact with commercial companies that are working on products or services that address specific aspects of recurrent UTI.
By maintaining a user-centric approach and fostering relationships with other key stakeholders, we hope to provide value that extends beyond problem-solving for individual patients. We have already begun to steer change for those in our network.  

4. What challenges have you faced starting up Live UTI Free, and how have you overcome them?

We are, and always have been, acutely aware of the position we hold in between patients and practitioners, and information that connects the two. Our primary concern revolves around how to achieve our goals, while adhering to the ethical standards we’ve placed upon ourselves. This in itself is a challenge.
We look at everything we do through this ethics lens. We question how any potential partnership or revenue opportunity fits within our own ethical guidelines, and we carefully consider data privacy when it comes to our patient quiz, interviews and correspondence we receive.
To help overcome this challenge we’ve put in place a funding policy and community guidelines, as well as implementing an ethics advisory board to help with these decisions.
A further challenge has been navigating the line between neutral accuracy and providing information that is actionable for our audience. We don’t provide recommendations of any kind, but we know through our research that patients want a workflow, rather than a ‘choose your own adventure’.
We’ve partially overcome this by constructing our content in such a way that the user is guided through a logical sequence. The rest is a work in progress, as the scientific study required to truly point someone towards action steps for recurrent UTI is still in the future. When it exists, we’ll be ready to relay the information to our audience.

5. How do you see things moving into the future?

The data we have collected via our patient quiz is one of a kind, and we’re now starting to use these insights to help guide product R&D for this patient population.
We are currently assessing grant opportunities, in collaboration with researchers, with a focus on patient perspective data. Our reach would make us a valuable partner in larger research studies and clinical trials, and we’re open to discussion in this regard. We plan to launch an evidence-based e-commerce site next year, bringing our many user requests for this to fruition.
Live UTI Free will continue as a user-centric patient advocacy organisation, existing to support our fast-growing community, which includes sufferers of chronic and recurrent UTI, practitioners, and researchers. Readers can get in touch if interested in:

  • Patient perspective data and patient experience
  • Patient recruitment for clinical trials
  • Product development for recurrent UTI sufferers
  • Our practitioner and researcher network


5 questions for… Nuance – does speech recognition have a place in healthcare?

Speech recognition has been on the brink of major success for decades, or so it feels. Rather than ask a set of generic “when will it be mainstream” questions, I was keen to catch up with Martin Held, Senior Product Manager, Healthcare at Nuance, to find out how things stood in this specific and highly relevant context.

  1. How do you see the potential for speech recognition in the healthcare sector?

Right now, the most gain will be from general documentation, enabling people to dictate instead of type, to get text out faster. In some areas of healthcare, things are pretty structured – you have to fill in forms electronically, with drop-down lists and so on. That’s not a primary application for speech, but for anything that requires free text, there’s no comparison or alternative. Areas where handwritten notes are put into notes fields are a good application. Discharge notes can also be very wordy.
From a use case perspective, we’ve done analysis on how much time teams are spending on documentation and it’s huge — three quarters of medical practices are spending half of their time on documentation alone. In the South Tees Emergency department, we did a study where use of speech recognition reduced documentation time by 40%. In another study with Dukinfield, a smaller practice, introducing our technology enabled them to see four more patients per day (about a 10% increase).

  2. What has happened over the past 5 years in terms of performance improvements and innovation?

In these scenarios, it’s a question of “can it work, can it perform” across a range of input devices. General speech recognition has improved so much that we are in the upper 90% range straight out of the gate. None of our products now require training, thanks to new technology based on deep neural networks and machine learning.
In healthcare, we have also added cloud computing and changed the architecture: we put a lightweight client on the end-point machine or device, which streams audio to a back-end recognition server hosted in Microsoft Azure. We recently announced the general availability of Dragon Medical One, our cloud-based recognition product.
Still, connectivity is a big issue, in particular in mobile situations such as a community nurse on the road: it’s not always possible to use recognition back in the car if the mobile signal is poor, for example. We are looking at technology that could record speech, then transcribe it later.

  3. How have you addressed the privacy and risk implications?

We are certified to connect to the N3 network, allowing NHS entities to connect in line with requirements around governance and privacy, for example patient confidentiality. Offering a service through the NHS N3 network requires an Information Governance Statement of Compliance and submission of the IG Toolkit through NHS Digital; this involves a relatively long and detailed certification process, covering disaster recovery, Nuance’s internal processes and practices, which employees have access, and so on.
We also offer input via the public Internet, as encryption and other technologies are secure enough for customers to connect through these means. So, for example, we can use mobile phones as an input device. We are not trying to build mobile medical devices (we know how difficult that is), but we are looking to replace the keyboard, which is not a medical device!
As a matter of best practice, the doctor is still required to sign the discharge or confirm an entry in the electronic medical record system, whether it has been typed or dictated. So generated text is always a reference, and that will need to stay the case. It will be more than five years before the computer can be seen as taking this responsibility from the doctor; similarly, advice can only be guidance.

  4. How do you see the market need for speech recognition maturing in healthcare?

Right now we’re still very much in an enablement situation with our customers, helping with their documentation needs. From a recognition perspective we can see the potential of moving from enablement to augmentation, making it simpler and hands-free, moving to more of a virtual assistant approach for a single person. In the longer-term, further out, we have the potential to do that for multiple people at the same time, for example a clinician, parent and child.
We’re also looking at the coding side of things: categorising disease, treatment, length of stay and so on from patient documentation. Codes are used for multiple things: reimbursement from insurers, negotiations between GPs and primary and secondary care about which services to provide in future, and negotiations between commissioners and trusts on payment levels. In primary care, doctors do the coding, but in secondary care it’s done by a coder looking through a record after the patient’s discharge. If data is incomplete or non-specific, trusts can miss out on funding. Nuance already offers Natural Language Understanding-based coding products in the US, and these are being evaluated against the specifics of the healthcare market in the UK.
So we want to help turn documentation into something that can be easily analysed. Our technology can not only recognise what you say: with natural language understanding we can analyse the text and match it against codes, potentially opening the door to offering prompts. For example, if a doctor diagnoses COPD, the clinician may need to ask whether the patient is a smoker, which will have a consequence for the code.

  5. How does Nuance see the next 5 years panning out, in terms of measuring success for speech recognition?

We believe speech recognition is ready to deliver a great deal of benefit to healthcare, gaining efficiency and freeing up clinical staff. In terms of the future, we recently showed a prototype of a virtual assistant that combines a lot of technologies, including biometrics, complete speech control, text analysis and meaning extraction, and also appropriate selection, so the machine can distinguish between a command and something I just wanted to say.
This combination should make the reaction a lot more human — we call this conversational artificial intelligence. Another part of this is about making text to speech as human as possible. Then combining that with cameras and microphones in the environment, for example pointing at something and saying, give me more information about ‘this’. That’s all longer term, but the virtual assistant and video are things we are working on.
My take: healthcare needs all the help it can get
So, does speech recognition have a place? Over the past couple of decades of use, we have learned that we generally do not like talking into thin air, and particularly not to a computer. The main change of recent years, the reduction in training time, has done little to shift this very psychological blocker, which means that speech recognition remains in a highly useful, yet relatively limited, niche of auto-transcription.
Turning specifically to the healthcare industry, a victim of its own science-led success: it is difficult to think of an industry vertical in which staff efficiency is more important. In every geography, potential improvements to patient outcomes are being stymied by a lack of funds, symptomised by waiting lists, bed shortages and so on, while the sector is burdened by the weight of ever-increasing bureaucracy.
Even if speech recognition could knock one or two percentage points off the time taken to execute a clinical pathway, the overall savings could be massive. Greater efficiency also opens the door to higher potential quality, as clinicians can focus on ‘the job’ rather than the paperwork.
Looking further ahead, use of speech recognition beyond note-taking also links to the potential for improved diagnosis, through augmented decision-making, and indeed improved patient safety, as technology provides more support to what is still a highly manual industry. This will take time, but our general habits are changing as the likes of Alexa and Siri make us more comfortable talking to inanimate objects.
Overall, progress may be slow for speech recognition particularly in healthcare, but it is heading in the right direction. One day, our lives might depend on it.

Voices in AI – Episode 31: A Conversation with Tasha Nagamine

In this episode, Byron and Tasha talk about speech recognition, AGI, consciousness, Droice Lab, healthcare, and science fiction.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Tasha Nagamine. She’s a PhD student at Columbia University; she holds an undergraduate degree from Brown and a Master’s in Electrical Engineering from Columbia. Her research is in neural net processing of speech and language, and the potential applications of speech processing systems through, here’s the interesting part, biologically-inspired deep neural network models. As if that weren’t enough to fill up a day, Tasha is also the CTO of Droice Labs, an AI healthcare company, which I’m sure we will chat about in a few minutes. Welcome to the show, Tasha.
Tasha Nagamine: Hi.
So, your specialty, it looks like, coming all the way up, is electrical engineering. How do you now find yourself in something which is often regarded as a computer science discipline, which is artificial intelligence and speech recognition?
Yeah, so it’s actually a bit of an interesting meandering journey, how I got here. My undergrad specialty was actually in physics, and when I decided to go to grad school, I was very interested, you know, I took a class and found myself very interested in neuroscience.
So, when I joined Columbia, the reason I’m actually in the electrical engineering department is that my advisor is an EE, but what my research and what my lab focuses on is really in neuroscience and computational neuroscience, as well as neural networks and machine learning. So, in that way, I think what we do is very cross-disciplinary, so that’s why the exact department, I guess, may be a bit misleading.
One of my best friends in college was an EE, and he said that every time he went over to like his grandmother’s house, she would try to get him to fix like the ceiling fan or something. Have you ever had anybody assume you’re proficient with a screwdriver as well?
Yes, that actually happens to me quite frequently. I think I had one of my friends’ landlords one time, when I said I was doing electrical engineering, thought that that actually meant electrician, so was asking me if I knew how to fix light bulbs and things like that.
Well, let’s start now talking about your research, if you would. In your introduction, I stressed biologically-inspired deep neural networks. What do you think, do we study the brain and try to do what it does in machines, or are we inspired by it, or do we figure out what the brain’s doing and do something completely different? Like, why do you emphasize “biologically-inspired” DNNs?
That’s actually a good question, and I think the answer to that is that, you know, researchers and people doing machine learning all over the world actually do all of those things. So, the reason that I was stressing a biologically-inspired—well, you could argue that, first of all, all neural networks are in some way biologically-inspired; now, whether or not they are a good biologically-inspired model, is another question altogether—I think a lot of the big, sort of, advancements that come, like a convolutional neural network was modeled basically directly off of the visual system.
That being said, despite the fact that there are a lot of these biological inspirations, or sources of inspiration, for these models, there’s many ways in which these models actually fail to live up to the way that our brains actually work. So, by saying biologically-inspired, I really just mean a different kind of take on a neural network where we try to, basically, find something wrong with a network that, you know, perhaps a human can do a little bit more intelligently, and try to bring this into the artificial neural network.
Specifically, one issue with current neural networks is that, usually, unless you keep training them, they have no way to really change themselves, or adapt to new situations, but that’s not what happens with humans, right? We continuously take inputs, we learn, and we don’t even need supervised labels to do so. So one of the things that I was trying to do was to try to draw from this inspiration, to find a way to kind of learn in an unsupervised way, to improve your performance in a speech recognition task.
So just a minute ago, when you and I were chatting before we started recording, a siren came by where you are, and the interesting thing is, I could still understand everything you were saying, even though that siren was, arguably, as loud as you were. What’s going on there, am I subtracting out the siren? How do I still understand you? I ask this for the obvious reason that computers seem to really struggle with that, right?
Right, yeah. And actually how this works in the brain is a very open question and people don’t really know how it’s done. This is actually an active research area of some of my colleagues, and there’s a lot of different models that people have for how this works. And you know, it could be that there’s some sort of filter in your brain that, basically, sorts speech from the noise, for example, or a relevant signal from an irrelevant one. But how this happens, and exactly where this happens is pretty unknown.
But you’re right, that’s an interesting point you make, is that machines have a lot of trouble with this. And so that’s one of the inspirations behind these types of research. Because, currently, in machine learning, we don’t really know the best way to do this and so we tend to rely on large amounts of data, and large amounts of labeled data or parallel data, data corrupted with noise intentionally, however this is definitely not how our brain is doing it, but how that’s happening, I don’t think anyone really knows.
Let me ask you a different question along the same lines. I read these stories all the time that say that, “AI has approached human-quality in transcribing speech,” so I see that. And then I call my airline of choice, I will not name them, and it says, “What is your frequent flyer number?” You know, it’s got Caller ID, it should know that, but anyway. Mine, unfortunately, has an A, an H, and an 8 in it, so you can just imagine “AH8H888H”, right?
It never gets it. So, I have to get up, turn the fan off in my office, take my headset off, hold the phone out, and say it over and over again. So, two questions: what’s the disconnect between what I read and my daily experience? Actually, I’ll give you that question and then I have my follow up in a moment.
Oh, sure, so you’re saying, are you asking why it can’t recognize your—
But I still read these stories that say it can do as good of a job as a human.
Well, so usually—and, for example, I think, recently, there was a story published about Microsoft coming up with a system that had reached human parity in speech recognition—well, usually when you’re saying that, you have it on a somewhat artificial task. So, you’ll have a predefined data set, and then test the machine against humans, but that doesn’t necessarily correspond to a real-world setting, they’re not really doing speech recognition out in the wild.
And, I think, you have an even more difficult problem, because although it’s only frequent flyer numbers, you know, there’s no language model there, there’s no context for what your next number should be, so it’s very hard for that kind of system to self-correct, which is a bit problematic.
So I’m hearing two things. The first thing, it sounds like you’re saying, is that they’re all cooking the books, as it were: the story says something that I interpret one way, but if you dig down deep, it’s different. The other thing you seem to be saying is that, even though there’s only thirty-six things I could be saying, because there’s no natural flow to that language, it can’t say, “oh, the first word he said was ‘the’ and the third word was ‘ran;’ was that middle word ‘boy’ or ‘toy’?” It could say, “Well, toys don’t run, but boys do, therefore it must be, ‘The boy ran.'” Is that what I’m hearing you say, that a good AI system’s going to look contextually and get clues from the word usage in a way that a frequent flyer system doesn’t?
Right, yeah, exactly. I think this is actually one of the fundamental limitations of, at least, acoustic modeling, or, you know, the acoustic part of speech recognition, which is that you are completely limited by what the person has said. So, you know, maybe it could be that you’re not pronouncing your “t” at the end of “eight,” very emphatically. And the issue is that, there’s nothing you can really do to fix that without some sort of language-based information to fix it.
And then, to answer your first question, I wouldn’t necessarily call it “cooking the books,” but it is a fact that, you know, really the data that you have to train on and test on and to evaluate your metrics on, often, almost never really matches up with real-world data, and this is a huge problem in the speech domain, it’s a very well-known issue.
You take my 8, H, and A example—which you’re saying is a really tricky problem without context—and, let’s say, you have one hundred English speakers, but one is from Scotland, and one could be Australian, and one could be from the east coast, one could be from the south of the United States; is it possible that the range of how 8 is said in all those different places is so wide that it overlaps with how H is said in some places? So, in other words, it’s a literally insoluble problem.
It is, I would say it is possible. One of the issues is that you would then need a separate model for different dialects. I don’t want to dive too far into the weeds with this, but at the root of a speech recognition system, the fundamental linguistic or phonetic unit is often the phoneme, which is the smallest speech sound, and people even argue about whether these actually exist, what they actually mean, and whether this is a good unit to use when modeling speech.
That being said, there’s a lot of research underway, for example, sequence to sequence models or other types of models that are actually trying to bypass this sort of issue. You know, instead of having all of these separate components modeling all of the acoustics separately, can we go directly from someone’s speech and from there exactly get text. And maybe through this unsupervised approach it’s possible to learn all these different things about dialects, and to try to inherently learn these things, but that is still a very open question, and currently those systems are not quite tractable yet.
I’m only going to ask one more question on these lines—though I could geek out on this stuff all day long, because I think about it a lot—but really quickly, do you think you’re at the very beginning of this field, or do you feel it’s a pretty advanced field? Just the speech recognition part.
Speech recognition, I think we’re nearing the end of speech recognition to be honest. I think that you could say that speech is fundamentally limited; you are limited by the signal that you are provided, and your job is to transcribe that.
Now, where speech recognition stops, that’s where natural language processing begins. As everyone knows, language is infinite, you can do anything with it, any permutation of words, sequences of words. So, I really think that natural language processing is the future of this field, and I know that a lot of people in speech are starting to try to incorporate more advanced language models into their research.
Yeah, that’s a really interesting question. So, I ran an article on Gigaom, where I had an Amazon Alexa device on my desk and I had a Google Assistant on my desk, and what I noticed right away is that they answer questions differently. These were factual questions, like “How many minutes are in a year?” and “Who designed the American flag?” They had different answers. And you can say it’s because of an ambiguity in the language, but if this is an ambiguity, then all language is naturally ambiguous.
So, the minutes in a year answer difference was that one gave you the minutes in 365.24 days, a solar year, and one gave you the minutes in a calendar year. And with regard to the flag, one said Betsy Ross, and one said the person who designed the fifty-star configuration on the current flag.
And so, we’re a long way away from the machines saying, “Well, wait a second, do you mean the current flag or the original flag?” or, “Are you talking about a solar year or a calendar year?” I mean, we’re really far away from that, aren’t we?
Yeah, I think that’s definitely true. You know, people really don’t understand how even humans process language, how we disambiguate different phrases, how we find out what are the relevant questions to ask to disambiguate these things. Obviously, people are working on that, but I think we are quite far from true natural language understanding, but yeah, I think that’s a really, really interesting question.
There were a lot of them, “Who invented the light bulb?” and “How many countries are there in the world?” I mean the list was endless. I didn’t have to look around to find them. It was almost everything I asked, well, not literally, “What’s 2+2?” is obviously different, but there were plenty of examples.  
To broaden that question, don’t you think if we were to build an AGI, an artificial general intelligence, an AI as versatile as a human, that’s table stakes, like you have to be able to do that much, right?
Oh, of course. I mean, I think that one of the defining things that makes human intelligence unique, is the ability to understand language and an understanding of grammar and all of this. It’s one of the most fundamental things that makes us human and intelligent. So I think, yeah, to have an artificial general intelligence, it would be completely vital and necessary to be able to do this sort of disambiguation.
Well, let me ratchet it up even another one. There’s a famous thought experiment called the Chinese Room problem. For the benefit of the listener, the setup is that there’s a person in a room who doesn’t speak any Chinese, and the room he’s in is full of this huge number of very specialized books; and people slide messages under the door to him that are written in Chinese. And he has this method where he looks up the first character and finds the book with that on the spine, and goes to the second character and the third and works his way through, until he gets to a book that says, “Write this down.” And he copies these symbols, again, he doesn’t know what the symbols are; he slides the message back out, and the person getting it thinks it’s a perfect Chinese answer, it’s brilliant, it rhymes, it’s great.
So, the thought experiment is this, does the man understand Chinese? And the point of the thought experiment is that this is all a computer does—it runs this deterministic program, and it never understands what it’s talking about. It doesn’t know if it’s about cholera or coffee beans or what have you. So, my question is, for an AGI to exist, does it need to understand the question in a way that’s different than how we’ve been using that word up until now?
That’s a good question. I think that, yeah, to have an artificial general intelligence, I think the computer would have to, in a way, understand the question. Now, that being said, what is the nature of understanding the question? How do we even think, is a question that I don’t think even we know the answer to. So, it’s a little bit difficult to say, exactly, what’s the minimum requirement that you would need for some sort of artificial general intelligence, because as it stands now, I don’t know. Maybe someone smarter than me knows the answer, but I don’t even know if I really understand how I understand things, if that makes sense to you.
So what do you do with that? Do you say, “Well, that’s just par for the course. There’s a lot of things in this universe we don’t understand, but we’re going to figure it out, and then we’ll build an AGI”? Is the question of understanding just a very straightforward scientific question, or is it a metaphysical question that we don’t really even know how to pose or answer?
I mean, I think that this is a good question, and whether we’re going about it the right way is something that remains to be seen. But I think one way we can try to ensure that we’re not straying off the path is by going back to these biologically-inspired systems. Because we know that, at the end of the day, our brains are made up of neurons, synapses, connections, and there’s nothing very unique about this; it’s physical matter, and there’s no theoretical reason why a computer cannot do the same computations.
So, if we can really understand how our brains are working, what computations they perform, how we have consciousness, then I think we can start to get at those questions. Now, that being said, in terms of where neuroscience is today, we really have a very limited idea of how our brains actually work. But I think it’s through this avenue that we stand the highest chance of success in trying to emulate, you know—
Let’s talk about that for a minute, I think that’s a fascinating topic. So, the brain has a hundred billion neurons that somehow come together and do what they do. There’s something called a nematode worm—arguably the most successful animal on the planet, ten percent of all animals on the planet are these little worms—and they have, I think, 302 neurons in their brain. And there’s been an effort underway for twenty years to model that brain—302 neurons—in the computer and make a digitally living nematode worm, and even the people who have worked on that project for twenty years don’t know if that’s possible.
What I was hearing you say is, once we figure out what a neuron does—this reductionist view of the brain—we can build artificial neurons, and build a general intelligence, but what if every neuron in your brain has the complexity of a supercomputer? What if they are incredibly complicated things that have things going on at the quantum scale, that we are just so far away from understanding? Is that a tenable hypothesis? And doesn’t that suggest, maybe we should think about intelligence a different way because if a neuron’s as complicated as a supercomputer, we’re never going to get there.
That’s true, I am familiar with that research. So, I think that there’s a couple of ways that you can do this type of study because, for example, trying to model a neuron at the scale of its ion channels and individual connections is one thing, but there are many, many scales upon which your brain or any sort of neural system works.
I think to really get this understanding of how the brain works, it’s great to look at this very micro scale, but it also helps to go very macro: instead of modeling every single component, try to, for example, take groups of neurons and ask, “How are they communicating together? How are they communicating with different parts of the brain?” That, for example, is usually how human neuroscience works, and humans are the ones with the intelligence. If you can really figure things out on a larger scale, to the point where you can simplify some of these computations, so that instead of understanding every single spike you understand the general behavior or the general computation happening inside the brain, then maybe that will serve to simplify this a little bit.
Where do you come down on all of that? Are we five years, fifty years or five hundred years away from cracking that nut, and really understanding how we understand and understanding how we would build a machine that would understand, all of this nuance? Do you think you’re going to live to see us make that machine?
I would be thrilled if I lived to see that machine, I’m not sure that I will. Exactly saying when this will happen is a bit hard for me to predict, but I know that we would need massive improvements; probably, algorithmically, probably in our hardware as well, because true intelligence is massively computational, and I think it’s going to take a lot of research to get there, but it’s hard to say exactly when that would happen.
Do you keep up with the Human Brain Project, the European initiative to do what you were talking about before, which is to be inspired by human brains and learn everything we can from that and build some kind of a computational equivalent?
A little bit, a little bit.
Do you have any thoughts on—if you were the betting sort—whether that will be successful or not?
I’m not sure if that’s really going to work out that well. Like you said before, given our current hardware, algorithms, our abilities to probe the human brain; I think it’s very difficult to make these very sweeping claims about, “Yes, we will have X amount of understanding about how these systems work,” so I’m not sure if it’s going to be successful in all the ways it’s supposed to be. But I think it’s a really valuable thing to do, whether or not you really achieve the stated goal, if that makes sense.
You mentioned consciousness earlier. So, consciousness, for the listeners, is something people often say we don’t know what it is; we know exactly what it is, we just don’t know how it is that it happens. What it is, is that we experience things, we feel things, we experience qualia—we know what pineapple tastes like.
Do you have any theories on consciousness? Where do you think it comes from, and, I’m really interested in, do we need consciousness in order to solve some of these AI problems that we all are so eager to solve? Do we need something that can experience, as opposed to just sense?
Interesting question. I think that there’s a lot of open research on how consciousness works, what it really means, how it helps us do this type of cognition. So, we know what it is, but how it works or how this would manifest itself in an artificial intelligence system, is really sort of beyond our grasp right now.
I don’t know how much true consciousness a machine needs, because you could say, for example, that having a type of memory may be part of your consciousness, you know, being aware, learning things, but I don’t think we yet have enough of an understanding of how this works to really say for sure.
All right, fair enough. One more question and I’ll pull the clock back thirty years and we’ll talk about the here and now; but my last question is, do you think that a computer could ever feel something? Could a computer ever feel pain? You could build a sensor that tells the computer it’s on fire, but could a computer ever feel something, could we build such a machine?
I think that it’s possible. So, like I said before, our brain is really just a very advanced biological computer, so there’s really no reason why a machine shouldn’t be able to feel pain. It is a sensation, but it’s really just a transfer of information, so I think that it is possible. Now, that being said, how this would manifest, or what a computer’s reaction to pain would be, I’m not sure, but I think it’s definitely possible.
Fair enough. I mentioned in your introduction that you’re the CTO of an AI company, Droice Labs, and the only setup I gave was that it’s a healthcare company. Tell us a little bit more: what challenge is Droice Labs trying to solve, what is the hope, what are your present challenges, and what’s the state of where you’re at?
Sure. Droice is a healthcare company that uses artificial intelligence to provide solutions to hospitals and healthcare providers. One of the main things that we’re focusing on right now is trying to help doctors choose the right treatment for their patients. This means things like, for example, you come in, maybe you’re sick, you have a cough, you have pneumonia, let’s say, and you need an antibiotic. What we try to do is, when you’re given an antibiotic, predict whether or not this treatment will be effective for you, and also whether or not it’ll have any sort of adverse effect on you, so we both try to get people healthy and keep them safe.
And so, this is really what we’re focusing on at the moment: trying to make a sort of artificial brain for healthcare that can, shall we say, augment the intelligence of doctors and try to make sure that people stay healthy. I think healthcare is a really interesting sphere in which to use artificial intelligence, because the technology is currently not very widespread, owing to the difficulty of working with hospital and medical data, so I think it’s a really interesting opportunity.
So, let’s talk about that for a minute, AIs are generally only as good as the data we train them with. Because I know that whenever I have some symptom, I type it into the search engine of choice, and it tells me I have a terminal illness; it just happens all the time. And in reality, of course, whatever that terminal illness is, there is a one-in-five-thousand chance that I have that, and then there’s also a ninety-nine percent chance I have whatever much more common, benign thing. How are you thinking about how you can get enough data so that you can build these statistical models and so forth?
We’re a B2B company, so we have partnerships with around ten hospitals right now, and what we do is get big data dumps from them of actual electronic health records. And so, what we try to do is actually use real patient records, like, millions of patient records that we obtain directly from our hospitals, and that’s how we really are able to get enough data to make these types of predictions.
How accurate does that data need to be? Because it doesn’t have to be perfect, obviously. How accurate does it need to be to be good enough to provide meaningful assistance to the doctor?
That is actually one of the big challenges, especially in this space. In healthcare, it’s a bit hard to say which data is good enough, because imperfect data is very, very common. One of the hallmarks of clinical or medical data is that it will, by default, contain many, many missing values; you never have the full story on any given patient.
Additionally, it’s very common to have things like errors: there’s unstructured text in your medical record that very often contains mistakes, or just insane sentence fragments that don’t really make sense to anyone but a doctor. This is one of the things we work really hard on, where a lot of times traditional AI methods may fail, but we spend a lot of time working with this data in different ways, coming up with noise-robust pipelines that can really make this work.
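One common way to build the kind of noise-robust pipeline she describes is to impute each missing field and add an explicit “was this missing?” indicator, so a model can learn from the missingness pattern itself. Here is a minimal illustrative sketch in Python; the field names are invented, and this is not Droice’s actual pipeline:

```python
from statistics import median

def impute_with_indicators(records, fields):
    """Impute missing fields with the observed median and append a
    missingness-indicator feature for each field."""
    observed = {f: [r[f] for r in records if r.get(f) is not None] for f in fields}
    medians = {f: median(observed[f]) if observed[f] else 0.0 for f in fields}
    rows = []
    for r in records:
        row = []
        for f in fields:
            missing = r.get(f) is None
            row.append(medians[f] if missing else r[f])
            row.append(1.0 if missing else 0.0)  # 1.0 = value was missing
        rows.append(row)
    return rows

patients = [
    {"age": 60, "creatinine": 1.1},
    {"age": 45, "creatinine": None},   # lab never ordered
    {"age": None, "creatinine": 1.9},  # age not recorded
]
features = impute_with_indicators(patients, ["age", "creatinine"])
```

The indicator columns matter because in clinical data a value is often missing for a reason: a lab that was never ordered is itself a signal about what the doctor did or didn’t suspect.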
I would love to hear more detail about that, because I’m sure it’s full of things like, “Patient says their eyes water whenever they eat potato chips,” and you know, that’s like a data point, and it’s like, what do you do with that. If that is a big problem, can you tell us what some of the ways around it might be?
Sure. I’m sure you’ve seen a lot of crazy stuff in these health records, but what we try to do is—instead of biasing our models by doing anything in a rule-based manner—we use the fact that we have big data, we have a lot of data points, to try to really come up with robust models, so that, essentially, we don’t really have to worry about all that crazy stuff in there about potato chips and eyes watering.
And so, what we actually end up doing is, basically, we take these many, many millions of individual electronic health records, and try to combine that with outside sources of information, and this is one of the ways that we can try to really augment the data on our health record to make sure that we’re getting the correct insights about it.
So, with your example, you said, “My eyes water when I eat potato chips.” What we end up doing is taking that sort of thing, and in an automatic way, searching sources of public information, for example clinical trials information or published medical literature, and we try to find, for example, clinical trials or papers about the side effects of rubbing your eyes while eating potato chips. Now of course, that’s a ridiculous example, but you know what I mean.
And so, by augmenting this public and private data together, we really try to create this setup where we can get the maximum amount of information out of this messy, difficult to work with data.
The kinds of data you have that are solid data points would be: how old is the patient, what’s their gender, do they have a fever, do they have aches and pains; that’s very coarse-level stuff. But—I’m regretting using the potato chip example because now I’m kind of stuck with it—a potato chip is made from a potato, which is a tuber from a nightshade, and there may be some breakthrough, like, “That may be the answer, it’s an allergic reaction to nightshades.” And that answer is so many levels removed.
I guess what I’m saying is, you said earlier that language is infinite, but health is near that, too, right? There are so many potential things something could be, and yet so few data points that we can draw from. It would be like if I said, “I know a person who is 6’ 4” and twenty-seven years old and born in Chicago, what’s their middle name?” It’s like, how do you even narrow it down to a set of middle names?
Right, right. Okay, I think I understand what you’re saying. This is, obviously, a challenge, but one of the first ways we deal with it is that our artificial intelligence is really intended for doctors, not patients. We were just talking about AGI and when it will happen, but the reality is we’re not there yet, so while our system makes these predictions, it’s under the supervision of a doctor. So, they’re really looking at these predictions and trying to pull out relevant things.
Now, you mentioned, the structured data—this is your age, your weight, maybe your sex, your medications; this is structured—but maybe the important thing is in the text, or is in the unstructured data. So, in this case, one of the things that we try to do, and it’s one of the main focuses of what we do, is to try to use natural language processing, NLP, to really make sure that we’re processing this unstructured data, or this text, in a way to really come up with a very robust, numerical representation of the important things.
So, of course, you can mine this information, this text, to try to understand, for example, you have a patient who has some sort of allergy, and it’s only written in this text, right? In that case, you need a system to really go through this text with a fine-tooth comb, and try to really pull out risk factors for this patient, relevant things about their health and their medical history that may be important.
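The simplest version of that fine-tooth-comb idea is a lookup that turns free-text notes into a fixed-length numeric vector of risk-factor flags. Production clinical NLP goes far beyond this, but the sketch below, with an invented vocabulary, shows the shape of the representation she’s describing:

```python
import re

# Invented vocabulary for illustration; a real system would use a
# full clinical ontology, negation handling, abbreviation expansion, etc.
RISK_TERMS = ["penicillin allergy", "diabetes", "smoker", "sepsis"]

def risk_vector(note):
    """Map unstructured note text to a fixed-length vector of risk-factor flags."""
    text = note.lower()
    return [1.0 if re.search(re.escape(term), text) else 0.0 for term in RISK_TERMS]

# A typical terse clinical fragment: "hx" = history, "NKDA" = no known drug allergies.
note = "Pt is a smoker, hx of diabetes. NKDA except penicillin allergy."
vec = risk_vector(note)
```

The output vector is something a downstream model can consume, even though the source text would mean nothing to most software.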
So, is it not the case that diagnosing—if you just said, here is a person who manifests certain symptoms, and I want to diagnose what they have—may be the hardest problem possible. Especially compared to where we’ve seen success, which is, like, here is a chest x-ray, we have a very binary question to ask: does this person have a tumor or do they not? Where the data is: here’s ten thousand scans with the tumor, here’s a hundred thousand without a tumor.
Like, is it the cold or the flu? That would be an AI kind of thing because an expert system could do that. I’m kind of curious, tell me what you think—and then I’d love to ask, what would an ideal world look like, what would we do to collect data in an ideal world—but just with the here and now, aspirationally, what do you think is as much as we can hope for? Is it something, like, the model produces sixty-four things that this patient may have, rank ordered, like a search engine would do from the most likely to the least likely, and the doctor can kind of skim down it and look for something that catches his or her eye. Is that as far as we can go right now? Or, what do you think, in terms of general diagnosing of ailments?
Sure, well, actually, what we focus on currently is really the treatment, not the diagnosis. I think diagnosis is a more difficult problem, and of course we really want to get into that in the future, but it is a much more challenging thing to do.
That being said, what you mentioned, you know, saying, “Here’s a list of things, let’s make some predictions of it,” is actually a thing that we currently do in terms of treatments for patients. So, one example of a thing that we’ve done is built a system that can predict surgical complications for patients. So, imagine, you have a patient that is sixty years old and is mildly septic, and may need some sort of procedure. What we can do is find that there may be a couple alternative procedures that can be given, or a nonsurgical intervention that can help them manage their condition. So, what we can do is predict what will happen with each of these different treatments, what is the likelihood it will be successful, as well as weighing this against their risk options.
And in this way, we can really help the doctor choose what sort of treatment that they should give this person, and it gives them some sort of actionable insight, that can help them get their patients healthy. Of course, in the future, I think it would be amazing to have some sort of end to end system that, you know, a patient comes in, and you can just get all the information and it can diagnose them, treat them, get them better, but we’re definitely nowhere near that yet.
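Predicting “what will happen with each of these different treatments” can be pictured as one risk model per treatment option, with the options ranked by predicted complication probability. The coefficients below are invented for illustration; a real system would learn them from millions of patient records:

```python
import math

# Illustrative, invented logistic-regression coefficients per treatment option.
MODELS = {
    "open surgery":     {"intercept": -3.0, "age": 0.04, "septic": 1.2},
    "laparoscopic":     {"intercept": -3.5, "age": 0.03, "septic": 0.9},
    "nonsurgical mgmt": {"intercept": -2.0, "age": 0.01, "septic": 0.4},
}

def complication_risk(option, patient):
    """Probability of a complication under this treatment (logistic link)."""
    m = MODELS[option]
    z = m["intercept"] + m["age"] * patient["age"] + m["septic"] * patient["septic"]
    return 1.0 / (1.0 + math.exp(-z))

# The sixty-year-old, mildly septic patient from the example above.
patient = {"age": 60, "septic": 1}
ranked = sorted(MODELS, key=lambda option: complication_risk(option, patient))
```

The ranked list, lowest predicted risk first, is the kind of actionable insight the doctor would weigh alongside how effective each option is likely to be.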
Recently, IBM made news that Watson had prescribed treatment for cancer patients that was largely identical to what the doctors prescribed, but with the added benefit that in a third of the cases it found additional treatment options, because it had the virtue of being trained on a quarter million medical journal articles. Is that the kind of thing that’s real, here, today? Should we expect to see more things like that?
I see. Yeah, that’s definitely a very exciting thing, and I think that’s great to see. One of the interesting things is that IBM primarily works on cancer; it’s lacking in the high-prescription-volume conditions, like heart disease or diabetes. So, I think that while this is very exciting, it’s a sort of technology, and a space for artificial intelligence, that really needs to be expanded, and there’s a lot of room to grow.
So, we can sequence a genome for $1,000. How far away are we from having enough of that data that we get really good insights into, for example, a person has this combination of genetic markers, and therefore this is more likely to work or not work. I know that in isolated cases we can do that, but when will we see that become just kind of how we do things on a day-to-day basis?
I would say probably twenty-five years from routine use in the clinic. I mean, it’s great, this information is really interesting, and we can do it, but it’s not widely used. I think there are too many regulations in place right now that keep this from happening, so, like I said, it’s going to be maybe twenty-five years before we really see this very widely used for a good number of patients.
So are there initiatives underway that you think merit support that will allow this information to be collected and used in ways that promote the greater good, and simultaneously, protect the privacy of the patients? How can we start collecting better data?
Yeah, there are a lot of people working on this type of thing. For example, Obama launched the Precision Medicine Initiative, where you’re really trying to get your health records, your genomic data, and everything consolidated, with a very easy flow of information, so that doctors can easily integrate information from many sources and have very complete patient profiles. So, this is currently underway.
To pull out a little bit and look at the larger world: you’re obviously deeply involved in speech, and language processing, and healthcare, and all of these areas where we’ve seen lots of advances happening on a regular basis, and it’s very exciting. But then there’s a lot of concern from people who have two big worries. One is the effect that all of this technology is going to have on employment. And there are two views.
One is that technology increases productivity, which increases wages, and that’s what’s happened for two hundred years, or, this technology is somehow different, it replaces people and anything a person can do eventually the technology will do better. Which of those camps, or a third camp, do you fall into? What is your prognosis for the future of work?
Right. I think that technology is a good thing. I know a lot of people have concerns, for example, that if there’s too much artificial intelligence it will replace my job, there won’t be room for me and for what I do, but I think that what’s actually going to happen, is we’re just going to see, shall we say, a shifting employment landscape.
Maybe if we have some sort of general intelligence, then people can start worrying, but right now, what we’re really doing through artificial intelligence is augmenting human intelligence. So, although some jobs become obsolete, I believe you actually have more opportunities now in building and maintaining these systems.
For example, ten to fifteen years ago, there wasn’t such a demand for people with software engineering skills, and now it’s almost becoming something that you’re expected to know, or, like, the internet thirty years back. So, I really think that this is going to be a good thing for society. It may be hard for people who don’t have any sort of computer skills, but I think going forward, that these are going to be much more important.
Do you consume science fiction? Do you watch movies, or read books, or television, and if so, are there science fiction universes that you look at and think, “That’s kind of how I see the future unfolding”?
Have you ever seen the TV show Black Mirror?
Well, yeah that’s dystopian though, you were just saying things are going to be good. I thought you were just saying jobs are good, we’re all good, technology is good. Black Mirror is like dark, black, mirrorish.
Yeah, no, I’m not saying that’s what’s going to happen, but I think it presents the evil side of what can happen. I don’t think that’s necessarily realistic, but setting aside the dystopian, depressing stories, I think the show does a very good job of portraying the way technology could really be integrated into our lives, and how it affects the way people live.
I wonder though, science fiction movies and TV are notoriously dystopian, because there’s more drama in that than in utopia. So, it’s not conspiratorial or anything, I’m not asserting that, but I do think that what it does, perhaps, is cause people to, as somebody termed it, “generalize from fictional evidence”: you see enough views of the future like that, and you think, “Oh, that’s how it’s going to happen.” And that therefore becomes self-fulfilling.
It was Frank Herbert, I think, who said, “Sometimes the purpose of science fiction is to keep a world from happening.” So do you think those kinds of views of the world are good, or do you think they increase this collective worry about technology and losing our humanity, becoming a world that’s blackish and mirrorish, you know?
Right. No, I understand your point and actually, I agree. I think there is a lot of fear, which is quite unwarranted. There is actually a lot more transparency in AI now, so I think that a lot of those fears are just, well, given the media today, as I’m sure we’re all aware, it’s a lot of fear mongering. I think that these fears are really something that—not to say there will be no negative impact—but, I think, every cloud has its silver lining. I think that this is not something that anyone really needs to be worrying about. One thing that I think is really important is to have more education for a general audience, because I think part of the fear comes from not really understanding what AI is, what it does, how it works.
Right, and so, I was just thinking through what you were saying. There’s an initiative in Europe that says AI engines, kind of like the one you’re talking about that’s suggesting things, need to be transparent, in the sense that they need to be able to explain why they’re making a suggestion.
But, I read one of your papers on deep neural nets, and it talks about how the results are hard to understand, if not impossible to understand. Which side of that do you come down on? Should we limit the technology to things that can be explained in bulleted points, or do we say, “No, the data is the data and we’re never going to understand it once it starts combining in these ways, and we just need to be okay with that”?
Right, so, one of the most overused phrases in all of AI is that “neural networks are a black box.” I’m sure we’re all sick of hearing that sentence, but it’s kind of true. I think that’s why I was interested in researching this topic. I think, as you were saying before, the why in AI is very, very important.
So, I think, of course we can benefit from AI without knowing. We can continue to use it like a black box, it’ll still be useful, it’ll still be important. But I think it will be far more impactful if you are able to explain why, and to really demystify what’s happening.
One good example from my own company, Droice, is that in medicine it’s vital for the doctor to know why you’re saying what you’re saying. So, if a patient comes in and you say, “I think this person is going to have a very negative reaction to this medicine,” it’s very vital for us to analyze the neural network and explain, “Okay, it’s really this feature of this person’s health record, for example, the fact that they’re quite old and on another medication.” That really makes doctors trust the system, eases adoption, and allows the technology to be integrated into traditionally less technologically focused fields.
So, I think that there’s a lot of research now that’s going into the why in AI, and it’s one of my focuses of research, and I know the field has really been blooming in the last couple of years, because I think people are realizing that this is extremely important and will help us not only make artificial intelligence more translational, but also help us to make better models.
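One common model-agnostic way to answer the “why” she describes, attributing a prediction to features like age and a co-medication, is perturbation: move one feature at a time toward a baseline value and measure how much the prediction drops. This is an illustrative sketch with a toy risk score standing in for the neural network, not Droice’s actual method:

```python
def attribute(predict, patient, baseline):
    """Per-feature contribution: how much the prediction falls when a
    feature is replaced by its baseline value."""
    base_score = predict(patient)
    contributions = {}
    for feature in patient:
        perturbed = dict(patient)
        perturbed[feature] = baseline[feature]
        contributions[feature] = base_score - predict(perturbed)
    return contributions

# Toy linear risk score standing in for the trained model.
def predict(p):
    return 0.01 * p["age"] + 0.3 * p["on_interacting_drug"]

patient  = {"age": 80, "on_interacting_drug": 1}
baseline = {"age": 50, "on_interacting_drug": 0}
attrib = attribute(predict, patient, baseline)
```

Here age and the interacting drug each account for 0.3 of the risk score, which is exactly the kind of statement (“quite old and on another medication”) a doctor can inspect and act on.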
You know, in The Empire Strikes Back, when Luke is training on Dagobah with Yoda, he asked him, “Why, why…” and Yoda was like, “There is no why.” Do you think there are situations where there is no why? There is no explainable reason why it chose what it did?
Well, I think there is always a reason. For example, you like ice cream; maybe it’s a silly reason, but the reason is that it tastes good. Why you like pistachio better than caramel, let’s just say the reason may not be logical, but there is a reason, right? It’s because it activates the pleasure center in your brain when you eat it. So, I think that if you’re looking for interpretability, in some cases it could be limited, but I think there’s always something you could answer when asking why.
Alright. Well, this has been fascinating. If people want to follow you, keep up with what you’re doing, keep up with Droice, can you just run through the litany of ways to do that?
Yeah, so we have a Twitter account, it’s “DroiceLabs,” and that’s mostly where we post. We also have a website, and that’s where we post most of the updates that we have.
Alright. Well, it has been a wonderful and far ranging hour, and I just want to thank you so much for being on the show.
Thank you so much for having me.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.

Voices in AI – Episode 30: A Conversation with Robert Mittendorff and Mudit Garg

In this episode, Byron, Robert and Mudit talk about Qventus, healthcare, machine learning, AGI, consciousness, and medical AI.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today is a first for Voices in AI, we have two guests. The first one is from Qventus; his name is Mudit Garg. He’s here with Robert Mittendorff, who’s with Norwest Venture Partners, who also serves on Qventus’ board. Mudit Garg is the co-founder and CEO of Qventus, and they are a company that offers artificial-intelligence-based software designed to simplify hospital operations. He’s founded multiple technology companies before Qventus, including Hive, a group messaging platform. He spent two years as a consultant with Seattle-based McKinsey & Company, focusing, I think, on hospital operations.
Robert, from Norwest Venture Partners, was previously VP of Marketing and Business Development at Hansen Medical, a publicly traded NASDAQ company. He’s also a board-certified emergency physician who completed his residency training at Stanford. He received his MD from Harvard Medical School, his MBA from Harvard Business School, and he has a BS in Biomedical Engineering from Johns Hopkins University. Welcome to the show, gentlemen.
Mudit Garg: Thank you. Good morning. Thank you for having us.
Robert Mittendorff: Thank you, Byron.
Mudit, I’ll start with you. Tell us about Qventus and its mission. Get us all oriented with why we’re here today.
Mudit: Absolutely. The best way to think of Qventus is the way our customers often describe us: like air traffic control. Much like air traffic control allows many more flights to land at an airport, and to land much more safely, than if they were uncoordinated, we do the same for healthcare and hospitals.
For me—as boring and uncool as the world of operations and processes might sound—I had a chance to see that firsthand working in hospitals when I was at McKinsey & Company, and I really felt that we were letting all of our clinicians down. If you think about the US healthcare system, we have the best clinicians in the world, we have great therapies, great equipment, but we fail at delivering great medicine. Much of that was being held back by the complex operations that surround the delivery of care.
I got really excited about using data and using AI to help support these frontline clinicians in improving the core delivery of care in the operation. Things like, as a patient sitting in an emergency department, you might wonder what’s going on and why you aren’t being taken care of faster. On the flip side, there’s a set of clinicians who are putting in heroic efforts trying to do that, but they are managing so many different variables and processes simultaneously that it’s almost humanly impossible to do that.
So, our system observes and anticipates problems: it’s the Monday after Thanksgiving, it’s really cold outside, Dr. Smith is working, he tends to order more labs, our labs are slow—all these factors that would be hard for someone to keep in front of them all the time. When the system realizes we might run out of capacity three or four hours in advance, it will look for the bottleneck and create a discussion on how to fix it. We do things like that at about forty to fifty hospitals across the country, and have seen good outcomes. That’s what we do, and that’s been my focus in the application of AI.
And Robert how did you get involved with Qventus?
Robert: Well, so Qventus was a company that fit within a theme that we had been looking at for quite some time in artificial intelligence and machine learning, as it applies to healthcare. And within that search we found this amazing company that was founded by a brilliant team of engineers/business leaders who had a particular set of insights from their work with hospitals, at McKinsey, and it identified a problem set that was very tractable for machine learning and narrow AI which we’ll get into. So, within that context in the Bay Area, we found Qventus and we’re just delighted to meet the team and their customers, and really find a way to make a bet in this space.
We’re always interested in case studies. We’re really interested in how people are applying artificial intelligence. Today, in the here and now, put a little flesh on the bones of what are you doing, what’s real and here, how did you build it, what technology you are using, what did you learn? Just give us a little bit of that kind of perspective.
Mudit: Absolutely. I'll first start with the kinds of things that we are doing, and then we'll go into how we built it, and some of the lessons along the way as well. I just gave you one example of running an emergency department. In today's world, there is a charge nurse who is responsible for managing the flow of patients through that emergency department, constantly trying to stay ahead of it. The example I gave was one where, instead, the system is observing it, learning from it, and then creating a discussion among folks about how to change it.
We have many different things—we call them recipes internally—many different recipes that the system keeps looking for. It looks for, "Hey, here's a female who is younger, who is waiting, there are four other people waiting around her, and she is in acute pain." She is much more likely than other folks to get up and leave without being seen by a doctor, so you might nudge a greeter to go up and talk to her. We have many recipes and examples like these, and I won't go into each of them specifically, but we do that in different areas of the delivery of healthcare.
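One way to picture a recipe like that is a simple risk score over intake features. This is a conceptual sketch under invented coefficients, not the production model:

```python
import math

# Illustrative left-without-being-seen (LWBS) risk score.
# The feature names, coefficients, and threshold are made up for the example.
def lwbs_risk(minutes_waited, others_waiting, acute_pain):
    score = (-4.0
             + 0.03 * minutes_waited       # risk grows with the wait
             + 0.25 * others_waiting       # crowded waiting room
             + 1.5 * (1 if acute_pain else 0))
    return 1 / (1 + math.exp(-score))      # squash the score to a probability

risk = lwbs_risk(minutes_waited=45, others_waiting=4, acute_pain=True)
if risk > 0.3:   # hypothetical intervention threshold
    print(f"LWBS risk {risk:.2f}: nudge a greeter to check in with the patient")
```

The output of a score like this is not a diagnosis; it is a trigger for a small human intervention, which matches the "nudge a greeter" recipe described above.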
So, patient flow is one big category: having patients go through the health system in ways that don't require it to add resources, but allow it to provide the same care. You do that in the emergency department, in the units across the hospital, and in the operating room. More recently, we're starting to do that in pharmacy operations, as pharmacy costs have started rising. What are the things that today require a human to manually notice, follow up on, escalate, and manage, and how can AI help with that process? We've seen really good results with that.
I think you’re asking about case studies, in the emergency department side alone, one of our customers treated three thousand more patients in that ED this year than last, without adding resources. They saved almost a million minutes of patient wait time in that single ED alone and that’s been fascinating. What’s been even more amazing is hearing from the nurse manager there how the staff feel like they have the ability to shape the events versus always being behind, and always feeling like they are trying to solve the problem after the fact. They’ve seen some reductions in turnover and that ability of using AI to, in some ways, making health care more human for the people who help us, the caregivers, is what’s extremely exciting in this work for me.
Just to visualize that for a moment, if I looked at it from thirty thousand feet: people come into a hospital all different ways, with all the different characteristics you would normally think of, and then there are a number of routings through the hospital experience, right? Rush them straight into here, or there, or this, so it's kind of a routing problem. It's a resource allocation problem, right? What does all of that look like? This is not a rhetorical question: what is all of that similar to outside of the hospital? Where is that approach broadly and generally applicable? Is it a traffic-routing problem, an inventory-management problem? Are there any corollaries you can think of?
Mudit: Yeah. In many ways there are similarities to any business with high fixed assets and a distributed workforce. Logistics is a good example of it: thinking about how different deliveries are routed and organized in a way that meets the SLAs for different folks, but where your cost of delivery is not too high. It has similarities to that.
I think hospitals are, in many ways, one of the most complex businesses, and given that the variability is much, much higher, traditional methods have failed. In many of the other such logistical and management problems you could use your optimization techniques and do fairly well with them. But the level of variability is much, much higher in healthcare—because the patients who walk in are different, you might have a ton walk in one day and very few the next, and the types of resources they need can vary quite a bit—and that makes the traditional methods alone much, much harder to apply. In many ways, the problems are similar, right? How do you place product in a warehouse to make sure that deliveries happen as fast as possible? How do you route and cancel flights in a way that causes minimum disruption but still maximizes the benefit of the entire system? How do you manage the delivery of packages across a busy holiday season? Those problems have very similar elements, and the importance of doing them well is probably similar in some ways, but the techniques needed are different.
Robert, I want to get to you in just a minute to talk about how you as a physician see this, but I have a couple more technical questions. There's an emergency room near my house that has a big billboard showing the number of minutes of wait time to get into the ER. I've always wondered, is the idea that people drive by and think, "Oh, only a four-minute wait, I'll go to the ER"? But in any case, two questions. One, you said that there's somebody who's in acute pain, there are four people around her, and she might get up and leave, so we should send a greeter over… In that example, how is the data about that person acquired? Is that done with cameras, or is a human entering the information—how is data acquisition happening? And second, what was your training set to use AI on this process? How did you get an initial training set?
Mudit: Both great questions. Much of this is the first-mile problem for AI in healthcare, and much of that data is actually already generated. About six or seven years ago a massive wave of digitization started in healthcare, and most of it was taking existing paper-based processes and having them run through electronic medical record systems.
So, what happens is when you walk into the emergency department, let’s say, Byron, you walk in, someone would say, “Okay, what’s your name? What are you here for?” They type your name in, and a timestamp is stored alongside that, and we can use that timestamp to realize a person’s walked in. We know that they walked in for this reason. When you got assigned a room or assigned a doctor then I can, again, get a sense of, okay, at this time they got assigned a room, at this time they got assigned a doctor, at this time their blood was drawn. All of that is getting stored in existing systems of record already, and we take the data from the systems of record, learn historically—so before we start we are able to learn historically—and then in the moment, we’re able to intervene when a change needs to take place.
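That event-log view can be sketched in a few lines; the event names and times below are invented for illustration, not an actual EMR schema:

```python
from datetime import datetime

# Hypothetical EMR event log for one ED visit: each workflow step
# already leaves a timestamp in the system of record.
events = [
    ("arrived",         datetime(2018, 1, 8, 9, 2)),
    ("room_assigned",   datetime(2018, 1, 8, 9, 41)),
    ("doctor_assigned", datetime(2018, 1, 8, 9, 55)),
    ("blood_drawn",     datetime(2018, 1, 8, 10, 20)),
]

# Derive the waits between consecutive steps; intervals like these,
# accumulated over historical visits, are the raw material for
# learning flow patterns before intervening in the moment.
waits = {}
for (prev_name, prev_t), (name, t) in zip(events, events[1:]):
    waits[f"{prev_name}->{name}"] = (t - prev_t).total_seconds() / 60

for step, minutes in waits.items():
    print(f"{step}: {minutes:.0f} min")
```

Nothing new has to be collected: the intervals fall out of timestamps the existing systems of record already store, which is exactly the first-mile point Mudit makes.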
And then the data acquisition part of the acute patient’s pain?
Mudit: The pain in that example is actually coming from what they complained about when they walked in.
I see, perfect.
Mudit: So, we’re looking the types of patients who complain about similar pieces, what’s their likelihood versus this likelihood, that’s what we will be learning on it.
Robert, I have to ask you before we dive into this, I’m just really intensely curious about your personal journey, because I’m guessing you began planning to be a medical practitioner, and then somewhere along the way you decided to get an MBA, and then somewhere along the way you decided to invest in technology companies and be on their boards. How did all of that happen? What was your progressive realization that took you from place to place to place?
Robert: I’ll spend just a couple of minutes on it, but not exactly. I would say in my heart I am an engineer. I started out as an engineer. I did biomedical electrical engineering and then I spent time at MIT when I was a medical student. I was in a very technical program between Harvard and MIT as a medical student. In my heart, I’m an engineer which means I try to reduce reality to systems of practice and methods. And coupled with that is my interest in mission-driven organizations that also make money, so that’s where healthcare and engineering intersect.
Not to go into too much detail on a podcast about myself, but I think the next step in my career was to figure out how I could deeply understand the needs of healthcare, so that I could help others and myself bring technology to bear to solve and address those needs. The choice to become a practitioner was partially because I do enjoy solving problems in the emergency department, but also because it gave me a broad understanding of opportunities in healthcare at the ground level and above.
I’ll just give you an example, when I first saw what Mudit and his team had done in the most amazing way at Qventus, I really understood the hospital as an airport with fifty percent of the planes landing on schedule. So, to go back to your emergency department example, imagine if you were responsible for safety and efficiency at SFO, San Francisco airport, without a tower and knowing only the schedule landing times for half of the jets, where each jet is patient. Of the volume of patients that spend their night in the hospital, about half come to the ED, and when I show up for a shift that first, second, and third patient can be stroke, heart attack, broken leg, can be shortness of breath, skin rash, etcetera. The level of complexity in health care to operationalize improvements in the way that Mudit has is incredibly high. We’re just at the beginning, they are clearly the leader here, but what I saw in my personal journey in this company is the usage of significant technology to address key throughput needs in healthcare.
When one stack-ranks what we hope artificial intelligence does for the world, right up there at the very top of most people's lists is impacting health. Do you think that's overly hyped, because, you know, we have an unending series of wishes we hope artificial intelligence can fulfill? Do you think it's possible that it eventually delivers on all of that, that it really is a transformative technology that materially alters human health at a global level?
Robert: Absolutely and wholeheartedly. My background as a researcher in neuroscience was using neural networks to model brain function in various animal models, and I would tell you that the variety of ways that machine learning and AI, which are the terms we now use for these technologies, will affect human health is massive. I would say within the Gartner hype cycle we are early: we are overhyping the value of this technology in the short term. We are not overhyping its value over the next ten, twenty, or thirty years. I believe that AI is the driver of our industrial revolution; this will be looked back on as an industrial revolution of sorts. I think there are huge benefits that are going to accrue to healthcare providers and patients through the use of these technologies.
Talk about that a little more, paint a picture of the world in thirty years, assuming all goes well. Assuming all goes well, what would our health experience look like in that world?
Robert: Yeah, well, hopefully your health experience, and I think Mudit’s done a great job describing this, will return to a human experience between a patient and a physician, or provider. I think in the backroom, or when you’re at home interacting with that practice, I think you’re going to see a lot more AI.
Let me give you one example. We have a company that went public, a digital health company called iRhythm, that uses machine learning to read EKG data, that is, cardiac electrical activity data. A typical human would take eight hours to read a single study on a patient, but by using machine learning they get that down to five to ten minutes. The human is still there, overreading what the machine learning software produces, and that allows them to reach a lot more patients at a lower cost than could be achieved with human labor. You'll see this in radiology. You'll see this in coaching patients. And you'll see this where I think Mudit has really innovated, in that he has created a platform that is enabling.
In the case I gave you, humans are being augmented by what I call the automation or semi-automation of a human task. That's one thing, but what Mudit is doing is truly enabling AI: humans cannot do what he does in the time and at the scale that he does it. That is what's really exciting: machines that can do things that humans cannot do. Some of these things are not easily understood today, but I think you will see radiology improve with semi-automation. I think patients will be coached by smart AI to improve their well-being, and that's already being seen today. Human providers will have leverage because the computer, the machine, will help prioritize their day: which patient to talk to, about what, when, how, why. So, I think you'll see a more human experience.
That’s the concern is that we will see a more manufactured experience. I don’t think that’s the case at all. The design that we’ll probably see succeed is one where the human will become front and center again, where physicians will no longer be looking at screens typing in data, they’ll be communicating face to face with a human, with an AI helping out, advising, enabling those tedious tasks that the human shouldn’t be burdened with, to allow the relationship between the patient and physician to return.
So, Mudit, when you think of artificial intelligence and applying artificial intelligence to this particular problem, where do you go from that? Is the plan to take that learning—and, obviously, scale it out to more hospitals—but what is the next level to add depth to it to be able to say, “Okay, we can land all the planes now safely, now we want to refuel them faster, or…”? I don’t know, the analogy breaks down at some point. Where would you go from here?
Mudit: Our customers are already starting to see results of this approach in one area, and we've started expanding already, with a lot more expansion coming down the line as well. If you think about it, at the end of the day, so much of healthcare delivery is heavily process driven, right? Anywhere from how your bills get generated to when you get calls. I've had times when I might get a call from a health system saying I have a ten-dollar bill that they are about to send to collections, when I had paid all my bills. Things like that, breakdowns in processes, are constantly happening across delivery, across the board.
We started, as I said, four or five years ago, very specifically focused on the emergency department. From there we went into the surgery area, where operating rooms can cost upwards of hundreds of dollars a minute: how do you manage that complex an operation, and the logistics around it, to deliver the best value? We've seen really good results there, and in managing the entirety of all the units in the hospital. More recently, as I was saying, we are now starting to work with Sutter Health across twenty-six of their hospital pharmacies, looking at the key pieces of pharmacy operations which, again, are manually holding people back from delivering the best care. These are the different pieces across the board where we are already starting to see results.
The common thread across all of these, I find, is that we have amazing, incredible clinicians today who, if they had all the time and energy in the world to focus on anticipating these problems and delivering the best care, would do a great job, but we cannot afford to keep throwing more people at these problems. There are significant margin pressures across healthcare. The same people who were able to do these things before have to-do lists that are growing faster than they can ever comprehend. The job of AI really is to act as, kind of, their assistant, to watch those decisions on their behalf and make them really, really easy: to take all of the boring, mundane logistics out of their hands so they can focus on what they do best, which is delivering care to their patients. So, as I said, we started on the flow side; pharmacies are a new area, and outpatient clinics and imaging centers are another area we are working on with a few select customers. There's some really, really exciting stuff there in increasing access to care, when you might call a physician to get access, while reducing the burden on that physician.
Another really exciting piece for me is that in many ways the US healthcare system is unique, but in this complexity of logistics and operations it is not. We have already signed on to work with hospitals globally, having just started with our first international customer recently, and the same problems exist everywhere. There was an article on the BBC a week or two ago about long surgery waiting lists in the UK, where they are struggling to get patients seen due to a lack of efficiency in these logistics. So, that's the other piece I'm really excited about: not only the breadth of these problems wherever there's complexity of processes, but also their global applicability.
The exciting thing to me about this episode of Voices is that I have two people who are engineers, who understand AI, and who have a deep knowledge of health. I have several questions that sit at the intersection of all of that which I would love to throw at you.
My first one is this: the human genome is however many billion base pairs, which works out to something like 762MB of data, but if you look at what makes us different from, say, chimps, it may be one percent of that. So, something like 7MB or 8MB of data is the code you need to build an intelligent brain, a person. Does that imply to you that artificial intelligence might have a breakthrough, that there might be a relatively straightforward and simple thing about intelligence that we're going to learn that will supercharge it? Or is your view that, no, unfortunately, something like a general intelligence is going to be, you know, hunks of spaghetti code that kind of work together and pull off this AGI thing? Mudit, I'll ask you first.
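The back-of-envelope arithmetic behind those figures: four bases means two bits per base pair, so roughly 3.2 billion base pairs compress to about 763 MiB, and one percent of that is about 8 MiB. A quick check, using the commonly cited approximate genome length:

```python
# Back-of-envelope check of the "762MB" and "7-8MB" figures.
base_pairs = 3.2e9        # approximate length of the human genome
bits_per_base = 2         # A, C, G, T encode in 2 bits each
total_mib = base_pairs * bits_per_base / 8 / 2**20

print(f"whole genome: ~{total_mib:.0f} MiB")                   # ~763 MiB
print(f"~1% human/chimp delta: ~{total_mib * 0.01:.0f} MiB")   # ~8 MiB
```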
Mudit: Yeah, and boy, that's a tough question. I will do my best in answering it. Do I believe that we'll be able to get a general-purpose AI with, like, 7MB or 8MB of code? There's a part of me that does believe in that simplicity, and does want to believe that's the answer. If you look at a lot of machine learning code, it's not the learning code itself that's actually that complex; it's the first mile and the last mile that end up taking the vast majority of the code. How do you get the training sets in and how do you get the output out? That is what takes the majority of AI code today.
The fundamental learning code isn't that big today. I don't know if we'll solve general-purpose AI anytime soon, and I'm certainly not holding my breath for it, but there's a part of me that feels and hopes that the fundamental concepts of learning and intelligence will not be that complicated at an individual micro scale. Much like ourselves, we'll be able to understand them, and there will be some beauty and harmony and symphony in how they all come together. It won't seem complex in hindsight, but it will be extremely complex to figure out the first time around. That's purely speculative, but that would be my belief and my hunch right now.
Robert, do you want to add anything to that, or let that answer stand?
Robert: I’d be happy to. I think it’s an interesting analogy to make. There are some parts of it that will break down and parts that will parallel between the human genomes complexity, and utility, and the human brain. You know, just I think when we think about the genome you’re right, it’s several billion base pairs where we only have twenty thousand genes, and a small minority percentage that actually code for protein, and a minority of those that we understand affect the human in a diseased way, like a thousand genes to two thousand genes. There’s a lot of base pairs that we don’t understand and could be related to structure of the genome as it needs to do what it does in the human body, in the cell.
On the brain side, though, I would go with your latter response. If you look at the human brain—and I've had the privilege of working with animal models and looking at human data—the brain is segmented into various functional units. For example, the auditory cortex is responsible for taking information from the ear and converting it into signals that are then pattern-recognized into, say, language, where the symbols of the words we're speaking are then processed by other parts of the cortex. Similarly, the hippocampus, which sits in, kind of, the oldest part of the brain, is responsible for learning. It is able to take various inputs from the visual, auditory, and other cortices and upload them from short-term to long-term memory. So the brain is functionally segmented and physically segmented.
I believe that a general-purpose AI will have the same kind of structure. It's funny, we have this thing called the AI effect, where once we solve a problem with code or with machinery, it's no longer AI. For example, some would consider natural language processing no longer part of AI because we've somewhat solved it, and speech recognition used to be AI, but now it's an input to the AI, because the AI is now about understanding more than interpreting audio signals and converting them into words. So what we're going to see, similar to the human body being encoded by these twenty thousand genes, is functional expertise with, presumably, code that is used for segmenting the problem of creating a general AI.
A second question then. Robert, you waxed earlier about how big the possibilities are for using artificial intelligence in health. Of course, we know that the number of people living to one hundred keeps going up and up. The number of people who become supercentenarians, who've made it to one hundred and ten, is in the dozens. The number of people who have lived to one hundred and twenty-five is stubbornly fixed at zero. Do you believe, without even getting aspirational about "curing death," that what's most likely to happen is that more of us make it to one hundred healthily? Or do you think one hundred and twenty-five is a barrier we'll break, and maybe somebody will live to one hundred and fifty? What do you think about that?
Robert: That’s a really hard question. I would say that if I look at the trajectory of gains that, public health, primarily, with things like treated water to medicine, we’ve seen a dramatic increase in human longevity in the developed world. From taking down the number of children dying during childbirth, which lowers the average obviously, to extending life in the later years, and if you look at the effects there those conclusions have never effects on society. For example, when Social Security was invented a minority of individuals would live to the age in which they would start accruing significant benefits, obviously that’s no longer the case.
So, to answer your question, there is no theoretical reason that I can come up with that I can’t imagine someone making it to one hundred and twenty-five. One hundred and fifty is obviously harder to imagine. But we understand the human cell at a certain level, and the genome, and the machinery of the human body, and we’ve been able to thwart the body’s effort to fatigue and expire, a number of times now. Whether it’s cardiovascular disease or cancer, and we’ve studied longevity—“we” meaning the field, not myself—so, I don’t see any reason why we would say we will not have individuals reach one hundred and twenty-five, or even one hundred and fifty.
Now, what is the time course of that? Do we want that to happen and what are the implications for society? Those are big questions to answer. But science will continue to push the limits of understanding human function at the cellular and the physiologic level to extend the human life. And I don’t see a limit to that currently.
So, there is this worm, called the nematode worm, a little bitty fella, as long as a hair is wide, and the most successful animal on the planet. Something like seventy percent of all animals are nematode worms. The brain of the nematode worm has 302 neurons, and for twenty years or so people have been trying to model those 302 neurons in a computer, in the OpenWorm project. Even today they don't know if they can do it. That's how little we understand. It isn't that we don't understand the human brain because it's so complex; we don't understand anything—or I don't want to say anything—we don't understand just how neurons themselves work.
Do you think that, one, we need to understand how our brains work—or how the nematode brain works for that matter—to make strides towards an AGI? And, second, is it possible that a neuron has stuff going on at the Planck level that it’s as complicated as a supercomputer, making intelligence acquired that way incredibly difficult? Do either of you want to comment on that?
Mudit: It’s funny that you mention that, when I was at Stanford doing some work in the engineering, one of the professors used to say that our study of the human brain is sort of like someone just had a supercomputer and two electrodes and they’re poking the electrodes in different places and trying to figure out how it works. And I can’t imagine ever figuring out how a computer works outside-in by just having like two electrodes and seeing the different voltages coming out of it. So, I do see the complexity of it.
Is it necessary for us to understand how the neuron works? I'm not sure it is, but if you were to come up with a way to build a system that is resilient, redundant, and simple, and that can deliver that level of intelligence, that is what hundreds of thousands of years of evolution have helped arrive at, so it would, I think, be a critical input.
Without that, I see a different approach, which is what we are taking today: it's inspired by the brain, but it's not the same. In our brain, when neurons fire, yes, we now have a similar transfer function for many of our neural networks modeling how the neuron fires, but for any kind of meaningful signal to come out we have a population of neurons firing, which makes the signaling more continuous and very redundant and very resilient. It wouldn't fail even if some portion of those neurons stopped working. That's not how our models work, and that's not how our math works today. In finding the most optimized, elegant, and resilient way of doing this, I think it would be remiss not to take inspiration from what has evolved over a long, long period of time into perhaps one of the most efficient ways of having general-purpose intelligence. So my belief would be that we will have to learn, and that our understanding is still largely simplistic. At least, I would hope and believe that we'll learn a lot more, and find out that each neuron perhaps either communicates more, or does so in a way that brings the system to the optimal solution a lot faster than we would imagine.
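The redundancy point can be illustrated with a toy population code: average many noisy units and the readout stays stable even when a chunk of the population drops out. This is purely a conceptual sketch, not a neuroscience model:

```python
import random

random.seed(0)       # deterministic for the example

TRUE_SIGNAL = 1.0    # the value the population is "encoding"

def unit_output(signal):
    # Each "neuron" fires noisily around the true signal.
    return signal + random.gauss(0, 0.5)

def population_readout(signal, n_units, dropout=0.0):
    # Average the surviving units; dropped units contribute nothing.
    outputs = [unit_output(signal) for _ in range(n_units)
               if random.random() >= dropout]
    return sum(outputs) / len(outputs)

single = unit_output(TRUE_SIGNAL)                       # one noisy neuron
population = population_readout(TRUE_SIGNAL, 1000)      # full population
degraded = population_readout(TRUE_SIGNAL, 1000, 0.3)   # 30% of units lost

print(f"single unit:       {single:.2f}")
print(f"population:        {population:.2f}")
print(f"with 30% dropout:  {degraded:.2f}")
```

A single unit wanders far from the signal, but both population readouts land close to it, which is the resilience Mudit is contrasting with today's models.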
Robert: Just to add to that, I agree with everything Mudit said. Do we need to study the neuron and neural networks in vivo, in animals? The answer is that, as humans, we do. I believe we have an innate curiosity to understand ourselves, and that is something we need to pursue. Whether it's funded or not, the curiosity to understand who we are, where we came from, and how we work will drive that, just as it's driven fields as diverse as astronomy and aviation.
I think, do we need to understand it at the level of detail you're describing? For example, what exactly happens stochastically at the synapse, where neurotransmitters find the receptors that open ion channels that change the resting potential of a neuron, such that effects propagate down the axon and at the end of that neuron another neurotransmitter is released? I don't think so. As Mudit said, we learn a lot from understanding how these highly developed and trained systems we call animals and humans work, but they were molded over large periods of time for specific survival tasks, to live in the environments they live in.
The systems we’re building, or Mudit’s building, and others, are designed for other uses, and so we can take, as he said, inspiration from them, but we don’t need to model how a nematode thinks to help the hospital work more effectively. In the same way that, there are two ways, for example, someone could fly from here in San Francisco, where I’m sitting, to, let’s say, Los Angeles. You could be a bird, which is a highly evolved flying creature which has sensors, which has, clearly, neural networks that are able to control wing movement, and effectively the wing surface area to create lift, etcetera. Or, you could build a metal tube with jets on it that gets you there as well. I think they have different use cases and different criteria.
The airplane is inspired by birds. The cross-section of an airplane wing is designed like a bird's wing, in that one path over the wing is longer than the other, which changes the pressure above and below the wing and allows flight to occur. But clearly, the rest of it is very different. So I think the inspiration drove aviation to a solution that shares many parts with what birds have, but is incredibly different, because the solution was to the problem of transporting humans.
Mudit, earlier you said we’re not going to have an AGI anytime soon. I have two questions to follow up on that thought. The first is that among people who are in the tech space there’s a range of something like five to five hundred years as to when we might get a general intelligence. I’m curious, one, why do you think there’s such a range? And, two, I’m curious, with both of you, if you were going to throw a dart at that dartboard, where would you place your bet, to mix a metaphor.
Mudit: I think in the dart metaphor, chances of being right are pretty low, but we’ll give it a shot. I think part of it, at least I ask myself, is the bar we hold for AGI too high? At what point do we start feeling that a collection of special-purpose AIs that are welded together can start feeling like an AGI and is that good enough? I don’t know the answer to that question and I think that’s part of what makes the answer harder. Similar to what Robert was saying where the more problems we solve, the more we see them as algorithmic and less as AI.
But I do think, at least in my mind, the marker is an AI starting to question the constraints of the problem and the goal it's trying to maximize. That's where true creativity comes from for humans: when we break rules and don't follow the rules we were given. That's also where the scary part of AI comes from, because it can then do that at scale. I don't see us close to that today. If I had to guess, on this exponential curve, I'm probably not going to pick the right point, but four to five decades is when we start seeing enough of the framework that maybe we can see some tangible general-purpose AI come to form.
Robert, do you want to weigh in, or you will take a pass on that one?
Robert: I’ll weigh in quickly. I think we often see this in all of investing, actually—whether it’s augmented reality, virtual reality, stenting, or robotics in medicine—we as investors have to work hard to not overestimate the effect of technology now, and to not underestimate the effect of technology in the long run. This came from, I believe, a Stanford professor, Roy Amara, who unfortunately passed away a while ago, but that idea of saying, “Let’s not overhype it, but it’s going to be much more profound than we can even imagine today,” puts my estimate—and it depends how you define general AI, which is probably not worth doing—at within fifteen to twenty years.
We have this brain, the only general intelligence that we know of. And then we have the mind and, kind of, a definition of it that I think everybody can agree to: that the mind is a set of abilities that don’t seem, at first glance, to be something an organ could do, like creativity, or a sense of humor. And then we have consciousness: we actually experience the world. A computer can measure temperature, but we can burn our finger and feel it. My questions are these: we would expect the computer to have a “mind,” we would expect an AGI to be creative. Do you think, one, that consciousness is required for general intelligence? And, to follow up on that, do you believe computers can become conscious, that they can experience the world as opposed to just measuring it?
Mudit: That’s a really hard one too. I think, actually, in my mind what’s most important, and there’s kind of a grey line between the two, is creativity; the element of surprise is what’s most important. The more an AI can surprise you, the more you feel like it is truly intelligent. So, that creativity is extremely important. But the reason I said there’s kind of a path from one to the other is, and this is very philosophical, a matter of how to define consciousness: in many ways it’s when we take a specific task that is given to us but really start asking about the larger objective, the larger purpose. That, I feel, is what truly distinguishes a being or a person as conscious.
Until AIs are able to be creative and break the bounds of the specific rules, or the specific expected behavior they are programmed to follow, the path to consciousness is certainly very, very hard. So, I feel like creativity and surprising us is probably the first piece, which is also the one that honestly scares us as humans the most, because that’s when we feel a sense of losing control over the AI. I don’t think true consciousness is necessary, but the two might evolve simultaneously and go hand in hand.
Robert: I would just add one other thought there, which is, I spent many hours in college having this debate of what is consciousness, you know, where is the seat of consciousness? Anatomists for centuries have dissected and dissected, you know, is it this gland, or is it that place, or is it an organized effect of the structure and function of all of these parts? I think that’s why we need to study the brain, to be fair.
One of the underlying efforts there is to understand consciousness. What is it that makes a physical entity able to do what you said, to experience what you said? More than just experiencing a location, experiencing things like love. How could a human do that if they were a machine? Can a machine be capable of empathy?
But beyond that, thinking practically as an investor and as a physician, I frankly don’t know if I care whether the machine is conscious or not; I care more about who I assign responsibility to for the actions and thoughts of that entity. So, for example, if it makes a decision that harms someone, or makes the wrong diagnosis, what recourse do I have? With consciousness in human beings, well, we believe in free will, and that’s where all of our institutions around human justice come from. But if the machine is deterministic, then a higher power, maybe the human that designed it, is ultimately responsible. For me, it’s a big question about responsibility with respect to these AIs, and less about whether they’re conscious or not. If they’re conscious, then we might be able to assign responsibility to the machine, but then how do we penalize it, financially or otherwise? If they’re not conscious, then we probably need to assign responsibility to the owner, or the person that configured the machine.
I asked earlier why there is such a range of beliefs about when we might get a general intelligence, but the other interesting thing, which you’re kind of touching on, is that there’s a wide range of belief about whether we would want one. You’ve got the Elon Musk camp of summoning the demon, Professor Hawking saying it’s an existential threat, and Bill Gates saying, “I don’t understand why more people aren’t worried about it,” and so forth. And on the other end, you have people like Andrew Ng, who said, “That’s like worrying about overpopulation on Mars,” and Rodney Brooks the roboticist, and so forth, who dismiss those concerns; you can almost see the eye-rolling. What are the core assumptions that those two groups hold, and why are they so different from each other in their regard for this technology?
Mudit: To me, it boils down to this: the same things that make me excited about large-scale potential, from a general-purpose side, are the things that make me scared. Going back to creativity for a second: creativity will come when an AI that is told to maximize an objective function under constraints is allowed to question the constraints and the problem itself. That’s what a human would do, right? I might give someone a task or a problem, but then they might come back and question it, and that’s where true creativity comes from. But the minute we allow an AI to do that is also when we lose that sense of control. We don’t have that sense of control over humans today either, but what freaks us out about AI is that it can do this at very, very rapid scale, at a pace which we as a society may not even catch up to, realize, and be able to control or regulate, as we can in the case of humans. I think that’s both the exciting part and the fear; they really go hand in hand.
The pace at which AI can then bring about the change once those constraints are loosened is something we haven’t seen before. And we already see, in today’s environment, our inability to keep pace with how fast technology is changing, from a regulation, from a framework standpoint as a society. And I think once that happens that will be called into question even more. I think that’s probably why many in the camp of Elon Musk, Sam Altman, and others, in many ways, I think, the part of their ask that resonates with me is we probably should start thinking about how we will tackle the problem, what framework should we have in place earlier, so we have time as a society to wrestle with it before it comes and it’s right in our face.
Robert: I would add to that with four things. I would say there are four areas that kind of define this a bit, and a couple of them were mentioned by Mudit. I think it’s speed, the speed of computation affecting the world the machine is in; scalability; the fact that it can affect the physical environment; and the fact that machines, as we currently understand them, do not have morals or ethics, however you define those. So, those are the four things. Something that’s super fast, that’s highly scaled, that can affect the physical world with no ethics or morality, that is a scary thing, right? That is a truck on 101 with a robotic driver that is going to go 100 MPH and doesn’t care what it hits. That’s the scary part of it. But there’s a lot of technology that looks like that. If you are able to design it properly and constrain it, it can be incredibly powerful. It’s just that the combination of those four areas could be very detrimental to us.
So, to pull the conversation back closer to the here and now, I want to ask each of you what’s a breakthrough in artificial intelligence in the medical profession that we may not have heard about, because there are so many of them? And then tell me something—I’ll put both of you on the spot on this—you think we’re going to see in, like, two or three years; something that’s on a time horizon where we can be very confident we’re going to go see that. Mudit, why don’t you start, what is something we may not know about, and what is something that will happen pretty soon do you think, in AI and medicine?
Mudit: I think, and this might go back to what I was saying, the breakthrough is less in the machine learning itself than in the operationalization of it. The ability to learn exists, if we have the first mile and the last mile solved; but in the real, complex world of high emotions and messy human-generated data, the ability not only to predict but, in the moment, to prescribe and persuade people to take action is what I’m most excited about, and I’m starting to see it happen today. I think it is going to be transformative in the ability of existing machine learning prowess to actually impact our health and our healthcare system. So, that’s the part I’m most excited about. It may not be, Byron, exactly what you’re looking for in terms of a breakthrough, but I think it’s a breakthrough of a different type: not an algorithmic breakthrough, but an operationalization breakthrough, which I’m super excited about.
The part you asked about, what I think we could start doing in two to three years that we perhaps don’t do as well now… One that is very clear is places where there are high degrees of structured data that we require humans to pore through—and I know Robert has spent a lot of time on this, so I’ll leave it to him—around radiology, around EKG data, around these huge quantities of structured data that are just impossible to monitor. There are many poor-quality outcomes, mortality, and bad events that could be caught if it were humanly feasible to monitor all of that data. I believe we are two to three years away from starting to meaningfully bend that, both process-wise, logistically, and from a diagnosis standpoint. And it will be basic stuff, stuff that we have known for a long time that we should do. But, you know, as the classic saying goes, it takes seventeen years to go from knowing something should be done to doing it at scale in healthcare; I think it will be that kind of stuff, where we start rapidly shortening that cycle time and seeing vast effects of that in the healthcare system.
Robert: I’ll give you my two, briefly. It’s hard to come up with something that you may not have heard about, Byron, with your background, so I’ll think more about the general audience. First of all, I agree with Mudit: in the two to three year time frame, what’s obvious is that any signal processing in healthcare that is being done by humans is going to be rapidly moved to a computer. iRhythm, as an example, a company trading at over a billion a little over a year out from its IPO, does that for cardiology data, EKG data acquired through a patch. There are over forty companies that we have tracked in the radiology space that are prereading, or in some sense providing a pre-diagnostic read of, CTs, MRIs, and x-rays for human radiology overreads for diagnosis. That is absolutely going to happen in the next two to five years. Companies like GE and Philips are leading it, and there are lots of startups doing work there.
I think the area that might not be so visible to the general public is the use of machine learning on human conversation. Imagine therapy, for example: therapy is moving to teletherapy, telemedicine; those are digitized conversations, which can be recorded and translated into language symbols, which can then be evaluated. Computational technology is being developed, and is available today, that can look at those conversations to decipher whether, for example, someone is anxious today, or depressed, needs more attention, or may need a cognitive behavioral therapy intervention compatible with their state. And that allows not only the scaling of signal processing, but the scaling of the human labor that is providing psychological therapy to these patients. This is already being done in the management of sales forces, with companies using AI to monitor sales calls and coach sales reps on how to position things in those calls to more effectively convert a sale; we’re seeing that in healthcare as well.
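As a rough illustration of the kind of conversation screening described here, below is a deliberately simplistic, hypothetical sketch: a keyword-weight scorer over a transcript. The word list, weights, and threshold are all invented for illustration; real systems would use trained language models, not hand-written rules, and this is in no sense a clinical tool.

```python
# Hypothetical sketch (invented word list and threshold, not a clinical
# tool): flag transcribed conversations whose language suggests the
# patient may need more attention.
CONCERN_TERMS = {"worried": 2, "anxious": 3, "hopeless": 4, "tired": 1,
                 "overwhelmed": 3, "afraid": 2}

def flag_for_review(transcript, threshold=4):
    """Score a transcript by summed term weights; flag if at/above threshold."""
    words = transcript.lower().split()
    score = sum(CONCERN_TERMS.get(w.strip(".,!?"), 0) for w in words)
    return score, score >= threshold

score, flagged = flag_for_review("I feel anxious and overwhelmed lately.")
print(score, flagged)  # anxious(3) + overwhelmed(3) = 6, flagged for review
```

A production system would replace the keyword table with a model trained on labeled conversations, but the pipeline shape is the same: digitize, score, route to a human when the signal crosses a threshold.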
All right, well that is all very promising; it kind of lifts up our day to know that there’s stuff coming and it’s going to be here relatively soon. I think that’s probably a good place to leave it. As I look at our timer, we are out of time, but I want to thank both of you for taking time out of, I’m sure, your very busy days to have this conversation with us and let us in on a little bit of what you’re thinking and working on. Thank you.
Mudit: Thank you very much, thanks, Byron.
Robert: You’re welcome.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

How Artificial Intelligence in Medicine is Improving Healthcare (and Business)

The use of AI, or artificial intelligence, in the medical field is an
emerging trend that promises exponential advances in the way we
diagnose and treat a multitude of health conditions. Advances in the
application of medical AI technology are occurring at a lightning pace,
with new developments rendering prior solutions obsolete in a matter
of months.
In this article, we’ll review some of the ways that AI technology is
making the healthcare field more efficient, improving the quality of
care, raising ethical concerns, and offering medical practices a
competitive advantage.
History of AI in Healthcare
Since as early as 1959, the medical research field has been fascinated
with the potential applications of artificial intelligence. Early
researchers envisioned a machine that could hold a vast amount of
medical knowledge and possess the ability to provide potential
diagnoses. In the early 1980s, the emerging field of Artificial
Intelligence in Medicine (AIM) was urged on by advancements in the
storage and processing power of digital technology. Research
conducted at Rutgers, Stanford, and MIT paved the way for today’s
extensive use of AI in medicine.
Predictive Diagnoses
The use of AI allows medical teams to create diagnoses based on large
data sets. The various medical tests, and the data generated, can be
extremely complicated and extensive. AI can analyze this data in
seconds and observe statistical, as well as causal, relationships in the
data set. These correlations can be difficult or impossible for human
researchers and health professionals to identify. When a patient’s
medical condition is precarious and requires urgent care and an
accurate diagnosis, AI can provide prompt predictions to aid the
practitioner’s decision-making.
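The kind of data-driven prediction described above can be sketched in miniature. The toy below is purely illustrative, not any real clinical system: it predicts a likely diagnosis by a k-nearest-neighbor vote over past cases, and every feature value and diagnosis label is invented.

```python
# Toy sketch (all values and labels invented): predict a likely diagnosis
# by majority vote among the k most similar past cases.
import math
from collections import Counter

# Each past case: (feature vector, diagnosis). Features might encode lab
# results, vitals, or imaging scores; these numbers are hypothetical.
past_cases = [
    ((5.1, 120, 0.9), "condition_a"),
    ((5.3, 118, 1.0), "condition_a"),
    ((7.8, 160, 2.1), "condition_b"),
    ((8.0, 158, 2.0), "condition_b"),
]

def predict(features, cases, k=3):
    """Return the majority diagnosis among the k nearest past cases."""
    ranked = sorted(cases, key=lambda c: math.dist(features, c[0]))
    votes = Counter(diagnosis for _, diagnosis in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict((5.2, 119, 0.95), past_cases))  # closest cases are condition_a
```

Real predictive-diagnosis systems use far richer models and calibration, but the core idea is the same: place a new patient in the space of past cases and let the data vote.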
Precision Medicine
Perhaps the most promising benefit of AI is precision, not only in the
sense of making accurate prescriptions, but precision in terms of the
application of suggested medical treatments. AI has the capacity to
utilize the patient’s genetic profile to create recommendations that are
unique to the person’s code. AI systems can store and process an
essentially limitless amount of data on medical conditions, patient
histories, case studies, and pharmaceutical compounds.
Another example of this precision is the use of AI-driven robotic surgery
equipment. The Smart Tissue Autonomous Robot (STAR), developed by
researchers at the Sheikh Zayed Institute, has been proven more
accurate in performing, and making real-time modifications to, planned
surgical procedures than human surgeons; however, the research cited
the need for human intervention in about 40% of the cases.
AI can also make quick and accurate work of processing medical
imagery. Software can identify almost imperceptible characteristics,
handle the tremendous amount of data generated by digital scanning
technology, and decrease the analysis period from days to minutes.
While this technology may eventually allow the complete automation
and application of medical treatments, it will almost certainly always
require a human element. The precision of AI is the perfect
complement to the personal, analytical, and technical skills of human
medical professionals.
Process Management
For medical practices, AI technology promises greater treatment
capacity, reduced medical liability, labor savings, and improved
customer satisfaction. Artificial intelligence makes sense from an
entrepreneurial standpoint, contributing economically to its rapid
development and broad application across many industries. Busy practices
can automate (with supervision) the scheduling, check-in, diagnosis,
and follow-up process, as well as obtaining process-refining feedback
from patients. This technology can improve customer retention by
boosting customer involvement through consistent communication and
speed of service.
Savings on personnel costs and the avoidance of human error are major
factors in the adoption of AI and robotics. Practices can reduce the need
for doctors and nurses to perform routine tasks, and minimize the time
required to perform essential functions. AI equips them with the tools
to provide higher-quality services, and build a differentiating
competitive advantage as a technological leader in their market.
AI and Human Interaction
Should your doctor or nurse be replaced by a life-like simulation driven
by AI?
This is an emerging question the medical field is being forced to address
due to the availability of technology, the promise of reduced labor
costs, enhanced operational efficiency, and the potential reduction of
liability for medical practices.
Modern processing power allows programs to speak and act with a
near-human quality that could reduce the workload on medical staff
and increase the efficiency of the medical process. Despite the logistical
advantages of this technology, controversy has arisen as to the ethical
considerations and social impact.
The Future of Medicine
When a device or software has no emotions, the empathy factor is
removed. Can we rely on AI to deliver emotionally difficult diagnoses?
What is the value of authentic human interaction in the process? Does
emotion hinder accurate diagnoses and care? Or, does the passion of
medical personnel ensure due diligence and creative thinking?
These questions are subjective at heart, and will be a source of debate
for generations. The use of AI in medicine is entrenched, and certain to
be a leading source of change in the dynamic medical field.
At GrailAI, we are working to complement all of the great research to
date by using our own unique algorithms to compare data from
wearables, DNA, and other sources against symptoms that are predictive
of cancer, in order to find instances in the early stages, before they
become more difficult to treat. As artificial intelligence and machine
learning technologies evolve, the developers behind them must carefully
consider how to remain HIPAA compliant while still pushing forward
aggressively, because helping families keep their loved ones safe is
worth the investment.

Voices in AI – Episode 4: A Conversation with Jeff Dean

In this episode, Byron and Jeff talk about AGI, machine learning, and healthcare.
Byron Reese: Hello, this is Voices in AI brought to you by Gigaom. I am your host, Byron Reese. Today we welcome Jeff Dean onto the show. Jeff is a Google Senior Fellow and he leads the Google Brain project. His work probably touches my life, and maybe yours, about every hour of every day, so I can’t wait to begin the conversation. Welcome to the show Jeff. 
Jeff Dean: Hi Byron, this is Jeff Dean. How are you?
I’m really good, Jeff, thanks for taking the time to chat. You went to work for Google, I believe, in the second millennium. Is that true?
Yes, I did, in 1999.
So the company wasn’t even a year old at that time.
That’s right, yeah it was pretty small. We were all kind of wedged in the second-floor office area, above what is now a T-Mobile store in downtown Palo Alto.
And did it feel like a start-up back then, you know? All the normal trappings that you would associate with one?
We had a ping pong table, I guess. That also doubled as where we served food for lunch. I don’t know—yeah, it felt exciting and vibrant, and we were trying to build a search engine that people would want to use. And so there was a lot of work in that area, which is exciting.
And so, over the last seventeen years… just touch on the various things you’ve worked on; it’s an amazing list.
Sure. The first thing I did was put together the initial skeleton of what became our advertising system, and I worked on that for a little while. Then, for most of the next four or five years, I spent my time with a handful of other people working on our core search system. That’s everything from the crawling system—when it goes out and fetches all the pages on the web that we can get our hands on—to the indexing system that then turns that into a system that we can actually query quickly when users are asking a question.
They type something into Google, and we want to be able to very quickly analyze what pages are going to be relevant to that query, and return the results we return today. And then the serving system that, when a query comes into Google, decides how to distribute that request over lots and lots of computers to have them farm that work out and then combine the results of their individual analyses into something that we can then return back to the user.
And that was kind of a pretty long stretch of time, where I worked on the core search and indexing system.
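The crawl/index/serve pipeline described here can be sketched in a few lines. This is an illustrative toy with invented page contents, not Google’s actual system: crawled pages are reduced to an in-memory inverted index, and serving intersects per-term page sets.

```python
# Illustrative toy (invented documents): the crawl -> index -> serve
# pipeline, reduced to an in-memory inverted index.
pages = {  # stand-in for crawled documents
    "page1": "machine learning research at scale",
    "page2": "search engine indexing systems",
    "page3": "machine translation research",
}

# Indexing: map each term to the set of pages containing it.
index = {}
for url, text in pages.items():
    for term in text.split():
        index.setdefault(term, set()).add(url)

def serve(query):
    """Serving: intersect per-term page sets to find matching pages."""
    sets = [index.get(term, set()) for term in query.split()]
    return sorted(set.intersection(*sets)) if sets else []

print(serve("machine research"))  # → ['page1', 'page3']
```

A real system distributes the index across many machines and ranks the intersected results, but the data structure at the core is this term-to-pages mapping.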
And now you lead the Google Brain project. What is that?
Right. So, basically, we have a fairly large research effort around machine learning and artificial intelligence, and we use the results of our research to make intelligent systems. An intelligent system may be something that goes into a product, it might be something that enables new kinds of products, or it might be, you know, some combination of that.
When we’re working with getting things into existing products, we often collaborate closely with different Google product teams to get the results of our work out into products. And then we also do a lot of research that is sort of pure research, untied to any particular products. It’s just something that we think will advance the capabilities of the kinds of systems we’re able to build, and ultimately will be useful even if they don’t have a particular application in mind at the moment.
“Artificial intelligence” is that phrase that everybody kind of disowns, but what does it mean to you? What is AI? When you think about it, what is it? How would you define it in simple English?
Right, so it’s a term that’s been around since the very beginning of computing. And to me it means essentially trying to build something that appears intelligent. So, the way we distinguish humans from other organisms is that we have these higher-level intelligence capabilities. We can communicate, we can absorb information, and understand it at a very high level.
We can imagine the consequences of doing different things as we decide how we’re going to behave in the world. And so we want to build systems that embody as many aspects of intelligence as we can. And sometimes those aspects are narrowly defined, like we want them to be able to do a particular task that we think is important, and requires a narrow intelligence.
But we also want to build systems that are flexible in their intelligence, and can do many different things. I think the narrow intelligence aspects are working pretty well in some areas today. The broad, really flexible intelligence is clearly an open research problem, and it’s going to consume people for a long time—to actually figure out how to build systems that can behave intelligently across a huge range of conditions.
It’s interesting that you emphasize “behave intelligently” or “appear intelligent.” So, you think artificial intelligence, like artificial turf, isn’t really turf—so the system isn’t really intelligent, it is emulating intelligence. Would you agree with that?
I mean, I would say it exhibits many of the same characteristics that we think of when we think of intelligence. It may be doing things differently, because I think you know biology and silicon have very different strengths and weaknesses, but ultimately what you care about is, “Can this system or agent operate in a manner that is useful and can augment what human intelligence can do?”
You mentioned AGI, an artificial general intelligence. The range of estimates on when we would get such a technology are somewhere between five and five hundred years. Why do you think there’s such a disparity in what people think?
I think there’s a huge range there because there’s a lot of uncertainty about what we actually need. We don’t quite know how humans process all the different kinds of information that they receive, and formulate strategies. We have some understanding of that, but we don’t have deep understanding of that, and so that means we don’t really know the scope of work that we need to do to build systems that exhibit similar behaviors.
And that leads to these wildly varying estimates. You know, some people think it’s right around the corner, some think it’s nearly impossible. I’m kind of somewhere in the middle. I think we’ve made a lot of progress in the last five or ten years, building on stuff that was done in the twenty or thirty years before that. And I think we will have systems that exhibit pretty broad kinds of intelligence, maybe in the next twenty or thirty years, but I have high error bars on those estimates.
And the way you describe that, it sounds like you think an AGI is an evolution from the work that we’re doing now, as opposed to it being something completely different we don’t even know. You know, we haven’t really started working on the AGI problem. Would you agree with that or not?
I think some of what we’re doing is starting to touch on the kind of work that we’ll need to build artificial general intelligence systems. I think we have a huge set of things that we don’t know how to solve yet, and that we don’t even know that we need yet, which is why this is an open and exciting research problem. But I do think some of the stuff we’re doing today will be part of the solution.
So you think you’ll live to see an AGI, while you’re still kind of in your prime?
Ah well, the future is unpredictable. I could have a bike accident tomorrow or something, but I think if you look out fifteen or twenty years, there will be things that are not really imaginable today, things we don’t have now, that will do impressive things ten, fifteen, twenty years down the road.
Would that put us on our way to an AGI being conscious, or is machine consciousness a completely different thing which may or may not be possible?
I don’t really know. I tend not to get into the philosophical debates of what is consciousness. To my untrained neuroscience eye, consciousness is really just a certain kind of electrical activity in the neurons in a living system—that it can be aware of itself, that it can understand consequences, and so on. And so, from that standpoint consciousness doesn’t seem like a uniquely special thing. It seems like a property that is similar to other properties that intelligent systems exhibit.
So, absent your bicycle crash, what would that world look like, a world twenty years from now where we’ve made incredible strides in what AI can do, and maybe have something that is close to being an AGI? How do you think that plays out in the world? Is that good for humanity?
I think it will almost uniformly be good. Look at major technological improvements in the past, like the shift from an agrarian society to one fueled by the Industrial Revolution: what used to be ninety-nine percent of people working to grow food is now, in many countries, a few percent of people producing the food supply. And that has freed people up to do many, many other things, all the other things that we see in our society as a result of that big shift.
So, I think like any technology, there can be uses for it that are not so great, but by-and-large the vast set of things that happen will be improvements. I think the way to view this is, a really intelligent sidekick is something that would really improve humanity.
If I have a question, a very complicated thing—that today I can do via search engine, if I sit down for nine hours or ten hours and really think through and say, “I really want to learn about a particular topic, so I need to find all these papers and then read them and summarize them myself.” If I had an intelligent system that could do that for me, and I could say, “Find me all the papers on reinforcement learning for robotics and summarize them.” And the system could go back, and in twenty seconds do that, that would be hugely useful for humanity.
Oh absolutely. So, what are some of the challenges that you think separate us from that world? Like what are the next obstacles we need to overcome in the field?
One of the things that I think is really important today in the field of machine learning research, that we’ll need to overcome, is… Right now, when we want to build a machine learning system for a particular task we tend to have a human machine learning expert involved in that. So, we have some data, we have some computation capability, and then we have a human machine learning expert sit down and decide: Okay, we want to solve this problem, this is the way we’re going to go about it roughly. And then we have the system that can learn from observations that are provided to it, how to accomplish that task.
That’s sort of what generally works, and that’s driving a huge number of really interesting things in the world today. And you know this is why computer vision has made such great strides in the last five years. This is why speech recognition works much better. This is why machine translation now works much, much better than it did a year or two ago. So that’s hugely important.
But the problem with that is you’re building these narrowly defined systems that can do one thing and do it extremely well, or do a handful of things. What we really want is a system that can do a hundred thousand things, and then, when the hundred-thousand-and-first thing comes along that it’s never seen before, we want it to apply the experience it has gained solving the first hundred thousand things to quickly learn how to do task hundred-thousand-and-one.
And that kind of meta learning, you want that to happen without a human machine learning expert in the loop to teach it how to do the hundred thousand-and-first thing.
And that might actually be your AGI at that point, right?  
I mean it will start to look more like a system that can improve on itself over time, and can add the ability to do new novel tasks by building on what it already knows how to do.
Broadly speaking, that’s transfer learning, right? Where we take something learned in one space and use that to influence the other one. Is that a new area of study, or is that something that people have thought about for a long time, and we just haven’t gotten around to building a bunch of—
People have thought about that for quite a while, but usually in the context of, I have a few tasks that I want to do, and I’m going to learn to do three of them. And then, use the results of learning to do three, to do the fourth better with less data, maybe. Not so much at the scale of a million tasks… And then completely new ones come along, and without any sort of human involvement, the system can pick up and learn to do that new task.
So I think that’s the main difference. Multitask learning and transfer learning have been done with some success at very small scale, and we need to make it so that we can apply them at very large scales.
And the other thing that’s new is this meta learning work, that is starting to emerge as an important area of machine learning research—essentially learning to learn. And that’s where you’ll be able to have a system that can see a completely novel task and learn to accomplish it based on its experience, and maybe experiments that it conducts itself about what approaches it might want to try to solve this new task.
That’s currently where we have a human in the loop to try different approaches, and it’s where we think this ‘learning to learn’ research is going to make faster progress.
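The “human in the loop” here is, at its simplest, someone choosing among candidate approaches. A minimal sketch of what it looks like when the system runs those experiments itself (the task and all names are hypothetical, not anything from the interview): it fits y = 2x by gradient descent, trying several learning rates and keeping whichever yields the lowest final loss, with no human picking one.

```python
def train(lr, steps=200):
    """Fit w in y = w * x to data generated with w = 2, via gradient descent."""
    data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return w, loss

# The "experiments it conducts itself": try several approaches (here,
# just learning rates) and keep whichever works best -- no human chooses.
candidates = [1.0, 0.3, 0.1, 0.03, 0.01]
results = {lr: train(lr) for lr in candidates}
best_lr = min(results, key=lambda lr: results[lr][1])
best_w, best_loss = results[best_lr]
print(best_lr, round(best_w, 3))
```

Real learning-to-learn systems search over far richer spaces — architectures, optimizers, whole training procedures — but the loop is the same shape: propose, evaluate, keep what works.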
There are those who worry that the advances in artificial intelligence will have implications for human jobs. That eventually machines can learn new tasks faster than a human can, and then there’s a group of people who are economically locked out of the productive economy. What are your thoughts on that?
So, I mean I think it’s very clear that computers are going to be able to automate some aspects of some kinds of jobs, and that those jobs—the things they’re going to be able to automate—are a growing set over time. And that has happened before, like the shift from agrarian societies to an industrial-based economy happened largely because we were able to automate a lot of the aspects of farm production, and that caused job displacement.
But people found other things to do. And so, I’m a bit of an optimist in general and I think, you know, politicians and policymakers should be thinking about what societal structures we want to have in place if computers can suddenly do a lot more things than they used to be able to. But I think that’s largely a governmental and policy set of issues.
My view is, a lot of the things that computers will be able to automate are these kinds of repetitive tasks that humans currently do because, until now, they were too complicated for our computers to learn how to do.
So am I reading you correctly, that you’re not worried about a large number of workers displaced from their jobs, from the technology?
Well I definitely think that there will be some job displacement, and it’s going to be uneven. Certain kinds of jobs are going to be much more amenable to automation than others. The way I like to think about it is, if you look at the set of things that a person does in their job, if it’s a handful of things that are all repetitive, that’s something that’s more likely to be automatable, than someone whose job involves a thousand different things every day, and you come in tomorrow and your job is pretty different from what you did today.
And within that, what are the things that you’re working on—on a regular basis—in AI right now?
Our group as a whole does a lot of different things, and so I’m leading our group to help provide direction for some of the things we’re doing. Some of the things we’re working on within our group that I’m personally involved in are use of machine learning for various healthcare related problems. I think machine learning has a real opportunity to make a significant difference in how healthcare is provided.
And then I’m personally working on how can we actually build the right kinds of computer hardware and computer software systems that enable us to build machine learning systems which can successfully try out lots of different machine learning ideas quickly—so that you can build machine learning systems that can scale.
So that’s everything from working with our hardware design team to make sure we build the right kind of machine learning hardware, to TensorFlow, an open source package that our group has produced—that we open-sourced about a year and a half ago—which is how we express our machine learning research ideas and train machine learning systems for our products. And we’ve now released it, so lots of people outside Google are using the system as well, and working collaboratively to improve it over time.
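TensorFlow’s core idea — expressing a computation as a graph of operations that is built once and can then be executed with different inputs, potentially on different hardware — can be sketched in a few lines of plain Python. This is a toy stand-in to illustrate the dataflow concept, not TensorFlow’s actual API:

```python
class Node:
    """A node in a tiny dataflow graph: an op (or placeholder name) plus inputs."""
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def run(self, feeds):
        # Placeholders look their value up in the feed dict; interior
        # nodes apply their op to the results of their input nodes.
        if self.op in feeds:
            return feeds[self.op]
        return self.op(*(n.run(feeds) for n in self.inputs))

# Build the graph for y = w * x + b once...
x, w, b = Node("x"), Node("w"), Node("b")
mul = Node(lambda a, c: a * c, (w, x))
y = Node(lambda a, c: a + c, (mul, b))

# ...then execute it with whatever inputs you like, as a session would.
print(y.run({"x": 3.0, "w": 2.0, "b": 1.0}))  # prints 7.0
```

Separating graph construction from execution is what lets the same description of a model be run on CPUs, GPUs, or custom chips.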
And then we have a number of different kinds of research efforts, and I’m personally following pretty closely our “learning to learn” efforts, because I think that’s going to be a pretty important area.
Many people believe that if we build an AGI, it will come out of a Google. Is that a possibility?
Well, I think there’s enough unknowns in what we need to do that it could come from anywhere. I think we have a fairly broad research effort because we think this is, you know, a pretty important field to push forward, and we certainly are working on building systems that can do more and more. But AGI is a pretty long-term goal, I would say.
It isn’t inconceivable that Google itself reaches some size where it takes on some emergent properties which are well, I guess, by their definition unforeseeable?
I don’t quite know what that means, I guess.
People are emergent, right? You’re a trillion cells that don’t know who you are, but collectively… You know none of your cells have a sense of humor, but you do. And so at some level the entire system itself acquires characteristics that no parts of it have. I don’t mean it in any ominous way. Just to say that it’s when you start looking at numbers, like the number of connections in the human brain and what not, that we start seeing things of the same sort of orders in the digital world. It just invites one to speculate.
Yeah, I think we’re still a few orders of magnitude off in terms of where a single human brain is, versus what the capabilities of computing systems are. We’re maybe at like newt or something. But, yes, I mean presumably the goal is to build more intelligent systems, and as you add more computational capability, those systems will get more capable.
Is it fair to say that the reason we’ve had such a surge in success with AI in the last decade is this, kind of, perfect storm of GPUs, plus better algorithms, plus better data collection—so better training sets—plus Moore’s Law at your back? Is it nothing more complicated than that? That there have just been a number of factors that have come together? Or did something happen, some watershed event that maybe passed unnoticed, that gave us this AI Renaissance that we’re in now?
So, let me frame it like this: A lot of the algorithms that we’re using today were actually developed twenty, twenty-five years ago during the first upsurge in interest in neural networks, which is a particular kind of machine learning model. One that’s working extremely well today, but twenty or twenty-five years ago showed interesting signs of life on a very small problem… But we lacked the computational capabilities to make them work well on large problems.
So, if you fast-forward twenty years to maybe 2007, 2008, 2009, we started to have enough computational ability, and data sets that were big enough and interesting enough, to make neural networks work on practical interesting problems—things like computer vision problems or speech recognition problems.
And what’s happened is neural networks have become the best way to solve many of these problems, because we now have enough computational ability and big enough data sets. And we’ve done a bunch of work in the last decade, as well, to augment the sort of foundational algorithms that were developed twenty, thirty years ago with new techniques and all of that.
GPUs are one interesting aspect of that, but I think the fundamental thing is the realization that neural nets in particular, and these machine learning models generally, really have different computational characteristics than most code you run today on computers. And those characteristics are that they essentially mostly do linear algebra kinds of operations—matrix multiplies and vector operations—and that they are also fairly tolerant of reduced precision. So you don’t need six or seven digits of precision when you’re doing the computations for a neural net—you need many fewer digits of precision.
Those two factors together allow you to build specialized kinds of hardware for very low-precision linear algebra. And that’s what’s augmented our ability to apply more computation to some of these problems. GPUs being one thing, Google has developed a new kind of custom chip called the Tensor Processing Unit, or TPU, that uses lower precision than GPUs and offers significant performance advantages, for example. And I think this is an interesting and exploding area. Because when building specialized hardware that’s tailored to a subset of things, as opposed to the very general kinds of computations a CPU does, you run the risk that that specialized subset is only a little bit of what you want to do in a computing system.
But the thing that neural nets and machine learning models have today is that they’re applicable to a really broad range of things. Speech recognition and translation and computer vision and medicine and robotics—all these things can use that same underlying set of primitives, you know, accelerated linear algebra to do vastly different things. So you can build specialized hardware that applies to a lot of different things.
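The reduced-precision tolerance described above can be illustrated in a few lines. This is a hypothetical sketch (the vectors and scale are made up) that quantizes values to 8-bit integers before a dot product — the core operation such hardware accelerates — and shows the answer lands close to the full-precision result:

```python
def quantize(vec, scale):
    """Map floats to 8-bit integer codes: round(v / scale), clipped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A toy "activation" and "weight" vector in full precision.
acts = [0.12, -0.53, 0.88, 0.05, -0.91, 0.33]
wts  = [0.40,  0.10, -0.25, 0.77, 0.06, -0.60]

scale = 1.0 / 127  # both vectors happen to lie in [-1, 1]
qa, qw = quantize(acts, scale), quantize(wts, scale)

# Integer-only dot product, rescaled back to float units at the end.
approx = dot(qa, qw) * scale * scale
exact = dot(acts, wts)
print(round(exact, 4), round(approx, 4))
assert abs(exact - approx) < 0.02  # low precision, but close enough for a neural net
```

Integer multiply-accumulate units are far smaller and cheaper than floating-point ones, which is why trading a little precision for a lot of throughput pays off.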
I got you. Alright, well I think we’re at time. Do you have any closing remarks, or any tantalizing things we might look forward to coming out of your work?
Well, I’m very excited about a lot of different things. I’ll just name a few…
So, I think the use of machine learning for medicine and healthcare is going to be really important. It’s going to be a huge aid to physicians and other healthcare workers to be able to give them quick second opinions about what kinds of things might make sense for patients, or to interpret a medical image and give people advice about what kinds of things they should focus on in a medical image.
I’m very excited about robotics. I think machine learning for robotics is going to be an interesting and emerging field in the next five years, ten years. And I think this “learning to learn” work will lead to more flexible systems which can learn to do new things without requiring as much machine learning expertise. I think that’s going to be pretty interesting to watch, as that evolves.
Then, beneath all the machine learning work, this trend toward building customized hardware that is tailored to particular kinds of machine learning models is going to be an interesting one to watch over the next five years, I think.
Alright, well…
One final thought, I guess, is that I think the field of machine learning has the ability to touch not just computer science but lots and lots of fields of human endeavor. And so, I think that it’s a really exciting time as people realize this and want to enter the field, and start to study and do machine learning research, and understand the implications of machine learning for different fields of science or different kinds of application areas.
And so that’s been really exciting to see over the last five or eight years, is more and more people from all different kinds of backgrounds are entering the field and doing really interesting, cool new work in this field.
Excellent. Well I want to thank you for taking the time today. It has been a fantastically interesting hour.
Okay thanks very much. Appreciate it.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here

Report: The importance of benchmarking clouds

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
The importance of benchmarking clouds by Paul Miller:
For most businesses, the debate about whether to embrace the cloud is over. It is now a question of tactics — how, when, and what kind? Cloud computing increasingly forms an integral part of enterprise IT strategy, but the wide variation in enterprise requirements ensures plenty of scope for very different cloud services to coexist.
Today’s enterprise cloud deployments will typically be hybridized, with applications and workloads running in a mix of different cloud environments. The rationale for those deployment decisions is based on a number of different considerations, including geography, certification, service level agreements, price, and performance.
To read the full report, click here.

Box buys small security startup to court more risk-averse clients

Fresh off its IPO in January, Box has made its first acquisition of the year, buying a small security startup called Subspace, the company said on Wednesday. Financial terms of the deal were not disclosed, but all seven Subspace employees will be joining Box and the startup will be closing up shop by April 3.

Subspace touts a supposedly secure browser that connects to a corporate network, whether it be on-premise or cloud-based. The browser is hooked up to the Subspace cloud-based backend where an organization’s IT staff can control access and craft data-protection policies for the websites and applications that a user might visit within the Subspace browser.

In a blog post on the acquisition, Box CEO Aaron Levie wrote that the Subspace staff will be working on Box’s data security efforts and “will let us go even deeper with our security and data policies, enabling reliable corporate security policies, even when content leaves the Box platform to be accessed on a customer or partner’s device.”

As Box continues to push its new Box for Industries product lineup, it’s going to need more security features to court customers who may be wary of cloud offerings. The types of customers Box wants to sign up for Box for Industries are the types of clients found in heavily regulated industries like healthcare, finance and legal. So far, Box has made public that Stanford Health Care, Eli Lilly, T. Rowe Price and Nationwide Insurance all feel comfortable using Box as their work/cloud storage hub.

In February, Box rolled out the Box Enterprise Key Management (EKM) service, which lets users hold on to their encryption keys while using the Box platform. Box partnered up with the company SafeNet as well as Amazon Web Services to help customers set up the service.

Another big data breach, this time at insurance company Anthem

Anthem, the nation’s second largest insurance provider, was hit by hackers who stole a wealth of customer data, including names, birth dates, medical IDs, Social Security numbers, snail-mail and e-mail addresses, and employment information — but allegedly no credit card or medical information, the company said. Although with all that other information out there, that may not be much comfort.

In a letter to customers, Anthem CEO Joseph Swedish acknowledged that his own information was stolen but said there is no evidence that credit card or medical information were compromised. Anthem, formerly known as Wellpoint, posted more information here for customers.

Little is known about which of the company’s databases or applications were hijacked, but Anthem said all of its businesses were affected. And there was the usual butt-covering: Swedish said the company “immediately made every effort to close the security vulnerability, contacted the FBI and began fully cooperating with their investigation.” Anthem also characterized the breach as a result of “a very sophisticated external cyber attack.” But, seriously, what else would they say? As a couple wiseguys on Twitter put it: “It’s better than saying you left the front door open.” Or the keys on the visor.

Anthem also said it hired Mandiant, a sort of cybersecurity SWAT team, to assess its systems and recommend solutions. Cybersecurity specialist Brian Krebs has more on the potential impact.

The topic of the breach came up during a call earlier today, in which the White House discussed its interim report on big data opportunities with reporters. The gist was that Anthem appeared to have notified authorities within 30 days of finding the problem, which is what the White House would stipulate in bills it is formulating.

The security of healthcare data is of particular concern — and preserving patient privacy was the impetus behind HIPAA and other regulations. But, as Gigaom pointed out earlier this year, that data security may be as much fiction as fact.

Consolidating digital patient data in one place so that a patient or her doctors can access it spells convenience for authorized users, but that data conglomeration also offers a compelling target for bad guys.

At this point it would be natural for a given consumer to feel both spooked and jaded by these security snafus. Last year alone, there were major breaches at Target, Home Depot, and JPMorgan Chase, affecting hundreds of millions of people in aggregate.