Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi


About this Episode

Episode 67 of Voices in AI features host Byron Reese in conversation with Amir Khosrowshahi about the explainability, privacy, and other implications of using AI for business. Amir Khosrowshahi is VP and CTO at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.
Amir Khosrowshahi: Thank you, thanks for having me.
I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a PhD in Computational Neuroscience, so, start off by just telling us what is Computational Neuroscience?
So neuroscience is a field, the study of the brain, and it is mostly a biologically minded field. There are aspects of the brain that are computational, and then there’s the experimental side: opening up the skull and peering inside, sticking needles into areas, and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning and mathematical aspects to intelligence, and then biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.
I have a theory which I may not be qualified to have, and you certainly are, and I would love to know your thoughts on it. I think it’s very interesting that people are really good at getting trained with a sample size of one: draw a made-up alien you’ve never seen before, and then I can show you a series of photographs, and even if that alien’s upside down, underwater, or behind a tree, you can spot it.
Further, I think it’s very interesting that people are so good at transfer learning. I could give you two objects, like a trout swimming in a river and that same trout in a jar of formaldehyde in a laboratory, and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know. And yet, likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say “yes,” and then somebody would say, “Well, have you ever done it?” And I’m like, “yeah,” and they would say, “when?” And it’s like, I don’t really remember, but I know I have. Somehow we take data and throw it out, and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain, as if you could cut it out and somehow forget it. And so when I think of all of those things that seem so different from computers to me, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?
Okay, those are very deep questions, and actually each one of those items is a separate thread in the field of machine learning and artificial intelligence, with lots of people working on it. The first thing you mentioned, I think, was one-shot learning, where you see something that’s novel and, from the first time you see it, you recognize it as something singular and retain that knowledge to identify it if it occurs again—for a child it would be something like a chair, for you it’s potentially an alien. So, how do you learn from single examples?
That’s an open problem in machine learning and is very actively studied, because you want to be able to have a parsimonious strategy for learning. The current ways that we’re doing learning (it’s a good problem to have), for example in online services that sort photos and recognize objects in images, are very computationally wasteful, and actually wasteful in usage of data. You have to see many examples of chairs to have an understanding of a chair, and it’s actually not clear that you then have an understanding of a chair, because the models that we have today for chairs do make mistakes. When you peer into where the mistakes were made, it seems like the machine learning model doesn’t actually have an understanding of a chair; it doesn’t have a semantic understanding of a scene, or of grammar, or of the languages it translates. We’re noticing these inefficiencies and we’re trying to address them.
You mentioned some other things, such as how do you transfer knowledge from one domain to the next. Humans are very good at generalizing. We see an example of something in one context, and it’s amazing that we can extrapolate or transfer it to a completely different context. That’s also something that we’re working on quite actively, and we have some initial success: we can take a statistical model that was trained on one set of data and then apply it to another set of data, using the previous experience as a warm start and then moving away from the old domain to the new domain. This is also possible to do in continuous time.
Many of the things we experience in the real world are not stationary; their statistics change with time. We need to have models that can also change. For a human it’s easy to do that: we’re very good at handling non-stationary statistics. So we need to build that into our models, be cognizant of it, and we’re working on it. And then, for other things you mentioned: intuition is very difficult. It’s potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having kind of a hazy idea of having done something bad to yourself with a hammer, I’m not actually sure where that falls into the various subdomains of machine learning.
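(For readers who want to make the “warm start” idea above concrete, here is a minimal sketch in PyTorch. It is purely illustrative, not code from Intel or the show: it takes a network pre-trained on one domain, ImageNet, freezes those learned features, and trains only a new output layer for a hypothetical ten-class target domain.)

```python
# Minimal "warm start" transfer learning sketch (PyTorch / torchvision).
# Illustrative only; the ten-class target domain is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model trained on a large source domain (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers: keep the general features already learned.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier to match the new target domain
# (10 classes here, instead of ImageNet's 1000).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is trained at first; the old weights are the warm start.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(inputs: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch from the new domain."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

From this warm start, one can later unfreeze deeper layers and fine-tune them with a small learning rate, gradually moving from the old domain to the new one, much as Amir describes.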
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

The digital world needs new lawmakers

The current, “sudden” plague of deepfake videos is just the latest in a series of “unexpected” events caused by “unplanned” use of technology. More will occur, and indeed more are already happening: in a similar vein, the computer-generated mash-up videos on YouTube that care more about eyeballs than child protection; the ongoing boom in cyber-trolling; bitcoin pimping and pumping. Misuse of augmented and virtual reality, 3D printing and robotics is to be expected. Wait, 3D-printing of guns is so five years ago.

As I’ve written before, such bleak illustrations are the yang to innovation’s yin: trolling, for example, is the downside to the explosion of transparency illustrated by the ongoing, global wave of #MeToo revelations (“revelations” in its traditional, not salacious media sense). The present day is multi-dimensional and complex, and it is often difficult to separate the positives from the negatives: so much so that we, and our legislative bodies, act like rabbits in headlights, doing little more than watch as the future unfolds before our eyes.

Or, we try to address the challenges using ill-equipped mechanisms — was it Einstein who said, “We can’t solve problems by using the same kind of thinking we used when we created them”? Nice words, but this is what we are doing, wholesale and globally: lawmakers are taking fifteen years to create laws such as GDPR which, while good as far as they go, are immediately insufficient; meanwhile the court of public opinion is both creating, and driven by, power-hungry vested interests; and service providers operate stable-door approaches to policy.

What’s the answer? To quote another adage, “If you want to get there, don’t start from here.” We need to start our governance processes from the perspective of the future, rather than the past, assessing where society will be in five, ten, fifteen years’ time. In practice this means accepting that we will be living in a fully digitized, augmented world. The genie is out of the bottle, so we need to move focus from dealing with the potential consequences of magic, and towards accepting that a world with genies needs protections.

In practical terms, this means applying the same principles of societal fair play, collective conscience and individual freedom to the virtual world as to the physical. I’m not a lawmaker, but I keep coming back to the idea that our data should be considered as ourselves: so, for example, granting access to a pornographic virtual or 3D-printed robot representation of an individual, against their will, should be considered abuse. It’s also why speed cameras can be exploitative, if retrofitted to roads as money generators.

Right now, we are trying to contain the new wine of the digital age in very old, and highly permeable skins created over previous centuries. I remain optimistic: we shall no doubt look back on this era as a time of great change, with all its ups and downs. I also remain confident in the democratizing power of data, for all its current, quite messy state, and that we shall start seeing more tech-savvy approaches to legal and policy processes.

Meanwhile, perhaps we shall rely on younger, ‘digital native’ generations to deliver the new thinking required, or maybe — is this too big an ask? — those currently running our institutions and corporations will have the epiphanies required to start delivering on our legislative needs, societal or contractual. Yes, I remain optimistic and confident that we will get there; however, when this actually happens is anybody’s guess. We are not out of the woods yet.

GDPR quick tip: Know what data (models) you have

Amid all the kerfuffle around the General Data Protection Regulation, GDPR (which applies to any organization handling European citizen data, wherever they are located), it can be hard to know where to start. I don’t claim to be a GDPR expert – I’ll leave that to the lawyers and indeed, the government organizations responsible. However, I can report from my conversations around getting ready for the May 25th deadline.
In terms of policies and approach, GDPR is not that different to existing data management best practice. One potential difference, from a UK perspective, is that it may mean the end of unsolicited calls, letters and emails: for example, the CEO of a direct mail organization told me it may be the demise of ‘cold lists’, that is, collections of addresses to be targeted without any prior engagement (the engagement which drives many ‘legitimate interest’ justifications), contract or consent.
But this isn’t a massive leap from, say, MailChimp’s confirmation checks, themselves based on spam blacklisting and the right to complain. And indeed, in this age of public, sometimes viral discontent, no organization wants to have its reputation hauled over the coals of social media. When they do, it appears, they can get away with it for only so long before they cave in to public pressure to do a better job (recent examples: Uber and a few budget airlines).
All this reinforces the point that organizations doing right by their customers, and therefore their data, are likely already on the right path to GDPR compliance. The Jeff Goldblum-sized fly in the ointment, however, is the conclusion reached in survey after survey about enterprise data management: most corporations today don’t actually know what information they have, including about the people with whom they interact.
This is completely understandable. As technology has thrown innovation after innovation at the enterprise, many have adopted a layer-the-new-on-top-of-the-old approach: to do otherwise would have left them by the wayside long ago. Each massive organisation is an attic of data archival, a den of data duplication, a cavern of complexity. To date, the solution has been a combination of coping strategies, even as we add new layers on top.
But now, faced with massive potential fines (up to €20 million or 4% of global annual turnover, whichever is higher), our corporations and institutions can no longer de-prioritise how they manage their data pools. At the same time, there is no magic wand to be waved, no way of really knowing whether the data stored within is appropriate to the organization’s purposes (which indeed, may be very different to when they were established).
Meanwhile, looking at the level of whole systems is not going to be particularly revealing, so is there an answer? A starting point is to look somewhere in between data and systems, focusing on metadata. Data models, software designs and so on can be revelatory in terms of what data is held and how it is being used, and can enable prioritization of the systems and data stores at higher risk of non-compliance.
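By way of illustration, here is a minimal sketch of that metadata-first approach in Python. The schema catalogue and the column-name hints are entirely hypothetical; in practice they would come from information_schema queries, a data dictionary, or the exports of a modelling tool.

```python
# Hypothetical metadata-driven triage: rank tables by how much
# personal data their column names suggest they hold.
PERSONAL_DATA_HINTS = {"name", "email", "phone", "address", "dob", "child"}

# An illustrative schema catalogue; real entries would be harvested
# from information_schema, data models or design documents.
schema = {
    "crm_contacts": ["contact_id", "full_name", "email", "last_order"],
    "kids_club_members": ["member_id", "child_name", "dob", "parent_email"],
    "stock_levels": ["sku", "warehouse", "quantity"],
}

def risk_score(columns: list[str]) -> int:
    """Count columns whose names suggest personal data."""
    return sum(
        any(hint in col.lower() for hint in PERSONAL_DATA_HINTS)
        for col in columns
    )

# Review the riskiest data stores first.
for table, cols in sorted(schema.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{table}: {risk_score(cols)} personal-data columns")
```

Even something this crude turns “we don’t know what we hold” into a ranked review list, which is the point.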
Knowing this information enables a number of decisions, not only about the data but also about what to do with it. For example, a system holding information about the children of customers may still be running, without anyone’s real knowledge. Just knowing it is there, and that it hasn’t been accessed for several years, should be reason enough to switch it off and dispose of its contents. And indeed, even if 75% of marketing data will be ‘rendered obsolete’, surely that wasn’t the good part anyway?
Even if you have a thousand such systems, knowing what they are and what types of data they contain puts you in a much better position than not knowing. It’s no surprise that software vendors who have struggled to demonstrate their relevance in the face of “coping strategy” approaches to enterprise data governance (such as Erwin, founded as a data modelling company in the 90s, which vanished into CA, was divested, and has since broadened its portfolio) are now setting out their stalls around GDPR.
Again, no magic wands exist, but the bottom line is that it is becoming an enforceable legal requirement for organizations to be able to explain what they are holding and why. As a final thought, this has to be seen as good for business: focus on what matters, the ability to prioritize, to better engage, to deliver more personalized customer services: all of these are seen as high-value benefits above and beyond the need to comply with some legislative big stick.

Five 2018 Predictions — on GDPR, Robot Cars, AI, 5G and Blockchain

Predictions are like buses, none for ages and then several come along at once. Also like buses, they are slower than you would like and only take you part of the way. Also like buses, they are brightly coloured and full of chatter that you would rather not have in your morning commute. They are sometimes cold, and may have the remains of somebody else’s take-out happy meal in the corner of the seat. Also like buses, they are an analogy that should not be taken too far, lest they lose the point. Like buses.

With this in mind, here are my technology predictions for 2018. I’ve been very lucky to work across a number of verticals over the past couple of years, including public and private transport, retail, finance, government and healthcare — while I can’t name-check every project, I’m nonetheless grateful for the experience and knowledge this has brought, which I feed into the below. I’d also like to thank my podcaster co-host Simon Townsend for allowing me to test many of these ideas.

Finally, one prediction I can’t make is whether this list will cause any feedback or debate — nonetheless, I would welcome any comments you might have, and I will endeavour to address them.

1. GDPR will be a costly, inadequate mess

Don’t get me wrong, GDPR is a really good idea. As a lawyer said to me a couple of weeks ago, it is a combination of the UK Data Protection Act, plus the best practices that have evolved around it, now put into law at a European level with a large fine attached. The regulations are also likely to become the basis for other countries’ rules — if you are going to trade with Europe, you might as well set it as the baseline, goes the thinking. All well and good so far.

Meanwhile, it’s an incredibly expensive (and necessary, if you’re a consumer who cares about your data rights) mountain to climb for any organisation that processes or stores your data. The deadline for compliance is May 25th, which is about as likely to be hit as I am finally to get the six-pack I wanted when I was 25.

No doubt GDPR will one day be achieved, but the fact is that it is already out of date. Notions of data aggregation and potentially toxic combinations (for example, combining credit and social records to show whether or not someone is eligible for insurance) are not just likely, but unavoidable: ‘compliant’ organisations will still be in no better place to protect the interests of their customers than currently.

The challenges, risks and sheer inadequacy of GDPR can be summed up by a single tweet sent by an otherwise unknown traveller — “If anyone has a boyfriend called Ben on the Bournemouth – Manchester train right now, he’s just told his friends he’s cheating on you. Dump his ass x.” Whoever the sender “@emilyshepss” or, indeed, “Ben” might be, the consequences to the privacy of either cannot be handled by any data legislation currently in force.

2. Artificial Intelligence will create silos of smartness

Artificial Intelligence (AI) is a logical consequence of how we apply algorithms to data. It’s as inevitable as maths, as the ability our own brains have to evaluate and draw conclusions. It’s also subject to a great deal of hype and speculation, much of which follows that old, flawed futurist assumption: that a current trend maps a linear course leading to an inevitable conclusion. But the future is not linear. Technological matters are subject to the laws of unintended consequences and of unexpected complexity: every time we create something new, it causes new situations which are beyond its ability to deal with.

So, yes, what we call AI will change (and already is changing) the world. Moore’s Law and its associates are making previously impossible computations possible, and indeed they will become the expectation. Machine learning systems are fundamental to the idea of self-driving cars, for example; meanwhile voice, image recognition and so on are having their day. However, these are still a long way from any notion of intelligence, artificial or otherwise.

So, yes, absolutely look at how algorithms can deliver real-time analysis, self-learning rules and so on. But look beyond the AI label, at what a product or service can actually do. You can read Gigaom’s research report on where AI can make a difference to the enterprise, here.

In most cases, there will be a question of scope: a system that can save you money on heating by ‘learning’ the nature of your home or data centre has got to be a good thing, for example. Over time we shall see these create new types of complexity, as we look to integrate individual silos of smartness (and their massive data sets) — my prediction is that such integration work will keep us busy for the next year or so, even as learning systems continue to evolve.

3. 5G will become just another expectation

Strip away the techno-babble around 5G and we have a very fast wireless networking protocol designed to handle many more devices than currently — it does this, in principle, by operating at higher frequencies, across shorter distances than current mobile masts (so we’ll need more of them, albeit in smaller boxes). Nobody quite knows how the global roll-out of 5G will take place — questions like who should pay for it will persist, even though things are clearer than they were. And so on and so on.

But when all’s said and done, it will set the baseline for whatever people use it for, i.e. everything they possibly can. Think 4K video calls, in fact 4K everything, and it’s already not hard to see how anything less than 5G will come as a disappointment. Meanwhile every device under the sun will be looking to connect to every other, exchanging as much data as it possibly can. The technology world is a strange one, with massive expectations being imposed on each layer of the stack without any real sense of needing to take responsibility.

We’ve seen it before. The inefficient software practices of 1990s Microsoft drove the need for processor upgrades and led Intel to a healthy profit, illustrating the industry’s vested interest in making the networking and hardware platforms faster and better. We all gain as a result, if ‘gain’ can be measured in terms of being able to see your gran in high definition on a wall screen from the other side of the world. But after the hype, 5G will become just another standard release, a waymarker on the road to techno-utopia.

On the upside, it may lead to a simpler networking infrastructure. More a hope than a prediction would be the general adoption of some kind of mesh integration between Wi-Fi and 5G, taking away the handoff pain for both people and devices that move around. There will always be a place for multiple standards (such as the energy-efficient Zigbee for IoT), but 5G’s physical architecture, coupled with software standards like NFV, may offer a better starting point than the current, proprietary-mast-based model.

4. Attitudes to autonomous vehicles will normalize

The good news is, car manufacturers saw this coming. They are already planning for that inevitable moment when public perception goes from “Who’d want robot cars?” to “Why would I want to own a car?” It’s a familiar phenomenon, an almost 1984-level of doublethink where people go from one mindset to another seemingly overnight, without noticing, in some cases disparaging the characters they once were. We saw it with personal computers, with mobile phones, with flat screen TVs — in the latter case, the world went from “nah, that’s never going to happen” to recycling sites being inundated with perfectly usable screens (and a wave of people getting huge cast-off tellies).

And so, over the next year or so, we will see self-driving vehicles hit our roads. What drives this phenomenon is simple: we know, deep down, that robot cars are safer — not because they are inevitably, inherently safe, but because human drivers are inevitably, inherently dangerous. And autonomous vehicles will get safer still. And they can pick us up at 3 in the morning and take us home.

The consequences will be fascinating to watch. First, attention will increasingly turn to brands — after all, if you are going to go for a drive, you might as well do so in comfort, right? We can also expect to see a far more varied range of wheeled transport (and otherwise — what’s wrong with the notion of flying unicorn deliveries?) — indeed, with hybrid forms, the very notion of roads is called into question.

There will be data, privacy, security and safety ramifications that need to be dealt with — consider the current ethical debate between leaving young people without taxis late at night and the possible consequences of sharing a robot Uber with a potential molester. And I recall a very interesting conversation with my son about who would get third or fourth dibs on the autonomous vehicle ferrying drunken revellers (who are not always the cleanest of souls) to their beds.

Above all, business models will move from physical to virtual, from products to services. The industry knows this, variously calling vehicles ‘tin boxes on wheels’ while investing in car sharing, delivery and other service-based models. Of course (as Apple and others have shown), good engineering continues to command a premium even in the service-based economy: competition will come from Tesla as much as Uber, or whatever replaces its self-sabotaging approach to world domination.

Such changes will take time but in the short term, we can fully expect a mindset shift from the general populace.

5. When Bitcoins collapse, blockchains will pervade

The concept that “money doesn’t actually exist” can be difficult to get across, particularly as it makes such a difference to the lives of, well, everybody. Money can buy health, comfort and a good meal; it can also deliver representations of wealth, from high street bling to Mediterranean gin palaces. Of course money exists, I’m holding some in my hand, says anyone who wants to argue against the point.

Yet, still, it doesn’t. It is a mathematical construct originally conceived to simplify the exchange of value, to offer persistence to an otherwise transitory notion. From a situation where you’d have to prove whether you gave the chap some fish before he’d give you that wood he offered, you can just take the cash and buy wood wherever you choose. It’s not an accident of speech that pound notes still say, “I promise to pay the bearer on demand…”

While original currencies may have been teeth or shells (happy days if you happened to live near a beach), they moved to metals in order to bring some stability in a rather dodgy market. Forgery remains an enormous problem in part because we maintain a belief that money exists, even though it doesn’t. That dodgy-looking coin still spends, once it is part of the system.

And so to the inexorable rise of Bitcoin, which has emerged from nowhere to become a global currency — in much the same way as the dodgy coin, it is accepted simply because people agree to use it in a transaction. Bitcoin has a chequered reputation, probably unfairly, given that our traditional dollars and cents are just as likely to be used for gun-running or drug dealing as any virtual dosh. It’s also a bubble that looks highly likely to burst, and soon — no doubt some pundits will take that as a proof point of the demise of cryptocurrency.

Their certainty may be premature. Not only will Bitcoin itself persist (albeit at a lower valuation), but the genie is already out of the bottle as banks and others experiment with the economic models made possible by “distributed ledger” architectures such as the blockchain supporting Bitcoin. Such models are a work in progress: the idea that a single such ledger can manage all the transactions in the world (financial and otherwise) is clearly flawed.

But blockchains, in general, hold a key as they deal with that single most important reason why currency existed in the first place — to prove a promise. This principle holds in areas way beyond money, or indeed, value exchange — food and pharmaceutical, art and music can all benefit from knowing what was agreed or planned, and how it took place. Architectures will evolve (for example with sidechains) but the blockchain principle can apply wherever the risk of fraud could also exist, which is just about everywhere.
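To make the “prove a promise” point concrete, here is a toy hash chain in Python. It is a sketch of the principle only, not of Bitcoin’s actual protocol (which adds proof-of-work, peer-to-peer consensus and much more): each block commits to its predecessor’s hash, so altering any earlier record is immediately detectable.

```python
# Toy hash chain: each block commits to the previous block's hash.
import hashlib
import json

def block_hash(record: str, prev_hash: str) -> str:
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(record: str, prev_hash: str) -> dict:
    return {"record": record, "prev_hash": prev_hash,
            "hash": block_hash(record, prev_hash)}

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["record"], b["prev_hash"]):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("Alice promises Bob ten fish", "0" * 64)]
chain.append(make_block("Bob promises Alice firewood", chain[-1]["hash"]))
print(verify(chain))   # True: both promises hold
chain[0]["record"] = "Alice promises Bob one fish"
print(verify(chain))   # False: the tampering is detectable
```

The same tamper-evidence applies whether the record is a payment, a pharmaceutical shipment or a music licence, which is why the principle travels so far beyond currency.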

6. The world will keep on turning

There we have it. I could have added other things — for example, there’s a high chance that we will see another major security breach and/or leak; augmented reality will have a stab at the mainstream; and so on. I’d also love to see a return to data and facts on the world’s political stage, rather than the current tub-thumping and playing fast and loose with the truth. I’m keen to see breakthroughs in healthcare from IoT, and I also expect some major use of technology that hadn’t been considered to arrive, enter the mainstream and become the norm — if I knew what it was, I’d be a very rich man. Even if money doesn’t exist.

Truth is, and despite the daily dose of disappointment that comes with reading the news, these are exciting times to be alive. 2018 promises to be a year as full of innovation as previous years, with all the blessings and curses that it brings. As Isaac Asimov once wrote, “An atom-blaster is a good weapon, but it can point both ways.”

On that, and with all it brings, it only remains to wish the best of the season, and of 2018 to you and yours. All the best!

 