Voices in AI – Episode 59: A Conversation with Tiger Tyagarajan


About this Episode

Episode 59 of Voices in AI features host Byron Reese and Tiger Tyagarajan talking about AI, augmented intelligence, and their use in the enterprise. Tiger Tyagarajan is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today I’m so excited my guest is Tiger Tyagarajan, he is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA. Welcome to the show Tiger.
Tiger Tyagarajan: Byron, great to be on the show, thank you.
So let’s start, tell me about GenPact, what your mission is and how it came about.
Our mission continues to be, Byron, to work with global enterprises in a variety of industries, to actually help them become more competitive in the markets they are in. We do that by actually helping them undertake change agendas—transformation agendas to drive value for them—either by helping them drive growth or better pricing, or better risk management or lower fraud, better working capital, better cash flow etc.
Our history goes back to when we were set up as a 100% subsidiary of the General Electric Company (GE) in the late 90s. Then in 2005, seven years into our existence, we spun off into a separate company so that we could serve other clients. Today we are about $3 billion in revenue, serving 700 clients across the globe. GE continues to be a big relationship of ours, but now accounts for less than 10% of our revenue, with everyone else accounting for the balance of more than 90%.
And tell me, you’re using artificial intelligence to achieve that mission in some cases. Can you talk about that, like what you’re doing?
So Byron, early days, I would say about five-plus years back, we came to the conclusion that digital was going to pretty dramatically change the way work gets done along many dimensions. We picked 12 different digital technologies to bring into the company, build capabilities around, and use to change the way a lot of our services get delivered and a lot of the way work gets done by our clients, and one of the ones we picked was artificial intelligence. Within the family of AI, we picked computer vision, computational linguistics, and machine learning, three examples that are very relevant to the kind of services we offer. We’ve gone down the path of building those capabilities, acquiring those capabilities, and partnering with other companies in the ecosystem on these capabilities, so that we can change the way work gets done and services get delivered in, I would say, a dramatic fashion that I suspect some of us could not have imagined.
Well, don’t just leave it there, give me an example of something dramatic that’s happened.
I’ll give you a couple. Some of the clients that we deal with are banks, and think about a bank that is in the business of small and medium business lending. So, a half-million-dollar lease for equipment, or a loan for equipment, to a mid-market company that is actually manufacturing a product somewhere in Ohio, etc. And the way the small business lending world works is that the customer gives the salesperson a bunch of documents, and these would be financial statements of the company, cash flows of the company, etc. A lot of those documents are produced by these companies in their own way; they are audited by a small audit firm somewhere in the vicinity, and therefore they are written up in different ways, with different accounting standards and so on.
Now when a bank receives it, typically they would have to change it to match their understanding of cash flow, the way they define it. They have to recast all the numbers, they have to read the footnotes, and then after a few days they have five questions to ask, so they go back to the customer, ask those questions, and finally [it] takes about 15 days, 20 days in some cases, to say, “Hey customer, I’ve given an approval for half a million dollars, go buy your equipment.”
Now, in today’s world, that is way too long. But if you bring in a combination of being able to read those documents, read unstructured data, read the language in the footnotes, and interpret it using computational linguistics that then converts it into a specific standard financial statement in the way that particular bank understands financial statements, the way their definition works… you could actually argue that the bank could take a decision in 30 minutes.
So think about the ability to tell a customer that your application for a loan to buy your equipment is approved in 30 minutes versus three weeks. I mean, that makes a huge difference to the small/medium enterprise, that makes a huge difference to their business and their ability to grow. And if you think about the U.S., small/medium enterprises are the backbone of this economy. We’re beginning to see the use of this in a number of banking relationships.
I would say it’s still early days, and I would say it could make a huge difference to the top line of the banks, to the pricing power of the banks, to the ability to actually satisfy your customer dramatically. I think that is a great example of the way the service changes, versus a human being spending a lot of their time parsing the data before they take a decision. In the end the decision, by the way, is still taken by the human being who brings their expertise, which is why, when we think about AI, it’s always a combination of man plus machine.
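To make the pipeline described above a bit more concrete, here is a minimal, hypothetical sketch in Python of the “recast and decide” step: line items extracted from a customer’s financial statements are mapped onto a bank’s standard format, and a crude cash-flow rule routes the application to auto-approval or to a human underwriter. Every name and threshold here is illustrative rather than GenPact’s actual system, and in practice the extraction itself would come from OCR and language models reading the documents and footnotes.

```python
# Hypothetical sketch of a document-to-decision lending pipeline (not GenPact's system).
from dataclasses import dataclass

# Illustrative mapping from the varied labels found in customer-prepared financials
# to the bank's own standard line items.
LINE_ITEM_MAP = {
    "net sales": "revenue",
    "turnover": "revenue",
    "cost of sales": "cogs",
    "cost of goods sold": "cogs",
    "depreciation & amortisation": "d_and_a",
    "depreciation and amortization": "d_and_a",
}

@dataclass
class Recast:
    revenue: float = 0.0
    cogs: float = 0.0
    d_and_a: float = 0.0

    @property
    def approx_cash_flow(self) -> float:
        # Crude proxy: gross profit plus non-cash charges.
        return (self.revenue - self.cogs) + self.d_and_a

def recast_statement(extracted_lines: dict[str, float]) -> Recast:
    """Map extracted line items (label -> amount) onto the bank's standard format."""
    recast = Recast()
    for label, amount in extracted_lines.items():
        standard = LINE_ITEM_MAP.get(label.strip().lower())
        if standard:
            setattr(recast, standard, getattr(recast, standard) + amount)
    return recast

def auto_decision(recast: Recast, requested_amount: float) -> str:
    """Toy underwriting rule: approve only if cash flow comfortably covers the request."""
    if recast.approx_cash_flow >= 0.5 * requested_amount:
        return "approve"
    return "refer to human underwriter"  # man + machine: the expert still decides edge cases

if __name__ == "__main__":
    # In practice these labels and amounts would come from OCR + NLP over uploaded documents.
    extracted = {"Turnover": 2_400_000, "Cost of sales": 1_500_000,
                 "Depreciation & amortisation": 120_000}
    print(auto_decision(recast_statement(extracted), requested_amount=500_000))
```

The point of the sketch is the division of labor Tiger describes: the machine standardizes and screens in minutes, while anything outside a clear-cut band is still referred to a human expert.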
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 57: A Conversation with Akshay Sabhikhi


About this Episode

Episode 57 of Voices in AI features host Byron Reese and Akshay Sabhikhi talking about how AI augments and informs human intelligence. Akshay Sabhikhi is the CEO and Co-founder of CognitiveScale. He has more than 18 years of entrepreneurial leadership, product development, and management experience with growth-stage venture-backed companies and high-growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT Austin and an MBA from the Acton School of Entrepreneurship.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Akshay Sabhikhi. He is the CEO and Co-founder of CognitiveScale. He has more than 18 years of entrepreneurial leadership, product development, and management experience with growth-stage venture-backed companies and high-growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM’s leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT Austin and an MBA from the Acton School of Entrepreneurship. Welcome to the show, Akshay.
Akshay Sabhikhi: Thank you Byron, great to be here.
Why is artificial intelligence working so well now? I mean like, my gosh, what has changed in the last 5-10 years?
You know, everyone knows artificial intelligence has been around for decades, but the big difference this time, as I like to say, is that there’s a whole supporting cast of characters that’s making AI really come into its own. And it all starts with the fact that it’s delivering real value to clients, so let’s dig into that.
Firstly, data is the fuel for AI, and we all know the amount of information we’re surrounded with; we certainly hear about big data all over the place. It’s the amount and the volume of the information, but it’s also systems that are able to interpret that information. The type of information I’m talking about is not just your classic databases, nice neatly packaged structured information; it is highly unstructured and messy information that includes, you know, audio, video, certainly different formats of text, images, right? And our ability to really bring that data together and reason over that data is a huge difference.
A second big supporting character here is the prominence of social, and I say social because of the amount of data that’s available through social media, where we can see in real time how consumers behave. Or whether it is mobile, and the fact that you have devices now in the hands of every consumer, so you have touch points where insights can be pushed out. Those are the different, I guess, supporting cast members that are now there which didn’t exist before, and that’s one of the biggest changes behind the prominence and true, sort of, value people are seeing with AI.
And so give us some examples, I mean you’re at the forefront of this with CognitiveScale. What are some of the things that you see that are working that wouldn’t have worked 5-10 years ago?
Well, so let’s take some examples. We use an analogy: we’ve all sort of used Waze as an application to get from point A to point B, right? When you look at Waze, it’s a great consumer tool that tells you exactly what’s ahead of you: a cop, traffic, debris on the road and so on, and it guides you through your journey, right? Well, if you apply a Waze-like analogy to the enterprise where you have a patient, and I’ll use a patient as an example because that’s how we started the company: you’re largely unmanaged, all you do is show up to your appointments, you get prescriptions, you’re told about your health condition, but then once you leave that appointment you’re pretty much on your own, right? But think about everything that’s happening around you. Think about social determinants, for example: the city you live in, whether you live in the suburbs or downtown, the weather patterns, the air quality, such as the pollen counts, or allergens that affect you, or whether it is a specific zip code within the city that tells us about the food choices that exist around you.
There are a lot of determinants that go well beyond the pure, sort of, structured information that comes from an electronic medical record. If you bring all of those pieces of data together, an AI system is able to look at that information, the biggest difference here being the context of the consumer, in this case the patient, and surface unique insights to them. But it doesn’t stop right there. What an AI system does is take it a step or two further by saying, “I’m going to push insights based on what I’ve learned from the data that surrounds you, and hopefully it makes sense to you. And I will give you the mechanisms to provide a thumbs up/thumbs down or specific feedback that I can then incorporate back into the system to learn from it.” So that’s a real-life example of an AI system that we’ve stood up for many of our clients, bringing together various kinds of structured and unstructured information.
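As a rough illustration of the loop described above, here is a small, hypothetical Python sketch: structured EMR data and contextual signals (weather, pollen, neighborhood) feed a set of candidate insights, the insights are ranked, and thumbs-up/thumbs-down feedback nudges future rankings. The rules, weights, and function names are invented for illustration and are not CognitiveScale’s actual product.

```python
# Hypothetical sketch of an insight-and-feedback loop (not CognitiveScale's product).
from collections import defaultdict

# Ranking weights per insight type, nudged over time by user feedback.
weights: defaultdict[str, float] = defaultdict(lambda: 1.0)

def candidate_insights(patient: dict, context: dict) -> list[dict]:
    """Generate candidate insights from EMR data plus contextual signals."""
    insights = []
    if "asthma" in patient.get("conditions", []) and context.get("pollen_count", 0) > 7:
        insights.append({"type": "air_quality",
                         "text": "Pollen is high today; consider limiting outdoor time."})
    if context.get("healthy_food_options", 0) < 3:
        insights.append({"type": "nutrition",
                         "text": "Few healthy food options nearby; here are some alternatives."})
    return insights

def rank(insights: list[dict]) -> list[dict]:
    """Order insights by their learned weights, highest first."""
    return sorted(insights, key=lambda i: weights[i["type"]], reverse=True)

def record_feedback(insight: dict, thumbs_up: bool) -> None:
    """Fold thumbs-up/thumbs-down back into the weights (a deliberately simple learning step)."""
    weights[insight["type"]] += 0.1 if thumbs_up else -0.1

if __name__ == "__main__":
    patient = {"conditions": ["asthma"]}                      # structured EMR data
    context = {"pollen_count": 9, "healthy_food_options": 2}  # contextual feeds (weather, zip code)
    ranked = rank(candidate_insights(patient, context))
    print(ranked[0]["text"])
    record_feedback(ranked[0], thumbs_up=True)                # air_quality insights rank higher next time
```

The key design point is the feedback arrow: every pushed insight carries a mechanism for the user to respond, and that response flows back into how future insights are ranked.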
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.