Voices in AI – Episode 76: A Conversation with Rudy Rucker

[voices_in_ai_byline]

About this Episode

Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker discussing the future of AGI, the metaphysics involved in AGI, and whether the future will be for humanity’s good or ill. Rudy Rucker is a mathematician and computer scientist as well as a writer of fiction and nonfiction; the first two books in his Ware Tetralogy won Philip K. Dick Awards.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick Awards. Welcome to the show, Rudy.
Rudy Rucker: It’s nice to be here Byron. This seems like a very interesting series you have and I’m glad to hold forth on my thoughts about AI.
Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?
Well a good working definition has always been the Turing test. If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent.
So it has to master conversation? It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI?
No, those other things are also a big part of it. You’d want it to be able to write a novel, ideally, or to develop scientific theories—to do the kinds of things that we do, in an interesting way.
Well, let me try a different tack, what do you think intelligence is?
I think intelligence is having a sort of complex interplay with what’s happening around you. You don’t want the old cliché of the robotic voice, or the screen of capital letters that can’t even use contractions: “do not help me.” You want something that’s flexible and playful in its intelligence. Even in movies, when you look at the actors, you often get a sense that this person is deeply unintelligent or that this person has an interesting mind. It’s a richness of behavior, a sort of complexity that engages your imagination.
And do you think it’s artificial? Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any, there’s no one actually home?
Right, well I think the word artificial is misleading. You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation. Once you have universal computation, it seems like in principle you might be able to get intelligent behavior emerging even if it’s not programmed. So then it’s not clear that there’s some bright line that separates human intelligence from the rest of intelligence. I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something we could bring into being, either by designing it or, probably more likely, by evolving it in a laboratory setting.
So, on the Stephen Wolfram thread, his view is everything’s computation and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in there is essentially a giant clockwork running its program, and it’s all really computational equivalence, it’s all kind of the same in the end, do you ascribe to that?
Yeah I’m a convert. I wouldn’t use the word ‘clockwork’ that you use because that already slips in an assumption that a computation is in some way clunky and with gears and teeth, because we can have things—
But it’s deterministic, isn’t it?
It’s deterministic, yes, so I guess in that sense it’s like clockwork.
So Stephen believes, and you hate to paraphrase something as big as like his view on science, but he believes that everything is—not a clockwork, I won’t use that word—but everything is deterministic. But, even the most deterministic things, when you iterate them, become unpredictable, and they’re not unpredictable inherently, like from a universal standpoint. But they’re unpredictable from how finite our minds are.
They’re in practice unpredictable?
Correct.
So, take a lot of natural processes. In Physics I, you say, “Oh, I can predict where an artillery shot is going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds.” Then when you get into reality, shells don’t actually travel on perfect parabolas; they follow an odd-shaped curve due to air friction, which isn’t linear; it depends how fast they’re going. And then you slip into saying, “Well, I really would have to simulate this.”
And then when you get into saying you have to predict something by simulating the process, then the event itself is simulating itself already, and in practice, the simulation is not going to run appreciably faster than just waiting for the event to unfold, and that’s the catch. We can take a natural process and it’s computational in the sense that it’s deterministic, so you think well, cool, I’ll just find out the rule it’s using and then I’ll use some math tricks and I’ll predict what it’s going to do.
For most processes, it turns out there aren’t any quick shortcuts. That goes back to Alan Turing, who proved that you can’t effectively get extreme speed-ups of universal processes. So then we’re stuck with saying, maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, “Well, we’re not like deterministic processes, because nobody can predict what we do.” The thing is, if you get a really good AI program running at its top level, you’re not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability is enough.
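Rucker’s artillery example can be made concrete. In a vacuum the range of a projectile has a closed-form answer, but once you add speed-dependent air drag there is no simple formula, and you end up stepping the dynamics forward in time, i.e., simulating the event itself. Here is a minimal sketch in Python; the drag coefficient and step size are illustrative values, not physical constants for any real shell:

```python
import math

def range_no_drag(v0, angle_deg, g=9.81):
    """Closed-form range of a projectile in a vacuum: a perfect parabola."""
    a = math.radians(angle_deg)
    return v0**2 * math.sin(2 * a) / g

def range_with_drag(v0, angle_deg, g=9.81, k=0.01, dt=1e-3):
    """Range with quadratic air drag: no closed form, so we step the ODE."""
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt            # drag opposes motion, grows with speed
        vy -= (g + k * speed * vy) * dt      # gravity plus the vertical drag component
        x += vx * dt
        y += vy * dt
    return x

vacuum = range_no_drag(100, 45)
dragged = range_with_drag(100, 45)
print(f"vacuum: {vacuum:.0f} m, with drag: {dragged:.0f} m")
```

The back-of-the-envelope formula answers instantly; the drag version has to grind through thousands of small time steps, which is the “no shortcut” situation the conversation describes.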
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 75: A Conversation with Kevin Kelly

[voices_in_ai_byline]

About this Episode

Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly discussing the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability. Kevin has written books such as New Rules for the New Economy, What Technology Wants, and The Inevitable. Kevin also co-founded Wired, a print and online magazine of technology and culture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I am so excited we have as our guest, Kevin Kelly. You know, when I was writing the biography for Kevin, I didn’t even know where to start or where to end. He’s perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things in an amazing career [path]. He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future. I’m super excited to have him on the show. Welcome, Kevin.
Kevin Kelly: It’s a real delight to be here, thanks for inviting me.
So what is inevitable?
There’s a hard version and a soft version, and I kind of adhere to the soft version. The hard version is kind of a total deterministic world in which if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, still leaving the particulars, the specifics, the species to be completely, inherently, unpredictable and stochastic and random.
So that would say, for instance, that on any planet that has water and life, you’ll find fish. If you rewound the tape of life you’d probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology: any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. The internet’s inevitable, but Google’s not. AI’s inevitable, but the particular variety or character, the specific species of AI, is not. That’s what I mean by inevitable—that there are these biases, built in by the very nature of chemistry and physics, that will bend things in certain directions.
And what are some examples of those that you discuss in your book?
So, technology is basically an extension of the same forces that drive life; technology is a kind of accelerated evolution. So if you ask what the larger forces in evolution are: we have a movement towards complexity; we have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex.
The idea that there’s any kind of simplification going on in technology is completely erroneous. It’s not that the iPhone is any simpler; it has a simple interface. It’s like an egg: a very simple interface, but very complex inside. The inside of an iPhone continues to get more and more complicated, so there is a drive that, all things being equal, technology will become more complex and more and more specialized.
So, in the history of photography there was at first one kind of camera. Then there was a special kind of camera for high speed; maybe another kind that could go underwater; maybe a kind that could do infrared; and eventually we would make a high-speed, underwater, infrared camera. So all these things become more and more specialized, and that’s also going to be true of AI: we will have more and more specialized varieties of AI.
So let’s talk a little bit about [AI]. Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence? And in what sense is AI artificial?
Yes. So the big hairy challenge with that question is that we humans, collectively as a species at this point in time, have no idea what intelligence really is. We think we know it when we see it, but we don’t really, and as we try to make artificial, synthetic versions of it, we are coming up again and again against the realization that we don’t really know how it works or what it is. Our best guess right now is that there are many different subtypes of cognition that interact with and depend on each other, and that collectively form the total output of our minds, and of course of other animal minds. So I think the best way to think of this is that we have a ‘zoo’ of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world.
That collection is still being mapped, and we know that there’s something like symbolic reasoning. We know there’s deductive logic; there’s something about spatial navigation as a kind of intelligence; there’s mathematical-type thinking; there’s emotional intelligence; and there’s perception. So far, all the AI that we have been ‘wowed’ by in the last five years is really a synthesis of only one of those types of cognition: perception.
So all the deep learning neural net stuff that we’re doing is really just varieties of perception: perceiving patterns, whether they’re audio patterns or image patterns. That’s really as far as we’ve gotten. But there are all these other types, and in fact we don’t even know what all the varieties of types [are]. We don’t know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope we need to look into our minds and figure out how they work. So it’s not just that we’re creating artificial minds; it’s that that creation—that process—is the scope we’re going to use to discover what our minds are made of.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 71: A Conversation with Paul Daugherty

[voices_in_ai_byline]

About this Episode

Episode 71 of Voices in AI features host Byron Reese and Paul Daugherty discussing transfer learning, consciousness and Paul’s book “Human + Machine: Reimagining Work in the Age of AI.” Paul Daugherty holds a degree in computer engineering from the University of Michigan and is currently the Chief Technology and Innovation Officer at Accenture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show Paul.

Paul Daugherty: It’s great to be here, Byron.

Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college and that was a quarter of a century or more ago. Having seen the company grow… What has that journey been like?

Thanks for dating me. Yeah, it’s actually been 32 years, so I guess I’m going on a third of a century; I joined Accenture back in 1986, and the company’s evolved in many ways since then. It’s been an amazing journey, because the world has changed so much since then, and a lot of what’s fueled that change has been what’s happened with technology. [In] 1986 the PC was brand new, and we went from that to networking and client-server and the Internet, cloud computing, mobility, the internet of things, artificial intelligence and the things we’re working on today. So it’s been a really amazing journey, fueled by the way the world’s changed and enabled by all this amazing technology.

So let’s talk about that, specifically artificial intelligence. I always like to get our bearings by asking you to define either artificial intelligence or if you’re really feeling bold, define intelligence.

I’ll start with artificial intelligence, which we define as technology that can sense, think, act and learn. [It’s] systems that can do that. Sense: like vision in a self-driving car. Think: making decisions on what the car does next. Act: actually steering the car. And learn: continuously improving behavior. So that’s the working definition we use for artificial intelligence, and I sometimes describe it more simply as technology with more human-like capability, approximating the things we’re used to assuming only humans can do: speech, vision, predictive capability and some things like that.
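The sense-think-act-learn cycle described here can be sketched as a toy control loop. The code below is purely an illustration of the four stages; the class, method names and numbers are hypothetical, not Accenture’s framework or any vendor’s API:

```python
class SenseThinkActLearnAgent:
    """Toy loop illustrating the sense-think-act-learn definition of AI.

    Everything here is illustrative: a single learned parameter stands in
    for a real model, and the 'car' is reduced to one brake/cruise choice.
    """

    def __init__(self, threshold=5.0):
        self.threshold = threshold  # the one parameter this agent 'learns'

    def sense(self, raw_reading):
        # Sense: turn a raw input (say, a distance-sensor string) into an observation.
        return float(raw_reading)

    def think(self, observation):
        # Think: decide what to do next based on the observation.
        return "brake" if observation > self.threshold else "cruise"

    def act(self, decision):
        # Act: emit a control signal corresponding to the decision.
        return {"brake": -1.0, "cruise": 0.0}[decision]

    def learn(self, feedback):
        # Learn: continuously adjust the decision boundary from feedback.
        if feedback == "too_cautious":
            self.threshold += 0.5
        elif feedback == "too_aggressive":
            self.threshold -= 0.5


agent = SenseThinkActLearnAgent()
observation = agent.sense("6.2")
decision = agent.think(observation)   # "brake", since 6.2 > 5.0
control = agent.act(decision)         # -1.0
agent.learn("too_cautious")           # threshold moves from 5.0 to 5.5
```

The point of the sketch is only the shape of the cycle: perception feeds a decision, the decision produces an action, and feedback on the action updates future decisions.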

So that’s the way I define artificial intelligence. Intelligence I would define differently, and more broadly. I’m not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to reason and comprehend, and then to extrapolate and generalize, across many different domains of knowledge. And that’s what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into. Because the fact that we call this body of work “artificial intelligence” leads to misleading perceptions of what we’re really doing: both the word artificial and the word intelligence.

So, expand that a little bit. You said that’s the way you think human intelligence is different from artificial intelligence. Put a little flesh on those bones: in exactly what way do you think it is?

Well, you know, the techniques we’re really using today for artificial intelligence are generally from the branch of AI around machine learning: machine learning, deep learning, neural nets, etc. And it’s a technology that’s very good at recognizing patterns in data, at learning from observed behavior, so to speak. It’s not necessarily intelligence in a broad sense; it’s the ability to learn from specific inputs. You can think of that almost as an idiot-savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world’s Go master, but that same program wouldn’t know how to generalize and play me in tic-tac-toe. And that ability, the ability to generalize and extrapolate rather than interpolate, is what differentiates human intelligence. The thing that would bridge that gap is artificial general intelligence, which we can get into a little bit, but we’re not at that point. We’re at a point of artificial intelligence that can mimic very specific, very specialized, very narrow human capabilities, but it’s not yet anywhere close to human-level intelligence.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 65: A Conversation with Luciano Floridi

[voices_in_ai_byline]

About this Episode

Episode 65 of Voices in AI features host Byron Reese and Luciano Floridi discussing ethics, information, AI and government monitoring. They also dig into Luciano’s new book “The Fourth Revolution” and ponder how technology will disrupt the job market in the days to come. Luciano Floridi holds multiple degrees, including a PhD in philosophy and logic from the University of Warwick. Luciano is currently a professor of philosophy and ethics of information, and the director of the Digital Ethics Lab, at the University of Oxford. Along with his responsibilities as a professor, Luciano is also the chair of the Data Ethics Group at the Alan Turing Institute.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today our guest is Luciano Floridi. He is a professor of philosophy and ethics of information, and the director of the Digital Ethics Lab, at the University of Oxford. In addition, he is the chair of the Data Ethics Group at the Alan Turing Institute. Among multiple degrees, he holds a PhD in philosophy and logic from the University of Warwick. Welcome to the show, Luciano.
Luciano Floridi: Thank you for having me over.
I’d like to start with a simple question which is: what is intelligence, and by extension, what is artificial intelligence?
Well, this is a great question, and I think one way of getting to a decent answer is to try to understand what the lack of intelligence is, so that you recognize intelligence by spotting when it isn’t around.
So, imagine you are, say, nailing something to the wall, and all of a sudden you hit your finger. Well, that was stupid, a lack of intelligence; it would have been intelligent not to do that. Or imagine that you get all the way to the supermarket and you’ve forgotten your wallet, so you can’t buy anything. That was also stupid; it would have taken intelligence to bring your wallet. You can multiply that by, shall we say, a million cases in which you can be (or, to be more personal, I can be) stupid, and therefore intelligent by doing the opposite.
So intelligence is, shall we say, sometimes a way of coping with the world that is effective, successful, but it can also be so many other things. It is intelligent not to talk to your friend about the wrong topic, because it’s not the right day. It is intelligent to make sure that at the party you organize, you don’t invite Mary and Peter, because they can’t stand each other.
The truth is that we don’t have a definition for intelligence, or, vice versa, for the lack of it. But at this point I can recycle an old line from a justice of the Supreme Court (I’m sure everyone listening to or reading this knows it very well) who, asked for a definition of pornography, said: “I don’t have one, but I recognize it when I see it.” I think that sounds about right: we know when we’re talking to someone intelligent on a particular topic, we know when we are doing something stupid in a particular circumstance, and I think that’s the best we can do.
Now, let me just add one last point, in case someone says, “Oh, well, isn’t it funny that we don’t have a definition for such a fundamental concept?” No, it isn’t. In fact, most of the fundamental concepts that we use, or experiences we have, don’t have a definition. Think about friendship, love, hate, politics, war, and so on. You start getting a sense of: okay, I know what we’re talking about, but this is not like water being equal to H2O; it’s not like a triangle being a plane figure with three sides and three angles. We’re not talking about simple objects that we can define in terms of necessary and sufficient conditions; we’re talking about having criteria to identify what it looks like to be intelligent, what it means to behave intelligently. So, if I really have to go out of my way and provide a definition: intelligence is nothing; everything is about behaving intelligently. Let’s use an adverb instead of a noun.
I’m fine with that. I completely agree that we have all these words: “life” doesn’t have a consensus definition, “death” doesn’t have a consensus definition, and so forth, so I’m fine with leaving it in a gray area. That being said, I do think it’s fair to ask how big a deal it is. Is it a hard, rare thing, or is it everywhere? If your definition is about coping with the world, then plants are highly intelligent, right? They grow towards light, they extend their roots towards water; they really cope with the world quite well. And if plants are intelligent, you’re setting a really low bar, which is fine, but then intelligence permeates everything around us.
That’s true. I mean, you can even say: look at the way the river goes from this point to that point and reaches the sea through the shortest possible path; that looks intelligent. Remember, there was a past in which, precisely for this reason among many others, we thought plants were some kind of gods, and the river was a kind of god: that there was intelligent, purposeful, meaningful, goal-oriented activity there, and not simply good adaptation, some mechanism of cause and effect. So what I want to detach here, so to speak, is our perception of what something looks like from what it actually is.
Suppose I go back home and I find that the dishes have been cleaned. Do I know whether the dishes were cleaned by the dishwasher or by, say, my friend Mary? Looking at the dishes, I cannot; they’re all clean, so the output looks pretty much the same. But of course the two processes were very different. One required some intelligence on Mary’s side; otherwise she would break things, waste soap, and so on. The other was a simple dishwashing machine: zero intelligence, as far as I’m concerned, of the kind we’ve been discussing, which goes back to the gray area, the pornography example, and so on.
I think what we can do here is to say: look, we’re really not quite sure what intelligence means. It has a thousand different meanings we can apply to this and that; if you really want to be inclusive, even a river is intelligent, why not? The truth is that when we talk about our intelligence, we have some kind of meter, a criterion to measure by, and we can say, “Look, this thing is intelligent, because had it been done by a human being, it would have required intelligence.” So we say, “Oh, that was a smart way of doing things,” because had that been left to a human being, it would have required someone pretty smart.
I mean, chess is a great example today. My iPhone is as idiotic as my grandmother’s fridge, with zero intelligence of the sort we’ve been discussing here, and yet it plays better chess than almost anyone I can possibly imagine. Meaning? Meaning that we have managed to detach the ability to pursue a particular goal, and to successfully implement a process, from the need to be intelligent. It doesn’t have to be intelligent to be successful.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
[voices_in_ai_link_back]
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Facebook Beta Launches Work Chat Application

Late last week, Facebook quietly made its entry into the work chat (enterprise real-time messaging) arena with the very limited release of its appropriately named Work Chat application. There was no announcement in the Facebook Newsroom; the app simply showed up in the Google Play store and was called out in a TechCrunch article. Work Chat is available only for Android devices for now; an iOS version is in development and expected to be available soon.
Work Chat is the corporate equivalent of Facebook Messenger. The applications appear to have the same user experience and feature set, although TechCrunch noted that Work Chat allows individuals to temporarily turn off notifications, so as not to be disturbed on vacation or when other personal activities take priority.
Work Chat is intended solely for organizations that are Facebook at Work customers. Anyone can download and install the app, but it will not work without a Facebook at Work login.
Facebook at Work is still in closed beta, so very few companies and individuals will be able to use Work Chat today.

Is This a Market Disruptor?

While it’s impossible to gauge the actual market impact of Facebook’s Work Chat at this point, we can draw some conclusions about its potential effect. First, it will boost awareness of, and interest in, chat-based real-time communication tools in organizations of all sizes. Individuals who use Facebook and its Messenger app in their personal lives will push their IT departments to consider the Facebook at Work and Work Chat combination.
In all likelihood, many organizations will try Work Chat, at least in a pilot implementation. It’s been reported that Facebook at Work will be available in a free version that will likely have a limited feature set and support. If that is true and the same applies to Work Chat, then a company’s cost to try the app is negligible.
Facebook’s land-and-expand strategy for enterprise sales may indeed work and, if it does, Work Chat would likely be swept along with the tide of Facebook at Work adoption. Facebook has said that some of the roughly 300 companies in the Facebook at Work trial program intend to scale its use next year. Heineken has already grown its user base from 40 to 550. Royal Bank of Scotland plans to have 30,000 employees on the platform by the end of Q1 2016 and aims to roll it out to all 100,000 employees before the end of the year.
It is entirely possible that Work Chat will see those kinds of adoption numbers as well, resulting in a decent share of the enterprise real-time chat market segment for Facebook. Other vendors of communication and collaboration platforms, suites, and applications should not dismiss the potential impact that Facebook at Work and Work Chat could have on their revenue streams. If Facebook can build an enterprise sales capacity and execute well, it will become a formidable competitor.

Farhad Manjoo on Apple’s Antigravity

Farhad Manjoo points out that Apple seems to be the counterexample to the disruption mantra that has become foundational to the business weltanschauung of our time: expensive tech with high margins is supposed to be disrupted by lower-cost, lower margin upstarts. That’s Clay Christensen’s disruption theory, which is taken as a given in most circles. (Jill Lepore and others have exploded Christensen’s dogma, but that has had little effect: the model is persuasive, even though overly simplistic.)

Here’s Farhad’s take:

In many fundamental ways, the iPhone breaks the rules of business, especially the rules of the tech business. Those rules have more or less always held that hardware devices keep getting cheaper and less profitable over time. That happens because hardware is easy to commoditize; what seems magical today is widely copied and becomes commonplace tomorrow. It happened in personal computers; it happened in servers; it happened in cameras, music players, and — despite Apple’s best efforts — it may be happening in tablets.

In fact, commoditization has wreaked havoc in the smartphone business — just not for Apple. In the last half-decade, sales of devices running Google’s Android operating system have far surpassed sales of Apple’s devices, and now account for the vast majority of smartphones in use.

For years, observers predicted that Android’s rising market share would in turn lead to lower profits for Apple (profits, not market share, being the point of business). If that had happened, it would have roughly approximated the way the Windows PC industry eclipsed Apple’s Mac business. “Hey, Apple, wake up — it’s happening again,” Henry Blodget, of Business Insider, warned in 2010. And again in 2011, 2012, 2013 and 2014.

None of those predictions came true. While the iPhone’s sales growth slowed in 2013 and 2014, it rebounded to near-record levels later last year, and its profits have remained lofty.

Leaving Blodget’s endless folly aside, what are we to make of Apple’s denial of gravity: the company’s stranglehold on profits in a world that should want to pay less for what should be a commodity, but somehow continues not to?

Disruption, Innovation or Process Model Change? Why Banks Are A Great Example of Every Firm’s Dilemma

The debate

What do companies really need to do to succeed over the next five to ten years – and give yourself some strategic latitude here. Is it more innovation, more social communications in the enterprise? The need to find more creative responses to disruption? Or is it the bogey most firms fear most: deep process reengineering?
In this update we’ll look at the case of banks and conclude that, sorry, the future is all about process model innovation, or some kind of BPR. Process model innovation requires companies once again to look deep into how they do business and redesign how their people execute on company objectives.
The new process model is the business platform and executives in finance are beginning to realize they need one too.

The E2.0 Era and Social Business

Over the past decade an abundance of literature told us the real answer to productivity issues and workplace performance was “social”. In a variety of guises, “social” has been the modern-day, gentler business process reengineering meme.
There are various definitions: “Enterprise 2.0 is the strategic integration of Web 2.0 technologies into an enterprise’s intranet, extranet and business processes.” Yes, it represented a change in process but a manageable one for the folks affected. Enterprise 2.0 is “the use of emergent social software platforms within companies, or between companies and their partners or customers.”
This was change without layoffs, an answer to silos without process redesign.
Gigaom was on top of it from the start (see The Future of Work Platforms and the discussion around Technology and The Future of Work) and is one of the few places that maintained a critical viewpoint. The challenge for banks is that no amount of socialising the enterprise will provide an answer to the disruption they currently face. If that’s the case for banks, it’s also the case for many other sectors. And the reasons go deep.

The Changing Economy

Banks, as we know, are more susceptible to global economic change than other companies. They had a bull run during the period 1995–2005, some would say on the back of collateralised derivatives but, perhaps more pertinently, on the back of a long run of increased global trade that was closely tied to growth in gross domestic product in major economies like the USA. The 1980s and 1990s were the era of booming trade in goods. That era is over, according to a recent working paper from IMF staffers.
The key reason? The relationship between US growth and Chinese growth is broken. That conjoined growth is fixed in most people’s minds by the Apple-Foxconn relationship: Apple creates the IP and design values; Foxconn builds and ships. While nothing seems able to derail Apple, a new reality is emerging: in many emerging markets, local manufacturers and suppliers are beating out the multinationals.
The graph below shows the bald truth of China’s changing position as an importer of parts for onward manufacture and shipping.
Figure 1. China’s Share of Imports of Parts and Components in Exports of Merchandise and Manufacturing (percent)


The evidence supports the idea that globally we will trade less. However there is a parallel development. Small companies are trading more – a whole lot more.
Evidence for the internationalization of small business trade comes from a variety of sources.
Figure 2. Small Business Internationalisation to end 2016


The figure above is supported by evidence from surveys by the World Trade Organisation and by companies like DHL that have a stake in small-package international trade. According to DHL:

81% of high-performing SMEs trade internationally, where high-performing means three years of 10%+ growth in the OECD countries, or 20% annual growth over three years in the BRICM (BRIC plus Mexico) countries. Much of that performance is attributable to a range of international activities: import, export, partnering, sub-contracting and so on. And 60% of these companies expect to increase their international activities to a total of 20% of all turnover over the coming three years.

So here is the problem for banks. In fact it is a generic problem for companies looking a few years out.
Their large customers are being beaten out of emerging markets by local competitors because there are too few supportive financial functions for western companies in, say, the second cities of Vietnam or the third cities of China. It is tough to get the credit checks, to provide the merchant credit, to find the data on consumption patterns: all the things that go into a western market-entry campaign. So much so that Nestlé is less able to sell its ice cream, or P&G its detergent.
Their small customers are meanwhile expanding into new markets. Traditionally these are precisely the customers that banks don’t want to lend to: sub-$1 million loans are not cost-effective for relationship managers.
Leaving aside the esoteric discussion banks are currently having about Bitcoin and distributed ledger technology, banks are losing credibility with customers. This is not necessarily a fatal position, but it does call for substantial process change.

The case for banks as platforms

Banks will have to gravitate towards platforms if they are to serve small businesses and to improve their relationships with larger ones. The first matters because the rosy future (growth) lies with the small businesses that are internationalising. The second is their mainstay: it’s where the big-money mandates come from.
In the case of small businesses, banks typically have too high a cost base for serving small business needs, depending as they do on real-world relationship building, high margins and outdated credit scoring. New limited-feature platforms like Cashflower can help with transparent cash management, and platforms like PayPal are stepping in with working-capital support.
Platforms like Alibaba fund the customer, the merchant and the manufacturer, hive off cash to fund managers, and provide escrow to secure trust. US and European innovators have a long way to go to be competitive in this area.
For larger customers, banks need to adapt the integrated model that Alibaba has proven: provide cash visibility, offer new foreign exchange management services, innovate in credit references, and devise other services that will make more trade happen in more localities.

The Process Model Innovation Challenge

Here’s the challenge. Most people who do not understand the world of platforms think Uber. That’s essentially an upgrade of the late-1990s application-service-provider, thin-client model of platform: it has people raving and VC money gushing, but it is not era-defining. Integration is era-defining.
For banks to move to platforms they have to look beyond Uber and the two-sided market model. They need to think about how to deal with 10x the number of customers on a small business platform, how to engage the developers who might innovate around it, how to brand a platform with no ethnic or nationalist legacy, and how to promote that brand inspirationally. They need to learn new traction rules. They have to go beyond departments to a Netflix-style internal platform configuration so that they can move with agility to solve problems as they scale.
In that context E2.0 makes sense too, because suddenly everything you want people to do comes back to how they communicate, project the brand and engage people online at low unit cost.
The challenge, though, is how to get a departmentally siloed organisation with a traditionally minded executive team to see itself owning a platform that requires a totally new skill set. Some thoughts on that are wrapped up in the takeaways below, but first, a final piece of evidence to ponder.
In a recent study of innovation capability among banks, conducted by The Disruption House, we found leadership to be the single biggest deficit of these august organisations. The graph compares the top 10 companies in terms of innovation capability from a 2013 study of innovation across all sectors (blue line) with a 2015 study of financial institutions, including companies like Alibaba (red line), with the top 10 banks in the same study (yellow line), and with the top 10 Globally Systemically Important Banks (G-SIBs, green line). It shows that the G-SIBs have the lowest innovation leadership capability.
Figure 3. Comparing Financial Institution Innovation Capability


Takeaways

  1. Large scale economic change is upon us, even if we set aside any notion of some new technology or idea being disruptive. The old supply chain complexity model is flatlining.
  2. The momentum lies with smaller businesses, which need different types of support.
  3. Platforms are the answer, but we are over-seduced by the Uber model and should be looking instead to integrated platform models.
  4. To do that, large organisations, like banks, need to develop an inter-generational leadership dialogue.
  5. The inter-generational leadership dialogue is a forum that large banks, and their peers in other industries, now urgently need in order to explore options openly. Optionality as a strategic tool was explored in this Gigaom paper. Process model innovation is an imperative, but the people who have to make it succeed are likely not the ones who commission the change. That is a difficult pill to swallow for many banks, but leadership has to be a shared vision.

Apple is accelerating its enterprise push

New evidence is emerging that shows Apple is doubling down on its push into the enterprise software sector.

This past week, at a Goldman Sachs conference, Tim Cook was very clear about the partnership between Apple and IBM when responding to a question from Gary Cohn, the president of Goldman Sachs.

Cohn: How important is IBM partnership? What makes it interesting for two titans?

Cook: IBM has very deep knowledge of a number of verticals, we don’t have that. IBM has field expertise, we don’t have that. Apple has devices people want, programming languages easy to write for. IBM doesn’t have that. We want to change the way people work. So when we thought how do we do this, we realized we didn’t know enough and didn’t have all of these people on the street and didn’t have all of these engineers to write these unique apps. But we knew we needed them at the enterprise level. Presentation, word processors, those are general. But when you start talking about tools for the pilot, the nurse, the banker, you need unique apps.

So we knew we needed to partner with someone. We looked around, and it became clear IBM would be an outstanding partner. The relationship there is good, the teams work well together, and the teams are very complementary. Enterprise has not moved nearly as far as consumer. Kind of like experience for digital kids. Enterprise is like this…you live one way at home and then turn back the clock when you go to work. It doesn’t have to be like that.

So, I admit I was surprised to hear that Apple went looking for a partner to attack the enterprise, as opposed to IBM approaching Apple.

Other evidence of a full Apple attack on the enterprise?

  1. UBS analyst Steve Milunovich reports — without citing sources — that Apple is going beyond the initial IBM/Apple focus on iOS business apps: “Now we hear that IBM will be adding horizontal apps as well over time, such as supply chain capabilities. IBM has been backing into applications through its SaaS acquisitions and analytics expertise; this could be powerful.”
  2. Apple is hiring enterprise sales reps.
  3. Rumors of a larger format iPad (iPad Pro?), presumably more attractive to enterprise users.
  4. Apple announced on 13 February that the iWork for iCloud beta now works on Linux boxes, Google Chromebooks, and — cue the drumroll — Microsoft Windows PCs, as well as on Apple hardware. And it’s free. This is a very-late-in-the-day attempt by Apple to counter the dominance of Office and Office 365 on Windows devices.

So Apple — which just posted the best quarter of any company ever, making $6 billion in profit per month — is going to invest some serious bank to expand dramatically in the enterprise. Perhaps Cook & Co. are looking to additional markets to continue their astonishing growth: it can’t be just iPhones. Apple now seems to be planning to make electric cars, alongside Apple Pay, the coming Apple Watch wearable, the Beats streaming music service, and the long-rumored TV initiative. What will they disrupt next?