Voices in AI – Episode 76: A Conversation with Rudy Rucker


About this Episode

Episode 76 of Voices in AI features host Byron Reese and Rudy Rucker discussing the future of AGI and the metaphysics involved in AGI, and delving into whether the future will be for humanity’s good or ill. Rudy Rucker is a mathematician, a computer scientist, and a writer of fiction and nonfiction, with awards for the first two books in his Ware Tetralogy series.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he’s probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick awards. Welcome to the show, Rudy.
Rudy Rucker: It’s nice to be here Byron. This seems like a very interesting series you have and I’m glad to hold forth on my thoughts about AI.
Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?
Well a good working definition has always been the Turing test. If you have a device or program that can convince you that it’s a person, then that’s pretty close to being intelligent.
So it has to master conversation? It can do everything else, it can paint the Mona Lisa, it could do a million other things, but if it can’t converse, it’s not AI?
No, those other things are also a big part of it. You’d want it to be able to write a novel, ideally, or to develop scientific theories—to do the kinds of things that we do, in an interesting way.
Well, let me try a different tack, what do you think intelligence is?
I think intelligence is to have a sort of complex interplay with what’s happening around you. You don’t want the old cliché of the robotic voice, or the screen with capital letters on it, not even able to use contractions: “do not help me.” You want something that’s flexible and playful in its intelligence. I mean, even in movies, when you look at the actors, you often get a sense that this person is deeply unintelligent or this person has an interesting mind. It’s a richness of behavior, a sort of complexity that engages your imagination.
And do you think it’s artificial? Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn’t actually have any, there’s no one actually home?
Right, well I think the word artificial is misleading. You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram’s points has been that any natural process can embody universal computation. Once you have universal computation, it seems like in principle you might be able to get intelligent behavior emerging even if it’s not programmed. So then it’s not clear that there’s some bright line that separates human intelligence from the rest of intelligence. I think when we say “artificial intelligence,” what we’re getting at is the idea that it would be something that we could bring into being, either by designing it or, probably more likely, by evolving it in a laboratory setting.
So, on the Stephen Wolfram thread, his view is that everything’s computation, and that you can’t really say there’s much difference between a human brain and a hurricane, because what’s going on in there is essentially a giant clockwork running its program. It’s all really computational equivalence, it’s all kind of the same in the end. Do you subscribe to that?
Yeah I’m a convert. I wouldn’t use the word ‘clockwork’ that you use because that already slips in an assumption that a computation is in some way clunky and with gears and teeth, because we can have things—
But it’s deterministic, isn’t it?
It’s deterministic, yes, so I guess in that sense it’s like clockwork.
So Stephen believes, and you hate to paraphrase something as big as his view on science, but he believes that everything is, not a clockwork, I won’t use that word, but everything is deterministic. But even the most deterministic things, when you iterate them, become unpredictable. And they’re not unpredictable inherently, from a universal standpoint; they’re unpredictable given how finite our minds are.
They’re in practice unpredictable?
So, a lot of natural processes are like this. When you take Physics I, you say: oh, if I fire an artillery shot, I can predict where it’s going to land, because it’s going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds. And then when you get into reality, well, shells don’t actually travel on perfect parabolas; they follow an odd-shaped curve due to air friction, which isn’t linear, it depends how fast they’re going. And then you slip into saying, “Well, I really would have to simulate this.”
And then when you get into saying you have to predict something by simulating the process, then the event itself is simulating itself already, and in practice, the simulation is not going to run appreciably faster than just waiting for the event to unfold, and that’s the catch. We can take a natural process and it’s computational in the sense that it’s deterministic, so you think well, cool, I’ll just find out the rule it’s using and then I’ll use some math tricks and I’ll predict what it’s going to do.
For most processes, it turns out there aren’t any quick shortcuts. That was actually worked out by Alan Turing way back, when he proved that you can’t effectively get extreme speed-ups of universal processes. So then we’re stuck with saying: maybe it’s deterministic, but we can’t predict it. And going slightly off on a side thread here, this question of free will always comes up, because we say, “we’re not like deterministic processes, because nobody can predict what we do.” And the thing is, if you get a really good AI program that’s running at its top level, then you’re not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability’s enough.
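The no-shortcuts idea Rucker is describing, what Wolfram calls computational irreducibility, can be sketched with a toy example. (This is an editorial illustration, not something discussed in the episode.) Wolfram's favorite specimen is the Rule 30 cellular automaton: its center column has no known closed-form formula, so the only known way to learn its value at step n is to actually run all n steps, just as Rucker says of simulating an artillery shell.

```python
# Rule 30 cellular automaton: a deterministic rule with no known
# predictive shortcut -- to know step n, you must simulate all n steps.

def rule30_step(cells):
    """Apply Rule 30 to one row, padding both edges with two zero cells
    so the row grows by two each step."""
    padded = [0, 0] + cells + [0, 0]
    new = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        # Rule 30: new cell = left XOR (center OR right)
        new.append(left ^ (center | right))
    return new

def simulate(steps):
    """Run Rule 30 from a single live cell; return the center column."""
    row = [1]
    center = [1]
    for _ in range(steps):
        row = rule30_step(row)
        center.append(row[len(row) // 2])
    return center

print(simulate(8))  # → [1, 1, 0, 1, 1, 1, 0, 0, 1]
```

Despite the rule fitting in one line of code, the center column looks statistically random, and no one has found a way to compute its nth entry appreciably faster than running the automaton itself.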
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 75: A Conversation with Kevin Kelly


About this Episode

Episode 75 of Voices in AI features host Byron Reese and Kevin Kelly discussing the brain, the mind, what it takes to make AI, and Kevin’s thoughts on its inevitability. Kevin has written books such as ‘New Rules for the New Economy’, ‘What Technology Wants’, and ‘The Inevitable’. Kevin also co-founded Wired, a print and internet magazine of tech and culture.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I am so excited that we have as our guest Kevin Kelly. You know, when I was writing the biography for Kevin, I didn’t even know where to start or where to end. He’s perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things in an amazing career. He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently The Inevitable, where he talks about the immediate future. I’m super excited to have him on the show. Welcome, Kevin.
Kevin Kelly: It’s a real delight to be here, thanks for inviting me.
So what is inevitable?
There’s a hard version and a soft version, and I kind of adhere to the soft version. The hard version is kind of a total deterministic world in which if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, still leaving the particulars, the specifics, the species to be completely, inherently, unpredictable and stochastic and random.
So that would say that, on any planet that has water and life, you’ll find fish; and if you rewound the tape of life, you’d probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology. Any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. And the internet’s inevitable, but Google’s not. AI’s inevitable, but the particular variety or character, the specific species of AI, is not. That’s what I mean by inevitable—that there are these biases, built in by the very nature of chemistry and physics, that will bend things in certain directions.
And what are some examples of those that you discuss in your book?
So, technology is basically an extension of the same forces that drive life; a kind of accelerated evolution is what technology is. So if you ask what the larger forces in evolution are, we have this movement towards complexity. We have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex.
The idea that there’s any kind of simplification going on in technology is completely erroneous; there isn’t. It’s not that the iPhone is any simpler. There’s a simple interface. It’s like an egg: a very simple interface, but inside it’s very complex. The inside of an iPhone continues to get more and more complicated, so there is a drive that, all things being equal, technology will become more and more complex, and more and more specialized.
So, the history of technology in photography was: there was one kind of camera. Then there was a special kind of camera for high speed; maybe another kind that could shoot underwater; maybe another that could do infrared; and then eventually we would make a high-speed, underwater, infrared camera. So all these things become more and more specialized, and that’s also going to be true of AI: we will have more and more specialized varieties of AI.
So let’s talk a little bit about [AI]. Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence? And in what sense is AI artificial?
Yes. So the big hairy challenge for that question is that we humans, collectively as a species at this point in time, have no idea what intelligence really is. We think we know it when we see it, but we don’t really, and as we try to make artificial, synthetic versions of it, we are, again and again, coming up to the realization that we don’t really know how it works and what it is. Our best guess right now is that there are many different subtypes of cognition that interact with each other and are codependent on each other, and collectively form the total output of our minds, and of course other animal minds. So I think the best way to think of this is that we have a ‘zoo’ of different types of cognition, different types of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world and so…
That collection is still being mapped, and we know that there’s something like symbolic reasoning. We know that there’s kind of deductive logic, that there’s something about spatial navigation as a kind of intelligence. We know that there’s mathematical type thinking; we know that there’s emotional intelligence; we know that there’s perception; and so far, all the AI that we have been ‘wowed’ by in the last 5 years is really all a synthesis of only one of those types of cognition, which is perception.
So all the deep learning neural net stuff that we’re doing is really just varieties of perception, of perceiving patterns, whether they’re audio patterns or image patterns; that’s really as far as we’ve gotten. But there are all these other types, and in fact we don’t even know what all the varieties of types [are]. We don’t know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope that we need to look into our minds to figure out how they work. So it’s not just that we’re creating artificial minds; it’s the fact that that creation—that process—is the scope that we’re going to use to discover what our minds are made of.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Google on collaboration

Our customers often tell us that encouraging and enabling collaboration has dramatically improved their business. We decided to dig a little deeper by conducting some original cross-industry research that measures the power of workplace collaboration in concrete terms.

This is how Google introduces the findings of its recent survey of senior staff and C-suite executives at 258 North American companies across a wide range of business sectors and sizes. (PDF of full report.) The primary conclusion is presented up front:

… collaboration has a significant impact on business innovation, performance, culture and even the bottom line.

This is quite right and quite wrong. Collaboration is at once driven and the driver; it is both a cause and an effect. As is culture, come to that. Effectively, Google must grapple with two distinct appreciations of business among its customers and prospects.

Simply complex

If there’s one thing that differentiates organizations this century from the last, it’s that we may now acknowledge complexity and do something about it. We increasingly have the technologies to help navigate complexity. Choosing to do so offers competitive advantage for the time being; there will soon come a time when failing to do so renders an organization unresponsive, fragile and, consequently, bust. (Note that complexity and complication are different things.)
As we are in the midst of this transition, Google’s report walks a line to make sense to those for whom the penny is yet to drop. On the one hand it recognizes that (too) many business leaders still regard digital transformation as not much more than the digitalization of the pre-digital. Absent an understanding of complexity and systems thinking, deliberate strategy formulation and mechanistic organizational alignment remain unchallenged dogma within an organization’s four walls. This then is a world in which one might consider a strategic investment (in technology for example) a potential cure-all. Or ‘cure-lots’ at least. The report’s conclusion is, in this context, spot on.
And yet, on the other hand, the Google For Work team appreciates that work is collaboration. As Esko Kilpi puts it:

The basic unit of work is not an individual, but individuals in interaction.

Laszlo Bock, Google’s SVP People Operations, asserts:

If you give people freedom, they will amaze you. All you need to do is give them a little infrastructure and a lot of room.

Bock notes that because constant innovation is increasingly a group endeavor, people who succeed in the company “tend to be those with a lot of soft skills: leadership, humility, collaboration, adaptability, and loving to learn and re-learn.”
In groping, then, for a more emergent rather than deliberate understanding and approach, you might say:

Organisational objectives are best met not by the optimisation of the technical system and the adaptation of a social system to it, but by the joint optimisation of the technical and social aspects, thus exploiting the adaptability and innovativeness of people in attaining goals instead of over-determining technically the manner in which these goals should be attained.

And in fact Albert Cherns did say this, in 1976! The report’s conclusion may then be considered misleading when the context does not encompass what I like to refer to as the fabric.


… when asked to name the realistic measure that would have the biggest business impact on knowledge-sharing and collaboration, investment in relevant technology came out on top. It was named most frequently as the #1 measure for business impact, and also appeared in respondents’ lists of top three factors more than any other option.

Respondents to Google’s survey identified the technical as #1 for business impact, yet that might be because it’s relatively new and shiny and looks like it’s no more than a purchase order away. Adjusting behavioral norms and hierarchy, on the other hand, may be ranked as less important (#3, #4 and #5) because we’ve all seen how difficult transformation of these can be; indeed, the survey’s respondents identified the difficulty of changing working habits as the foremost challenge to creating a more collaborative culture.
Organization requires an organizational fabric for it to act coherently with due speed. It is the sociotechnical substrate that supports and nurtures a healthy living system. In transitioning from deliberate to the more emergent, from the Newtonian to the complex, from the 20th to 21st Century, we must lean on one last heuristic to ready ourselves for competing in rude chaos – beat your competitors in getting the sociotechnical working for you. So not the social or the tech, and not the social and the tech as if they’re separate components that just need to be introduced to each other, but the sociotechnical as one – the qualities that combine holistically to deliver such easy-to-say-hard-to-achieve aspirations as a great culture and productive collaboration.
The ingredients in such combination rarely adhere to some qualitative ranking.

Process is dead, long live process

The report identifies four categories of organizational culture in decreasing order of technological maturity, for want of a better turn of phrase, labelled pioneers (18%), believers (34%), agnostics (27%) and traditionalists (21%).
Google collaboration report 2015
I was attracted by the report authors’ observation that:

‘Believers’ … put less emphasis on systems and processes, which could suggest that they consider these to be regressive and inhibitive.

I have had conversations of late that support this interpretation. It appears that an aspiration for adaptability may tempt a disregard for process, given its historical association with repeatability and efficiency at the expense of responsiveness, and yet such a conclusion increases business risk and injures adaptability. Consider that adaptability works on two levels: agility (adaptable strategy) and flexibility (adaptable tactical execution). Maintaining relevant strategy – identifying where to play and how to win – is a disciplined process, and equally the corresponding tactics and execution require constant improvement (kaizen in lean speak).
In short, the power of process is no longer in the fixed process but in fixing attention on its derivative so that change becomes routine. And this then neatly returns to my main thrust here. Change of any one or two things is unlikely to effect the desired improvement. It’s complex. An organization doesn’t so much exist as transmute, and many dials need to be twiddled and many things need to be sensed constantly by everyone involved to ensure that transmutation lives up to all stakeholders’ expectations.

Simple, but not for much longer

And as complexity has it, this works at many levels. This year’s Global Drucker Forum focuses on this topic at the organizational and societal levels and I’ve had the opportunity to contribute a pre-event post: The human web and sustainability. This is the mother of all “management” challenges, so one can appreciate why Google’s report defers to the simple.
To paraphrase its conclusion then and reading between the lines – if you haven’t already, work out how you want to work and procure some modern collaboration technology to support you working that way. And then it gets interesting.

Are you ready to take on tomorrow’s IT? Think again.

Let’s get one thing out of the way right up front. The business of IT is very complex and getting more complex every day. It does not matter whether you are the buyer or the seller; the industry is evolving into a very different and complex beast.

Evolution of the CIO

How we, as CIOs, have led IT organizations is very different today from how it was done just 5-10 years ago. In many ways, it is easier to forget what we learned about leading IT and start over. Of course, the leadership aspects are perennial and will always endure and grow. I wrote about the evolutionary changes for the CIO in more detail in 5 Tectonic shifts facing today’s CIO. In essence, tomorrow’s CIO is a business leader who also has responsibility for IT.

Consider for a moment that the CIO and IT organization sits on a spectrum.

CIO IT Org Traits

Where the CIO and IT sit along the spectrum impacts perspective, delivery of solutions, target, and responsibilities along with a host of other attributes for both the organization and providers alike.

The changing vendor landscape

Add it all together and today is probably the most confusing time for providers of IT products and services. Traditionally, providers have asked customers what they need and then delivered it. Today, many customers are not really sure what they need or the direction they should take. And the providers are not well equipped to lead the industry in their particular sector let alone tell a good story of how their solution fits into the bigger picture.

As an example, one provider would tell customers that its cloud solution ‘transforms’ their business (the company IT is part of). This is completely wrong and over-extends beyond anything the solution is capable of. As such, it positions the provider to overcommit and under-deliver, and for the wise CIO, it creates a serious credibility problem for the provider. It would be rare for any vendor to truly ‘transform’ a company with a single technology, let alone one that is far removed from the core business functions. A better, more accurate statement would be: We help enable transformation.

Be careful of Buzzword Bingo. Bingo!

In another recent IT conversation, the perception was that all Infrastructure as a Service (IaaS) solutions are ubiquitous and interchangeable. While we hope to get there some day, the reality is far from standardized. Solutions from providers like Amazon (AWS), Google (GCE) and Microsoft (Azure) are different in their own right, and also very different from solutions provided by IBM (SoftLayer), CenturyLink (SAVVIS) and HP (Helion). Do they all provide IaaS services? Yes. Are they similar, interchangeable and addressing the same need? No. For the record: Cloud is not Cloud, is not Cloud.

The terms IaaS and Cloud bring market cachet and attention. And they should! Cloud presents the single largest opportunity for IT organizations today. However, it is important to understand the actual opportunity in light of your organization, strategy, capability, needs and the market options available. The options alone are quite a job to stay on top of.

Keeping track of the playing field

The list of providers above is a very small sample of the myriad providers spread across the landscape. Expecting an IT organization to keep track of the differences between providers and map its needs to the appropriate solutions takes a bit of work. Add that the landscape is more like the shifting sands of a desert and you get the picture.

The mapping of services, providers and a customer’s needs, along with the fact that those very needs are in a state of flux, creates a very complex situation for CIOs, IT organizations and providers alike.

Is it time to give up? No!

Today’s CIO is looking to up-level the conversation. They are less interested in a technology discussion and more interested in one about business. Specifically, by ‘business’ conversation, today’s CIO means talking about the things that interest the board of directors, the CEO and the rest of the executive team. Trying to discuss the latest technology bell or whistle with a CEO will go nowhere. CEOs are interested in ways to tap new revenue streams, deepen customer engagement and increase market share.

For the CIO, focus on the strategic conversations. Focus on the business opportunities and look for places where technology can help catapult the company forward. Remember that the IT organization no longer has to do everything itself. Divest those functions that are not differentiating. As an example, consider my recent post: CIOs are getting out of the data center business. If you are not willing (or able) to compete at the level at which Google runs its data centers, it is time to take that last post very seriously. Getting rid of the data center is not the end state. It is only the start.

Countering the traps of complexity and growth by creativity and context: the Netflix model

Reed Hastings, the CEO of Netflix, has a presentation on SlideShare called Netflix Culture: Freedom and Responsibility. I am not going to recap it completely — although it does make for interesting reflection — but I do want to pull a few critical concepts from it, because I think that Netflix has adopted a great many characteristics of the new fast-and-loose form factor of work I have been writing about the past few years.

The presentation starts with a discussion of the sort of principles that form a cultural foundation for the business — perhaps I will revisit them in a later post — but I will extract just the concept of being responsible in a context that provides a great deal of autonomy.

From the presentation:

Responsible people thrive on freedom, and are worthy of freedom.

Our model is to increase employee freedom as we grow, rather than limit it, to continue to attract and nourish innovative people, so we have a better chance of sustained success.

Hastings makes the case that most companies curtail freedom as they get bigger, because with growth comes complexity. And one counter to complexity — one that worked in the industrial, slow-and-tight business context of the 20th century — was to codify processes to stop the chaos that comes with complexity. But the increase in process controls and the reduction of freedom drive the best-performing employees out.



Process-driven companies do well when efficiency is the fulcrum for competitiveness. But in times of fast change and a market full of innovators, that execution approach is all bad.

This turns out to be one of three bad options:

  1. Become process-bound.
  2. Stay creative by staying small, but therefore limit your impact and reach.
  3. Avoid rules, and suffer chaos.

But he says there is a fourth way:

Avoid chaos as you grow with ever more high performance people — not with rules:

  • Then you can continue to mostly run informally with self-discipline, and avoid chaos
  • The ‘run informally’ part is what enables and attracts creativity


I love the concept introduced here: increasing talent density faster than complexity is riding the wave and staying ahead of chaos. Both trend lines can be managed: at the top, finding and producing high performance, creative, responsible people, and below, intentionally taking steps to retard the rise of complexity.

In this latter case complexity can be held back: a company can opt to focus on a few big products instead of many small ones, or avoid ‘efficiencies’ that lead to rigidity (process-bound, again). Or, as I discussed in several other posts, the firm can intentionally accept lower cross-communication and collaboration, which otherwise require deep consensus building. Instead, a fast-and-loose independence is viewed as central to the freedoms demanded by high performance staff.

So Hastings believes that if you have the right people, you can remain loosely coupled and stay ahead of the chaos arising from complexities. Instead of control (processes again) directing people what to do, you need to set context:




So it seems that Netflix has turned the corner into the fast-and-loose world of work, adopting a cooperative ethos in which the work of leadership is setting context, and working to remove the obstacles — like reducing complexity — so that the great people who thrive in a laissez-faire cultural milieu can accomplish great things. At Netflix a mistake made by a project team or an individual is more likely to be bad context setting by management than anything else. And — explicit in the principles that shape the company’s credo — is the notion that it is better to respond quickly to glitches arising from a mismatch between context and action than to create a culture in which people are not experimenting.

Bottom Line

There are some aspects of Netflix culture that I accept only grudgingly — like the company’s apparent unwillingness to work explicitly on career planning — but given the fact that they have squeezed out all the chicken shit left over from the 20th century, that is a lot easier to take. For example, Netflix has no vacation policy, meaning that people can take as much vacation as they need, so long as work is getting done and projects are moving along. And their expense policy is five words:

Act in Netflix’s best interest.

I intend to make a visit to Netflix, and hopefully talk to Hastings and others to get a better sense of how this all works in practice.

Beyond social: the rise of the emergent business

The increased speed, complexity, and uncertainty of the business environment today means that businesses are operating in a world that is fundamentally different from that of only ten years ago.