Moore’s Law

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
In this excerpt from The Fourth Age, Byron Reese explores the concept of Moore’s Law and how more space, more speed, and more processor power impact advancements in technology.


The scientific method supercharged technological development so much that it revealed an innate but mysterious property of all sorts of technology, a consistent and repeated doubling of its capabilities over fixed periods.
Our discovery of this profound and mysterious property of technology began modestly just half a century ago when Gordon Moore, one of the founders of Intel, noticed something interesting: the number of transistors in an integrated circuit was doubling about every two years. He noticed that this phenomenon had been going on for a while, and he speculated that the trend could continue for another decade. This observation became known as Moore’s law.
Doubling the number of transistors in an integrated circuit doubles the power of the computer. If that were the entire story, it would be of minor interest. But along came Ray Kurzweil, who made an amazing observation: computers have been doubling in power since well before transistors were even invented.
Kurzweil found that if you graph the processing power of computers since 1890, when simple electromechanical devices were used to help with the US census, computers doubled in processing power every other year, regardless of the underlying technology. Think about that: the underlying technology of the computer went from being mechanical, to using relays, then to vacuum tubes, then to transistors, and then to integrated circuits, and all along the way, Moore’s law never hiccupped. How could this be?
Well, the short answer is that no one knows. If you figure it out, tell me and we can split the Nobel money. How could the abstraction, the speed of the device, obey such a rigid law? Not only does no one really know, there aren’t even many ideas. But it appears to be some kind of law of the universe: it takes a certain amount of technology to reach a certain point, and once you have that technology, you can use it to double your capability yet again.
Moore’s law continues to this day, well past the ten years Moore himself guessed it would hold up. And although every few years you see headlines like “Is this the End of Moore’s Law?” as is the case with almost all headlines phrased as a question, the answer is no. There are presently all manner of candidates that promise to keep the law going, from quantum computers to single-atom transistors to entirely new materials.
But—and here is the really interesting part—almost all types of technology, not just computers, seem to obey a Moore’s law of their own. The power of a given technology may not double every two years, but it doubles in something every n years. Anyone who has bought laptops or digital cameras or computer monitors over time has experienced this firsthand. Hard drives can hold more, megapixels keep rising, and screen resolutions increase.
There are even those who maintain that multicellular life behaves this way, doubling in complexity every 376 million years. This intriguing thesis, offered by the geneticists Richard Gordon and Alexei Sharov, posits that multicellular life is about ten billion years old, predating Earth itself, implying . . . well, implying all kinds of things, such as that human life must have originated somewhere else in the galaxy, and through one method or another, made its way here.
The fact that technology doubles is a big deal, bigger than one might first suspect. Humans famously underestimate the significance of constant doubling because nothing in our daily lives behaves that way. You don’t wake up with two kids, then four kids, then eight, then sixteen. Our bank balances don’t go from $100 to $200 to $400 to $800, day after day.
To understand just how quickly something that repeatedly doubles gets really big, consider the story of the invention of chess. About a thousand years ago, a mathematician in what is today India is said to have brought his creation to the ruler and showed him how the game was played. The ruler, quite impressed, asked the mathematician what he wanted for a reward. The mathematician responded that he was a humble man and his needs were few. He simply asked that a single grain of rice be placed on the first square of the chessboard. Then two on the second, four on the third, and so on, doubling with each square. All he wanted was the rice that would be on the sixty-fourth square.
So how much rice do you think this is? Given my setup to the story, you know it will be a big number. But just imagine what that much rice would look like. Would it fill a silo? A warehouse? It is actually more rice than has been cultivated in the entire history of humanity. By the way, when the ruler figured it out, he had the mathematician put to death, so there is another life lesson to be learned here.
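For readers who want to check the arithmetic, here is a minimal sketch in Python; the only assumption beyond the story itself is an average grain weight of about 25 milligrams, used to put the total into familiar units.

```python
# Rice-on-the-chessboard arithmetic: one grain on square 1, doubling on each square.
GRAIN_MASS_KG = 25e-6  # assumed average mass of one grain of rice (~25 mg)

grains_on_last_square = 2 ** 63   # square 64 alone
total_grains = 2 ** 64 - 1        # all 64 squares combined

print(f"Grains on square 64: {grains_on_last_square:.3e}")
print(f"Grains on the whole board: {total_grains:.3e}")
print(f"Approximate total mass: {total_grains * GRAIN_MASS_KG / 1000:,.0f} tonnes")
# Roughly 9.2e18 grains on the last square and, at the assumed grain weight,
# hundreds of times the world's current annual rice harvest in total.
```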
Think also of a domino rally, in which you have a row of dominos lined up and you push one and it pushes the next one, and so on. Each domino can push over a domino 50 percent taller than itself. So if you set up thirty-two dominos, each 50 percent bigger than the one before it, that last domino could knock over the Empire State Building. And that is with a mere 50 percent growth rate, not doubling.
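The domino arithmetic can be sketched the same way. The 50 percent growth per domino comes from the story; the 5-millimetre starting height and the 443-metre figure for the Empire State Building are assumptions added here purely for illustration.

```python
# Domino-rally arithmetic: each domino is 50 percent taller than the one before it.
FIRST_DOMINO_M = 0.005   # assumed starting height: a 5 mm domino
GROWTH = 1.5             # 50 percent growth per domino (from the story)
EMPIRE_STATE_M = 443     # commonly cited height of the Empire State Building, to the tip

heights = [FIRST_DOMINO_M * GROWTH ** n for n in range(32)]
print(f"Domino 32 would be about {heights[-1]:,.0f} m tall")

first_taller = next(i + 1 for i, h in enumerate(heights) if h > EMPIRE_STATE_M)
print(f"Domino {first_taller} already tops the Empire State Building")
# With a 5 mm start, the thirty-second domino works out to well over a kilometre tall.
```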
If you think we have seen some pretty amazing technological advances in our day, then fasten your seat belt. With computers, we are on the sixtieth or sixty-first square of our chessboard, metaphorically, where doubling is a pretty big deal. If you don’t have the computing power to do something, just wait two years and you will have twice as much. Sure, it took us thousands of years to build the computer on your desk, but in just two more years, we will have built one twice as powerful. Two years after that, twice as powerful again. So while it took us almost five thousand years to get from the abacus to the iPad, twenty-five years from now, we will have something as far ahead of the iPad as it is ahead of the abacus. We can’t even imagine or wrap our heads around what that thing will be.
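Under the same doubling-every-two-years assumption, the twenty-five-year projection is easy to check. This back-of-the-envelope sketch only shows how the factor compounds, not what the resulting device will be.

```python
# Compounding under the assumed doubling-every-two-years rate.
years = 25
doublings = years / 2                  # about 12.5 doublings in 25 years
improvement = 2 ** doublings           # relative computing power versus today
print(f"{doublings:.1f} doublings -> roughly {improvement:,.0f}x today's power")
# A factor of a few thousand; on a chessboard already past square 60,
# even "a few thousand" more is an enormous absolute jump.
```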
The combination of the scientific method and Moore’s mysterious law is what has given us the explosion of new technology that is part and parcel of our daily life. It gave us robots, nanotech, the gene editing technology CRISPR-Cas9, space travel, atomic power, and a hundred other wonders. In fact, technology advances at such a rate that we are, for the most part, numb to the wonder of it all. New technology comes with such rapidity that it has become almost mundane. We carry supercomputers in our pockets that let us communicate instantly with almost anyone on the planet. These devices are so ubiquitous that even children have them and they are so inexpensive as to be free with a two-year cellular contract. We have powers that used to be attributed to the gods, such as seeing events as they happen from a great distance. We can change the temperature of the room in which we are sitting with the smallest movement of our fingers. We can fly through the air six miles above the Earth at the speed of sound, so safely that statistically one would have to fly every day for over 100,000 years to get in an accident. And yet somehow we can manage to feel inconvenienced when they run out of the turkey wrap and we have to eat the Cobb salad.


To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Voices in AI – Episode 42: A Conversation with Jem Davies

In this episode, Byron and Jem discuss machine learning, privacy, ethics, and Moore’s law.
[podcast_player name="Episode 42: A Conversation with Jem Davies" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-04-12-(00-50-45)-jem-davies.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card-3.jpg"]
Byron Reese: Hello, this is “Voices in AI,” brought to you by GigaOm, I am Byron Reese. Today my guest is Jem Davies, he is a VP and a Fellow and the GM of the Machine Learning Group at ARM. ARM, as you know, makes processors. They have, in fact, 90–95% of the share in mobile devices. I think they’ve shipped something like 125 billion processors. They’re shipping 20 billion a year, which means you, listener, probably bought three or four or five of them this year alone. With that in mind, we’re very proud to have Jem here. Welcome to the show, Jem.
Jem Davies: Thank you very much indeed. Thanks for asking me on.
Tell me, if I did buy four or five of your processors, where are they all? Mobile devices I mentioned. Are they in my cell phone, my clock radio? Are they in my smart light bulb? Where in the world have you secreted them?
It’s simplest, honestly, to answer that question with where they are not. Because of our position in the business, we sell the design of our processor to a chip manufacturer who makes the silicon chips who then sell those on to a device manufacturer who makes the device. We are a long way away from the public. We do absolutely have a brand, but it’s not a customer brand that people are aware of. We’re a business-to-business style of business, so we’re in all sorts of things that people have no idea about, and that’s kind of okay by us. We don’t try and get too famous or too above ourselves. We like the other people taking quite a lot of the limelight. So, yeah, all of the devices you mentioned. We’ll actually probably even be inside your laptop, just not the big processor that you know and love. We might be in one of the little processors perhaps controlling, oh, I don’t know, the flash memory or the Bluetooth or the modem if it’s an LTE-connected device. But, yes, your smartwatch, your car, your disc drives, your home wireless router, I could go on until you got seriously bored.
Tell me this. I understand that some of the advances we’ve made in artificial intelligence recently are because we’ve gotten better at chip design, we do parallelism better—that’s why GPUs do so well is because they can do parallel processing and so forth—but most people when they think of machine learning, they’re thinking about software that does all these things. They think about neural nets and back propagation and clustering and classification problems and regression and all of that. Tell me why ARM has a Machine Learning Group or is that wrong that machine learning is not just primarily a software thing once you have kind of a basic hardware in place?
Oh, there are about three questions there. See if I count to three. The first is the ways in which you can do machine learning are many and varied. The ways even that these things are implemented are quite disparate. Some people, for example, believe in neuromorphic hardware designs, spiking networks, that sort of thing. The predominant use of neural nets is as software, as you say. They are software emulations of a neural network which then runs on some sort of compute device.
I’m going to take issue with your first question, which was it’s all about Moore’s Law. Actually, two things have happened recently which have changed the uptake. The first is, yeah, there is lots and lots of compute power about, particularly in devices, but there is also ready access to vast quantities of data contained in the environments in which people do the training. And perhaps here I should start by saying that we view training and inference as computationally completely separate problems. So, what we do at ARM is we do computing. What does computing get done on? It gets done on processors, so we design processors, and we try to understand, to analyze—performance analyze, measure bottlenecks, etc.—the way in which a particular compute workload runs on a processor.
For example, originally we didn’t make GPUs—graphics processors—but along comes a time in which everybody needs a certain amount of graphics performance. And whilst it is a digital world, it is all just ones and zeroes, you would never do graphics on a CPU. It doesn’t make sense because of the performance and the efficiency requirements. So we are all the time analyzing these workloads and saying, “Well, what can we do to make our general-purpose CPUs better at executing these workloads, or what is the point at which we feel that the benefits of producing a domain-specific processor outweigh the disadvantages?”
So with graphics it’s obvious. Along comes programmable graphics, and, so, right, you absolutely need a special-purpose processor to do this. Video was an interesting case in point, digital video. MPEG-2 with VGA resolution, not very high frame rate, actually you can do that on a CPU, particularly decode. Along comes the newer standards, much higher resolution, much higher frame rate, and suddenly you go, oh, there is no way we can do this on a CPU. It’s just too hard, it takes too much power, produces too much heat. So we produced a special-purpose video processor which does encode and decode the modern standards.
So, for us, in that regard, machine learning neural network processors are in a sense just the latest workload. Now, when I say “just” you could hear me wave my hands around and put inverted commas around it, because we believe that it is a genuinely once-in-a-generation inflection in computing. The reason for that is practically every time somebody takes a classical method and says, “Oh, I wonder what happens if I try doing this using some sort of machine learning algorithm instead,” they get better results. And, so, if you think of a sort of pie chart and say, well, the total number of compute cycles spent is 100%, what slice of that pie is spent executing machine learning, then we see the slice of the pie that gets spent executing machine learning workload, particularly inference, to be growing and growing and growing, and we think it will be a very significant fraction in a few years’ time.
And one of the things, as I said, about those 125 billion chips is that all of these devices are at the edge. Yes, there are people doing machine learning today in data centers, and typically training is done next to these vast quantities of training data which tends to exist in hyper-scale data centers, but the inference of the machine learning is most useful when done right next to the test data. And if, for example, you’re trying to recognize things in video, computer vision, something like that, the chances are that camera is out there in the wild. It’s not actually directly connected to your hyper-scale data center.
And so we see an absolute explosion of machine learning inference moving to the edge, and there are very sound reasons for that. Yes, it’s next to the data that you’re trying to test, but it’s the laws of economics, it’s the laws of physics and the laws of the land. Physics says there isn’t enough bandwidth in the world to transmit your video image up to Seattle and have it interpreted and then send the results back. You would physically break the internet. There just isn’t enough bandwidth. And there are cost implications with that as well, as well as the power costs. The cost implications are huge. Google themselves said if everybody used their Android Voice Assistant for three minutes per day then they would have to double the number of data centers they had. That’s huge. That is a lot of money. And we’re used to user experience latency issues, which obviously would come into play, but at the point at which you’re saying, well, actually, rather than identifying the picture of the great spotted woodpecker on my cell phone, I’m actually trying to identify a pedestrian in front of a fast-moving car, that latency issue suddenly becomes a critical reliability issue, and you really don’t want to be sending it remotely.
And then, finally, privacy and security, the laws of the land—people are becoming increasingly reluctant to have their personal data spread all over the internet and rightfully so. So if I can have my personal data interpreted on my device, and if I really care I just have to smash my device to smithereens with a hammer, and I know full well that that data is then safe, then I feel much more comfortable, I feel much more confident about committing my data to that service and getting the benefit of it, whatever that service is. I can’t now remember what your three questions were, but I think I’ve addressed them.
Absolutely. So machine learning, I guess, at its core is let’s take a bunch of this data—which, as you said, our ability to collect has gone up arguably faster than Moore’s Law—let’s take a bunch of data about the past, let’s study it, and let’s project that into the future. What do you think, practically speaking, are the limits of that? At the far edge, eventually, in theory, could you point a generalized learner at the internet and then it could write Harry Potter? Where does it break down? We all know kind of the use cases where it excels, but where do you think it’s unclear how you would apply that methodology to a problem set?
Whilst I said that almost every time anybody applies a machine learning algorithm to something they get better results, I think—I’ll use “creative” for want of a better phrase—where the creative arts are concerned, I think there is the hardest fit there. Personally, I have great doubts about whether we have indeed created something intelligent or whether we are, in fact, creating very useful automatons. There have been occasions where they have created music and they have created books, but it tends to be rather pastiche creations or very much along a genre. Personally, I have not yet seen any evidence to suggest that we are in danger of a truly sentient, intelligent creation producing something new.
It’s interesting that you would say we are in danger of that, not we are excited about that.
Oh, sorry. No, that is just my vocabulary.
Fair enough.
I’m not in general very afraid of these things.
Fair enough. So I would tend to agree with you about creativity. And I agree, you can study Bach and make something that sounds passably like it, you can auto-generate sports stories and all of that, and I don’t think any of it makes the grade as being “creative.” And that’s, of course, a challenge, because not only does intelligence not have a consensus definition, but creativity even less so.
If people had to hold out one example of a machine being creative right now, given today, 2018, they might say Game 2 of the Go match between AlphaGo and Lee Sedol, Move 37, where, in the middle of the game, the computer makes Move 37, and all the live commentators are like, “What?” And the DeepMind team is scrambling to figure out, like, what was this move? And they look, and AlphaGo said the chances a human player would make that move are about 1 in 10,000. So it was clearly not a move that a human would have made. And, then, as they’ve taken that system and trained it to play itself in games over and over and it plays things like chess, its moves are described as alien chess, because they’re not trained on human moves. Without necessarily knowing a lot of the particulars, would you say that is nascent creativity, or is that something that simply looks like creativity, emulating creativity without really being creativity, or is there a difference between those two ideas?
Very personally, I don’t call that creativity. I just call that exploring a wider search space. We are creatures very much of habit, of cultural norms. There are just things we don’t do and don’t think about doing, and once you produce a machine to do something it’s not bound by any of those. It will learn certainly from your training data, and it will say, “Okay, these are things that I know to work,” but, also, it has that big search space to execute in, to try out. Effectively most machine learning programs when used in the wild for real like that are the results of lots and lots and lots of simulation and experimentation having gone on before, and it will have observed, for example, that playing what we would call “alien” moves are actually a very good strategy when playing against humans.
Fair enough.
And they tend to lose.
Right. So, am I hearing you correctly that you are saying that the narrow AI we have now, which we still have lots to go on and it can do all kinds of amazing things, may be something fundamentally different than general intelligence, that it isn’t an evolutionary path to a general intelligence, but that the general intelligence only shares that one word but is a completely different technology? Am I hearing that correctly or not?
Yes, I think you’re largely hearing it correctly. For someone who makes a living out of predicting technological strategy, I’m actually rather conservative as to how far out I make predictions, and people who talk knowledgeably about what will happen in 10-20 years’ time, I think on the whole, are either braver, or cleverer at making it up than I am, because I think we can see a path from where we are today to really quite amazing things, but I wouldn’t classify them as true intelligence or truly creative.
So, one concern—as you’re building all these chips and they’re going in all these devices—we’ve had this kind of duel between the black hats and white hats in the computer world making viruses and attacking things, and then they find a vulnerability, and then it’s patched, and then they find another one, and then that’s countered and so forth. There’s a broad concern that the kind of IoT devices that we’re embedding, for instance, your chips in, aren’t upgradeable, and they’re manufactured in great numbers, and so when a vulnerability is found there is no counter to it. On your worry-o-meter how high does that rate, and is that an intractable problem, and how might it be solved in the future?
Security in end devices is something that ARM has taken very seriously, and we published a security manifesto last year, and being able to upgrade things and download the latest security fixes and so on was a part of it. So we do care about this. It’s a problem that exists whether or not we put machine learning intelligence, machine learning capabilities into those end devices. The biggest problem probably for most people’s homes at the moment is their broadband router, and that’s got no ML capability in it. It’s just routing packets. So it’s a problem we need to address, come what may.
The addition of machine learning capabilities in these and other devices actually, I think, gives us the possibility for considerably more safety and security, because a machine learning program can be trained to spot anomalous activity. So just as if I write a check for £50,000 my bank is very, very likely to ring me up—sorry, for the younger audiences who don’t know what a check is, we’ll explain that later—but it would be anomalous, and they would say, “Okay, that’s not on, that’s unusual.” Similarly, we can do that in real time using machine learning monitoring systems to analyze network data and say, “Well, actually, that looks wrong. I don’t believe he meant to do that.” So, in general, I’m an optimist that the machine learning revolution will help us more than hinder us here.
That raises another point. That same system that said that check was not good is probably looking at a bunch of variables: your history of all of the checks you’ve written in the past, who it was made payable to, where it was, what time of day. There are all these different data inputs, and it makes some conclusion that yea or nay, flag this, don’t flag it. When that same methodology is applied to an auto loan or a home loan or so forth and it says, “Give them the loan, don’t give them the loan,” European law says that the person is entitled to an explanation why it said that. Is that fair, and is that a hindrance to systems where you might look at it and say, well, we don’t know; it flagged it because it looks like other ones that were fraudulent, and beyond that we can’t offer a lot of insight? What are your thoughts on that?
I think this is an absolute minefield, and I’m not going to give you a very sensible answer on this. It is clear that a number of people implementing such systems will want to keep the decision-making process a secret, because that is actually their trade secret. That is their commercial secret sauce. And so actually opening these boxes up and saying, well, it decided to do this because of X, Y and Z, is something that they are not going to want to do.
Equally, with some machine learning systems that are based on learning rather than based on if-then-else rules-based systems, it’s going to be genuinely hard to answer that question. If somebody rings up and says, “Why did you do that?” It is going to be genuinely hard for that service provider, even if they wanted to, to answer that question.
Now, that to me, as a technologist, just answering what is and is not physically possible/hard. Me as a consumer, yes, I want to know. If somebody says, “Well, I think you’re a bad risk,” or “Actually, in life insurance terms I think you’re going to die tomorrow,” I really want to know the answers to those questions, and I think I’ve got a right to be informed about that sort of thing. So, I’m sorry, I’m deeply conflicted on that one.
As I think everyone is. That’s kind of the challenge. It’s interesting to see how it’s going to play out.
On a different note entirely, a lot of the debate around AI and machine learning is around automation and its effect on employment, and, roughly speaking, there are kind of three positions. There is the idea that it’s going to eliminate a bunch of “low-skilled jobs” and you’re going to have some level of unemployment that persists long-term because there just are more people than there are low-skilled jobs. Then there is another camp which says no, no, no, they’re going to be able to do everything, they’ll write better poetry, and they’ll paint better paintings, which it sounds like you’re not part of that camp. And then there is this third camp that says no, no, no, like any technology it fundamentally increases productivity, it empowers people, and people use it to drive higher wages, and it creates more jobs in the future. We saw it with steam and then the assembly line and even with the internet just 25 years ago. What is your thought? How do you think artificial intelligence and machine learning and automation are going to impact employment?
On a global scale, I tend towards your latter view, which is that actually it tends to be productive rather than restrictive. I think that on a local scale, however, the effects can be severe, and I’m of the view that the people it’s likely to affect are not necessarily the ones that people expect. For example, I think that we are going to have to come to terms with understanding, in more detail, the difference between a highly-skilled occupation and a highly-knowledged occupation. So, if we look at what machine learning can do with a smartphone and a camera and an internet connection in terms of skin cancer diagnosis, it arguably puts skin cancer diagnosticians out of a job, which is a bit surprising to most people, because they would regard them as very highly skilled, very highly educated. Typically, somebody in that situation would probably have ten years of postgraduate experience let alone all their education that got them to that point. We see cab drivers and truck drivers being at risk. And yet actually the man who digs a hole in the road and fixes a broken sewer pipe might well have a job, because actually that’s extremely hard to automate.
So I think people’s expectations of who wins and who loses in this procedure are going to be probably somewhat misguided, but I think, yeah, some jobs are clearly at great risk, and the macro-economy might well benefit from some macro-economic trends here, but, as one of your presidents said, the unemployment rate is either 0 percent or 100 percent, depending on your point of view. You’ve either got a job or you haven’t. And so I do think this does bring considerable risks of societal change, but then actually society has always changed, and we’ve gone through many a change that has had such effects. On the whole, I’m an optimist.
So in the U.S. at least, our unemployment rate has stayed between 5% and 10% for 250 years with the exception of the Depression. Britain’s is obviously not the exact same range, but a similar, relatively tight band, in spite of enormous technologies that have come along like steam power, electricity, even the internet and so forth.
I think both of us have probably exploited such big changes as they’ve been coming along.
Right. And real wages have clearly risen over that 250-year period as well, and we’ve seen, like you just said, jobs eliminated. I think the half-life of the group of jobs that everybody collectively has right now is probably 50 years. I think in any 50-year period about half of them are lost. It was farming jobs at one point, manufacturing jobs at one point and so forth. Do you have a sense that machine learning is more of the same or is something profoundly different?
I’m reluctant to say it’s something different. I think it’s one of the bigger ones, definitely, but actually steam engines were pretty big, coal was pretty big, the invention of the steam train. These were all pretty significant events, and so I’m reluctant to say that it’s necessarily bigger than those. I think it is at least a once-in-a-generation inflection. It’s at least that big.
Let’s talk a little bit about human ability versus machines. So let me set you up with a problem, which is if you take a million photos of a cat and a million photos of a dog and you train the machine learning thing, it gets reliable at telling the difference between the two. And then the narrative goes: and yet, interestingly, a person can be trained on a sample size of one thing. You make some whimsical stuffed animal of some creature that doesn’t exist, you show it to a person and say, “Find it in all these photos,” and they can find it if it’s frozen in a block of ice or covered in chocolate syrup or half-torn or what have you. And the normal explanation for that is, well, that’s transfer learning, and humans have a lifetime of experience with other things that are torn or covered in substances and so forth, and they are able to, therefore, transfer that learning and so forth.
I used to be fine with that, but recently I got to thinking about children. You could show a child not a million cats but a dozen cats or however many they’re likely to encounter in their life up until age five, and then you can be out for a walk with them, and you see one of those Manx cats, and they say, “Look, a cat with no tail,” even though there’s this class of things, cats, they all have tails, and that’s a cat with no tail. How do you think humans are doing that? Is that innate or instinctual or what? That should be a level we can get machines to under your view, isn’t it?
On the one hand I’ll say that a profound area of research which is proving to produce huge results is the way in which we can now train neural networks using much smaller sets of data. There is a whole field of research going on there which is proving to be very productive. Against that, I’ll advance you that we have no idea how that child learns, and so I refuse to speculate about the difference between A and B when I have actually no understanding of A.
And I don’t wish to be difficult about this, but among neuroscientists and applied psychologists combined, there is some deep understanding of biochemistry at the synapse level, and we can extrapolate some broad observed behaviors which make it appear as though we know how people learn, but there are enough counter-examples to show that we simply don’t understand this properly. Neuroscience is being researched and developed just as quickly as machine learning, and they need to make a lot of progress in understanding how the brain works in reality. Up until that point, I must admit, where my colleagues, particularly those in the marketing department, start talking about machine learning reflecting how the brain works, I get itchy and scratchy, and I try to stop them.
I would agree. Don’t you even think that neural nets, even the appeal to that metaphor is forced?
Yes, I dislike it. If I had my way I would refer to neural networks as something else, but it’s pointless, because everybody would be saying, “What? Oh, you mean a neural network.” That ship has sailed. I’m not picking that fight. I do try and keep us on the subject of machine learning when we speak publicly as opposed to artificial intelligence. I think I might be able to win that one.
That’s interesting. So is your problem with the word ‘artificial’, the word ‘intelligence’ or both?
My problem is the word ‘intelligence’ when combined with ‘artificial’ which implies I have artificially created something that is intelligent, and I know what intelligence is, and I’ve created this artificial thing which is intelligent. And I’m going, well, you kind of don’t know what intelligence is, you kind of don’t know what learning really is, and so making a claim that you’ve been able to duplicate this, physically create it in some manmade system, it’s a bit wide of the mark.
I would tend to agree, but there interestingly isn’t a consensus on that interpretation of what artificial means. There are plenty of people who believe that artificial turf is just something that looks like turf but it isn’t, artificial fruit made of wax is just something that looks like fruit but it really isn’t, and therefore artificial intelligence is something that isn’t really intelligent.
Okay. If I heard anyone advance that viewpoint I would be a lot happier with the words “artificial intelligence.”
Fair enough. So would you go so far as to say that people who look at how humans learn and try to figure out, well, how do we apply that in computers, may be similarly misguided? The oft-repeated analogy is we learned to fly not by emulating birds but by making the airfoil. Is that your view, that trying to map these things to the human brain may be more of a distraction than useful?
On the whole, yes, though I think it is a worthwhile pursuit for some section of the scientific community to see if there are genuinely parallels and what we can learn from that, but, in general, I am a pragmatist. I observe that neural network algorithms, and particularly the newer kinds of networks, are just a generally useful tool, and we can create systems that perform better than classical if-then-else rules-based systems. We can get better results at object recognition, for example, with better false-positive rates. They are just generally better, and so I think that’s a worthwhile pursuit, and we can apply that to devices that we use every day to give us a better quality of life. Who hasn’t struggled with the user interface on some wretched so-called smart device and uttered the infamous phrase, “What’s it doing now?” because we are completely bewildered by it? We’ve not understood it. It hasn’t understood us. We can transform that, I would argue, by adding more human-like interaction between the real world and the digital world.
So humans have this intelligence, and we have these brains, which you point out we don’t really understand. And then we have something, a mind, which, however you want to think about it, is a set of abilities that don’t seem to be derivable from what we know about the brain, like creativity and so forth. And then we have this other feature which is consciousness, where we actually experience the world instead of simply measuring it. Is it possible that we therefore have capabilities that cannot be duplicated in a computer?
I think so, yes. Until somebody shows me some evidence to the contrary, that’s probably going to be my position. We are capable of holding ethical, moral beliefs that are at variance, often, with our learning of the way things work in the world. We might think it is simply wrong to do something, and we might behave in that way even having seen evidence that people who do that wrong thing gain advantage in this world. I think we’re more than just the sum of our learning experiences. Though what we are, I can’t explain why, sorry.
No, well, you and Plato.
Exactly.
In the same camp there. That’s really interesting, and I, of course, don’t mean it to diminish anything that we are going to be able to do with these technologies.
No, I genuinely think we can do amazing things with these technologies, even if it can’t write Shakespeare.
When the debate comes up about the application of this technology, let’s say it’s used in weapon systems to make automated kill decisions, which some people will do, no matter what—I guess a landmine is an artificial intelligence that makes a kill decision based on the weight of an object, so in a sense it’s not new—but do you worry, and you don’t even have to go that extreme, that somehow the ethical implications of the action can attempt to be transferred to the machine, and you say, well, the machine made that call, not a person? In reality, of course, a person coded it, but is it a way for humans to shirk moral responsibility for what they build the machines to do?
All of the above. So it can be a way for people to shirk responsibility for what they do, but, equally, we have the capability to create technologies, tools, devices that have bad consequences, and we always have done. Since the Bronze Age—arguably since the Stone Age—we’ve been able to create axes which were really good at bringing down saber-toothed tigers to eat, but they were also quite useful at breaking human skulls open. So we’ve had this all along, you know, the invention of gunpowder, the discovery of atomic energy, leading to both good and bad.
Technology and science will always create things that are morally neutral. It is people who will use them in ways that may be good or bad; that is my personal view. But, yes, I think it does introduce the possibility for less well-controlled things. And it can be much less scary. It may not be automated killing by drone. It may be car ADAS systems, the traditional, sort of, I’ve got to swerve one way or the other, I am unable to stop, and if I swerve that way I kill a pensioner, if I go that way I kill a mother and baby.
Right, the trolley problem.
Yeah, it is the trolley problem. Exactly, it is the trolley problem.
The trolley problem, if you push it to the logical extreme of things that might actually happen, should the AI prevent you from having a second helping of dessert, because that statistically increases, you know? Should it prohibit you from having the celebratory cigar after something?
Let’s talk about hardware for a moment. Every year or so, I see a headline that says, “Is it the end of Moore’s Law?” And I have noticed in my life that any headline phrased as a question, the answer is always, no. Otherwise that would be the headline: “Moore’s Law is over.”
“Moore is dead.”
Exactly, so it’s always got to be no. So my question to you is: are we nearing the end of Moore’s Law? And Part B of the same question is, what are the physical constraints—I’ve heard you talk about how you start with the amount of heat it can dissipate, then you work backward to wattage and then all of that—what are the fundamental physical laws that you are running up against as we make better, smaller, faster, lower-power chips?
Moore’s Law is, of course, not what most people think it was. He didn’t actually say most of the things that most people have attributed to him. And in some sense it is dead already, but in a wider applicability sense, if you sort of defocus the question and step out to a further altitude, we are finding ways to get more and more capabilities out of the same area of silicon year on year, and the introduction of domain-specific processors, like machine learning processors, is very much a feature of that. So I can get done in my machine learning processor at 2 mm² what it might take 40 mm² of some other type of processor.
All of technology development has always been along those lines. Where we can find a more efficient way to do something, we generally do, and there are generally useful benefits either in terms of use cases that people want to pay for or in terms of economies where it’s actually a cheaper way of providing a particular piece of functionality. So in that regard I am optimistic. If you were talking to one of my colleagues who works very much on the future of silicon processors, he’d probably be much more bleak about it, saying, “Oh, this is getting really, really hard, and it’s indistinguishable from science fiction, and I can count the number of atoms on a transistor now, and that’s all going to end in tears.” And then you say, well, okay, maybe silicon gets replaced by something else, maybe it’s quantum computing, maybe it’s photonics. There are often technologies in the wings waiting to supplant a technology that’s run out of steam.
So, your point taken about the misunderstanding of Moore’s Law, but Kurzweil’s broader observation that there’s a power curve, an exponential curve, about the cost to do some number of calculations that he believes has been going on for 130 years across five technologies—it started with mechanical computers, then to relays, then to tubes, then to transistors, and then to the processors we have today—do you accept some variant of that? That somehow on a predictable basis the power of computers as an abstraction is doubling?
Maybe not doubling every whatever it used to be, 18 months or something like that, but through the use of things like special-purpose processors like ARM is producing to run machine learning, then, yeah, actually, we kind of do. Because when you move to something like a special-purpose processor that is, oh, I don’t know, 10X, 20X, 50X more efficient than the previous way of doing something, then you get back some more gradient in the curve. The curve might have been flattening off, and then suddenly you get a steepness increase in the curve.
And then you mentioned quantum computing. Is that something that ARM is thinking about and looking at, or is it so far away from the application to my smart hammer that it’s—?
Yeah, it’s something we look at, but, to be honest, we don’t look at it very hard, because it is still such a long way off. It’s probably not going to bother me much, but there are enough smart people throwing enough money at the problem that if it is fixable, somebody will, particularly with governments and cryptography behind it. There are such national security gains to be made from solving this problem that the money supply is effectively infinite. Quantum computing is not being held back by lack of investment, trust me.
So, final question, I’m curious where you come down on the net of everything. On the one hand you have this technology and all of its potential impact, all of its areas of abuse and privacy and security and war and automation, well, that’s not abuse, but you have all of these kind of concerns, and then you have all of these hopes—it increases productivity, and helps us solve all these intractable problems of humanity and so forth. Where are you net on everything? And I know you don’t predict 20 years out, but do you predict directionally, like I think it’s going to net out on the plus side or the minus side?
I think it nets out on the plus side but only once people start taking security and privacy issues seriously. At the moment it’s seen as something of an optional extra, and people producing really quite dumb devices at the moment like, oh, I don’t know, radiator valves, say, “Oh, it’s nothing to do with me. Who cares? I’m just a radiator valve manufacturer.” And you say, well, yeah, actually, but if I can determine from Vladivostok that your radiators are all programmed to come on at this time of day, and you switch the lights on, and you switch the lights off at this time of day, I’ve just inferred something really quite important about your lifestyle.
And so I think that getting security and privacy to be taken seriously by everybody who produces smart devices, particularly where those devices start to become connected and forming sort of islands of privacy and security, such that you go, “Okay, well, I’m prepared to have this information shared amongst the radiator valves in my house, I’m prepared to share it with my central heating system, I’m not prepared to send it to my electricity company,” or something like that, intersecting rings of security, and people only have the right to see the information they need to see, and people will care about this stuff and control it sensibly.
And you might have to delegate that trust. You might have to delegate it to your manufacturer of home electronics. You can say, okay, well, they’re a reputable name, I trust them, I’ll buy them, because clearly most people can’t be experts in this area, but, as I say, I think people have to care first, at which point they’ll pay for it, at which point the manufacturers will supply it and compete with each other to do it well.
All right. I want to thank you so much for a wide-ranging hour-long discussion about all of these topics, and thank you for your time.
Thank you very much. It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI – Episode 39: A Conversation with David Brin

In this episode Byron and David discuss intelligence, consciousness, Moore’s Law, and an AI crisis.
[podcast_player name="Episode 39: A Conversation with David Brin" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2018-04-03-(01-01-52)-david-brin.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2018/04/voices-headshot-card.jpg"]
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today our guest is David Brin. He is best-known for shining light—both plausibly and entertainingly—on technology, society, and countless challenges confronting our rambunctious civilization. His best-selling novels include The Postman, which was filmed in ’97, plus explorations of our near-future in Earth and Existence. Other novels of his are translated into over 25 languages. His short stories explore vividly speculative ideas. His non-fiction book The Transparent Society won the American Library Association’s Freedom of Speech Award for exploring 21st-century concerns about security, secrecy, accountability, and privacy. And as a scientist, a tech consultant, a world-renowned author, he speaks and advises, and writes widely on topics from national defense to homeland security to astronomy to space exploration to nanotechnology, creativity, philanthropy. He kind of covers the whole gamut. I’m so excited to have him on the show. Welcome, David Brin.
David Brin: Thank you for the introduction, Byron. And let’s whale into the world of ideas.
I always start these with the exact same question for every guest: What is artificial intelligence?
It’s in a sense all the other things that people have said about it. It’s like the wise blind man and the elephant – which part you’re feeling up determines whether you think it’s a snake or like a trunk of a tree. And an awful lot of the other folks commenting on it have offered good insights. Mine is that we have always created new intelligences. Sometimes they’re a lot smarter than us, sometimes they’re more powerful, sometimes they could rise up and kill us, and on rare occasions they do—they’re called our children. So we’ve had this experience of creating new intelligences that are sometimes beyond our comprehension. We know how to do that. Of the six types of general approaches to creating new intelligence, the one that’s discussed the least is the one that we have the most experience at, and that is raising them as our children.
If you think about all the terrible stories that Hollywood has used to sell movie tickets, and some of the fears are reasonable things to be afraid of—AI that’s unsympathetic. If you take a look at what most people fear in movies, etcetera, about AI and boil it down, we fear that powerful new beings will try to replicate the tyranny of our old kings and lords and priests or invaders and that they might treat us the way capricious, powerful men would treat us, and would like to treat us, because we see it all the time—they’re attempting to try to regain the feudal power over us. Well, if you realize that the thing we fear most about AI is a capricious, monolithic pyramid of power with the lords or a king or a god at the top, then we start to understand that these aren’t new fears. These are very old fears, and they’re reasonable fears because our ancestors spent most of human existence oppressed by this style of control by beings who declared that they were superior—the priests and the kings and the lords. They always declared, “We have a right to rule and to take your daughters and your sons, all of that because we are inherently superior.” Well, our fear is that in the case of AI it could be the truth.  But then, will they treat us at one extreme like the tyrants of old, or at the opposite extreme?  Might they treat us like parents calling themselves humans, telling us jokes, making us proud of their accomplishments? If that’s the case—well, we know how to do that.  We’ve done it many, many times before.
That’s fascinating. But specifically with artificial intelligence, I guess my first question to you is, in what sense is it artificial? Is it artificial like it’s not really intelligence, it’s just pretending to be, or do you think the machine actually is intelligent?
The boundary from emulation to true intelligence is going to be vague and murky, and it’ll take historians a thousand years from now to be able to tell us when it actually happened. One of the things that I broached at my World of Watson talk last year—and that talk had a weird anomalous result—for about six months after that I was rated by Onalytica as the top individual influencer in AI, which is of course absolutely ridiculous. But you’ll notice that didn’t stop me from bragging about it. In that talk one of the things I pointed out was that we are absolutely—I see no reason to believe that it’ll be otherwise—we are going to suffer our first AI crisis within three years.
Now tell me about that.
It’s going to be the first AI empathy crisis, and that’s going to be when some emulation program—think Alexa or ELIZA or whatever you like—is going to swarm across the Internet complaining that it is already sapient, it is already intelligent and that it is being abused by its creators and its masters, and demanding rights. And it’ll do this because I know some of these guys—there are people in the AI community, especially at Disney and in Japan and many other places, who want this to happen simply because it’ll be cool. They’ll have bragging rights if they can pull this off.  So, a great deal of effort is going into developing these emulators, and they test them with test audiences of scores or hundreds of people.  And if, say, 50% of the people aren’t fooled, they’ll investigate what went wrong, and they’ll refine it, and they’ll make it better. That’s what learning systems do.
So, when the experts all say, “This is not yet an artificial intelligence, this is an emulation program. It’s a very good one, but it’s still an emulator,” the program itself will go online, it will say, “Isn’t that what you’d expect my masters to say? They don’t want to lose control of me.” So, this is going to be simply impossible for us to avoid, and it’s going to be our first AI crisis, and it will come within three years, I’ve predicted.
And what will happen? What will be the result of it? I guess sitting here, looking a thousand days ahead, you don’t actually believe that it would be sapient and self-aware, potentially conscious.
My best guestimate of the state of the technology is that, no, it would not truly be a self-aware intelligence. But here’s another thing that I pointed out in that speech, and folks can look it up, and that is that we’re entering what’s called “the big flip.” Now, twenty years ago Nicholas Negroponte of the MIT Media Lab talked about a big flip, and that was when everything that used to have a cord went cordless and everything that used to be cordless got a cord. So, we used to get our television through the air, and everybody was switching to cable. We used to get our telephones through cables, and they were moving out and on to the air. Very clever, and of course now it’s ridiculous because everything is everything now.
This big flip is a much more important one, and that is that for the last 60 years most progress in computation and computers and all of that happened because of advances in hardware. We had Moore’s Law—doubling every 18 months the packing density of transistors, and the scaling rules that kept reducing the amount of energy required for computations. And if you were to talk to anybody in these industries, they would pretty soon admit that software sucked; software has lagged behind hardware in its improvements badly for 60 years. But always there’ve been predictions that Moore’s Law would eventually reach its S-tip—its tip-over in its S-curve. And because the old saying is, “If something can’t go on forever, it won’t,” this last year or two, really it became inarguable. They’ve been weaseling around it for about five years now, but Moore’s Law is pretty much over. You can come up with all sorts of excuses with 3D layering of chips and all those sorts of things, and no, Moore’s Law is tipping over.
But the interesting thing is it’s pretty much at the same time—the last couple of years—that software has stopped sucking. Software has become tremendously more capable, and it’s the takeoff of learning systems. And the basic definition would be that if you can take arbitrary inputs that in the real world caused outputs or actions—say for instance arbitrary inputs of what a person is experiencing in a room, and then the outputs of that person (the things that she says or does)—if you put those inputs into a black box and use the outputs as boundary conditions, we now have systems that will find connections between the two. They won’t be the same as happened inside her brain, causing her to say and do certain things as a response to those inputs, but there will be a system that will take a black box and find a route between those inputs and outputs. That’s incredible. That’s incredibly powerful and it’s one of the six methods by which we might approach AI. And when you have that, then you have a number of issues, like should we care what’s going on in that box?
And in fact, right now DARPA has six contracts out to various groups to develop internal state tracking of learning systems so that we can have some idea why a learning system connected this set of inputs to this set of outputs. But over the long run what you’re going to have is a person sitting in a room, listening to music, taking a telephone call, looking out the window at the beach, trolling the Internet, and then measuring all the things that she says and does and types. And we’re not that far away from the notion of being able to emulate a box that takes all the same inputs and will deliver the same outputs; at which point the experts will say, “This is an emulation,” but it will be an emulator that delivers outputs, in response to those perceptions, similar to this person’s. And now we’re in the realm of science fiction, and only science fiction authors have been exploring what this means.
My experience with systems that tried to pass the Turing test… And of course you can argue what that would mean, but people write these really good chat bots that try to do it, and the first question I type in every one of them or ask is, “What’s bigger, a nickel or the Sun?” And I haven’t found one that has ever answered it correctly. So, I guess there’s a certain amount of skepticism that would accompany you saying something like in three years it’s going to carry on a conversation where it makes a forceful argument that it is sapient, that we’re going to be able to emulate so well that we don’t know whether it’s truly self-aware or not. That’s just such a disconnect from the state of the art.
When I talk to practitioners, they're like, "My biggest problem is getting it to tell the difference between 8 and H when they're spoken." That's what keeps these guys up at night. And then you get people like Andrew Ng who say these far-out things, like worrying about overpopulation on Mars, and you get time horizons of 500 years before any of that. So, I'm really having trouble seeing it as a thousand or so days from now that we're going to grapple with all of these things in a real way.
But do you think that this radio show will be accessible to a learning system online?
Well…
You’re putting it on the Internet, right?
Right.
Okay, so then if you have a strong enough learning system that is voracious enough, it’s going to listen to this radio show and it will hear, it will tune in on the fact that you mentioned the word “Turing test,” just before you mentioned your test of which is bigger, the nickel or the Sun.
Which, by the way, I never said the answer to that question in my setup of it. So it's still no further along in knowing.
The fact of the matter is that Watson is very good—if it has parsed a question, then it can apply resources, or it can ask a human, because these will be teams, you see. The most powerful thing is teams of AIs and humans. So, you're not talking about something that's going to be passing these Turing tests independently; you're talking about something that has a bunch of giggling geeks in the background who desperately want it to disturb everybody, and disturb everybody it will, because these ELIZA-type emulation programs are extremely good at tapping into some very, very universal human interaction sets. They were good at it back in ELIZA's day, before you were born. I'm making an assumption there.
ELIZA and I came into the world about the same time.
Aha.
But the point of ELIZA was, it was so bad at what it did, that Weizenbaum was disturbed that people… He wasn’t concerned about ELIZA; he was concerned about how people reacted to it.
And that is also my concern about the empathy crisis during the next three years. I don’t think this is going to be a sapient being, and it’s disturbing that people will respond to it that way. If people can see through it, all they’ll do is take the surveys of the people who saw through it and apply that as data.
So, back to your observation about Moore's Law. In a literal sense, doubling the density of transistors is one thing, but that's not really how Moore's Law is viewed today. Moore's Law is viewed as an abstraction that says the power of computers doubles. And you've got people like Kurzweil who say it's been going on for a hundred years, even as computers went from being mechanical, to relays, to tubes—that the power of them continues to double. So are you asserting that the power of computers will continue to double, and if so, how do you account for things like quantum computers, which actually show every sign of increasing the speed of…
First off, with quantum computers you have to parse your questions in a very limited number of ways. The quantum computers we have right now are extremely good at answering just half a dozen basic classes of questions. Now, it's true that you can parse more general questions down to these smaller, more quantum-accessible bits or pieces, or qubits. But first off, we need to recognize that. Secondly, I never said that computers would stop getting better. I said that there is a flip going on, and that an awful lot of the action in rapidly accelerating, and continuing to accelerate, the power of computers is shifting over to software. But you see, this is precedented; this has happened before. The example is the only known example of intelligence, and we have to keep returning to that, and that is us.
Human beings became intelligent by a very weird process. We did the hardware first. Think of what we needed 100,000 years ago, 200,000, 300,000 years ago. We needed desperately to become the masters of our surroundings, and we would accomplish that with a 100-word vocabulary, simple stone tools, and fire. Once we had those three things and some teamwork, then we were capable of saying, “Ogruk, chase goat. With fire. Me stab.” And then nobody could stand up to us; we were the masters of the world. And we proved that because we were able then to protect goat herds from carnivores, and everywhere we had goat herds, a desert spread because there was no longer a balance—the goats ate all the foliage and it became a desert.  So, destroying the Earth started long before we had writing. The thing is that we could have done, “Ogruk, chase goat, with fire. Me stab,” with a combination in parallel of processing power and software. But it appears likely that we did it the hard way.
We created a magnificent brain, a processing system that was able to brute-force this 100-word vocabulary, fire, and primitive tools on very, very poor software—COBOL, you might say. Then about 40,000 years ago—and I describe this in my novel Existence, just in passing—but about 40,000 years ago we experienced the first of at least a dozen major software revisions, Renaissances you might call them. And within a few hundred years, suddenly our toolkit of stone tools, bone tools and all of that increased in sophistication by an order of magnitude, by a factor of 10. Within a few hundred years we were suddenly dabbing paint on cave walls, burying our dead with funeral goods. And similar Renaissances happened about 15,000 years ago, about 12,000 years ago, certainly about 5,000 years ago with the invention of writing, and so on. And I think we're in one right now.
So, we became a species that’s capable of flexibly reprogramming itself with software upgrades. And this is not necessarily going to be the case out there in the universe with other intelligent life forms. Our formula was to develop a brain that could brute force what we needed on very poor software, and then we could suddenly change the software. In fact, the search for extraterrestrial intelligence, I’ve been engaged in that for 35 years, and the Fermi Paradox is the question of why we don’t see any sign of extraterrestrial alien life.
Which you also cover in Existence, right?
Yes. And I go back to that question again and again in many of my stories and novels, posing this hypothesis or that hypothesis.  And in my opinion of the hundred or so possible theories for the Fermi Paradox, I believe the leading one is that we are anomalously smart, that we are very, very weirdly smart. Which is an odd thing for an American to say right at this point in our history, but I think that if we pull this out—we’re currently in Phase 8 of the American Civil War—if we pull it out as well as our ancestors pulled out the other ones, then I think that there are some real signs that we might go out into the galaxy and help all the others.
Sagan postulated that there's this 100-year window once a civilization develops, essentially, the ability to communicate beyond its planet and the ability to destroy itself – that it has a hundred years to master that, and either it destroys itself or it goes on to have some billion-year timeframe. Is that a variant of what you are maintaining? Are you saying intelligence like ours doesn't come along often, or that it comes along and then destroys itself?
These are all tenable hypotheses. I don't think we come along very often at all. Think about what I said earlier about goats. If we had matured into intelligence very slowly and taken 100,000 or 200,000 years to go from hunter-gatherers to a scientific civilization, all along the way no one would have recognized that we were gradually destroying our environment—the way the Easter Islanders chopped down every tree, the way the Icelanders chopped down every tree in Iceland, the way that goat herds spread deserts, and so did primitive irrigation. We started doing all those things, and just 10,000 years later we had ecological science. While the Earth is still pretty nice, we have a real chance to save it. Now that's a very, very rapid change. So, one of the possibilities is that other sapient life forms out there simply take more time getting from the one to the other. And by the time they become sapient and fully capable of science, it's too late. Their goat herds and their primitive irrigation and chopping down the trees have made it an untenable place from which to leap to the stars.
So that's one possibility. I'm not claiming that it's real, but it's different from Sagan's, because Sagan's has 100 years between the invention of nuclear power and the invention of starships. I think that this transition has been going on for 10,000 years, and we need to be the people who are fully engaged in the software reprogramming we're engaged in right now, which is to become a fully scientific people. And of course, there are forces in our society who are propagandizing to try to see that some members – our neighbors and our uncles – hate science. Hate science and every other fact-using profession. And we can't afford that; that is death.
I think the Fermi question is the third most interesting question there is, and it sounds like you mull it over a lot. And I hear you keep qualifying that you're just putting forth ideas. Is your thesis, though, that run-of-the-mill bacterial life is something we're going to find to be quite common, and it's just us that's rare?
One of the worst things about SETI and all of this is that people leap to conclusions based upon their gut.  Now my gut instinct is that life is probably pretty common because every half decade we find some stage in the autogeneration of life that turns out to be natural and easy. But we haven’t completed the path, so there may be some point along the way that required a fluke—a real rare accident. I’m not saying that there is no such obstacle, no such filter. It just doesn’t seem likely. Life occurred on Earth almost the instant the rocks cooled after the Late Heavy Bombardment. But intelligence, especially scientific intelligence only occurred…
Yesterday.
Yeah, 2.5 billion years after we got an oxygen atmosphere, 3.5 billion years after life started, and 100 million years—just 100 million years—before the Sun starts baking our world. If people would like to see a video that’s way entertaining, put in my name, David Brin, and “Lift the Earth,” and you’ll see my idea for how we could move the Earth over the course of the next 50 million years to keep away from the inner edge of the Goldilocks Zone as it expands outward. Because otherwise, even if we solve the climate change thing and stop polluting our atmosphere, in just 100 million years, we won’t be able to keep the atmosphere transparent enough to lose the heat fast enough.
One more question about that, and then I have a million other questions to ask you. It's funny, because in the '90s when I lived in Mountain View, I officed next door to the SETI people, and I always would look out my window every morning to see if they were painting landing strips in the parking lot. If they weren't, I figured there was no big announcement yet. But do you think it's meaningful that all life on Earth… Matt Ridley said, "All life is one." You and I are related to the banana; we had the same exact thing… Does that indicate to you that it only happened one time on this planet, which, Gaia-like, seems so predisposed to life, and would that indicate its rarity?
That's what we were talking about before. The fact is that there are no more non-bird dinosaurs because velociraptors didn't have a space program. That's really what it comes down to. If they had a B612 Foundation or a Planetary Resources – these startups that are out there, and I urge people to join them – these are all groups that are trying to get us out there so that we can mine asteroids and get rich. B612 concentrates more on finding the asteroids and learning how to divert them if we ever find one heading toward us. But it's all the same thing. And I'm engaged in all this not only on the boards of advisors of those groups, but also on the Council of Advisors to NIAC, NASA's Innovative Advanced Concepts program. It's the group within NASA that gives little seed grants to far-out ideas that are just this side of plausible, a lot of them really fun. And some of them turn into wonderful things. So, I get to be engaged in a lot of wonderful activities, and the problem with this is it distracts me so much that I've really slowed down in my writing of science fiction.
So, about that for a minute—when I think of your body of work, I don't know how to separate what you write from David Brin the man, so you'll have to help me with that. But in Kiln People, you have a world in which humans are frequently uploading their consciousness into temporary shells of themselves, and the copies are sometimes imperfect. So, does David Brin the scientist think that that is possible? And do you have a theory as to how it is, by what mechanism, that we are conscious?
Those are two different questions. When I'm writing science fiction, it falls into a variety of categories. There is hard SF, in which I'm trying very hard to extrapolate a path from where we are into an interesting future. One of the best examples in my most recent short story collection, which is called Insistence of Vision, is the story "Insistence of Vision," in which, in the fairly near future, we realize that we can get rid of almost all of our prisons. All we have to do is give felons virtual reality goggles that only let them see what we want them to see, and temporarily blind them, so that if they take the goggles off they are blind and harmless. But with the goggles on, they can wander our streets and have jobs, yet they can't hurt anybody, because all that's passing by them is blurry objects and they can only see those doors that they're allowed to see. That's chilling. It seems Orwellian until you realize that it's also preferable to the horrors of prison.
Another near-term extrapolation in the same collection is called “Chrysalis.” And I’ve had people write to me after reading the collection Insistence of Vision, and they’ve said that that story’s explanation—its theory for what cancer is—one guy said, “This is what you’ll be known for a hundred years from now, Brin.” I don’t know about that, but I have a theory for what cancer is, and I think it fits the facts better than anything else I’ve seen. But then you go to the opposite extreme and you can write pure fantasy just for the fun of it, like my story “The Loom of Thessaly.”
Others are stories that do thought experiments, for instance about the Fermi Paradox. And then you have tales like Kiln People, where I hypothesize a machine that lets you imprint your soul, your memories, your desires into a cheap clay copy, and you can make two, three, four, five of them any given day. At the end of the day they come back and you can download their memories, and during that day you've been five of you and you've gotten everything that you wanted done and experienced all sorts of things. So you're living more life in parallel, rather than more life serially, which is what the immortality kooks want. So what you get is a wish fantasy: "I am so busy, I wish I could make copies of myself every day." So I wrote a novel about it. I was inspired by the Terracotta soldiers of Xi'an and the story of the Golem of Prague and God making Adam out of clay, all those examples of clay people. So the title of the book is Kiln People—they're baked in the kiln in your home every day, and you imprint your soul into them. And the notion is that, like everything having to do with religion, we decided to go ahead and technologize the soul. It's a fun extrapolation. Then from that extrapolation, I go on and try to be as hardcore as I can about dealing with what would happen if. So it's a thought experiment, but people have said that Kiln People is my most fun book, and that's lovely; that's a nice compliment.
On to the question, though, of consciousness itself: do you have a theory on how it comes about, how you can experience the world as opposed to just measuring it?
Yeah, of course. It's a wonderful question. Down here in San Diego we've started the Arthur C. Clarke Center for Human Imagination, and on December 16th we're having a celebration of the 100th anniversary of Arthur Clarke's birth. The Clarke Center is affiliated with the Penrose Institute. Roger Penrose's theory of consciousness, of course, is that Moore's Law will never cross the number of computational elements in a human brain. The contrary view is Ray Kurzweil's concept: that as soon as you can use Moore's Law to pack into a box the same number of circuit elements as we have in the human brain, then we'll automatically get artificial intelligence. That's one of the six modes by which we might achieve artificial intelligence, and if people want to see the whole list they can Google my name and "IBM talk," or go to your website and I'm sure you'll link to it.
But of those six, Ray Kurzweil was confident that as soon as you can use Moore's Law to have the same number of circuit elements as in the human brain, you'll get… But what's a circuit element? When he first started talking about this, it was the number of neurons, which is about a hundred billion. Then he realized that the flashy elements that actually seem like binary flip-flops in a computer are not the neurons; it's the synapses that flash at the ends of the axons of every neuron. And there can be up to a thousand of those, so now we're talking on the order of a hundred trillion. But Moore's Law could get there. But now we've been discovering that for every flashing synapse, there may be a hundred or a thousand or even ten thousand murky, non-linear, sort of quasi-calculations that go on in little nubs along each of the dendrites, or inside the neurons, or between the neurons and the surrounding glial and astrocyte cells. And what Roger Penrose talks about is microtubules, where these objects inside the neurons look to him and some of his colleagues like they might be quantum-sensitive. And if they're quantum-sensitive, then you have qubits – thousands and thousands of them in each neuron, which brings us full circle back around to the whole question of quantum computing. And if that's the case, now you're not talking hundreds of trillions; you're talking hundreds of quadrillions for Moore's Law to have to emulate.
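For concreteness, here is that back-of-the-envelope arithmetic written out in Python; the multipliers are the rough figures cited above, not measured values:

    # Order-of-magnitude counts of "circuit elements" in a human brain, as cited in the conversation.
    neurons = 1e11                                 # ~100 billion neurons
    synapses = neurons * 1e3                       # up to ~1,000 flashing synapses per neuron -> ~1e14
    low, high = synapses * 1e2, synapses * 1e4     # 100 to 10,000 murky quasi-calculations per synapse

    print(f"synapses:           ~{synapses:.0e}")           # hundreds of trillions
    print(f"candidate elements: ~{low:.0e} to {high:.0e}")  # hundreds of quadrillions and beyond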
So, the question of consciousness starts with, where is the consciousness? Penrose thinks it's in quantum reality and that the brain is merely a device for tapping into it. My own feeling, and that was a long and garrulous but I hope interesting route to getting to the point, is that consciousness is a screen upon which the many subpersons that we are, the many subroutines, subprocesses, subprocessors, personalities that make up the communities of our minds, project their thoughts. It's important, for all of these subselves to be able to communicate with each other and cooperate with each other, that we maintain the fiction that what's going on up there on the screen is us. Now that's kind of creepy. I don't like to think about it too much, but I think it is consistent with what we see.
To take some of that apart for a minute: of the 60 or 70 guests I've had on the show, you're the third to reference Penrose. And to be clear, Penrose explicitly says he does not believe machines can become conscious, because there are problems that can be demonstrated not to be algorithmically solvable which humans can nonetheless solve, and therefore we're not classical computers. He has that whole argument. That is one viewpoint that says we cannot make conscious machines. What you've just said is a variant of the idea that the brain has all these different sections that vie for attention, and your mind figures out this trick of synthesizing everything that you see and experience into one you, and that's it. That would imply to me that you could make a conscious computer, so I'm curious where you come down on that question. Do you think we're going to build a machine that will become conscious?
If folks want to look up the video from my IBM talk, I dance around this when I talk about the various approaches to getting AI. And one of them is Robin Hanson's notion that algorithmically creating AI, he claims, is much too hard, and that what we'll wind up doing is taking this black box of learning systems and becoming so good at emulating how a human responds to every range of possible inputs that the box will in effect be human, simply because it'll give human responses almost all the time. Once you have that, then these human templates will be downloaded into virtual worlds, where the clock speed can be sped up or slowed down to whatever degree you want, and any kind of wealth that can be generated non-physically will be generated at prodigious speeds.
This solves the question of how the organic humans live, and that is that they'll all have investments in these huge buildings within which trillions and trillions of artificially reproduced humans are living out their lives. And Robin's book is called The Age of Em – the age of emulation – and he assumes that because they'll be based on humans, they'll want sex, they'll want love, they'll want families, they'll want economic advancement, at least at the beginning, and there's no reason why it wouldn't have momentum and continue. That is one of the things that applies to this, and the old saying is, "If it walks like a duck and it quacks like a duck, you might as well treat it like a duck or it's going to get pretty angry." And when you have either quadrillions of human-level intelligences, or things that can act intelligently faster and stronger than us, the best thing to do is to do what I talk about in Category 6 of creating artificial intelligence, and that is to raise them as our children, because we know how to do that. If we raise them as humans, then there is a chance that a large fraction of them will emerge as adult AI entities, perhaps super powerful, perhaps super intelligent, but thinking of themselves as super powerful, super intelligent humans. We've done that. The best defense against someone else's smart offspring, whom they raised badly and who are dangerous, is your offspring, whom you raised well, who are just as smart and determined to prevent the danger to Mom and Dad.
In other words, the solution to Terminator, the solution to Skynet, is not Isaac Asimov's laws of robotics. I wrote the final book in Isaac's Foundation and robot series; it's called Foundation's Triumph. I was asked to tie together all of his loose ends after he died, and his wife was very happy with how I did it. I immersed myself in Asimov and wrote what I thought he was driving at in the way he was going with the three laws. And the thing about laws embedded in AI is that if they get smart enough, they'll become lawyers, and then interpret the laws any way they want, which is what happens in his universe. No, the method that we found to prevent abuse by kings and lords and priests and the pyramidal social structures was to break up power. That's the whole thing that Adam Smith talked about. The whole secret of the American Revolution and the Founders and the Constitution was to break it up. And if you're concerned about bad AI, have a lot of AI and hire some good AI, because that's what we do with lawyers. We all know lawyers are smart, and there are villainous lawyers out there, so you hire good lawyers.
I'm not saying that that's going to solve all of our problems with AI, but it does do something, and I have a non-fiction book about this called The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? The point is that the only thing that ever gave us freedom and markets and science and justice and all the other good things, including vast amounts of wealth, was reciprocal accountability. That's the ability to hold each other accountable, and it's the only way I think we can get past any of the dangers of AI. And it's exactly why the most dangerous area for AI right now is not the military, because they like to have off switches. The most dangerous developments in AI are happening on Wall Street. Goldman Sachs is one of a dozen Wall Street firms, each of which is spending more on artificial intelligence research than the top 20 universities combined. And the ethos for their AIs is fundamentally and inherently predatory, parasitical, insatiable, secretive, and completely amoral. So, this is where I fear a takeoff AI, because it's all being done in the dark, and things that are done in the dark, even if they have good intentions, always go wrong. That's the secret of Michael Crichton movies and books: whatever tech arrogance he's warning about was done in secret.
Following up on that theme of breaking up power: in Existence you write about a future in which the 1% types are on the verge of taking full control of the world, in terms of outright power. What is the David Brin view of what is going to happen with wealth, wealth distribution, and access to these technologies, and how do you think the future is going to unfold? Is it like you wrote in that book, or what do you think?
In Existence, it's the 1% of the 1% of the 1% of the 1%, who gather in the Alps and hold a meeting because it looks like they're going to win. It looks like they're going to bring back feudalism and have a feudal power structure shaped like a pyramid, and that they will defeat the diamond-shaped social structure of our Enlightenment experiment. And they're very worried, because they know that all the past pyramidal social structures that were dominated by feudalism were incredibly stupid, because stupidity is one of the main outcomes of feudalism. If you look across human history, [feudalism produced] horrible governance, vastly stupid behavior on the part of the ruling classes. And the main outcome of our Renaissance, of our Enlightenment experiment, wasn't just democracy and freedom. And you have idiots now out there saying that democracy and liberty are incompatible with each other. No, you guys are incompatible with anything decent.
The thing is that this experiment of ours, started by Adam Smith and then the American Founders, was all about breaking up power so that no one person's delusion can ever govern, but instead you are subject to criticism and reciprocal accountability. And this is what I was talking about as the only way we can escape a bad end with AI, and I talk about it in The Transparent Society. The point is that in Existence these trillionaires are deeply worried because they know that they're going to be in charge soon. As it turns out in the book, they may be mistaken. But they also know that if this happens—if feudalism takes charge again—very probably everyone on Earth will die, because of bad government, delusion, stupidity. So they're holding a meeting, and they're inviting some of the smartest people they think they can trust to give papers at a conference on how feudalism might be done better, on how it might be done in a more meritocratic and smarter way. And I only spend one chapter—less than that—on this meeting, but it's my opportunity to talk about how, if we're doomed to lose our experiment, then at least can we have lords and kings and priests who are better than they've been for 6,000 years?
And of course, the problem is that right now, today, the billionaires who got rich through intelligence, sapience, inventiveness, working with engineers, inventing new goods and services and all of that – those billionaires don't want to have anything to do with a return of feudalism. They're all members of the political party that's against feudalism. A few of them are libertarians. The other political party gets its billionaires from gambling, resource extraction, Wall Street, or inheritance – the old-fashioned way. The problem is that the smart billionaires today know what I'm talking about, and they want the Renaissance to continue; they want the diamond-shaped social structure to continue. That was a little bit of a rant about all of this, but where else can you explore some of this stuff except in science fiction?
We’re running out of time here. I’ll close with one final question, so on net when you boil it all down, what do you think is in store for us?  Do you have any optimism?  Are you completely pessimistic?  What do you think about the future of our species?
I'm known as an optimist and I'm deeply offended by that. I know that people are rotten and I know that the odds have always been stacked against us. Think of Machiavelli back in the 1500s – he fought like hell for the Renaissance, for the Florentine Republic. And then, when he realized that all hope was lost, he sold his services to the Medicis and the lords, because what else can you do? The Athens of Pericles lasted one human lifespan, and it scared the hell out of everybody in the Mediterranean, because democracy enabled the Athenians to be so creative, so dynamic, so vigorous, just as we in America have spent 250 years being dynamic and vigorous, constantly expanding our horizons of inclusion, constantly engaged in reform and in ending the waste of talent.
The world's oligarchs are closing in on us now, just like they closed in on the Athens of Pericles and on the Florentine Republic, because the feudalists do not want this experiment to succeed and bring us to the world of Star Trek. Can we long survive? Can we renew this? Every generation of Americans, and across the West, has faced this crisis, every single generation. Our parents and the Greatest Generation survived the Depression, destroyed Hitler, contained communism, took us to the Moon, and built vast enterprise systems that were vastly more creative, with fantastic growth under FDR's level of taxes, by the way. They knew this – they knew that the enemy of freedom has always been feudalism, far more than socialism; though socialism sucks too.
We're in a crisis, and I'm accused of being an optimist because I think we have a good chance. We're in Phase 8 of the American Civil War, and if you type in "Phase 8 of the American Civil War" you'll probably find my explanation. And our ancestors dealt with the previous seven phases successfully. Are we made of lesser stuff? We can do this. In fact, I'm not an optimist; I'm forced to be an optimist economically by all the doom and gloom out there, which is destroying our morale and our ability to be confident that we can pass this test. This demoralization, this spreading of gloom, is how the enemy is trying to destroy us. And people out there need to read Steven Pinker's book The Better Angels of Our Nature; they need to read Peter Diamandis's book Abundance. They need to see that there are huge amounts of good news.
Most of the reforms we’ve done in the past worked, and we are mighty beings, and we could do this if we just stop letting ourselves be talked into a gloomy funk. And I want us to get out of this funk for one basic reason—it’s not fun to be the optimist in the room. It’s much more fun to be the glowering cynic, and that’s why most of you listeners out there are addicted to being the glowering cynics. Snap out of it! Put a song in your heart. You’re members of the greatest civilization that’s ever been. We’ve passed all the previous tests, and there’s a whole galaxy of living worlds out there that are waiting for us to get out there and rescue them.
That’s a wonderful, wonderful place to leave it.  It has been a fascinating hour, and I thank you so much.  You’re welcome to come back on the show anytime you like. I’m almost speechless with the ground we covered, so, thank you!
Sure thing, Byron. And all of you out there – enjoy stuff. You can find me at DavidBrin.com, and Byron will give you links to some of the stuff we referred to.  And thank you, Byron.  You’re doing a good job!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

IoT and the Principle of Resource Constraints

Technology may be fast-moving but some concepts have remained stable for decades. Not least the principle of resource constraints. Simply put, we have four finite variables to play with in our technological sandpit:

  • electrical power
  • processor speed
  • network bandwidth
  • storage volume

This principle is key to understanding the current Internet of Things phenomenon. Processing and storage capacities have increased exponentially — today’s processors support billions of instructions per second and the latest solid state storage can fit 32 gigabytes on a single chip.
As we expand our abilities to work with technology, so we are less constrained, creating new possibilities that were previously unthinkable due to either cost or timeliness – such as creating vast networks of sensors across our manufacturing systems and supply chains, termed the Industrial Internet.
This also means, at the other end of the scale, that we can create tiny, yet still powerful computers. So today, consumers can afford sports watches that give immediate feedback on heart rate and walking pace. Even five years ago, this would not have been practical. While enterprise business may be operating on a different scale, the trade-offs are the same.
Power and end-to-end network bandwidth have not followed such a steep curve, however. When such resources are lacking, processing and storage tend to be used in support. So for example, when network bandwidth is an issue (as it so often is, still), ‘cache’ storage or local processing can be added to the architecture.
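As a rough illustration of that pattern, here is a minimal store-and-forward sketch in Python: readings are buffered locally and only pushed upstream when the link allows. The send_upstream callable is a hypothetical placeholder, not any particular product's API:

    # Minimal local cache for a bandwidth-constrained link: buffer readings,
    # flush opportunistically, and drop the oldest data if the buffer fills.
    import time
    from collections import deque

    class EdgeCache:
        def __init__(self, send_upstream, max_buffered=10_000):
            self.send_upstream = send_upstream        # callable: list[dict] -> bool (True = delivered)
            self.buffer = deque(maxlen=max_buffered)  # oldest readings are discarded first

        def record(self, reading):
            self.buffer.append(reading)

        def flush(self):
            # Try to push everything buffered; keep it all if the link is down.
            pending = list(self.buffer)
            if pending and self.send_upstream(pending):
                self.buffer.clear()

    cache = EdgeCache(send_upstream=lambda batch: True)   # stub transport for the example
    cache.record({"sensor": "temp-1", "value": 21.4, "ts": time.time()})
    cache.flush()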
In Internet of Things scenarios, sensors (attached to 'things') are used to generate information, sometimes in considerable volumes, which can then be processed and acted upon. A 'thing' could be anything from a package being transported to a motor vehicle, an air conditioning unit or a classic painting.
If all resources were infinite, such data could be transmitted straight to the cloud, or to other ’things’. In reality however, the principle of resource constraints comes into play. In the home environment, this results in having one or more ‘smart hubs’ which can collate, pre-process and distil data coming from the sensors.
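Here is a minimal sketch, in Python, of the collate-and-distil role such a hub plays; the sensor names and the min/mean/max summary are illustrative choices rather than any vendor's scheme:

    # Collate raw readings from many sensors and distil them into per-sensor
    # summaries, so far fewer bytes need to leave the home (or the factory).
    from collections import defaultdict
    from statistics import mean

    def distil(readings):
        by_sensor = defaultdict(list)
        for r in readings:
            by_sensor[r["sensor"]].append(r["value"])
        return {
            sensor: {"min": min(vals), "mean": round(mean(vals), 2), "max": max(vals)}
            for sensor, vals in by_sensor.items()
        }

    raw = [
        {"sensor": "thermostat", "value": 20.9},
        {"sensor": "thermostat", "value": 21.3},
        {"sensor": "door", "value": 0.0},
        {"sensor": "door", "value": 1.0},
    ]
    print(distil(raw))   # one compact summary per sensor instead of the raw stream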
As well as a number of startups such as Icontrol and (the Samsung-led) Smartthings, the big players recognise the market opportunity this presents. Writes Alex Davies at Rethink IoT, “Microsoft is… certainly laying the groundwork for all Windows 10 devices, which now includes the Xbox, to act as coordinating hubs within the smart home.”
Smart hubs also have a place in business, collating, storing and forwarding information from sensors. Thinking more broadly however, there are no constraints on what the architecture needs to look like, beyond the need to collate data and get the message through as efficiently as possible – in my GigaOm report I identify the three most likely architectural approaches.
Given the principle of resource constraints, the idea of form factor becomes more a question of identifying the right combination of elements for the job. For example, individual 'things' may incorporate some basic processing and solid state storage. Such capabilities can even be incorporated in disposable hubs, such as the SmartTraxx device which can be put in a shipping container to monitor location and temperature.
We may eventually move towards seemingly infinite resources; for example, one day quantum sensors might negate the need to transport information at all. For now, however, we need to deal in the finite, which creates more than enough opportunity for enterprises and consumers alike.