Disruptive Technologies: In Conversation with Byron Reese & Lauren Sallata

Byron Reese sits down with Lauren Sallata, Chief Marketing Officer & VP, Panasonic Corporation of North America, Inc., to discuss IoT devices, driverless cars, immersive entertainment, and solar initiatives.

Byron Reese: Many people are excited about the possibilities that today’s new technologies offer. They see a world made better through technology. Another group of people view the technology landscape completely differently and are concerned about the impact of technology on privacy, community, and employment. Why is there so much disagreement, and where do you come down in your view of the future?
Lauren Sallata: In the words of Mohandas Gandhi, “Honest disagreement is often a good sign of progress.” You could say the same about disagreement over technology. Panasonic is involved in engineering entirely new and better experiences, in cities and factories, in stores, offices, entertainment venues, and in automobiles, airports and on airplanes. Consultancy McKinsey identified 12 disruptive technologies that will account for trillions of dollars in economic value over the decade ahead. Panasonic is deeply engaged in 10 of those 12 technologies. And we already see the positive impact of these technologies clearly. For example, in renewable energy, our lithium-ion batteries are being used in the world’s leading electric vehicles to reduce pollution. Sensors embedded in road systems send information to cars and road operators about hazardous conditions and traffic, using IoT to improve driving safety and reduce traffic jams. Other examples include wearable robotics designed to reduce injuries at work.
How do you think the widespread adoption of IoT devices will change our daily lives? What does Panasonic’s vision of a hyper-connected world look like?
We see the “things” that make up the “Internet of Things” bringing us unparalleled data, information and convenience to change our in-home and working experiences. Voice technology will enable each interaction to be more personalized and seamless. We believe that voice is the technology that moves all other technologies forward. Why? Voice takes away the learning curve and gives both businesses and consumers more control over the way they use and interact with technology. Using our voices frees up our hands and our brains. When we pry our eyes away from screens and stop tapping on keypads, we can focus on what we’re doing right now. The factory worker is less likely to make errors…the car driver is less distracted…the ER nurse can focus more completely on his patients. Voice is already an auto sector mainstay. We’ve developed cutting-edge, voice-activated infotainment systems for many of the world’s top automakers, like our new ELS system for Acura. We’re working with Amazon to take voice integration beyond just information and move toward fully realized contextual understanding. These capabilities are giving drivers and passengers control over critical features such as heating and ventilation systems and audio and navigation functions. We’re also giving passengers the benefit of connecting to other smart devices to allow them to fully control their experience both in and out of the car. And we’re working with Google on similar projects in the voice space, to provide integration and information throughout their technology solutions.
Talk about driverless cars for a minute.  When do you think we might see the end of the human driver? What is Panasonic’s role in building that world?
We’ve estimated that by 2030, 15% of new cars sold could be fully autonomous. We’ve worked with almost all the major automakers around the world for almost 60 years, and we’re doubling down on our ADAS and automation technology investments with partners. Autonomous vehicles are going to have a huge impact on our society. Vehicle electrification is going to have a similar impact on our planet. The combination of the two technologies will create a multiplier effect that will remake transportation. This will happen in stages. Stage one is the emergence of the connected vehicle, which lays the foundation. With EVs, we’re still at a price premium to internal combustion; by around 2022, we’ll be at parity. During this time, we’ll see elements of autonomous driving, such as autonomous braking, and EV autonomous vehicles for commercial and fleet use start to go mainstream. Next, we see trucking fleets start to make the transition. Then commercial ridesharing fleets come online, giving consumers the benefit of autonomous electric vehicle transportation. In the last stage, we’ll see the personal ownership market catch up with commercial.
Tell us about what’s going on at Highway I-70 in Colorado.
As cars become more computer than machine, they are capable of communicating with one another in real time – saving time and lives. Panasonic has partnered with the Colorado Department of Transportation to create a connected vehicle ecosystem that promises to drive a revolution in roadway safety and efficiency. On a 90-mile commuter stretch of Interstate 70 into Denver, this technology has been designed and will be deployed later this year to allow CDOT to share information on highway conditions, traffic alerts and other driving hazards. It’s the first production-grade U.S. connected vehicle system in which real-time data would be shared across vehicles, infrastructure, and people, with a goal of improving safety, lowering fuel consumption, and reducing congestion. Estimates are that such a solution could reduce non-impaired traffic crashes by 80 percent and save drivers hours stuck in traffic each year.
What is Panasonic doing in the world of immersive entertainment?
At iconic stadiums, beloved theme parks, and worldwide special events like the Olympic Games, Panasonic technologies immerse fans in the action and create storytelling experiences that inspire and amaze. These include the world’s largest video displays, mesmerizingly sharp content, sophisticated projection mapping, seamless mobile integration, and innovations like an augmented reality skybox that gives fans control of stats and replays, projecting them right onto the glass inside stadium suites – all without obstructing their view of the field. From racing through Radiator Springs at Disney California Adventure Theme Park to embarking on a frozen voyage through Arendelle in the Frozen Ever After attraction at Orlando’s Epcot, Panasonic’s technology has enhanced the experience for millions. Recently Panasonic collaborated with Disney creative teams on an amazing experience inside Pandora – The World of Avatar, at Disney’s Animal Kingdom. Its projection technology helped Disney bring the Na’vi River Journey attraction to life. Guests take a boating expedition down a mysterious river hidden within a bioluminescent rainforest, through a darkened network of caves illuminated by exotic glowing plants and amazing creatures that call Pandora home. The journey culminates in an encounter with a Na’vi Shaman of Songs, who has a deep connection to Pandora’s life force and sends positive energy out into the forest through her music. Disney wanted the two worlds to work seamlessly with one another, and Panasonic’s projection system allowed the attraction to achieve that seamless connection through projection imaging that provided perfect color rendition, precise levels of brightness, and robust systems. Today fans who use Instagram and rideshare as verbs expect the same mobile connectivity and convenience from their ballpark as they do from their Lyft. The Atlanta Braves franchise understands this well, and with help from Panasonic technology welcomes fans way before the opening pitch.
Panasonic technologies at SunTrust Park and its adjacent mixed-use site, the Atlanta Battery, are all digitally connected, with more than 18 LED displays, monitors, projectors, digital signage kiosks, and video security systems – all regulated from one central control room. We just conducted a study of CTOs and senior tech decision makers on how companies are using or want to use disruptive technologies in areas such as retail, sports, media and entertainment. Our new study reveals that four technologies are at the top of their innovation agendas – artificial intelligence, robotics, 3-D printing and energy storage. Four out of five respondents are poised to adopt AI to gain customer insights and predict behavior.
And talk a bit about your solar initiatives.
Panasonic has been a leader in the solar energy space for over 40 years. From electric vehicles to solar grids, Panasonic’s solutions are helping forward-thinking businesses and governments pursue a brighter, more eco-responsible future. To solve the world’s growing energy needs, Panasonic is developing high-efficiency solar panels that make eco more economical, planning entire eco-sustainable communities, using sensor technology to regulate energy usage in offices, and building energy storage systems that allow for more efficient energy consumption. When it comes to solar panel technology, revolutionary materials, and system design have led Panasonic to record-setting efficiencies. Panasonic’s heterojunction (HIT®) technology has been designed with ultra-thin silicon layers that absorb and retain more sunlight, coupled with an ingenious bifacial cell design that captures light from both sides of the panel. By continuously innovating, we’re helping each generation of solar panel make better use of renewable resources and offering the industry greater cost savings.
How do we make sure that the benefits of all these technologies extend to everyone on the planet?
Over the last 100 years, Panasonic has taken pride in creating new and exciting solutions in many different realms. By having expertise in so many strong areas, especially those identified as disruptive technologies, we hope to enhance the lives of as many people as possible.

About Lauren Sallata

Lauren Sallata is Chief Marketing Officer at Panasonic Corporation of North America, the principal North American subsidiary of Panasonic Corporation and the hub of Panasonic’s U.S. branding, marketing, sales, service and R&D operations. She leads the corporation’s digital, brand, content, and advertising efforts, as well as Corporate Communications.

AI for Humanity, Not Corporations

The majority of AI and machine learning is built to bolster the revenue streams of corporations rather than in the interest of human welfare.

That scares me.

The source of this fear comes from three compounding causes:

  1. Lack of effective government investment in AI
  2. Venture backed investors favoring business to business (B2B) startups over business to consumer (B2C)
  3. Corporations primarily driven by profit, not social utility

Our society lacks a singular force with deep pockets that consistently values social good over revenue. The closest should be government, but it’s sluggish at best and apathetic at worst when it comes to technology adoption that benefits society. Furthermore, the United States government is primarily fueled by the pockets of lobbyists who represent the interest of corporations rather than citizens. Occasionally there’s a bipartisan initiative to utilize technology to empower citizens, but even then the implementation of such initiatives waxes and wanes depending on election and budget cycles.
The bottom line is that government is generally slower at technology adoption than corporations, which are more efficient at utilizing new technology under the pressure of competition. Case in point, Amazon won’t stop recommending a shiny KitchenAid blender after I asked Alexa for a cake recipe.
Another major factor is how startups are created and nurtured in today’s market. For the past decade, 75% of venture investment dollars across all fundraising stages have gone to B2B startups [MoneyTree]. The benefits of serving B2B over B2C are concrete for burgeoning companies, as businesses have well-defined needs and are willing to pay for services that benefit their bottom line. People, on the other hand, have more diversified needs and are used to getting software services for free. Many B2C startups pivot to B2B after failing to gain user traction or seed investment. This has been particularly true in HR tech, where we’ve seen a handful of machine learning and AI powered startups pivot their target market from jobseekers to businesses.
Lastly, mature companies’ interests are misaligned with humanity’s. Social utility always comes second to profit maximization, which is no fault of the corporation – they are, by design, accountable to their shareholders. An optimist may argue that companies only build products or deliver services that bear value to consumers – that is, that it’s not a zero-sum game between companies and consumers. That doesn’t mean consumers will end up winning, either. Did I really need that KitchenAid blender? Or was it brilliant machine-learning-backed marketing on Amazon’s part?
These compounding factors may seem bleak for humanity without a drastic change in the structural fabric of our current society. On the other hand, mingled with that fear of futuristic AI and malevolent machines, is a healthy dose of hope. Conditions aren’t optimal, but there is room for successful products with high social utility driven by human need.
For one, there are many ‘AI for Good’ initiatives. Some are backed by wealthy philanthropists and corporations with a conscience, and others are powered by governments around the world. These nonprofits and NGOs demonstrate a strong desire and growing recognition that society needs to utilize AI for the greater good.
Second, the technical barrier and the time needed to implement machine learning have drastically decreased with the availability of Machine Learning as a Service (MLaaS) platforms. Google Cloud Predict API, Microsoft Azure Machine Learning, AWS SageMaker, and Anaconda are major players in lowering the cost for anyone to dabble in basic machine learning.
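To illustrate just how low that barrier has become, even without any cloud platform a basic learner fits in a few lines of plain Python. The nearest-neighbor sketch below and its data are hypothetical stand-ins for illustration, not any vendor’s API:

```python
def nearest_neighbor_predict(train, query):
    """Classify `query` by the label of the closest training point (1-NN)."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical data: (features, label) pairs, e.g. (hours_online, purchases).
train = [((1.0, 0.0), "casual"), ((8.0, 5.0), "power_user")]
print(nearest_neighbor_predict(train, (7.5, 4.0)))  # -> power_user
```

MLaaS platforms wrap far more sophisticated versions of this idea behind hosted APIs, but the point stands: the once-high cost of “dabbling” in machine learning is now close to zero.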
Third, consumers do have a choice and a voice in determining the trajectory of technology creation. The amalgamation of our pocketbooks has the power to hold corporations accountable for social irresponsibility as well as reward them for good deeds. In fact, this shift towards socially responsible purchases is already happening: a recent Berkeley study found that “more than 9-in-10 millennials would switch brands to one associated with a cause.”
Regardless, even given these positive forces pushing AI for good, we’re still missing a major piece to tip the scale in favor of AI for humanity rather than corporations. There’s a stark absence of compelling incentives to encourage, incubate, and sustain machine learning startups driven by social good.
To fill this void, we can consider a few ideas. One is for philanthropic grants to subsidize sustainable machine learning startups. Another is for government to provide continual incentives (financial and regulatory) to not only help social impact AI startups germinate but also foster long-term viability. For some of the toughest social issues that can benefit from machine learning, federal or local government can create a bid system for companies (or individuals) to crowdsource an intelligent solution for a reward. Taking it a step further, government could set social responsibility benchmarks for corporations, particularly concerning responsible use of nascent technology in this age of nebulous personal data rights.
I’m the CTO of Jobscan, a service that uses machine learning to help jobseekers with their resumes and professional profiles. Our vision is to create a service that will eliminate the stress of unemployment – where one day, our AI will deliver your perfect job offer, logistically, culturally, and professionally, with zero downtime. We’re lucky and grateful for where we are, but the path forward to continually invest in AI won’t be easy. Without a refactor of our current corporate constructs to incentivize social good over profit, it’s an uphill battle for any company that chooses social good to survive or thrive.

About the Author

Sophia Cui has held developer, product manager, CTO, and founder roles at companies from startups to major corporations such as Zynga and Microsoft. She has over 14 years of experience in software engineering and architecture for scalable, available software deployed across consumer web and cloud enterprise spaces. Sophia has also consulted for half a dozen web-tech startups offering product and technical expertise. Sophia is currently the CTO of Jobscan, a machine learning powered service that empowers job seekers to land their perfect job.


5 Common Misconceptions about AI

In recent years I have run into a number of misconceptions regarding AI, and sometimes when discussing AI with people from outside the field, I feel like we are talking about two different topics. This article is an attempt at clarifying what AI practitioners mean by AI, and where the field stands today.
The first misconception has to do with Artificial General Intelligence, or AGI:

  1. Applied AI systems are just limited versions of AGI

Despite what many think, the state of the art in AI is still far behind human intelligence. Artificial General Intelligence, i.e. AGI, has been the motivating fuel for all AI scientists from Turing to today. Somewhat analogous to alchemy, the eternal quest for AGI that replicates and exceeds human intelligence has resulted in the creation of many techniques and scientific breakthroughs. AGI has helped us understand facets of human and natural intelligence, and as a result, we’ve built effective algorithms inspired by our understanding and models of them.
However, when it comes to practical applications of AI, AI practitioners do not necessarily restrict themselves to pure models of human decision making, learning, and problem solving. Rather, in the interest of solving the problem and achieving acceptable performance, AI practitioners often do what it takes to build practical systems. At the heart of the algorithmic breakthroughs that resulted in Deep Learning systems, for instance, is a technique called back-propagation. This technique, however, is not how the brain builds models of the world. This brings us to the next misconception:

  2. There is a one-size-fits-all AI solution.

A common misconception is that AI can be used to solve every problem out there–i.e., that the state of the art in AI has reached a level such that minor configurations of ‘the AI’ allow us to tackle different problems. I’ve even heard people assume that moving from one problem to the next makes the AI system smarter, as if the same AI system is now solving both problems at the same time. The reality is much different: AI systems need to be engineered, sometimes heavily, and require specifically trained models in order to be applied to a problem. And while similar tasks, especially those involving sensing the world (e.g., speech recognition, image or video processing), now have a library of available reference models, these models need to be specifically engineered to meet deployment requirements and may not be useful out of the box. Furthermore, AI systems are seldom the only component of AI-based solutions. It often takes many tailor-made, classically programmed components coming together to augment one or more AI techniques used within a system. And yes, there are a multitude of different AI techniques out there, used alone or in hybrid solutions in conjunction with others, so it is incorrect to say:

  3. AI is the same as Deep Learning

Back in the day, we thought the term artificial neural networks (ANNs) was really cool. Until, that is, the initial euphoria around its potential backfired due to its inability to scale and its propensity to over-fit. Now that those problems have, for the most part, been resolved, we’ve avoided the stigma of the old name by “rebranding” artificial neural networks as “Deep Learning”. Deep Learning or Deep Networks are ANNs at scale, and the ‘deep’ refers not to deep thinking, but to the number of hidden layers we can now afford within our ANNs (previously it was a handful at most, and now they can be in the hundreds). Deep Learning is used to generate models off of labeled data sets. The ‘learning’ in Deep Learning methods refers to the generation of the models, not to the models being able to learn in real time as new data becomes available. The ‘learning’ phase of Deep Learning models actually happens offline, needs many iterations, is time and process intensive, and is difficult to parallelize.
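The terminology above can be made concrete with a minimal sketch (the layer sizes and random weights here are purely illustrative, not from any production system): ‘deep’ refers only to the number of hidden layers, and running an already-generated model involves no learning at all.

```python
import math
import random

def make_network(layer_sizes, seed=0):
    """Build random weight matrices; 'deep' just means many hidden layers."""
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        layers.append([[rng.uniform(-1, 1) for _ in range(n_in)]
                       for _ in range(n_out)])
    return layers

def forward(layers, x):
    """Inference through a fixed model: no 'learning' happens here."""
    for weights in layers:
        x = [math.tanh(sum(w_i * x_i for w_i, x_i in zip(row, x)))
             for row in weights]
    return x

# A 'deep' network: 4 inputs, three hidden layers of 16 units, 2 outputs.
net = make_network([4, 16, 16, 16, 2])
print(len(net) - 1)  # number of hidden layers -> 3
```

Real Deep Learning systems differ in scale and in how the weights are fit (back-propagation over large labeled datasets, offline), but the structural point is the same: depth is a count of layers, and the trained model itself is static at inference time.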
Recently, Deep Learning models have been used in online learning applications. The online learning in such systems is achieved using different AI techniques such as Reinforcement Learning, or online Neuro-evolution. A limitation of such systems is the fact that the contribution from the Deep Learning model can only be achieved if the domain of use can be mostly experienced during the off-line learning period. Once the model is generated, it remains static and not entirely robust to changes in the application domain. A good example of this is in ecommerce applications–seasonal changes or short sales periods on ecommerce websites would require a deep learning model to be taken offline and retrained on sale items or new stock. However, now with platforms like Sentient Ascend that use evolutionary algorithms to power website optimization, large amounts of historical data are no longer needed to be effective; instead, neuro-evolution shifts and adjusts the website in real time based on the site’s current environment.
For the most part, though, Deep Learning systems are fueled by large data sets, and so the prospect of new and useful models being generated from large and unique datasets has fueled the misconception that…

  4. It’s all about BIG data

It’s not. It’s actually about good data. Large, imbalanced datasets can be deceptive, especially if they only partially capture the data most relevant to the domain. Furthermore, in many domains, historical data can become irrelevant quickly. In high-frequency trading on the New York Stock Exchange, for instance, recent data is of much more relevance and value than, for example, data from before 2001, when the exchange had not yet adopted decimalization.
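The imbalance point can be made concrete with a toy example (the dataset and numbers below are invented for illustration): on a skewed dataset, a “model” that has learned nothing still posts an impressive accuracy score.

```python
# Toy illustration: a 95%-imbalanced dataset rewards a model that learned nothing.
labels = [0] * 950 + [1] * 50  # e.g., 950 routine events, 50 rare anomalies

def majority_classifier(_example):
    return 0  # always predict the majority class

predictions = [majority_classifier(x) for x in labels]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.95 -- high accuracy, yet zero anomalies detected
```

This is why “good data” (balanced, relevant, current) matters more than sheer volume: accuracy on a big but skewed dataset can hide the failure that matters most.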
Finally, a general misconception I run into quite often:

  5. If a system solves a problem that we think requires intelligence, that means it is using AI

This one is a bit philosophical in nature, and it does depend on your definition of intelligence. Indeed, Turing’s definition would not refute this. However, as far as mainstream AI is concerned, a fully engineered system, say to enable self-driving cars, which does not use any AI techniques, is not considered an AI system. If the behavior of the system is not the result of the emergent behavior of AI techniques used under the hood, if programmers write the code from start to finish, in a deterministic and engineered fashion, then the system is not considered an AI-based system, even if it seems so.
AI paves the way for a better future
Despite the common misconceptions around AI, the one correct assumption is that AI is here to stay and is, indeed, the window to the future. AI still has a long way to go before it can be used to solve every problem out there and be industrialized for wide-scale use. Deep Learning models, for instance, take many expert PhD-hours to design effectively, often requiring elaborately engineered parameter settings and architectural choices depending on the use case. Currently, AI scientists are hard at work on simplifying this task and are even using other AI techniques such as reinforcement learning and population-based or evolutionary architecture search to reduce this effort. The next big step for AI is to make it creative and adaptive, while at the same time powerful enough to exceed human capacity to build models.
by Babak Hodjat, co-founder & CEO Sentient Technologies

Use of Robots in War

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
Advancements in technology have always increased the destructive power of war. The development of AI will be no different. In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of robots for warfare.

Most of the public discourse about automation relates to employment, which is why we spent so much time examining it. A second area where substantial debate happens is around the use of robots in war.
Technology has changed the face of warfare dozens of times in the past few thousand years. Metallurgy, the horse, the chariot, gunpowder, the stirrup, artillery, planes, atomic weapons, and computers each had a major impact on how we slaughter each other. Robots and AI will change it again.
Should we build weapons that can make autonomous kill decisions based on factors programmed in the robots? Proponents maintain that the robots may reduce the number of civilian deaths, since the robots will follow protocols exactly. In a split second, a soldier, subject to fatigue or fear, can make a literally fatal mistake. To a robot, however, a split second is all it ever needs.
This may well be true, but this is not the primary motivation of the militaries of the world to adopt robots with AI. There are three reasons these weapons are compelling to them. First, they will be more effective at their missions than human soldiers. Second, there is a fear that potential adversaries are developing these technologies. And third, they will reduce the human casualties of the militaries that deploy them. The last one has a chilling side effect: it could make warfare more common by lowering the political costs of it.
The central issue, at present, is whether or not a machine should be allowed to independently decide whom to kill and whom to spare. I am not being overly dramatic when I say the decision at hand is whether or not we should build killer robots. There is no “can we” involved. No one doubts that we can. The question is, “Should we?”
Many of those in AI research not working with the military believe we should not. Over a thousand scientists signed an open letter urging a ban on fully autonomous weapon systems. Stephen Hawking, who also lent his name and prestige to the letter, wrote an editorial in 2014 suggesting that these weapons might end up destroying the species through an AI arms race.
Although there appears to be a lively debate on whether to build these systems, it seems somewhat disingenuous. Should robots be allowed to make a kill decision? Well, in a sense, they have been for over a century. Humans were perfectly willing to plant millions of land mines that blew the legs off a soldier or a child with equal effectiveness. These weapons had a rudimentary form of AI: if something weighed more than fifty pounds, they detonated. If a company had marketed a mine that could tell the difference between a child and a soldier, perhaps by weight or length of stride, it would be used because of its increased effectiveness. And that would be better, right? If a newer model could sniff for gunpowder before blowing up, it would be used as well for the same reason. Pretty soon you work your way up to a robot making a kill decision with no human involved. True, at present, land mines are banned by treaty, but their widespread usage for such a long period suggests we are comfortable with a fair amount of collateral damage in our weapon systems. Drone warfare, missiles, and bombs are all similarly imprecise. They are each a type of killer robot. It is unlikely we would turn down more discriminating killing machines. I am eager to be proved wrong on this point, however. Professor Mark Gubrud, a physicist and an adjunct professor in the Curriculum in Peace, War, and Defense at the University of North Carolina, says that with regard to autonomous weapons, the United States has “a policy that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons.”
And yet, the threats that these weapon systems would be built to counter are real. In 2014, the United Nations held a meeting on what it calls “Lethal Autonomous Weapons Systems.” The report that came out of that meeting maintains that these weapons are also being sought by terrorists, who will likely get their hands on them. Additionally, there is no shortage of weapon systems currently in development around the world that utilize AI to varying degrees. Russia is developing a robot that can detect and shoot a human from four miles away using a combination of radar, thermal imaging, and video cameras. A South Korean company is already selling a $40,000,000 automatic turret which, in accordance with international law, shouts out a “turn around and leave or we will shoot” message to any potential target within two miles. It requires a human to okay the kill decision, but this was a feature added only due to customer demand. Virtually every country on the planet with a sizable military budget, probably about two dozen nations in all, is working on developing AI-powered weapons.
How would you prohibit such weapons even if there were a collective will to do so? Part of the reason nuclear weapons were able to be contained is because they are straightforward. An explosion was either caused by a nuclear device or not. There is no gray area. Robots with AI, on the other hand, are as gray as gray gets. How much AI would need to be present before the weapon is deemed to be illegal? The difference between a land mine and the Terminator is only a matter of degree.
GPS technology was designed with built-in limits. It won’t work on an object traveling faster than 1,200 miles per hour or higher than 60,000 feet. This is to keep it from being used to guide missiles. But software is almost impossible to contain. So the AI to power a weapons system will probably be widely available. The hardware for these systems is expensive compared with rudimentary terrorist weapons, but trivially inexpensive compared with larger conventional weapon systems.
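The speed and altitude limits described above amount to a simple disable rule. The sketch below is illustrative only; actual receivers vary in whether they disable when either threshold is exceeded or only when both are:

```python
MAX_SPEED_MPH = 1200     # limit quoted in the text
MAX_ALTITUDE_FT = 60000

def gps_enabled(speed_mph, altitude_ft):
    """Illustrative check: disable positioning when either limit is exceeded."""
    return speed_mph <= MAX_SPEED_MPH and altitude_ft <= MAX_ALTITUDE_FT

print(gps_enabled(500, 35000))    # airliner-like conditions: True
print(gps_enabled(2500, 80000))   # missile-like trajectory: False
```

The contrast with software is the point: a hardware-enforced rule like this can be baked into receivers, but an AI weapons capability that exists purely as software admits no such built-in limit.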
Given all this, I suspect that attempts to ban these weapons will not work. Even if the robot is programmed to identify a target and then to get approval from a human to destroy it, the approval step can obviously be turned off with the flip of a switch, which, eventually, would undoubtedly happen.
An AI robot may be perceived as such a compelling threat to national security that several countries will feel that they cannot risk not having them. During the Cold War, the United States was frequently worried about perceived or possible gaps in military ability with potentially belligerent countries. The bomber gap of the 1950s and the missile gap of the 1960s come to mind. An AI gap is even more fearsome for those whose job it is to worry about the plans of those who mean the world harm.

To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Would Conscious Computers Have Rights?

The following is an excerpt from The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
If a computer is sentient, then it can feel pain. If it is conscious, then it is self-aware. Just as we have human rights and animal rights, as we explore building conscious computers, must we also consider the concept of robot rights? In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of conscious computers.

A conscious computer would be, by virtually any definition, alive. It is hard to imagine something that is conscious but not living. I can’t conceive that we could consider a blade of grass to be living, and still classify an entity that is self-aware and self-conscious as nonliving. The only exception would be a definition of life that required it to be organic, but this would be somewhat arbitrary in that it has nothing to do with the thing’s innate characteristics, merely with its composition.
Of course, we might have difficulty relating to this alien life-form. A machine’s consciousness may be so ethereal as to just be a vague awareness that occasionally emerges for a second. Or it could be intense, operating at such speed that it is unfathomable to us. What if by accessing the Internet and all the devices attached to it, the conscious machine experiences everything constantly? Just imagine if it saw through every camera, all at once, and perceived the whole of our existence. How could we even relate to such an entity, or it to us? Or if it could relate to us, would it see us as fellow machines? If so, it follows that it may not have any more moral qualm about turning us off than we have about scrapping an old laptop. Or, it might look on us with horror as we scrap our old laptops.
Would this new life-form have rights? Well, that is a complicated question that hinges on where you think rights come from. Let’s consider that.
Nietzsche is always a good place to start. He believed you have only the rights you can take. We claim the rights we have because we can enforce them. Cows cannot be said to have the right to life because, well, humans eat them. Computers would have the rights they could seize, and they may be able to seize all they want. It may not be us deciding to give them rights, but them claiming a set of rights without any input from us.
A second theory of rights is that they are created by consensus. Americans have the right of free speech because we as a nation have collectively decided to grant that right and enforce it. In this view, rights can exist only to the extent that we can enforce them. What rights might we decide to give to computers that are within our ability to enforce? It could be life, liberty, and self-determination. One can easily imagine a computer bill of rights.
Another theory of rights holds that at least some of them are inalienable. They exist whether or not we acknowledge them, because they are based on neither force nor consensus. The American Declaration of Independence says that life, liberty, and the pursuit of happiness are inalienable. Incidentally, inalienable rights are so fundamental that you cannot renounce them. They are inseparable from you. You cannot sell or give someone the right to kill you, because life is an inalienable right. This view of fundamental rights believes that their inalienable character comes from an external source, from God, nature, or that they are somehow fundamental to being human. If this is the case, then we don’t decide whether the computer has rights or not, we discern it. It is up to neither the computer nor us.
The computer rights movement will no doubt mirror the animal rights movement, which has adopted a strategy of incrementalism, a series of small advances towards a larger goal. If this is the case, then there may not be a watershed moment where suddenly computers are acknowledged to have fundamental rights—unless, of course, a conscious computer has the power to demand them.
Would a conscious computer be a moral agent? That is, would it have the capacity to know right from wrong, and therefore be held accountable for its actions? This question is difficult, because one can conceive of a self-aware entity that does not understand our concept of morality. We don’t believe that the dog that goes wild and starts biting everyone is acting immorally, because the dog is not a moral agent. Yet we might still put the dog down. A conscious computer doing something we regard as immoral is a difficult concept to start with, and one wonders if we would unplug or attempt to rehabilitate the conscious computer if it engages in moral turpitude. If the conscious computer is a moral agent, then we will begin changing the vocabulary we use when describing machines. Suddenly, they can be noble, coarse, enlightened, virtuous, spiritual, depraved, or evil.
Would a conscious machine be considered by some to have a soul? Certainly. Animals are thought to have souls, as are trees by some.
In all of this, it is likely that we will not have a collective consensus as a species on many of these issues, or if we do, it will be a long time in coming, far longer than it will take to create the technology itself. Which finally brings us to the question “can computers become conscious?”

To read more of The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Interview with Jay Iorio

Jay Iorio is a technology strategist for the IEEE Standards Association, specializing in the emerging technologies of virtual worlds and 3D interfaces. In addition to being a machinimatographer, Iorio manages IEEE Island in Second Life and has done extensive building and environment creation in Second Life and OpenSimulator.
What follows is an interview between Jay Iorio and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence and virtual and augmented reality.

Byron Reese: Synthetic reality, is that a term that you use internally and is that something we’re going to hear more about as a class or concept? Or is that just useful in your line of work?
Jay Iorio: That’s sort of a term that I use internally in my own mind; it doesn’t really come from anywhere. I’m trying to think of a term that includes all of the illusory technologies: virtual reality, augmented reality, everything along the Milgram spectrum, and the technologies that contribute to them. So that it doesn’t just become a playback mechanism; that in fact it becomes a part of the interaction with the physical space and with other people and so forth.
So I would say that specifically what I’m talking about is AR (augmented reality) in the context of a sensor network, in the context of what we’re calling the internet of things (IoT), so that the street becomes aware; it becomes aware that you’re there. It knows your history, knows what you bought. It knows, because of biometric devices for example, your blood sugar. It’s monitoring your gait, and it’s inferring a lot from the data that it’s picking up from you in real time. Integrating that with the physical world, the augmented reality becomes the display for this highly intelligent, adaptive system. You and I could walk down the same street in Austin, for example, and see very different things. Not even getting into… “I don’t like that style of architecture,” so it’s going to occlude that from my vision or change it into mid-century modern or something. But content, the traditional streams that we’re used to now, the electronic streams and so forth, could be integrated into the built environment. So that in a sense it looks like your personalized desktop; it still looks like Fourth Street, but it’s your Fourth Street, and this would be a fairly powerful AI system that was continually feeding you information that it thought you wanted, correcting for it and so forth. It could dim street signage; it could change things. It could do the hospital thing of follow blue for “To Obstetrics.” It could give you guidance, or the more conventional uses for AR, if you can call them conventional.
But I think where it really comes alive is that it starts to anticipate, like a lot of online systems are starting to do today. But I think we’re seeing just the foothills of a mountain range. They’re trying to predict your commercial behavior. They’re trying to predict what you like. They’re trying to learn more about you, and everybody focuses on the possible negatives of that and the invasiveness, but there are also enormous positives to it, and there are ways that we can guide that development. I think the street becomes in a sense a personal valet. The city becomes responsive instead of an inert collection of buildings. It becomes a part of your body, in a sense an extension of your body. If it knows your blood sugar is a certain way, it will dim the lights for the doughnut shop, or you could take it to an extreme where in a sense it becomes an illusion that’s based on reality. But it’s such an enhanced illusion that in a sense it’s almost approaching virtual reality.
So that’s all like kind of science fiction sounding stuff right from where we are today. Where I call my airline and say my frequent flyer number and it doesn’t get it right. What time frame are you talking about to have that experience of the world?
Well, I mean we know it isn’t going to happen on one Monday morning. So we’re already seeing pieces of it the way…
No. But to get that fulfilled vision, where the environment that I am in is all around me, and everything I see and touch and feel is somehow enlivened by this technology.
I think the first step is going to be the mainstreaming of full vision AR.
Let’s start with that step, what does that mean? Full vision AR?
I would say it’s a big step forward from the existing ones. You take the Meta visor, for example, or the HoloLens, something like that. I think that’s probably the latest we’ve got right now, and it’s not bad. But I think there are discoveries in the pipeline that are really reducing it to this, and it could be contact lenses; ultimately it could be implants.
When you say this, you mean your glasses?
My glasses, yes I’m sorry.
That’s all right and you’re talking about the new ones that are coming out from… Are you referring to any specific product?
Well, I know that, I think Intel.
Do you think that’s going to be projected on your eye? You’re going to see it as in the lens or -?
That I don’t know. I’m going to leave that to the engineers you know that I think that it could well be…
So someday you get a pair of glasses or contacts that convey that information to you through a means we don’t have down yet, and you think that that’s going to be the first step, that you’ll have the blank slate as it were?
The first step I think will be to take what we currently do on our smartphones and extend it to that realm. So basically the selling point is that it’s hands-free. It’s full-time, it’s always there. It will get rid of this 2018 gesture. I think that will… the phone is sort of an interim step. It wasn’t intended as an interim step. It wasn’t intended to be used the way it’s being used now. You know this is everybody’s computer at this point and I don’t think anybody thought that 10 years ago.
I wonder if people still do that thing with their thumb and pinky when they’re you know when they’re doing the phone thing because it like doesn’t make any sense. Like when will the banana be displaced as the comedic substitute for the telephone? I guess it would become something, anyway keep going.
It’s true. It’s like the fact that you can’t hang up on it.
I know, I remember in the second Spider-Man movie with Tobey Maguire, there’s a scene where the villain’s talking to somebody and hangs up. And he hears a dial tone, and immediately it was just jarring to me, because you don’t hear when somebody hangs up their cellphone. There is no dial tone.
That’s right yeah.
It’s like they had to have some audio indicator that there was no longer a person on the other end because otherwise you’re like “hello, hello?”
The drama has been removed from the phone.
I know, so keep going. The first step is it takes over what our phones do.
I think so, and I know people who work in AR, and a few artists who actually do public spectacles with AR. And you know the problem is you have to hold your phone up, but the real problem is discovery; you have to know that it’s there in the first place. And I think that AR explodes when you no longer have to discover it, when it’s just there, and then the further step of when it’s feeding you. It isn’t giving us all the same stuff. It knows that you like modern art and so it’s feeding you that; the public art becomes much more harmonious with what you like and so forth.
Do you have shared experiences then anymore?
It’s a good question.
And, is that not an isolating technology? When we go for a walk down the street and I see Art Deco and you see something else?
It is ironic that this ultra-connectivity technology, this web of technologies, is most easily used to do exactly what you’re saying, which is to give us exactly what we want. And that is the ethical issue that I’m most focused on, which is the nature of people: we want what we want. We want to be comfortable. We want certain things; we want to get what we want. The commercial marketplace wants to give it to us, and we live only by those impulses. I think we’ll end up with what you’re talking about, which is a lot of sort of gated communities, a more insular way of looking at life, so that you’re getting everything you like, but you don’t really understand other people. You’re not experiencing the real city as you walk down the street. You’re experiencing an illusion that’s coming largely from your own mind and behavior. So one of the issues I’d really like to address, not necessarily today, it won’t be solved today, but over the long term, is: how do you introduce randomness, serendipity, happy accidents, the kinds of things that, in a very structured world like the one I’m describing, tend to either be filtered out or become predictable?
Presumably the algorithms would be good enough that it says: “I’m going to find what we both have in common and then we can have a shared experience that we both [like], and it may not be your favorite or my favorite, but the music, the song that’s playing is at least something we both like.”
That’s right. I mean, what I’m afraid of losing in that environment is… I live in Los Angeles. I used to live in New York City; I like big cities. I like the craziness of them, I like the fact that every day you’re going to experience something that you never could have predicted and might not have wanted. I’ll hear musicians playing a genre that, if anybody had asked me the day before, I’d have said, “I really don’t like that stuff, it’s not for me.” And then I find myself stopping and listening, and then as a musician, I find myself being influenced by a genre I never… This to me is the beauty of the cities, that you ride in a subway, as unpleasant as it can be, and you’re constantly confronting the full range of humanity, and I think there’s something very humanizing about that. It makes you more open-minded, makes you realize that not everybody believes the way you do. And you even see a primitive version of it on Facebook, for example, where the illusion is created to an extent that the world is much more like you than it really is. You’re confronting the whole fake news idea. But basically the idea is that you’re being presented with content that makes you feel good about what you already believe. And I’m disturbed by that. I think that’s very destructive.
I have a policy that I don’t read any book I agree with. I’m serious because it’s like I spend that time and then I get to the end I’m like yeah, that’s just what I thought. So I literally only read things that…so I’m an optimist about the future, so I only read pessimistic views and so forth. So, let me ask you a question. Let’s say we get some form of AI that is… we won’t even say whether it’s an AGI or whether it’s conscious or anything like that. But, it gets Siri or some equivalent technology. It’s so good that it laughs at your jokes and tells you things and you converse with it and all of that, and you regard it as a friend. Maybe it manifests in a robot that’s vaguely humanoid, I don’t know. And let’s say that those become your best friends, and then you know then you find one that’s your spouse and then you just deal with those all day and you never deal with another person. Because those people never let you down and always like… why is that bad? I mean at human level you say doesn’t sound… but why is that bad? Why not just live that life around people that make you feel good about yourself and tell jokes you like? And you had all the stuff in common, why deal with other people?
Well, we do that to an extent already, and even before any of these electronic tools we found communities and you always want to hang out with people that you know you have a similar worldview, where you get each other’s jokes and so forth. So you’re not constantly arguing about basic assumptions. But there’s a difference I think between in the analog world, knowing that I’m living and hanging out with a community of people, like-minded people. You know, we’re all in the ballpark but being aware that right across that highway are people who don’t share any of our assumptions. And we really look at the world quite differently. So is it a good thing or a bad thing to be aware of them and to have to interact with them? I think, with no evidence, but I think it’s a good thing to interact with people you disagree with.
Well that’s people’s gut reaction, but try, and I heard your caveat with no evidence, but try to justify it.
If you don’t encounter things you don’t like, it’s like a muscle that doesn’t encounter resistance. It never develops. It requires friction, I think, for humans because I think that’s the way we’ve evolved is that we evolved in a very complex diverse society and we have to find our way through that and our identity I think is constructed. We construct our identity based largely on how we see ourselves in the midst of that. So it might not be bad, it might just lead to humans who are less able to handle diverse opinions, new ideas, and inventions. They might be less tolerant of eccentricity, of artists, of people who by nature, inventors and artists, people who break the mold. If you’re so accustomed to the world being exactly as you like, it might be very difficult for you to accept a revolutionary concept or a work of art that’s startling and offensive maybe at first. But you grow by accepting those things and incorporating them into your identity. So I would say that it’s good to throw a lot of stuff at people and let them sort it out.
Say here, you get these two robots to choose from: this one is exactly what you want; this one, however, has body odor and tells offensive jokes that just really offend you at every level, and you really should pick that one.
Well, it’s sort of like the movie Her, for example: this is your perfect companion, and because she was intelligent, she evolved to grow toward him and so forth, just like a human would. I’m not saying necessarily surround yourself with the obnoxious or what you find uncomfortable. On the other hand, don’t surround yourself necessarily with everybody who agrees with you all the time. It leads to an intellectual inflexibility and a cultural inflexibility.
Do you think human evolution has ended now because the strong don’t necessarily survive any better than the weak, and the intelligent don’t necessarily reproduce more or have higher survival rates than the less [intelligent]? Is human evolution over and the only betterment we’re going to have now is through machines?
I don’t think so. I think that humans as organisms continue to evolve. I think that the strongest is not the physically strongest, because any tiger could knock a weightlifter out. I mean, compared to other species, we’re very weak. I would say if you interpret strength for humans as having the characteristics of harmonizing society, cooperativeness, collaboration and so forth, I would see those as the human strengths, and I would see those as very refined evolutions of our temperaments. Human strength is not individual, despite our mythology. Yes, inventors come up with ideas; yes, artists come up with ideas and so forth. And those tend to happen individually, but the real changes tend to happen with a lot of people collaborating, some of whom don’t even know they’re collaborating, but they’re participating in a movement. So I would say that the highest point of human evolution is something like empathy, understanding of people who are very different and so forth; that’s human strength. And I would say that is what allows us to survive, not our physical strength. We don’t really have any physical strength to speak of.
So your contention is that ethical, I mean that empathetic, people with empathy will reproduce more than people without it over the long run?
I don’t think though, there are too many issues with reproduction. I don’t think that will be the case, but the numbers don’t necessarily dictate the influence that has on society.
So let’s get back to our narrative. We have our cellphone [that] has migrated to a hands-free device that we can effortlessly interact with, and you assume that people want to do that based on how they’re willing… it is true that taking the elevator up here I noticed everybody whipped out their phone. It’s like “What am I going to do for the next thirty-four stories of elevator time? I’ve got to pass this time some way.” And so your contention is that there is a latent desire for that because people want to have it on 24/7?
I think so. I think that if I had to come up with one gut justification for this, it would be, and I know this is not visual, but I’m making the gesture of playing with your phone with two thumbs. The fact is, I think it’s an obsession with me: I go into a crowd in an airport or a hotel and I count the people who are using phones and the ones who aren’t, and it’s always over 50% of people who are like this, especially if you count the laptops. So there’s a need, it could be an obsession, it could be… who knows where it’s coming from. But there is definitely a need to look at this thing all day, and who wouldn’t rather strap it to their head and have it be full fidelity and high definition, with overlays that don’t look cartoonish, that actually look like they’re fixed and integrated with the environment and so forth? And be able to do all the things you can do on your phone: you get your mail, your messages, you take photographs and whatever.
I have been to North Korea several times and there is no internet. There is no cell phone reception, there is nothing. And I find that the most isolating aspect of it all… like you know I cuddle up to like the warmth of this thing that’s… it’s almost like, I don’t know, I feel untethered and adrift when I don’t have it. And I wonder did it awaken something in me because I wouldn’t have felt that way when I was younger, because I didn’t have the device? Or did it change me, did it somehow weaken me, that now I need it? Or did it awaken this latent desire to want to be connected to a world of information? What do you think?
I think we might have a lot of latent desires that technology hasn’t given us an avenue for, and this is one of them. When I was a kid, there was no such thing as email, so being without it… so what? You wouldn’t even have been able to explain to me what this phone does. You’d have to explain the internet. You’d have to explain all the protocols; it’s an amazing amount of history that we’ve got in our pockets. So, in the 15th century, would people have been doing this? Yeah, I think they would have. I think it’s human. I think that you’ve got a little device here that is magical. It’s your portal to the world. It’s a computer that you can carry on you. It makes me wonder what other technologies could evolve that show that we have other desires that aren’t being met, or that we could become addicted to. I mean, that’s not the right word, but habituated to; it becomes essential.
Why would you make that distinction between habituation versus addiction?
Well, because I think of addiction as a drug, but it’s really the same thing yeah it is…
Because I have withdrawal symptoms if I’m cut off from it.
That’s true, and in fact we’ve seen in the last year that some of Facebook’s original designers have started to come clean and talk about how it is deliberately designed to be addictive. That’s not surprising in a way; from Facebook’s standpoint you want to keep people using it, and that’s where the information about people comes from and so forth. So it’s understandable, but we have become addicted to something that is actually very useful. I guess that’s my reluctance to use the word addiction. I think of addiction as being to something bad, but you could be addicted to something good too, I suppose.
So we have our device and now we transport into the future, and you said the street is aware and I assume you mean that colloquially not literally the street is not conscious.
The street couldn’t really be conscious, but the sensors and the interaction between the sensors and the databases, a whole web of intelligence I guess you could call it, will create the illusion that, in a sense, the city is responding. That building changed because of something I bought. My health changed, and so that facade looks different, the artwork looks different. It’s something now to make me feel more relaxed because it knows I’m very nervous and it knows that I have a heart condition, or the opposite, or what have you. The city could become your doctor for most things. It’s constantly diagnosing you. It’s looking at your heart rate continuously; an automated vehicle could show up on the sidewalk when you think you’re having indigestion, because it realizes that you’re having a heart attack, and it takes you immediately to the hospital and starts treating you as soon as it comes in contact with you. I mean, the healthcare benefits are just staggering over the next generation.
So you’re an ethicist and you think about the ethics of all of this stuff?
I’m an amateur ethicist.
Fair enough. I don’t know how you go pro… Regardless, tell me some ethical considerations that we may not have thought about, or we had that you want to weigh in on, so what sorts of questions are outstanding?
I’m going to avoid AI by itself because that becomes, well in a way I can’t avoid AI because this whole thing is basically run on machine learning. I would say that the biggest ethical concern I have at this point is that this amazing collection of technologies not be used to de-nature the human experience. Not to make it seem as though life is simpler than it is. There are no people I dislike. There are no people with political views I disagree with. There are no genres of music or movies that I don’t like. I’m not exposed to any of that and it makes me happy. That I find to be a very dangerous thing. It leads to the fabric coming apart I think. So that’s one of my concerns. The commercial motivation of a lot of the AI, the Facebook and Google and so forth, is potentially problematic because there are other values in society that are more conducive to holding the fabric together, appreciating other people’s experiences and points of view and so forth. You know that are not…
Fair enough. So let’s take the first one of those two, that somehow the bubbling… goes to a whole new dimension, where it isn’t just “Here are suggested stories for you,” but people and all experiences contrary to your current preferences are off limits, and you say that pulls the fabric apart because it dissolves community. I don’t have any reason at all to empathize with you because you have absolutely nothing in common with me. Is that how you’re seeing it?
Something like that. Everybody I know disagrees with you, so how could you possibly be right? You know? As opposed to: there are lots of people with a range of points of view and they very idiosyncratically… and sometimes they’re full of contradictions and so forth. And I think to become a full member of the community, you have to sort of appreciate the messiness of people. And a lot of these technologies are naturally inclined, I think, to shave off the messiness and to make it seem like it’s a lot more, you know…
So run both scenarios. Run the worst case and then tell me why that’s not going to happen?
The worst case would be if a system like this were used in a society where there was no tradition of democratic values. I think that’s very dangerous, because then your primary motivation becomes efficiency, and that’s not a very good way to organize society, I don’t think. Society is inherently inefficient, and the freer people are, the less efficient it is. Efficiency is never really the goal of a democratic republic. But an authoritarian state with these technologies could create an extremely obedient population that would govern itself in a sense. They would not need to be censored; they would not need to be told that something was inappropriate, because they would know better. They would behave. And that might lead to industrial efficiency, but it doesn’t lead to human freedom or any kind of society that I think any of us would feel comfortable living in. I think that’s a natural tendency, especially in certain countries where it’s basically a way to enhance authority. That’s one scenario, and it could happen here. That’s a very portable model that doesn’t apply only to China or the Gulf States or other states that might be thinking of it. It could apply to Western Europe; it could apply to North America. The temptation is going to be high to assert authority through a system like this, I think.
On the other hand, it can be incredibly liberating for people, first from a health care standpoint. It basically puts you in your doctor’s hands all the time. You’re constantly being watched, assuming that this is done in a secure fashion that people are comfortable with. Then there’s recreation, entertainment, being exposed to different locations in a physically utterly believable way: travel, education, just one field after another. There’s hardly a field that isn’t revolutionized by this kind of thing, and very positively; it really takes the resources and expands them very openly to people. Everybody becomes empowered in a certain way, but I think that takes guidance in the development of these systems. And those are the kinds of questions I’m trying to raise with software developers, for example, with people working on these technologies. Think of how you can push towards the second scenario instead of the first scenario, and it’s a difficult thing, and it might actually go contrary to some of the commercial needs of developing AI and mixed reality and so forth. So it’s not easy, and there’s no obvious answer. It could go in a lot of different directions.
It’s interesting, because as I sit here I think about it: there’s a whole different mindset that says, “The great thing about these technologies is they let you find your tribe. You are not alone. There are people like you, and these technologies will let you find and have community with those like you, even if they’re spread all over the world. They may be older, they may be this, they may be that, and you will find your place.” But you are describing tribalism in a really kind of dystopian sense. Where would you…?
That’s a really good point. It’s one of the paradoxes of these technologies that they’re very liberatory but they’re potentially restrictive. And the tribal mentality, I mean, that’s a fantastic thing about… well, the Internet itself: the ability to form communities without respect to geography, as you say, or age, or any demographic consideration. That’s fantastic, that’s unprecedented. It’s a matter of degree, I think. You know, I’m heavily involved with people who are interested in the various things I’m interested in and so forth. But just as you try to read books that you disagree with, I try to find people that I disagree with. I try to remember that these tribes are not the world for me, even if I want to make them that. There is an incredibly diverse population out there, and once you wrap your head around that, I think you end up actually dealing with your tribe in a more intelligent way. You know what I mean? The more you see of human diversity the better it is, even when you’re in a group that’s heavily circumscribed by interest or one factor or another. So there are tribal utopias and tribal dystopias. I think it’s almost a sliding scale. But I think what keeps a utopia from becoming a dystopia is realizing that this isn’t the sum total. You don’t become satisfied by living in a world that’s just like you, as tempting as that is.
I wonder, though, if there is such a world. If I’m really into banks shaped like pigs, and I find the Banks Shaped Like Pigs Society and connect with nineteen other people, they’re not going to agree with me about anything else. And so aren’t all bubbles just one- or two-dimensional? People are so rich and multi-dimensional that there’s really no way to completely… I mean, you can isolate yourself from people who have vastly different economic situations than you, who live in abject poverty in another part of the world, but that already happens. So how is this any different from the fact that I live in a neighborhood and everybody in my neighborhood is, to your point, in some way very similar to me? They’ve all chosen to live there and can afford a house of that kind and so forth. But on the other hand, they’re not at all like me. So why are you saying technology says, “Oh no, you’re finding your own clones. And when you find your own clones you’ll completely cut off the rest of the world”?
That’s a good point. I think in the physical world we actually do that; you know, the people in your neighborhood, for example. You have a lot in common, as you say, and you also have a lot you disagree about. But if you’re digitally creating communities, it might be one of those things where you focus on the similarities to the point where you really want a homogeneous community. It gives you more tools to eliminate the pieces you don’t want. I’m not saying that’s necessarily going to happen, but look at Facebook, which is very primitive compared to what we’re talking about. It’s still on a screen. It’s still basically text-based. We think of it as current, but when you’re talking about this stuff, it’s not really. It’s an old-fashioned system in a way, and even that, even with text, which is very abstract, still manages to get people to focus strictly on the things they have in common. It pulls you away. I mean, you know the effects it has on public discussion of politics, for example: people are looking for confirmation. Again, what you said about reading books that you don’t agree with: you’re looking to confirm, and when you confirm, suddenly you’re right. It isn’t just my opinion anymore, and it becomes more difficult to compromise with people. So we see it happening in that world, and yes, within groups on Facebook or in digital groups you’ll find differences. But they tend to get very narrowcast. There’s a worldview, kind of, that is shared by the group. So it makes it easier to craft a group, but that same impulse is going to be there, and maybe one of the solutions is to belong to a lot of different groups, so that they overlap and don’t narrowcast your identity in a sense. Don’t think, “Well, I’m this and this, and therefore these are the only people I deal with.” Because believe me, even among people who are this and this, you’re going to find a lot of people who disagree with you. It’s just that people are complicated.
So anything we can do to encourage that, what would be the word, “heterogenization,” I guess. That sort of throwing surprises in there. Surprises, I think, are good for people, especially intellectual surprises.
One in every 10 of your friends on Facebook should be randomly assigned to you.
You know, I’ve never heard that, but something like that. I mean, often we get that with the relatives.
Yeah, that crazy Uncle Eddie who comes to the cookout… So let’s talk about your second concern, the commercial factors. You’ve alluded to your concern that the incentives, with Facebook, are to make the technology sticky. But I think you probably mean something much more philosophical or broader, or maybe not. Tell me the dystopian narrative of how the forces of free enterprise make a dystopia using these technologies.
Well, it’s another one of those paradoxes: the marketplace that exists is, to a large extent, responsible for these technologies being developed. At the same time, the motivation of the individual companies, take Google and Facebook for example, is to gather data about us and sell it to advertisers. There are other models that would be possible, but that’s the one the marketplace naturally leads to. I mean, if I were running Google I’d be doing the same thing; it’s almost unavoidable. So it’s useful to know what information is being gathered for what purposes, how it’s integrated with other information for what purposes, and so on. I think the commercial motivation is to give people what they want, and it’s very hard to sell castor oil to people. If you say, “Well, you should buy this product, this app; you’re not going to like it but it’s good for you,” nobody’s going to buy that. So there has to be, you know, some built-in incentive. I think really what we have to do is replicate the real world more fully. So thirty years from now, when a virtual environment becomes indistinguishable from the physical world, a lot of these problems might disappear, because you kind of embrace the values of a diverse civilization and you imprint that. I don’t think that’s what the companies are doing right now. I think they’re saying, “Well, we need to gather data, because the accumulation of data is really our business model.” So that’s a fundamental conflict, I think, in a utopian vision of these technologies. I would argue to the corporations that are doing this that ultimately there’s greater profitability and greater adoption and less pushback if you do the right thing, leaving that undefined for the moment. If you do it without exploitation, you get a lot more buying into the system.
You get people who really throw themselves into it, with more security, for example, less hackability. So yeah, there is that, and I’m not picking on the market system, because any governmental system, any economic system is going to bring its own slant to how things are done.
Do you think that life can… because you just said something; I’m still back at “when these systems become indistinguishable from reality.” It seems implicit in that, because machine learning does a very simple thing: it studies the past, assumes a static world in which the future is going to be like the past, looks for patterns in the past, and projects those into the future. Do you think everything about our existence came from that? You know, I look at a Banksy piece of graffiti and I think, “Could a machine learning system have studied anything in the past and produced that?” If not, then not everything can be learned that way. Can a world be built that is therefore indistinguishable from this world?
I think large parts of it can be made indistinguishable. I mean, certainly this environment, this conference downstairs, you know, South by Southwest, could be made virtual, and it could be just as immersive as it is now. The problem comes with invention, with artisans, inventors, creators, people who don’t do what was done yesterday, people who break the pattern. And I’m wondering about a future form of AI that is able to do that; I don’t know how it would. I think a lot of that is biologically rooted. There is an urge in a person to create that’s very hard to replicate, and creation involves doing something that hasn’t been done before, though not completely divorced from reality. It has to be familiar, but it has to break certain rules of the past. Major changes, all of these inventions, really involve a deviation from what happened last week. So that’s the piece, the creative piece, that is still, I think, in the realm of humans.
Let me pose another question to you. This is something I’m mulling over as we speak, and I would love to get your thoughts on it. I often have a narrative that goes like this: if you want to teach a computer to tell the difference between a dog and a cat, you need X million images labeled dog and X million labeled cat, and it can do it. And then I say, you know, the interesting thing is that people can be trained on a sample size of one. If I take that stuffed animal, which you’ve never seen before, and say, okay, find it in these twenty photos. And sometimes it’s upside-down. Sometimes it’s covered in peanut butter. Sometimes it’s underwater, or sometimes it’s frozen in a block of ice. You’re like, “It’s there, it’s there,” and we call that transfer learning. We don’t know how we do it. We don’t know how to teach computers to do it. But then people say “aha,” and here’s the part I want your thoughts on: “You have a lifetime of experience of seeing things that are smeared with substances and perhaps frozen in glass and all of that.” And that seems to be the answer. Then I say, “Ha-ha, you don’t have to show a five-year-old a million cats. You can show a five-year-old three cats and they can pick cats out, and they don’t have a lifetime of experiencing things like cats.” But then they see a Manx, which doesn’t have a tail, and they say, “Oh, it’s a cat without a tail,” like they know that. And that’s a little kid who hasn’t lived a life of absorbing all of this. So, two-part question, as they say. Part one: how do you think that child gets trained on such a small amount of data? And second, could the answer be that it’s the same way birds raised in isolation know how to build a nest, that somehow it is encoded in us in a way we don’t even understand?
My answer to both of them is, I don’t know. And that’s a really interesting speculation, about the birds. Part B, why can children do that? I don’t know. There are certain things that the human brain, the human mind, does that I don’t know how you would code.
Are you saying I don’t know how you would code it or I don’t know if that can be coded?
I don’t know if it can be coded.
Interesting, so you might be one of those people who says general intelligence may not be possible.
I go both ways on that one. I think there are certain things that we do, metaphor, analogy, seeing relationships, intuition, certain very human ways of thinking, I don’t know how much of that can be systematized.
So the counter-argument, the one I hear all the time, is: you are a machine, your brain is a machine, your brain is subject to the laws of physics, it can therefore be modeled in a machine, and therefore a machine can do everything a human can do. I mean, that’s the logic…
Yeah, I have trouble with that. I understand the point of it, but I think it’s reductive. A machine is something that humans create, and we didn’t create this. A machine we understand; this we didn’t. This grew. This evolved. This is full of mysteries and un-examinable pieces. We don’t know why we come up with what we come up with. What motivates an inventor to come up with something? Well, okay, he has an idea, but there’s more than that. Is he proving that high school teacher wrong? Is he showing his dad, “Yes, I can do this”? There are all kinds of personal things that they might not even know they’re motivated by, that require being alive. If there’s no sexuality, if there’s no desire, if there’s no irrationality, how can you be fully human? And if you want general intelligence on that level, do you have to program a simulation of that in there? Does it have to believe that it’s alive? Does it have to believe that it’s mortal? If we lived to 200, how valuable would human life be? Isn’t the preciousness of it that it’s finite, that it’s all too short, that it follows an arc? Does a machine have to have that same physiological basis? How much of this is rooted in our existence as creatures? Does it have to think it is one? Does it have to be really human and alive in order to do the kinds of things we think of as quintessentially human, like write great music or invent smartphones or build cities?
It isn’t just that you know you can do it and you know how to do it; you have to want to do it, and it has to consume your life. Are you willing to do that? Well, why would a machine do that? Where’s this motivation coming from? “I only have five years to live”… you know what I mean, how can a machine know that? “I want to attract a certain person to me.” Does a machine want to do that? It has no need for that, no understanding. So a lot of this stuff is very squishy human stuff that evolved. And I think that if you’re going to get general intelligence, you might have to grow it. Because if you have something that’s alive, it has a sense of self in a way. It has a sense of survival. It knows, in a certain way, that it’s going to die.
Well, interestingly, life is an incredibly low bar. And it’s interesting because life doesn’t have a consensus definition. Death doesn’t have one. Intelligence doesn’t have one. Creativity doesn’t have one. Which either means to me that we don’t know what they are, or that the terms themselves are meaningless; I don’t know which. But life is a really low bar, because the only reason we don’t say computer viruses are alive is simply that they’re non-biological, and right now most definitions require biology. But a virus we generally regard to be alive, a bacterium we do, and yet those don’t have any of the qualities you’re describing. You’re talking about something more than being alive, right? You’re talking about consciousness?
Consciousness, although, well, let’s say consciousness in silicon as opposed to consciousness in some wet petri dish of actually grown tissue, for example. Let’s say you have the same kind of general intelligence imbued in both of those. I think the one that’s alive is going to get you closer to a replication of the physical world that we know.
Do you think humans are unique in our level of consciousness?
On this planet? I think that’s impossible to know. I can’t put myself in the head of a macaque. You know, I don’t know. I suspect that every living creature has a sense of itself, in the sense that…
A tree?
Yeah, a tree can’t move, but it will turn to face the sun; it will respond to the environment. An animal will definitely avoid a threat, fire.
We derive the notion of human rights and enact laws against animal abuse because we feel that they are entities, that they can feel, that they have a self. If you say a tree has that, have you not undermined the basis by which you say humans have human rights?
No, I would say that a plant, and I know this is going to sound arbitrary, is probably in a different category. In fact, I would say a lizard is probably in a different category, you know. I hate to be speciesist, but I think we’re talking about higher mammals, pretty much, as inferred from their behavior: complex social structures and so forth. Trees don’t do that.
Isn’t it fascinating that up until the ‘90s the conventional wisdom among veterinarians was that animals don’t feel pain?
Sure, and they performed open-heart surgery on babies in the ’90s without anesthesia because they said babies can’t feel pain either. And the theory goes that if you take a paramecium and poke it with something, it moves away, and you don’t infer that it has a nervous system and felt that. So they said that’s all the dog that gets cut has. Up until the 1990s, that was the standard belief, that animals didn’t feel pain.
You could, I mean, if you were willing to accept that logic, you could also accept human surgery without anesthesia. There’s no clear line there.
No, I’m not advocating that position…
I know you’re not.
It’s interesting to think about. I think it was a position argued in part from convenience, by people who use animals or raise animals and so forth. Because if they can’t feel pain, then they don’t, you know…
Yeah, then who cares. We know dogs feel pain. Can they create sophisticated societies? No.
I use that very example in a book I have coming out shortly, about the time my dog was running and jumped over a water faucet and tore her leg open. And she yelped and yelped, and I wrote that nobody could convince me my dog did not feel pain. But notice the way I described it: that she seemed to feel pain, because I have no way of knowing. That’s the oldest philosophical question on the books: you don’t know what anybody else feels, or whether they exist, or anything. It’s intractable, and the reason it interests me is that I’m deeply interested in whether computers can become conscious, and more interested in how we would know if they were. So I would like that to be my last question for you: how would you know if a computer was conscious?
If I had to pin it down to one thing?
Well no. The computer says, “I am the world’s first conscious computer.” What do you say to it?
I would say “make me laugh.” You know let’s say do something that’s human and irrational.
Yeah, and then it plays a recording of flatulence…
Okay, but that’s…
But you did it, you did it. It made you laugh.
Well, the description of the machine doing that made me laugh. But if the machine actually did that, I’d say, “That’s not funny.” It has to do something. Write a song. Do something that hasn’t been done before. If you’re just basing it on what happened last week, then I can be tricked into believing it.
So you know they had these programs that write beatnik poetry. You know: “the dog sat on the step, bark, bark, eleven is an odd number indeed.” They write stuff like that, and they say, “Well, nobody’s ever written that poem before,” and you’re like, well, there’s a reason for that. Or they feed Bach into it and use machine learning to make Bach-ish music. You’d think you can’t trick a musician, but the musicians are like, “That’s kind of like Bach.” And so neither of those comes anywhere near passing your bar, I assume, and yet…
They didn’t invent it. That would be as if a robot came up and played like Jimi Hendrix. I’d say that’s pretty good, but if it had come up with that in 1967, that would be a whole different thing.
You know, it’s interesting, because we are recording this on the anniversary of the match between AlphaGo and Lee Sedol. And there was a move, move 37 in game two, that people say was a creative move. It was a move no human would have seen to make. Lee described it that way, and people started talking about AlphaGo’s creativity from that day. Subsequent to that, they have systems that train themselves, with no training on human games. There was one that trained itself to play chess, and what it does are things no chess player would do. In one game it won, it sacrificed a queen and then a bishop on two consecutive moves to secure a position. It hid a queen way back in one corner, and people describe it as alien chess, because it’s the first thing that wasn’t trained on the huge corpus of chess games we have. So is that getting near it?
It’s getting near it, that’s doing what a really creative person does which is to take the basic elements and not impose any of the preconceptions on top of it, sort of look at it fresh.
The question I ask is: is that creativity? Or is that something that looks like creativity? And is there a difference between those two statements? That would be my last question for you.
That’s a hard one to say. You can imitate creativity by creating Bach-like music. Chess, I’m not sure, falls into the same category, or a sophisticated game like Go, because there is a certain set of possibilities, whereas in the arts, for example, or in invention, there really isn’t. I mean, there are physical restrictions, but aside from that it can go anywhere, although it seems like I’m splitting hairs, basically…
These are hard. The challenge with language is that we’ve never had to… we’ve always been able to have a kind of colloquial understanding of all these concepts, because we never had to ask, “Well, how would you know if a computer could think?” How would you know that? The words just aren’t equipped for it, and the language, I think, therefore limits our ability to imagine it. But what a fascinating hour it has been. I could go on for another hour, but I won’t subject you to that. Thank you so much for this.
It’s my pleasure.

Moore’s Law

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
In this excerpt from The Fourth Age, Byron Reese explores the concept of Moore’s Law and how more space, more speed, and more processor power impact advancements in technology.

The scientific method supercharged technological development so much that it revealed an innate but mysterious property of all sorts of technology, a consistent and repeated doubling of its capabilities over fixed periods.
Our discovery of this profound and mysterious property of technology began modestly just half a century ago when Gordon Moore, one of the founders of Intel, noticed something interesting: the number of transistors in an integrated circuit was doubling about every two years. He noticed that this phenomenon had been going on for a while, and he speculated that the trend could continue for another decade. This observation became known as Moore’s law.
Doubling the number of transistors in an integrated circuit doubles the power of the computer. If that were the entire story, it would be of minor interest. But along came Ray Kurzweil, who made an amazing observation: computers have been doubling in power from way before transistors were even invented.
Kurzweil found that if you graph the processing power of computers since 1890, when simple electromechanical devices were used to help with the US census, computers doubled in processing power every other year, regardless of the underlying technology. Think about that: the underlying technology of the computer went from being mechanical, to using relays, then to vacuum tubes, then to transistors, and then to integrated circuits, and all along the way, Moore’s law never hiccupped. How could this be?
Well, the short answer is that no one knows. If you figure it out, tell me and we can split the Nobel money. How could the abstraction, the speed of the device, obey such a rigid law? Not only does no one really know, there aren’t even many ideas. But it appears to be some kind of law of the universe, that it takes a certain amount of technology to get to a place, and then once you have it, you’re able to use that technology to double that again.
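The cumulative effect of that observation is easy to sketch numerically. The snippet below assumes a strict two-year cadence and an arbitrary endpoint year for illustration, both simplifications of what Kurzweil actually measured:

```python
# Cumulative effect of processing power doubling every other year
# since the 1890 census machines (strict two-year cadence assumed).
START_YEAR = 1890
END_YEAR = 2018  # an assumed "now"; pick any recent year

doublings = (END_YEAR - START_YEAR) // 2
growth_factor = 2 ** doublings

print(f"doublings since {START_YEAR}: {doublings}")
print(f"overall growth factor: {growth_factor:.2e}")
```

With these assumed endpoints, that is 64 doublings, a growth factor of roughly 10^19, across five completely different underlying technologies.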
Moore’s law continues to this day, well past the ten years Moore himself guessed it would hold up. And although every few years you see headlines like “Is this the End of Moore’s Law?” as is the case with almost all headlines phrased as a question, the answer is no. There are presently all manner of candidates that promise to keep the law going, from quantum computers to single-atom transistors to entirely new materials.
But—and here is the really interesting part—almost all types of technology, not just computers, seem to obey a Moore’s law of their own. The power of a given technology may not double every two years, but it doubles in something every n years. Anyone who has bought laptops or digital cameras or computer monitors over time has experienced this firsthand. Hard drives can hold more, megapixels keep rising, and screen resolutions increase.
There are even those who maintain that multicellular life behaves this way, doubling in complexity every 376 million years. This intriguing thesis, offered by the geneticists Richard Gordon and Alexei Sharov, posits that multicellular life is about ten billion years old, predating earth itself, implying . . . well, implying all kinds of things, such as that human life must have originated somewhere else in the galaxy, and through one method or another, made its way here.
The fact that technology doubles is a big deal, bigger than one might first suspect. Humans famously underestimate the significance of constant doubling because nothing in our daily lives behaves that way. You don’t wake up with two kids, then four kids, then eight, then sixteen. Our bank balances don’t go from $100 to $200 to $400 to $800, day after day.
To understand just how quickly something that repeatedly doubles gets really big, consider the story of the invention of chess. About a thousand years ago, a mathematician in what is today India is said to have brought his creation to the ruler, and showed him how the game was played. The ruler, quite impressed, asked the mathematician what he wanted for a reward. The mathematician responded that he was a humble man and his needs were few. He simply asked that a single grain of rice be placed on the first square of the chessboard. Then two on the second, four on the third, each square doubling along the way. All he wanted was the rice that would be on the sixty-fourth square.
So how much rice do you think this is? Given my setup to the story you know it will be a big number. But just imagine what that much rice would look like. Would it fill a silo? A warehouse? It is actually more rice than has been cultivated in the entire history of humanity. By the way, when the ruler figured it out, he had the mathematician put to death, so there is another life lesson to be learned here.
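The chessboard arithmetic can be checked directly. The per-grain mass below is an assumed round figure used only to convey scale:

```python
# The rice-on-the-chessboard story: one grain on square 1, doubling
# on each subsequent square.
grains_on_square_64 = 2 ** 63   # square n holds 2**(n-1) grains
total_grains = 2 ** 64 - 1      # sum over all 64 squares

print(f"grains on the 64th square: {grains_on_square_64:,}")
print(f"grains on the whole board: {total_grains:,}")

# Rough mass, assuming ~25 mg per grain (an assumed figure):
total_tonnes = total_grains * 25e-6 / 1000  # kg per grain, then tonnes
print(f"roughly {total_tonnes:.2e} metric tons of rice")
```

Under that assumption the board holds on the order of hundreds of billions of metric tons of rice, which is what makes the mathematician’s "humble" request so lethal.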
Think also of a domino rally, in which you have a row of dominos lined up and you push one and it pushes the next one, and so on. Each domino can push over a domino 50 percent taller than itself. So if you set up thirty-two dominos, each 50 percent bigger than the first, that last domino could knock over the Empire State Building. And that is with a mere 50 percent growth rate, not doubling.
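The domino arithmetic works the same way, just with a 1.5× step instead of 2×. The starting height below is an assumed value for illustration:

```python
# The domino rally: 32 dominoes, each 50% taller than the previous,
# so the last one is 1.5**31 times the height of the first.
FIRST_DOMINO_M = 0.05  # assumed 5 cm starting domino

factor = 1.5 ** 31
last_height_m = FIRST_DOMINO_M * factor

print(f"height multiplier after 31 steps: {factor:,.0f}")
print(f"last domino: about {last_height_m / 1000:.1f} km tall")
```

Even at a mere 50 percent growth rate, thirty-two dominoes turn a 5 cm tile into a tower kilometers tall, far larger than any skyscraper.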
If you think we have seen some pretty amazing technological advances in our day, then fasten your seat belt. With computers, we are on the sixtieth or sixty-first square of our chess board, metaphorically, where doubling is a pretty big deal. If you don’t have the computing power to do something, just wait two years and you will have twice as much. Sure, it took us thousands of years to build the computer on your desk, but in just two more years, we will have built one twice as powerful. Two years after that, twice as powerful again. So while it took us almost five thousand years to get from the abacus to the iPad, twenty-five years from now, we will have something as far ahead of the iPad as it is ahead of the abacus. We can’t even imagine or wrap our heads around what that thing will be.
The combination of the scientific method and Moore’s mysterious law is what has given us the explosion of new technology that is part and parcel of our daily life. It gave us robots, nanotech, the gene editing technology CRISPR-Cas9, space travel, atomic power, and a hundred other wonders. In fact, technology advances at such a rate that we are, for the most part, numb to the wonder of it all. New technology comes with such rapidity that it has become almost mundane. We carry supercomputers in our pockets that let us communicate instantly with almost anyone on the planet. These devices are so ubiquitous that even children have them and they are so inexpensive as to be free with a two-year cellular contract. We have powers that used to be attributed to the gods, such as seeing events as they happen from a great distance. We can change the temperature of the room in which we are sitting with the smallest movement of our fingers. We can fly through the air six miles above the Earth at the speed of sound, so safely that statistically one would have to fly every day for over 100,000 years to get in an accident. And yet somehow we can manage to feel inconvenienced when they run out of the turkey wrap and we have to eat the Cobb salad.

To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

The Case For and Against AGI

The following is an excerpt from Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
Is an artificial general intelligence, or AGI, even possible? Most people working in the field of AI are convinced that an AGI is possible, though they disagree about when it will happen. In this excerpt from The Fourth Age, Byron Reese treats it as an open question and explores whether it is possible.

The Case for AGI
Those who believe we can build an AGI operate from a single core assumption. While granting that no one understands how the brain works, they firmly believe that it is a machine, and therefore our mind must be a machine as well. Thus, ever more powerful computers eventually will duplicate the capabilities of the brain and yield intelligence. As Stephen Hawking explains:
I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence—and exceed it.
As this quote indicates, Hawking would answer our foundational question about the composition of the universe as a monist, and therefore someone who believes that AGI is certainly possible. If nothing happens in the universe outside the laws of physics, then whatever makes us intelligent must obey the laws of physics. And if that is the case, we can eventually build something that does the same thing. He would presumably answer the foundational question of “What are we?” with “machines,” thus again believing that AGI is clearly possible. Can a machine be intelligent? Of course! You are just such a machine.
Consider this thought experiment: What if we built a mechanical neuron that worked exactly like the organic kind? And what if we then duplicated all the other parts of the brain mechanically as well? This isn’t a stretch, given that we can make other artificial organs. Then, if you had a scanner of incredible power, it could make a synthetic copy of your brain right down to the atomic level. How in the world can you argue that it won’t have your intelligence?
The only way, the argument goes, you get away from AGI being possible is by invoking some mystical, magical feature of the brain that we have no proof exists. In fact, we have a mountain of evidence that it doesn’t. Every day we learn more and more about the brain, and not once have the scientists returned and said, “Guess what! We discovered a magical part of the brain that defies all laws of physics, and which therefore requires us to throw out all the science we have based on that physics for the last four hundred years.” No, one by one, the inner workings of the brain are revealed. And yes, the brain is a fantastic organ, but there is nothing magical about it. It is just another device.
Since the beginning of the computer age, people have come up with lists of things that computers will supposedly never be able to do. One by one, computers have done them. And even if there were some magical part of the brain (which there isn’t), there would be no reason to assume that it is the mechanism by which we are intelligent. Even if you proved that this magical part is the secret sauce in our intelligence (which it isn’t), there would be no reason to assume we can’t find another way to achieve intelligence.
Thus, this argument concludes, of course we can build an AGI. Only mystics and spiritualists would say otherwise.
The Case against AGI
Let’s now explore the other side.
A brain, as was noted earlier, contains a hundred billion neurons with a hundred trillion connections among them. But just as music is the space between the notes, you exist not in those neurons, but in the space between them. Somehow, your intelligence emerges from these connections.
We don’t know how the mind comes into being, but we do know that computers don’t operate anything at all like a mind, or even a brain for that matter. They simply do what they have been programmed to do. The words they output mean nothing to them. They have no idea if they are talking about coffee beans or cholera. They know nothing, they think nothing, they are as dead as fried chicken.
A computer can do only one simple thing: manipulate abstract symbols in memory. So what is incumbent on the “for” camp is to explain how such a device, no matter how fast it can operate, could, in fact, “think.”
We casually use language about computers as if they are creatures like us. We say things like, “When the computer sees someone repeatedly type in the wrong password, it understands what this means and interprets it as an attempted security breach.”
But the computer does not actually “see” anything. Even with a camera mounted on top, it does not see. It may detect something, just like a lawn system uses a sensor to detect when the lawn is dry. Further, it does not understand anything. It may compute something, but it has no understanding.
We use language that treats computers as alive colloquially, but we should keep in mind it is not really true. It is important now to make the distinction, because with AGI we are talking about machines going from computing something to understanding something.
Joseph Weizenbaum, an early thinker about AI, built a simple computer program in 1966, ELIZA, which was a natural language program that roughly mirrored what a psychologist might say. You make a statement like “I am sad” and ELIZA would ask, “What do you think made you sad?” Then you might say, “I am sad because no one seems to like me.” ELIZA might respond “Why do you think that no one seems to like you?” And so on. This approach will be familiar to anyone who has spent much time with a four-year-old who continually and recursively asks why, why, why to every statement.
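Weizenbaum's original script is not reproduced here, but an ELIZA-style responder in the same spirit can be sketched in a few lines. The patterns, the pronoun-swap table, and the fallback reply below are invented for illustration; they are not Weizenbaum's rules.

```python
import re

def reflect(fragment):
    """Swap first-person words for second-person ones (a tiny, invented table)."""
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

# A few rules in the spirit of ELIZA's psychologist script (invented, not
# Weizenbaum's originals). Order matters: more specific patterns come first.
RULES = [
    (re.compile(r".*because (.+)", re.I), "Why do you think that {0}?"),
    (re.compile(r"i am (.+)", re.I), "What do you think made you {0}?"),
    (re.compile(r"i feel (.+)", re.I), "How long have you felt {0}?"),
]

def eliza(statement):
    """Mirror the user's statement back as a question, or fall back."""
    text = statement.strip().rstrip(".")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(eliza("I am sad"))
print(eliza("I am sad because no one seems to like me."))
```

The point of the sketch is how little is going on: a regular-expression match and a string substitution produce the whole "conversation," which is exactly why Weizenbaum was troubled when people confided in it.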
When Weizenbaum saw that people were actually pouring out their hearts to ELIZA, even though they knew it was a computer program, he turned against it. He said that in effect, when the computer says “I understand,” it tells a lie. There is no “I” and there is no understanding.
His conclusion is not simply linguistic hairsplitting. The entire question of AGI hinges on this point of understanding something. To get at the heart of this argument, consider the thought experiment offered up in 1980 by the American philosopher John Searle. It is called the Chinese room argument. Here it is in broad form:
There is a giant room, sealed off, with one person in it. Let’s call him the Librarian. The Librarian doesn’t know any Chinese. However, the room is filled with thousands of books that allow him to look up any question in Chinese and produce an answer in Chinese.
Someone outside the room, a Chinese speaker, writes a question in Chinese and slides it under the door. The Librarian picks up the piece of paper and retrieves a volume we will call book 1. He finds the first symbol in book 1, and written next to that symbol is the instruction “Look up the next symbol in book 1138.” He looks up the next symbol in book 1138. Next to that symbol he is given the instruction to retrieve book 24,601, and look up the next symbol. This goes on and on. When he finally makes it to a final symbol on the piece of paper, the final book directs him to copy a series of symbols down. He copies the cryptic symbols and passes them under the door. The Chinese speaker outside picks up the paper and reads the answer to his question. He finds the answer to be clever, witty, profound, and insightful. In fact, it is positively brilliant.
Again, the Librarian does not speak any Chinese. He has no idea what the question was or what the answer said. He simply went from book to book as the books directed and copied what they directed him to copy.
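Mechanically, the Librarian's procedure is nothing more than a chain of lookups. A toy sketch makes the point that the process can run to completion without the runner understanding anything; the books, symbols, and reply below are all invented stand-ins for Searle's vastly larger library.

```python
# Toy "Chinese room": the Librarian hops between books exactly as instructed,
# never understanding a single symbol. All entries here are invented.
BOOKS = {
    1:     {"你": ("goto", 1138)},            # book 1: first symbol -> book 1138
    1138:  {"好": ("goto", 24601)},           # book 1138: next symbol -> book 24601
    24601: {"吗": ("reply", "我很好，谢谢")},  # final book: copy these symbols out
}

def librarian(question):
    """Process symbols one by one, moving from book to book as directed."""
    book = 1
    for symbol in question:
        action, value = BOOKS[book][symbol]
        if action == "goto":
            book = value
        else:  # "reply": copy the indicated symbols under the door
            return value
    return ""

print(librarian("你好吗"))  # the Librarian outputs an answer he cannot read
```

Whether this lookup chain, made sufficiently large and fast, would constitute understanding is precisely what the two camps below disagree about.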
Now, here is the question: Does the Librarian understand Chinese?
Searle uses this analogy to show that no matter how complex a computer program is, it is doing nothing more than going from book to book. There is no understanding of any kind. And it is quite hard to imagine how there can be true intelligence without any understanding whatsoever. He states plainly, “In the literal sense, the programmed computer understands what the car and the adding machine understand, namely, exactly nothing.”
Some try to get around the argument by saying that the entire system understands Chinese. While this seems plausible at first, it doesn’t really get us very far. Say the Librarian memorized the contents of every book, and further could come up with the response from these books so quickly that as soon as you could write a question down, he could write the answer. But still, the Librarian has no idea what the characters he is writing mean. He doesn’t know if he is writing about dishwater or doorbells. So again, does the Librarian understand Chinese?
So that is the basic argument against the possibility of AGI. First, computers simply manipulate ones and zeros in memory. No matter how fast you do that, that doesn’t somehow conjure up intelligence. Second, the computer just follows a program that was written for it, just like the Chinese Room. So no matter how impressive it looks, it doesn’t really understand anything. It is just a party trick.
It should be noted that many people in the AI field would most likely scratch their heads at the reasoning of the case against AGI and find it all quite frustrating. They would say that of course the brain is a machine—what else could it be? Sure, computers can only manipulate abstract symbols, but the brain is just a bunch of neurons that send electrical and chemical signals to each other. Who would have guessed that would have given us intelligence? It is true that brains and computers are made of different stuff, but there is no reason to assume they can’t do the same exact things. The only reason, they would say, that we think brains are not machines is because we are uncomfortable thinking we are only machines.
They would also be quick to offer rebuttals of the Chinese room argument. There are several, but the one most pertinent to our purposes is what I call the “quacks like a duck” argument. If it walks like a duck, swims like a duck, and quacks like a duck, I am going to assume it is a duck. It doesn’t really matter if in your opinion there is no understanding, for if you can ask it questions in Chinese and it responds with good answers in Chinese, then it understands Chinese. If the room can act like it understands, then it understands. End of story. This was in fact Turing’s central thesis in his 1950 paper on the question of whether computers can think. He states, “May not machines carry out something which ought to be described as thinking but which is very different from what a human does?” Turing would have seen no problem at all in saying the Chinese room can think. Of course it can. It is obvious. The idea that it can answer questions in Chinese but doesn’t understand Chinese is self-contradictory.

To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.

Interview with Christof Koch

Christof Koch is an American neuroscientist, best known for his work on the neural basis of consciousness. He is the President and Chief Scientific Officer of the Allen Institute for Brain Science in Seattle, and from 1986 to 2013 he was a professor at California Institute of Technology (Caltech). Koch has published extensively, and his most recent book is Consciousness: Confessions of a Romantic Reductionist.
What follows is an interview between Christof Koch and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence, consciousness and the brain.

Byron Reese: So people often say, “We don’t know what consciousness is,” but that’s not really true. We know exactly what it is. The debate is around how it comes about, correct?
Christof Koch: Correct.
So, what is it?
It’s my experience. It’s the feeling of life itself: my pain, my pleasure, my hopes, my aspirations, my fears. All of that is consciousness.
And it’s described as the last major scientific question that we know neither how to ask nor what the answer would look like, but I assume you disagree with that?
I disagree with some of that. It’s one of the two or three big questions: Why is there anything at all? What is the origin of life? And yes, how does consciousness arise out of matter?
And what would that answer look like, because people often point to some part of the brain or some aspect of it and say, “that’s where it comes from,” but how would you put into words why it comes about?
It’s a very good question. So, having the answer to which bits and pieces of the brain are important for consciousness is critical to understanding what happens in the emergency room when you have a patient who is heavily brain-damaged and you have no idea whether she is actually there, whether anybody is home. It’s going to be of immense practical and clinical importance for babies, or for anencephalic babies, or at the end of life, but of course that doesn’t answer the question: What is it about this particular bit and piece of the brain that gives rise to consciousness? And so we need a fundamental theory of consciousness that tells us what type of physical system, whether evolved or artificial, under what conditions, can give rise to feelings, because those feelings aren’t there in our basic descriptions. If you look at the fundamental theories of physics, quantum mechanics and general relativity, there’s no consciousness there. If you look at the periodic table of chemistry, there’s no consciousness there. If you look at the endless ATGC chart of our genes, there’s no consciousness there. Yet every morning we wake up to a world full of sounds and sights and smells and pains and pleasures. So that’s the challenge: how physics ultimately gives rise to conscious sensations.
Or, some might say, whether physics gives rise to it at all?
Well, physics does give rise to it, in the sense that my brain is a piece of the furniture of the universe. It’s subject to the same physical laws as everything else. There isn’t a magical type of law that only applies to brains but doesn’t apply to anything else, so somehow physical systems, or at least a subset of physical systems, give rise to consciousness. The classical answer, at least in the West, was for a very, very long time that there’s a special substance, the thinking substance, res cogitans, or what people today call the soul. Only certain types of systems have it, only humans have the soul, and the soul somehow mediates the mind. But of course we say, sort of logically, that’s not very coherent, and there’s no empirical evidence for it: how would this soul interact with the brain, where is the soul supposed to be, where does it come from, where is it going to? It’s all incoherent, although of course the majority of people still believe in some version of this. But as scientists, as philosophers, we know better. There isn’t any such soul, so it comes back to the question, “What is it about the physics of the world that gives rise to feelings, to sensations, to experience?”
Well, I want to tackle that head on in just a moment, but let’s start with you, because you’ve been dealing with this question for a long time, and it’s fair to say your understanding of it has evolved over time. Can you walk us through the very first time you thought about this, as far back as you remember; what you thought then; the early theories you offered up; and how you have evolved those over time?
Sure. So I grew up in a devout Roman Catholic family, and I was devout, and of course you grow up to believe there’s this soul: the real Christof is sort of this spirit hovering over the waters of my brain, and every now and then that soul touches the waters of my brain and makes me do things. And when I’m thinking about, for instance, whether I should sin or not, I have this absolute freedom to choose one or the other, and then my soul does one thing or the other. But this was on Sundays. During the day and the rest of the week, I did science; I thought about the world in scientific terms. And then you’re left… well, wait a minute: you begin to think about it in more detail, and that just can’t work. Because, most importantly, where is the soul? How does it interact with a brain? And so then you begin to think about scientific solutions. Years later I encountered Francis Crick, the co-discoverer of DNA, and he and I started up this very fruitful collaboration, where we wanted to take the problem of consciousness away from pure philosophy, where it has resided for the past 2,000 years (which is great, you’ve had some of the smartest people of humanity, but they haven’t really advanced the field that much), and turn it into an empirical program that we scientists can work on. And so we came up with this idea of the neural correlates of consciousness. It’s a fairly obvious thing; the idea is that whenever I’m conscious of something, whether I see your face, for instance, or hear your voice, or have a pain or a memory, there must be some mechanism in my brain that’s responsible for that. We know it’s not the heart; we know it’s in the brain. And it’s a two-way relationship between this mechanism and my feelings, in the sense that if I artificially activate this neural correlate of consciousness, abbreviated as NCC…
If I trigger it, for example, by an electrode that I put into the brain, say during brain surgery, I should get that percept: even though there isn’t anybody out there, I still see a face. Or conversely, if this part of the brain gets removed by a stroke or a virus or a bullet or something, I shouldn’t be able to have this percept anymore.
Now, this is a big scientific, empirical program that’s going on in many places throughout the world, where people are trying to look for these neural correlates of consciousness in the brain. But then of course somebody pointed out to me, he asked me a very simple question: “Well, in principle, if your program has run its course, then 50 years later we know exactly that every time you activate these neurons in this particular mode, projecting to this other part of the brain, you become conscious. How is that different from Descartes’ pineal gland?” Because famously, Descartes said the place where the brain meets this spooky stuff, this thinking stuff, is the pineal gland, and today we all laugh at that, right? Well, how is that different from saying, “Well, it’s made up of neurons that oscillate at 40 hertz”? It’s just much more detailed, because ultimately it seems like magic. Why should activity in these neurons give rise to conscious sensation? And at that point I really thought, what we need is a fundamental theory that tells us, independent of any particular mechanism, what it is about a mechanism that can give rise to consciousness. And so here we are, 20 years later.
And so talk about IIT?
So the most promising theory of consciousness, in my personal opinion and in the opinion of many observers of the field, is integrated information theory, due to the Italian-American psychiatrist and neuroscientist Giulio Tononi. And it starts by asking: Well, what is a conscious experience? A conscious experience exists for itself; in other words, it doesn’t depend on anybody else, doesn’t depend on my parents or you or any observer. It just exists for itself. It has particular properties. It’s very definite: either I have a conscious experience or I don’t. It’s one, only one at any given point in time, and it has parts: if I look out at the world, I can see you over here, over there something else, and there’s an above and a below, a close by and a far away, and all those notions of space and other sensory qualities. So then let’s look for a physical mechanism, or first an abstract mathematical formulation of such a mechanism, that instantiates these key properties of consciousness. And so the theory says that ultimately, consciousness is the causal power of a system upon itself.
So let me unpack that a little bit. Well, first let me repeat it. The idea is that consciousness ultimately is the ability of any system, like my brain, to influence its immediate future and to be influenced by its immediate past; it has causal power. Not upon others, which is what physics describes: if I have an electric charge, I have attraction or repulsion of other things. It’s power upon itself. The brain is a very complex system, and its current state influences its next state, and its past state influences its current state. And the claim is that any system that has intrinsic causal power feels like something from the inside. Physics tells us how objects appear from the outside, and this intrinsic cause-effect power tells us what it feels like to be that system from the inside. So physics describes the world from the outside perspective, from the third-person perspective of an observer. Integrated information, cause-effect power, tells me what it is to be a system from the inside. And the theory has this number called phi that tells you how conscious the system is, how much intrinsic cause-effect power it has, how irreducible it is; that’s another way of looking at it. Consciousness is a property of a whole, and how much that whole is truly a whole, how irreducible it is, is quantified by this number phi. If phi is zero, you don’t exist; there’s no consciousness; the system doesn’t exist as a whole. The bigger the phi, the more conscious the system is. And the theory delivers, at least in principle, for any system, whether it’s a brain or a computer chip or a molehill or an ant or anything else, a recipe, an algorithm: you can determine for a particular system, in a particular state, whether it’s conscious and how conscious it is by computing its phi. So that’s where we are today.
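IIT's actual phi calculus is considerably more involved, but the "irreducible whole" idea can be illustrated with a toy computation: for a small two-node network with a uniform prior over states, compare how much the whole system's present state says about its next state with what each part, cut off from the other, says about its own next state. The dynamics and the whole-minus-parts measure below are simplified stand-ins invented for this sketch, not Tononi's definition of phi.

```python
from collections import Counter
from itertools import product
from math import log2

def step(a, b):
    """Toy deterministic dynamics: node A copies B, node B computes A XOR B."""
    return b, a ^ b

def mutual_information(pairs):
    """I(X;Y) in bits, estimated by counting over an equally weighted list of (x, y) pairs."""
    n = len(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# All four possible current states, assumed equally likely.
states = list(product([0, 1], repeat=2))

# How much does the whole system's state now tell us about its next state?
whole = mutual_information([((a, b), step(a, b)) for a, b in states])

# Cut the system in two: how much does each part, alone, tell us about
# its own next state?
part_a = mutual_information([(a, step(a, b)[0]) for a, b in states])
part_b = mutual_information([(b, step(a, b)[1]) for a, b in states])

# Crude "integration": information carried by the whole beyond its parts.
phi_like = whole - (part_a + part_b)
print(whole, part_a, part_b, phi_like)
```

For this particular network the whole is a bijection on its four states (2 bits of predictive information), while each severed part predicts nothing about its own future, so all of the information lives in the interaction between the parts: the system is, in this crude sense, irreducible.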
So it’s a form of panpsychism?
One of the consequences of integrated information theory is that consciousness is much more widespread than we’d like to believe. It is probably present in most of the metazoa, most animals, and it may even be present in very simple systems: a bacterium may feel like something. That’s what the theory says. A single paramecium, for instance, a single protozoan or a single bacterium, is already a very complicated system, vastly more complicated than anything anybody has ever simulated, right? We don’t have a single simulation today, anywhere in the world, of a single cell at the molecular level; it’s way too complex for us to do right now. But the theory says yes, even this simple system feels like a tiny bit…
What about non-biological systems though?
In principle the theory is agnostic. It just talks about causal power, so any system that has causal power upon itself, is in principle, conscious.
So is the sun conscious?
Well, okay, so that’s a very good question. The sun is not conscious, I believe, at the level of the sun as a whole, because consciousness really requires… The theory says that the system has to be integrated and highly differentiated as a whole; the system has to be able to influence itself as a whole. The sun is so big that it’s very difficult to understand how interactions within the sun would extend, at any one time, more than a few millimeters, given the magnetohydrodynamics of the corona, the atmosphere of the sun. For any system you can always ask the question, is it as a whole conscious, as many people have asked in the Western and in the Eastern traditions. The sun is unlikely to be conscious, just like, for example, a sand hill is very unlikely to be conscious, because the individual sand particles only interact with each other over very, very short distances. Two sand particles that are, let’s say, an inch apart don’t interact anymore, or only very, very weakly. Just like, for instance, you and me: you’re conscious, I’m conscious, but there isn’t something right now that feels like being a Byron-Christof, although we do interact, right? We clearly talk to each other. But your brain has a particular amount of integrated information, my brain has a particular amount of integrated information, and there is only a tiny bit of integrated information between us. The theory says the only systems that are conscious are local maxima. Like many physical systems, it has an extremum principle: only a system that is a local maximum of cause-effect power is conscious. The integrated information within my brain is much more tightly integrated, given the massive interconnection within my brain, than the very few bits per second that we exchange, given the speed of verbal communication. So that’s why you’re conscious and I’m conscious, but there isn’t an uber-consciousness, there isn’t a gestalt that consists of you and me.
But do you have a sense, if you were a betting man, that even while you extend this spectrum of consciousness to all of these systems, humans are somehow more conscious than an ant?
Yes, there’s no question…
So what is it about humans, in fact, could you name something that hypothetically could be more conscious than a human?
Yes, in principle you can imagine other physical systems…
No, I mean something in the real world. And what is it about us? Back to this: what’s special about us that gives us supercharged consciousness? Because our brain isn’t that much different from an ape brain…
But it’s bigger.
Right, but only by a few percent.
Well, by a factor of three. But that’s just size… in terms of local interactions, we haven’t done enough microanatomy to be able to see whether a little grain of ape brain is really fundamentally different from a little bit of human brain. Certainly by size…
But then the Beluga whale would be more conscious than us?
Well, that is one of the challenges. If we look at the brains of some mammals that made it back to the sea, their brains are indeed bigger than ours, and it may be, it’s very difficult to know right now, but it may be that in some sense they are more conscious of their environment than we are. But they haven’t developed the ability to talk about it the way we have, so it’s very difficult for us to test that right now. It’s not impossible, though. It’s an important question that you can ultimately test.
It just, it feels like you have a world full of all these objects…
These conscious entities, yes indeed. The universe is partly filled with conscious entities.
But somehow we appear, and I understand your caveat that that might not actually be the case, but we appear to be the most conscious thing.
Well because we are eloquent.
And other animals, by and large, are not nearly as eloquent. My dog, I can communicate with my dog, but only in a limited way. You know, I know the position of his back, how he wags his tail, his ears, etc., but it’s low-grade… and also my dog doesn’t have an abstract representation of Charles Darwin, or evolution, or God, or something like that. So yes, by and large it appears, at least on planet Earth, that it’s not unlikely that we, Homo sapiens, are the most conscious creatures around. We live in a world with other conscious entities. Now, this is not the usual belief. The majority of the planet’s population believes that there are lots of other conscious minds. It’s only really in the West that we have this belief in human exceptionalism, that somehow we are radically different from anything else in nature. It’s not a universal belief.
No, but I guess one would say, if you compare our DNA to an ape, as an example, the amount that’s different is very small.
And of the stuff that’s different, a bunch of it may not manifest itself; it may not do anything. So the amount of code that’s different between us and an ape is trivially small, and yet an ape isn’t 99% as conscious as I am, or at least it doesn’t feel that way to me.
Remember, the code that’s in our DNA is only about 30 MB if you compress it; not a lot. And as you pointed out, it’s more or less the same in an ape; in fact it’s more or less the same in some other mammals. But let’s not confuse the amount of information in the blueprint with the actual information in the final organism as a whole.
I’ve heard an older interview of yours where you were asked if the internet was conscious. And you said, “it may have some amount of consciousness,” would you update that answer?
Well, in the meantime the internet has gotten a whole lot more complex, of course, but I don’t see any behavioral evidence of consciousness. It has a very different architecture; it’s not point-to-point, it has packet switching, so it’s quite different from the way our brain is built, and it’s not easy to actually estimate how conscious it is. Right now I’d probably say it’s not very conscious, based on what I know about it today, but I may be wrong, and it certainly could change in the future. Because if you think about it, in terms of its components, the internet has vastly more transistors: taken as a whole it has 10 billion nodes, and each of those nodes has on the order of 10 to the 11 transistors. So if you look at it as a whole, it’s bigger than a single human brain. But it’s wired up and interconnected in very different ways, and connectivity, this is what integrated information tells us, the way components are wired up, really makes all the difference. If you take the same components but wire them up randomly, or in the wrong ways, you might get very little consciousness. It really matters.
What about the Gaia hypothesis, do you think that the Earth and all of its systems, if they function as a whole, if they are self-regulating to some degree, then it’s influencing itself and so could the Earth as a whole be conscious, and all of its living systems?
Unlikely, for the same reason: integrated information says consciousness is always a local maximum of intrinsic cause-effect power. In fact, this criticism has been made by the American philosopher John Searle. He said, “Well, IIT seems to predict that America is conscious as a whole. There are 310 million Americans, each one of them conscious, at least when they’re not sleeping, etc. So how do you rule out that there isn’t an America as a conscious entity?” Well, the theory has a very simple principle, the local cause-effect maximum: you’re conscious, I’m conscious, but unless we apply some interesting technology, and we can return to that point in a little bit, there isn’t anything it is like to be the two of us. Right now there are four of us in this room, and there isn’t a group consciousness; there isn’t anything it feels like to be the group of the four of us sitting around here, nor is there anything it is like to be America.
So, what would be your criticism of the old Chinese nation problem, which says you take a country like China, one billion plus people, and you give everybody a phone book, and they can call each other and relay messages to each other, and that eventually…
Okay, let’s get to something much more concrete that I find more interesting. Let’s take a technology; let’s call it brain bridging, okay? Let’s say brain bridging allows me, with some future technology, to directly wire up some of my neurons to some of your neurons. So let’s do that in the visual system. Now my visual brain has access to some of what you see; for instance, I now see a ghostly image of what you see superimposed on my usual visual world. Right now you’re looking at me, so I see myself, ghostly reflected. However, the theory says that until the integrated information of the combined system, your brain, my brain, and the brain bridge, exceeds the integrated information within my brain or within your brain, there’s still you, and there’s still me. You are still a conscious entity with your own memory, and I’m still a conscious entity, Christof. Now, I keep on increasing the bandwidth of this brain bridge. At some point the theory makes a very clear prediction: when the integrated information in this new system of two brains exceeds the integrated information in either your brain or my brain, at that point Christof will die, Byron will die, and there will be a new entity, a single entity that consists of you and me. It’ll be a single thing, a single mind that has some of your memories and some of my memories. It’ll have two brains, four hemispheres, four eyes, four ears.
And you know what, the inverse has happened in surgery; it’s called split brain. In a split-brain operation, you take a normal brain (I mean, they’re not normal, they’re not healthy, but for the sake of argument let’s assume it’s a normal brain), and you cut it at the midline, where there are 200 million fibers, the corpus callosum, that link the left brain with the right brain. You cut it, and what’s the empirical evidence? You have two minds inside one skull. So here I’m just saying, well, let’s do the opposite using technology: we build a sort of artificial corpus callosum between your brain and my brain. And so, in principle, there could be a technology that allows us, maybe even in large groups, to merge. We could take all four people here, interconnect us using this brain bridging, and then there would truly be a single mind. Now, that’s a cool prediction. And you could probably start doing that in mice in the next 10 years or so. It’s a very specific prediction of the theory. That’s the advantage: once you go from philosophy to very concrete theories, you can test them, and then you can think about technology to implement and test them.
Think about two lovers, think about Tristan and Isolde, right? In the opera they sing that they don't want to be Tristan and Isolde anymore; they want to be a single entity. So in the act of love-making, and this is the tragedy of our life, you're still always you, and she's always she; no matter how close you are, even though your bodies interpenetrate, you're still you and she's still her. But with this technology you would overcome that; there would be only a single mind. Now I don't know how it would feel, and you might also get all sorts of pathologies, because your brain has always been your brain, and my brain always my brain, and suddenly there's this new thing. You could probably get what you get in split brain, where one body does something different from the other body, these conflicts that you see in split brain after the operation, this so-called "alien hand syndrome." But at least conceptually, this is what the theory predicts.
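The threshold prediction in this exchange, that a merged mind appears only once the joint system's integrated information exceeds that of either brain alone, can be sketched as a toy calculation. Everything numeric here is invented for illustration: the phi values and the saturating bandwidth curve are assumptions, since real phi in IIT is defined over a system's full cause-effect structure and is combinatorially expensive to compute.

```python
import numpy as np

# Assumed, illustrative values: integrated information (phi) of each
# brain alone, in arbitrary units.
phi_a, phi_b = 10.0, 9.0

# Assumed saturating relationship between bridge bandwidth (as a
# fraction of corpus-callosum-scale connectivity) and the joint
# system's phi.
bandwidth = np.linspace(0.0, 1.0, 101)
phi_joint = 18.0 * (1.0 - np.exp(-3.0 * bandwidth))

# IIT's exclusion postulate: only the maximum of integrated
# information is conscious.  The two-brain entity exists exactly
# when its phi exceeds both individual values.
merged = phi_joint > np.maximum(phi_a, phi_b)
crossover = bandwidth[np.argmax(merged)]  # first bandwidth where merging occurs
print(f"merged mind appears at bandwidth ~ {crossover:.2f}")
```

On these assumed numbers the crossover lands a bit above a quarter of full bandwidth; the only point of the sketch is that the theory predicts a sharp, testable threshold rather than a gradual blending of two minds.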
I'll ask you one more hypothetical about whether things are conscious or not: what about plants? How would you apply IIT to a tree?
It's a very good question. I don't know the answer. I've thought a little bit about it. Of course, there are now people who claim that plants, flowers and trees have much more complex information processing going on, at a slower timescale. They clearly didn't evolve to move around; they clearly don't act on the timescale of seconds. It may well be possible that for at least some non-animal organisms like plants, it also feels like something to be them. That's what consciousness is, it feels like something to be you, and we can't rule it out. Now our intuition says, "Well, that's ridiculous," but our intuition also used to say, "The planet can't be round, because people on the antipodes would obviously fall off" (people used that argument for hundreds of years), or "We know whales are fish: they smell like fish, they're in the water, they're not mammals." So we have all sorts of intuitions that science then tells us are actually wrong.
So let's think through the ethical implications of that. People are conscious, and because people are conscious they can feel pain, and because they can feel pain, we deem that they have certain rights. The same with animals, though of course, up until recently, up until the nineties, people didn't necessarily believe animals could feel pain. And so we say, "No, no, you can't abuse animals, because animals can feel pain." Well, according to you, everything can... well, not everything, but almost everything can feel pain. Does that (a) imply everything has some right not to be hurt, that a tree has some right not to be cut down? And (b), does it not undermine the very notion of human rights? Because if we're just another conscious thing among everything else, and whales may be more conscious, and fish may be, and this may be and that may be, then there really isn't anything distinctively wrong with torturing people or what have you, because everything's conscious anyway.
Okay, the first point: I don't know, but having consciousness doesn't automatically imply that you have the capability to feel pain, to experience pain. Maybe all a conscious creature has are pleasure centers; for it, the entire life is just a ride of pleasures, just one orgasm after the other. So in our theory, having consciousness is not the same as having the conscious experience of pain; pain is a subset of conscious experience. Second of all, even as humans we have rights, but then of course very often those rights clash. "Thou shalt not kill." But there's capital punishment, and there's abortion, and then there is homicide, and then there is war, where I can legally kill other people, right? So these rights are always a tradeoff, as are other rights, and the same thing with consciousness, yes. There's no question that certainly all mammals are conscious, right? Birds are conscious, most of the complex fish are conscious, and so one consequence is maybe we shouldn't eat them. So ever since I had this realization, I don't eat the flesh of creatures anymore, for that very reason. Now once again, it's a tradeoff: I'm not going to starve to death if a piece of dead flesh, of steak, is all I could eat to survive. So it is a tradeoff. But given that we have choices, I think we should act on those choices, and yes, if it's true, the moral circle becomes larger. But this has happened over the last 2,000 years. The moral circle, the set of people accorded special privileges, at first only included Greek men, alright, and then we extended it to some other men around the periphery of the Mediterranean, and then we thought about women, and then we thought about African Americans, and Africans, and people who look, at least superficially, very different from us. Right now, as you may well know, there's a movement to accord at least the great apes certain rights, because, yes, they are our cousins, our distant cousins.
And yes we shouldn’t hunt them and eat them for bush meat.
That's maybe addressing a slightly different question than the one I'm asking. I'm saying, if the circle eventually includes everything, then the circle becomes meaningless, right? If it's like, "No, no, you can't eat plants either, and then you can't cut a sheet of paper or..."
No, no, because the theory says, not every object is conscious, most certainly not. A sheet of paper for example, the interactions…
Not a sheet of paper, I shouldn’t have said that one, but you extended it to plants…
A big question is the difference between having one cell that's highly complex and conscious, versus the plant as a whole. That's a question you have to ask: is the oak tree, as a whole, conscious, or are there just conscious bits and pieces of it? That makes a big difference. I assume we don't know; I haven't looked at the structure, I don't know.
Fair enough, but the argument is, if you speed up footage of a plant growing and finding sunlight, it sure looks like animal movement...
Yeah, but movement by itself... we know from patients that when you're sleepwalking you can do all sorts of complex behavior without necessarily being conscious, so it's a complicated question.
You made a really sweeping statement just a second ago, you said, “all mammals are conscious, and birds and fish.” How do you know that, or how do you have a high degree of confidence in that?
Very good question. So, two things have happened historically over the last hundred years. First, we've realized the continuity of all brain structures; we believe it's the brain that gives rise to consciousness, not the heart. If you look at the brains of all mammals... I mean, I've done this at my institute. My institute has 330 people who are experts in the neuroanatomy of the mouse brain and the human brain. I've shown them, one after the other, cells, brain cells, some from a human brain and some from a mouse brain, each one a slide on the screen. I removed the scale bar, because human neurons are roughly 3 times bigger in width than mouse neurons, and for each one I asked, "Tell me, guess: is it human or mouse?" (They had this app on their phone to answer.) People were at chance. Why? Because the individual components are so similar, whether it's a mouse, a dog, a monkey or a human; it all looks the same. We have more of it, but as you point out, a whale has even more of it. So the hardware is very similar. Secondly, behavior, with the exception of speech (though of course not all humans speak: there are people who are mute, there are babies and young children who don't speak, there are people with aphasia who don't speak; but speech, at least in normal human adults, is a difference from other creatures). There are all these other complex behaviors: empathy, lying, higher-order theory of mind. Bees, for example, have been shown to recognize individual beekeepers. Bees have this very complicated way of choosing their hive (think how long it takes you to choose a house): a bee colony sends out these scouts, and they have this very complicated dance to reach an agreement. So we realize there's lots of complex behavior out there in the world.
Thirdly, we've decided, at least scientists and philosophers have, that consciousness is probably not just at the apex of information processing. So it's not just what it used to be, high-level awareness that I know I'm going to die and I can talk about it; consciousness is also those low-level things like seeing, like feeling, like having pain. And those states, the associated behavior, and the associated underlying neural hardware are things we find in many, many other creatures. And therefore today, most people who think about questions of consciousness believe consciousness is much more widespread than we used to think.
Let's talk a little bit about the brain and work that way. So let's start with the nematode worm... 302 neurons in its brain. We've spent 20 years trying to build a model of it, and even the people involved say they don't know if they can do it. Do you think...
Embarrassing isn’t it?
Well, is it? Or is it not beautiful that life is that complex? So my question to you is this: you just chose to say, "Because our neurons look like mouse neurons, ergo, mice are conscious."
No, no, no, it's not quite that. Our brain is very similar to a mouse brain, our behavior is rather similar, and therefore it's much more likely that they also have similar states: not identical, much less complex, but similar states of pain and pleasure and seeing and hearing that I have. I find no reason... there's no objective reason to think otherwise, because otherwise you have to say, "Well, we have something special, but I don't know what that special thing is; I don't find it in the underlying hardware." And this of course is what René Descartes famously did. He said, "When your carriage hits a dog and the dog yells, it's just a machine acting out; there's no conscious sensation." Clearly he wasn't a dog owner, right? We believe... I mean, I don't know a single dog owner who doesn't believe his dog can be happy or excited or sad or depressed or in pain. Well, those are all conscious sensations. Why do we say that? Well, because we interact with them, we live with them, we realize they have very complex behavior that's not so different from ours. They can be jealous, they can be happy, the same way your kids are sometimes jealous of each other, or happy. So we see great similarities, rather than divides, across species. We're all nature's children.
So, back to the nematode worm: our understanding of how 302 neurons (and I think 2 of them float off on their own, so roughly 300 neurons) come together and form complex behavior, such as finding food, finding a mate. I mean, they're the most successful creatures on the planet; 70% of all animals are nematode worms.
They out survive us.
Yeah, so my question to you is, first of all, could a neuron actually be as complicated as a supercomputer? Could it be operating down at the Planck scale, with such incredible nuance that... well, I'll leave the question there. Why has the nematode worm been so intractable so far, why do we not understand better how neurons operate, and could a neuron be as complicated as a supercomputer?
Right, okay, so three very different questions. Let's start with neurons, with any cell. As I mentioned before, right now we do not have a molecular-level model of an entire cell. There's not a single group that has such a model of even one cell, no matter what cell it is: nematode cell, human cell. Some people are trying; the Allen Institute for Cell Science is trying to do that, but we aren't there yet, right? Why? Because we still don't have the raw computational ability, but more important, the knowledge to model all of that. That's just a practical limitation. We're making progress, but it's slow. You're right, it's very humbling for my science, brain science. We do not have a general-purpose model of a creature that has only 1,000 cells, 302 of which are neurons. We're getting there (I mean, we understand many, many things about the nematode), but we're still not there yet, so my science still has a long way to go. So it's difficult; what else is new, research is difficult. Look, per gram or pound, the brain is the most complex organ in the known universe. It's the most complex piece of highly organized matter in the universe, right? And I think that's related to the fact that it's also conscious: because it is so complex, it is also conscious. So yes, it is a challenge to our current methods. We're making progress, but it is, and remains, the biggest challenge we have in science.
It's interesting, though, because the argument I heard earlier was: people used to say there's something special about humans, we don't know what that is, dualism breaks down because of this problem, therefore there isn't anything, so let's look for a purely scientific answer. You come to some theory, and I'm on board with all of that. But then you say, "We look at a cell, we don't understand how the cell works..."
In detail…
Right, and we're fine knowing there are just certain things we don't know about it.
Right now.
But we didn't take that attitude about the specialness of humans. Look, there's something special about us; everybody knows that, everybody knows there's a difference between a person and a paramecium. And we just don't know what it is yet, and we're fine with that for now. But you say, "No, no, we have now concluded there is nothing special about us; let's go figure out an alternate explanation."
Well, it depends what you mean by "special" about us. Clearly there are many things that are special about us. As I said, we're the only ones who are eloquent; I've never had a conversation with my dog, nor with a worm. We have a capability for language that's enabled us to build these cultures and to build everything around us. So there's no question we're special. What you're saying, or what people want to hear, is that we are special in the sense that we somehow stand above the laws of science, that we have something going above and beyond them. Anybody else in the universe has to follow the laws of physics, but somehow humans are exempt; they have this special thing called a soul. We don't know what it is, we don't know how it interacts with the rest of the world, but somehow that's what makes us unique. Sure, I can believe that; it's a great belief, it makes me special. But I don't see any particular evidence for it. No, we are different in all sorts of ways, but we're not different in that way. We are subject to the same laws of physics as any other thing in the universe.
So you mention language. I'm just curious, this is a one-off question: do you think it's interesting that of all the animals that have learned to sign, none has ever asked a question? Does that have any meaning to it?
I don’t know.
Because that would imply perhaps, they’re not conscious, because they can’t conceive that there’s something that knows something that they don’t.
Well, you say this as if it's a fact. So, you're sure that no gorilla has ever asked a question of another gorilla?
Correct, the one potential exception is, Alex the grey parrot may have asked what color he was, maybe. Other than that, no gorilla has ever asked.
I'm not sure I would take that at face value, but even if it's true, let's just say for the sake of argument, yes. We seem to have vastly more self-consciousness than other creatures. Other creatures may have some simple level of self-consciousness: a dog has simple self-consciousness; my dog never smells his own poop, but he always spends a lot of time smelling other dogs' poop, so clearly he can tell the difference between self and somebody else. But yeah, my dog isn't going to sit there and ask questions, because his brain just doesn't have that sort of complexity.
Back to the notion "You and I don't have anything between us that makes us one entity": do you think that a beehive, or an anthill, which exhibits complex behavior in excess of any of its members, has an emergent consciousness as a whole?
So that's a very good question. I don't know. Again, you have to compare the complexity within a bee brain. A bee has roughly one million neurons, and their circuit density is 10 times higher than our circuit density because they evolved to fly, so they are under very tight weight and mass constraints of a sort that we aren't, as terrestrial animals. And nobody's fully reconstructed a bee brain yet, although they're doing it for the fly. So the question is, given the complexity of what's in the bee brain and the communication, the waggle dance they do to communicate, what's the tradeoff there? I mean, it's a purely empirical question that can be asked. Right now my feeling is probably not, but I may well be wrong.
Do you know the wasps that do the shimmering thing? They make this big spinning pinwheel, and they spin so quickly; there's no wasp who says, "Oh, he just flared his wings, therefore it's my turn," and then the next one. So that somehow...?
Look, you have these beautiful, what are they called, murmurations? You can see them on the web, these movies of flocks of birds that execute these incredible flight maneuvers, highly, highly synchronized. Are they one conscious entity? Again, you have to look at the brains and you have to look at the amount of communication among the individual organisms. You can look at North Korean military parades, right? It's amazing, the precision with which you get 100,000 Koreans to do these highly choreographed [maneuvers]. But they're not conscious as a whole, because the information they exchange is much, much lower than the massive information exchange within each brain; once again, you have 200 million fibers just between your left brain and your right brain. But those are all good questions that you could ask, and that have answers, once you have a fundamental theory of consciousness.
So let's go from the brain to the mind. I've looked hard to find a definition of the mind that everybody can kind of agree on, and my working definition will be: it's the set of attributes and abilities that we have that don't seem, at first glance, to be something that mere matter could do. Like, I have a sense of humor; my liver presumably doesn't have a sense of humor, my liver may not be conscious the way my brain is. So where do you think the mind, under that definition, where do you think all these abilities come from? Are they inherently emergent properties? Or are they just things we haven't sorted through yet? Where does a sense of humor come from when no individual cell has a sense of humor?
It's a property of the whole, a property of your brain as a whole; it's not a property of individual cells. We know this is true of many things: take a car, look at its many individual components. They don't drive, they don't do what the car does, but you put all these things together into a whole, and then the whole can do things that the individual parts can't.
Emergence, so do you believe that strong emergence exists? Do you believe you can always derive the behavior from, like if you studied cells long enough, you would say “I understand where a sense of humor comes from now?”
No, for that you need a theory of consciousness, if you're really referring to the conscious mind, because many aspects of the mind are unconscious. I think about the maiden name of my grandmother, and I have no idea how my brain, how my mind, comes up with the name Shaw. I don't know how it works; that's all unconscious. For the conscious mind you need a theory of consciousness, not just a theory of cells, not just the physics of it. You also need to explain how a conscious mind that has a sense of humor (or maybe doesn't, depending on who it is) emerges, because a sense of humor is a property of a conscious mind. Yeah, so it's what you refer to as strong emergence.
And so strong emergence…
But it’s not magical you understand that?
Well, that's a word you've used a few times, and it's because, as you said at the very beginning, there's nothing magic about us. I think people who believe that strong emergence is possible believe it's a scientific process. But a lot of people say, "No, you can't call that scientific: if something takes on properties that none of its components have, and you cannot derive those properties, then you could study those individual components until eternity passes away and never figure out how that comes about."
Yes, you need to solve a problem that Aristotle was one of the first to write about: the parts, the relations among the parts, and the whole. Yes, you need a theory that describes what a whole is, the whole system. Integrated information theory is an example of such a theory; it thinks about parts and how the parts come together to define a whole. Without such a theory, yes, you would be lost, I agree with you, but it's not magical. What I meant was that once you have such a theory, then you can understand step by step. You can predict which systems are wholes and which systems are not. You can predict which system properties are essential for the wholeness and which ones are not. So in that sense, it's a physical theory; it's a lawful set of rules.
Well, how can IIT be disproved?
It can be disproved in a number of ways. The theory says that the neural correlate of consciousness is the maximum of intrinsic cause-effect power. In principle it tells you exactly how to test it, how to measure it. In fact, there was this recent series of articles in neurological journals where people tested one implication of integrated information theory and built a consciousness meter: a simple device where you probe the brain with magnetic pulses while you are asleep or anaesthetized, or in an emergency room or critical care facility where you have people who may be in a vegetative state, or maybe in a minimally conscious state (maybe there's a little bit of consciousness there), or maybe they are conscious but they can't tell you, because they're so grievously injured. From integrated information theory they derived a simple measure called the perturbational complexity index, where you look at the EEG response to these magnetic pulses, and you can tell: this patient is probably unconscious based on the response of his brain, and this person is likely to be conscious. So that's one of the consequences. There are ways you can test it. It is a scientific theory; it may be wrong, but it is a scientific theory.
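The perturbational complexity index mentioned here is, at its core, a normalized Lempel-Ziv complexity of the brain's binarized response to a magnetic pulse. The sketch below follows that idea but simplifies heavily: the thresholding scheme, the normalization, and the synthetic "evoked responses" are assumptions for illustration, not the published clinical pipeline.

```python
import numpy as np

def lz76_complexity(s):
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing of a
    binary string, computed with the classic Kaspar-Schuster scan."""
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    n = len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:  # exhausted all earlier start points: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def pci_sketch(evoked, threshold=2.0):
    """Crude PCI-like index: binarize a (channels x time) evoked response
    at `threshold` (think: units of baseline standard deviation), then
    normalize its Lempel-Ziv complexity by that of a random source with
    the same density of active bits."""
    binary = (np.abs(evoked) > threshold).astype(int)
    p = binary.mean()
    if p in (0.0, 1.0):  # flat response carries no complexity
        return 0.0
    s = ''.join(map(str, binary.flatten()))
    n = len(s)
    source_entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return lz76_complexity(s) * np.log2(n) / (n * source_entropy)

# A stereotyped response (every channel locked to the same wave), as in
# deep sleep or anaesthesia, versus a spatially differentiated response:
# the index separates the two.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
stereotyped = np.tile(3.0 * np.sin(2 * np.pi * 5 * t), (8, 1))
differentiated = rng.normal(0.0, 2.0, size=(8, 200))
print(pci_sketch(stereotyped) < pci_sketch(differentiated))  # stereotyped scores lower
```

The design point mirrors the published idea: a response that is both widespread and differentiated (hard to compress) scores high, while a response that is either local or stereotyped (easy to compress) scores low.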
Did you read about the man in South Africa who was in a coma for some amount of time, then woke up but was still locked in, though completely awake? The thing is, every day at the facility where he was kept, they assumed he wasn't conscious. So they played Barney all day long, and he came to abhor Barney, so much that he used all of his mental energy just to figure out what time it was each day, so he would know when Barney was going to be over. He said even to this day he can look at a shadow on a wall and tell what time it is. So you believe we'll soon be able to put a device on somebody like that and say, "No, he's fully awake; he's fully abhorring Barney as we speak right now."
I just came back from a meeting on emergency room medicine, coma and consciousness that I attended over the last two days, and for two days we heard what the current criteria are, how we can judge these patients. They are very, very difficult patients to treat, because ultimately you're never fully sure, given the state of technology today. But yes, in principle, and it looks like even in practice, at least according to these papers (the latest study tested 211 patients), we might soon have such a consciousness meter. There are several larger-scale clinical trials trying to test this across a large clinical population. There are thousands of these patients worldwide; Terri Schiavo was one of them, where it was controversial because there was this dispute between the parents and her then husband.
So, I'm curious about whether all these things are conscious, for two reasons. One we discussed: it has, as you've said, implications for how you treat them. But the other is that if you don't know whether a tree is conscious, you may not be able to know whether a computer is. If we can't figure out whether something as alien as the sun, or Gaia, or a tree, or a porpoise is conscious, how would we know if a computer was? That's the penultimate question I want to ask: how would you know if a computer was conscious?
Very good question. So first we need to make it perfectly clear, because people always get this wrong: there is artificial intelligence, narrow or broad, and we're slowly getting there, and that is totally separate from the question of artificial consciousness. In other words, you can perfectly well imagine a supercomputer with superhuman intelligence that absolutely feels like nothing. Most of the computers today are of that ilk, and most people will agree with that statement. So we have to dissociate intelligence from consciousness. Historically, until this unique moment in time, we've always lived in a situation where, if you wanted something done (you wanted a ditch dug, you wanted a war fought, you wanted your taxes done), you employed a person, and the person was conscious. But now we are living in a world where you might have things that dig ditches, fight wars and do taxes that are just algorithms. They're not conscious. However, this does of course raise the question: under what conditions can you create artificial feelings? When is your iPhone actually going to feel like something? When is your iPhone actually going to see, as compared to taking a picture and putting a box around a face and saying, "This is mum's face," which it can do today? So once again you need a theory of that. You can't just go by behavior, because there's no question that, in the fullness of time, we will get all the movies and all the TV shows, Westworld, etc.
We're going to live in a world where things behave like us. In 10 or 20 years we will experience a world where Siri talks to you in a voice that you cannot distinguish at all anymore from a human secretary. He or she will have perfect poise, be perfectly calm, laugh at every one of your jokes. So how do we know she's conscious? For that you need a fundamental theory, and this particular fundamental theory, integrated information, says you cannot compute consciousness. Consciousness is not a special property of an algorithm, because your brain isn't an algorithm. Your brain is a physical machine: it has extrinsic cause-effect power, its cognitive powers on the outside (it can talk, it can move things about), and it has intrinsic cause-effect power, and that's what consciousness is. So if you want human-level consciousness, you have to build a machine in the likeness of man. You have to build what's called a neuromorphic computer: a computer whose architecture at the level of the metal, at the level of the gates, mimics the architecture of the brain. And some people are trying to do that.
The Human Brain Project in Europe
For instance, let me give you an example that's very easy for scientists. I have a friend, she's an astrophysicist. She writes down the Einstein equations of general relativity, and she can predict on her laptop that there's a black hole at the center of our galaxy: a big black hole, millions of solar masses, that bends space-time so much that not even light can escape. But funny enough, she doesn't get sucked into the laptop that runs the simulation. Why not? It's simulating all the effects of gravity correctly, yet it doesn't have any effect on its environment. Well, isn't that funny? Why not? Because it doesn't have the causal power of gravity. It can simulate, it can compute, the effects gravity has, but it can't emulate them, can't physically instantiate the cause and effect of gravity; those are not the same thing. Consciousness ultimately is about causal power; it's not about simulation, it's not about computation. And so unless you do that, you can only build a zombie; you will be able to build zombies that claim they're conscious, but they won't feel like anything.
Well that is a great place to leave it. What a fascinating discussion, and I want to thank you for sharing your time.
Thank you very much, Byron. That was most enjoyable, and this is part of the IEEE Tech Fisherman series at South by Southwest.

Are There Infinite Jobs?

The following is an excerpt from Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
When the topic of automation and AI comes up, one of the chief concerns is always technology’s potential impact on jobs. Many fear that with the introduction of wide-scale automation, there will be no more jobs left for humans. But is it really that dire? In this excerpt from The Fourth Age, Byron Reese considers if the addition of automation and AI will really do away with jobs, or if it will open up a world of new jobs for humans.

In 1940, only about 25 percent of women in the United States participated in the workforce. Just forty years later, that percentage was up to 50 percent. In that span of time, thirty-three million women entered the workforce. Where did those jobs come from? Of course, at the beginning of that period, many of these positions were wartime jobs, but women continued to pour into the labor force even after peace broke out. If you had been an economist in 1940 and you were told that thirty-three million women would be out looking for jobs by 1980, wouldn’t you have predicted much higher unemployment and much lower wages, as many more people would be competing for the “same pool of jobs”?
As a thought experiment, imagine that in 1940 General Motors invented a robot with true artificial intelligence and that the company manufactured thirty-three million of them over forty years. Wouldn’t there have been panic in the streets about the robots taking all the jobs?
But of course, unemployment never went up outside the range of the normal economic ebb and flow. So what happened? Were thirty-three million men put out of work by the introduction of this large pool of labor? Did real wages fall as there was a race to the bottom to fight for the available work? No. Employment and wages held steady.
Or imagine that in 2000, a great technological breakthrough happened and a company, Robot Inc., built an amazing AI robot that was as mentally and physically capable as a US worker. On the strength of its breakthrough, Robot Inc. raised venture capital and built ten million of these robots and housed them in a giant robot city in the Midwest. You could hire the robots for a fraction of what it would cost to employ a US worker. Since 2000, all ten million of these robots have been hired by US firms to save costs. Now, what effect would this have on the US economy? Well, we don’t have to speculate, because the setup is identical to the practice of outsourcing jobs to other countries where wages are lower but educational levels are high. Ten million, in fact, is the lowest estimate of the number of jobs relocated offshore since 2000. And yet the unemployment rate in 2000 was 4.1 percent and in 2017 it is 4.9 percent. Real wages didn’t decline over that period. Why didn’t these ten million “robots” tank wages and increase unemployment? Let’s explore that question.
For the past two hundred years, the United States has had more or less full employment. Aside from the Great Depression, unemployment has moved between 3 and 10 percent that entire time. The number hasn’t really trended upward or downward over time. The US unemployment rate in 1850 was 3 percent; in 1900 it was 6.1 percent; and in 1950 it was 5.3 percent.
Now picture a giant scale, one of those old-timey ones that Justice is always depicted holding: on one side of the scale you have all the industries that get eliminated or reduced by technology. The candlemakers, the stable boys, the telegraph operators. On the other side of the scale you have all the new industries. The Web designers, the geneticists, the pet psychologists, the social media managers.
Why don’t those two sides of the scale ever get way out of sync? If the number of jobs available is a thing that ebbs and flows on its own due to technological breakthroughs and offshoring and other independent factors, then why haven’t we ever had periods when there were millions and millions more jobs than there were people to fill them? Or why haven’t we had periods when there were millions and millions fewer jobs than people to fill them? In other words, how does the unemployment rate stay in such a narrow band? When it has moved to either end, it was generally because of macro factors of the economy, not an invention of something that suddenly created or destroyed five million jobs. Shouldn’t the invention of the handheld calculator have put a whole bunch of people out of work? Or the invention of the assembly line, for that matter? Shouldn’t that have capsized the job market?
A simple thought experiment explains why unemployment stays relatively fixed: Let’s say tomorrow there are five big technological breakthroughs, each of which eliminates some jobs and saves you, the consumer, some money. They are:

  1. A new nanotech spray comes to market that only costs a few cents and eliminates ever needing to dry-clean your clothes. This saves the average American household $550 a year. All dry cleaners are put out of business.
  2. A crowdfunded start-up releases a device that plugs into a normal wall outlet and converts food scraps into electricity. “Scraptricity” becomes everyone’s new favorite green energy craze, saving the average family $100 a year off their electric bill. Layoffs in the traditional energy sector soon follow.
  3. A Detroit start-up releases an AI computer controller for automakers that increases the fuel efficiency of cars by 10 percent. This saves the average American family $200 of the $2,000 they spend annually on gas. Job losses occur at gas stations and refineries.
  4. A top-secret start-up releases a smartphone attachment you breathe into. It can tell the difference between colds and flu, as well as between viral and bacterial infections. Plus, it can identify strep throat. Hugely successful, this attachment saves the average American family one doctor visit a year, which, given their co-pay, saves them $75. Job losses occur at walk-in clinics around the country.
  5. Finally, high-quality AA and AAA batteries are released that can recharge themselves by being left in the sun for an hour. Hailed as an ecological breakthrough, the batteries instantly displace the disposable battery market. The average American family saves $75 a year that they would have spent on throwaway batteries. Job losses occur at battery factories around the world.

That is what tech disruption looks like. We have seen thousands of such events happen in just the last few years. We buy fewer DVDs and spend that money on digital streaming. The number of digital cameras we are buying is falling by double digits every year, but we spend that money on smartphones instead. The amount being spent on ads in printed phone directories is falling by $1 billion a year in the United States. Businesses are spending that money elsewhere. We purchase fewer fax machines, newspapers, GPS devices, wristwatches, wall clocks, dictionaries, encyclopedias. When we travel, we spend less on postcards. We buy fewer photo albums and less stationery. We send less mail and write fewer checks. When is the last time you dropped a quarter into a pay phone or dialed directory assistance or paid for a long-distance phone call?
In our hypothetical case above, if you add up what our technological breakthroughs save our hypothetical family, it is $1,000 a year. But in that scenario, what happens to all those dry cleaners, energy workers, gas station operators, nurses, and battery makers? Well, sadly, they lost their jobs and must look for new work. What will fund the new jobs for these folks? Where will the money come from to pay them? Well, what do you think the average American family does with the $1,000 a year they now have? Simple: They spend it. They hire yoga instructors, have new flower beds put in, take up windsurfing, and purchase puppies, causing job growth in all those industries. Think of the power of $1,000 a year multiplied by the hundred million households in the United States. That is $100,000,000,000 (a hundred billion dollars) of new spending in the economy every year. Assuming a $50,000 wage, that is enough money to fund the yearly salaries of two million full-time people, including our newly unemployed dry cleaners and battery makers. Changing careers is a rough transition for them, to be sure, and one that society could collectively do a much better job facilitating, but the story generally ends well for them.
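For readers who want to check the arithmetic, the paragraph above can be sketched as a few lines of Python. All figures are the chapter’s hypothetical round numbers, not real economic data:

```python
# The five hypothetical breakthroughs and their yearly savings per household,
# as given in the numbered list above.
savings = {
    "nanotech spray (no more dry cleaning)": 550,
    "scraptricity device": 100,
    "fuel-efficiency AI controller": 200,
    "diagnostic smartphone attachment": 75,
    "self-recharging batteries": 75,
}

per_household = sum(savings.values())        # $1,000 saved per household per year
households = 100_000_000                     # the text's round figure for US households
new_spending = per_household * households    # $100 billion redirected into the economy

average_wage = 50_000                        # the text's assumed full-time salary
jobs_funded = new_spending // average_wage   # full-time salaries that spending could fund

print(per_household)   # 1000
print(new_spending)    # 100000000000
print(jobs_funded)     # 2000000
```

The point of the calculation is that the redirected spending is large enough, at the assumed wage, to fund roughly two million new full-time jobs — the same order of magnitude as the jobs the five breakthroughs eliminated.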
This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. That simply isn’t how the economy works. There are as many jobs in the world as there are buyers and sellers of labor.
Additionally, most technological advances don’t eliminate entire jobs all at once, per se, but certain parts of jobs. And they create new jobs in entirely unexpected ways. When ATMs came out, most people assumed they would eliminate the need for bank tellers. Everyone knew what the letters ATM stood for, after all. But what really happened? Well, of course, you would always need some tellers to deal with customers wanting more than to make a deposit or get cash. So instead of a branch having four tellers and no machines, it could have two tellers and two ATMs. Then, seeing that branches were now cheaper to operate, banks realized they could open more of them as a competitive advantage, and guess what? They needed to hire more tellers. That’s why there are more human bank tellers employed today than at any other time in history. But there are now also ATM manufacturing jobs, ATM repair jobs, and ATM refilling jobs. Who would have thought that when you made a robot bank teller, you would need more human ones?
The problem, as stated earlier, is that the “job loss” side of the equation is the easiest to see. Watching every dry cleaner on the planet get shuttered would look like a tragedy. And to the people involved, it would be one. But, from a larger point of view, it wouldn’t be one at all. Who thinks it is a bad idea to have clothes that don’t get dirty? If clothes had always resisted dirt, who would lobby to pass a law requiring that all clothes be able to get dirty, just so we could create all the dry-cleaning jobs? Batteries that die and cars that run inefficiently and unnecessary trips to the doctor and wasted energy are all negative things, even if they create jobs. If you don’t think so, then we should repeal littering laws and encourage people to throw trash out their car windows to create new highway cleanup jobs.
So this is why we have never run out of jobs, and why unemployment stays relatively constant. Every time technology saves us money, we spend the money elsewhere! But is it possible that the future will be different? Some argue that there are new economic forces at play. It goes like this: “Imagine a world with two companies: Robotco and Humanco. Robotco makes, in a factory with no employees, a popular consumer gadget that sells for $100. Meanwhile, Humanco makes a different gadget that also costs $100, but it is made in a factory full of people.
“What happens if Robotco’s gadget becomes wildly successful? Robotco sees its corporate profits shoot through the roof. Meanwhile, Humanco flounders, because no one is buying its product. It is forced to lay off its human staff. Now these humans don’t have any money to buy anything while Robotco sits on an ever-growing mountain of cash. The situation devolves until everyone is unemployed and Robotco has all the money in the world.”
Some say this is happening in the United States right now. Corporate profits are high and those profits are distributed to the rich, while wages are stagnant. The big new companies of today, like Facebook and Google, have huge earnings and few employees, unlike the big companies of old, like durable-goods manufacturers, which typically needed large workforces.
There is undoubtedly some truth in this view of the world. Gains in productivity created by technology don’t necessarily make it into the pockets of the increasingly productive worker. Instead, they are often returned to shareholders. There are ways to mitigate this flow of capital, which we will address in the chapter about income inequality, but it should be seen not as a fatal flaw of technology or our economy, but rather as something that society at large needs to address head-on.
Further, Robotco’s immense profits probably don’t just sit in some Scrooge McDuck kind of vault in which the executives have pillow fights using pillows stuffed with hundred-dollar bills. Instead, they are put to productive use and are in turn loaned out to people to start businesses and build houses, creating more jobs. An economy with no corporate profits and everything paid out in wages is as dysfunctional as the reverse case we just explored.

To read more of Byron Reese’s book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.