Voices in AI – Episode 2: A Conversation with Oren Etzioni

[voices_in_ai_byline]
In this episode Byron and Oren talk about AGI, Aristo, the future of work, conscious machines, and Alexa.
[podcast_player name="Episode 2: A Conversation with Oren Etzioni" artist="Byron Reese" album="Voices in AI" url="https://voicesinai.s3.amazonaws.com/2017-09-28-(00-57-00)-oren-etzioni.mp3" cover_art_url="https://voicesinai.com/wp-content/uploads/2017/09/voices-headshot-card-1.jpg"]
[voices_in_ai_link_back]
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Oren Etzioni. He’s a professor of computer science who founded and ran University of Washington’s Turing Center. And since 2013, he’s been the CEO of the Allen Institute for Artificial Intelligence. The Institute investigates problems in data mining, natural language processing, and the semantic web. And if all of that weren’t enough to keep a person busy, he’s also a venture partner at the Madrona Venture Group. Business Insider called him, quote: “The most successful entrepreneur you’ve never heard of.”
Welcome to the show, Oren.
Oren Etzioni: Thank you, and thanks for the kind introduction. I think the key emphasis there would be, “you’ve never heard of.”
Well, I’ve heard of you, and I’ve followed your work and the Allen Institute’s as well. And let’s start there. You’re doing some fascinating things. So if you would just start off by telling us a bit about the Allen Institute, and then I would love to go through the four projects that you feature prominently on the website. And just talk about each one; they’re all really interesting.
Well, thanks. I’d love to. The Allen Institute for AI is really Paul Allen’s brainchild. He’s had a passion for AI for decades, and he’s founded a series of institutes—scientific institutes—in Seattle, modeled after the Allen Institute for Brain Science, which has been running very successfully since 2003. We were founded—got started—in 2013. We were launched as a nonprofit on January 1, 2014, and it’s a great honor to serve as CEO. Our mission is AI for the common good, and as you mentioned, we have four projects that I’m really excited about.
Our first project is the Aristo project, and that’s about building a computer program that’s able to answer science questions of the sort that we would ask a fourth grader, and now we’re also working with eighth-grade science. And people sometimes ask me, “Well, gosh, why do you want to do that? Are you trying to put 10-year-olds out of work?” And the answer is, of course not.
We really want to use that test—science test questions—as a benchmark for how well we’re doing in intelligence, right? We see tremendous success in computer programs like AlphaGo beating the world champion in Go. And we say, “Well, how does that translate to language—and particularly to understanding language—and understanding diagrams, understanding science?”
And one way to answer that question is to, kind of, level the playing field with, “Let’s ask machines and people the same questions.” And so we started with these science tests, and we can see that, in fact, people do much better. It turns out, paradoxically, that things that are relatively easy for people are really quite hard for machines, and things that are hard for people—like playing Go at world championship level—those are actually relatively easy for the machine.
Hold on there a minute: I want to take a moment and really dissect this. Any time there’s a candidate chatbot that can make a go at the Turing test, I have a standard question that I start with, and none of them have ever answered it correctly.
It’s a question a four-year-old could answer, which is, “Which is bigger, a nickel or the sun?” So why is that a hard problem? And why would you start with a fourth grader instead of a four-year-old—like, really go back to the most basic, basic questions? So the first part of that is: would what you’re doing be able to answer the question?
Certainly our goal is to give it the background knowledge and understanding ability to be able to answer those types of questions, which combine basic knowledge, basic reasoning, and enough understanding of language to know that, when you say “a nickel,” you’re not referring to the metal, but to a particular coin, with a particular size, and so on.
The reason that’s so hard for the machine is that it’s part of what’s called ‘common sense’ knowledge, right? Of course, the machine, if you programmed it, could answer that particular question—but that’s a stand-in for literally billions of other questions that you could ask about relative sizes, about animal behavior, about the properties of paper versus feathers versus furniture.
There’s really a seemingly infinite—or certainly very, very large—number of basic questions that people—certainly eight-year-olds, even four-year-olds—can answer, but that machines struggle with. And they struggle because, what’s their basis for answering the questions? How would they acquire all that knowledge?
Now, to say, “Well, gosh, why don’t we build a four-year-old, or maybe even a one-year-old?” I’ve actually thought about that. So at the university, we investigated for a summer, trying to follow the developmental ladder, saying: “Let’s start with a six-month-old, and a one-year-old, etc., etc.”
And my interest, in particular, is in language. So I said, “Well, gosh, surely we can build something that can say ‘dada’ or ‘mama’, right?” And then work our way from there. What we found is that, even a very young child, their ability to process language and understand the world around them is so involved with their body—with their gaze, with their understanding of people’s facial expressions—that the net effect was that we could not build a one-year-old.
So, in a funny way, once you’re getting to the level of a fourth grader, who’s reading and answering multiple choice science questions, it gets easier and it gets more focused on language and semantics, and less on having a body, being able to crawl—which, of course, are challenging robotics problems.
So, we chose to start higher up in the ladder, and it was kind of a Goldilocks thing, right? It was more language-focused and, in a funny way, easier than doing a one-year-old, or a four-year-old. And—at the same time—not as hard as, say, college-level biology questions or AP questions, which involve very complicated language and reasoning.
So it’s your thinking that by talking about school science examinations, in particular, you have a really, really narrow vocabulary that you have to master, a really narrow set of objects whose properties you have to understand, is that the idea? Like, AI does well at games because they’re constrained worlds with fixed rules. Are you trying to build an analog to that?
It is an analog, right? In the sense that AI has done well with narrow tasks and, you know, limited domains. At the same time, “narrow” is probably not the right word here. There is—and this is something that we’ve learned—tremendous variety in these questions: not only variety in ways of saying things, but also variety because these tests often require you to take something that you have an understanding of—like gravity or photosynthesis—and then apply it to a particular situation.
“What happens if we take a plant and move it nearer to the window?” That combination of basic scientific knowledge with an application to a real-world situation means that it’s really quite varied. And it’s really a much harder AI problem to answer fourth-grade science questions than it is to solve Go.
I completely get that. I’m going to ask you a question, and it’s going to sound like I’m changing the topic, but it is germane. Do you believe that we’re on a path to building an AGI—a general intelligence? You’re going to learn things doing this, and is it, like, all we will need to do is scale them up more and more, faster, faster, better and better, and you’ll have an AGI? Is this on that trajectory, or is an AGI something completely unrelated to what you’re trying to do here?
That’s a very, very key question. And I would say that we are not on a path to building an AGI—in the sense that, if you build Aristo, and then you scale it to twelfth grade, and more complex vocabulary, and more complex reasoning, and, “Hey, if we just keep scaling this further, we’ll end up with artificial general intelligence, with an AGI.” I don’t think that’s the case.
I think there are many other problems that we have to solve, and this is a part of a very complex picture. And if it’s a path, it’s a very meandering one. But really, the point is that the word “research,” which is obviously what we’re doing here, has the word “search” in it. And that means that we’re iterating, we’re going here, we’re going there, we’re looking, you know.
“Oh, where did I put my keys?” Right? How many times do you retrace your steps and open that drawer, and say, “Oh, but I forgot to look under the socks,” or “I forgot to look under the bed”? It’s this very complex, uncertain process; it’s quite the opposite of, “Oh, I’m going down the path, the goal is clear, and I just have to go uphill for five miles, and I’ll get there.”
I’ve got a book on AI coming out towards the end of this year, and in it, I talk about the Turing test. And I talk about, like, the hardest question I can think of to ask a computer so that I could detect if it’s a computer or a person. And here’s a variant of what I came up with, which is:
“Doctor Smith is eating at his favorite restaurant, that he eats at frequently. He gets a call, an emergency call, and he runs out without paying his bill. Are the owners likely to prosecute?” So, if you think about that… Wow, you’ve got to know he’s a doctor, the call he got is probably a medical emergency, you have to infer that he eats there a lot, that they know who he is, they might even know he’s a doctor. Are they going to prosecute? So, it’s a gazillion social things that you have to know in order to answer that question.
Now, is that also on the same trajectory as solving twelfth grade science problems? Or is that question that I posed, would that require an AGI to answer?
Well, one of the things that we’ve learned is that, whenever you define a task—say answering story types of questions that involve social nuance, and maybe would involve ethical and practical considerations—that is on the trajectory of our research. You can imagine Aristo, over time, being challenged by these more nuanced questions.
But, again, we’ve gotten good at identifying those tasks, building training sets, and building models to answer those questions—and that program might get good at answering those questions but still have a hard time crossing the street. Still have a hard time reading a poem or telling a joke.
So, the key to AGI is the “G”; the generality is surprisingly elusive. And that’s the amazing thing, because that four-year-old that we were talking about has generality in spades, even though she’s not necessarily a great chess player or a great Go player. So that’s what we learned.
As our AI technology evolves, we keep learning about what is the most elusive aspect of AI. At first, if you read some of the stuff that was written in the ’60s and the ’70s, people were very skeptical that a program could ever play chess, because chess was really seen as the mark of intelligence—very intelligent people are very good chess players.
And then, that became solved, and people talked about learning. They said, “Well, gosh, but programs can’t learn.” And as we’ve gotten better, at least at certain kinds of learning, now the emphasis is on generality, right? How do we build a general program, given that all of our successes, whether it’s poker or chess or certain kinds of question answering, have been on very narrow tasks?
So, one sentence I read about Aristo says, “The focus of the project is explained by the guiding philosophy that artificial intelligence is about having a mental model for how things operate, and refining that mental model based on new knowledge.” Can you break that down for us? What do you mean?
Well, I think, again, lots of things. But I think a key thing not to forget—and it goes back to your favorite question about a nickel and the sun—is that so much of what we do makes use of background knowledge: just extensive knowledge of facts, of words, of all kinds of social nuances, etc., etc.
And the hottest thing going is deep learning methods. Deep learning methods are responsible for the success in Go, but the thing to remember is that often, at least by any classical definition, those programs are very knowledge-poor. If you could talk to them and ask them, “What do you know?” you’d find out that—while they may have stored a lot of implicit information, say, about the game of Go—they don’t know a whole heck of a lot. And that, of course, touches on the topic of consciousness, which I understand is also covered in your book. If I asked AlphaGo, “Hey, did you know you won?” AlphaGo can’t answer that question. And it’s not just that it doesn’t understand natural language—it’s not conscious.
Kasparov said that about Deep Blue. He said, “Well, at least it can’t gloat. At least it doesn’t know that it beat me.” To that point, Claude Shannon wrote about computers playing chess back in the ’50s, but it was an enormous amount of work. It took the best minds a long time to build something that could beat Kasparov. Do you think that something like that is generalizable to a lot of other things? Or am I hearing you correctly that that is not a step towards anything general? That’s a whole different kind of thing, and therefore Aristo is, kind of, doing something very different than AlphaGo or chess, or Jeopardy?
I do think that we can generalize from that experience. But I think that generalization isn’t always the one that people make. So what we can generalize is that, when we have a very clear “objective function” or “performance criteria”—basically it’s very clear who won and who lost—and we have a lot of data, that as computer scientists we’re very, very good—and it still, as you mentioned, took decades—but we’re very, very good at continuing to chip away at that with faster computers, more data, more sophisticated algorithms, and ultimately solving the problem.
However, in the case of natural language: If you and I, let’s say we’re having a conversation here on this podcast—who won that conversation? Let’s say I want to do a better job if you ever invite me for another podcast. How do I do that? And if my method for getting better involves looking at literally millions of training examples, you’re not going to do millions of podcasts. Right?
So you’re right, that a very different thing needs to happen when things are vaguer, or more uncertain, or more nuanced, when there’s less training data, etc., etc.—all these characteristics that make Aristo and some of our other projects very, very different than chess or Go.
So, where is Aristo? Give me a question it can answer and a question it can’t. Or is that even a cogent question? Where are you with it?
First of all, we keep track of our scores. So, I can give you an example in a second. But when we look at what we call “non-diagram multiple choice”—questions that are purely in language, because diagrams can be challenging for the machine to interpret—we’ve been able to reach very close to eighty percent correctness. Eighty percent accuracy on non-diagram multiple choice questions for fourth grade.
When you include all questions, we’re at sixty percent. Which is great, because when we started—with all the questions with diagrams and what are called “direct answer” questions, where you have to answer with a phrase or a sentence rather than just choosing between four options—we were close to twenty percent. We were far lower.
So, we’ve made a lot of progress, so that’s on the glass-half-full side. And the glass-half-empty side, we’re still getting a D on a fourth-grade science test. So it’s all a question of how you look at it. Now, when you ask, “What questions can we solve?” We actually have a demo on our website, on AllenAI.org, that illustrates some of these.
If I go to the Aristo project there, and I click on “live demo,” I see questions like, “What is the main source of energy for the water cycle?” Or even, “The diagram below shows a food chain. If the wheat plants died, the population of mice would likely _______?” So, these are fairly complex questions, right?
But they’re not paragraph-long, and the thing that we’re still struggling with is what we call “brittleness.” If you take any one of these questions that we can answer, and then change the way you ask the question a bit, all of a sudden we fail. This is, by the way, a characteristic of many AI systems, this notion of brittleness—where a small change that a human would dismiss—“Oh, that’s no different at all”—can make a big difference to the machine.
It’s true. I’ve been playing around with an Amazon Alexa, and I noticed that if I say, “How many countries are there?” it gives me one number. If I say, “How many countries are there in the world?” it gives me a different number. Even though a human would see that as the same question. Is that the sort of thing you’re talking about?
That’s exactly the sort of thing I’m talking about, and it’s very frustrating. And, by the way, Alexa and Siri, for the people who want to take the pulse of AI—I mean, again, we’re one of the largest nonprofit AI research institutes in the world, but we’re still pretty small at 72 people—Alexa and Siri come from for-profit companies; there are thousands of people working on those, and it’s still the case that you can’t carry on a halfway decent dialogue with these programs.
And I’m not talking about the cutesy answers about, you know, “Siri, what are you doing tonight?” Or, “Are you better than Alexa?” I’m talking about, let’s say, the kind of dialogue you’d have with a concierge of a hotel, to help you find a good restaurant downtown. And, again, it’s because how do you score dialogues? Right? Who won the dialogue? All those questions, that are very easy to solve in games, are not even really well-posed in the context of a dialogue.
I penned an article about how—and I have to whisper her name, otherwise it will start talking to me—Alexa and Google Assistant give you different answers to factual questions.
So if you ask, “How many seconds are there in a year?” they give you different answers. And if you say, “Who designed the American flag?” they’ll give you different answers. Seconds in a year, you would think that’s an objective question, there’s a right and a wrong answer, but actually one gives you a calendar year, and one gives you a solar year, which is a quarter-day different.
And with the American flag, one says Betsy Ross, and the other one says the person who designed the 50-star configuration of the flag, which is our current flag. And in the end, both times those were the questioner’s fault, because the question itself is inherently vague, right? And so, even if the system is good, if the questions are poorly phrased, it still breaks, right? It’s still brittle.
I would say that it’s the computer’s fault. In other words, again, an aspect of intelligence is being able to answer vague questions and being able to explain yourself. But these systems, even if their fact store is enormous—and one day, they’ll certainly exceed ours—if all it can do when you say, “Well, why did you give me this number?” is say, “Well, I found it here,” then really it’s a big lookup table.
It’s not able to deal with the vagueness, or to explain itself in a more meaningful way. What if you put the number three in that table? You ask, “How many seconds are there in a year?” The program would happily say, “Three.” And you say, “Does that really make sense?” And it would say, “Oh, I can’t answer that question.” Right? Whereas a person would say, “Wait a minute. It can’t be three seconds in a year. That just doesn’t make sense!” Right? So, we have such a long way to go.
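As an aside, the quarter-day gap behind those dueling answers is easy to check. Here is a minimal Python sketch—the figures are the standard definitions of a year, not anything either assistant actually reports—comparing a 365-day calendar year with a 365.25-day solar (Julian) year:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

calendar_year = 365 * SECONDS_PER_DAY     # 31,536,000 seconds
solar_year = 365.25 * SECONDS_PER_DAY     # 31,557,600 seconds

# The two defensible answers differ by exactly a quarter of a day.
print(calendar_year)                 # 31536000
print(solar_year)                    # 31557600.0
print(solar_year - calendar_year)    # 21600.0 seconds, i.e., 6 hours
```

Both answers are internally consistent; the disagreement is entirely in which definition of “year” each system silently picks.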
Right. Well, let’s talk about that. You’re undoubtedly familiar with John Searle’s Chinese Room question, and I’ll set it up for the listener—because what I’m going to ask you is, is it possible for a computer to ever understand anything?
The setup, very briefly—I mean, I encourage people to look it up—is that there’s a person in a room and he doesn’t speak any Chinese, and he’s given Chinese questions, and he’s got all these books he can look it up in, but he just copies characters down and hands them back. And he doesn’t know if he’s talking about cholera or coffee beans or what have you. And the analogy is, obviously, that’s what a computer does. So can a computer actually understand anything?
You know, the Chinese Room thought experiment is really one of the most tantalizing and fun thought experiments in philosophy of mind. And so many articles have been written about it, arguing this, that or the other thing. In short, I think it does expose some of the issues, and the bottom line is when you look under the hood at this Chinese Room and the system there, you say, “Gosh, it sure seems like it doesn’t understand anything.”
And when you take a computer apart, you say, “Gosh, how could it understand? It’s just a bunch of circuits and wires and chips.” The only problem with that line of reasoning is, it turns out that if you look under the hood in a person’s mind—in other words, if you look at their brain—you see the same thing. You see neurons and ion potentials and chemical processes and neurotransmitters and hormones.
And when you look at it at that level, surely, neurons can’t understand anything either. I think, again, without getting to a whole other podcast on the Chinese Room, I think that it’s a fascinating thing to think about, but it’s a little bit misleading. Understanding is something that emerges from a complex technical system. That technical system could be built on top of neurons, or it could be built on top of circuits and chips. It’s an emergent phenomenon.
Well, then I would ask you, is it strong emergence or is it weak emergence? But, we’ve got three more projects to discuss. Let’s talk about Euclid.
Euclid is, really, a sibling of Aristo, and in Euclid we’re looking at SAT math problems. The Euclid problems are easier in the sense that you don’t need all this background knowledge to answer these pure math questions. You surely need a lot less of that. However, you really need to very fully and comprehensively understand the sentence. So, I’ll give you my favorite example.
This is a question that is based on a story about Ramanujan, the Indian number theorist. He said, “What’s the smallest number that’s the sum of two cubes in two different ways?” And the answer to that question is a particular number, which the listeners can look up on Google. But, to answer that correctly, you really have to fully parse that rather long and complicated sentence and understand “the sum of two cubes in two different ways.” What on earth does that mean?
And so, Euclid is working to have a full understanding of sentences and paragraphs, which are the kind of questions that we have on the SATs. Whereas often with Aristo—and certainly, you know, with things like Watson and Jeopardy—you could get away with a much more approximate understanding, “this question is sort of about this.” There’s no “sort of” when you’re dealing with math questions, and you have to give the answer.
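For readers who want to see how small the gap is between parsing that sentence and answering it, here is a short brute-force sketch in Python. It is not Euclid’s method, just an illustration that once “the sum of two cubes in two different ways” has been correctly understood, the search itself is trivial:

```python
from itertools import count

def smallest_double_cube_sum():
    """Find the smallest number that is a sum of two positive cubes
    in two different ways (the Hardy-Ramanujan anecdote)."""
    for n in count(1):
        limit = round(n ** (1 / 3)) + 1
        # Collect every unordered pair (a, b) with a^3 + b^3 == n.
        ways = [(a, b)
                for a in range(1, limit)
                for b in range(a, limit)
                if a ** 3 + b ** 3 == n]
        if len(ways) >= 2:
            return n, ways

print(smallest_double_cube_sum())  # (1729, [(1, 12), (9, 10)])
```

The hard part for a machine is everything before the loop: turning the English sentence into that precise condition.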
And so that is, as you say, a sibling to Aristo; but Plato, the third one we’re going to discuss, is something very different, right?
Right. Maybe if we’re using this family metaphor, Plato is Aristo’s and Euclid’s cousin, and what’s going on there is we don’t have a natural benchmark test, but we’re very, very interested in vision. We’ve realized that a lot of the questions that we want to address, a lot of the knowledge that is present in the world isn’t expressed in text, certainly not in any convenient way.
One great way to learn about the sizes of things—not just the sun and a nickel, but maybe even a giraffe and a butterfly—is through pictures. You’re not going to find the sentence that says, “A giraffe is much bigger than a butterfly,” but if you see pictures of them, you can make that connection. Plato is about extracting knowledge from images, from videos, from diagrams, and being able to reason over that to draw conclusions.
So, Ali Farhadi, who leads that project and who shares his time between us and the Allen School at University of Washington, has done an amazing job generating result after result, where we’re able to do remarkable things based on images.
My favorite example of this—you kind of have to visualize it—imagine drawing a diagonal line and then a ball on top of that line. What’s going to happen to that ball? Well, if you can visualize it, of course the ball’s going to roll down the line—it’s going to roll downhill.
It turns out that most algorithms are actually really challenged to make that kind of prediction, because to make that kind of prediction, you have to actually reason about what’s going on. It’s not just enough to say, “There’s a ball here on a line,” but you have to understand that this is a slope, and that gravity is going to come into play, and predict what’s going to happen. So, we really have some of the state-of-the-art capabilities, in terms of reasoning over images and making predictions.
Isn’t video a whole different thing, because you’re really looking at the differences between images, or is it the same basic technology?
At a technical level, there are many differences. But actually, the elegant thing about video is that, as you intimated, a video is just a sequence of images. It’s really our eye, or our mind, that constructs the continuous motion. All it is, is a number of images shown per second. Well, for us, it’s a wonderful source of training data, because I can take the image at Second 1 and make a prediction about what’s going to happen in Second 2. And then I can look at what happened at Second 2, and see whether the prediction was correct or not. Did the ball roll down the hill? Did the butterfly land on the giraffe? So there’s a lot of commonalities, and video is actually a very rich source of images and training data.
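To make the “video as free training data” point concrete, here is a minimal sketch of how one might harvest (current frame, next frame) pairs from a clip with OpenCV; the file name is a placeholder, and a system like Plato’s would feed such pairs into a learned prediction model rather than merely counting them:

```python
import cv2  # OpenCV: pip install opencv-python

def frame_pairs(video_path):
    """Yield consecutive (frame_t, frame_t_plus_1) pairs from a video.

    Each pair is a free supervised example: predict the second frame
    from the first, then score the prediction against what happened.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if ok:
            yield prev, frame  # (input, target), no human labeling needed
            prev = frame
    cap.release()

# Hypothetical usage: count the self-labeled examples in one clip.
n = sum(1 for _ in frame_pairs("ball_rolling.mp4"))
print(f"{n} training pairs from a single video")
```

No annotator ever touches the data; the passage of time provides the labels.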
One of the challenges with images is—well, let me give an example, then we can discuss it. Say I lived on a cul-de-sac, and the couple across the street were expecting—the woman is nine months pregnant—and one time I get up at three in the morning and I look out the window and their car is gone. I would say, “Aha, they must have gone to the hospital.” In other words, I’m reasoning from what’s not in the image. That would be really hard, wouldn’t it?
Yes. You’re way ahead of Plato. It’s very, very true.
That anticipates Semantic Scholar; I want to make sure that we get to that. With Semantic Scholar, a number of the capabilities that we see in these other projects come together. Semantic Scholar is a scientific search engine; it’s available 24/7 at semanticscholar.org, and it allows people to look for computer science papers, for neuroscience papers. Soon we’re going to be launching the ability to cover all the papers in biomedicine that are available on engines like PubMed.
And what we’re trying to do there is deal with the fact that there are so many, you know, over a hundred million scientific research papers, and more are coming out every day, and it’s virtually impossible for anybody to keep up. Our nickname for Semantic Scholar sometimes is Da Vinci, because we say Da Vinci was the last Renaissance man, right?
The person who, kind of, knew all of science. There are no Renaissance men or women anymore, because we just can’t keep up. And that’s a great place for AI to help us, to make scientists more efficient in their literature searches, more efficient in their abilities to generate hypotheses and design experiments.
That’s what we’re trying to do with Semantic Scholar, and that involves understanding language, and that involves understanding images and diagrams, and it involves a lot more.
Why do you think the semantic web hasn’t taken off more, and what is your prediction about the semantic web?
I think it’s important to distinguish between “semantics,” as we use it at Semantic Scholar, and “semantic” in the semantic web. In Semantic Scholar, we try to associate semantic information with text. For example, this paper is about a particular brain region, or this paper uses fMRI methodology, etc. These are pretty simple semantic distinctions.
The semantic web had a very rich notion of semantics that, frankly, is superhuman and way, way, way beyond what we can do in a distributed world. So that vision of Tim Berners-Lee really evolved over the years into something called “linked open data,” where, again, the semantics is very simple and the emphasis is much more on different players on the web linking their data together.
I think that very, very few people are working on the original notion of the semantic web, because it’s just way too hard.
I’m just curious, this is a somewhat frivolous question: But the names of your projects don’t seem to follow an overarching naming scheme. Is that because they were created and named elsewhere or what?
Well, it’s because, you know, if you put a computer scientist—which is me—in charge of branding, you’re going to run into problems. So, I think, Aristo and Euclid are what we started with, and those were roughly analogous. Then we added Plato, which is an imperfect name, but still roughly in the mythological world. And then Semantic Scholar really is a play off of Google Scholar.
So Semantic Scholar is, if you will, really the odd duck here. And when we had a project, we were considering doing work on dialogue—which we still are—we called that project Socrates. But then I’m also thinking “Do we really want all the projects to be named after men?” which is definitely not our intent. So, I think the bottom line is it’s an imperfect naming scheme and it’s all my fault.
So, the mission of the Allen Institute for AI is, quote: “Our mission is to contribute to humanity through high-impact AI research and engineering.” Talk to me about the “contribute to humanity” part of that. What do you envision? What do you hope comes of all of this?
Sure. So, I think that when we started, we realized that so often AI is vilified—particularly in Hollywood films, but also by folks like Stephen Hawking and Elon Musk—and we wanted to emphasize AI for the common good, AI for humanity, where we saw some real benefits to it.
And also, in a lot of for-profit companies, AI is used to target advertising, or to get you to buy more things, or to violate your privacy, if it’s being used by intelligence agencies or by aggressive marketing. And we really wanted to find places like Semantic Scholar, where AI can help solve some of humanity’s thorniest problems by helping scientists.
And so, that’s where it comes from; it’s a contrast to these other, either more negative uses, or more negative views of AI. And we’ve been really pleased that, since we were founded, organizations like OpenAI or the Partnership on AI, which is an industry consortium, have adopted missions that are very consistent and kind of echo ours, you know: AI to benefit humanity and society and things like that. So it seems like more and more of us in the field are really focused on using AI for good.
You mentioned fear of AI, and the fear manifests—and you can understand Hollywood, I mean, it’s drama, right—but the fear manifests in two different ways. One is what you alluded to, that it’s somehow bad, you know, Terminator or what have you. But the other one that is on everybody’s mind is, what do you think about AI’s effect on employment and jobs?
I think that’s a very serious concern. As you can tell, I’m not a big fan of the doomsday scenarios about AI. I tell people we should not confuse science with science fiction. But another reason why we shouldn’t concern ourselves with Skynet and doomsday scenarios is that we have a lot more realistic and pressing problems to worry about. And that, for example, is AI’s impact on jobs. That’s a very real concern.
We’ll see it in the transportation sector, I predict, particularly soon. Where truck drivers and Uber drivers and so on are going to be gradually squeezed out of the market, and that’s a very significant number of workers. And it’s a challenge, of course, to help these people to retrain them, to help them find other jobs in an increasingly digital economy.
But, you know, in the history of the United States, at least over the past couple of hundred years, there have been a number of really disruptive technologies that have come along—the electrification of industry, the mechanization of industry, the replacement of animal power with steam—things that had a rapid impact, and yet unemployment never once budged because of them. Because what happens is, people just use the new technology. And isn’t it at least possible that, as we move along with the development of artificial intelligence, it is actually an empowering technology that lets people increase their own productivity? Like, anybody could use it to increase their productivity.
I do think that AI will have that role, and I do think that, as you intimated, these technological forces have some real positives. So, the reason that we have phones and cars and washing machines and modern medicine, all these things that make our lives better and that are broadly shared through society, is because of technological advances. So I don’t think of these technological advances, including AI advances, as either a) negative; or b) avoidable.
If we say, “Okay, we’re not going to have AI,” or “We’re not going to have computers,” well, other countries will and they’ll overtake us. I think that it’s very, very difficult, if not impossible to stop broad-based technology change. Narrow technologies that are particularly terrible, like landmines or biological weapons, we’ve been able to stop. But I think AI isn’t stoppable because it’s much broader, and it’s not something that should be stopped, it’s not like that.
So I very much agree with what you said, but with one key caveat. We survived those things and we emerged thriving, but the disruption, over significant periods of time and for millions of people, was very, very difficult. As we went from a society that was ninety-something percent agricultural to one where only two percent of workers are in agriculture, people suffered and people were unemployed. And so, I do think that we need to have programs in place to help people with these transitions.
And I don’t think that they’re simple because some people say, “Sure, those old jobs went away, but look at all these great jobs. You know, web developer, computer programmer, somebody who leverages these technologies to make themselves more effective at their jobs.” That’s true, but the reality is a lot more complicated. Are all these truck drivers really going to become web developers?
Well, I don’t think that’s the argument, right? The argument is that everybody moves one small notch up. So somebody who was a math teacher in a college, maybe becomes a web developer, and a high school teacher becomes the college teacher, and then a substitute teacher gets the full time job.
Nobody says, “Oh, no, no, we’re going to take these people, you know, who have less training and we’re going to put them in these highly technical jobs.” That’s not what happened in the past either, right? The question is, can everybody do a job a little more complicated than the one they have today? And if the answer to that is yes, then do we really have a big disruption coming?
Well, first of all, you’re making a fair point. I was oversimplifying by mapping the truck drivers to the developers. But, at the same time, I think we need to remember that these changes are very disruptive. And so, the easiest example to give, because it’s fresh in my mind and, I think, other people’s minds—let’s look at Detroit. That wasn’t technological change; it was more due to globalization and the shifting of manufacturing jobs out of the US.
But nevertheless, these people didn’t just each take a little step up or a little step to the right, whatever you want to say. These people and their families suffered tremendously. And it’s had very significant ramifications, including Detroit going bankrupt, including many people losing their health care, including the vote for President Trump. So I think if you think on a twenty-year time scale, will the negative changes be offset by positive changes? Yes, to a large extent. But if you think on shorter time scales, and you think about particular populations, I don’t think we can just say, “Hey, it’s going to all be alright.” I think we have a lot of work to do.
Well, I’m with you there, and if there’s anything that I think we can take comfort in, it’s that the country did that before. There used to be a debate in the country about whether post-literacy education was worth it. This was back when we were an agricultural society. And you can understand the logic, right? “Well once somebody learns to read, why do you need to keep them in school?” And then, people said, “Well, the jobs of the future are going to need a lot more skills.” That’s why the United States became the first country in the world to guarantee a high school education to every single person.
And it sounds like you’re saying something like that, where we need to make sure that our education opportunities stay in sync with the requirements of the jobs we’re creating.
Absolutely. I think we are agreeing that there’s a tremendous potential for this to be positive, you know? Some people, again, have a doomsday scenario for jobs and society. And I agree with you a hundred percent; I don’t buy into that. And it sounds like we also agree, though, that there are things that we could do to make these transitions smoother and easier on large segments of society.
And it definitely has to do with improving education and finding opportunities, etc., etc. So, I think it’s really a question of how painful this change will be, and how long it will take until we’re at a new equilibrium that, by the way, could be a fantastic one. Because, you know, the interesting thing about the truck jobs, and the toll jobs that went away, and a lot of other jobs that went away, is that some of these jobs are awful. They’re terrible, right? People aren’t excited about a lot of these jobs. They do them because they don’t have something better. If we can offer them something better, then the world will be a better place.
Absolutely. So we’ve talked about AGI. I assume you think that we’ll eventually build a general intelligence.
I do think so. I think it will easily take more than twenty-five years—it could take as long as a thousand years—but I’m what’s called a materialist, which doesn’t mean that I like to shop on Amazon; it means that I believe that, when you get down to it, we’re constructed out of atoms and molecules, and there’s nothing magical about intelligence. Sorry—there’s something tremendously magical about it, but there’s nothing ineffable about it. And so, I think that, ultimately, we will build computer programs that can do and exceed what we can do.
So, by extension, you believe that we’ll build conscious machines as well?
Yes. I think consciousness emerges from it. I don’t think there’s anything uniquely human or biological about consciousness.
The range of time that people think it will be before we create an AGI, in my personal conversations, runs from five to five hundred years. Where in that spectrum would you cast your ballot?
Well, I would give anyone a thousand-to-one odds that it won’t happen in the next five years. I’ll bet ten dollars against ten thousand dollars, because I’m in the trenches working on these problems right now and we are just so, so far from anything remotely resembling an AGI. And I don’t know anybody in the field who would say or think otherwise.
I know there are some, you know, so-called futurists or what have you… But people actively working on AI don’t see that. And furthermore, even if somebody says some random thing, then I would ask them, “Back it up with data.” What’s your basis for saying that? Look at our progress rates on specific benchmarks and challenges; they’re very promising but they’re very promising for a very narrow task, like object detection or speech recognition or language understanding etc., etc.
Now, when you go beyond ten, twenty, thirty years, who can predict what will happen? So I’m very comfortable saying it won’t happen in the next twenty-five years, and I think that it is extremely difficult to predict beyond that, whether it’s fifty or a hundred or more, I couldn’t tell you.
So, do you think we have all the parts we need to build an AGI? Is it going to take some breakthrough that we can’t even fathom right now? Or with enough deep learning and faster processors and better algorithms and more data, could you say we are on a path to it now? Or is your sole reason for believing we’re going to build an AGI that you’re a materialist—you know, we’re made of atoms, so we can build something made of atoms?
I think it’s going to require multiple breakthroughs which are very difficult to imagine today. And let me give you a pretty concrete example of that.
We want to take the information that’s in text and images and videos and all that, and represent that internally using a representation language that captures the meaning, the gist of it, like a listener to this podcast has kind of a gist of what we’ve talked about. We don’t even know what that language looks like. We have various representational languages, none of them are equal to the task.
Let me give you another way to think about it as a thought experiment. Let’s suppose I was able to give you a computer, a computer that was as fast as I wanted, with as much memory as I wanted. Using that unbelievable computer, would I now be able to construct an artificial intelligence that’s human-level? The answer is, “No.” And it’s not about me. None of us can.
So, if it were really just about the speed and so on, then I would be a lot more optimistic about doing it in the short term, because we’re so good at making things run two times faster, ten times faster, building a faster computer, storing information. We used to store it on floppy disk, and now we store it here. Next we’re going to be storing it in DNA. This exponential march of technology under Moore’s Law—everything keeps getting faster and cheaper—is, in that sense, phenomenal. But that’s not enough to achieve AGI.
Earlier you said that you tell people not to confuse science with science fiction. But, about science fiction: is there anything that you’ve seen, read, or watched that you actually think is a realistic scenario of what we may be able to do, what the future may hold? Is there anything that you look at and say, well, it’s fiction, but it’s possible?
You know, one of my favorite pieces of fiction is the book Snow Crash, where it, kind of, sketches this future of Facebook and the future of our society and so on. If I were to recommend one book, it would be that. I think a lot of the books about AI are long on science fiction and short on what you call “hard science fiction”; short on reality.
And if we’re talking about science fiction, I’d love to end with a note where, you know, there’s this famous Arthur C. Clarke quote, “Any sufficiently advanced technology is indistinguishable from magic.” So, I think, to a lot of people AI seems like magic, right? We can beat the world champion in Go—and my message to people, again, as somebody who works in the field day in and day out, it couldn’t be further from magic.
It’s blood, sweat, and tears—and, by the way, human blood, sweat, and tears—of really talented people, to achieve the limited successes that we’ve had in AI. And AlphaGo, by the way, is the ultimate illustration of that. Because it’s not that AlphaGo defeated Lee Sedol, or the machine defeated the human. It’s the remarkably talented team of engineers and scientists at Google DeepMind, working for years—they’re the ones who defeated Lee Sedol, with some help from technology.
Alright. Well, that’s a great place to leave it, and I want to thank you so much. It’s been fascinating.
It’s a real pleasure for me, and I look forward both to listening to this podcast, to your other ones, and to reading your book.
Thank you.
[voices_in_ai_link_back]
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Slack invests in ecosystem of bot companies

Slack has announced $2 million in funding for bot startups through its Slack Fund, which has been passed out to 11 new companies and 3 existing investments. The list, as reported by TechCrunch:

Abacus is intelligent expense reporting software that brings report creation and approvals right into Slack.

Automat is making it easier for anyone to build a bot that passes the Turing Test. Automat is in private beta today.

Birdly connects Slack and Salesforce so that anyone can access the information they need about a given account.

Butter.ai is a personal assistant that makes all of your company knowledge easily accessible. Butter is in private beta.

Candor, Inc. aims to improve working relationships through radically candid feedback. Candor’s Slack app is not generally available yet.

Growbot lets you encourage and commend your teammates for a job well done with a helpful bot.

Konsus gets you 24/7 access to on-demand freelancers to help you get the job done, all via Slack.

Lattice helps you establish goals and OKRs, weekly check-ins, and continuous feedback with your Slack team.

Myra Labs helps you build amazing bots with an API that provides machine learning modules out of the box. Myra is in private beta.

Sudo is a bot that manages your CRM, taking all of the pain of manual data entry away from the sales rep. Sudo is in private beta.

Wade & Wendy are two intelligent recruiting assistants. Wade is a career advocate who helps you find opportunities, and Wendy helps recruiting teams source candidates. Wade and Wendy aren’t live yet, but you can sign up for their waitlist.

Previously funded

Awesome.ai helps teams stay in sync, find clarity, and reflect on what’s important.

Begin is a bot that helps improve your focus and efficiency, keeping you on top of all of your work.

Howdy is a friendly, trainable bot that powers teams by automating common tasks.

Bots are a very hot area right now, part of a growing trend toward contextual conversation in enterprise work technologies (see Contextual conversation: Work chat will dominate collaboration) and in the consumer sector, in open messaging apps.

I will be developing a report on Chat, Bots, and the Future of Work Communications in the fall, based on research launching soon. This trend interacts with the rise of AI and spoken communications, and even the rise of augmented and virtual reality.

Little Bird’s Thought Leaders on the Future of Work

I saw that Little Bird’s Marshall Kirkpatrick has identified 20 thought leaders in the future of work domain (sign up for the report here, and read the story about it: The C-Suite’s Challenge: The Changing Future of Work). I’m happy that I’m on the list, along with old friends like Charlene Li, Dave Gray, Seth Godin, and Dion Hinchcliffe.
My own list overlaps in part with Little Bird’s, but diverges in many ways. I guess I will turn my list into a report in the next few weeks, so stay tuned.

What we can learn from a conference changing its name: a lot

The rise of digital transformation, and the decline of social business and enterprise 2.0


I am honored to be on the speaker roster for the upcoming Enterprise Digital Summit, scheduled for 21-22 October 2015 in London. My keynote is safely entitled Building Blocks of the Organization in the Digital Age, which gives me a great deal of leeway to talk about the future of the organization. But I am not going to dig into my talk here. I have months to do that. (Although let me say that I will have to explode the premise of the title — that organizations can be ‘designed’ and ‘built’ like buildings or machines — and offer up more biological or sociological metaphors instead.) Instead, I’d rather discuss the recent name change of the conference itself, and what that says about shifts in the global discourse around new ways of work.
The newly dubbed Enterprise Digital Summit was formerly known as the Enterprise 2.0 Summit. As the conference producer, Bjoern Negelmann, recently wrote,

We have been thinking about the scope of the Enterprise 2.0 Summit for quite some time. For a while now our beloved expert community has been telling us that “Social” has moved on, the “Enterprise 2.0” term is “dead” and that our conference heading doesn’t match the general “zeitgeist” of the current business landscape. We have argued against change, both because of the name recognition our event has in the community and because not every organisation is at the leading edge of change. However, in today’s disruptive business climate, every organisation’s business model is under threat and we are no different. It’s time to re-adjust. It’s time to change our name!
The question is where are we heading to? What is the best way of explaining the projects and programs of today and tomorrow?

Negelmann goes on to make a concise and partly convincing case that the rise in interest around digital transformation of the business is sucking all the oxygen out of the room, and subordinating activities that formerly might have been called enterprise 2.0 (when focused on technology first, and culture/organization/people second) or social business (when vice-versa). His colleague David Terrar added this,

During 2014 we started to shift our terminology again to digital disruption and digital transformation. The topic we are discussing is about much more than the tools and technology that organisations use to collaborate more effectively, to empower employees, to innovate and to connect with their customers, partners, employees and stakeholders in new and better ways.
It is about those things, but it is also about rethinking the world of work, adopting emergent strategy, and recognising the management shift required, along with new business models, that we must use to react and compete in the 21st century.

In a recent survey, 98% reported they are undergoing digital transformation, while only 25% could say they had a clear understanding of what that means. It’s clear we are grappling with the digital imperative, like it or not.
I define digital transformation this way:

A new operating model of business based on continuous innovation through the application of digital technologies and the restructuring of operations around customer experience to better engage with customers, the company ecosystem, and the greater marketplace.

This is both a customer-centric and technology-centric perspective, and one in which workers and their work are subsumed in the efforts for innovation and operational effectiveness. In essence, the last decade of initiatives that were called social business or enterprise 2.0 (or, generically, social collaboration) are decreasing as a priority, or being completely dropped from the future agenda. Why? Why is it that digital transformation seems to be picking up where social business and enterprise 2.0 left off?
A few observations might make this clear.
First, social business is a web 2.0 era trend. The architecture of ‘social collaboration/enterprise 2.0’ tools is principally for office-bound knowledge workers with desktop computers, and based on fairly dated architectural motifs. Part of this new digital transformation is reaching all workers — on the manufacturing floor, building houses in the field, or in retail outlets — not just office workers, not just employees, not just knowledge workers.
Second, we’re now in a ‘mobile 1st, cloud 1st, people 1st’ era. Mobility is causing us to rethink nearly everything about work and business, which is invalidating many of the premises of social collaboration. We are truly working everywhere with everyone.
Third, the promise of higher productivity hasn’t materialized. I’ve written a great deal about the failure of social collaboration, so I won’t elaborate here, except to assert that the productivity gains from this generation of social collaboration tools have been less than anticipated, to be generous.
I believe that the hard part of moving to a new way of work is not selecting tools to communicate with team members, or making old web 2.0 solutions work in a mobile world. On the contrary, the real barriers to a new way of work are cultural barriers. Or turned around, to get to a new way of work — one that is based on increased agility, resilience, and autonomy — requires a deepening of culture. And it may be that deep culture is what social business was always intended to mean, or at least what I thought it should mean.
The Boston Consulting Group makes a case for two chapters in digital transformation, where the first chapter is dedicated to operational turnaround based on the adoption of new technologies and practices.
The second chapter is where the proof of the transformation lies, and it requires a transition to what the authors call adaptive innovation. That second chapter requires deepening culture, so that the organization is oriented toward new ways of working that align, at the same time, with both the requirements of the new business model and the aspirations and motivations of the new workforce — those who are living on the other side of the transformation’s technological and sociological changes.
In a second post in this series, I will explore the sense of urgency needed for deep cultural change to happen, and why the lack of a true sense of urgency can block deep change. Suffice it to say that adaptive innovation requires deep cultural change, and a sense of urgency to make those changes.
So, in the final analysis, as we entered chapter two in the realm of social business/enterprise 2.0, we hit the downward arc. Of course, there is a great deal of innovation in the broader area of work tech and the future of work, and we are entering chapter 1 of the digital transformation story. In that chapter, social business and enterprise 2.0 become historical antecedents. But the need for deep cultural change — yet again — will play the pivotal role in the coming second chapter of digital transformation.
That’s probably what I will talk about in October at the newly dubbed Enterprise Digital Summit event in London.


IBM is a sponsor of the Enterprise Digital Summit event.


This post was brought to you by IBM for MSPs and opinions are my own. To read more on this topic, visit IBM’s PivotPoint. Dedicated to providing valuable insight from industry thought leaders, PivotPoint offers expertise to help you develop, differentiate and scale your business.

The twelve posts of Christmas, part 2

I interviewed Susan Scrupski, the founder of Change Agents Worldwide, back in February, and she shared her thoughts about building a network of change agents that work with or in large companies.

SB: You’ve now started Change Agents Worldwide. What’s the vision for that group?

SS: My vision has remained constant since I started tracking this space. I’ve always advocated for advancing the liberating, evolved freedoms that come along with the adoption of more human-based technologies and processes for the large enterprise. I learned a lot about networks and how people behave and what they can achieve together in networks via my experience with the Council. More importantly, I learned that there are a lot of people around the world who share my beliefs, and that there is a certain DNA required to do this sort of work.

Change Agents Worldwide is the next evolution of the work I’ve been doing since 2006. The group’s vision is squarely centered on helping large companies transition from old world models established in the industrial era to modern network-based, agile models that improve not only the work experience for the workforce, but lead to top-line gains in innovation and growth. We are a small cadre of professionals from various disciplines (HR/learning, IT, Marketing, R&D, OD, KM, Innovation) who share the same vision and values, and we run our company in the way we’re advocating by putting these principles in practice.

[…]

We see ourselves more as a coalition and engagement within our network is fairly high. When we’re talking about project work, we like to describe our network as a “collaborative sharing economy model for consulting.” We don’t have employees; we have network members who consult. So, the same way Airbnb is in the hospitality business without hotels, we are in the consulting business without employees.


I wrote about the history of the terms social graph and the work graph back in May (see Forget the social graph: pay attention to the work graph):

However, over time I came to appreciate that the social graph is actually a larger formulation than social networks: it is a graph (or network) of people as well as social objects — the things that people are talking about, or sharing, that shape the relationships between the people in the social networks.

It turns out that the term was originally offered up by my friend Jyri Engeström, the founder of Jaiku, back in 2005, when he wrote Why some social network services work and others don’t — Or: the case for object-centered sociality:

Social network theory fails to recognise such real-world dynamics because its notion of sociality is limited to just people.

Another friend, the cartoonist Hugh MacLeod (@gapingvoid), popularized the term in the years following, as in Social Objects for Beginners.

Justin Rosenstein of Asana spelled out that in the work context the social graph becomes the work graph,

A work graph consists of the units of work (tasks, ideas, clients, goals, agenda items); information about that work (relevant conversations, files, status, metadata); how it all fits together; and then the people involved with the work (who’s responsible for what? which people need to be kept in the loop?).

The upshot of the latter data structure is having all the information we need when we need it. Where the enterprise social graph requires blasting a whole team with messages like “Hey, has anyone started working on this yet?”, we can just query the work graph and efficiently find out exactly who’s working on that task and how much progress they’ve made. Where the enterprise social graph model depends on serendipity, the work graph model routes information with purpose: towards driving projects to conclusions.

And I concluded that conventional work management tools stray too far from the work graph:

My sense is that the reason we are seeing a stall in the uptake of the current generation of work media apps (enterprise social networks, social ‘collaboration’ tools, etc.) is that they don’t stick close enough to the work graph and pull communications, and focus too much on the network and push communications.

Don’t get me wrong. I am not saying all we need is a shared file system and a way to chat. On the contrary. But we have to get the dynamics right. When people are talking about work, or sharing work objects, the objects must almost be treated as people too, with deep metadata, persistent identities, and following/follower relationships with other objects and people in the graph.
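
To make that concrete, here is a minimal sketch, in TypeScript, of what a work graph might look like as a data structure. The type names, fields, and query function below are hypothetical illustrations, not Asana’s actual schema or anyone else’s; the point is only that when work objects carry persistent identities, metadata, and follower relationships, a question like “who is working on this, and how far along is it?” becomes a query rather than a broadcast.

type Id = string;

interface Person {
  id: Id;
  name: string;
  following: Id[]; // people and work objects this person follows
}

interface WorkObject {
  id: Id;
  kind: 'task' | 'idea' | 'client' | 'goal' | 'agendaItem';
  title: string;
  status: 'open' | 'in progress' | 'done';
  metadata: Record<string, string>; // conversations, files, notes, by reference
  responsible: Id[]; // who is responsible for this work
  followers: Id[];   // who needs to be kept in the loop
  relatedTo: Id[];   // how this work fits together with other work
}

// Instead of blasting a whole team with "has anyone started on this yet?",
// query the graph for a task's owners and its current status.
function whoIsWorkingOn(
  work: Map<Id, WorkObject>,
  people: Map<Id, Person>,
  taskId: Id
): { owners: Person[]; status: string } | undefined {
  const task = work.get(taskId);
  if (!task) return undefined;
  const owners = task.responsible
    .map((id) => people.get(id))
    .filter((p): p is Person => p !== undefined);
  return { owners, status: task.status };
}

The same shape extends naturally: conversations, files, and goals become first-class nodes with identities of their own, rather than attachments trailing behind a message stream.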


In Why do Americans work so much?, I looked at the numbers showing that 85.8% of US men and 66.5% of US women work more than 40 hours per week, almost 500 hours per year more than the average French worker. And it doesn’t make us happy.

We should learn from the Danes, the happiest nation on Earth.

As Alexander Kjerulf points out,

Not only do Danes tend to leave work at a reasonable hour most days, but they also get five to six weeks of vacation per year, several national holidays and up to a year of paid maternity/paternity leave. While the average American works 1,790 hours per year, the average Dane only works 1,540, according to Organization for Economic Cooperation and Development (OECD) statistics. Danes also have more leisure hours than any other OECD workers and the link between sufficient leisure and happiness is well established in the research.

Only 10% of Danish workers are actively disengaged at work, compared to 18% of Americans, and one of the key factors is overwork. We need to expect — demand — a work environment that is based on work happiness — or what the Danes call arbejdsglæde — and

spend more time daydreaming, learning new skills, walking the dog, or relaxing with friends and family.


A few weeks ago, I wrote about the information deluge, and how companies are not doing a great job helping workers deal with it.

The Deloitte team cited Julian Birkinshaw and Jordan Cohen, who researched how knowledge workers can deal with demanding schedules and found, unsurprisingly, that the best course is to eliminate or delegate unimportant tasks and spend more time on important ones. Forty-one percent of an individual’s time is wasted on discretionary activities that could be handed over to others, making room for important, fulfilling activities or more down time.

The pair led “interventions” with 15 executives at different companies using this strategy, and it led to six fewer hours of desk work and two fewer hours of meetings. At one company, a sales exec chopped administrative tasks and meetings to focus on helping her staff. Sales increased 5 percent over a three-week trial period.

These are the costs of feeling entangled in a web of commitments that many company cultures engender. Instead of trying to decrease overcommitment and make the company fast-and-loose, there is a steady pressure to focus on nonessential, time-wasting activities: sitting in on weekly status meetings, reading reports from other groups, filling out expense reports. All of those should be eliminated.

When forty-one percent of the average person’s work time is spent on inessential tasks, it’s easy to see why companies slow to a crawl as they grow, because of more and more wasted time.

Pew releases AI, Robotics, and the Future of Jobs

It’s been quite a few days.

The Pew Research Internet Project released its AI, Robotics, and the Future of Jobs report this week, for which we were asked this question:

The economic impact of robotic advances and AI: Self-driving cars, intelligent digital agents that can act for you, and robots are advancing rapidly. Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?

I was prominently quoted on the first page. The full quote is this:

Stowe Boyd, lead researcher at GigaOM Research, said, “As just one aspect of the rise of robots and AI, widespread use of autonomous cars and trucks will be the immediate end of taxi drivers and truck drivers; truck driver is the number-one occupation for men in the U.S. Just as importantly, autonomous cars will radically decrease car ownership, which will impact the automotive industry. Perhaps 70% of cars in urban areas would go away. Autonomous robots and systems could impact up to 50% of jobs, according to recent analysis by Frey and Osborne at Oxford, leaving only jobs that require the ‘application of heuristics’ or creativity… An increasing proportion of the world’s population will be outside of the world of work—either living on the dole, or benefiting from the dramatically decreased costs of goods to eke out a subsistence lifestyle. The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?”

I’ve been interviewed several times about the report — by the AP, various newspapers, and others — and it’s been quoted all over, as in the NY Times, Fortune, and Forbes. Here’s an example from CIO:

Steve Rosenbush, The Morning Download: Facebook, Yahoo Developing New Models for Data Protection

AI, robotics, and the future of jobs. With Watson angling for the corner office, no job is safe from automation. Are the robots coming to take our jobs? The Pew Research Center recently asked nearly 2,000 technologists what the employment landscape will look like over the next decade, as artificial intelligence and robotics continue to gain ground. The experts, who included CEOs, tech journalists, Internet pioneers and researchers at tech vendors, are divided almost 50-50 on whether AI applications and robots will displace more jobs than they create. Some foresee more income inequality and more blue and white-collar displacement. “The central question of 2025 will be: What are people for in a world that does not need their labor, and where only a minority are needed to guide the ‘bot-based economy?” wonders Stow Boyd [sic], lead researcher at GigaOM research. Others are more optimistic, citing humanity’s ability to bounce back. “Technology will continue to disrupt jobs, but more jobs seem likely to be created,” said Jonathan Grudin, principal researcher for Microsoft Corp. So the short answer is that the jury’s still out on whether we’re heading towards a breakdown in social order or a new era of techno-based entrepreneurship. Someone should ask Watson what it thinks.

Interesting that the business publications, like Forbes and Fortune, were more interested in my predictions about robot sex partners than the impacts on work:

Robotic sex partners will be a commonplace, although the source of scorn and division, the way that critics today bemoan selfies as an indicator of all that’s wrong with the world.

One thing that has been made clear in the fallout since the report was published: there is a sizable contingent who — like me — are convinced that increasing automation will lead to a large reduction in employment, and there appears to be an equally vocal group that believes either that new work will arise that only people can do, or that governmental controls will be put in place so that people will be employed whether we need them to be or not.

The authors of the report, Aaron Smith and Janna Anderson, characterized the various positions of the hopeful (52%) and the concerned (48%):

Key themes: reasons to be hopeful

  1. Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
  2. We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
  3. Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
  4. Ultimately, we as a society control our own destiny through the choices we make.

Key themes: reasons to be concerned

  1. Impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
  2. Certain highly-skilled workers will succeed wildly in this new environment—but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
  3. Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.

I’ve been asked by Aaron to appear at the Pivot Conference this October to speak on this theme.

At the time that the telephone switch was being developed, projections showed that all the women in America would have to be telephone operators in the next 20 years to handle the growth of telephone use. Now the number of telephone operators is functionally zero. Yes, those workers transitioned from that work to other occupations. If 85% of other occupations are either eliminated or disrupted out of existence, and all that remains is a narrow suite of domains where AI and robots can’t play because of insufficient creativity or human emotion — like improvisational jazz, playing Go, or nursing the sick — we will hit a wall.

It’s hard to imagine that our economy can respond to this challenge as quickly as our technologies can make it a reality. We’re still living in a world where women are paid 87¢ for every $1 that men make, and women have been in the workforce for a hundred years. Culture is slow but technology’s fast.

In the next few weeks I plan to write a bit more about this, and it will form one thread of a research note I am working on about the Gig Economy.

Europe grapples with the always-on future of work, here today

I’ve spent quite a lot of time in Europe — two summers at the University of Lisbon, innumerable business trips, including over four months in Switzerland and England in 2006, for example — but I still feel like it’s a faraway place when I read about its labor laws.
Don’t get me wrong: I wish the US had a calmer work culture, but at the same time my deeply ingrained workaholickishness leads me to wry surprise when I hear about the legalisms surrounding work in Europe.
The newest is a Guardian story that relates a new agreement in France between unions and the consulting and tech industries that requires one million workers to switch off their work phones from 6pm to 9am:

Under the deal, which affects a million employees in the technology and consultancy sectors (including the French arms of Google, Facebook, Deloitte and PwC), employees will also have to resist the temptation to look at work-related material on their computers or smartphones — or any other kind of malevolent intrusion into the time they have been nationally mandated to spend on whatever the French call la dolce vita. And companies must ensure that their employees come under no pressure to do so. Thus the spirit of the law — and of France — as well as the letter shall be observed.

Actually, la dolce vita is Italian — the French would be la douceur de vivre, I think — but that’s a quibble. With a 35-hour workweek, it does seem sweet to me.
Volkswagen has also mandated email-free evenings and weekends, shutting down the routing of email for off-duty employees. The German Labor Ministry has followed along as well. But US email culture leads to an always-on sort of work life, as related in this BBC piece:

An advertising professional who moved from London to New York describes a different email culture.
“I remember on my second day seeing an email from a work colleague sent very late that evening. To my surprise someone replied to it, and then the interaction continued online. And lo and behold we ‘were working’. By contrast, in the UK, if I worked late I would often draft emails but save them in my inbox and send them first thing the next morning. That now seems ridiculous and archaic to me. Emails are constant here. It’s not that they expect you to answer out of office hours. More that everyone is ‘switched on’ all the time – that’s the culture and pace of New York. I never really heard the concept of work/life balance when I got to the US. There wasn’t much complaining as people’s expectations were different. It’s not just in the corporate world. When my family were moving here and trying to get an apartment I remember being surprised and delighted that our realtor was calling and emailing us late on a Saturday night.”

Freelancers and executives may be able to determine their own priorities, but there is a paternalistic aspect of European public policy that is generally absent in the US. In Europe, the presumption is that more junior or ‘lower-level’ employees might need to be protected from work policies that would push them past legal limits on working hours. For example, in the UK the great majority of workers cannot be made to work more than 48 hours per week, and if the expectations around a job mean that employees are basically required to read and write email in the evenings and on weekends, then that limit can easily be surpassed.
But these laws would never fly in the US. In fact, I bet that if I raised the idea in a discussion, people would actually laugh.

Anne Marie McEwan takes stock of the future of work

[Update: In an earlier version of this, I attributed Anne Marie McEwan’s post to Anne McCrossan. Apologies for the confusion.]

A friend of mine, Anne Marie McEwan, wonders about the barriers that might block the adoption of new ways of work, such as management’s investments in established control structures, and union unwillingness or cultural resistance to new work practices. But she read something I wrote here — Beneath the chatter about the Future Of Work lies a discontinuity — and she created a useful counter to my premise, which I stated like this:

We need to conceive of the company as a world — an ecology — built-up from each individual connecting to other individuals. And stringing these together into an interconnected whole involves associations like sets, and discernible elements like scenes, but increasingly, nothing like brigades and squads.

Anne points out that thinking about companies differently is not new. Gareth Morgan’s Images of Organization (which I explored here: Metaphors matter: Talking about how we talk about organizations) and Karl Weick’s The Social Psychology of Organizing both do that, as have many others. But, Anne correctly says, these ideas haven’t actually led to much change in organizations.

Anne hopes for consumerization of work as the path forward:

Anne Marie McEwan, Taking Stock Of The Future Of Work

I’ve written about the consumerisation of work in an ebook I’m putting the final touches to. Choice is beginning to personalise our experience of work – where and when – but at the moment this remains for the relative few. How far people can impose their expectations of work depends on the extent to which the balance of power shifts in their favour away from organisations. People whose skills are in short supply have always been able to dictate how they work.

The real opportunity for consumerisation of work for the many – choice in changing how we experience work, even if businesses continue to be organised and structured as they currently are – comes through connecting, sharing and learning outside of organisations.

My hope is that change will come through people taking responsibility for their own experience of work and learning, challenging the status quo, creating meaningful work for themselves and their colleagues – and ultimately for the business that employs them. I wrote about how they can do it in this post, How Mentored Open Online Conversations nurture 21st century skills.

It’s also the premise of the book I had published last year, Smart Working: Creating the Next Wave – that we are not prisoners of our work environment and that transformation of work and the systems that support it come about by people taking back control for themselves.

In my Discontinuity piece I make the case that there must be a disruption: a break with the past. Like Anne, I believe one pillar has to be people reengaging with their own work (see Dig your own hole, sharpen your own shovel) and affiliating with others in a shared, deep work culture that transcends the narrower and shallower organizational cultures ‘created’ in businesses.

I suggest that our thinking about change in the world of work must therefore be oppositional, at least in part. We must actively try to end many business practices, perhaps more of them than the new practices we hope to initiate.

Anne softens that a hair:

In an information-rich world, we need to be able to ask better questions. How do we think this future of work is going to arrive? Stowe talks about “a looming discontinuity, a break: perhaps a revolution led by a global movement.”

I have spent a long time researching new ways of working, working with people trying to innovate, and trying to do the same myself.  What I can say from all this experience is that there is always a visionary person leading the charge to do things differently, for example Chief Executives, Production Directors, Finance Directors (two I can think of), Area Managers, Marketing Directors, HR Directors and so on.

I wrote about my favourite example in Smart Working – a senior nurse who changed the performance culture on the hospital ward for which she was responsible. Apart from being inspiring, this gives me hope that there are equally value-driven people out there taking responsibility for changing how they and their colleagues experience work for their own and customers’ benefit – and ultimately the business.

So the discontinuity that Stowe envisages is, on reflection, perhaps not a discontinuity at all but a possible acceleration and exponential increase in the number of these people taking action – connected instigators kicking off change within their organisations and learning with others how to do it outside of organisational constraints. And using the loosely-connected, small and simple apps that Stowe speaks about. It’s about a whole lot more than apps though.

Stowe senses a flood coming. I hope he is right. I am more cautious about how long a global movement would take to get off the ground. I hope I am wrong.

I hope so, too, Anne.

I have created an organization — the Future of Work community — that is trying to build a global network of people who would like to learn more about new ways of work, and perhaps be part of that movement. Our first local chapters have formed in New York, Boston, and Austin, and the first monthly meetings start this month. Please join Anne and me, and many others. There is a lot of change ahead.

Juxtaposition: Dachis Group is acquired by Sprinklr, PostShift opens for business

I am launching a new form of blog post here: a juxtaposition, where I take two things that have happened at the same time, and I draw some analogy, metaphor, or correlation from that occurrence.
I read that Dachis Group, the once-upon-a-time social business strategy consultancy that morphed bit by bit into a social media analytics tool company, was acquired by Sprinklr, a social media consulting and technology firm. It seemed like they couldn’t get traction once the large consulting firms and technology companies were rolling out their own social business strategy capabilities.
Jeff Dachis, the co-founder and CEO of Dachis Group, responded to an email, saying ‘After a short period of operational transition, I will have a permanent advisory board role with the title of Chief Evangelist’, which sounds like he is leaving to start something new. All of the other folks that I know and respect who were involved in the company in the early days have left, with the exception of Peter Kim and Dion Hinchcliffe. I have emails in to them. But others — Dave Gray, Jevon MacDonald, and Kate Niederhoffer — left years ago, and Lee Bryant and the former Headshift crew that became Dachis Europe left not too long ago, too.
In an eerily well-timed announcement, Lee Bryant posted today that his new business, PostShift, is officially open for business. I wrote about PostShift in July, when he left Dachis and first announced the firm (see Lee Bryant leaves Dachis Group, announces something new).
Lee and company are leapfrogging all the issues of the social business controversy, and attacking today’s real problem:

Our mission: To build 21st Century businesses.
We believe that organisations cannot fully benefit from social technology without also addressing questions of structure, culture and practice in a serious way. We will be working with established firms, to look beyond social technology adoption towards new ways of working and more agile management structures; and with investors and startups, to help them scale without losing what made them special in the first place.

New ways of work, and more agile management styles: sounds like a leanership orientation, to me.
So, the one side of the juxtaposition is the acquisition of Dachis and the end of the era of social business that Dachis personified, and the decline of the principles that motivated its creation in the first place. As I have said, ‘Social Business’ isn’t dead, but it isn’t enough, either. The emotive force of the term has declined as it’s been bandied around by vendors and gurus touting a hundred different takes on social, without any real crystallization.
On the other side is one of the wisest and deepest-thinking crews of people, Lee Bryant and the PostShifters, who have reoriented to today’s business challenge: how to look beyond work tech — now largely social in a fundamental way, along with other characteristics — and to tackle reworking work in a way that will match the new postnormal, 21st century world. A world vastly different from the mid zeroes, when social business started to seem promising, and one where everything must be reevaluated.
We’re talking about making the shift, here at GigaOM Research. So don’t be surprised if you land here one day soon to see that my posts and reports are published under The Future Of Work, instead of Social.
[Update – 3:42pm 20 Feb 2014: Susan Scrupski of Change Agents Worldwide mentioned on Twitter that her time at Dachis wasn’t mentioned. An oversight in a way, although I didn’t know Susan when Dachis was founded. She folded her 2.0 Adoption Council into Dachis a few years ago, and left the company just over a year ago. She’s founded CAWW and brought aboard a great constellation of very knowledgeable folks. I plan to interview her to learn more about the network and her goals for it.]

Sharing the map of the future

I am attending IBM’s Connect conference in Orlando, and there’s some really interesting development work being demoed here, in particular a new take on enterprise email that might be the social email I have been writing about for a few years. But that’s not what this post is about. I will write a post about IBM Connections Mail Next soon — once I get a demo or better screenshots — but let me quote Jay Baer, the MC of the event and a well-known blogger:

@jaybaer: Speaking as a consultant, not emcee: this IBM Mail Next software is straight-up amaze-balls. Want. #IBMconnect

I agree with Jay, but, no, this post is about transparency and the future of work.

IBM is an advocate for a partially articulated vision of the company of the near future. That vision is implicit in the products that the company has in the field, and even more so in its coming products — like Mail Next, which is going to be rolled out in the fourth quarter of 2014 and after, and other announced products, like new people analytics solutions from Kenexa, and many, many more.

And then there are explicit characterizations of this coming new-generation company, and the environment that company will have to operate in. IBM boils down the swirling forces impacting the world in many presentations: mobile, social, big data, smarter workforce, analytics, and others. But how does a monster company like IBM boil things down? How do they determine that these factors are critical, and others less so?

Taking a glance at the product rollout of something like Mail Next — which won’t be available for another six months at least, and which integrates with a long list of other moving parts, like IBM Connections work management, chat, and who knows how many other services — I think there is another implicit indication of IBM’s vision of its clients of the near future: they want vertically integrated solutions, and they are not in a hurry. If anything, they are slow to roll out technology, and they want much of the risk of rollout squeezed out.

Inside of IBM, product marketing managers, sales leaders, and engineers are tied together in some multi-layered, world-spanning, and multivariate discussion about the future of work, both in general, and specifically with regard to IBM’s customers. But that discussion isn’t really shared with us, outside of IBM. Yes, we see the various outputs and results of the discussions, in both the shape and style of products — and in what those products say is important and what isn’t — and we can read white papers and blog posts that characterize some approved version of the discourse that must be going on in IBM, but we don’t have access to the unfiltered starting points of those discussions.

In fact, we don’t really know how IBM — and other large companies, for that matter — go about their deliberations. Are they running ten thousand small experiments, trying to innovate their way to the future? They may be doing so in research labs, but it doesn’t seem that they are oriented that way in the product teams. So, IBM is not a company that is finding its way into the future by sending out a hundred independent explorers, but rather by compiling a map of the future by looking at dozens of the best existing maps, and then pointing an armada over the horizon in that one direction.

In a sense, the market of start-ups is already in the business of doing the work of the hundred for IBM, and so perhaps the IBM approach benefits from both the innovation at the edge, and the maturation of technologies — and lessened risk — that longer-range development timescales can yield.

But I am reminded of my own words, from an IBM get-together in New York a few months ago, when I said,

Because of the increasing pace of business today, the only sensible strategy is to become more risk tolerant, but companies are not in general adopting that mindset. Things are moving faster but business leaders are not moving to reduce the friction in their business to keep pace.

Fifty years ago the average company lifetime was 75 years; today it is fifteen. I bet that in five years it will be down to ten, and that’s because management still thinks it’s smarter to operate with a foot on the brake instead of on the accelerator. It seems like that should be true (it’s intuitive), but it isn’t true anymore.

We’ve moved into a new economy where the fundamental rules have changed, and the operating premises of the past are not only broken, but dangerous.

My bet is that IBM’s customers will need to become more risk tolerant in the near future, but IBM — which may in fact agree — doesn’t want to take on all the ramifications of that discussion, including how it will change the relationship between IBM and its customers.

My final thought: IBM should open up whatever processes exist within the company for determining that map of the near future, to share the discussions — the disagreements and dissent — and not just the final polished outcomes of that internal deliberation. IBM and its customers could both benefit from a more open and transparent sharing of those activities. This is not to say that IBM is actively attempting to conceal something, but just that we are in a time of such rapid change that the map of the future has to be constantly redrawn: it can’t be stable for even 18 to 24 month product lifecycles.

And yes, that fact itself is unsettling for some, but it is better to share that fundamental reality than to pretend it’s not true. There is a saying that you cannot delay the dawn by keeping your eyes closed, and all businesses should start with that as a foundation for all deliberations about the future.