Research Agenda of Larry Hawes, Lead Analyst

Greetings! As my colleague Stowe Boyd announced yesterday, I am part of a fabulous group of smart, well-respected people who have joined the rebooted Gigaom Research as analysts. I was affiliated with the original version of Gigaom Research as an analyst, and am very pleased to be taking on the more involved role of Lead Analyst in the firm’s new incarnation, as detailed in Stowe’s post.
For those of you who don’t know me, I’ve spent the last 16 years working as a management and technology consultant, enterprise software industry analyst, writer, speaker and educator. My work during that time has been focused on the nexus of communication, collaboration, content management and process/activity management within and between organizations ─ what I currently call ‘networked business’.
I intend to continue that broad line of inquiry as a Lead Analyst at Gigaom Research. The opportunity to work across technologies and management concepts ─ and the ability to address and interrelate both at once ─ is precisely what makes working with Gigaom Research so attractive to me. The firm is unusual in that respect compared to traditional analyst organizations, which pigeonhole employees into discrete technology or business strategy buckets. I hope that our customers will recognize that difference and benefit from the holistic viewpoint our analysts provide.
With the above in mind, I present my research agenda for the coming months (and, probably, years). I’m starting at the highest conceptual level and working toward more specific elements in this list.

Evolution of Work

Some analysts at Gigaom Research are calling this ‘work futures’. I like that term, but prefer ‘evolution of work’, because it allows me to bring the past and, most importantly, the current state of work into the discussion. There is much to be learned from history, and we need to address what is happening now, not just what may be coming down the road. This research stream encompasses much of what I and Gigaom Research are focused on in our examination of how emerging technologies may change how we define, plan and do business.

Networked Business

This is a topic on which I’ve been writing and speaking since 2012. I’ve defined ‘networked business’ as a state in which an interconnected system of organizations and their value-producing assets works toward one or more common objectives. Networked business is inherently driven by connection, communication and collaboration, hence my interest in the topic.
While the concept of networked business is not new, it has been gaining currency in the past few years as a different way of looking at how we structure organizations and conduct their activities. As I noted in the first paragraph of this post, there are many technologies and business philosophies and practices that support networked business, and I will do my best to include as many as possible in my research and discussions.

Networks of Everything

This research stream combines two memes that are currently emerging and garnering attention: the Internet of Things and the rise of robots and other intelligent technologies in the workplace. In my vision, networks of everything are where humans, bots, virtual assistants, sensors and other ‘things’ connect, communicate and collaborate to get work done. The Internet, Web, cellular and other types of networks may be used in isolation or, more likely, in combination to create networks of everything.
I had a book chapter on this topic published earlier this year, and I’m looking forward to thinking and writing more about it in the near future.

Microservices

How do we build applications that can support business in a heavily networked environment? While the idea of assembling multiple technology components into a composite application is not new (object-oriented programming and Service-Oriented Architecture have been with us for decades), the idea continues to gain acceptance and become more granular in practice.
I intend to chronicle this movement toward microservices and discuss how the atomization of component technology is likely to play out next. As always, my focus will be on collaboration, content management and business process management.
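To make the idea concrete, here is a minimal sketch of the kind of single-purpose service I have in mind: one narrowly scoped function exposed over HTTP so it can be composed with other services into a larger application. The service, its endpoint and its data are hypothetical illustrations (written in Python for brevity), not a reference implementation.

```python
# A minimal, hypothetical sketch of the microservice idea: one small,
# single-purpose service exposed over HTTP, meant to be composed with
# others into a larger application. Names and endpoints are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class TaskStatusService(BaseHTTPRequestHandler):
    """A single-purpose service: report the status of a (hypothetical) task."""

    # In a real deployment this state would live in a datastore, not in memory.
    TASKS = {"42": "in-progress", "43": "complete"}

    def do_GET(self):
        # Expect paths like /tasks/42/status
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "tasks" and parts[2] == "status":
            status = self.TASKS.get(parts[1])
            if status is not None:
                body = json.dumps({"task": parts[1], "status": status}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()


if __name__ == "__main__":
    # Other services (or a chat bot, or a case-management app) would call
    # this endpoint rather than embedding the logic themselves.
    HTTPServer(("localhost", 8080), TaskStatusService).serve_forever()
```

The point of the sketch is the granularity: the smaller and more self-contained the component, the more freely it can be recombined into new applications.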

Adaptive Case Management and Digital Experience Management

These two specific, complementary technologies have also been gathering more attention and support over the last two years and are just beginning to hit their stride. I see the combination of these technologies as an ideal enabler of networked business and as early exemplars of component architecture at the application level, not the microservice one (yet).
I’ve written more about ACM, but I’m eager to expand on the early ideas I’ve had about how it can work together with DEM to support networked business.

Work Chat

Simply put, I would be remiss not to investigate and write about the role of real-time messaging technology in business. I’ve already called work chat a fad that will fade in time, but it needs to be addressed in depth for Gigaom Research customers, because there are valid use cases and it will enjoy limited success. I will look at the viability of work chat as an extensible computing platform, not just as a stand-alone technology. Fitting with my interest in microservices, I will also consider the role that work chat can play as a service embedded in other applications.
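As a small illustration of that embedded-service idea, the sketch below shows an application posting a notification into a chat channel through an incoming webhook. The webhook URL and payload shape are placeholders invented for illustration; each chat product defines its own.

```python
# A minimal sketch of "work chat as a service embedded in other applications":
# an application posts a notification into a chat channel via an incoming
# webhook. The URL and payload shape are hypothetical placeholders; real
# chat products define their own formats.
import json
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/INSERT-YOUR-TOKEN"  # placeholder


def notify_channel(text: str) -> int:
    """Post a short message to a chat channel; return the HTTP status code."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status


if __name__ == "__main__":
    # e.g. a business-process application flagging a stalled case for a team
    notify_channel("Case #1234 has been waiting on approval for 3 days.")
```

The pattern matters more than the particular API: the chat tool becomes a destination other systems can write to, rather than a silo people have to visit.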
Phew! I’m tired just thinking about this, much less actually executing against it. It’s a full plate, a loaded platter really. The scariest thing is that this list is likely incomplete and that there are other things I will want to investigate and discuss. However, I think it represents my research and publishing interests pretty well.
My question is, how does this align with your interests? Are there topics or technologies that you would like to see me include in this framework? If so, please let me know in a comment below. Like all research agendas, mine is subject to change over time, so your input is welcomed and valued.

The Return of Middle Managers

“That experiment broke. I just had to admit it.” — Ryan Carson, CEO of Treehouse Island, on his attempt to run the company without managers

There is currently a widely held view among organizational design experts and pundits that managers, particularly middle managers, are a harmful artifact of hierarchically structured, command-and-control organizations. Conventional wisdom holds that middle managers, with their responsibilities and stereotypical behaviors, are outdated and severely constrict the speed at which a business can operate. Flat, democratic organizations made up of loose, recombinant relationships have gained favor in the org design world because they are believed to enable agility and efficiency.
There’s just one problem with that view: it’s not entirely accurate. It represents an ideal that may be right for some organizations, but very wrong for many others.
Carson and Treehouse Island’s failed experiment was one of the examples given in a recent Wall Street Journal article (behind paywall) titled “Radical Idea at the Office: Middle Managers”. The common thread between the companies mentioned in the article was that the elimination of bosses had the opposite effect of what had been envisioned. Productivity decreased because workers weren’t sure of their responsibilities and couldn’t forge consensus-based decisions needed to move forward. Innovation also waned, because new ideas went nowhere without a management-level individual to champion and fund them. Employee morale even took a hit, because no one took over the former middle management’s role of providing encouragement and motivation when they were needed.
Research on more than 100 organizations conducted by an INSEAD professor led to this conclusion, cited in the WSJ piece:

“Employees want people of authority to reassure them, to give them direction. It’s human nature.”

Enabling Technologies that Don’t

Another problem experienced by many of the organizations mentioned in the WSJ article was that technologies meant to enable employees to work productively in a manager-less workplace failed to do so. Enterprise chat systems were specifically fingered as a culprit, for a variety of reasons.
At Treehouse Island, which had never used email, decision-making was severely compromised by employees opining on chat threads when they had no expertise on the given subject. This led to “endless discussions”. The chat technology drove conversations, but ideas rarely made it past discussion to a more formal plan. Work tasks informally noted and assigned without accountability in the chat application mostly got lost in the shuffle and weren’t completed. Treehouse Island eventually turned to other communications channels and even acknowledged that email has valid uses.

Worker Education and Training, Not Managers, Are the Problem

While I agree with the assessment that human nature is a barrier to effective manager-less workplaces, I also think that our base impulses can be minimized or completely overcome by alternative, learned attitudes and behaviors. Society and institutions in the United States have programmed multiple generations to submit to authority, seeking and accepting its orders and guidance. Our educational system has largely been designed to produce ‘loyal and reliable’ workers who can thrive in a narrowly defined role under the direction of a superior. Putting individuals who have been educated this way into situations where they must think for themselves and work with others to get things done leaves them like fish out of water.
As for enterprise chat technology, it has seen documented success when deployed and used to help small teams coordinate their work. However, most of those teams working in chat channels either have a single, designated manager with the authority to make things happen, or they are able to call upon a small number of individuals who can and will assume unofficial, situational leadership roles when needed. Absent people who can act with authority, chat-enabled groups become mired in inaction, as documented in the WSJ article. As I put it in my recent Gigaom Research post on enterprise real-time messaging:

“The real reason that employees and their organizations continue to communicate poorly is human behavior. People generally don’t communicate unless they have something to gain by doing so. Power, influence, prestige, monetary value, etc. Well-designed technology can make it easier and more pleasant for people to communicate, but it does very little to influence, much less actually change, their behaviors.”

We will see more experiments with Holacracy and other forms of organization that eliminate layers of management and depend on individuals to be responsible for planning, coordinating and conducting their own work activities. Some will succeed; most will fail. We can (and should!) create and implement new technologies that, at least in theory, support the democratization of work. However, until systemic changes are made in the way people are educated and trained to function in society and at work, companies without managers will remain a vision, not a common reality.

Interview with Stephen Wolfram on AI and the future

Few people in the tech world can truly be said to “need no introduction.” Stephen Wolfram is certainly one of them. But while he may not need one, the breadth and magnitude of his accomplishments over the past four decades invite a brief review:

Stephen Wolfram is a distinguished scientist, technologist and entrepreneur. He has devoted his career to the development and application of computational thinking.

His Mathematica software system, launched in 1988, has been central to technical research and education for more than a generation. His work on basic science—summarized in his bestselling book A New Kind of Science—has defined a major new intellectual direction, with applications across the sciences, technology, and the arts. In 2009 Wolfram built on his earlier work to launch Wolfram|Alpha to make as much of the world’s knowledge as possible computable—and accessible on the web and in intelligent assistants like Apple’s Siri.

In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language, which dramatically raises the level of automation and built-in knowledge available in a programming language, and makes possible a new generation of readily deployed computational applications.

Stephen Wolfram has been the CEO of Wolfram Research since its founding in 1987. He was educated at Eton, Oxford, and Caltech, receiving his PhD in theoretical physics at the age of 20.

 

Publisher’s Note: The following interview was conducted on June 27, 2015.  Although it is lengthy, weighing in at over 10,000 words, it is published here in its entirety with only very minor edits for clarity.

Byron Reese: So when do you first remember hearing the term “artificial intelligence”?

Stephen Wolfram: That is a good question. I don’t have any idea. When I was a kid, in the 1960s in England, I think there was a prevailing assumption that it wouldn’t be long before there were automatic brains of some kind, and I certainly had books about the future at that time, and I’m sure that they contained things about them, how there would be some electronic brains, and so on. Whether they used the term “artificial intelligence,” I’m not quite sure. Good question. I don’t know.

Would you agree that AI, up there with space travel, has kind of always been the thing of tomorrow and hasn’t advanced at the rate we thought it would?

Oh, yes. But there’s a very definite history. People assumed, when computers were first coming around, that pretty soon, we’d automate what brains do just like we’ve automated what arms and legs do, and so on. Nobody had any real intuition for how hard that might be. It turned out, for reasons that people simply didn’t understand in the ’40s, and ’50s, and ’60s, that lots of aspects of it were quite hard, and also, the specific problem of reproducing what human brains choose to do may not be the right problem. Just like if you want to build a transportation system, having it based on legs is not the best engineering solution. There was an assumption that we can automate brains just like you can automate mechanical kinds of things, and it’s only a matter of time, and in the early ’60s, it seemed like it would be a short time, but that turned out not to be true, at least for some things.

What is the state of the technology? Have we built something as smart as a bird, for instance?

Well, what does it mean to make something that is as smart as X? In the history of artificial intelligence, there’s been a continuing set of tests that people have come up with. If you can do X, then we’ll know you’re as smart as humans, or something like that. Almost every X that’s been defined so far, machines have ended up being able to do, though the methods that they use to do it are usually utterly different from the ones that seem to be involved with humans. So the types of things that machines find easy are very different from those kinds of things that people find easy. I think it’s also the case that a lot of things people say, “Gosh, we should automate this,” the mode of automation ends up being different from just sort of the way that you would—sort of if you had a brain in a box, the way that you would use that. Probably a core question about AI is, “How do you get all of intelligence?” For that to be a meaningful question, one has to define what one means by “intelligence.” This, I think, gets us into some bigger kinds of questions.

Let’s dive into those questions. But first, one last “groundwork” question: Do you think we’re at a point with AI where we know what to do, and it’s just that we’re waiting on the hardware again? Or do we have plenty of hardware, and are we still kind of just figuring out how to do it?

Well, it depends what “it” is. Let’s talk a little bit more systematically about this notion of artificial intelligence, and what we have, what we could have, and so on. I suppose artificial intelligence is kind of a—it’s just words, but what do we think those words mean? It’s about automating the intellectual activities that humans do. The story of technology has been a long one of automating things that humans do; technology tends to be about picking a task where we understand what the objective is because humans are already doing it, and then we make it possible to do that in an automatic way using technology.

So there’s a whole class of tasks that seem to be associated with what brains and intelligence and so on deal with, which we can also think of automating in that way. Now, if we say, “Well, what would it take? How would I know if this box that’s sitting on my desk was intelligent?” I think this is a slightly poorly defined question because we don’t really have an abstract definition of intelligence, because we actually only have one example of intelligence that we definitively think of as such, which is humans and human intelligence. It’s an analogous situation to defining life, for example. Where we have only one example of that, which is life on Earth, and all the life on Earth is connected in a very historical way—it all has the same RNA and cell membranes, and who knows what else—and if we ask ourselves this sort of abstract question, “How would we recognize abstract life that doesn’t happen to share the same history as all the particular kinds of life on Earth?” That’s a hard question. I remember, when I was a kid, the first spacecraft landed on Mars, and they were kind of like, “How do we tell if there’s life here?” And they would do things like scoop the soil up, and feed it sugar, and see whether it produced carbon dioxide, which is something that is unquestionably much more specific than asking the general question, “Is there life there?”

And I think what one realizes in the end is that these abstract definitions of life—it self-reproduces, it does weird thermodynamic things—none of them really define a convincing boundary around this concept of life, and I think the same is true of intelligence. There isn’t really a bright-line boundary around things which are the general category of intelligence, as opposed to specific human-like intelligence. And I guess, in my own science adventures, I gradually came to understand that, in a sense, sort of, it’s all just computation. That you can have a brain that we identify, okay, that’s an example of intelligence. You have a system that we don’t think of as being intelligent as such; it just does complicated computation. One of the questions is, “Is there a way to distinguish just doing complicated computation from being genuinely intelligent?” It’s kind of the old saying, “The weather has a mind of its own.” That’s sort of a question of, “Is that just pure, primitive animism, or is there, in fact, at some level some science to that?” Because the computations that are going on in the fluid dynamics of the weather are really not that different from the kinds of computations that are going on in brains.

And I think one of the big conclusions that came out of lots of basic science that I did is that, really, there isn’t a distinction between the intelligent and the merely computational, so to speak. In fact, that observation is what got me launched on doing practical things like building Wolfram|Alpha, because I had thought for decades, “Wouldn’t it be great to have some general system that would take knowledge, make it computational, make it so that if there was a question that could in principle be answered on the basis of knowledge that our civilization has accumulated, we could, in practice, do it automatically.”

But I kind of thought the only way to get to that end result would be to build a sort of brain-like thing and have it work kind of the same—I didn’t know how—as human brains work. And what I realized from the science that I did is that that just doesn’t make sense. It’s sort of a fool’s errand to try, because actually, it’s all just computation in the end, and you don’t have to go through this sort of intermediate route of building a human-like, brain-like thing in order to achieve computational knowledge, so to speak.

Then the thing that I found interesting is there are tasks that. … So, if we look at the history of AI, there were all these places where people said, “Well, when computers can do calculus, we’ll know they’re intelligent, or when computers can do some kind of planning task, we’ll know they’re intelligent.” This, that, and the other. There’s a series of these kinds of tests for intelligence. And as we all know, in practice, the whole sequence of these things has been passed by computers, but typically, the computers solve those problems in ways that are really different from brains. One way I like to think about it is when Wolfram|Alpha is trying to solve a physics problem, for example. You might say, “Well, maybe it can solve it in a brain-like way, just like people did in the Middle Ages, where it was a natural philosophy, where you would reason about how things should work in the world, and what would happen if you pushed this lever and did that, and [see] things had a propensity to do this and that.” And it would be all a matter of human-like reasoning.

But in fact, the way we would solve a problem like that is to just turn it into something that uses the last 300 years of science development, turn it into a bunch of mathematical equations, and then just industrially solve those equations and get the answer, kind of doing an end run around all of that human-like, thinking-like, intelligence-like stuff. But still, one of the things that’s happened recently is there are these tasks that have been kind of holdouts, things where they’re really easy for humans, but they’ve seemed to be really hard for computers. A typical example of that is visual object recognition. Is this thing an elephant or a bus? That’s been a type of question that’s been hard for computers to answer. The thing that’s interesting about that is, we can now do that. We have this website, imageidentify.com, that does a quite respectable, not-obviously-horribly-below-human job of saying, “What is this picture of?” And what to me is interesting, and an interesting episode in the history of science, is the methods that it’s using are fundamentally 50 years old. Back in the early 1940s, people were talking about, “Oh, brains are kind of electrical, and they’ve got [things] like wires, and they’ve got like computer-like things,” and McCulloch and Pitts came up with the whole neural network idea, and there was kind of the notion that the brain is an electrical machine, and we should be able to train it by showing it examples of things, and so on.

I worked on this stuff around 1980, and I played around with all kinds of neural networks and tried to see what kinds of behaviors they could produce and tried to see how you would have neural networks be sort of trained, or create attractors that would be appropriate for recognizing different kinds of things. And really, I couldn’t get them to do anything terribly interesting. There was a fair amount of interest around that time in neural networks, but basically, the field—well, it had a few successes, like optical character recognition stuff, where you’re distinguishing 26 characters, and so on. It had a few successes there, but it didn’t succeed in doing some of the more impressive human-like kinds of things, until very recently. Recently, computers, and GPUs, and all that kind of thing became fast enough that, really—there are a bunch of engineering tricks that have been invented, and they’re very clever, and very nice, and very impressive, but fundamentally, the approach is 50 years old, of being able to just take one of these neural network–like systems, and just show it a whole bunch of examples and have it gradually learn distinctions between examples, and get to the point where it can, for example, recognize different kinds of objects and images. And by the way, when you say “neural networks,” you say, “Well, isn’t that an example of why biology has been wonderful, and we’re merely following on the coattails of biology?” Well, biology certainly gave us a big clue, but the fact is that the actual things we use in practice aren’t particularly neural-like. They’re basically just compositions of functions. You can think of them as just compositions of functions that have certain properties, and the one thing that they do have is an ability to incrementally adjust, that allows one to do some kind of incremental learning process. The fact that they get called neural networks is because it historically was inspired by how brains work, but there’s nothing really neurological about it. It’s just some kind of, essentially, composition of simple programs that just happens to have certain features that allow it to be taught by example, so to speak.
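A minimal sketch of that “composition of functions, incrementally adjusted” idea, added here purely for illustration and not part of the interview; the toy task, layer sizes and learning rate are arbitrary choices.

```python
# Illustrative sketch (not from the interview): a "neural network" as a
# composition of simple functions whose parameters are incrementally
# adjusted from examples. Toy task (XOR), sizes and learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two composed functions: x -> tanh(x W1 + b1) -> sigmoid(h W2 + b2)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: apply the composition of functions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Incremental adjustment: nudge parameters to reduce squared error.
    err = p - y
    grad_out = err * p * (1 - p)             # back through the sigmoid
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # back through the tanh
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(np.round(p, 2))  # after training, close to [[0], [1], [1], [0]]
```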

Anyway, this has been a recent thing that for me is one of the last major things where it’s looked like, “Oh, gosh! The brain has some magic thing that computers don’t have.” We can go through all kinds of different things about creativity, about language, about this and that and the other, and I think we can put a checkmark against essentially all of them at this point as, yes, that component is automatable. Now, I think it’s an interesting thing that I’ve been slowly realizing recently. It’s kind of a hierarchy of different kinds of what one might call “intelligent activity.” The zero-th level of the hierarchy, if we take the human example, is reflexive-type stuff, stuff that every human is physiologically wired to do, and it’s just part of the hardware, so to speak.

The first level is stuff where we have a plain brain, so to speak, and upon being actually exposed to the world, that plain brain learns certain kinds of things, like physiologic recognition. But that has to be done separately for every generation of the species. It’s not something where the parent can pass to the child the knowledge of how to do physiologic recognition, at least not in the way that it’s directly wired into the brain. Then the second level, the level that we as a species have achieved, and doesn’t look like any other species has achieved, is being able to use language and so on to pass knowledge down from generation to generation, which allows us to build up this thing that goes beyond pure one-brain intelligence, so to speak, and make something which is a collective, progressively growing achievement, which is that corpus of human knowledge.

And the thing that I’ve been interested in is that idea that there is language and knowledge, and that we can create it as a long-term artifact, so what’s the next step beyond that? What I realized is that I think a bunch of things that I’ve been interested in for many decades now is—it’s slowly coming into focus for me that this is actually really the thing that one should view as the next step in this progression. So we have computer languages, but computer languages tend not to be set up to codify knowledge in the kind of way that our civilization has codified knowledge. They tend to be set up to say, “Okay, you’re going to do these operations. Let’s start from the very basic primitives of the computer language, and just do what we’re going to do.”

What I’ve been interested in is building up what I call “knowledge-based language,” and this Wolfram Language thing that I’ve basically been working on for 30 years now is kind of the culmination of that effort. The point of such a language is that one’s starting from this whole corpus of knowledge that’s been built up by our civilization, and then one’s providing something which allows one to systematically build from that. One of the problems with the existing corpus of knowledge that our civilization has accumulated is that we don’t get to do knowledge transplants from brain to brain. The only way we get to communicate knowledge from brain to brain is turn it into something like language, and then reabsorb it in another brain and have that next brain go through and understand it afresh, so to speak.

The great thing about computer language is that you can just pick up that piece of language and run it again and build on top of it. Knowledge usually is not immediately runnable in brains. The next brain down the line, so to speak, or of the next generation or something, has to independently absorb the knowledge before it can make use of it. And so I think one of the things that’s pretty interesting is that we are to the point where when we build up knowledge in our civilization, if it’s encoded in this kind of computable form, this sort of standardized encoding of knowledge, we can just take it and expect to run it, and expect to build on it, without having to go through this rather biological process of reabsorbing the knowledge in the next generation and so on.

I’ve been slowly trying to understand the consequences of that. It’s a little bit beyond what people usually think of as just AI, because AI is about replicating what individual human brains do rather than this thing that is more like replicating, in some more automated way, the knowledge of our civilization. So in a sense, AI is about reproducing level one, which is what individual brains can learn and do, rather than reproducing and automating level two, which is what the whole civilization knows about.

The super-computing phone: AT&T’s predictions for devices in 2020

In the year 2020, today’s smartphones will look like the glorified PDAs of the last decade, according to AT&T SVP Jeff Bradley. What should consumers expect? Handsets with nearly 30 GHz of processing power, terabytes of internal storage and half-gig connections to the mobile network.

The Future of Work Means Rewiring Your Company

It isn’t just the nature of work that is changing thanks to the web and a generation of increasingly mobile and inter-connected workers, says John Hagel, co-chairman of Deloitte’s Center for the Edge — it’s the entire way in which many companies operate.

iOS 4.2 Beta: Indicator of a Future All-Cloud Apple?

In an article on AppleInsider, Josh Ong details changes in the upcoming iOS 4.2 update. It seems to blur the line between MobileMe and a user’s Apple ID. It’s a subtle addition, but it might just be the seed of a revolution in personal computing.

The Future of Web Working: Management

If you work from home now, congratulate yourself: chances are, you’ll be managing the web workers of tomorrow. As businesses move their workers out of central offices and embrace the distributed model, even jobs closer to the central core of an organization will be done remotely.