Twitter is poised to make an evolutionary leap over the next year or so that, if all goes well, could dwarf the company’s impact thus far. The tenfold increase in frequency it’s offering to developers who access its feed — what some call the full “fire hose” of information — will endow a vast array of apps with the unmitigated power of the real-time web. And its increasing use of location-based information adds another game-changing dimension. These two developments will amplify Twitter’s abilities to the point that, before long, the gating factor on its value will be the people tweeting. At that point, Twitter will make another evolutionary leap in usefulness when it is taken over by non-human users.
Human users of Twitter 1.0 have already proven themselves to be overly passive, poor contributors. A Sysomos analysis last summer pointed out that 75 percent of tweets were created by just 5 percent of accounts and that about 24 percent of all tweets were made by automated “bots.” (Those bots, by the way, represented about a third of that 5 percent of users doing most of the work.) As Twitter flows increasingly through sophisticated third-party apps — applications that are richer and more valuable the more information they consume — only bots will be able to keep up with those apps’ insatiable demand for data.
For example, Twitter’s potential in mapping earthquakes was demonstrated last week during a 4.1 quake in the San Francisco Bay Area that saw 296 quake-related tweets per minute. A system that relies on the uncoordinated actions of many individual Twitter users may be well fed by tech-savvy Bay Area residents. But what about Tornado Alley in Nebraska or hurricane paths in Alabama, where Twitter use isn’t as high? The National Weather Service is hoping to use tweets to track developing storms, but that effort would be greatly limited wherever fewer people use Twitter and even fewer opt to tweet about the weather quickly enough to be useful. A network of tweeting weather vanes and windmills, on the other hand, could be more reliable and informative.
Commercial use of Twitterbots could also explode. Companies already use Twitter to find unhappy customers. But what if the products themselves could tweet whenever they were in use? What if each product tweeted when it broke or when there was an outage? Or when a customer pressed a “dissatisfied” button?
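To make the idea concrete, here is a minimal sketch of such a “tweeting product.” The device ID, event names, and message format are all invented for illustration; a real deployment would post these messages through Twitter’s API rather than collecting them in a local outbox.

```python
# Hypothetical sketch of a product that tweets its own status events.
# The device ID, event names, and message format are illustrative
# assumptions; a real product would post via the Twitter API.

MAX_TWEET_LEN = 140  # classic tweet character limit


def format_status(device_id: str, event: str, detail: str = "") -> str:
    """Build a tweet-sized status line for a machine event."""
    msg = f"#{event} device:{device_id} {detail}".strip()
    return msg[:MAX_TWEET_LEN]  # truncate to fit in one tweet


class TweetingWidget:
    """A product that reports its events as would-be tweets."""

    def __init__(self, device_id: str):
        self.device_id = device_id
        self.outbox = []  # stand-in for posting to Twitter

    def report(self, event: str, detail: str = "") -> None:
        self.outbox.append(format_status(self.device_id, event, detail))


widget = TweetingWidget("wdgt-0042")
widget.report("inuse")
widget.report("outage", "power lost 14:02 UTC")
widget.report("dissatisfied", "customer pressed feedback button")

for tweet in widget.outbox:
    print(tweet)
# → #inuse device:wdgt-0042
#   #outage device:wdgt-0042 power lost 14:02 UTC
#   #dissatisfied device:wdgt-0042 customer pressed feedback button
```

The hashtag-plus-tags format matters: it keeps each event to a single, machine-parseable tweet, which is what lets other apps consume the stream.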
Large companies may prefer their own internal systems over Twitter for this kind of machine-to-machine communication. But there are at least two reasons to tweet instead: First, it cost-effectively offloads the expense and operational complexity of maintaining a large-scale communications system onto Twitter. Second, it allows for the possibility of mashing up machine-sent data with the human kind in interesting, perhaps unpredictable, ways. Going back to the National Weather Service example, mashing up the data from scores of tweeting weather vanes with human-tweeted reports of funnel-cloud sightings could produce a richer, more intelligent storm chronicler. App makers could also mash up data from seemingly unrelated spheres — perhaps tracking the development of storms together with the flow of evacuees in tweeting cars, for example.
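A hedged sketch of that weather mashup: the tweet formats, the “county:” and “speed:” tags, and the two-reading threshold below are invented for illustration, not a real NWS or Twitter convention.

```python
from collections import defaultdict

# Hypothetical sketch of the mashup: correlate machine-generated
# weather-vane tweets with human storm reports from the same county.
# The tweet formats, tag names, and threshold are assumptions.

vane_tweets = [
    "#vane county:lancaster wind:SW speed:48",
    "#vane county:lancaster wind:W speed:61",
    "#vane county:saline wind:N speed:12",
]
human_tweets = [
    "Funnel cloud spotted near the fairgrounds! #storm county:lancaster",
]


def tag_value(tweet: str, tag: str) -> str:
    """Pull a 'tag:value' token out of a tweet, if present."""
    for token in tweet.split():
        if token.startswith(tag + ":"):
            return token.split(":", 1)[1]
    return ""


# Count high-wind machine readings per county.
high_wind = defaultdict(int)
for t in vane_tweets:
    if int(tag_value(t, "speed")) >= 40:  # illustrative gust threshold, mph
        high_wind[tag_value(t, "county")] += 1

# Flag counties where the sensors and a human sighting agree.
alerts = []
for t in human_tweets:
    county = tag_value(t, "county")
    if high_wind[county] >= 2:
        alerts.append(county)

print(alerts)  # → ['lancaster']
```

Either stream alone is noisier — the vanes register every gust, and human reports are sparse — so the intersection is the richer, more intelligent signal the piece describes.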
And, of course, the escalation of non-human tweeting — machines tweeting in response to trends in other machine tweets, creating secondary and tertiary trends — may be the real storm brewing.