Extreme Networks Reports a Wi-Fi High for Super Bowl LI

Ten years ago, the Colts defeated the Bears, 29–17, in Super Bowl XLI before an iPhone-free audience. It wasn't until that spring that Apple fans camped out to score the first iPhone, with Android announced later that year.
Fast-forward to 2017 and the game has changed. The excitement of Super Bowl LI played out in front of a smartphone-enabled audience and, according to Extreme Networks, the Official Wi-Fi and Wi-Fi Analytics Provider for the event, it proved to be the most connected one-day event on record.
This isn’t hard to believe—a Deloitte study reported that Americans check their phones eight billion times a day. Yes, we’re hooked. What’s interesting is how a milestone event such as the Super Bowl can offer a bird’s eye view into how we’re hooked. This is a story told in numbers and, thanks to Extreme (and summarized in the infographic below), we have that data.

Record-breaking Data

Let’s start with the volume. According to Extreme’s analytics platform, a whopping 11.8 terabytes of data traversed the Wi-Fi network during Super Bowl LI. For those of us who glaze over when it comes to data metrics, that’s about 5,130 hours of HD Netflix streaming or nearly 3.4 million songs. It’s also nearly double the data reported for Super Bowl XLIX, a significant jump in just a two-year period.
But what’s happening with all this connectivity? Social media was the top activity, with 14 percent of data attributed to fans scrolling and tapping their way through the network feeds. Facebook and Snapchat dominated, collectively comprising 10 percent of that activity. (Instagram and Twitter might want to make note that Snapchat jumped from last place to second in just one year.)
Overall, social activity increased 55 percent from the previous year, a jump that Extreme attributed in part to the availability of live video broadcasting tools like Facebook Live.

Increased reach beyond the stadium

The data also showed that more fans took advantage of the free Wi-Fi at and around the game this year. At peak, there were 27,191 concurrent users, with 49 percent of attendees joining the Wi-Fi network. This is up 41 percent from last year.
It’s also worth noting that Extreme’s connectivity wasn’t limited to NRG Stadium but was also available at the NFL Headquarters (within Houston’s Marriott Marquis), House of Blues and the nine-day Super Bowl LIVE event downtown. So, in addition to connecting over 143,000 fans as well as NFL owners, players and staff and transferring 20.52 terabytes of data across its Wi-Fi network during Super Bowl LI week, Extreme’s network also enabled attractions like Journey to Mars.
The growing number of smartphones may account for some of this increase in Wi-Fi users, though, as market intelligence provider IDC has reported, the pace of smartphone adoption is leveling off.
Another key factor may be awareness of the option for free Wi-Fi service. Consumers—and particularly the hyper-connected social storytelling types—are sensitive to network congestion at highly populated events and may be more likely to seek out alternatives. In fact, that which may be considered a perk right now (free Wi-Fi) may soon fall into the realm of expectation for major events.

Where is it heading?

With the next Super Bowl eleven months away, there’s plenty of time for surprises when it comes to trends in connected behavior. It’s reasonable to assume that social media will continue to dominate, but will Facebook continue to lead the way? How will the recent launch of Instagram Live impact the rankings? And is it possible that another platform could come in at the last minute, Patriots-style, and change the game?
This is all a far cry from 2007's flip phones, but it's still just the beginning. Live broadcasting, along with virtual and augmented reality, is still in the novelty stage; we can expect that these and other technologies will be more integrated into the game experience in the coming years. Imagine, for example, 360-degree views of the field, haptic technology that lets audiences feel the sensation of a tackle (in moderation, hopefully), and the eventual blurring of the in-person and at-home experience.
Whatever’s in store, it’s safe to say that this year’s record-breaking connectivity will not hold that record for long.

Four Questions For: Joseph Steinberg

You have written that there is no effective law enforcement to counter or punish cybersecurity attackers and hackers. How do you envision this changing in your lifetime? How can law enforcement and governments protect their citizens?
There are many reasons that cybercrime often goes unpunished today, and entire books could be written to answer how government and law enforcement can better protect citizens. There are many areas in which improvement is needed: laws need to change; enforcement agencies need more flexibility to hire experts; international cooperation needs to be obtained (diplomatically, if possible); lawmakers need to invest time to stay current with technology knowledge rather than spend their time raising campaign funds; various sections of government need to listen not only to representatives of large corporations, but also to experts who often are independent or work for small firms; enforcement of laws needs to be uniform without regard for alleged perpetrators' political connections or the political ambitions of prosecutors; stolen data needs to be treated as stolen property; etc.
If you have nothing to hide, what is there to worry about with regards to surveillance?
The argument that anyone who “has nothing to hide” doesn’t need to worry about surveillance is simply wrong, as surveillance undermines privacy, not just “hidden things.” How many people who consistently post about their successes on Facebook don’t mention when they fail at something important or when they are caught doing something that they should not have done? How many people who Tweet regularly tell the world about highly personal issues such as medical problems, marital fights, or embarrassing scenarios? How many people who share selfies also post photos of themselves taking their medicine for a chronic condition, crying over emotional pain, using the bathroom, or engaging in sexual activities? We all have private moments and negative experiences that we do not announce to the world or wish to have others watch. When people think about how much they wish to keep private, they start to grasp how dangerous surveillance can be. Not only may those performing the surveillance obtain our private information, but, if they don’t adequately protect it, the whole world may see it.
What do you believe are the biggest security risks to social media? What should users do to protect themselves against these risks?
While there are multiple issues related to social media security, the biggest risk is people making posts without understanding the consequences of those posts. Besides harming one’s personal relationships, professional career, or reputation, a problematic post can harm one’s employer’s brand image, leak its confidential information, lead to it being sued, or violate regulations. Oversharing information can even help criminals to craft highly-effective spear phishing emails, thereby undermining organizational information security and leading to major data breaches. While people should think about what they post, relying on people to “always do the right thing” is a recipe for disaster (think what would happen if we relied on people to practice good cybersecurity hygiene and did not issue them anti-virus software), which is why technology is needed to warn people in real time when they are making problematic posts, from whatever locations, devices, and accounts they make them.
What pieces of everyday technology are people using without realizing the cybersecurity threats behind them? What kind of data is being shared through things like wearables, smart phones, smart watches, etc?
The less something looks like a classic computer, the less people seem to think about cybersecurity when using it. Even though, in some ways, smartphones and tablets pose greater risks to information security than laptop computers do, people often take fewer precautions with these devices than with their laptops. And when it comes to wearables or other connected devices, people almost never consider what security risks are created by using the machines. How many people who have purchased connected televisions, thermostats, or refrigerators have truly thought about segregating those devices on separate networks, or monitoring those devices' activity for anomalies? Probably only a small percentage. And smart-device manufacturers often don't adequately address security either, since purchasers aren't willing to pay more for it. That's one of the reasons that denial-of-service and other forms of attacks are likely to leverage these devices going forward.
Smart devices don’t create risks only to the data that they house and process; the devices can become launching grounds for attacks against other devices, can be used to monitor network traffic from computers, can be used as zombies as part of distributed denial of service attacks, etc.
Joseph Steinberg is a respected cybersecurity expert and the founder and CEO of SecureMySocial, which recently brought to market the world's first system to warn people in real time if they are making inappropriate social-media posts. Earlier, he served for a decade as CEO of the cybersecurity firm Green Armor Solutions, and for five years in several senior capacities at Whale Communications, which was acquired by Microsoft. Joseph has been calculated to be one of the top three cybersecurity online influencers worldwide and is a frequent media commentator on cyber-related matters. He is the inventor of several cybersecurity technologies widely used today; his work is cited in well over 100 published US patents. He is a regular columnist covering cybersecurity for Inc. magazine (and earlier for Forbes), and has written several books in the field as well. Joseph also serves as an expert witness and consultant on issues related to information security, and is a member of the advisory boards of multiple technology companies.
Twitter: @JosephSteinberg

Ad Blocking and Tackling: What 2015’s Ad Blocking Means for 2016’s Marketing

2015 was the year when an unprecedented number of users took action against the ads that slowed web pages and turned the online content experience into a frustrating game of close-that-ad. According to a PageFair and Adobe report, U.S. ad blocking grew 48% in the twelve months leading up to June 2015. That's 45 million users—16% of the online population—who just said no to digital and, in particular, mobile web advertising by downloading ad blocking applications.
With eyeballs and revenue on the line, thought leaders debated whether the ad blocking trend would destroy or save advertising. The Association of National Advertisers (ANA) blamed the digital ecosystem. The Interactive Advertising Bureau (IAB) blamed itself for having “lost track of the user experience.” (It also notably took ad blockers to task for disingenuous practices, most specifically paid “whitelists” for publishers.)
The cost of ad blocking is significant, with an estimated $781 million loss for the industry. But another resonating impact of the Great Ad Rebellion of 2015 will be found in its influence on marketing investments. What will marketers do differently to navigate the digital/mobile landscape in 2016?
Revisiting advertising
Lest there be any question, ad blocking will not prompt an all-out surrender by the ad ecosystem. Some publishers, like GQ, Forbes and more recently Wired, are fighting fire with fire by blocking users who run ad blockers. But the longer-term strategy is to address the issues with the ad experience. Some of this responsibility falls on publishers, who determine the degree of disruption that must be tolerated to access content, as well as the ad tech landscape, where fierce competition can inspire extreme approaches to ad engagement. (To steer publishers and platforms toward a more user-friendly approach, and as part of its mea culpa, the IAB introduced new guidelines that emphasize ‘light, encrypted, ad choice supported, non-invasive ads’.)
But no change can succeed unless marketers direct ad dollars to those that are innovating in favor of an improved experience. This isn’t a simple task, given that site-by-site scrutiny can work against the efficiency gains of programmatic buying, a practice that has itself been blamed for the surge in ad blocking. As such, there will also be other moves to optimize ad impact, including increased investment in emotionally-aware ads, where data is used to extrapolate insights about a user’s psychological state in a given moment. Incorporating a measure of receptivity into ad delivery could prove to be the much-needed difference between engaging a consumer and ticking them off.
Thinking beyond advertising
Ongoing concerns about ad ROI will prompt more marketers to deepen investments in other approaches. Native advertising, the modern day equivalent of the advertorial, offers a worthy complement to traditional ads. Content marketing and branded content will help brands meet the need to feed social channels. Influencer marketing will gain practitioners as marketers struggle to connect with elusive millennial audiences. We’ll also see more brands practicing corporate social responsibility and, of course, promoting those good deeds via social channels.
Each of these tactics offers a subtler alternative to the traditional advertising message. And while this can be a strength in an oversaturated landscape, there is a fine line between subtle marketing and the calculated manipulation of audiences. The FTC has tuned into this, releasing guidelines to ensure consumers can distinguish native advertising from editorial content. But marketing’s most powerful critics are the consumers themselves, which leads to the next point…
Embracing feedback—in all forms
In a world of 24/7 marketing, brands are constantly challenged to creatively and authentically engage consumers in “conversation”.  The always-on dialogue represents tremendous opportunity, but it doesn’t come without risk. Today consumers are quick to call brands out when they’ve missed the mark, even when it’s as seemingly innocuous as Red Lobster’s slow response to a shout out from Beyoncé. Success doesn’t grant immunity either, as is evidenced by the less than warm welcome REI received on Reddit following its widely-celebrated #optoutside campaign.
This vulnerability could make one want to crawl back into the safe confines of traditional marketing, but of course that’s not an option. In 2016, more marketers will have strategies in place that allow them to creatively participate in the two-way dialogue while also managing the inherent risk. This means more than having an ear to the ground; brands need a plan that allows them to quickly gauge when and how—or if—it makes sense to engage or respond. (Arby’s farewell to their consistent critic Jon Stewart is a stellar example of a brand creatively and effectively steering into negative feedback.)
It may be that consumer ad blocking is really only part of this feedback cycle—less a mass exodus from advertising than an aggressive critique of its current form. Either way, it is a milestone in the ongoing transition from one-way marketing, perhaps one of the last nails in the coffin. Today, consumers have more than just a voice—they control the levers on which messages they receive and when. Marketers will need to keep that in mind throughout the execution of every strategy and tactic to have an edge in 2016 and beyond.

Infographic: Social Media

Over recent years, there have been a number of changes that have helped to revolutionise our lives, particularly online. One of the things that has made a huge difference to the lives of people all around the world is social media. This is something that has helped not only general internet users but also business users, as social media platforms have expanded to make life easier in many different ways, from communications through to branding, marketing and more.
A very informative infographic has now been put together by VoucherBin, and it provides important details relating to the use of social media and its evolution. Of course, social media is used all around the world by people wanting to communicate with friends and family, but as the infographic shows, there is far more to these platforms than just communication. They can be used for all sorts of things, such as finding out more about entertainment options, new products coming onto the market, getting help with finding a job or even catching up on the latest celebrity gossip.
You can find out more about the different social network platforms, such as Facebook, Twitter, Pinterest, Instagram and LinkedIn. You will also see how the popularity of social media platforms continued to increase between 2010 and 2015, with all sites experiencing growth in active users.
The infographic also details important information relating to business, such as how marketers use social media for more than six hours each week in order to further their marketing efforts. Twenty percent of people have also admitted to making stories up for use on social media so that their posts are more attention-grabbing. A surprising figure detailed in this infographic is that each month Google+ receives 48 percent more visits than Facebook, with 1.2 billion as opposed to 809 million.
Another thing you can read about is how recruiters are now using social media more and more, and how social media has helped to make product launches and promotions far more successful. You will be able to compare the figures between different platforms when it comes to promoting brands and products and you’ll see how event organisers are making use of these platforms too.
When it comes to the use of social media for general consumers, the infographic shows how shoppers have become increasingly reliant on these platforms as well as how consumers feel about helping to promote brands via social media. When it comes to applying for a job many employers now browse applicants’ social media accounts before making a decision, and this is also detailed in the infographic.
The entertainment world also relies on social media, and you will be able to learn more about how people were posting on these sites following major awards events such as the Oscars and Grammy Awards. You can also find out more about which celebrities are major users of sites such as Twitter and which events have managed to result in the accumulation of millions of posts and tweets.
An interesting twist to the infographic is that it also provides information on the downside of social media as well as the more positive side. This includes figures on company responses to consumers via social networks, what percentage of posts such as tweets are ignored, and how many users of certain social networking sites are actually active.

Cafyne monitors social media for risky business in regulated industries

In 2015, to be a brand on social media is to live in fear of the social media gaffe: the damning, tone-deaf, or downright irresponsible mistake that goes viral and sends PR departments into prolonged frenzies of containment and damage control.

From the Delta giraffe gaffe to the infamous US Airways image mix-up (also, what’s up with you, airlines? Are you guys alright?), brands and their employees have to toe a pretty thin line when it comes to putting brand-representative content out into the ether. Those social media woes only multiply when you’re talking about regulated industries like healthcare, financial services, real estate or government organizations.

And yet, social media remains one of the most powerful tools in an organization’s arsenal. Most brands can’t afford not to be present online.

Cafyne is a new social media monitoring and management tool designed for brands that have complex policies to navigate. It aims to help those brands stay compliant while reaping the benefits that come with a strong social media presence.

“What Cafyne does is bridges that gap,” says founder and CEO Rohit Valia. “It provides a simple way to be safe, follow all of the regulatory guidelines and tap into the social media opportunity.”

How, you ask? Cafyne has developed libraries that include the policies from regulatory bodies like the FDA, FTC, and SEC, along with regulatory measures like HIPAA, COPPA, and the Digital Millennium Copyright Act. Those libraries feed into Cafyne’s policy engine, which also allows users to customize their social media rulebooks with their own policies.

“Organizations have the ability to describe their social media policies into the tool–essentially encode it–using pre-built libraries that we’ve provided for various kinds of use-cases,” says Valia. “Those policies are then applied to the posts as they come through the system in real-time and it will flag content that it deems in-violation of those policies.”

Simply put, Cafyne scans social media posts from all monitored channels for any red flags, based on the parameters set with custom policies and libraries in the policy engine. From there, Cafyne’s Compliance Rule Engine will alert channel managers to any potentially problematic or non-compliant posts.
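Cafyne hasn't published its internals, but the general pattern it describes—pre-built rule libraries plus customer-defined policies, applied to each post as it arrives—can be sketched in a few lines. The class names, rule labels, and patterns below are hypothetical, intended only to illustrate the flow from policy library to flagged post, not Cafyne's actual implementation.

```python
import re
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    """One compliance rule: a human-readable label plus a pattern to look for."""
    name: str
    pattern: str  # regular expression, matched case-insensitively

@dataclass
class PolicyEngine:
    """Applies every loaded rule to an incoming post and returns any violations."""
    rules: list = field(default_factory=list)

    def add_rule(self, rule: PolicyRule) -> None:
        self.rules.append(rule)

    def check_post(self, text: str) -> list:
        return [r.name for r in self.rules
                if re.search(r.pattern, text, flags=re.IGNORECASE)]

# A "pre-built library" of rules (toy patterns, not real HIPAA logic).
hipaa_library = [
    PolicyRule("HIPAA: possible patient identifier",
               r"\bpatient\b.*\b(name|birthday|record number)\b"),
]

engine = PolicyEngine()
for rule in hipaa_library:
    engine.add_rule(rule)
# Organizations can layer their own custom policies on top of the libraries.
engine.add_rule(PolicyRule("Internal: unannounced product", r"\bcodename falcon\b"))

post = "Great day at the clinic -- posting a photo of this patient's chart and name!"
flags = engine.check_post(post)
if flags:
    print("Alert channel manager:", flags)  # non-compliant post, hold for review
```

A production system would obviously go well beyond keyword matching, but the division of labor is the same: shared regulatory libraries, customer-specific rules layered on top, and real-time alerts routed to whoever manages the channel.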

There are other companies providing watchdog services for regulated industries, like Nexgate, for example. Nexgate includes a level of content management for brands, identifying problematic content, but also focuses on identifying fraudulent accounts and content that threaten brands. According to Valia, though, Cafyne’s strength comes from its unified platform.

“For those industries that have regulation requirements, it provides all of the things needed from a social media engagement and management perspective in the enterprise,” says Valia.

Cafyne doesn’t just raise red flags. Its software also includes reputation management, engagement and publishing tools. In some ways, it’s like the HootSuite for regulated industries that could use an extra level of protection in dealing with something as serious as HIPAA violations.

The other piece of the Cafyne puzzle is decidedly more human than its policy and compliance engines, though. It’s focused on people: specifically, employee advocacy.

Figuring that employees can be a brand’s greatest advocates, Cafyne allows channel managers to loop employee accounts into the management platform to monitor engagement and identify potential problems. At its best, this feature keeps everyone out of trouble and allows employees to act as brand advocates… if that’s what they want. However, it’s worth noting that this employee monitoring feature isn’t opt-in, which means that employers can monitor public social media accounts through Cafyne. This isn’t any different from an employer searching your name and finding your Twitter account (provided it’s attached to your real name), but it’s important to be aware that brands are increasingly monitoring the activities of their employees online.

And so, we come back to the age-old adage: “Be careful what you wish for…and post on Twitter.” And if you find yourself in doubt, maybe it’s worth consulting Cafyne to make sure you aren’t breaking the law and/or landing yourself in a major tweetstorm.

From likes to leads: The new metrics for B2B social selling

Hank is the founder and CEO of Trapit. You can follow him on Twitter.

If you’re in any way involved with content marketing, or especially with B2B social media, then you’ve likely experienced the push into social selling over the past year. Chances are, demands are being made down the chain for better social ROI, sometimes without a real strategy or a deep understanding of how social selling integrates with other channels.

On one hand, the social selling emphasis is great for social marketers. It shifts social to a more direct value proposition that is understood by executives, and arms marketers with lead gen and conversion metrics to gauge and prove success and show that social can impact the bottom line. It’s also an excellent new channel and opportunity for salespeople. If we think of social selling as the methodological approach to lead identification and nurturing over social, bringing the value of direct 1:1 interactions to those approaches, then we can see that it also requires a shift in thinking for both the marketing and sales roles.

Just as with the early days of B2B social adoption, when companies were trying to establish brand awareness and community across social networks, social selling has now become a ubiquitous buzzword and company mandate, and the lack of best practices around what metrics to work towards and how the role of the social seller should be defined confuses even the most progressive sales and marketing leaders.

Strategy comes first

If a company is committed to investing in social selling, it needs to begin thinking deeply and strategically about its approach. The stats around why companies should emphasize social as a sales channel are well documented (though still very nascent): from studies showing that salespeople using social outsell 78 percent of their peers, to the recognition that when a lead is developed as a trusted relationship of an employee through social networking, that lead is seven times more likely to close than other leads.

Despite these convincing stats, only a third of companies have an actual social strategy for their sales departments, and as of 2013, an astounding 93 percent of sales executives had not been given any formal social selling training. That has hopefully improved somewhat as social selling comes into focus, but more often than not the demands for social ROI are not accompanied by a clear overall strategy for social selling.

The lack of strategy is concerning, and companies are getting hung up, so how can we turn this train around before it has gone too far down the wrong track?

The metrics that really matter

More and more major companies may be shifting their social emphasis from likes to leads in response to these trends, but we are now entering an era of social ROI and metrics that we haven’t quite fully defined.

For so long, social campaigns have been designed for reach rather than conversion. They have focused on tracking likes, followers, or shares, and likewise many of the social media tools developed and now in use focus on optimizing for those metrics. We already have a legacy system problem with the tools used for social as we shift the focus to selling.

Social of course now needs to be tied into companies’ CRM systems, and leads tracked across networks and nurtured strategically, whether on social, through email campaigns, or other traditional touch points. Social is no longer just a siloed marketing channel.

So how will we measure the value of social in relation to the bottom line, moving from measurement in likes to actual leads and sales? What is the metric for content sharing as it impacts sales, for content as lead-gen?

Content marketers leveraging social need to start with better tracking of campaigns, so they can identify how much a piece of content was shared as well as which leads came directly (or even indirectly) from it. That’s not a simple problem to solve. We’re essentially talking about measuring content influence, and how much a piece of content shapes thinking after it is consumed.

And of course the ultimate metric that we’re looking for to tie the value of content and social to sales is new revenue created, a kind of “revenue index score.”

Content and social are commonly held to be top-of-the-funnel activities, but that’s not enough.

One of the challenges for social selling is that it does not operate like the traditional sales funnel. Sales leads and nurturing can jump across platforms and networks, and they can increasingly enter the funnel not just at the top, but much further down.

Any new metric or revenue index score needs to account for every step toward conversion that social can provide. It needs to register a target’s potential to become a lead, turn targets into actual leads, nurture them through the sales funnel, and finally convert that opportunity into revenue. Every piece of content and interaction should have a revenue scoring capability associated with it.
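The article doesn't pin down how a "revenue index score" would actually be computed, but one simple interpretation is to pull every social touchpoint out of the CRM, weight it by how far down the funnel it moved the lead, and attribute that weighted slice of eventual deal value back to the content that drove it. The stage names, weights, and numbers below are hypothetical, intended only to make the idea concrete.

```python
from collections import defaultdict

# Hypothetical weights for how far down the funnel a social touch moved a lead.
STAGE_WEIGHTS = {
    "impression": 0.05,    # content was seen
    "engagement": 0.15,    # like, comment, reshare
    "lead_created": 0.30,  # target became a tracked lead in the CRM
    "opportunity": 0.50,   # lead entered an active sales cycle
    "closed_won": 1.00,    # deal closed
}

def revenue_index(touches):
    """Attribute a weighted share of deal value to each piece of content.

    `touches` is a list of (content_id, stage, deal_value) tuples exported
    from the CRM; each touch earns its content a weighted slice of the deal.
    """
    scores = defaultdict(float)
    for content_id, stage, deal_value in touches:
        scores[content_id] += STAGE_WEIGHTS.get(stage, 0.0) * deal_value
    return dict(scores)

touches = [
    ("whitepaper-q3", "engagement", 50_000),
    ("whitepaper-q3", "lead_created", 50_000),
    ("webinar-apr", "opportunity", 120_000),
    ("webinar-apr", "closed_won", 120_000),
]
print(revenue_index(touches))
# {'whitepaper-q3': 22500.0, 'webinar-apr': 180000.0}
```

A real system would need to de-duplicate touches across networks and decay older interactions, but even a crude score like this ties individual posts to pipeline rather than to likes.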

The unique new role of the social seller

It’s not simply about the metrics that we might identify and define, however. This quantifiable layer is necessary, and possible in a way that it never was before. But more interesting is the new role that social selling has evolved for companies.

Just as the original social media manager role (and value) had to be defined within organizations, the social seller requires a unique perspective and talent, especially for B2B. It is a combination of content marketing, social management, sales strategy, CRM, and lead generation.

Social teams now sit at a crux of sales and marketing that opens a huge opportunity for them, but it also requires those teams to think and operate as both marketers and salespeople. In essence, it requires the insight of both marketing and sales, to the extent that now, on social, marketing is sales and sales is marketing.

When we look at what skills and understanding the new social seller will require, a few key features become clear. The social seller needs to be able to:

  • Understand 1:1 authenticity as both marketer and salesperson
  • Understand key social media strategy beyond brand awareness, and how social engagement relates to the funnel (Credibility > Visibility)
  • Be well versed in the shorthand and industry lingo of their audience and target market
  • Understand how social fits into and is best applied in their unique funnel
  • Be up-to-date on industry trends, and have opinions and ideas to share, essentially serving as an industry expert that can foster social leads
  • Disseminate that knowledge and empower their teams to likewise be experts and share relevant content
  • Thoroughly understand the unique nuances of the social media habits and expectations of their target customers
  • Believe in conversation and dialogue as a fundamental component of every step in the funnel
  • Value getting to know the buyer even when personal and professional matters intersect, and be trusted with that relationship
  • Understand the unique metrics and KPIs for sales and marketing, and how they intersect and overlap

Social selling requires new metrics, new lead-gen strategies, new content and advocacy approaches, and perhaps most importantly, a new mindset that can work within all of these elements.
The companies that are realizing this new marketing and sales overlap now, and genuinely fostering the authenticity of those social efforts and relationships, are positioned to lead this new channel. And those companies will be the ones to define the terms and metrics for success as social selling rapidly evolves.

‘T’ should be for Twitter

Alphabet isn’t even a week old, but that doesn’t mean we can’t stir the broth and see what might be in store for the company that runs Google and other, less-established businesses like the health-focused Calico and Life Sciences.
And whenever I stir that bowl of alphabet soup, I can’t help but eventually see T-W-I-T-T-E-R.
This might seem like a strange idea, given that Google all but extricated itself from the business of social by killing the only thing people liked about Google+, and by announcing that an account with the moribund social network will no longer be required to “share content, communicate with contacts, create a YouTube channel” or use Google’s many other services. But there are many reasons why it would make sense, especially now that the company formerly known as Google Inc. isn’t nearly as committed to a social network that was never a hit with consumers.

A perfect fit

Rumors about Google acquiring Twitter have abounded since 2009. And since Twitter went public in November 2013, its share price has risen whenever rumors about an impending Google takeover percolate back to the top of Techmeme’s homepage as investors show their approval of such a deal.
A deal would also make sense because Google and Twitter have gotten closer in the last year. The two companies announced a partnership in February that would bring tweets into Google’s search results — a rekindling of a relationship that fizzled out in 2011, when the companies’ original agreement expired.
There are other benefits to Alphabet acquiring Twitter. Twitter would no longer have to worry about making money on its own — something it has struggled with in the past — and could instead focus on giving users the best possible experience rather than capitulating to business needs.
And then there’s Twitter leadership, which has shifted every couple of years since 2008. The search for a new chief executive seems to be all anyone is talking about when Twitter is brought up. In its new role as a holding company, Alphabet could help preserve continuity while making sure the service continues innovating.
Twitter has long struggled as an independent company. Perhaps joining Alphabet’s pseudo-autonomous portfolio of companies could give it the support it needs without shackling it to another company’s interests.

Evolving beyond 140 characters (aka Google+ done right)

Then there’s the impression that Twitter will become a lot more like Google+ now that it’s removed the 140-character limit from users’ direct messages. Until yesterday direct messages felt like a nearly forgotten add-on, almost as if the service preferred that its users share things in public instead of in private. (For proof, just look at Twitter’s restrictions on sending links via direct messages.)
Removing the 140-character limit from direct messages was, according to Twitter, one of its most-requested features. The character restriction is fine for the public feed; remove it, and all you’re left with is Medium. But people want a little more freedom to communicate however they please in private.
Combine that with the ability to message a group of people on Twitter and you end up with something that looks a lot like the “circles” that allowed Google+ users to share information with discrete groups of people. The two services are starting to seem less differentiated than they ever have before.
To be honest, this move is actually better than Twitter attempting to become the next Facebook. I’ve written in the past that Twitter’s a better copycat than Mark Zuckerberg’s blue juggernaut, but that doesn’t mean that trying to become more like Facebook will do Twitter any favors with most consumers. At that point, why wouldn’t they just keep using the same service as their friends?
On the other hand, Facebook seems just as determined to copy Twitter. Business Insider reported earlier this week that Facebook is working on a “Twitter-like app” that would let publishers “send mobile breaking news alerts to the masses.” Twitter has the advantage here; it should use it. (And it should probably do so before Facebook figures out how to make Twitter a feature within its own service.)

Challenging Facebook on social advertising

Besides owning a social service that doesn’t remind people of a graveyard whenever they visit, an Alphabet-owned Twitter would be in a much better position to compete for social advertising dollars currently being hogged by Facebook.
Additionally, Alphabet could give its other properties full access to all the information shared on Twitter (perhaps even the info contained in direct messages, similar to how data is collected from Gmail). Provided there are mechanisms allowing Twitter users to keep their data out of the hands of any acquirer — which I’ll admit is far from a given — it seems like most of the parties involved would get what they want. Alphabet gets more data and a better position on social media. Twitter gets to focus on itself, and Twitter users get the product they want.
For those who decide to let Alphabet keep their data, it isn’t hard to imagine a world where Twitter’s ads are more relevant than ever before. (Surely I’m not the only person who sees promoted tweets unrelated to any of his interests.) Then, tie the advertising infrastructure into Google’s services, combine some of the data Alphabet has from those that use Google products, and watch the social ad revenues trickle in.
But even if Alphabet elects to keep Twitter data separate, it could be successful in driving more regular activity to the service. One of Twitter’s biggest problems is that not enough people are signing up, and many of those that do never return. Being connected to some of the most popular services in the world — from Google search and YouTube, to AdWords and Gmail — couldn’t hurt matters.
No matter how Twitter relates to Alphabet’s advertising business — whether it’s helping its parent company show ads, taking advantage of them to get more users, or convincing people to give the service another shot — it isn’t hard to see that the companies would probably work better together than apart.

It makes sense

Plenty of others have pointed out that a Twitter sale just doesn’t seem likely, whether it be because of antitrust concerns or because it doesn’t seem like a “Larry-sized problem.” That doesn’t mean it still wouldn’t make a lot of sense, especially if this Alphabet restructuring means slimming Google down to what it’s best at. Pushing social — and in some instances media (news aggregation, user-generated video) — to a company that’s primarily concerned with social could strengthen Alphabet’s main source of revenue, Google.
As I said at the top: If “G” is for Google, then “T” should be for Twitter.

Interview with Stephen Wolfram on AI and the future

Few people in the tech world can truly be said to “need no introduction.” Stephen Wolfram is certainly one of them. But while he may not need one, the breadth and magnitude of his accomplishments over the past four decades invite a brief review:

Stephen Wolfram is a distinguished scientist, technologist and entrepreneur. He has devoted his career to the development and application of computational thinking.

His Mathematica software system, launched in 1988, has been central to technical research and education for more than a generation. His work on basic science—summarized in his bestselling book A New Kind of Science—has defined a major new intellectual direction, with applications across the sciences, technology, and the arts. In 2009 Wolfram built on his earlier work to launch Wolfram|Alpha to make as much of the world’s knowledge as possible computable—and accessible on the web and in intelligent assistants like Apple’s Siri.

In 2014, as a culmination of more than 30 years of work, Wolfram began to roll out the Wolfram Language, which dramatically raises the level of automation and built-in knowledge available in a programming language, and makes possible a new generation of readily deployed computational applications.

Stephen Wolfram has been the CEO of Wolfram Research since its founding in 1987. He was educated at Eton, Oxford, and Caltech, receiving his PhD in theoretical physics at the age of 20.

 

Publisher’s Note: The following interview was conducted on June 27, 2015.  Although it is lengthy, weighing in at over 10,000 words, it is published here in its entirety with only very minor edits for clarity.

Byron Reese: So when do you first remember hearing the term “artificial intelligence”?

Stephen Wolfram: That is a good question. I don’t have any idea. When I was a kid, in the 1960s in England, I think there was a prevailing assumption that it wouldn’t be long before there were automatic brains of some kind, and I certainly had books about the future at that time, and I’m sure that they contained things about them, how there would be some electronic brains, and so on. Whether they used the term “artificial intelligence,” I’m not quite sure. Good question. I don’t know.

Would you agree that AI, up there with space travel, has kind of always been the thing of tomorrow and hasn’t advanced at the rate we thought it would?

Oh, yes. But there’s a very definite history. People assumed, when computers were first coming around, that pretty soon, we’d automate what brains do just like we’ve automated what arms and legs do, and so on. Nobody had any real intuition for how hard that might be. It turned out, for reasons that people simply didn’t understand in the ’40s, and ’50s, and ’60s, that lots of aspects of it were quite hard, and also, the specific problem of reproducing what human brains choose to do may not be the right problem. Just like if you want to build a transportation system, having it based on legs is not the best engineering solution. There was an assumption that we can automate brains just like you can automate mechanical kinds of things, and it’s only a matter of time, and in the early ’60s, it seemed like it would be a short time, but that turned out not to be true, at least for some things.

What is the state of the technology? Have we built something as smart as a bird, for instance?

Well, what does it mean to make something that is as smart as X? In the history of artificial intelligence, there’s been a continuing set of tests that people have come up with. If you can do X, then we’ll know you’re as smart as humans, or something like that. Almost every X that’s been defined so far, machines have ended up being able to do, though the methods that they use to do it are usually utterly different from the ones that seem to be involved with humans. So the types of things that machines find easy are very different from those kinds of things that people find easy. I think it’s also the case that a lot of things people say, “Gosh, we should automate this,” the mode of automation ends up being different from just sort of the way that you would—sort of if you had a brain in a box, the way that you would use that. Probably a core question about AI is, “How do you get all of intelligence?” For that to be a meaningful question, one has to define what one means by “intelligence.” This, I think, gets us into some bigger kinds of questions.

Let’s dive into those questions. But first, one last “groundwork” question: Do you think we’re at a point with AI where we know what to do, and it’s just that we’re waiting on the hardware again? Or do we have plenty of hardware, and are we still kind of just figuring out how to do it?

Well, it depends what “it” is. Let’s talk a little bit more systematically about this notion of artificial intelligence, and what we have, what we could have, and so on. I suppose artificial intelligence is kind of a—it’s just words, but what do we think those words mean? It’s about automating the intellectual activities that humans do. The story of technology has been a long one of automating things that humans do; technology tends to be about picking a task where we understand what the objective is because humans are already doing it, and then we make it possible to do that in an automatic way using technology.

So there’s a whole class of tasks that seem to be associated with what brains and intelligence and so on deal with, which we can also think of automating in that way. Now, if we say, “Well, what would it take? How would I know if this box that’s sitting on my desk was intelligent?” I think this is a slightly poorly defined question because we don’t really have an abstract definition of intelligence, because we actually only have one example of intelligence that we definitively think of as such, which is humans and human intelligence. It’s an analogous situation to defining life, for example. Where we have only one example of that, which is life on Earth, and all the life on Earth is connected in a very historical way—it all has the same RNA and cell membranes, and who knows what else—and if we ask ourselves this sort of abstract question, “How would we recognize abstract life that doesn’t happen to share the same history as all the particular kinds of life on Earth?” That’s a hard question. I remember, when I was a kid, the first spacecraft landed on Mars, and they were kind of like, “How do we tell if there’s life here?” And they would do things like scoop the soil up, and feed it sugar, and see whether it produced carbon dioxide, which is something that is unquestionably much more specific than asking the general question, “Is there life there?”

And I think what one realizes in the end is that these abstract definitions of life—it self-reproduces, it does weird thermodynamic things—none of them really define a convincing boundary around this concept of life, and I think the same is true of intelligence. There isn’t really a bright-line boundary around things which are the general category of intelligence, as opposed to specific human-like intelligence. And I guess, in my own science adventures, I gradually came to understand that, in a sense, sort of, it’s all just computation. That you can have a brain that we identify, okay, that’s an example of intelligence. You have a system that we don’t think of as being intelligent as such; it just does complicated computation. One of the questions is, “Is there a way to distinguish just doing complicated computation from being genuinely intelligent?” It’s kind of the old saying, “The weather has a mind of its own.” That’s sort of a question of, “Is that just pure, primitive animism, or is there, in fact, at some level some science to that?” Because the computations that are going on in the fluid dynamics of the weather are really not that different from the kinds of computations that are going on in brains.

And I think one of the big conclusions that came out of lots of basic science that I did is that, really, there isn’t a distinction between the intelligent and the merely computational, so to speak. In fact, that observation is what got me launched on doing practical things like building Wolfram|Alpha, because I had thought for decades, “Wouldn’t it be great to have some general system that would take knowledge, make it computational, make it so that if there was a question that could in principle be answered on the basis of knowledge that our civilization has accumulated, we could, in practice, do it automatically.”

But I kind of thought the only way to get to that end result would be to build a sort of brain-like thing and have it work kind of the same—I didn’t know how—as humans brains work. And what I realized from the science that I did it was that just doesn’t make sense. That’s sort of a fool’s errand to try to do, because actually, it’s all just computation in the end, and you don’t have to go through this sort of intermediate route of building a human-like, brain-like thing in order to achieve computational knowledge, so to speak.

Then the thing that I found interesting is there are tasks that. … So, if we look at the history of AI, there were all these places where people said, “Well, when computers can do calculus, we’ll know they’re intelligent, or when computers can do some kind of planning task, we’ll know they’re intelligent.” This, that, and the other. There’s a series of these kinds of tests for intelligence. And as we all know, in practice, the whole sequence of these things has been passed by computers, but typically, the computers solve those problems in ways that are really different from brains. One way I like to think about it is when Wolfram|Alpha is trying to solve a physics problem, for example. You might say, “Well, maybe it can solve it in a brain-like way, just like people did in the Middle Ages, where it was a natural philosophy, where you would reason about how things should work in the world, and what would happen if you pushed this lever and did that, and [see] things had a propensity to do this and that.” And it would be all a matter of human-like reasoning.

But in fact, the way we would solve a problem like that is to just turn it into something that uses the last 300 years of science development, turn it into a bunch of mathematical equations, and then just industrially solve those equations and get the answer, kind of doing an end run around all of that human-like, thinking-like, intelligence-like stuff. But still, one of the things that’s happened recently is there are these tasks that have been kind of holdouts, things where they’re really easy for humans, but they’ve seemed to be really hard for computers. A typical example of that is visual object recognition. Is this thing an elephant or a bus? That’s been a type of question that’s been hard for computers to answer. The thing that’s interesting about that is, we can now do that. We have this website, imageidentify.com, that does a quite respectable, not-obviously-horribly-below-human job of saying, “What is this picture of?” And what to me is interesting, and an interesting episode in the history of science, is the methods that it’s using are fundamentally 50 years old. Back in the early 1940s, people were talking about, “Oh, brains are kind of electrical, and they’ve got [things] like wires, and they’ve got like computer-like things,” and McCulloch and Pitts came up with the whole neural network idea, and there was kind of the notion that the brain is an electrical machine, and we should be able to train it by showing it examples of things, and so on.

I worked on this stuff around 1980, and I played around with all kinds of neural networks and tried to see what kinds of behaviors they could produce and tried to see how you would have neural networks be sort of trained, or create attractors that would be appropriate for recognizing different kinds of things. And really, I couldn’t get them to do anything terribly interesting. There was a fair amount of interest around that time in neural networks, but basically, the field—well, it had a few successes, like optical character recognition stuff, where you’re distinguishing 26 characters, and so on. It had a few successes there, but it didn’t succeed in doing some of the more impressive human-like kinds of things, until very recently. Recently, computers, and GPUs, and all that kind of thing became fast enough that, really—there are a bunch of engineering tricks that have been invented, and they’re very clever, and very nice, and very impressive, but fundamentally, the approach is 50 years old, of being able to just take one of these neural network–like systems, and just show it a whole bunch of examples and have it gradually learn distinctions between examples, and get to the point where it can, for example, recognize different kinds of objects and images. And by the way, when you say “neural networks,” you say, “Well, isn’t that an example of why biology has been wonderful, and we’re merely following on the coattails of biology?” Well, biology certainly gave us a big clue, but the fact is that the actual things we use in practice aren’t particularly neural-like. They’re basically just compositions of functions. You can think of them as just compositions of functions that have certain properties, and the one thing that they do have is an ability to incrementally adjust, that allows one to do some kind of incremental learning process. The fact that they get called neural networks is because it historically was inspired by how brains work, but there’s nothing really neurological about it. It’s just some kind of, essentially, composition of simple programs that just happens to have certain features that allow it to be taught by example, so to speak.
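To make the “composition of functions” idea concrete, here is a minimal, purely illustrative sketch (not Wolfram's code, and unrelated to how imageidentify.com actually works): a tiny network built as two composed functions, “trained” by incrementally nudging its parameters to reduce error on a handful of examples, with a crude finite-difference gradient standing in for real backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "neural network" is literally a composition of simple functions:
# layer(x) = tanh(W @ x + b), and the whole net is layer2(layer1(x)).
def net(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ x + b1)      # first function in the composition
    return np.tanh(W2 @ h + b2)   # second function, composed on top

def loss(params, xs, ys):
    return np.mean([(net(params, x) - y) ** 2 for x, y in zip(xs, ys)])

# "Incremental adjustment": nudge every parameter slightly in the direction
# that reduces the error on the training examples (finite differences here,
# purely for clarity -- practical systems use backpropagation instead).
def train_step(params, xs, ys, lr=0.3, eps=1e-4):
    base = loss(params, xs, ys)
    new_params = []
    for i, p in enumerate(params):
        grad = np.zeros_like(p)
        for idx in np.ndindex(p.shape):
            bumped = [q.copy() for q in params]
            bumped[i][idx] += eps
            grad[idx] = (loss(bumped, xs, ys) - base) / eps
        new_params.append(p - lr * grad)
    return new_params

# "Show it a whole bunch of examples": here, the XOR function.
xs = [np.array(v, dtype=float) for v in ([0, 0], [0, 1], [1, 0], [1, 1])]
ys = [np.array([0.0]), np.array([1.0]), np.array([1.0]), np.array([0.0])]
params = [rng.normal(size=(4, 2)), np.zeros(4), rng.normal(size=(1, 4)), np.zeros(1)]

print("error before:", loss(params, xs, ys))
for _ in range(300):
    params = train_step(params, xs, ys)
print("error after: ", loss(params, xs, ys))  # smaller: the composition has "learned"
```

Nothing here is “neurological” in any deep sense, which is exactly the point being made: it is an adjustable composition of simple functions that can be taught by example.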

Anyway, this has been a recent thing that for me is one of the last major things where it’s looked like, “Oh, gosh! The brain has some magic thing that computers don’t have.” We can go through all kinds of different things about creativity, about language, about this and that and the other, and I think we can put a checkmark against essentially all of them at this point as, yes, that component is automatable. Now, I think it’s an interesting thing that I’ve been slowly realizing recently. It’s kind of a hierarchy of different kinds of what one might call “intelligent activity.” The zero-th level of the hierarchy, if we take the human example, is reflexive-type stuff, stuff that every human is physiologically wired to do, and it’s just part of the hardware, so to speak.

The first level is stuff where we have a plain brain, so to speak, and upon being actually exposed to the world, that plain brain learns certain kinds of things, like physiologic recognition. But that has to be done separately for every generation of the species. It’s not something where the parent can pass to the child the knowledge of how to do physiologic recognition, at least not in the way that it’s directly wired into the brain. Then the second level, the level that we as a species have achieved, and doesn’t look like any other species has achieved, is being able to use language and so on to pass knowledge down from generation to generation, which allows us to build up this thing that goes beyond pure one-brain intelligence, so to speak, and make something which is a collective, progressively growing achievement, which is that corpus of human knowledge.

And the thing that I’ve been interested in is that idea that there is language and knowledge, and that we can create it as a long-term artifact, so what’s the next step beyond that? What I realized is that I think a bunch of things that I’ve been interested in for many decades now is—it’s slowly coming into focus for me that this is actually really the thing that one should view as the next step in this progression. So we have computer languages, but computer languages tend not to be set up to codify knowledge in the kind of way that our civilization has codified knowledge. They tend to be set up to say, “Okay, you’re going to do these operations. Let’s start from the very basic primitives of the computer language, and just do what we’re going to do.”

What I’ve been interested in is building up what I call “knowledge-based language,” and this Wolfram Language thing that I’ve basically been working on for 30 years now is kind of the culmination of that effort. The point of such a language is that one’s starting from this whole corpus of knowledge that’s been built up by our civilization, and then one’s providing something which allows one to systematically build from that. One of the problems with the existing corpus of knowledge that our civilization has accumulated is that we don’t get to do knowledge transplants from brain to brain. The only way we get to communicate knowledge from brain to brain is turn it into something like language, and then reabsorb it in another brain and have that next brain go through and understand it afresh, so to speak.

The great thing about computer language is that you can just pick up that piece of language and run it again and build on top of it. Knowledge usually is not immediately runnable in brains. The next brain down the line, so to speak, or of the next generation or something, has to independently absorb the knowledge before it can make use of it. And so I think one of the things that’s pretty interesting is that we are to the point where when we build up knowledge in our civilization, if it’s encoded in this kind of computable form, this sort of standardized encoding of knowledge, we can just take it and expect to run it, and expect to build on it, without having to go through this rather biological process of reabsorbing the knowledge in the next generation and so on.

I’ve been slowly trying to understand the consequences of that. It’s a little bit beyond what people usually think of as just AI, because AI is about replicating what individual human brains do rather than this thing that is more like replicating, in some more automated way, the knowledge of our civilization. So in a sense, AI is about reproducing level one, which is what individual brains can learn and do, rather than reproducing and automating level two, which is what the whole civilization knows about.

Bad dentist must pay $4,677 in case over Yelp threats

It’s bad enough having a toothache. It’s much worse when your dentist rips you off for $4,000 and then threatens to sue you for complaining about the treatment.

That’s what happened to New York City patient Robert Lee, whose ordeal started in 2011, but ended last week when a federal judge ordered the dentist to pay $4,677 in damages and legal fees.

The dentist in question, Stacey Makhnevich, boasted of being an opera singer who catered to musicians. Her other specialty was short-circuiting negative Yelp reviews with tricky contracts that required patients to assign the copyright in anything they wrote about her services. (See Ars Technica for the legal background.)

Sure enough, after Lee complained about her on Yelp, Makhnevich went after him. She pointed to the contract to demand that Lee pay $100 in copyright damages for every day the negative review stayed online.

Makhnevich is not the first to try this stunt. Other professionals around the country, mostly doctors and dentists, have also been using service contracts to stifle social media criticism.

Fortunately, they’re not all succeeding. After Lee filed a lawsuit to stop Makhnevich, U.S. District Judge Paul Crotty agreed with him that the Yelp review was fair use under the Copyright Act.

He also chewed out Makhnevich in a default judgment, finding her actions to be unconscionable and a breach of fiduciary duty, and ruling that Lee’s commentary couldn’t be defamatory under New York state law.

The Makhnevich affair is another example of the Streisand effect, and why it’s perilous to use aggressive legal tactics to control social media. (Last year, a hotel in New York found out something similar, when it threatened a bride with $500 fines for every negative review posted by her wedding guests.)

For Lee, however, the $4,677 may be a hollow victory since the rogue dentist is now nowhere to be found. The judgment is below:

Update: For the lawyers out there, Paul Levy of Public Citizen, who represented Lee: “The damages were awarded on a different cause of action than the one about the non-disparagement / copyright assignment agreement.  In addition to that claim, which is what got all the public attention, Lee had a claim for breach of contract, because the dental office promised to send records to his insurance company so he could get reimbursed for her (exorbitant) charges.  They did not send the records so he was out the money, and the damages were ONLY for that.” (I’ve changed the headline to reflect this)

Bad Dentist Judgment

[protected-iframe id="06e73397363474550cc9191b08c5068e-14960843-34118173" info="https://www.scribd.com/embeds/257537238/content?start_page=1&view_mode=scroll&show_recommendations=true" width="100%" height="600" frameborder="0" scrolling="no"]

Social media star Casey Neistat talks Snapchat versus YouTube

Since Snapchat Stories have been around for only a year or so, Snapchat celebrities are still a rare breed. Compared to viral Vine stars with their own cross-country tour schedules and YouTube creators with their Hollywood deals, Snapchat stars are still building a case for themselves.

One such star, Casey Neistat, isn’t new to the social media fame game. His primary platform is YouTube, but he started creating Snapchat Stories early on and built a separate following there. People watch Casey for his little narrative snippets, where he takes viewers on a journey through his day, whether it involves a flight home from Singapore or filming a movie with fellow Snapchat star Jerome Jarre. Neistat first made a name for himself with an HBO special about him and his brother.

I chatted with him to hear why he’s bullish on ephemerality, doesn’t care that Snapchat hasn’t courted him, and isn’t interested in Discover.

What compels you to snap?

I don’t keep an ongoing dribble of updates of my day, but I tell little compartmentalized stories every day on Snapchat. I use it much more like making a movie than maintaining a diary. When people watch my 60-second clips, there’s a beginning, middle, and end.

I’m not an exhibitionist; I don’t have a compulsion to share the ins and outs of my daily life with a public audience. The compulsion instead is when I experience something interesting, whether it’s entirely mundane, like going to dinner with my wife, or something much more specific, like flying to California and all of the arduous experiences of taking an American Airlines plane to LA. I just pluck the experiences of my life, and there’s always one a day.

How does your Snapchat content differ from your YouTube content?
[youtube https://www.youtube.com/watch?v=bzE-IMaegzQ]
The artist in me that likes to make amazing movies, that doesn’t get to be exercised on Snapchat. But the part of me that likes to share little ideas does. I’m not able to make amazingly perfect precious pieces of content, but I get to make awesome spontaneous content that’s frequently ephemeral. That’s what turns me on about it.

What I hate about Snapchat Stories, and maybe I’m just old, is that you spend time capturing this content and then it just disappears. Does that bother you?

The ephemeral nature of Snapchat is what makes me so willing to make content that I wouldn’t otherwise, because it removes the scrutiny of longevity. I’m so careful about what I post on Instagram because it’s subject to scrutiny from now until the end of time.

I saw that Jerome Jarre left Vine to focus on Snapchat, which seems like a big loss for Vine. Is Snapchat a place to augment stardom — using it to supplement your content on other apps — or can you just be a “Snapchat star?”

It’s largely a new audience for me. Some came from YouTube or other social video outlets but for the most part it’s a really new audience. I think there’s a real social, almost viral side to Snapchat.

[Snapchat] has reached a critical mass. They’re big time now. They’ve grown up. They’re up there with the big boys, in the Twitter and Instagram space. I think we watched the turning point happen in the last few months, when they went from a niche app to something more.
[youtube https://www.youtube.com/watch?v=YQY3rM2f9fA]

I’ve written about how Vine had failed to court its social media stars, who were a big draw for the app. Does it matter whether social media companies build relationships with their biggest content creators? Have you heard from Snapchat or YouTube?

When you’re talking about having several hundred million users, managing a community is incredibly important. The support that I get from YouTube and the relationship I have with YouTube is greater than the support I got from HBO when I had an HBO show. Technical support, studio resources: if I need a production studio, I’ll have it tomorrow. [YouTube] will send someone to my office if I’m having trouble. That’s a big deal for a creator like me.

Snapchat has reached out to me, and they’ve been very warm to me. But we’re talking about three emails and a phone call. Showing some appreciation for their community is important, but it’s not make or break. We’re on it because they built a really great tool.

What do you think of Snapchat Discover?

I don’t know. I like what they’re going for, I think it’s probably smart. I think the UX is really interesting but I don’t know if it’s for me. I am a power user news junkie kind of guy and I don’t need that delivered to me via Snapchat.

I don’t send and receive messages on Snapchat; I never have. Stories is the only feature I use. I can see them becoming a more dynamic social network, and I think it’s great.