Senate backs down on ‘Facebook Bureau of Investigations’ mandate

Facebook, Twitter, and other social networking companies no longer have to worry about a mandate that would have required them to share with the United States government information about users discussing terrorism-related topics.
Not only is this great news for young students wishing to share info on self-made clock projects, but also for the large portion of citizens who don’t want the feds sifting through private social data without a warrant.
In an effort to pass a funding bill for federal intelligence agencies, the Senate has recently abandoned a provision that would have forced social networks to share data on users believed to be involved with terrorism activities. The bill itself was initially blocked from reaching the Senate floor by Sen. Ron Wyden, who described the mandate as a “vague [and] dangerous provision.” Wyden said in a statement Monday that he plans to release his hold on the bill, thus allowing it to move forward.
“Going after terrorist recruitment and activity online is a serious mission that demands a serious response from our law enforcement and intelligence agencies,” Wyden said. “Social media companies aren’t qualified to judge which posts amount to ‘terrorist activity,’ and they shouldn’t be forced against their will to create a Facebook Bureau of Investigations to police their users’ speech.”
But the spirit of the provision is unlikely to be gone for long.
A spokesperson for Sen. Dianne Feinstein told the Hill that the senator “regrets having to remove the provision” and “believes it’s important to block terrorists’ use of social media to recruit and incite violence and will continue to work on achieving that goal.” It’ll be back.
This is merely the latest in a string of examples of the government pressuring tech companies to provide it with more information, or to help it take down content related to extremist organizations like the so-called Islamic State. Other efforts relate to encryption, censorship, and access to private communications.

Politwoops shutdown raises questions about Twitter’s rules

Can social websites protect their users while still allowing outside groups to hold politicians and other public figures accountable for their statements?
That’s the question at the heart of a recent controversy between Twitter and Politwoops, a series of websites that archived politicians’ deleted tweets and whose access to Twitter’s public API was revoked without warning over the weekend.
Twitter made a similar move earlier this year when it shut down the United States version of Politwoops. The many versions of this tweet-archiving tool were run by two separate groups — the Sunlight Foundation and the Open State Foundation — and both have expressed their concern over Twitter’s decision.
Both groups tell me there are no negotiations in place to restore Politwoops’ access to Twitter’s API. But Arjan el Fassad, the director of the Open State Foundation, did say that the group is “exploring a number of legal and technical options” to see if it can build a similar tool without access to Twitter’s API.
“We believe that what public officials, especially politicians, publicly say is a matter of public record,” Fassad told me. “Even when tweets are deleted, it’s part of parliamentary history. Although Twitter can restrict access to its API, it will not be able to keep deleted tweets by elected public officials in the dark.”
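Purely by way of illustration, the core of a Politwoops-style tool is simple: archive every post you can currently see, then flag archived posts that later vanish from public view. The sketch below assumes a generic source of post IDs and text rather than Twitter’s actual API, and all data in it is invented:

```python
# Minimal sketch of a Politwoops-style deletion detector.
# The caller supplies a snapshot of currently visible posts; Twitter's
# real API is deliberately not used here.

def update_archive(archive: dict[int, str], visible: dict[int, str]) -> None:
    """Store every post we can currently see, keyed by post ID."""
    archive.update(visible)

def find_deletions(archive: dict[int, str], current_ids: set[int]) -> dict[int, str]:
    """Return archived posts whose IDs are no longer publicly visible."""
    return {pid: text for pid, text in archive.items() if pid not in current_ids}

# Example: three posts archived, then one disappears from the public feed.
archive: dict[int, str] = {}
update_archive(archive, {1: "Vote for the bill!", 2: "Typo tewet", 3: "Town hall at 6pm"})
deleted = find_deletions(archive, current_ids={1, 3})
print(deleted)  # {2: 'Typo tewet'}
```

This also shows why revoking API access only partially works: the diffing logic is trivial, and any snapshot source (scraping, mirrors, third parties) can feed it.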
A spokesperson for the Sunlight Foundation said that group has no plans to rebuild Politwoops without access to Twitter’s data stream. Yet the group is no less damning in its stance on Twitter’s decision to revoke the tool’s access to information that, until now, was publicly available through multiple outlets.
“To prevent public oversight when our representatives try to discreetly change their messaging represents a significant corporate change of heart on the part of Twitter and a major move on their part to privatize public discourse,” they said.
“Imagine if the Washington Post printed a retraction of a story, would it demand that all copies delivered to the home with the original story be returned? When a public statement is made, no matter the medium, can it simply be deleted and claimed as a proprietary piece of information?”
Of course, there is a difference between the Washington Post trying to retrieve a physical object and Twitter cutting off access to its service. And “unpublishing” stories, whether it’s to appease advertisers or because they contained factual errors or were plagiarized from another source, happens at online publications.
For its part, Twitter says it’s merely trying to protect its users. A spokesperson said in a statement that the “ability to delete one’s Tweets – for whatever reason – has been a long-standing feature of Twitter for all users” and that it “will continue to defend and respect our users’ voices in our product and platform.”
I came to a similar conclusion when the U.S. version of Politwoops was shut down. As I wrote at the time:

Twitter isn’t only defending politicians; it’s protecting all of its users. I suspect there are more private citizens than politicians using the platform, so if having a reasonable expectation of privacy makes things harder for a site that collects politicians’ gaffes, well, I’m happy to bid Politwoops a fond, but prompt, adieu.

Both the Sunlight Foundation and Open State Foundation have said that they avoided this issue by focusing Politwoops on politicians. There should be a clear distinction between public figures and other Twitter users, both argued, and others have said that Twitter is either incompetent or capitulating to politicians.
This is a thorny issue without a clear solution. Twitter can be blamed for blocking Politwoops’ access to its service, since both groups argue they were holding politicians accountable; it could also be chastised for having allowed these groups to break the rules meant to protect its users’ privacy.
Let me put it another way: If a restaurant had tinted windows to prevent outside observers from taking pictures of its diners, should it have to smash them whenever a politician or other public figure enters? Probably not. Its patrons, regardless of their status, expect to be afforded the same privacy.
It’s more troublesome that Twitter changed its mind about Politwoops. As the Sunlight Foundation notes in its blog post about this weekend’s shutdown:

In 2012, Sunlight started the U.S. version of Politwoops. At the time, Twitter informed us that the project violated its terms of service, but we explained the goals of the project and agreed to create a human curation workflow to ensure that the site screened out corrected low-value tweets, like typos as well as incorrect links and Twitter handles. We implemented this layer of journalistic judgment with blessings from Twitter and the site continued. In May, Twitter reversed course and shut down Sunlight’s version of Politwoops.

It seems that Twitter was fine with smashing its own windows for three years, provided that Politwoops only used its exceptional ability to ignore the rules governing its API for things that actually matter to the public. Why did it change its mind, and why did months pass between the shuttering of the U.S. version of Politwoops and the revocation of these international versions’ access?
Consistent rules can be lived with and worked around. Inconsistent rules, however, lend some credence to the idea that Twitter might not be wise enough to decide what outside groups can do with public tweets. The company should have either shut down Politwoops before or allowed it to run into perpetuity.
In a way, it’s a lot like the controversy created whenever Politwoops did catch deleted tweets that shamed the politicians who sent them. Many of those tweets would have been fine if they hadn’t been deleted; it was only when their senders tried to act like they never existed that problems arose. It’s hard not to appreciate the symmetry between that and Twitter’s current situation.

Judge halts movie industry-backed probe against Google

A federal judge has agreed to put the brakes on an investigation into Google by Mississippi Attorney General Jim Hood after the company complained that Hood’s inquiry was an illegal censorship campaign cooked up by Hollywood.

In a Monday ruling, U.S. District Judge Henry T. Wingate issued an order that will temporarily bar Hood from forcing Google to comply with the terms of a 79-page subpoena.

“Today, a federal court entered a preliminary injunction against a subpoena issued by the Mississippi Attorney General. We’re pleased with the court’s ruling, which recognizes that the MPAA’s long-running campaign to censor the web—which started with SOPA—is contrary to federal law,” Google wrote in an update to an earlier blog post describing the case.

The ruling by Judge Wingate came from the bench, and a written version is expected to follow in the next week or two.

“Google has the better side of the legal arguments,” the judge told the court, according to a spokesperson for the company.

The ruling is a major victory for Google, which filed a lawsuit challenging Hood’s 79-page subpoena in December.

The ostensible goal of the subpoena is to help Hood discover if Google is violating Mississippi laws by exposing internet users to drugs and pornography. Google, however, filed a court challenge on the grounds that Hood overstepped federal laws that shield internet companies from liability for what others post online.

The case has also taken on an air of intrigue in light of a secret scheme, known as “Project Goliath,” that came to light as a result of the massive hack on Sony in December 2014.

Documents disclosed by the hack suggested that the Attorney General’s campaign against Google was being underwritten by the Motion Picture Association of America, and even involved movie industry lawyers drafting legal papers for the state. The company has characterized the state investigation as a dirty-tricks campaign by the movie industry to promote the goals of a failed anti-piracy law known as SOPA.

Hood has already come under fire for being among the Democratic state attorneys general who appear to have been farming out the investigative powers of their offices to private law firms in return for a cut of the profits.

Monday’s ruling does not put an end to Mississippi’s investigation, but rather puts it on hold while the parties file more evidence. Hood has tried to frame his investigation as a populist campaign on behalf of the state’s citizens and argued that Google should pursue its claims of overstepping in state court, not federal court.

Why we need social media: Press freedom is still declining rapidly

The latest global index from Reporters Without Borders shows that freedom of the press is in decline in a majority of the countries surveyed — including the United States — which makes alternative forms of media such as Twitter more important than ever.

Google advisory council: Right to delist should only apply in EU

To help it handle the EU ruling that forces it to delist certain results about people, Google assembled a team of expert advisors that travelled the continent, seeking out various opinions on how best to implement Europeans’ data protection rights. On Friday that advisory council published its report, providing recommendations for the way forward.

The Google advisors’ report (embedded below) makes for a fascinating read, but the highlights are its assertion that the delisting should only apply in Europe, and its nuanced discussion of when publishers or webmasters should be notified of delisting.

The ruling was about data that’s inadequate, irrelevant or excessive – it’s a fundamental right in Europe that people can have such data deleted, and the Court of Justice of the European Union decided last year that this data protection right can be applied to search engines.

The global question

The advisors’ call for a limited geographical scope in applying the so-called “right to be forgotten” – Google’s favored term, but one the group strenuously objected to – directly contradicts the guidance given by the Article 29 Working Party (WP29) band of EU data protection regulators.

WP29 argued that, if the link to the data in question is only removed from Google’s European domains, it’s far too easy for people to access other Google domains, so the delisting should take place globally. Indeed, one of Google’s advisors, former German justice minister Sabine Leutheusser-Schnarrenberger, agreed with this in a dissenting opinion in today’s report.

Overall, though, the council said delisting should only apply in Europe. Its report acknowledged that global delisting “may ensure more absolute protection of a data subject’s rights”, but it pointed out that Google users outside Europe had the right to access information according to their own country’s laws, not those of EU countries.

It continued:

There is also a competing interest on the part of users within Europe to access versions of search other than their own. The Council heard evidence about the technical possibility to prevent Internet users in Europe from accessing search results that have been delisted under European law. The Council has concerns about the precedent set by such measures, particularly if repressive regimes point to such a precedent in an effort to ‘lock’ their users into heavily censored versions of search results.
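In practice, the council’s position amounts to scoping a removal by the domain a query arrives through. Here is a toy sketch of that idea; every domain, query, and URL below is invented for illustration, and real delisting is of course far more involved:

```python
# Toy illustration of domain-scoped delisting: a removal applies only
# when the query is served via a European domain. All data is invented.

EU_DOMAINS = {"google.de", "google.fr", "google.es"}

# (query, url) pairs that have been delisted under European law.
DELISTED = {("jane doe", "example.com/old-story")}

def search(query: str, domain: str, index: list[str]) -> list[str]:
    """Return the result list, filtered only for EU-domain queries."""
    if domain in EU_DOMAINS:
        return [url for url in index if (query, url) not in DELISTED]
    return index

index = ["example.com/old-story", "example.com/news"]
print(search("jane doe", "google.de", index))   # EU domain: link removed
print(search("jane doe", "google.com", index))  # non-EU domain: link stays
```

The council’s worry is visible even in this sketch: the moment the filter keys on the user’s location or domain, the same mechanism could enforce any regime’s censorship list.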

Notifications

On the subject of whether or not to notify publishers that one of their pages is going to be delisted due to a data subject exercising their right, the council noted that it had “received conflicting input about the legal basis for such notice.” It then provided something of a fudge: “Given the valid concerns raised by online publishers, we advise that, as a good practice, the search engine should notify the publishers to the extent allowed by law.”

In other words, do what the law allows, whatever that is. In the opinion of WP29, contacting the webmasters in this way may itself involve “processing” of the subject’s data, which requires a legal basis – and there is none. However, the advisory council and WP29 did agree on one aspect of this question: If the decision to delist a particular piece of information is especially complex and difficult, it may be helpful to all concerned if the search engine could ask the publisher or webmaster for help.

The council also suggested four broad categories of criteria that Google and other search engines should apply when deciding on specific cases:

  • The data subject’s role in public life (Is the person a celebrity or do they have a special role in a certain profession?)
  • The nature of the information (Is it about the subject’s sex life or finances? Does it include private contact or other sensitive information? Is it true or false? Does it relate to public health, consumer protection, or criminal information? Is it integral to the historical record? Is it art?)
  • The source of the information (Does it come from journalists or “recognized bloggers”? Was it published by the subjects themselves and can they easily remove it?)
  • Time (Is the information about a long-past minor crime? Was it about a crime that’s relevant to the subject’s current role? How prominent were they at the time? Is the data about the subject’s childhood?)
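The council’s criteria describe a qualitative balancing test, not an algorithm, but purely to show their shape one could encode a request as structured data. Every field name and weight below is invented for illustration; the real decision is a human judgment, not arithmetic:

```python
from dataclasses import dataclass

# Hypothetical encoding of the advisory council's four criteria.
# Fields and scoring are illustrative only.

@dataclass
class DelistingRequest:
    public_figure: bool    # 1. the subject's role in public life
    sensitive: bool        # 2. nature of the information (sex life, finances, etc.)
    self_published: bool   # 3. source: posted by the subject themselves
    years_old: int         # 4. time elapsed since publication

def leans_toward_delisting(req: DelistingRequest) -> bool:
    score = 0
    score += -2 if req.public_figure else 1    # public role weighs against removal
    score += 2 if req.sensitive else 0         # sensitive data weighs for removal
    score += -1 if req.self_published else 0   # subject could remove it themselves
    score += 1 if req.years_old > 5 else 0     # staleness weighs for removal
    return score > 0

# A private citizen asking to delist old, sensitive information:
req = DelistingRequest(public_figure=False, sensitive=True,
                       self_published=False, years_old=10)
print(leans_toward_delisting(req))  # True
```

Even this caricature makes the report’s tensions concrete: the factors pull in opposite directions, and someone has to pick the weights.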

The advisors recommended that Google’s delisting request form should have more fields so the subject can submit more information that will help the balancing test – for example, in which geographical area they’re publicly known, or whether their role in public life was deliberately adopted or not.

Other opinions

The dissenting opinions at the end of the report were interesting. That of Wikipedia founder Jimmy Wales was the starkest – “the recommendations to Google contained in this report are deeply flawed due to the law itself being deeply flawed” – as he entirely opposes the concept of a company being forced to adjudicate between free expression and privacy.

Frank La Rue, the former U.N. free speech rapporteur, also said this shouldn’t be down to Google, arguing that only a state authority should be establishing criteria and procedures for privacy protection. La Rue also criticized the scope of the EU’s data protection itself, saying data should only be removed or delisted if it is “malicious, is false, or produces serious harm to an individual.”

Overall, I think the report is an important document. There are of course many reasons to criticize the process that led to its drafting – it was done according to Google’s terms and timescale, and under the misleading banner of the “right to be forgotten” – and some of its recommendations don’t actually gel with current EU law.

However, I think it’s fair to say the council members were independent-minded and not all singing from the same hymn sheet. Ultimately, as a counterpart to the Article 29 Working Party’s more legalistic set of recommendations (that is their job after all), this was a valuable exercise in chewing over the deeper implications of that CJEU ruling.

Report of the Advisory Committee to Google on the Right to Be Forgotten

[protected-iframe id="a2093e680e23c149c5ef77e8ab00270e-14960843-16988840" info="https://www.scribd.com/embeds/254900585/content?start_page=1&view_mode=scroll&show_recommendations=true" width="100%" height="600" frameborder="0" scrolling="no"]

How social media affects protest movements: It’s complicated

In a new research paper, sociologist Zeynep Tufekci argues that while social media can empower dissidents and make it easier to organize, governments are getting smarter — and the same things that make such tools useful also have a downside.

Tango down: Bangladesh blocks messaging apps amid protests

Bangladesh temporarily blocked the messaging apps Viber and Tango on Sunday after intelligence agencies asked the country’s telecoms regulator for help in quelling opposition protests. Reports suggested on Monday that the ban had been lifted early in the morning, allowing the apps’ traffic to move freely again after an outage of around 18 hours. The agencies had reportedly been concerned that they could not monitor communications between “terrorists and militants” that used the apps – anti-government protesters have been carrying out a transport blockade for a couple of weeks.

Twitter fights Turkish order to block newspaper’s Twitter account

Twitter says it plans to fight a court order from the Turkish government that is trying to force the company to block or remove the account belonging to a Turkish newspaper, after the paper tweeted information the authorities say could compromise national security.

Google fight over Mosley orgy shows censorship creep in Europe

A rich, powerful man won a series of court victories in France and Germany that arguably helped pave the way for Europe’s controversial “right to be forgotten”, which has helped people erase history by scrubbing search engines. Now, that man is pushing a U.K. court to go a step further — and, unfortunately, it sounds like the court will agree.

As the BBC explains, Max Mosley was in the High Court this week demanding that Google be held accountable for images that show him romping with five German-speaking prostitutes in a prolonged S&M orgy in a posh London apartment.

Mosley, who is the former head of F1 racing and the son of a prominent U.K. fascist, already has the right to ask Google to remove specific search results that link to the pictures or videos in question. What he is seeking now is for the court to designate Google as a publisher in its own right, which would make it responsible for finding and deleting any other links that might appear in the future.

The distinction is crucial because a court ruling in Mosley’s favor would transform Google and other search engines from a passive directory into an active censor. It is the difference between asking a newsstand to remove a certain magazine that has an offensive image, versus making the newsstand responsible for ensuring the image never appears in any other publication it sells in the future.

To support his position, Mosley’s lawyers are pointing to a court ruling against the defunct tabloid News of the World, which was forced to pay Mosley £60,000 for defamation and violating his privacy. They say Google is similarly liable for violating a U.K. law known as the Data Protection Act.

Google’s lawyer, meanwhile, is asking the court to throw out the case on two grounds: that Mosley no longer has a privacy right in the images since they have been so widely disseminated, and because the search engine is not a publisher in the first place. As the FT reports:

“Max Mosley remains in the public eye as a campaigner for privacy rights and this has never been forgotten or receded into the past,” [the lawyer] told the court, adding that in legal terms Google was not a publisher of the images.

While the case would be quickly thrown out in a North American court, other news reports suggest Mosley’s argument gained traction with the judge. The Mirror, for instance, quotes the judge in the case as saying “damages may simply not be available” to Mosley, but that an injunction is “much less problematic.”

This distinction will be cold comfort for Google since Mosley, if the judge issues an injunction, will be in a position to seek damages or a contempt of court charge if Google fails to comply with an order not to display or link to the images.

In the bigger picture, Mosley’s latest gambit appears likely to cause Europe’s creeping cloak of internet censorship to expand further. And a U.K. ruling in his favor will also bring about a further fracturing of the internet as a whole, as North Americans see one version of the web — including the Mosley video — while Europeans see a different one punched full of holes.