Alarms sound over changes to EU roaming, net neutrality and privacy rules

The European Parliament’s liberal-centrist bloc has warned over changes being made by EU countries to incoming telecoms legislation, saying they will severely weaken efforts to introduce unified net neutrality rules and eliminate mobile roaming surcharges for people moving between member states.

The Council of the European Union, which represents member states, is expected to present its position on Wednesday regarding the Telecoms Single Market proposal – this follows the European Commission’s original proposal and changes made by the Parliament, and will trigger negotiations over the final text. The Alliance of Liberals and Democrats for Europe Group (ALDE) said Tuesday that the Council’s position is so watered down that it would undermine campaign pledges made by Commission president Jean-Claude Juncker and by the Parliament that was elected last year.

Meanwhile, digital rights groups have released leaked documents relating to the Council’s under-development position on a separate legislative package, the new General Data Protection Regulation. The version that left Parliament would introduce very tough new rules for companies and governments handling EU citizens’ personal data, but it appears member states have been agitating for these rules to be weakened.

Roaming rumble

The member states’ keenness to water down the net neutrality proposals is already well documented, with the countries apparently aiming for aspirational principles rather than tough new rules. However, the roaming aspect of the telecoms package is also contentious.

The Commission’s original proposal would have eliminated intra-EU roaming fees, allowing people to move around EU countries without having to pay more for mobile access than they would pay at home. This is integral to the European single market project – cross-border services won’t get anywhere if you can’t freely use them across borders.

However, the Council appears set to allow carriers to charge roaming surcharges for anything above a measly 5MB of data per day. The surcharges would be capped at the maximum wholesale rates charged between carriers, but they would still stymie the original intention of the legislation.
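
To make the arithmetic concrete, here is a minimal sketch of how such a capped daily surcharge could be calculated. The 5MB allowance reflects the reported Council position; the per-megabyte wholesale rate, the currency and the function itself are assumptions made purely for illustration.

```python
# Minimal sketch of a capped daily roaming surcharge, assuming a 5MB
# daily allowance (per the reported Council text) and a hypothetical
# wholesale rate of 0.05 euros per MB, used here only for illustration.
DAILY_ALLOWANCE_MB = 5
WHOLESALE_CAP_EUR_PER_MB = 0.05  # hypothetical figure, not from the proposal

def daily_surcharge(data_used_mb: float) -> float:
    """Return the maximum extra charge a carrier could add for one day's roaming data."""
    billable_mb = max(0.0, data_used_mb - DAILY_ALLOWANCE_MB)
    return billable_mb * WHOLESALE_CAP_EUR_PER_MB

# A traveller who uses 100MB in one day could face up to
# (100 - 5) * 0.05 = 4.75 euros on top of their domestic tariff.
print(daily_surcharge(100))  # 4.75
```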

According to ALDE president Guy Verhofstadt:

This is a scandal. An end to roaming charges and the delivery of a genuine single market for telecoms was a campaign priority for all parties, many of whom are today responsible for blocking this measure…

To say this text lacks ambition is an understatement. Certainly our group will not accept this text, as the only winner from it is national telecoms operators themselves. Member States should hang their heads in shame.

Privacy shambles

As for the new data protection package, which is also intended to unify the disparate rules of the 28 EU member states, the rights groups EDRi, Access, Privacy International and the Panoptykon Foundation have warned that the package is “becoming an empty shell”.

On Tuesday the groups issued an analysis (PDF) of leaked documents about the Council’s position on the regulation. Here are the main points to worry about, according to EDRi et al:

  • Consent: The proposals would allow a browser user’s failure to opt out of being tracked to be read as a form of consent to tracking and profiling. They would also weaken the limitations on what that consent can allow. “Germany undermines transparency still further by proposing that consent should cover unknown future uses of the data for ‘scientific’ purposes,” the analysis read.
  • Data subject rights: Gone is the article that would mandate “concise, transparent, clear and easily accessible policies” about data use. Governments would also be allowed to cite “national security, defence, public security and ‘other important objectives of general public interest’” as legitimate reasons for profiling people.
  • Fines and remedies: The new rules were supposed to introduce fines of up to five percent of annual turnover for serious data protection infringements, as a deterrent to the likes of Google, which shrug off today’s fines. The new proposals would lower that amount. The possibility of class action lawsuits would also be nixed, and individuals with data protection complaints would only be able to take them to local regulators, not courts.
  • Data breach notifications: Companies would only have to tell people that their data has been stolen if the theft is “high risk”.
  • Cross-border complaints: There’s supposed to be an EU “one stop shop” for data protection complaints, which makes sense as the whole point of this regulation is to create a unified EU framework. But no, the Council would want multiple national data protection regulators to be brought in first to try to reach consensus, because member states don’t want to cede control.

The deadline on this one is a bit further out, with the Council expected to produce its position on the data protection regulation in the summer, before commencing negotiations with the other legislative branches of the EU.

According to EDRi: “Unless something is done urgently, the Council will simply complete its agreement, at which stage only an absolute majority of the European Parliament would be the only way of saving Europe’s data protection reform.”

I have asked the Latvian presidency of the Council (it’s a rotating presidency) for comment on the leaks, but had not received a reply at the time of writing.

Scammers defraud TalkTalk users after UK ISP suffers data breach

The personal details of a number of TalkTalk customers have been stolen. In some cases, the details have been used to scam further information such as bank details from the victims.

TalkTalk is one of the biggest British internet service providers, with more than four million broadband customers. In an email to its customers, the ISP admitted to a breach that took place late last year and said “a small, but nonetheless significant” number of its customers had been contacted by people pretending to be from TalkTalk.

According to a spokesman, the data was taken from TalkTalk’s systems, and the scammers quoted TalkTalk account numbers and phone numbers in order to convince victims to provide access to their computers. TalkTalk’s email suggested that this sometimes yielded sensitive information such as bank details, adding that “in some of these cases we know they may be using the information they have illegally obtained.”

It is so far not terribly clear how many customers’ data was stolen in the first place.

The Guardian reported that this admission lined up with its report in December of a possible data breach associated with one of TalkTalk’s Indian call centers, which had resulted in some of the firm’s customers receiving scam calls. It also noted that one customer had been defrauded of more than $4,000 by the scammers.

TalkTalk stressed that bank account details and other sensitive information such as date of birth had not been stolen directly in the breach. In a statement, it said:

As part of our ongoing approach to security we continually test our systems and processes and following further investigation into these reports, we have now become aware that some limited, non-sensitive information about some customers could have been illegally accessed in violation of our security procedures. We are aware of a small, but nonetheless significant, number of customers who have been directly targeted by these criminals and we have been supporting them directly.

The ISP also said it was talking to the Information Commissioner’s Office – the British data protection regulator – and has “taken serious steps to remedy this.” The ICO said in a statement: “We are aware of a possible data breach involving TalkTalk and are making enquiries into the circumstances.”

This article was updated at 2.30am PT to amend “the data was taken from TalkTalk’s servers” to “the data was taken from TalkTalk’s systems”, per a correction from the spokesman. It was updated again at 3.30am PT to include the ICO’s brief statement.

Updated: Mozilla, Deutsche Telekom won’t release “privacy phone”

Update: Mozilla has told TechCrunch that the WSJ’s framing of this as a partnership around an actual phone was inaccurate. In other words, there’s absolutely no news here beyond the more general Firefox OS collaboration that we reported on one year ago to the day. For the record, I did contact Mozilla’s representatives to seek comment before publishing my original piece, but received no reply.

The original story follows:

A year back, Deutsche Telekom and Mozilla said they were working together on privacy-centric features for Firefox OS, including “location blurring” (fine-grained control of how much location information to give to each app), guest mode, and a registration-free “find my phone” tool. It looks like that collaboration is about to bear fruit: According to a Wall Street Journal piece on Tuesday, the companies will unveil a “privacy phone” at the upcoming Mobile World Congress that will include such features. The article also notes how the T-Mobile parent and other German carriers are lobbying against the last-minute watering-down of strict new EU data protection rules that will cover web service providers such as Google and Facebook.

EU privacy ruling should apply globally, says digital chief

One of the most interesting questions in European tech privacy circles right now is about territoriality and the so-called “right to be forgotten” — when an EU citizen requests the delisting of a piece of information about them from Google’s search results, should it apply only in Europe or around the world?

On Thursday Andrus Ansip, the EU vice president in charge of the digital single market, said the delisting should apply globally. He gave this as his personal opinion — he has no say in the matter — but that opinion comes down on the side of Europe’s data protection regulators, who fear that limited implementation is too easily circumvented by visiting non-European versions of Google. Google’s expert advisors have almost unanimously taken the opposing view, arguing that Europe has no right to impose its privacy laws on the rest of the world.

Whose law is it anyway?

Here’s what Ansip said during a round-table Q&A session at the Google-sponsored Startup Europe Summit in Berlin:

Everybody has to respect a decision made by the courts. This right to be forgotten is not just based on a decision of court, but a court decision based on EU legislation. [It is] not a new principle. I think if a decision is made [and the] court case is closed, then it covers the whole company, not just some territories, but this is my personal view in this case.

The decision in question was made last year by the Court of Justice of the European Union, the EU’s highest court, in a case involving a Spanish man who wanted to remove from Google’s results an old news article about his long-ago debt problems. The CJEU’s decision set a new precedent by saying EU data protection law, which allows people to request the erasure of out-of-date information about themselves in limited circumstances, applied to search engines.

The question of how far its implementation should extend touches on a fundamental conundrum about the internet — countries need to be able to apply their laws online as they do offline, but the online layer’s lack of inherent borders makes that difficult to do effectively. If countries can impose their laws on the world, that means what internet users see can be affected by the laws of multiple countries at once, and not just their own.

Data protection and net neutrality

Ansip also touched on the issue of Europe’s tough new data protection laws, which he wants to get through the Council stage (i.e. final approval by the member states) by the end of this year. This legislative package would massively boost online privacy rights, providing a right to the erasure of personal data and allowing for much harsher fines on companies from anywhere in the world that mishandle the data of European citizens.

The vice president said he was against watering down the new privacy rules in order to strike a compromise: “I don’t think we have to protect everybody’s privacy, everybody’s personal data just because of some kind of directive or agencies that are asking to do that. To protect privacy, to protect data is a must.”

On the subject of net neutrality, which is part of a separate legislative package that’s also being finalized with member states, Ansip appeared a little less clear. Having earlier in the day maintained in an on-stage session that some kinds of web traffic may require prioritization over others, and having then insisted that “similar traffic has to be treated equally”, he went on to suggest during the round-table Q&A that “if we make tough standards then we will not leave space for innovations”.

It is very difficult to tell at this stage where he might compromise with the Council, some of whose members are keener on loose principles than on tight, enforceable net neutrality rules. He said on Thursday that it was important to reach consensus so as to give predictability to network investors and startups alike, and noted (in what seemed to be an approving tone) that the latest Council proposals were “pretty close to the Commission’s proposal”.

The Commission’s proposal – the first draft of Europe’s incoming net neutrality law — was heavily toughened up by the European Parliament before it went to the Council, through the addition of strong definitions that precluded the possibility of prioritizing specific services over others. Where the final compromise will lie remains a mystery for now.

Google advisory council: Right to delist should only apply in EU

To help it handle the EU ruling that forces it to delist certain results about people, Google assembled a team of expert advisors that travelled the continent, seeking out various opinions on how best to implement Europeans’ data protection rights. On Friday that advisory council published its report, providing recommendations for the way forward.

The Google advisors’ report (embedded below) makes for a fascinating read, but the highlights are its assertion that the delisting should only apply in Europe, and its nuanced discussion of when publishers or webmasters should be notified of delisting.

The ruling was about data that’s inadequate, irrelevant or excessive – it’s a fundamental right in Europe that people can have such data deleted, and the Court of Justice of the European Union decided last year that this data protection right can be applied to search engines.

The global question

The advisors’ call for a limited geographical scope in applying the so-called “right to be forgotten” – Google’s favored term, but one the group strenuously objected to – directly contradicts the guidance given by the Article 29 Working Party (WP29) band of EU data protection regulators.

WP29 argued that, if the link to the data in question is only removed from Google’s European domains, it’s far too easy for people to access other Google domains, so the delisting should take place globally. Indeed, one of Google’s advisors, former German justice minister Sabine Leutheusser-Schnarrenberger, agreed with this in a dissenting opinion in today’s report.

Overall, though, the council said delisting should only apply in Europe. Its report acknowledged that global delisting “may ensure more absolute protection of a data subject’s rights”, but it pointed out that Google users outside Europe had the right to access information according to their own country’s laws, not those of EU countries.

It continued:

There is also a competing interest on the part of users within Europe to access versions of search other than their own. The Council heard evidence about the technical possibility to prevent Internet users in Europe from accessing search results that have been delisted under European law. The Council has concerns about the precedent set by such measures, particularly if repressive regimes point to such a precedent in an effort to ‘lock’ their users into heavily censored versions of search results.

Notifications

On the subject of whether or not to notify publishers that one of their pages is going to be delisted due to a data subject exercising their right, the council noted that it had “received conflicting input about the legal basis for such notice.” It then provided something of a fudge: “Given the valid concerns raised by online publishers, we advise that, as a good practice, the search engine should notify the publishers to the extent allowed by law.”

In other words, do what the law allows, whatever that is. In the opinion of WP29, contacting the webmasters in this way may itself involve “processing” of the subject’s data, which requires a legal basis – and there is none. However, the advisory council and WP29 did agree on one aspect of this question: If the decision to delist a particular piece of information is especially complex and difficult, it may be helpful to all concerned if the search engine could ask the publisher or webmaster for help.

The council also suggested four broad categories of criteria that Google and other search engines should apply when deciding on specific cases (a rough sketch of how these might be organized follows the list):

  • The data subject’s role in public life (Is the person a celebrity or do they have a special role in a certain profession?)
  • The nature of the information (Is it about the subject’s sex life or finances? Does it include private contact or other sensitive information? Is it true or false? Does it relate to public health or consumer protection or criminal information? Is it integral to the historical record? Is it art?)
  • The source of the information (Does it come from journalists or “recognized bloggers”? Was it published by the subjects themselves and can they easily remove it?)
  • Time (Is the information about a long-past minor crime? Was it about a crime that’s relevant to the subject’s current role? How prominent were they at the time? Is the data about the subject’s childhood?)
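
Purely as an illustration of how a reviewer might organize those four categories into a working checklist, here is a minimal sketch. The field names and the simple weighing function are assumptions invented for the example; neither the report nor Google describes any such structure.

```python
# Purely illustrative sketch of a reviewer's checklist built around the
# council's four categories. Field names and weighing logic are assumptions
# made for this example, not anything from the report or from Google.
from dataclasses import dataclass

@dataclass
class DelistingRequest:
    subject_is_public_figure: bool   # 1. Role in public life
    information_is_sensitive: bool   # 2. Nature of the information (sex life, finances, contact details)
    serves_public_interest: bool     #    ...or public health, consumer protection, the historical record
    source_is_journalistic: bool     # 3. Source of the information
    information_is_dated: bool       # 4. Time (long-past, no longer relevant to the subject's role)

def factors_favouring_delisting(req: DelistingRequest) -> list[str]:
    """Collect factors that weigh towards delisting; a human still makes the final call."""
    factors = []
    if not req.subject_is_public_figure:
        factors.append("subject plays no role in public life")
    if req.information_is_sensitive:
        factors.append("information is sensitive in nature")
    if not req.serves_public_interest:
        factors.append("no evident public interest in keeping the result")
    if not req.source_is_journalistic:
        factors.append("source is not journalistic")
    if req.information_is_dated:
        factors.append("information concerns a long-past matter")
    return factors
```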

The advisors recommended that Google’s delisting request form should have more fields so the subject can submit more information that will help the balancing test – for example, in which geographical area they’re publicly known, or whether their role in public life was deliberately adopted or not.

Other opinions

The dissenting opinions at the end of the report were interesting. That of Wikipedia founder Jimmy Wales was the starkest – “the recommendations to Google contained in this report are deeply flawed due to the law itself being deeply flawed” – as he entirely opposes the concept of a company being forced to adjudicate between free expression and privacy.

Frank La Rue, the former U.N. free speech rapporteur, also said this shouldn’t be down to Google, arguing that only a state authority should be establishing criteria and procedures for privacy protection. La Rue also criticized the scope of EU data protection law itself, saying data should only be removed or delisted if it “is malicious, is false, or produces serious harm to an individual.”

Overall, I think the report is an important document. There are of course many reasons to criticize the process that led to its drafting – it was done according to Google’s terms and timescale, and under the misleading banner of the “right to be forgotten” – and some of its recommendations don’t actually gel with current EU law.

However, I think it’s fair to say the council members were independent-minded and not all singing from the same hymn sheet. Ultimately, as a counterpart to the Article 29 Working Party’s more legalistic set of recommendations (that is their job after all), this was a valuable exercise in chewing over the deeper implications of that CJEU ruling.

Report of the Advisory Committee to Google on the Right to Be Forgotten

[The full report is available via Scribd: https://www.scribd.com/embeds/254900585/content]

Facebook faces fight in Europe over new privacy policy

Last week Facebook rolled out a new privacy policy that allows the sharing of data between its various services, such as Instagram and the Atlas ad unit, and the tracking of users across much of the web. At the time, Hamburg’s data protection chief said he was preparing to coordinate with counterparts across Europe to see what might need to be done about this.

Now, according to IDG, EU data protection officials have formed a task force to deal with the matter, on the basis that Facebook’s new policy may well contravene European privacy laws.

The privacy policies of the big U.S. web giants, which make their money by tracking users in great detail so as to sell their profiles to advertisers, have long been a sore point in the EU. On Friday Google and the U.K.’s Information Commissioner’s Office (ICO) announced a settlement to a long-running investigation over that company’s policy – Google will give users more information about how their data is collected and shared between services, and perhaps a little more control over how this happens.

This will apply across the world, not just in the U.K., but it remains to be seen whether it will mollify regulators in continental Europe who have spent the last couple years fining Google over its practices. For one thing, the U.K. settlement measures don’t seem to include an explicit opt-in for the sharing of personal data across services, as privacy officials in other EU countries had demanded.

According to IDG, the regulators are now examining several aspects of the behavior allowed by Facebook’s new policy: its off-site tracking of users across sites and apps that are connected to Facebook services, its sharing of data with third parties, its use of personal information and images for commercial purposes, and again the general lack of explicit opt-in user consent for much of this.

Facebook’s new terms aren’t quite the unified privacy policy that Google created — there’s still a data wall between WhatsApp and Facebook, for one thing — but the effects are broadly similar when it comes to mixing and matching personal data between Facebook’s units. In the cases of both Facebook and Google, those units have surprisingly extensive reach across the web and apps.

Here are a few of the key passages in Facebook’s policy:

We collect information when you visit or use third-party websites and apps that use our Services (like when they offer our Like button or Facebook Log In or use our measurement and advertising services). This includes information about the websites and apps you visit, your use of our Services on those websites and apps, as well as information the developer or publisher of the app or website provides to you or us.
Information from third-party partners.

We receive information about you and your activities on and off Facebook from third-party partners, such as information from a partner when we jointly offer services or from an advertiser about your experiences or interactions with them.

Sharing With Third-Party Partners and Customers: We work with third party companies who help us provide and improve our Services or who use advertising or related products, which makes it possible to operate our companies and provide free services to people around the world.

This is all quite similar to what Google does, and the reaction seems set to follow a similar course. In Google’s case, the regulators also banded together to coordinate assaults on a national level. With regulators in Belgium, the Netherlands and now Germany already sniffing around Facebook’s new privacy policy, the company probably has a substantial fight on its hands.

Facebook said in an emailed statement:

We recently updated our terms and policies to make them more clear and concise, to reflect new product features and to highlight how we’re expanding people’s control over advertising. We’re confident the updates comply with applicable laws. As a company with international headquarters in Dublin, we routinely review product and policy updates – including this one – with our regulator, the Irish Data Protection Commissioner, who oversees our compliance with the EU Data Protection Directive as implemented under Irish law.

PS – If you want to opt out of some of the tracking permitted through the new Facebook privacy policy, here’s the relevant settings page.

This article was updated at 7am PT to include Facebook’s statement.

Google to give all users clearer information about data use

Google has vowed to revise its privacy policy and account settings, in order to make it clearer to people what it does with their data and give them more control. This comes as part of a settlement with the U.K. Information Commissioner’s Office, announced on Friday, but the changes will apply globally.

The ICO and other data protection regulators across the EU have been coordinating a crackdown on Google’s practices since 2012, when the company introduced a new unified privacy policy. The unified policy allowed Google to mix and match personal data across its various services – between YouTube and Search, for example. However, many people did not, and still do not, appreciate what this means in terms of user profiling.

Google has faced repeated fines over its refusal to change the policy in countries such as France, Italy and Germany, but the sums involved were chickenfeed for a company of Google’s girth. The U.K.’s ICO hasn’t fined Google in this way, but has repeatedly said that Google’s settlement proposals didn’t go far enough.

Now this long-running drama may be drawing to a close. On Friday the ICO triumphantly brandished an undertaking in which Google said it would do the following things during the next two years:

  • Make its privacy policy easier to find, and be clearer in that policy about what user information it processes and why.
  • Provide users with “information to exercise their rights” and launch a redesigned version of its account settings to give them more control.
  • Add two provisions from the Google terms of service to the privacy policy, regarding email data and the “shared endorsement” feature.
  • Add to the privacy policy information about “the entities that may collect anonymous identifiers on Google properties and the purposes to which they put that data.”
  • “Take several measures” to tell passive users – those using third-party services that are plugged into Google services, such as advertising – more about what’s happening with their data. Those running the third-party services will also need to “obtain the necessary consents” for this data collection.
  • “Enhance its guidance for employees regarding notice and consent requirements.”

Google also said it would continuously evaluate the privacy impact of future changes to its services and keep users informed, especially where the changes “might not be within the reasonable expectations of service users.” Particularly significant changes to the privacy policy will be “reviewed by user experience specialists and with representative user groups before the policy and associated tools are launched as appropriate.”

The changes will make sure Google is compliant with the U.K. Data Protection Act, which is based on European law. It is not yet clear whether this is the end of the matter as far as the other EU data protection authorities are concerned — I understand that the changes will apply in all countries around the world, though.

Here’s what ICO enforcement head Steve Eckersley said in a statement:

Google’s commitment today to make these necessary changes will improve the information UK consumers receive when using their online services and products.

Whilst our investigation concluded that this case hasn’t resulted in substantial damage and distress to consumers, it is still important for organisations to properly understand the impact of their actions and the requirement to comply with data protection law… This investigation has identified some important learning points not only for Google, but also for all organisations operating online, particularly when they seek to combine and use data across services.

Although the list of commitments is fairly comprehensive, some terms are vague and the proof may lie in the implementation. For example, the EU privacy watchdogs previously demanded that users get the opportunity to “choose when their data are combined, for instance with dedicated buttons in the services.” That’s not merely a matter of giving users “information to exercise their rights”, so it will be interesting to see what the redesigned account settings entail.

So far, Google has merely said:

We’re pleased that the ICO has decided to close its investigation. We have agreed improvements to our privacy policy and will continue to work constructively with the Commissioner and his team in the future.

Even if this does indicate a conclusion to the unified privacy policy saga, Google still faces major regulatory headaches in Europe. These include the big search antitrust case – tied in with digital agenda commissioner Günther Oettinger’s apparent desire to extend a version of the “Google tax” copyright levy across Europe – and a potential second antitrust case over Android.

Still, one at a time, eh?

This article was updated at 8.15am PT to note that the changes will apply globally.

Internet of things needs global privacy push, says UK regulator

The U.K. telecommunications regulator Ofcom has called for international industry standards on privacy in the internet of things.

On Tuesday the regulator published an outline of its approach to the developing internet of things, largely based on responses to a call for input that it made last year. It noted that “stakeholders” had identified data privacy and consumer literacy as their primary areas of concern.

“We have concluded that a common framework that allows consumers easily and transparently to authorize the conditions under which data collected by their devices is used and shared by others will be critical to future development of the IoT sector,” Ofcom wrote. “If users do not trust that their data is being handled appropriately there is a risk that they might withhold its use.”

Respondents had said that existing U.K. data protection legislation would be appropriate for regulating the internet of things, though not necessarily a cure-all. They also favored industry-led approaches to keeping consumers in control.

However, Ofcom wrote:

We consider that these approaches should ideally be agreed internationally where possible, so as not to inhibit sale and use of IoT devices and services across international boundaries…

Data captured in one country may be processed or stored in another and different countries may have different data privacy regimes. Addressing such differences will be particularly important if manufacturers market their IoT devices in multiple countries.

The regulator also pointed out that, because many connected devices may not have a traditional screen-and-keyboard-based user interface, users might “not know that their data is being collected, shared and processed and may find it harder to make an informed choice about whether to share their data.”

Many respondents had backed the idea of a common framework based on simple categories of data sharing, for example “unshared, shared only with the service provider or shared with everyone.” However, Ofcom noted that there was “little evidence” of such a standardized system coming out of current industry efforts.
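
As a sketch of what such a common framework could look like in practice, the snippet below models the three sharing categories respondents described as a simple shared vocabulary that a device might declare for each data stream. The names and structure are assumptions for illustration, not anything proposed by Ofcom or industry.

```python
# Illustrative sketch of standardized data-sharing categories for IoT devices,
# based on the three levels respondents described; names are assumptions.
from enum import Enum

class SharingLevel(Enum):
    UNSHARED = "unshared"                       # data never leaves the device or home hub
    SERVICE_PROVIDER_ONLY = "service_provider"  # shared only with the device's own service provider
    EVERYONE = "everyone"                       # shared publicly

# A device could declare one level per data stream, e.g. a smart thermostat:
thermostat_policy = {
    "room_temperature": SharingLevel.SERVICE_PROVIDER_ONLY,
    "occupancy": SharingLevel.UNSHARED,
}
```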

The regulator also wrote up its views on other aspects of the internet of things. On capacity, it suggested that there was probably enough spectrum available and being freed up for now, but it would monitor developments. It said addressing would come down to bespoke systems or IPv6, and it would keep an eye on the IPv6 migration process.

Regarding the network security and resilience aspects of the explosion in connected devices, Ofcom said it would expand its work in this area and cooperate “where appropriate” with regulators in other sectors. On privacy, Ofcom will also need to work closely with the U.K. Information Commissioner’s Office, the government and industry.

P.S.: It goes without saying that privacy concerns will be a hot topic across many of the sessions at our upcoming Structure Data conference, which will take place March 18–19 in New York. In particular, don’t miss Jeff Roberts’s chat with FTC commissioner Julie Brill on the Wednesday — this issue is most definitely on the agenda in the U.S.

Smog computing: Why tech biz should learn from past regulations

Watching the President’s State of the Union address, and his call upon Congress to enact legislation to better protect our personal information, I started wondering how much good we’ve actually gotten from the regulatory compliance standards already in place, like HIPAA and Sarbanes-Oxley. I knew I couldn’t dismiss them as useless, but I wanted to look for a model of regulatory compliance that has been steeping longer, from which I could draw some parallels to the current state of information privacy.

California’s air quality law — and subsequent state and federal legislation — was an acknowledgement that progress comes at a price. But environmental regulations are themselves an admission that we cannot completely eliminate the dangers associated with modernity; they are instead an attempt to mitigate the risks.

Los Angeles was still something of a sleepy outpost on America’s burgeoning West Coast when oil was discovered there in the 1890s. According to the 1900 U.S. Census not many more than 100,000 people called L.A. home. The oil boom and a growing film industry attracted enough people through the 1910s that the population quintupled to over a half-million by 1920. Growth continued during the next two decades when the concurrent Great Depression and Dust Bowl catastrophes touched off a westward migration that saw hundreds of thousands of people move from America’s Midwest and relocate to California in search of work.

In 1943, emissions from industry and automobiles produced the first reported smog in the city. By 1947, Los Angeles’ toxic air had become so problematic that Governor Earl Warren signed the Air Pollution Control Act, thus beginning the age of environmental law.

You might say the Air Pollution Control Act was the first piece of regulation that endeavored to protect people from “the cloud.”

Information and cloud security can take a lesson (and solace) from the pages of environmental law. Despite early attempts at regulation, air quality grew far worse before it got better. We’ve learned more about the dangers and how to better reduce their effects, and so it is with protecting and managing data.

Protecting data in the cloud

As with environmental stewardship, there should be laws in place that create incentives for implementing strong data security practices in the context of cloud adoption. Once again California took the lead with the passage of the landmark data breach notification law SB1386 in 2002. More state and federal laws have followed, including HIPAA/HITECH, Gramm-Leach-Bliley, Sarbanes-Oxley, Massachusetts 201 CMR 17, PCI DSS and others.

The problem is that the legislative process unfolds based on the lessons of the past while technology advances with an eye toward the future. Attempts to write novel law that anticipates and remedies the unknown and adverse effects of technological innovation can have unforeseen and detrimental consequences, such as discouraging further innovation or the adoption of needed innovations.

Reliance on regulation can also have the effect of directing resources inefficiently — to appease auditors rather than address problems that need solving. In fact, the nature of the third party audit trade itself would seem to reward a pursuit of repeat business rather than effective compliance.

At the same time, CIOs are paradoxically chartered to be both compliant (ticking the box) and innovative (thinking outside the box). Software-as-a-service (SaaS) adoption is a good example of this double standard. Business units demand tools like Salesforce.com, Marketo and SuccessFactors, but compliance teams and auditors raise red flags over lack of data governance and unclear privacy accountability. To outsiders looking in, the need to keep up with progress is self-evident, but doing so in the context of our current regulatory environment puts those who follow the rules at a clear disadvantage.

Looking back at air pollution regulations, California’s compliance standards actually made it more difficult for Californians to buy low-emission diesel cars in the previous decade, because the idea of a low-emission diesel vehicle was not considered by those who created the law. Therefore, in California, it was illegal to sell cars that polluted less than their standard gasoline-powered equivalents simply because they were “diesel powered.” Regulators eventually caught up to the times in 2012, when California passed the LEV (Low Emission Vehicle) III regulations.

It is urgent that CIOs regularly examine the impact of regulation on productivity because, unlike California, enterprise IT can’t afford to wait a decade for compliance to catch up to the needs of their business.