As FTC adds encryption to its website, government remains unsure on corporate use

The Federal Trade Commission’s website just got a whole lot safer for people to peruse after the government agency said Friday that it now supports HTTPS encryption. While it used to provide secure transport for the parts of the website that dealt with sensitive information like complaint data and email subscriptions, this is the first time that secure browsing covers the entire site, the FTC said.

When a website is secured through the HTTPS communication protocol, all data passed between the site and the person accessing it is encrypted using the SSL or TLS protocols. Basically, the person’s browser initiates a connection with the locked-down website and, through an exchange of encryption keys, all subsequent information is scrambled away from prying eyes.
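That exchange can be sketched with Python’s standard-library `ssl` module. This is an illustrative client-side sketch, not the FTC’s actual configuration; the host name is only an example, and the connection call is left commented out because it needs network access.

```python
import socket
import ssl

# A browser-like client starts from a context that verifies the
# server's certificate against trusted Certificate Authorities.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are checked
print(context.check_hostname)                    # True: the hostname must match

def negotiated_tls_version(host):
    """Open a TCP connection, perform the TLS handshake (cipher
    negotiation, certificate verification, key exchange), and report
    the protocol version that was agreed on, e.g. 'TLSv1.3'."""
    with socket.create_connection((host, 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()

# negotiated_tls_version("www.ftc.gov")  # requires network access
```

Everything written on the wrapped socket after the handshake completes is encrypted in transit.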

In theory, this process works fine, but as the latest FREAK bug demonstrates, there can be holes in the system, especially if the browsers or devices in question use weak encryption to speak to websites. In the case of FREAK, Android browsers using the OpenSSL library, Safari browsers using Apple’s TLS/SSL implementation and now all supported versions of Windows that use the Schannel security package (sorry, IE users) are vulnerable to hackers who can essentially weaken the encryption that takes place.
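FREAK worked by tricking vulnerable clients into accepting 1990s-era “export-grade” cipher suites whose keys are weak enough to crack. Modern TLS stacks let an application rule those suites out explicitly; here is a small sketch using Python’s `ssl` module and OpenSSL’s cipher-string syntax (on current OpenSSL builds the export suites are already compiled out, so this is belt-and-suspenders):

```python
import ssl

# Build a client context and explicitly exclude export-grade and
# unauthenticated cipher suites using OpenSSL cipher-string syntax.
context = ssl.create_default_context()
context.set_ciphers("HIGH:!EXPORT:!aNULL")

# Confirm that none of the suites the context will offer are export-grade.
enabled = [c["name"] for c in context.get_ciphers()]
print(all("EXPORT" not in name for name in enabled))  # True
```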

Still, many sites use HTTPS, as it is one of the most common tools to prevent eavesdroppers from snooping on website sessions. In the case of the FTC, it may seem like a no-brainer to add encryption, but the U.S. government hasn’t always shown support for encryption technology, especially when it comes to tech companies and mobile-device makers who use the tech to mask data.

Both the U.S. and U.K. governments have made it clear they feel that companies using encrypted communications can impede government investigations, and even the Chinese government has jumped on the bandwagon with a proposed law that would require tech companies to hand over their encryption keys.

Ironically, a leaked U.S. report on cyber threats explained that encryption technology is the “[b]est defense to protect data,” which shows that the U.S. government hasn’t quite made up its mind on where it sees encryption technology. If it protects consumers from spying eyes as in the case of the FTC website, then that’s great, but if the government perceives that the technology may prevent it from doing its job, it’s a no-go.

Either way, the corporate sector shows no signs of slowing down when it comes to developing new businesses around encryption, with recent funding rounds for encryption-centric startups like CipherCloud and Ionic Security.

The U.S. government itself still has a long way to go. Many .gov sites, including whitehouse.gov and the websites of the U.S. Department of Education, the U.S. Department of the Treasury and NASA, remain unencrypted. So expect this tug-of-war between the need to protect consumers and the government’s desire to scan encrypted company data during investigations to continue.

Why 2015 is the year of encryption

During a visit to Silicon Valley earlier this month, President Obama described himself as “a strong believer in strong encryption.” Some have criticized the president for equivocating on the issue, but as “strong believers” ourselves, we’ll take him at his word. Obama isn’t alone; everyone is calling for encryption, from activists to engineers, and even government agencies tasked with cybersecurity.

In the past, using encryption to secure files and communication has typically only been possible for technically sophisticated users. It’s taken some time for the tech industry and the open source community to ramp up their efforts to meet the call for widespread, usable encryption, but the pieces are in place for 2015 to be a turning point.

Last fall, [company]Apple[/company] and [company]Google[/company] announced that the newest versions of iOS and Android would encrypt the local storage of mobile devices by default, and 2015 will be the year this change really starts to take hold. If your phone is running iOS 8 or Android Lollipop 5.0, photos, emails and all the other data stored on your device are automatically secure against rummaging by someone who happens to pick it up. More important, even the companies themselves can’t decrypt these devices, which is vital for protecting against hackers who might otherwise attempt to exploit a back door.

Of course the protection from these updated operating systems relies on user adoption, either by upgrading an old device or buying a new one with the new OS preinstalled. Gigaom readers might be on the leading edge, but not everyone rushes to upgrade. Based on past adoption trends, however, a majority of cell phone users will finally be running one of these two operating systems by the end of 2015. As the Supreme Court wrote last year, cell phones are a “pervasive and insistent part of modern life.” The world looks a whole lot different when most of those phones are encrypted by default.

There are two more developments involving encryption that might not make the front page this year, but they’re just as important as the moves by Apple and Google, if not more so.

First, this month saw the finalization of the HTTP/2 protocol. HTTP/2 is designed to replace the aging Hypertext Transfer Protocol (HTTP), which for almost two decades has specified how web browsers and web servers communicate with one another. HTTP/2 brings many modern improvements to a protocol that was designed back when dial-up was king, including compression, multiplexed data transfers, and the ability for servers to preemptively push content to browsers.

HTTP/2 was also originally designed to operate exclusively over encrypted connections, in the hope that this would lead to the encryption of the entire web. Unfortunately that requirement was watered down during the standards-making process, and encryption was deemed optional.

Despite this, Mozilla and Google have promised that their browsers will only support encrypted HTTP/2 connections—which means that if website operators want to take advantage of all the performance improvements HTTP/2 has to offer, they’ll have to use encryption to do so or else risk losing a very large portion of their audience. The net result will undoubtedly be vastly more web traffic being encrypted by default.
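Whether a server will speak HTTP/2 at all is itself negotiated inside the TLS handshake, via the ALPN extension, which is part of why encryption and HTTP/2 travel together in practice. A sketch with Python’s standard library (the host name is illustrative, and the call is commented out because it needs network access):

```python
import socket
import ssl

# Offer HTTP/2 ("h2") first, falling back to HTTP/1.1; the server
# picks one of these during the TLS handshake via the ALPN extension.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

def negotiated_protocol(host):
    """Return 'h2' if the server agreed to speak HTTP/2, 'http/1.1'
    otherwise (or None if the server doesn't support ALPN at all)."""
    with socket.create_connection((host, 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# negotiated_protocol("www.google.com")  # requires network access
```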

But as any sysadmin can tell you, setting up a website that supports encryption properly can be a huge hassle. That’s because in order to offer secure connections, websites must have correctly configured “certificates” signed by trusted third parties, or Certificate Authorities. Obtaining a certificate can be complicated and costly, and this is one of the biggest issues standing in the way of default use of HTTPS (and encrypted HTTP/2) by websites.

Fortunately, a new project launching this summer promises to radically lower this overhead. Let’s Encrypt will act as a free Certificate Authority, offering a dramatically sped-up certificate process and putting implementation of HTTPS within the reach of any website operator. (Disclosure: Our employer, the Electronic Frontier Foundation, is a founding partner in Let’s Encrypt.)

Of course there are sure to be other developments in this Year of Encryption. For example, both Google and Yahoo have tantalizingly committed to rolling out end-to-end encryption for their email services, which could be a huge step toward improving the famously terrible usability of email encryption.

Finally, we’d be accused of naiveté if we didn’t acknowledge that despite President Obama’s ostensible support, many high-level law enforcement and national security officials are still calling for a “debate” about the balance between encryption and lawful access. Even putting aside the cold, hard fact that there’s no such thing as a “golden key,” this debate played out in the nineties in favor of strong encryption. We’re confident that in light of the technical strides like the ones we’ve described, calls for backdoored crypto will come to seem increasingly quaint.

Andrew Crocker is an attorney and fellow at the Electronic Frontier Foundation. Follow him on Twitter @AGCrocker.

Jeremy Gillula is a staff technologist at the Electronic Frontier Foundation. Prior to EFF, Jeremy received his doctorate in computer science from Stanford, and a bachelor’s degree from Caltech.

Here’s a draft of the consumer privacy “Bill of Rights” act Obama wants to pass

The White House has released a draft of the Consumer Privacy Bill of Rights Act of 2015. It outlines the steps companies need to take to tell people what data they’re collecting and what they’re doing with that information. It also suggests data opt-out options for identifying details like email addresses and passport numbers. The bill also mandates that companies give people information about how they store the data they collect, for how long, and how consumers can view those details.

Here are some of the key points:

  1. Employee data is excluded from these disclosure requirements, as is information collected to fend off a cybersecurity threat.
  2. Companies must tell people whom they can contact at the organization regarding any privacy questions.
  3. If a person withdraws his or her consent for data collection, the company has 45 days to delete the specific information on the user.
  4. Companies need to thoughtfully design their privacy notifications for users, considering everything from the size of the device displaying the notification to the timing when these notices appear.
  5. Companies must delete user data after it has fulfilled its purpose.
  6. Small businesses with five or fewer employees are exempt from these requirements.

There’s plenty more to parse in the 24-page document, which is chock-full of words like “clear,” “transparent” and “individual control.” Some politicians have already spoken out against the draft, saying it puts consumers at risk by lessening, not increasing, their protections. The Hill reported that one Democratic official argued that the White House’s bill takes away some of the Federal Trade Commission’s power to prosecute companies that abuse consumer data.

This is an early draft that will go through several revisions before being voted on by the House and Senate. If passed, it will hold tech companies accountable for the way they treat consumer privacy. The FTC, state attorneys general, and people who use the technology will have the power to bring civil suits against organizations that violate the bill.

Law firms will start sharing security data to prevent attacks

It’s clear that big banks provide a lot of incentive for hackers to launch cyber attacks, given the amount of sensitive data they hold and the cash they oversee. But banks aren’t the only entities hackers are targeting. The law firms that represent financial institutions are also subject to attacks, and as a result a group of law firms is banding together to share security data in order to prevent attacks, according to a New York Times report.

The data held by law firms is a treasure trove for hackers because it includes some of the most secretive aspects of companies, including their business operations, deal making and legal disputes. However, the general public may not be aware of law firm hacks because the firms are private entities and don’t have to abide by the same set of rules as public companies, especially when it comes to disclosing their breaches.

The Times report states that both banks and law firms have been working to create a separate legal group that would be connected to the Financial Services Information Sharing and Analysis Center, which acts as the meeting ground where financial entities can share and analyze security related information. A similar group for law firms could form by the end of 2015.

Supposedly, a half-dozen law firms were hacked over the past couple of months, and the security company Mandiant has been working with these organizations on the breaches, the Times reports, citing an unidentified source.

There’s not a lot of information out there as to the specifics of the cyber attack, but the Times reports that Mandiant recently said during a conference that “many of the bigger hackings of law firms had ties to the Chinese government, which was seeking information on patent applications, trade secrets, military weapons systems and contract negotiations.”

Sharing security data between organizations appears to be a trend, with President Obama recently signing an executive order calling for businesses and the Federal Government to create some kind of hub where they can exchange information.

Additionally, [company]Facebook[/company] just released its own collaborative threat detection framework, which includes a number of tech companies pledging support, including Pinterest, [company]Yahoo[/company], [company]Twitter[/company] and Dropbox.

What separates the proposed law firm information-sharing group and Facebook’s threat-detection framework from what President Obama is calling companies to establish is the fact that, as far as we know, law enforcement will not be participating in either project. The White House wants the government to be a part of these data-sharing endeavors, under the premise that it has valuable information, but if organizations want that data, they’ll have to pony up their own.

But privacy concerns in light of the Edward Snowden leaks have caused tech companies to be wary of disclosing information to the government, and in a telling sign, Facebook, [company]Google[/company] and Yahoo chose not to participate in the White House’s Summit on Cybersecurity and Consumer Protection held at Stanford a few weeks ago.

Don’t let AT&T mislead you about its $29 “privacy fee”

This week AT&T got a lot of media attention for the expansion of its GigaPower service to Kansas City, announced on Monday. The news wasn’t so much about the expansion as about the ISP’s plan to offer a $29 per month discount for customers who let Ma Bell scan their web searches in exchange for targeted advertising. The pricing isn’t new, but Ars Technica noted it, as did the Wall Street Journal, and even our own Jeff Roberts wrote a post explaining why he thought it was a good idea that the company was putting an explicit price on privacy.

But $29 isn’t actually the price that AT&T charges per month for privacy. As I discussed back in May last year after I tried to sign up for AT&T’s GigaPower service to find out more about the pricing and the disclosures associated with the plan, the actual costs were closer to $44 or even $62 per month. This time around the price differentials are $44 for gigabit internet and $66 for HD TV and HBO Go plus gigabit internet.

Fact checking Ma Bell

To arrive at those prices I looked at the cost of the Internet Preference Plan (that’s the plan that monitors your web surfing) versus the Standard Plan for gigabit service and gigabit service plus TV. Gigabit service costs $99 per month under the Standard Plan, plus a $7 monthly modem rental fee and a $99 one-time activation fee, which, spread over a year, nets out to a monthly cost of roughly $114. The Internet Preference Plan waives the one-time activation and monthly modem fees, which means you pay only $70 a month, giving privacy a true cost of $44 a month if you choose the privacy-preserving option.

[Screenshot: AT&T gigabit internet plan cost comparison]

On the video side the numbers are similar, as seen below. The Standard Plan has a higher base cost of $149 per month, plus the $7 monthly modem fee and a one-time $49 activation fee. Then you also add in a $10 monthly service fee for HD TV and a $16 monthly fee for HBO Go, both of which are included in the Internet Preference Plan. So the comparable plan nets out to about $186 a month, which is $66 more than the $120 you’d pay for letting AT&T sneak a peek at your home broadband web surfing habits.

[Screenshot: AT&T TV-plus-internet plan cost comparison]
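The arithmetic behind both comparisons can be checked in a few lines. One assumption of mine: the one-time activation fees are amortized over a 12-month term, which is how the monthly figures above were derived.

```python
def effective_monthly(base, modem=0.0, extras=0.0, activation=0.0, term_months=12):
    """Effective monthly cost once one-time fees are spread over the term."""
    return base + modem + extras + activation / term_months

# Gigabit internet: Standard Plan vs. Internet Preference Plan
standard = effective_monthly(99, modem=7, activation=99)    # ~114.25/month
preference = effective_monthly(70)
print(round(standard - preference))  # 44 -- the real monthly price of privacy

# Gigabit internet + HD TV + HBO Go
standard_tv = effective_monthly(149, modem=7, extras=10 + 16, activation=49)
preference_tv = effective_monthly(120)
print(round(standard_tv - preference_tv))  # 66
```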

AT&T also makes it tough to find the alternative to its Internet Preferences plan. You have to read the fine print and click to search for options that don’t include the AT&T Internet Preferences plan (they don’t call it something straightforward like the “Ma Bell’s Watching You Plan”). See underlined item below for where to click.

[Screenshot: AT&T ordering page, with the privacy opt-out link underlined]

AT&T doesn’t make this easy

So it’s tough to avoid the spying plan, but it’s even tougher to actually fact check AT&T to discover the associated fees that make the cost of privacy so much higher than the advertised $29 a month. As part of looking for the pricing this morning to see if anything had changed, I ran into another issue that’s almost as frustrating as AT&T’s misleading number and the media’s acceptance of it.

Uncovering AT&T’s actual pricing requires you to have a legitimate address in Ma Bell’s service area that currently doesn’t have AT&T service. For me, this meant finding a friend who had GigaPower service, getting their address and then using Google Maps to plug in addresses until I found one that worked with the ordering system so I could check pricing.

That’s a significant hurdle to price comparison for the media, for activists, for regulators and really for anyone interested in understanding what broadband costs in the U.S. AT&T isn’t alone in this practice. Almost every ISP has a similar hurdle, in part because they charge different rates in different markets and in part because they offer different services based on addresses. At a minimum, ISPs should post pricing for their services in each market on their websites up front, before requiring an availability check.

As it stands now, pricing for broadband is so complicated and dependent on contracts, various fees and options that you must go all the way through an order before you understand what your bill will actually be. This makes it hard to compare pricing with what is often the single other competitor in the market, and the contracts often lock the consumer into the ISP for a year or two, further reducing competition.

Someone call the FCC and FTC

So in this case, the media may be lauding AT&T for putting a $29 monthly price on the value of consumer privacy. But when I look at the practice, I see a company with little competition manipulating consumers into giving up their privacy. Consumers do this not because they get a $29 discount, but because, after going through a fairly complicated sign-up process and managing to click the right button to even see the option to protect their privacy, they suddenly realize that keeping their privacy costs not $29 but $44 or even $66 per month.

That’s a very different story. And it’s one that AT&T makes it really difficult to report.

Obama’s executive order calls for sharing of security data

President Barack Obama signed an executive order on Friday designed to spur businesses and the Federal Government to share with each other information related to cybersecurity, hacking and data breaches for the purpose of safeguarding U.S. infrastructure, economics and citizens from cyber attacks. He signed the order in front of an audience at Stanford University during his keynote address for the White House’s Summit on Cybersecurity and Consumer Protection.

Obama’s speech started off relatively light-hearted with the President pointing out how much technological innovation could be traced back to Silicon Valley and Stanford and even joking that the big webscale companies of [company]Yahoo[/company] and [company]Google[/company] “were pretty good student projects.”

Things took a turn to the dark side, however, with Obama segueing into the devastation that modern-day technology can bring as exemplified by the major data breaches we’ve seen at Sony Pictures Entertainment and insurance provider Anthem.

The new executive order is supposed to help prevent future attacks, on the theory that companies have information related to data breaches that could be helpful to the Federal Government and vice versa.

“So much of our computer networks and critical infrastructure are in the private sector, which means government can’t do this alone,” Obama said. “But the fact is that the private sector can’t do it alone either, because it’s government that often has the latest information on new threats.”

With the new executive order, Obama wants both the private and public sector to create hubs where they can trade information with each other and respond to threats “in as close to real time as possible,” according to the executive order.

Obama emphasized at several points throughout his speech (and in the executive order itself) the need to balance privacy concerns with national security concerns, a hot topic that has privacy advocates worried that giving government access to business and personal data will lead to intelligence agencies overstepping their boundaries.

“I have to tell you that grappling with how the government protects the American people from adverse events, while at the same time making sure that government itself is not abusing its capabilities, is hard,” said Obama.

Indeed, this delicate line between privacy and security led to senior executives from Google, Yahoo and [company]Facebook[/company] declining to attend the security summit. It’s no secret there’s been bad blood between these companies and the U.S. government ever since the leaked Edward Snowden documents detailed the government’s data-collection methods as they relate to the tech giants.

Ironically, Facebook earlier this week revealed its own collaborative threat-detection framework, dubbed ThreatExchange, whose purpose is to provide an online hub (hosted by Facebook, of course) where companies can exchange security-related information in order to prevent further data breaches and hacks. Among the companies participating with Facebook on the project are Pinterest, Tumblr, [company]Twitter[/company] and Yahoo.

While ThreatExchange allows the trading of security data, it’s probably not exactly what the U.S. government is looking for, since it’s only available for businesses to tap into.

Whether the private sector wants to voluntarily disclose more information to the U.S. government in the name of security remains to be seen, but for the time being, it looks like companies are at least open to sharing information with each other sans government.

Researchers show a machine learning network for connected devices

Researchers at Ohio State University have developed a method for building a machine learning algorithm from data gathered from a variety of connected devices. There are two cool things about their model worth noting: the first is that the model is distributed; the second is that it can keep data private.

The researchers call their model Crowd-ML, and the idea is pretty basic. Each device runs a version of a necessary app, much like one might run a distributed computing application, and grabs samples of data to send to a central server. The server can tell when enough of the right data has been gathered to “teach” the computer and only grabs the data it needs, ensuring a relative amount of privacy.

The model uses a variant of stochastic (sub)gradient descent instead of batch processing to update the model, which is what makes the Crowd-ML effort different. Stochastic gradient descent is the basis for a lot of machine learning and deep learning efforts. It uses knowledge gleaned from previous computations to inform the next ones, making it iterative, as opposed to processing everything all at once.
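The per-sample idea can be sketched in a few lines of Python. This toy example fits a line one sample at a time rather than from the whole batch at once; it is illustrative only and is not Crowd-ML’s actual update rule, which handles privacy and device coordination on top of this.

```python
import random

def sgd_linear_fit(samples, lr=0.01, epochs=300):
    """Fit y ~ w*x + b using one (x, y) sample per update,
    the essence of stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(samples)          # visit samples in a fresh order
        for x, y in samples:
            err = (w * x + b) - y        # error on this single sample
            w -= lr * err * x            # step along the per-sample gradient
            b -= lr * err
    return w, b

random.seed(0)
data = [(x, 2 * x + 1) for x in range(10)]  # toy data from y = 2x + 1
w, b = sgd_linear_fit(data)
print(round(w, 1), round(b, 1))  # 2.0 1.0
```

Batch gradient descent would average the gradient over all ten samples before each step; the stochastic variant updates after every sample, which is what lets a server learn incrementally as data trickles in from devices.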

The paper goes on to describe how one can tweak the Crowd-ML model to ensure more or less privacy and process information faster or in greater amounts. It tries to achieve the happy medium between protecting privacy and gathering the right amount of data to generate a decent sample size to train the machine learning algorithm.

Crowd-ML is worth checking out because as we bring more connected devices into our homes, we also bring the promise that thousands of connected sensors and smart objects can help us use resources more efficiently, manage our health and even direct our traffic once we aggregate and analyze the data they hold. With that promise comes the risk of losing our privacy, but also the risk that the promise falls flat because sharing information costs us too much time or energy to make it real.

Having a distributed method of machine learning could be one step toward solving some of those issues. Already, we’re seeing plenty of research into alternative networks and architectures for data traversing the internet of things, so what’s one more option to mull as we rethink how we want to build a computing model for billions of always-on connected devices that aren’t managed by a human?