How do you manage or grant identity in a decentralized network?

There are several efforts to build a more decentralized internet — from attempts to build peer-to-peer wireless networks like Commotion or Open Garden to similar efforts with P2P browsers, such as BitTorrent’s Project Maelstrom. And thanks to blockchain, the decentralized data transaction processing technology behind Bitcoin, we’re now hearing more about how we can build a decentralized network for the internet of things.

This isn’t just some geeky effort to test technology or to fight “the man.” There are legitimate business and practical reasons one might want to operate a peer-to-peer IoT network, as Eric Jennings, the CEO of Filament, explains on this week’s podcast. Jennings’ company is building a module for industrial customers that combines a sensor with connectivity, which can link the sensor to a cloud if customers want it but, more interestingly, to a mesh network of other sensors.

Filament is developing a group of technologies that will allow companies to deploy these sensors in the field without needing some kind of cellular or wireline backhaul to the internet. This is great for remote locations, where such connectivity might be hard to come by but machines still need to communicate with each other, or for people toting tablets. Another reason decentralization works is that it cuts down on the costs of sending and storing data in the cloud.

While those costs are low, it may not make sense to store every bit sent over the 15-year life of a washing machine in the cloud, especially if manufacturers can’t come up with a business model that keeps the lights on in that data center. But the most interesting element of my conversation with Jennings came after we discussed the pros and cons of the first three elements of his technology stack — blockchain, Telehash and BitTorrent — and dove into the tricky business of managing a device’s identity.

Jennings said the question his team was trying to answer was, “How could we have DNS for these devices without servers?” The answer was blockname, a proposal that uses the existing domain name system but falls back to a decentralized lookup when public DNS fails to find the device in question.

In the blockname trial Filament has set up, when public DNS fails, the request is routed via Telehash to the blockchain, which is then searched for the requested device’s IP address (or other identifying number). I asked Jennings who handles the initial registration of a device in the blockchain, because in a decentralized environment, getting the correct, authenticated version of a device’s identity could be a challenge.
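The fallback flow described above can be sketched in a few lines. This is a minimal illustration, not Filament’s code: the hostname and the in-memory dictionary standing in for the blockchain-backed registry are hypothetical, and the real system would route the fallback query over Telehash rather than consult a local table.

```python
import socket

# Hypothetical stand-in for the blockchain-backed registry; in the real
# system this lookup would travel over Telehash to the shared chain.
BLOCKCHAIN_REGISTRY = {
    "sensor-0042.invalid": "10.20.30.42",
}

def resolve(hostname: str) -> str:
    """Try public DNS first; fall back to the decentralized registry."""
    try:
        # Step 1: an ordinary DNS lookup.
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        # Step 2: public DNS came up empty; consult the shared registry.
        try:
            return BLOCKCHAIN_REGISTRY[hostname]
        except KeyError:
            raise LookupError(f"{hostname} not found in DNS or registry")
```

The point of the design is that existing DNS keeps working unchanged; the decentralized path only kicks in when the centralized one has nothing to say.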

Jennings isn’t sure yet, but he hopes the consensus mechanism of blockchain technology will help: basically, if people disagree with you, they need more computing power than you to change the status quo. So to get the details on blockname, which starts at about 46 minutes in (Jennings’ section starts at 22:00), or just to hear Kevin Tofel and me riff on privacy and Samsung’s smart TVs, listen to the show and stretch your brain. When you’re done, feel free to send us a question before the end of the month and you’ll be entered to win a Chamberlain MyQ connected garage door opener.


Let’s learn about blockname, a decentralized version of DNS


This weekend brought us a panic about Samsung TVs listening to every word people say in their presence. A little reporting showed that to be overblown, so Kevin Tofel and I discussed how voice recognition worked right now across devices in the smart home such as the Amazon Echo, Ubi and Google’s phones. We also did a deeper dive into what security and privacy should look like with connected devices and what you should be worried about.

Eric Jennings, CEO of Filament.


Then we shifted gears to our guest, Eric Jennings, the CEO of Filament, a company building sensor modules for industrial customers. What’s most interesting about Filament isn’t the sensors but its plan to create a decentralized version of the internet of things for those sensor modules to run on. So while it offers a cloud, Jennings and I dove deep into how Filament is building a series of technologies that includes the blockchain, Telehash, BitTorrent and a new effort called blockname to create a decentralized network for IoT. It resembles, but differs from, IBM’s similar efforts. So tune in to learn more. It’s pretty awesome.

Hosts: Stacey Higginbotham and Kevin Tofel
Guests: Eric Jennings, CEO of Filament

  • The truth about Samsung’s connected TVs and your privacy
  • What you should really worry about in the smart home when it comes to security and privacy
  • More on building a decentralized internet of things with blockchain and Telehash
  • What role could BitTorrent play in this decentralized stack?
  • Introducing blockname, a way to build a decentralized version of DNS


Internet of Things Show RSS Feed

Subscribe to this show in iTunes

Download This Episode



Netflix exec: company hasn’t changed policy on blocking VPN users

Netflix isn’t cracking down on foreign users utilizing VPNs to access the company’s streaming service, or at least not more than it has always done, according to Netflix’s Chief Product Officer Neil Hunt. “We haven’t changed our VPN policy at all,” said Hunt during a CES press briefing in Las Vegas on Tuesday.

Hunt’s remarks followed reports that Netflix had started to crack down on VPN users at the request of movie studios. Users in countries where [company]Netflix[/company] hasn’t officially launched yet have long used VPNs to bypass geo-blocking mechanisms that would prevent them from accessing Netflix’s streaming service, by pretending that their computer resided in the U.S.

Recently, an increased number of VPN users have complained online that they can’t access Netflix anymore, but Hunt said Tuesday that this has nothing to do with any stricter blocking rules. Instead, Netflix’s Android app now queries Google’s DNS service if a user’s default DNS service times out. That means that if a VPN service doesn’t answer a DNS request in time, the app automatically gets local DNS information from Google, locking out users who aren’t in a Netflix market.
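The mechanism Hunt described can be modeled abstractly. This is a speculative sketch of the behavior as reported, not Netflix’s code: the resolver functions and the example IP address are invented for illustration.

```python
def resolve_with_fallback(hostname, primary_resolver, fallback_resolver, timeout=2.0):
    """If the configured (e.g. VPN-provided) resolver times out,
    fall back to a public resolver such as Google's 8.8.8.8."""
    try:
        return primary_resolver(hostname, timeout)
    except TimeoutError:
        # The fallback resolver sees the client's real network location,
        # so geo-blocking applies to the user's actual region.
        return fallback_resolver(hostname, timeout)

# Simulated resolvers (assumptions, purely for demonstration):
def slow_vpn_resolver(hostname, timeout):
    raise TimeoutError("VPN DNS did not answer in time")

def google_resolver(hostname, timeout):
    return "203.0.113.10"  # documentation-range IP standing in for a real answer
```

The takeaway is that no new blocking rule is needed: a slow VPN resolver alone is enough to hand the lookup to Google, which answers based on where the user really is.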

Hunt added that Netflix has long used “the same VPN block list that everyone else uses,” and that it can only do so much to prevent users from accessing the service from abroad.

This post was updated at 5:28pm to clarify that Netflix only queries Google DNS with its new mobile Android app.


How the internet’s engineers are fighting mass surveillance

The Internet Engineering Task Force has played down suggestions that the NSA is weakening the security of the internet through its standardization processes, and has insisted that the nature of those processes will result in better online privacy for all.

After the Snowden documents dropped in mid-2013, the IETF said it was going to do something about mass surveillance. After all, the internet technology standards body is one of the groups that’s best placed to do so – and a year and a half after the NSA contractor blew the lid on the activities of the NSA and its international partners, it looks like real progress is being made.

Here’s a rundown on why the IETF is confident that the NSA can’t derail those efforts — and what exactly it is that the group is doing to enhance online security.

Defensive stance

The IETF doesn’t have members as such, only participants from a huge variety of companies and other organizations that have an interest in the way the internet develops. Adoption of its standards is voluntary and as a result sometimes patchy, but they are used – this is a key forum for the standardization of WebRTC and the internet of things, for example, and the place where the IPv6 communications protocol was born. And security is now a very high priority across many of these disparate strands.

[pullquote person=”Jari Arkko” attribution=”Jari Arkko, IETF chair” id=”903271″]Fortunately we decided we should have strong encryption[/pullquote]As IETF chair Jari Arkko told me, if previous battles over the inclusion of encryption in the internet protocol set hadn’t been won by those advocating greater security – their opponents were governments, of course – then using the net would be a riskier business than it currently is. “Fortunately we decided we should have strong encryption, and I do not know what would have happened if we did not make that decision at the time,” he said, pointing to e-commerce and internet banking as services that may never have flourished as they have.

With trust in the internet having been severely shaken by Snowden’s revelations, the battle is back on. In May this year, the IETF published a “best practice” document stating baldly that “pervasive monitoring is an attack.” Stephen Farrell, one of the document’s co-authors and one of the two IETF Security Area Directors, explained to me that this new stance meant focusing on embedding security in a variety of different projects that the IETF is working on.

As Arkko put it:

I think a lot of the emphasis today is on trying to make security a little more widely deployed, not just for special banking applications or websites where you provide your credit card number, but as a more general tool that is used for all communications, because we are communicating in insecure environments in many cases — cafeteria hotspots and whatever else.

On Sunday, Germany’s Der Spiegel published details of some of the efforts by the NSA and its partners – such as British signals intelligence agency GCHQ — to bypass internet security mechanisms, in some cases by trying to weaken encryption standards. The piece stated that NSA agents go to IETF meetings “to gather information but presumably also to influence the discussions there,” referring in particular to a GCHQ Wiki page that included a write-up of an IETF gathering in San Diego some years ago.

The report mentioned discussions around the formulation of emerging tools relating to the Session Initiation Protocol (SIP) used in internet telephony, specifically the GRUU extension and the SPEERMINT peering architecture, adding: “Additionally, new session policy extensions may improve our ability to passively target two sides communications by the incorporation of detailed called information being included with XML imbedded [sic] in SIP messages.”


“The IETF meeting trip report mentioned in [the] Spiegel article reads like any boring old trip report, but is of course a bit spooky in that context,” Farrell told me by email (the piece came out after my initial interviews with Farrell and Arkko). “Hopefully intelligence agencies will someday realise that their efforts would be far better spent on improving internet security and privacy. In the meantime, their pervasive monitoring goals are part of the adversary model the IETF considers when developing protocols.”

[pullquote person=”Jari Arkko” attribution=”Jari Arkko, IETF chair” id=”903272″]IETF is committed to finding out all weaknesses and dealing with them[/pullquote]Arkko, meanwhile, said: “The IETF’s open processes, broad review, and open standards provide strong foundations against both unintentional and intentional mistakes and weaknesses in internet protocols. There is obviously no guarantee that there are no unknown weaknesses in internet technology, but the IETF is committed to finding out all weaknesses and dealing with them to the best of our ability.”

Those open processes were apparently enough to, around a year ago, ensure the failure of a campaign to oust an NSA employee from the panel of an IETF working group that deals with cryptographic security. So, if its processes are to be trusted, what exactly can we expect from the IETF in combating mass surveillance by such agencies?

Fundamental rethink

Snowden’s revelations prompted a fundamental rethink within the IETF about what kind of security the internet should be aiming for overall. Specifically, the IETF is in the process of formalizing a concept called “opportunistic security” whereby — even if full end-to-end security isn’t practical for whatever reason — some security is now officially recognized as being better than nothing.

“One thing the IETF did wrong in the past is we tried to get you to be either ‘no security’ or ‘really fantastic security’,” Farrell explained. “Typically, until recently you had no choice but to run either no crypto or the full gold-plated stuff, and this slowed down the deployment of cryptographic security mechanisms. The idea of opportunistic security design is that, each time you make a connection, you’re willing to get the best security that you can for that connection.”

So, for example, a provider of a certain service may decide to turn on encryption even if they can’t authenticate the client device. As Farrell put it, these “in-between states are well defined now.”
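The policy Farrell describes boils down to a simple decision rule: take the strongest protection the connection supports rather than treating anything short of full authentication as failure. A minimal sketch of such a rule follows; the function name and mode labels are my own shorthand, not IETF terminology.

```python
def choose_security(peer_offers_tls: bool, can_authenticate: bool) -> str:
    """Opportunistic security policy: best available, never all-or-nothing."""
    if peer_offers_tls and can_authenticate:
        return "authenticated-encryption"    # the full, gold-plated option
    if peer_offers_tls:
        return "unauthenticated-encryption"  # defeats passive mass surveillance
    return "cleartext"                       # last resort, not an error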

He noted how web giants such as Facebook and Google have stepped up mail-server-to-mail-server encryption in the wake of Snowden. Facebook sends a lot of emails to its users and, according to Farrell, 90 percent of those are now encrypted between servers. Google has also done a lot of work to send encrypted mail to more providers. “This doesn’t prevent targeted attacks – man-in-the-middle is still possible in a lot of cases, but you can at least get halfway,” he said, adding that this may be enough to dampen pervasive surveillance.


Farrell noted:

My personal belief is that, if you get halfway, it’s much easier to get the second half. I’ve seen really large mail domains turn on the crypto, and some say they can’t see a change in CPU use. Now the next step is getting good certificates in place, getting good administration. It’s easier than going from zero to the end.

One experimental draft that Farrell is working on would see opportunistic security added to the Multiprotocol Label Switching (MPLS) transport mechanism used in core telecommunications networks, “just above the fiber.” This is some way off happening, if indeed it works out at all – it’s dealing with extremely high bitrates and would require implementation in hardware. But, as Farrell noted, it shows how the IETF is working on adding encryption to all layers of the stack.

“The MPLS issue will probably take years before we see progress, but when we do see progress it will have significant impact quickly,” he said. “One reason I understand people are interested in this is because it might be a direct mitigation for some of the fiber-tapping cases that have been reported. Even partial deployment could be quite significant.”

New versions

HTTP/2, currently being finalized by the IETF and the World Wide Web Consortium (W3C), is on the way, and it will support the padding of traffic so as to make it harder for spies to draw inferences from packet size. This will mean the addition of a few bytes here and there, which may have an impact on latency if badly executed, so that’s a challenge for both the IETF and the standard’s implementers.
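The size-hiding idea behind padding can be shown with a toy example. This is only an illustration of the principle, not the HTTP/2 wire format (real HTTP/2 frames carry an explicit pad-length octet and a flag); the 64-byte bucket size is an arbitrary choice for the sketch.

```python
BUCKET = 64  # illustrative bucket size; not specified by HTTP/2

def pad_frame(payload: bytes) -> bytes:
    """Round the payload up to the next bucket boundary with zero bytes,
    so an eavesdropper sees only coarse size classes, not exact lengths."""
    pad_len = (-len(payload)) % BUCKET
    return payload + bytes(pad_len)
```

A 5-byte message and a 60-byte message both go on the wire as 64 bytes, so packet sizes leak far less about what was said, at the cost of a few extra bytes per frame.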

The IETF is also officially killing off RC4, a cipher used in the Transport Layer Security (TLS) protocol that supposedly provides the security behind the “https” you see denoting secure connections in web addresses. RC4 is now known to be vulnerable to attack. (For that matter, TLS’s security is also up for debate – Sunday’s Spiegel article suggested the NSA and GCHQ were able to decrypt TLS sessions by stealing their keys.)

Farrell noted that TLS 1.3 should be fully-baked sometime in 2015, making it faster and more attractive to implement, and it would incorporate heftier changes than those made in previous iterations. One planned change will involve turning on encryption earlier in the “handshake” process, where the client and server exchange keys, so as to counter monitoring of the handshake contents.

Meanwhile, a separate working group is trying to develop a new DNS Private Exchange (DPRIVE) mechanism to make DNS transactions – where someone enters a web address and a Domain Name System server translates it to a machine-friendly IP address – more private.

[pullquote person=”Stephen Farrell” attribution=”Stephen Farrell, IETF Security Area Director” id=”903273″]Thinking about confidentiality for DNS was so off the table for the last few decades[/pullquote]”Some privacy-sensitive information can be exposed through DNS,” Farrell explained, citing the example of a web address that refers to an embarrassing disease – information that might be exposed even if the web traffic itself is encrypted. “This is a good example of the kind of change that’s happening. Thinking about confidentiality for DNS was so off the table for the last few decades – the people running DNS were saying this is all public data.”

The DNS case highlights one of the key problems the IETF must wrestle with — encrypting traffic can make it harder to carry out certain network management operations that operators are accustomed to performing. Carriers would find it harder to do load balancing if all DNS activity were secured. As Arkko pointed out, end-to-end encryption would interfere with things like caching. These problems are not easily overcome.

“We have to have some real thought go into this and understand what the trade-offs are,” Arkko said. “That is largely the debate we are having now.”

Did NTIA just drive a stake through SOPA?

One interesting result of the NTIA’s proposal to relinquish U.S. control over the DNS could be to drive a final stake through anti-piracy proposals like SOPA that rely on manipulating DNS queries as an enforcement mechanism.