Security researchers Alex Rad and Juliano Rizzo claim to have discovered significant weaknesses in the Telegram secure messaging app, mainly to do with the “visual fingerprint” that correspondents must use to ensure the security of an end-to-end encrypted conversation.
Telegram chats are not end-to-end encrypted by default. When users want to set up a fully secure chat, they must compare these visual fingerprints — derived from the shared secret key for the conversation — to check that both see the same image, and therefore that the shared key has not been tampered with.
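The comparison step can be modeled as both parties hashing the negotiated secret and checking that the digests match. A minimal Python sketch follows; the hash function and truncation length here are illustrative assumptions, not Telegram's actual scheme, which renders the value as an image:

```python
from hashlib import sha256

def fingerprint(shared_key: bytes) -> str:
    # Both parties derive a short digest from the same shared secret;
    # an attacker who substituted the key ends up with a different digest.
    # (SHA-256 and the 16-hex-character truncation are illustrative choices.)
    return sha256(shared_key).hexdigest()[:16]

alice = fingerprint(b"negotiated secret")
bob = fingerprint(b"negotiated secret")
print(alice == bob)      # honest exchange: fingerprints match

mallory = fingerprint(b"substituted secret")
print(alice == mallory)  # tampered exchange: fingerprints differ
```

The security of the check rests entirely on the two users comparing the values over a channel the attacker cannot manipulate — which is exactly the assumption the researchers say breaks down in practice.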
The biggest problem they highlighted in a Friday blog post was a simple one: as users don't tend to be standing next to one another, the easiest thing for them to do is share screenshots of the fingerprint through the not-yet-properly-secret conversation itself — which a man-in-the-middle (MITM) attacker could simply “auto-replace.” Sharing the screenshots via MMS could also cause problems, due to the vulnerability of that channel.
Even if the users don't make such mistakes, the researchers argued, a very well-resourced “super villain” — one with tens of millions of dollars to spend, or a botnet or supercomputer under its control — might be able to spoof the visual fingerprint. However, Telegram responded on Twitter to say the researchers got their numbers wrong, and that the attack would be prohibitively expensive…
… and also argued that the researchers were wrong to say that social engineering could make calculating the fingerprint more manageable.
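The disagreement ultimately comes down to arithmetic: forging a fingerprint by brute force means hashing candidate keys until one produces the same n-bit value, which takes on the order of 2^n attempts. A back-of-the-envelope sketch in Python — the bit lengths and hash rate below are assumptions for illustration, not figures from either the researchers or Telegram:

```python
def expected_attempts(bits: int) -> int:
    # Matching an n-bit fingerprint by brute force takes ~2**n hash evaluations.
    return 2 ** bits

def years_to_forge(bits: int, hashes_per_second: float) -> float:
    # Convert the attempt count into wall-clock time at a given hashing rate.
    seconds = expected_attempts(bits) / hashes_per_second
    return seconds / (3600 * 24 * 365)

# A hypothetical attacker sustaining 10**12 hashes per second:
for bits in (64, 96, 128):
    print(bits, years_to_forge(bits, 1e12))
```

Because the cost doubles with every extra bit, a few dozen bits of effective fingerprint strength separate "feasible for a nation-state" from "absurd" — which is why the two sides' differing estimates lead to such different conclusions about the attack's practicality.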
Rizzo shot back:
Rad and Rizzo also criticized Telegram for using SMS as a user authentication mechanism, as “SMS can be sniffed and cracked, targets can be connected to false base stations, and carriers can be compromised.” This would obviously also affect those using MMS mechanisms to compare visual fingerprints.
The researchers called on Telegram to make all chats end-to-end encrypted by default, switch from per-chat authentication to proper public key cryptography (as used by the likes of OTR, Threema and TextSecure), and introduce a new user authentication scheme.
“Finally, to honor privacy, Telegram must enable communications decoupled from the requirement for address books and a phone number so that people can use Telegram anonymously, which is not currently possible,” they added.
Berlin-based Telegram sent me a statement in response to the blog post, noting in response to the “super villain” attack theory that — on top of the $1 trillion issue — “people usually contact support if a secret chat takes more than a few seconds to be created — and here it would have to take 30 days”. The statement continued:
In terms of comparing key visualizations, pretty much any way of remote identity verification (like sending screenshots) poses similar problems, including public keys suggested in the post. A secure independent channel is required — personal communication being, naturally, the only truly secure option.
As for the possible login SMS interception — it does not affect secret chats. For additional protection of cloud chats, we’ve been working for the last two months on introducing cloud passwords for users who are concerned about the safety of their SIM — that work is nearing conclusion.
On the whole, we’re glad that Telegram’s open structure, code and documentation makes it possible for researchers to contribute and suggest solutions. We’re grateful for each comment of this kind, regardless of whether it describes a realistic attack or not.
This article was updated at 5am PT to note that the insecure channel for sharing visual fingerprints would be MMS, not SMS, and again at 5.30am PT to note Telegram’s statement. It was also amended at 11.40pm PT to remove my erroneous assertion that TextSecure is known as Signal on iOS — the apps are made by the same people and the idea is for TextSecure-compatible messaging to be added to Signal, but for now Signal is only a secure voice app, equivalent to Redphone on Android.