Samaritans charity continues to claim suicide-tweet app is legal, but won’t say how

The Samaritans, the British suicide prevention charity that recently launched a highly controversial Twitter app that scans tweets for signs of severe depression, is not backing down in the face of heavy criticism.

The app, Samaritans Radar, alerts subscribers when people they follow post tweets containing phrases like “help me”, so that the subscriber can then reach out to that person if they agree the tweet is a sign of distress. Many people with experience of mental illness have reacted in horror, suggesting that the tool is a gift for trolls and stalkers, and likely to stop mental illness sufferers from using Twitter to just get stuff off their chest. It doesn’t help that the “depressed” tweeters themselves get no notification of an alert being sent out, nor that they would have to opt out to stop being monitored in this way — a measure that assumes they know the app exists.

In the latest in a series of updates on the matter, Samaritans policy and research chief Joe Ferns said on Tuesday that the charity had taken legal advice and continued to believe that Radar is “compliant with the relevant data protection legislation.”

Ferns wrote that the Samaritans believe they are not, under U.K. law, “the data controller or data processor of the information passing through the app,” and even if they were, “given that vital interests are at stake, exemptions from data protection law are likely to apply” – in other words, they are legally allowed to process that data without the subject’s consent.

I already laid out various people’s analyses of the app in terms of U.K. and EU data protection law in a previous post, so I won’t do that again. I will note, however, that according to the definitions provided by the U.K. Information Commissioner’s Office (ICO), the country’s privacy watchdog, it does very much seem that the Samaritans are playing the role of data controller:

Data controller means … a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed.

Scanning and analyzing tweets (personal data) based on keywords, then sending out alerts based on that analysis, fits that bill pretty well (Jon Baines has written a deeper analysis on this point). As for the “vital interests” exemption, that’s an arguable point when so many people see the app as running counter to those interests.

I was keen to find out more about the Samaritans’ legal advice, but a charity spokesperson told me via email:

As we are in discussions with the Information Commissioner’s Office, and continue to review the concerns raised, we are not able to give more details at this stage. We will respond to further enquiries over the coming days.

So I rang the ICO, where a spokesperson told me: “Whether they can release [details] or not is a decision for themselves.” So there doesn’t seem to be any good reason for the Samaritans not to discuss this matter more openly.

Which they really should. If you follow the #SamaritansRadar hashtag on Twitter, a lot of people, including Samaritans volunteers, are upset with the charity’s handling of the disquiet.

A petition against the app has garnered more than 1,000 signatures. Yes, there are arguments to be made in the app’s favor – check out this post by researcher Jonathan Scourfield, who was involved with the project and points out that “there are cases of young people who have taken their lives after repeated tweets stating their suicidal feelings, to which no-one apparently responded.”

But even so, I find it curious that the Samaritans have not at least seen fit to suspend Radar while its legality and ethics are debated and established – if indeed that proves possible. The charity usually does great work, but it’s really hurting its image among many of the people it’s trying to help.