Voices in AI – Episode 73: A Conversation with Konstantinos Karachalios


About this Episode

Episode 73 of Voices in AI features host Byron Reese and Konstantinos Karachalios discussing what it means to be human, how technology has changed us in the far and recent past, and how AI could shape our future. Konstantinos holds a PhD in Engineering and Physics from the University of Stuttgart and is the Managing Director of the IEEE Standards Association.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a PhD in Engineering and Physics from the University of Stuttgart. Welcome to the show.

Konstantinos Karachalios: Thank you for inviting me.

So we were just chatting before the show about ‘what does artificial intelligence mean to you?’ You asked me that, and it’s interesting, because that’s usually my first question: What is artificial intelligence, why is it artificial, and feel free to talk about what intelligence is.

Yes. First of all, we see really a kind of mega-wave around so-called artificial intelligence—it started two years ago. There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda—what are dreams, what are nightmares, and so on. I’m a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let’s say, ‘intelligent systems,’ which can be autonomous or not, and so on. Even this term is a compromise, because the big question is: ‘what is intelligence?’ Nobody knows what intelligence is, and the definitions vary very widely.

I myself try to understand what human intelligence is, at least, or what some expressions of human intelligence are, and I gave a certain answer to this question when I was invited to testify before the House of Lords. Just to make it brief: I’m not a supporter of the hype around artificial intelligence, and I don’t even support the term itself. I find it obfuscates more than it reveals, and it also takes away from human agency, so I think we need to re-frame this dialogue. So, I can make a critique of this, and I also have a certain proposal.

Well, start with your critique. If you think the term is either meaningless or bad, why? And what are you proposing as an alternative way of thinking?

Very briefly, because we could really talk for one or two hours about this: my critique is that the whole of this terminology is associated with a perception of humans and of our intelligence which is quite mechanical. That means there is a whole school of thinking—and it has many supporters—who believe that humans are just better data-processing machines.

Well, let’s explore that, because I think that is the crux of the issue. So you believe that humans are not machines?

Apparently not. It’s not only that we’re not machines—evidently we’re not machines, but we’re biological, and machines are perhaps mechanical, although now the boundary has blurred because of biological machines and so on.

You certainly know the thought experiment that says if you take what a neuron does and build an artificial one, and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn’t it have a mind, and wouldn’t it be intelligent? And isn’t that what the Human Brain Project in Europe is trying to do?

This is weird. All this you have said starts with a reductionist assumption about the human—that our brain is just a very good computer. It really ignores the sources of our intelligence, which are not all in our brain. Our intelligence has several other sources; we cannot reduce it to just the synapses and the neurons and so on—and of course, nobody can prove this one way or the other. I just want to make clear here that the reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.

And the problem is that people who support this believe it is scientific, and this I do not accept. It is really a religion, and a reductionist one, and it has consequences for how we treat humans—this is serious. So if we continue propagating a language which reduces humanity, it will have political and social consequences, and I think we should resist this. I think the best expression of this is an essay by Joichi Ito titled “Resisting Reduction.” I would really suggest that people read this essay, because it explains a lot that I’m not able to explain here for lack of time.

So you’re maintaining that if you adopt this, what you’re calling a “religious view,” a “reductionist view” of humanity, that in a way that can go to undermine human rights and the fact that there is something different about humans that is beyond purely humanistic.

For instance, I was at an AI conference of a UN organization, which brought together all the other UN organizations concerned with technology. It was two years ago, and there they were celebrating a humanoid which was pretending to be a human. The people were celebrating this, and somebody there asked the inventor of this thing: “What do you intend to do with this?” And this person spoke publicly for five minutes and could not answer the question, and then he said, “You know, I think we’re doing it because if we don’t do it, others are going to do it—it is better we are the first.”

I find this a very cynical approach, a very dangerous and nihilistic one. And these people with this mentality, we celebrate them as heroes. I think this is too much. We should stop doing this; we should resist this mentality and this ideology. I believe that if we make a machine a citizen, we treat our citizens like machines, and then we’re not going very far as humanity. I think this is a very dangerous path.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Voices in AI – Episode 65: A Conversation with Luciano Floridi


About this Episode

Episode 65 of Voices in AI features host Byron Reese and Luciano Floridi discussing ethics, information, AI and government monitoring. They also dig into Luciano’s book “The Fourth Revolution” and ponder how technology will disrupt the job market in the days to come. Luciano Floridi holds multiple degrees, including a PhD in philosophy and logic from the University of Warwick. He is currently a professor of philosophy and ethics of information at the University of Oxford, as well as the director of its Digital Ethics Lab. Along with his responsibilities as a professor, Luciano is also the chair of the Data Ethics Group at the Alan Turing Institute.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, I’m Byron Reese. Today our guest is Luciano Floridi. He is a professor of philosophy and ethics of information, and the director of the Digital Ethics Lab, at the University of Oxford. In addition to that, he is the chair of the Data Ethics Group at the Alan Turing Institute. Among multiple degrees, he holds a Doctor of Philosophy in philosophy and logic from the University of Warwick. Welcome to the show, Luciano.
Luciano Floridi: Thank you for having me over.
I’d like to start with a simple question which is: what is intelligence, and by extension, what is artificial intelligence?
Well, this is a great question, and I think one way of getting to a decent answer is to try to understand what the lack of intelligence is, so that you recognize intelligence by spotting when it isn’t around.
So, imagine you are, say, nailing something to the wall and all of a sudden you hit your finger. Well, that was stupid—that was a lack of intelligence; it would have been intelligent not to do that. Or imagine that you get all the way to the supermarket and you forgot your wallet, so you can’t buy anything. Well, that was also stupid; you would need intelligence to take your wallet. You can multiply that by, shall we say, a million cases, so there are a million cases in which you can be—or, just to be more personal, I can be—stupid, and therefore I can be intelligent the other way around.
So intelligence is a way of, shall we say, coping with the world in a way that is effective, successful, but it can also be so many other things. It is intelligent not to talk to your friend about the wrong topic, because that’s not the right day. It is intelligent to make sure that at that party you organize, you don’t invite Mary and Peter together, because they can’t stand each other.
The truth is that we don’t have a definition for intelligence or, vice versa, for the lack of it. But at this point I can recycle an old line from one of the judges on the Supreme Court—I’m sure everyone listening to or reading this knows it very well—who, when asked for a definition of pornography, said: “I don’t have one, but I recognize it when I see it.” I think that sounds about right—we know when we’re talking to someone intelligent on a particular topic, we know when we are doing something stupid in a particular circumstance, and I think that’s the best that we can do.
Now, let me just add one last point in case someone says, “Oh, well, isn’t it funny that we don’t have a definition for such a fundamental concept?” No, it isn’t. In fact, most of the fundamental concepts that we use, or experiences we have, don’t have a definition. Think about friendship, love, hate, politics, war, and so on. You start getting a sense of: okay, I know what we’re talking about, but this is not like water equals H2O; it’s not like a triangle is a plane figure with three sides and three angles. We’re not talking about simple objects that we can define in terms of necessary and sufficient conditions; we’re talking about having criteria to identify what it looks like to be intelligent, what it means to behave intelligently. So, if I really have to go out of my way and provide a definition: intelligence is nothing; everything is about behaving intelligently. Let’s use an adverb instead of a noun.
I’m fine with that. I completely agree that we do have all these words, like, “life” doesn’t have a consensus definition, and “death” doesn’t have a consensus definition and so forth, so I’m fine with leaving it in a gray area. That being said, I do think it’s fair to ask how big of a deal is it—is it a hard and difficult thing, there’s only a little bit of it, or is it everywhere? If your definition is about coping with the world, then plants are highly intelligent, right? They will grow towards light, they’ll extend their roots towards water, they really cope with the world quite well. And if plants are intelligent, you’re setting a really low bar, which is fine, but I just want to kind of think about it. You’re setting a really low bar, intelligence permeates everything around us.
That’s true. I mean, you can even say, well look the way the river goes from that point to that point, and reaches the sea through the shortest possible path, well, that looks intelligent. I mean, remember that there was a past when we thought that precisely because of this reason, and many others, plants were some kinds of gods, and the river was a kind of god, that it was intelligent, purposeful, meaningful, goal-oriented, sort of activity there, and not simply a good adaptation, some mechanism, cause and effect. So what I wanted to detach here, so to speak, is our perception of what it looks like, and what it actually is.
Suppose I go back home and I find that the dishes have been cleaned. Do I know whether the dishes were cleaned by the dishwasher or by, say, my friend Mary? Looking at the dishes, I cannot. They’re all clean, so the output looks pretty much the same, but of course the two processes were very different. One requires some intelligence on Mary’s side—otherwise she would break things, waste soap, and so on. The other is a simple dishwashing machine: zero intelligence, as far as I’m concerned, of the kind that we’ve been discussing—which goes back to the gray area, the pornography example, and so on.
I think what we can do here is say: look, we’re really not quite sure what intelligence means. It has a thousand different meanings we can apply to this and that—if you really want to be inclusive, even a river’s intelligence, why not? The truth is that when we talk about our intelligence, we have some kind of meter, a criterion to measure by, and we can say: “Look, this thing is intelligent, because had it been done by a human being, it would have required intelligence.” So we say, “Oh, that was a smart way of doing things,” because had that been left to a human being, well, I would have been forced to be pretty smart.
I mean, chess is a great example today—my iPhone is as idiotic as my grandmother’s fridge, with zero intelligence of the sort we’ve been discussing here, and yet it plays better chess than almost anyone I can possibly imagine. Meaning? Meaning that we have managed to detach the ability to pursue a particular goal, and to be successful in implementing a process, from the need to be intelligent. It doesn’t have to be intelligent to be successful.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
 
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

The ROI of AI: How Intelligent Technology Transforms Enterprise Surveillance in the Innovation Age

For the world’s largest and most regulated organizations, understanding each employee’s day-to-day activities and behaviors is simply impossible and, oftentimes, unnecessary. But uncovering internal issues – such as operational inefficiencies or even criminal activities – that could result in wasted time, lost money or damaged reputations is critical. Therefore, it is important that businesses invest in tools to effectively identify and help correct these problems, and in turn drive sizable ROI.
Artificial intelligence is one technology that many enterprises have implemented to solve these internal challenges. In fact, IDC forecasts worldwide spending on cognitive and artificial intelligence (AI) systems to reach $57.6 billion in 2021. But, which AI applications are actually helping enterprises, and how are their investments driving returns?
Forget Big Brother: How AI Surveillance Helps Enterprises “Know Your Employee”
There’s a general consensus among consumers that AI technology will create an Orwellian world, but the fact of the matter is that, from a surveillance standpoint, this technology has the potential to do a lot of good for the modern-day enterprise as well as its workforce.
Oftentimes, enterprises leave employee communications untapped. There simply aren’t enough hours in the day to monitor every message someone sends via email or business chat – nor is this a necessary practice as the majority of communications are benign. But within every e-communication lies unique insights that could lead businesses to uncover some harsh truths about employee activity.
One industry that has quickly adopted AI technology for this exact reason is financial services. Today, smart machines capable of understanding the true meaning behind human communications are augmenting the work of human analysts at most of the world’s major investment banks. The technology extracts messages that indicate misconduct with unparalleled precision, while entity resolution and knowledge mapping help analysts identify sources of human risk and hidden networks of collusion. Among the compliance organizations of leading investment banks, widespread adoption of AI-enabled analytics has taken place in less than three years; it’s no exaggeration to say that, for these organizations, regulatory compliance would be impossible without the amplifying effects of AI.
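To make the shape of such a pipeline concrete, here is a minimal sketch in Python: score each message for risk, then group alerts by sender so analysts can spot clusters of risky behavior. The keyword scorer and every name in it are illustrative placeholders standing in for the trained language models a real vendor would use; none of this is Digital Reasoning’s actual system.

```python
# Minimal illustrative sketch of a communications-surveillance pipeline:
# score messages for risk, then group alerts by sender. The keyword list
# is a stand-in for a trained NLP model; all names here are hypothetical.
from collections import defaultdict

RISK_TERMS = {"off the books", "delete this", "keep this between us"}

def risk_score(text: str) -> float:
    """Fraction of known risk phrases appearing in the message."""
    lowered = text.lower()
    return sum(term in lowered for term in RISK_TERMS) / len(RISK_TERMS)

def surveil(messages):
    """messages: iterable of (sender, recipient, text) tuples.
    Returns {sender: [(recipient, score, text), ...]} for flagged messages."""
    alerts = defaultdict(list)
    for sender, recipient, text in messages:
        score = risk_score(text)
        if score > 0:
            alerts[sender].append((recipient, score, text))
    return alerts

emails = [
    ("alice", "bob", "Lunch on Thursday?"),
    ("carol", "dan", "Keep this between us -- move it off the books."),
]
for sender, hits in surveil(emails).items():
    print(sender, "->", hits)
```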

The innovations happening in financial services herald a paradigm shift in enterprise surveillance that will inevitably encompass other sectors. Gartner estimates that up to 80% of enterprise data is unstructured and the majority of this is made up of communications such as emails, chat messages, phone calls, and other documents. AI has the distinct ability to make sense of this information for the betterment of a business. Whereas banks were effectively forced to analyze this data due to ever-changing regulations, the clear results and benefits of AI technology translate nicely into use cases for businesses across industries – uncovering insights to unlock new business opportunities while hastening the resolution of problems.
Imagine an airline being able to consolidate insights captured in tweets, at a call center, or in emails into a heatmap of problems and opportunities. It could quickly see issues with the quality of meals coming from a particular supplier. It could see trends in requests for unserved destinations and gain early insight into the likely popularity of a new route. Anything from confusion about security procedures to praise for great service or ignorance of company policies would be surfaced. Such an airline would have turned the inputs of its employees and customers into a valuable asset for management.
The Real Value of AI: Both Financial and Operational
So what’s the measurable impact of AI for surveillance? AI bridges the gap between the subject-matter expert and the software by transferring that expert knowledge into the software system in the quickest way. It further reduces the need for “human glue” by providing an easy-to-learn system backed by the large computational power of today’s machines. AI-enabled software thus delivers both effectiveness (covering more hits, or true positives) and efficiency (reducing false positives and time to learn), realizing ROI in the quickest way. Independent assessments have shown that an AI-enabled approach to communication analytics delivers a marked improvement in results, bringing false-positive alerts down by at least half, even as monitoring of employees expands from 5% to 100% coverage.
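It is worth unpacking that last claim with a toy calculation; the daily message volume and the legacy alert rate below are invented purely for illustration. If coverage expands twentyfold while the total number of false-positive alerts is halved, the per-message false-positive rate has to fall by roughly a factor of 40.

```python
# Toy numbers, invented for illustration: what halving total false-positive
# alerts while expanding coverage from 5% to 100% implies about accuracy.
messages = 1_000_000                        # hypothetical messages per day
legacy_alerts = messages * 0.05 * 0.01      # 5% coverage, assumed 1% FP rate
ai_alerts = legacy_alerts / 2               # claim: total alerts at least halved
ai_fp_rate = ai_alerts / messages           # now at 100% coverage

print(f"legacy: {legacy_alerts:.0f} false alerts/day")
print(f"AI:     {ai_alerts:.0f} false alerts/day at full coverage")
print(f"implied per-message accuracy gain: {0.01 / ai_fp_rate:.0f}x")
```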
The good news for enterprises looking to get started on their AI journeys is that all the research is out there. Instead of commissioning AI experiments, today’s enterprise leaders can use the lessons learned in financial services to invest in solutions with proven results and a quantifiable ROI. Today, any enterprise can augment its processes and amplify employee productivity by using AI to turn the 80% of data that is currently overlooked into actionable insights.
AI is entering the enterprise at a fast rate – and with more reputable brand names adopting the technology and new applications being announced, the potential business impacts are becoming all the more real and concrete. The experience of financial services is instructive. We know that more productive, profitable, and better-behaved enterprises produce a dividend for society as well as shareholders. And as businesses compete to win customers and attract the best staff, those with an effective enterprise surveillance capability will be best placed to outperform their peers.
by Uday Kamath, Chief Analytics Officer at Digital Reasoning

Uday has spent more than two decades developing analytics products and combines this experience with learning in statistics, optimization, machine learning, bioinformatics, and biochemistry. Senior roles, including that of Chief Data Scientist for BAE Systems Applied Intelligence, have seen him apply analytics to challenges in compliance, cybersecurity, banking fraud, anti-money laundering, and insurance. Uday has contributed to many journals, conferences, and books, is the author of Mastering Java Machine Learning, and has a Ph.D. in Big Data Machine Learning and Automated Feature Generation. He likes to volunteer, teach math, and is an avowed foodie – balancing his enthusiasm for cooking with long distance running. When he has the time, he indulges his passions for poetry and Indian classical music.

Decentralized webmail outfit Mailpile scraps beta program for now

When I wrote about the decentralization movement a year back, one of the big pro-privacy hopes was Mailpile, which is ambitiously trying to build a user-friendly yet rock-solid encrypted webmail system with a hybrid desktop/in-browser approach. On Friday, Mailpile’s Bjarni Rúnar Einarsson announced that the Mailpile beta program was being scrapped, saying feedback had led the team to go back to the drawing board. One key issue was, unsurprisingly, making “all that crypto stuff completely seamless.” Iceland-based Einarsson is taking a break to get married (mazel tov) and, with the back-end providing most of the problems, front-end designer Brennan Novak has “moved on to other things for now.” Here’s hoping Mailpile gets back on track when development resumes next month.

Windows users are also vulnerable to FREAK snooping attacks

The “FREAK” vulnerability that downgrades and weakens secure web connections doesn’t just affect Google and Apple users — according to a security advisory from Microsoft, all supported versions of Windows are vulnerable too.

FREAK (Factoring RSA Export Keys) is a recently discovered hangover from the early ’90s, when the U.S. government banned the export of most software that used strong encryption. The SSL web security protocol was for that reason built with a special mode that uses key lengths considered weak today. The law was changed but the weak cipher suites remain, and although most modern browsers are supposed to avoid them like the plague, a widespread bug means they don’t always do so.

The FREAK flaw allows “man-in-the-middle” snoopers to downgrade a session’s security to that mode – as long as the browser is vulnerable and the server accepts those weak old cipher suites — then crack the keys and spy away.
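To see the server-side half of that equation, here is a minimal sketch in Python of checking whether a server will still negotiate an export-grade cipher suite. This is not the researchers’ test tool, and it assumes an OpenSSL build that still ships the old EXPORT suites; on current builds, set_ciphers("EXPORT") itself raises an error because those suites have been removed outright.

```python
# Minimal sketch: offer a server only export-grade cipher suites and see
# whether it accepts. Assumes an OpenSSL build that still includes EXPORT
# ciphers; modern builds have dropped them, and set_ciphers() will fail.
import socket
import ssl

def accepts_export_ciphers(host: str, port: int = 443) -> bool:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # we only care about cipher negotiation
    ctx.verify_mode = ssl.CERT_NONE
    ctx.set_ciphers("EXPORT")         # offer nothing but the weak '90s suites
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # Handshake succeeded: the server agreed to a weak suite.
                print(host, "negotiated", tls.cipher())
                return True
    except (ssl.SSLError, OSError):
        return False                  # server (or our own stack) refused

if __name__ == "__main__":
    print(accepts_export_ciphers("example.com"))
```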

When the flaw was publicized earlier this week, it was Apple’s Safari browser and the stock Android browser that were on the firing line for being vulnerable, endangering those users who communicate with servers that accept “export-grade” encryption – apparently a whopping third of servers with browser-trusted certificates. But it turns out the list of affected browsers and systems is way longer than that.

The big one is Windows. In pretty much every version of Windows that’s out there, Internet Explorer and whatever else uses the Schannel security package are vulnerable to the FREAK attack.

In its advisory, Microsoft said:

We are actively working with partners in our Microsoft Active Protections Program (MAPP) to provide information that they can use to provide broader protections to customers.

Upon completion of this investigation, Microsoft will take the appropriate action to help protect customers. This may include providing a security update through our monthly release process or providing an out-of-cycle security update, depending on customer needs.

Per the researchers who brought this all to our attention, here’s the current list of browsers that need patching:

  • Internet Explorer
  • Chrome on OS X (patch available)
  • Chrome on Android
  • Safari on OS X (patch expected next week)
  • Safari on iOS (patch expected next week)
  • Stock Android browser
  • BlackBerry browser
  • Opera on OS X
  • Opera on Linux

As a Firefox user, I’m feeling slightly smug this week — the researchers’ FREAK test tool just gave my browser a clean bill of health, and told me my never-used IE installation is vulnerable. Not too smug though, given the impact on other Windows software.

Good thing the anti-strong-encryption nonsense that caused this mess is a relic of past decades, eh? Oh wait…

Signal secure comms app for iPhone gains TextSecure compatibility

Open Whisper Systems has released version 2 of its Signal secure calling app for iPhone. This is an important iteration, as it introduces secure text messaging that’s compatible with the outfit’s TextSecure app for Android — for now, Open Whisper Systems’ secure voice app for Android, RedPhone, remains separate from that, though everything will come together later this year in a Signal app that works across iOS, Android and the desktop. As secure communications operations go, Open Whisper Systems has good credibility, offering end-to-end crypto, auditable open-source code and decent identity verification. The TextSecure protocol has also found its way into WhatsApp, which is why Android-toting users of that Facebook-owned messaging app enjoy extra security these days.

In a data coup, Apical analyzes visual data without the video

Apical, a company best known for its years of work contributing imaging tech to the camera lenses inside smartphones and security cameras, has now devised a computer vision program for the smart home and business. The company calls its innovations Spirit and ART (short for Apical Residential Technology), and together they are probably the most disruptive things I’ve seen in terms of deriving context inside the home and processing data for the internet of things.

The reason is that Spirit and ART don’t use video. They take visual data as seen by camera lenses, but don’t turn it into video for human eyes. Instead, they process the visual data into computer-usable avatars that represent the people in a home. By doing this on the device that contains the camera lens and the Spirit processor running the ART software, the system can open up several new features in the smart home without freaking people out that videos of their naked bits will somehow end up on the internet.

Because it only transmits the information the computer needs to identify a person and their gesture, as opposed to all of the background pixels and filler, the system also saves bandwidth by reducing the data load associated with video files. Plus, it’s much faster for a computer to parse machine-readable data than visual data that’s fit for human consumption, like most camera footage. Thus, the system can react much more quickly to movements that people make in the home. For example, Apical CEO Michael Tusch says that the ART software can discern people from pets, adults from children, and even people who live in a home from strangers.

The ART software is made possible by the Spirit technology, which is implemented in silicon on a sensor in the home. That silicon is capable of running the object-recognition and machine-learning algorithms that researchers are using today to help computers learn to “see.” Because it does this on-chip, it can compress large incoming data streams from 5 gigabits per second down to a few kilobits per second, depending on the output required.
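A quick back-of-the-envelope comparison shows why a reduction of that magnitude is plausible. The frame size and the avatar event format below are my own illustration, not Apical’s actual figures or data format:

```python
# Toy comparison (illustrative numbers, not Apical's formats): streaming raw
# 1080p video versus streaming one compact per-person "avatar" event per frame.
import json

FPS = 30
RAW_BITS_PER_FRAME = 1920 * 1080 * 24   # uncompressed 1080p, 24-bit color

# Hypothetical avatar event: identity class, bounding box, gesture label.
event = {"person": "adult_resident", "box": [412, 96, 655, 840], "gesture": "wave"}
EVENT_BITS_PER_FRAME = len(json.dumps(event).encode()) * 8

raw_bps = RAW_BITS_PER_FRAME * FPS       # ~1.5 Gbit/s
event_bps = EVENT_BITS_PER_FRAME * FPS   # ~20 kbit/s

print(f"raw video:     {raw_bps / 1e9:.2f} Gbit/s")
print(f"avatar events: {event_bps / 1e3:.1f} kbit/s")
print(f"reduction:     ~{raw_bps // event_bps:,}x")
```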

More use cases

This means the system could help solve the tedious problem of determining if people in the home have really gone away or not. For example, my Nest tries to tell various devices in my house if I’m away, but it sometimes figures that out based on whether or not I’ve walked near the Nest thermostat in my upstairs hallway anytime recently. That’s not always an accurate measure of who’s home. Pinning my away status on my or my husband’s handset is equally problematic because when we leave the house and our daughter is home with a sitter, we’ll get a call telling us that the lights are suddenly off and the alarm has turned on.

In addition to being able to accurately track how many people are home without taking video of the home, the Spirit ART system could offer some other compelling use cases. For example, it could let you know if strangers are at the door, or notice if one of the avatars representing a person inside your home suddenly moved in an unusual fashion, which might indicate a fall. That would be invaluable if you are monitoring an elderly person but you don’t really want to spy on their every move.

Of course, machine data can be just as telling as visual data, which means that the cloak of privacy that the system provides is mostly just a guarantee that your actual naked pictures don’t end up online. If your employer installed a system in the office, it’s just as effective at monitoring your comings and goings as a traditional camera, and maybe even more so, since it can parse data far more effectively than a security guard watching several screens of video feeds day in and day out. Computers don’t rest their eyes or take smoke breaks.

Looking ahead

That also brings up the other really disruptive aspect of this system. Right now, much of the research around computer vision focuses on teaching computers to see like humans do—taking videos and pictures of cats and teaching computers what features constitute a cat. This approach is different: it takes the visual world that computers see and tells the computers how to act when they see items matching certain patterns. If this approach scales, it could solve a problem that plagues the internet of things and the modern surveillance society in general.

We have far more video being created than we could ever watch or even use, and as we bring cities online and bring things like self-driving cars into the picture (ha ha), we’re adding to the flow of visual information in ways that computers can’t process fast enough. In fact, Bernie Meyerson, a vice president of innovation and IBM Fellow, complained about this very issue to me a few years ago on a podcast, when discussing the internet of things and smarter cities.

One of the big challenges he foresaw with the data created by cameras around cities like London was that people cannot parse all of that visual information, and neither can computers. But with a system like Apical’s, if it scales, cities could eliminate some of the video and have a system that computers can read. Ironically, you also create a system people are far more likely to regard as less privacy-invading, while creating one that can actually parse far more data, far more quickly. Perhaps we can hope that it would be used more for predicting crowd traffic flows, catching actual criminals and other social goods, as opposed to casual surveillance.

For now, Apical is marketing this technology for the smart home and hopes to license it to companies that would implement it in their own devices, much as big-name firms such as Samsung and Polycom already license imaging technology from Apical for their products. When it comes to creating compelling user interfaces that rely on more contextually aware computers in the home, this technology is a big winner, both in the granularity it can provide and the privacy it offers. However, like all technology, it is a tool that can be used for good or for some really invasive and scary stuff, absent rules to prevent its abuse.

If you’re interested in learning how deep learning works, why it’s such a hot area right now and how it’s being applied commercially, think about attending our Structure Data conference, which takes place March 18 and 19 in New York. Speakers include deep learning and machine learning experts from Facebook, Yahoo, Microsoft, Spotify, Hampton Creek, Stanford and NASA, as well as startups Blue River Technology, Enlitic, MetaMind and TeraDeep.

Update: This story was updated March 5 to clarify that ARM is not an Apical licensee. It is a technology partner.

Proposed Chinese security law could mean tough rules for tech companies

China apparently wants to one-up the U.S. and the U.K. when it comes to urging technology companies to install security backdoors and break their encrypted documents and user communications in the name of national security.

Reuters reported on Friday that a newly proposed Chinese counterterrorism law calls for technology companies to turn over encryption keys to the Chinese government, allow for ways to bypass security mechanisms in their products, require companies to store user data and maintain servers in China, and remove any content that the country deems supportive of terrorists.

China is expected to adopt the draft legislation in the “coming weeks or months,” according to the report. The proposed law follows a set of banking security rules, adopted by the Chinese government in late 2014, that require companies selling software and hardware to Chinese financial institutions to place security backdoors in their products, hand over source code and comply with audits.

The Reuters report cited several anonymous executives of U.S. technology companies who said they are more worried about this newly proposed law than the banking rules because of the connection to national security. Supposedly, the laws are worded in a way that is open to interpretation, especially in regard to complying with Chinese law enforcement, which has some executives fearful of “steep penalties or jail time for non-compliance.”

The newly proposed law follows recent news that China has been peeved by U.S. intelligence-gathering operations revealed in the leaked Edward Snowden NSA documents, and by allegations from the U.S. government that members of China’s People’s Liberation Army used cyber-espionage tactics to steal business trade secrets. China doesn’t take those allegations kindly; instead, the country claims that products sold in China by U.S. technology companies pose security concerns.

If there’s one thing China, the U.S. and the U.K. can all agree upon, however, it’s that companies should not be using encryption technology to mask user communications. If companies do use the security technology, governments want them to hand over their encryption keys in case law enforcement or government investigations warrant it.

Attorney General Eric Holder and FBI Director James Comey have made public their displeasure with how encryption supposedly makes it easier to hide the activities of criminals. However, a recently leaked document from the Edward Snowden NSA data dump showed that some U.S. officials believe encryption is the “[b]est defense to protect data.”

Spyware firm Gamma failed on human rights, says OECD

The Organisation for Economic Co-operation and Development has for the first time found a surveillance software company to be in violation of human rights guidelines, following a complaint about the notorious British-German spyware outfit Gamma International.

Gamma allegedly sold its FinFisher spyware tool to the Bahraini regime, a big-time human rights abuser that seems to have used the software to persecute activists. The complaint to the OECD’s U.K. national contact point (NCP, an agency operated by the British government) was made by the rights group Privacy International, which has also made a criminal complaint about Gamma, along with Reporters Without Borders and other groups.

However, the OECD’s guidelines for businesses are voluntary, so apart from calling out the fairly shameless Gamma, not much can come directly from this particular decision. In addition, the Gamma Group is these days operating out of Munich rather than the U.K.

The case involved three Bahraini dissidents, two of whom were living outside the country at the time they were apparently targeted using FinFisher. Gamma refused throughout the investigation to confirm whether it supplied the tool to Bahrain’s government, but the evidence indicated pretty clearly that the activists were targeted with Gamma’s product, and it was reasonable to assume it was the Bahrainis behind it.

The big problem, from the OECD’s point of view, is that Gamma doesn’t have human rights policies and due-diligence processes to stop its products from being used in abusive ways, and that it was uncooperative during the investigation:

The UK NCP has concluded that Gamma International UK Limited has not acted consistently with provisions of the OECD Guidelines requiring enterprises to do appropriate due diligence… to encourage business partners to observe Guidelines standards… to have a policy commitment to respect human rights… and to provide for or co-operate through processes to remediate human rights impacts…

Through its legal representative, the company has raised obstacles to the complaint’s progress, whilst failing to provide information that would help the NCP make a prompt and fair assessment of these. The NCP considers that this does not have the appearance or practical effect of acting in good faith and respecting the NCP process.

In a statement, the complainants said they were disappointed that the NCP had not taken “a more pro-active investigatory role” that would have confirmed that Gamma really did sell FinFisher to Bahrain – this would have allowed more strenuous condemnation of the company, they said.

Still, they seemed reasonably happy with the general precedent. Privacy International deputy director Eric King said:

Today’s judgement is a watershed moment recognising that surveillance companies such as Gamma cannot shirk their human rights obligations. This decision reaffirms that supplying sophisticated intrusive surveillance tools to the world’s most repressive regimes is not only irresponsible business conduct, but violates corporate human rights obligations, and the companies that engage in such behaviour must bear the responsibility for how their products are ultimately used.

Gemalto downplays impact of NSA and GCHQ hacks on its SIM cards

Dutch digital security firm Gemalto, which is the world’s biggest manufacturer of SIM cards, has reported back on internal investigations triggered by last week’s revelations about the NSA and GCHQ hacking into its systems and stealing encryption keys that are supposed to protect phone users’ communications.

On Wednesday Gemalto said it reckoned a series of intrusions into its systems in 2010 and 2011 could have matched up with the attacks described in documents leaked by Edward Snowden and published by The Intercept. However, it downplayed the impact of the attacks on its systems and SIM encryption key transfer mechanisms, hinting that the methods described in the documents were more likely to have affected its rivals.

For a start, Gemalto said these attacks, which involved the “cyberstalking” of some of its employees in order to penetrate its systems, only affected its office networks:

The SIM encryption keys and other customer data in general, are not stored on these networks. It is important to understand that our network architecture is designed like a cross between an onion and an orange; it has multiple layers and segments which help to cluster and isolate data…

It is extremely difficult to remotely attack a large number of SIM cards on an individual basis. This fact, combined with the complex architecture of our networks explains why the intelligence services instead, chose to target the data as it was transmitted between suppliers and mobile operators as explained in the documents.

Regarding that method of targeting encryption keys in transit, Gemalto said it had put in place “highly secure exchange processes” before 2010, which explained why the documents noted how the NSA and GCHQ failed to steal the keys for certain Pakistani networks.

The company said that at the time “these data transmission methods were not universally used and certain operators and suppliers had opted not to use them,” though Gemalto itself used them as standard practice, barring “exceptional circumstances.” In other words, Gemalto does it right (most of the time) while other suppliers may not have been so cautious.
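For readers wondering what a “highly secure exchange process” means in practice, here is a minimal sketch in Python of one standard approach: sealing a batch of SIM keys under a pre-shared transport key before it travels from manufacturer to operator, so that a copy intercepted in transit is useless. This is a generic illustration built on the third-party cryptography package, not Gemalto’s actual process, and the ICCID and Ki values are made up.

```python
# Generic sketch, not Gemalto's process: AES-GCM-encrypt a batch of SIM keys
# under a pre-shared transport key before sending it to the mobile operator.
# Requires the third-party 'cryptography' package (pip install cryptography).
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

transport_key = AESGCM.generate_key(bit_length=256)  # agreed out of band

def seal_batch(sim_keys: dict) -> bytes:
    """Encrypt an {ICCID: Ki} batch; the random 12-byte nonce is prepended."""
    nonce = os.urandom(12)
    plaintext = json.dumps(sim_keys).encode()
    return nonce + AESGCM(transport_key).encrypt(nonce, plaintext, None)

def open_batch(blob: bytes) -> dict:
    """Decrypt a sealed batch; raises if the blob was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(AESGCM(transport_key).decrypt(nonce, ciphertext, None))

batch = {"8944000000000000001": "0123456789abcdef0123456789abcdef"}  # made up
assert open_batch(seal_batch(batch)) == batch
```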

Gemalto, whose stock price was whacked by last week’s revelations, also said that the attacks could only have affected 2G SIM cards, due to enhanced security measures introduced in 3G and 4G versions. “Gemalto will continue to monitor its networks and improve its processes,” it added. “We do not plan to communicate further on this matter unless a significant development occurs.”

On Tuesday, another SIM card vendor, Germany’s Giesecke & Devrient (G&D), said last week’s report had prompted it to “introduce additional measures to review the established security processes together with our customers.”