Four Questions For: Jean-Philippe Aumasson

Long term, who wins: the cryptographers or the code breakers?
Nobody breaks codes anymore, strictly speaking. When you hear about broken crypto, it’s usually about bugs in the implementation or about the use of insecure algorithms. For example, the DROWN attack that just won the Pwnie Award for Best Cryptographic Attack at Black Hat USA exploits weaknesses in: 1) a protocol already known to be shaky, and 2) an algorithm already known to be insecure. So we’ve got unbreakable crypto; we just need to learn how to use it.
What innovations in cybersecurity should companies implement today?
The hot topic in my field is end-to-end encryption, or encryption all the way from the sender’s device to the recipient’s device, which makes it the strongest form of encryption. WhatsApp and Facebook recently integrated end-to-end encryption into their messaging platforms for the benefit of their users’ privacy. Enterprise encryption software lags behind, however, with solutions that often expose the unencrypted data to an intermediate server. That can be acceptable for compliance or controllability reasons, but otherwise you should make sure that you use end-to-end encryption to protect sensitive information, such as VoIP phone calls (telecommunication standards, including the latest LTE, are not end-to-end encrypted).
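For readers who want to see what the end-to-end model looks like in practice, here is a minimal sketch in Python using the PyNaCl library; the library choice and the key names are assumptions for illustration, not a reference to any product mentioned above. The point is that the plaintext is only ever visible on the two endpoint devices, while a relaying server handles nothing but ciphertext.

    # Minimal end-to-end encryption sketch (assumed example, using PyNaCl).
    from nacl.public import PrivateKey, Box

    # Each device generates its own keypair; private keys never leave the device.
    sender_key = PrivateKey.generate()
    recipient_key = PrivateKey.generate()

    # The sender encrypts with its private key and the recipient's public key.
    ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at noon")

    # Any intermediate server only ever sees `ciphertext`.
    plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"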
What are the implications of mobile technology and wearables in personal security?
Companies creating those products often neglect security and privacy concerns to save cost (or through ignorance), while security experts tend to exaggerate these concerns. We’ll have to find a middle ground between the needs and expectations of users on one hand and regulation on the other. Meanwhile, the lack of security in IoT systems creates great opportunities for conference talks and marketing FUD.
In the Internet of things, is everything hackable, and if so, will someone hack all the pacemakers some day and turn them off?
The “everything is hackable” mantra is actually less scary than it sounds. Literally everything is hackable, from your refrigerator’s microcontroller to your mobile phone, as long as you put enough effort into it. One shouldn’t think in terms of mere possibility but instead in terms of risk and economic interest: if I spend X days and Y dollars to hack a pacemaker, will my profit be worth the X-day and $Y investment? A secure pacemaker is obviously better than an insecure one, but the scenario you describe is unlikely to happen; it would just make a great movie plot.
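Aumasson’s cost-benefit framing can be written down as simple arithmetic. The numbers below are invented purely to illustrate the X-day, $Y reasoning:

    # Toy attacker-economics calculation; all figures are made up.
    days_of_effort = 30        # X days of specialist time
    cost_per_day = 1_500       # dollars per day of that time
    tooling_cost = 20_000      # Y dollars for equipment and exploits
    expected_payoff = 25_000   # what the attacker hopes to gain

    total_cost = days_of_effort * cost_per_day + tooling_cost
    print(f"cost ${total_cost:,} vs. expected payoff ${expected_payoff:,}")
    print("attack is economically attractive" if expected_payoff > total_cost
          else "attack is not worth the investment")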
Jean-Philippe Aumasson
Jean-Philippe (JP) Aumasson is Principal Cryptographer at Kudelski Security, and holds a PhD in applied cryptography from EPFL, Switzerland. He has spoken at top-tier information security conferences such as Black Hat, DEFCON, and RSA about applications of cryptography and quantum technologies. He designed the popular cryptographic algorithms BLAKE2 and SipHash, and organized the Password Hashing Competition project. He wrote the 2015 book “The Hash Function BLAKE”, and is currently writing a book on modern cryptography for a general audience. JP tweets as @veorq.

Four Questions For: Tod Beardsley

Why do you believe it is important to have open source security software? Wouldn’t that make it easier for hackers to crack the code?
Yes, and this is a good thing! Open source is especially important for core security functions precisely because everyone can take a look at how the security is actually implemented. Hackers, researchers, academics, tinkerers — when everyone can see how security works, everyone wins. People can learn from both good implementations and bad, vulnerabilities can be discovered and disclosed before and while bad actors are exploiting them, and ultimately, open source can help promote a clear, concise, maintainable code base.
What are some easy security protections for companies to implement, especially companies that have never dipped their toes in any kind of security investment?
Companies that are new to the software distribution game should look to assembling, rather than inventing, their own software. Using standard libraries and frameworks can solve many “old” and “easy” computer security problems before they come up. While there are occasional cross-library vulnerabilities, the path of writing one’s own software from scratch opens up a Pandora’s box of unsanitized input and buffer overflows. Modern application frameworks tend to do a pretty good job of helping developers avoid 99 out of 100 “gotchas” in secure design.
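As one concrete, hedged illustration of leaning on a standard library rather than hand-rolling input handling, the sketch below uses Python’s built-in sqlite3 module with placeholder table and variable names; the parameterized query treats hostile input as data instead of letting it rewrite the SQL.

    # Sketch: let the standard library (sqlite3) handle untrusted input.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_supplied = "alice' OR '1'='1"   # hostile input from a form field

    # Risky: hand-assembled SQL lets the input rewrite the query.
    #   query = f"SELECT email FROM users WHERE name = '{user_supplied}'"

    # Safer: the library binds the value as data, not as SQL.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_supplied,)
    ).fetchall()
    print(rows)   # [] -- the hostile string matches no real user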
With ransomware crime on the rise, how can everyday citizens protect themselves against being “held hostage?”
The security industry, as well as the broader IT industry, has been advocating reliable backups for decades in the context of sudden and unpredictable disaster. A silver lining to the ransomware threat is that it promotes the idea of backups in the face of malicious, rather than merely accidental, disaster. My hope is that ransomware is the emotional kick that people need to actually take backups and distributed data storage seriously.
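What a “reliable backup” means will vary by organization, but even a minimal, assumed setup like the sketch below (timestamped copies of a directory onto a separate drive; all paths are placeholders) goes a long way, provided some copies live where ransomware on the workstation cannot overwrite them and restores are actually tested.

    # Minimal versioned-backup sketch; paths are placeholders.
    import shutil
    from datetime import datetime
    from pathlib import Path

    source = Path.home() / "Documents"           # data worth protecting
    backup_root = Path("/mnt/backup_drive")      # a separate device or mount
    snapshot = backup_root / datetime.now().strftime("docs-%Y%m%d-%H%M%S")

    shutil.copytree(source, snapshot)            # each run keeps a new snapshot
    print(f"snapshot written to {snapshot}")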
What do you predict will be the next major issues in cybersecurity? What industries or devices are particularly vulnerable?
Distributed, malicious computing using a network of popular but insecure IoT devices seems practically inevitable, particularly given the massive install base of small office/home office (SOHO) routers. The problem with a router-hosted botnet is that these devices often don’t have a reasonable patch pipeline, so such infections can last a long time, potentially much longer than standard desktop and server malware.
We saw a hint of this in the Internet Census of 2012, when an unknown person temporarily took control of hundreds of thousands of insecure home routers to conduct mass portscanning. While the resulting Carna botnet seems to have been short-lived, it’s only a matter of time before this large installed base of ready-to-pwn devices gets marshaled into malicious computing again.
Tod Beardsley
Tod Beardsley is the Principal Security Research Manager at Rapid7. He has over 20 years of hands-on security knowledge and experience, reaching back to the halcyon days of 2400 baud textfile BBSes and in-band telephony switching. Since then, he has held IT ops and IT security positions in large footprint organizations such as 3Com, Dell and Westinghouse, as both an offensive and defensive practitioner. Today, Beardsley often speaks at security and developer conferences on open source security software development, managing the human “Layer 8” component of security and software, and reasonable vulnerability disclosure handling. He can be contacted via the many addresses listed at https://keybase.io/todb.

Four Questions For: Ben Rothke

What do you consider to be the biggest challenges facing cybersecurity today?
Some of the biggest challenges: There is not enough information security staff, which is compounded in part by firms being unwilling to pay information security professionals market rates.
Solutions are being rolled out before adequate security review. Think IoT.
And the complexity of systems, combined with the interconnectivity of many systems, leads to myriad avenues for attack. Remember, an attacker only has to find one opening; the owner of the system has to protect every opening.
Will hackers eventually shut down hospitals, break into our medical devices and inflict physical harm on people?
Eventually? Actually, this is old news. In the last few months, Hollywood Presbyterian Medical Center in Los Angeles paid $17,000 in bitcoin to ransomware hackers, MedStar Health reported malware had caused a shutdown of some systems at its hospitals in Baltimore, and Methodist Hospital and Prime Healthcare both suffered phishing-based ransomware attacks. There are many reasons why hospitals are perfect targets for ransomware and other types of attacks. Hospitals have long built applications with an emphasis on speed and availability, as opposed to security. That makes sense, as an emergency room physician shouldn’t have to search for their SecurID token to use the defibrillator. The downside is that the easy-access approach to defibrillators often translates into easy access to master patient databases. For a large medical center, that means millions of records are at risk due to lax information security controls.
Balancing ease of use and strong security controls is a challenge, but acutely so in the medical field.
As for medical devices, some manufacturers assumed their information security people were as capable as their pharmaceutical engineers. The reality at times was otherwise, and medical devices were produced without effective security controls.
The following horror story is not atypical: when I was at British Telecom Professional Services, we had proposed a large project to assist a cardiac device manufacturer with their product. Bruce Schneier was with BT at the time and was on a speaking tour of Europe. We arranged for Bruce to stop there and give them an hour-long briefing on the importance of medical device security. They completely misunderstood his message and thought they could do it on their own.
Considering all of the hacks into our governments’ and political organizations’ servers, how likely is it that we will see our voting systems successfully hacked?
I wrote a piece in 2001 titled “Don’t Stop The Handcount; A Few Problems With Internet Voting.”
The same problems that existed then exist now. Considering we can’t keep guns and drugs out of maximum security prisons, it’s ridiculous to think the US Government could deploy a voting system that isn’t highly vulnerable to attack.
Creating a voting system that supports hundreds of millions of users across tens of thousands of physical locations, managed by people who often have little to no technical background, is genuinely difficult. It’s not that a tamper-resistant voting system can’t be developed; it’s just that we won’t see one for at least a decade.
What is there to be positive about (in regards to cybersecurity) in the face of security threats, cyber warfare and government hacks?
In the past, security was all about fear, uncertainty and doubt.  Now, hardly a day goes by without a story in the Wall Street Journal or Financial Times about information security. That makes the job of selling security much easier.
Many more universities now offer computer security training for computer science graduates, so the pool of those with computer security training is much greater.
Security awareness is also required by standards such as ISO/IEC 27001 and PCI DSS, so the trickle-down effect means that the information security awareness level is going up for rank-and-file employees.
Ben Rothke
Ben Rothke, CISSP, PCI QSA, is a Principal Security Consultant with Nettitude, Ltd. He has over 15 years of industry experience in information systems security and privacy.
His areas of expertise are in risk management and mitigation, security and privacy regulatory issues, design & implementation of systems security, encryption, cryptography and security policy development, with a specialization in the financial services and aviation sectors.
Ben is the author of Computer Security – 20 Things Every Employee Should Know (McGraw-Hill), and is also a frequent speaker at industry conferences, such as RSA and MISTI.
Twitter: https://twitter.com/benrothke
Blog: https://www.rsaconference.com/blogs?category=security-reading-room

Four Questions For: Ryan Calo

How do you draw the line between prosecuting a robot that does harm and its creator? Who bears the burden of the crime or wrongdoing?
I recently got the chance to respond to a short story by a science fiction writer I admire. The author, Paolo Bacigalupi, imagines a detective investigating the “murder” of a man by his artificial companion. The robot insists it killed its owner intentionally in retaliation for abuse and demands a lawyer.
Today’s robots are not likely to be held legally responsible for their actions. The interesting question is whether anyone will be. If a driverless car crashes, we can treat the car like a defective product and sue the manufacturer. But where a robot causes a truly unexpected harm, the law will struggle. Criminal law looks for mens rea, meaning intent. And tort law looks for foreseeability.
If a robot behaves in a way no one intended or foresaw, we might have a victim with no perpetrator. This could happen more and more as robots gain greater sophistication and autonomy.
Do tricky problems in cyber law and robotics law keep you awake at night?
Yes: intermediary liability. Personal computers and smart phones are useful precisely because developers other than the manufacturer can write apps for them. Neither Apple nor Google developed Pokemon Go. But who should be responsible if an app steals your data or a person on Facebook defames you? Courts and lawmakers decided early on that the intermediary—the Apple or Facebook—would not be liable for what people did with the platform.
The same may not be true for robots. Personal robotics, like personal computers, is likely to rise or fall on the ingenuity of third party developers. But when bones instead of bits are on the line—when the software you download can touch you—courts are likely to strike a different balance. Assuming, as I do, that the future of robotics involves robot app stores, I am quite concerned that the people that make robots will not open them up to innovation due to the uncertainty of whether they will be held responsible if someone gets hurt.
Would prosecuting someone who harms a robot be different from prosecuting someone who harms a non-thinking or non-intelligent piece of machinery?
It could be. The link between animal abuse and child abuse, for instance, is so strong that many jurisdictions require authorities responding to an animal abuse allegation to alert child protective services if kids are in the house. Robots elicit very strong social reactions. There are reports of soldiers risking their lives on the battlefield to rescue a robot. In Japan, people have funerals for robotic dogs. We might wonder about a person who abuses a machine that feels like a person or a pet. And, eventually, we might decide to enhance penalties for destroying or defacing a robot beyond what we usually levy for vandalism. Kate Darling has an interesting paper on this.
Should citizens be concerned about robotic devices in their home compromising their privacy, or about hackers attacking their medical devices? How legitimate or illegitimate are people’s fears about the rise of technology?
People should be concerned about robots and artificial intelligence but not necessarily for the reasons they read about in the press. Kate Crawford of Microsoft Research and I have been thinking through how society’s emphasis on the possibility of the Singularity or a Terminator distorts the debate surrounding the social impact of AI. Some think that superintelligent AI could be humankind’s “last invention.” Many serious computer scientists working in the field scoff at this, pointing out that AI and robotics are technologies still in their infancy. But despite AI’s limits, these same experts advocate introducing AI into some of our most sensitive social contexts such as criminal justice, finance, and healthcare. As my colleague Pedro Domingos puts it: The problem isn’t that AI is too smart and will take over the world. It’s that it is too stupid and already has.
Ryan Calo
Ryan Calo is a law professor at the University of Washington and faculty co-director of the Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Department of Computer Science and Engineering. Calo holds courtesy appointments at the University of Washington Information School and the Oregon State University School of Mechanical, Industrial, and Manufacturing Engineering. He has testified before the U.S. Senate and German Parliament and been called one of the most important people in robotics by Business Insider. This summer, he helped the White House organize a series of workshops on artificial intelligence.
@rcalo on Twitter
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2402972
http://www.slate.com/articles/technology/future_tense/2016/04/a_robotics_law_expert_on_paolo_bacigalupi_s_mika_model.html

Hackers stole up to $1B from banks worldwide, Kaspersky says

A gang of hackers has, over the course of a year or more, stolen up to $1 billion from financial institutions around the world, including some in the U.S., according to a new report by cybersecurity firm Kaspersky Lab.

The Carbanak gang — named after the malware they installed on computers at financial institutions — targeted marks in the U.S., Germany and Asia, and possibly elsewhere, according to Kaspersky’s Threatpost blog. Instead of relying on phishing attacks that go after end-user passwords, they targeted bank employees themselves, sending email messages containing malware that then recorded internal interactions to learn the banks’ procedures and processes, in some cases feeding video back to their mothership.

One reason the payoff may have been so big was that the gang was patient, waiting months to make their move and moving on from one bank to another after making their, um, withdrawals, typically grabbing no more than $10 million per institution. In some cases, ATMs simply started spewing cash without anyone requesting it; the money was then picked up by cash “mules.” In others, the bank’s network was used to move money out of the organization into the cybercriminals’ own accounts. And in some cases, fake accounts with high balances were created and then tapped by mules.

From the Threatpost blog:

The hackers lived on the bank networks for months after successfully gaining a network foothold, generally through a spearphishing email laced with a malicious .CPL attachment, and in some cases, Word documents. The attachments contained the backdoor named Carbanak which is capable of many of the same data stealing capabilities as notorious APT-style attacks, including remote control.

Kaspersky posted its full report on Monday, an advance copy of which it provided to the New York Times. Speaking with that paper, Chris Doggett, managing director of Kaspersky’s North America office, characterized this as “the most sophisticated attack the world has seen to date in terms of the tactics and methods that cybercriminals have used to remain covert.”

As is usually the case, no institutions were named because of non-disclosure agreements. It’s not exactly good advertising to admit that your customers’ funds are at risk, after all.

Kaspersky told the Times it worked with Interpol and Europol to gather information. Sanjay Virmani, director of Interpol’s digital crime center, told BBC News that the “attacks again underline the fact that criminals will exploit any vulnerability in any system.”


How tech trials force a choice between bad people and bad law

Of course Ross Ulbricht was guilty. Despite his far-fetched claims of mistaken identity, a New York jury confirmed the obvious: that Ulbricht (aka Dread Pirate Roberts) was the criminal mastermind who took on a villain’s name, and became rich by running an online marketplace that sold any drug imaginable.

All the same, protestors on the internet and at the courthouse still insisted Ulbricht was innocent. More broadly, the Dread Pirate (who is now awaiting sentencing) also enjoyed sympathy from many in the tech press, which often downplayed the bad things he did, and instead cast the FBI as the villain in the case.

Such moral indulgence is odd, and doesn’t extend to Ulbricht alone. Other tech rogues, including a corpulent charlatan and a Nazi sadist, also enjoy public sympathy. But why? A big part of it may lie with the government’s heavy-handed approach to internet-related crime.

Bad people

Ross Ulbricht didn’t start out bad. Indeed, accounts of his past life from his mother and from Ulbricht himself reveal a very different sort of person: an Eagle Scout, and then a bright and sensitive physics student who worked hard to build a used books company.

But then he became someone else. He started the Silk Road marketplace, which began as a relatively benign forum for finding magic mushrooms, but then devolved into a free-wheeling playground for hard drugs, forged documents and prostitution.

The notorious bazaar also changed Ulbricht himself: FBI evidence from a seized laptop suggests that he attempted to hire hit men to murder those he believed had betrayed him (the “hit men” turned out to be government agents but Ulbricht believed they were real).

This tragic arc, which saw the Eagle Scout become the Dread Pirate Roberts, may explain some of the sympathy for Ulbricht. But that’s hardly the case for Weev, another famous figure in internet circles who was also prosecuted by the Justice Department.

Weev, whose real name is Andrew Auernheimer, was sentenced to three years in prison for what the government describes as a hack on AT&T. But he is better known for his other legacy as one of the cruelest trolls on the internet, whose antics have exposed women to death threats. And last year, Weev reinvented himself as a Jew-hating white supremacist.

Despite all this, many tech outlets hailed an appeals court decision last year to vacate Weev’s conviction on the hacking charges on procedural grounds, and to release him from prison. And while Weev doesn’t exactly enjoy public sympathy, stories of his legal battle often elide the bad things he has done.

Other antiheroes of the tech worlds include Julian Assange, the self-aggrandizing Wikileaks leader who faces sexual assault accusations in Sweden, and outlaw music mogul Kim Dotcom.

Dotcom has done a litany of bad things, including making millions from purloined movies and allegedly ratting on his rivals, but he is still hugely popular with many internet communities. The 300-pound fugitive is even dabbling in mainstream politics in New Zealand, where he is living while he fights U.S. efforts to extradite him to face multiple criminal charges.

The celebrity-style adulation that Dotcom and the others receive is no doubt frustrating for the law enforcement officials trying to convict them. The reason for it, however, is not just that the outlaws are good at gulling the public (though that’s part of it), but also people’s legitimate misgivings about the laws that the U.S. is using to prosecute them.

Bad laws

Aaron Swartz was a genius so beloved in the tech community that a filmmaker made an acclaimed movie about him called “The Internet’s Own Boy.” But he was also a criminal in the eyes of the government, and some believe the Justice Department’s relentless effort to prosecute him led the 26-year-old Swartz to commit suicide in his Brooklyn apartment two years ago.

What crime led to this end? In 2010 and 2011, Swartz used MIT computers to download millions of academic articles from a database called JSTOR – articles whose authors are typically unpaid, but that are licensed to universities at high fees. His action may have been ill-advised, but it hardly amounted to a serious crime.

Nonetheless, the Justice Department came at Swartz with a law called the Computer Fraud and Abuse Act that gave prosecutors discretion to seek a prison term of 35 years and a $1 million fine.

The CFAA is a clumsy statute dating from 1986 that relies on vague concepts like “unauthorized access,” and lawmakers have tried to reform it. Yet those efforts have so far failed, and the Justice Department keeps using it in all sorts of cases — including that of Weev.

In the government’s view, Weev committed illegal hacking under the CFAA when he “accessed” the AT&T website to demonstrate a security flaw that spat out private email addresses. Skeptics, however, point out that Weev simply entered information into a public website available to anyone with an internet browser, and ask how this amounts to hacking.

The CFAA also grounded one of the seven charges on which Ross Ulbricht was convicted, “conspiracy to commit hacking,” although that charge was overshadowed by other elements of the trial (including a theory that the government itself had violated the CFAA).

Meanwhile, the CFAA is hardly the only questionable law that is at issue in tech-related prosecutions.

Before the Silk Road case, Ulbricht’s mother made a forceful argument that her son’s prosecution should be seen through the lens of the Justice Department’s systemic abuse of surveillance and drug laws. She has a point: whatever harms were caused by Silk Road drug deals, they pale in comparison to the destruction wrought by America’s ruinous “war on drugs.”

As for Kim Dotcom, his use of a mass piracy company to get rich is impossible to justify. But so too are many aspects of U.S. copyright law, whose absurd terms and harsh penalties serve to benefit a narrow sector of the entertainment industry at the expense of the general public. Is it a surprise that knee-jerk attitudes to digital media by government and industry have led some to cheer for Dotcom instead of the industry that wants him prosecuted?

The hard choice

The cases against Ulbricht, Weev and Dotcom raise a dilemma because they can force us to choose between supporting a bad person or a bad law. A choice to convict such men may serve to legitimize unjust laws, while exonerating them amounts to giving them a free pass for unacceptable actions.

The cases can be harder still since they often involve technology (like Tor, peer-to-peer tools and bitcoin) that is unfamiliar to average people, but that the government often characterizes as inherently suspicious and related to “hacking.”

All of this helps to explain why the tech community can embrace antiheroes over Justice Department prosecutors who are apt to employ every legal tool at their disposal — even if it is one that is harsh or outdated.

The solution then is to give the prosecutors better tools, and not simply more of them. If the U.S. government is going to retain credibility in its effort to go after what it sees as online bad guys, it will have to do a better job of defining crime, and matching crime to punishment.

This story was updated on 2/15 to replace the word “charges” with “accusations” to describe the sexual assault allegations against Assange.

“Anonymous” hackers attack European Parliament president’s site

The personal website of European Parliament president Martin Schulz was hacked last week, his office has confirmed.

On Friday someone posted information on Pastebin indicating that they had retrieved database and password information from the martin-schulz.info website. A spokeswoman for Schulz stressed that this was not a page on the European Parliament website itself (although Schulz’s personal sites are now redirecting to his official page there).

“The investigation is ongoing,” the president’s spokeswoman added.

The Greek security site SecNews reported that the attackers had emailed it, claiming responsibility in the name of Anonymous. It should be noted that, as is so often the case, attribution to “Anonymous” is a tricky matter. The group is nebulous and anyone can claim the name.

That email reportedly claimed that the attack was motivated by Schulz’s alleged aim of “destroying Greece and then other countries” – presumably something to do with the German European Parliament president last week urging the country’s new hard-left government to stick by the agreements made by its predecessors.

The attackers also included screenshots that detailed how they used a SQL injection attack to steal data from and deface the website.

German government website attack may be Ukraine-related

Two German government websites were knocked offline by a distributed denial of service (DDoS) attack around 10am local time on Wednesday. Chancellor Angela Merkel’s site is still down five and a half hours later, but that of the Bundestag came back minutes ago. The pro-Russian CyberBerkut hacker group has claimed responsibility, saying the attack was carried out as an appeal to Germany to “stop financial and political support of criminal regime in Kiev, which unleashed a bloody civil war” in Ukraine. Although the attribution of today’s attack remains unconfirmed, the group has been highly active since the ouster of Ukrainian president Viktor Yanukovych in February 2014.

Fingerprints can be reproduced from publicly available photos

At a conference in Hamburg, Germany this weekend, biometrics researcher Jan Krissler demonstrated how he spoofed a politician’s fingerprint using photos taken by a “standard photo camera.”

Krissler speculated that politicians might even want to “wear gloves when talking in public.”

The Chaos Computer Club, which put on the conference, and Krissler, who goes by Starbug, have demonstrated their ability to breach fingerprint sensors in the past. Shortly after the first Touch ID-equipped iPhone came out, the Chaos Computer Club was the first group to demonstrate that it is possible to beat Touch ID by creating a fake latex finger from a fingerprint left on glass or a smartphone screen.

Krissler claims he isolated German Defense Minister Ursula von der Leyen’s fingerprint from high-resolution photos taken during a public appearance in October using commercially available software called VeriFinger.

Although there are some advantages to biometric access over traditional passwords — you can’t lose your fingerprint, and it can’t be phished — as the technology goes mainstream, it’s raising its own security issues. In addition to the spoofing problem, there’s a debate in the United States over whether a law enforcement officer can compel you to unlock your device with your finger.

Most iOS devices now come with Touch ID, Apple’s fingerprint security hardware. A recent Apple patent shows a way to beef up fingerprint reader security by adding a swipe motion.

Fingerprint readers aren’t standard on Android phones, but several devices already have them installed, and source code indicates that Google has been working to add system-wide fingerprint scanning support.

Hackers say Xbox/Playstation attacks are over, target Tor

On Christmas Day, gamers ran into problems connecting their Xbox or Playstation consoles to the internet thanks to a denial of service attack, and the hackers that have claimed credit are now naming a new target: the online anonymity software Tor.

A group operating under the name “Lizard Squad” posted a series of tweets today about a planned zero-day attack, one that targets previously unknown weaknesses. In this case, that appears to be taking over the majority of Tor’s nodes: the points through which data sent over the Tor network travels. Tor protects users’ identities with these nodes, which obscure the origin of any data. Lizard Squad’s thought is that if it controls enough of the nodes, traffic will no longer be anonymized.

As of this afternoon, Lizard Squad had set up about 3,000 nodes out of the roughly 8,000 in existence, according to Gizmodo. But Redditors are questioning whether those 3,000 nodes carry enough weight to have any effect, as new nodes are vetted before they receive encrypted data.
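For a rough sense of what those numbers would mean, here is a back-of-the-envelope calculation under the simplifying (and not strictly accurate) assumption that relays are picked uniformly at random; real Tor weights relays by bandwidth, pins entry guards, and, as noted above, vets new nodes before trusting them with traffic.

    # Back-of-the-envelope only; ignores bandwidth weighting, guard pinning
    # and relay vetting, all of which reduce the attacker's real-world odds.
    attacker_relays = 3_000
    total_relays = 8_000

    fraction = attacker_relays / total_relays
    both_ends = fraction ** 2   # chance a circuit's entry and exit are both hostile

    print(f"share of relays: {fraction:.1%}")                   # 37.5%
    print(f"entry and exit both compromised: {both_ends:.1%}")  # about 14%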

Why is a hacker group interested in taking down software that has benefited countless other hackers? Lizard Squad posted a tweet documenting a possible motive.

This story is still developing, as Lizard Squad is working to gain more nodes. What has ended is the attack on Xbox and Playstation consoles. Lizard Squad thanked Kim Dotcom, who gave the group vouchers for his secure file hosting service Mega in exchange for ceasing the attack.