Four Questions For: Joseph Steinberg

You have written that there is no effective law enforcement to counter or punish cybersecurity attackers and hackers. How do you envision this changing in your lifetime? How can law enforcement and governments protect their citizens?
There are many reasons that cybercrime often goes unpunished today, and entire books could be written to answer how government and law enforcement can better protect citizens. Improvement is needed in many areas: laws need to change; enforcement agencies need more flexibility to hire experts; international cooperation needs to be obtained (diplomatically, if possible); lawmakers need to invest time in staying current with technology rather than spend it raising campaign funds; various parts of government need to listen not only to representatives of large corporations, but also to experts who often are independent or work for small firms; laws need to be enforced uniformly, without regard for alleged perpetrators’ political connections or the political ambitions of prosecutors; and stolen data needs to be treated as stolen property.
If you have nothing to hide, what is there to worry about with regard to surveillance?
The argument that anyone who “has nothing to hide” doesn’t need to worry about surveillance is simply wrong, as surveillance undermines privacy, not just “hidden things.” How many people who consistently post about their successes on Facebook don’t mention when they fail at something important or when they are caught doing something that they should not have done? How many people who Tweet regularly tell the world about highly personal issues such as medical problems, marital fights, or embarrassing scenarios? How many people who share selfies also post photos of themselves taking their medicine for a chronic condition, crying over emotional pain, using the bathroom, or engaging in sexual activities? We all have private moments and negative experiences that we do not announce to the world or wish to have others watch. When people think about how much they wish to keep private, they start to grasp how dangerous surveillance can be. Not only may those performing the surveillance obtain our private information, but, if they don’t adequately protect it, the whole world may see it.
What do you believe are the biggest security risks to social media? What should users do to protect themselves against these risks?
While there are multiple issues related to social media security, the biggest risk is people making posts without understanding the consequences of those posts. Besides harming one’s personal relationships, professional career, or reputation, a problematic post can harm one’s employer’s brand image, leak its confidential information, lead to it being sued, or violate regulations. Oversharing information can even help criminals craft highly effective spear phishing emails, thereby undermining organizational information security and leading to major data breaches. While people should think about what they post, relying on people to “always do the right thing” is a recipe for disaster (think what would happen if we relied on people to practice good cybersecurity hygiene and did not issue them anti-virus software), which is why technology is needed to warn people in real time when they are making problematic posts, regardless of the location, device, or account from which they post.
What pieces of everyday technology are people using without realizing the cybersecurity threats behind them? What kind of data is being shared through things like wearables, smartphones, smartwatches, etc.?
The less something looks like a classic computer, the less people seem to think about cybersecurity when using it. Even though smartphones and tablets in some ways pose greater risks to information security than laptop computers do, for example, people often take fewer precautions with these devices than with their laptops. And when it comes to wearables or other connected devices, people almost never consider what security risks are created by using those machines. How many people who have purchased connected televisions, thermostats, or refrigerators have truly thought about segregating those devices on separate networks, monitoring those devices’ activity for anomalies, and so on? Probably only a small percentage. Smart-device manufacturers often don’t adequately address security either, since purchasers aren’t willing to pay more for it. That’s one of the reasons that denial-of-service and other forms of attacks are likely to leverage these devices going forward.
Smart devices don’t create risks only to the data that they house and process; the devices can become launching grounds for attacks against other devices, can be used to monitor network traffic from computers, can be used as zombies as part of distributed denial of service attacks, etc.
Joseph Steinberg is a respected cybersecurity expert and the founder and CEO of SecureMySocial, which recently brought to market the world’s first system to warn people in real time if they are making inappropriate social-media posts. Earlier, he served for a decade as CEO of the cybersecurity firm Green Armor Solutions, and for five years in several senior capacities at Whale Communications, which was acquired by Microsoft. Joseph has been ranked among the top three cybersecurity online influencers worldwide and is a frequent media commentator on cyber-related matters. He is the inventor of several cybersecurity technologies widely used today, and his work is cited in well over 100 published US patents. He is a regular columnist covering cybersecurity for Inc. magazine (and earlier for Forbes), and has written several books on the field as well. Joseph also serves as an expert witness and consultant on issues related to information security, and is a member of the advisory board of multiple technology companies.
Twitter: @JosephSteinberg

How PayPal uses deep learning and detective work to fight fraud

Hui Wang has seen the nature of online fraud change a lot in the 11 years she’s been at PayPal. In fact, a continuous evolution of methods is kind of the nature of cybercrime. As the good guys catch onto one approach, the bad guys try to avoid detection by using another.

Today, said Wang, PayPal’s senior director of global risk sciences, “The fraudsters we’re interacting with are… very unique and very innovative. …Our fraud problem is a lot more complex than anyone can think of.”

In deep learning, though, Wang and her team might have found a way to help level the playing field between PayPal and criminals who want to exploit the online payment platform.

Deep learning is a somewhat new approach to machine learning and artificial intelligence that has caught fire over the past few years thanks to companies such as Google, Facebook, Microsoft and Baidu, and a handful of prominent researchers (some of whom now work for those companies). The field draws a lot of comparisons to the workings of the human brain because deep learning systems use artificial neural network algorithms, although “inspired by the brain” might be a more accurate description than “modeled after the brain.”

A visual diagram of a deep neural network for facial recognition, showing how DeepFace sees Calista Flockhart. Source: Facebook

Essentially, the stacks of neural networks that comprise deep learning models are very good at recognizing patterns and features of the data they’re trained on, which has led to some huge advances in computer vision, speech recognition, text analysis, machine listening and even video-game playing in the past few years. You can learn more about the field at our Structure Data conference later this month, which includes deep learning and artificial intelligence experts from Facebook, Microsoft, Yahoo, Enlitic and other companies.

It turns out deep learning models are also good at identifying the complex patterns and characteristics of cybercrime and online fraud. Machine-learning-based pattern recognition has long been a major part of fraud detection practices, but Wang said PayPal has seen a “major leap forward” in its abilities since it began investigating precursor (what she calls “non-linear”) techniques to deep learning several years ago. PayPal has been working with deep learning itself for the past two or three years, she said.

Some of these efforts are already running in production as part of the company’s anti-fraud systems, often in conjunction with human experts in what Wang describes as a “detective-like methodology.” The deep learning algorithms are able to analyze potentially tens of thousands of latent features (time signals, actors and geographic location are some easy examples) that might make up a particular type of fraud, and are even able to detect “sub modus operandi,” or different variants of the same scheme, she said.

Some of PayPal’s fraud-management options for developers.

The patterns are much more complex than “If someone does X, then the result is Y,” so it takes artificial intelligence to analyze them at a level much deeper than humans can. “Actually,” Wang said, “that’s the beauty of deep learning.”
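To make that concrete, here is a minimal, purely illustrative sketch of the kind of pattern recognition described above: a small multi-layer network trained on synthetic “transaction” features. It uses scikit-learn rather than any of PayPal’s actual tooling, and the features, labels, and review threshold are all invented stand-ins.

```python
# Illustrative only: a small neural network scoring synthetic transactions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for latent features such as time signals, actor
# attributes, and distance between account location and transaction location.
n_samples, n_features = 5000, 20
X = rng.normal(size=(n_samples, n_features))
# Invented labelling rule so the toy data contains a learnable pattern.
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7]
     + rng.normal(scale=0.5, size=n_samples) > 1.5).astype(int)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X, y)

# Score a new transaction; a high score routes it to a human "detective".
new_tx = rng.normal(size=(1, n_features))
fraud_probability = model.predict_proba(new_tx)[0, 1]
print(f"fraud probability: {fraud_probability:.3f}")
if fraud_probability > 0.9:  # hypothetical review threshold
    print("flag for manual review")
```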

Once the models detect possible fraud, human “detectives” can get to work assessing what’s real, what’s not and what to do next.

PayPal uses a champions-and-challengers approach to deciding which fraud-detection models to rely on most heavily, and deep learning is very close to becoming the champion. “We’ve seen roughly a 10 percent delta on top of today’s champion,” Wang said, which is very significant.
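In its simplest form, a champion/challenger comparison just scores both models on the same held-out data and promotes whichever wins. The sketch below assumes synthetic data and scikit-learn models; the metric and the models themselves are placeholders, not PayPal’s.

```python
# Toy champion/challenger evaluation on a shared hold-out set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 20))
y = (X[:, 0] - X[:, 5] + rng.normal(scale=0.5, size=4000) > 1.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # incumbent model
challenger = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300,
                           random_state=1).fit(X_train, y_train)     # deep-learning candidate

champion_auc = roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1])
challenger_auc = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])
print(f"champion AUC:   {champion_auc:.3f}")
print(f"challenger AUC: {challenger_auc:.3f}")
if challenger_auc > champion_auc:
    print("challenger would be promoted to champion")
```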

And as the fraudulent behavior on PayPal’s platform continues to grow more complex, she’s hopeful deep learning will give her team the ability to adapt to these new patterns faster than before. It’s possible, for example, that PayPal might some day be able to deploy models that take live data from its system and become smarter, by retraining themselves, in real time.

“We’re doing that to a certain degree,” Wang said, “but I think there’s still more to be done.”

DARPA shows off its tech for indexing the deep web

On Sunday night, 60 Minutes aired a segment about the Defense Advanced Research Projects Agency, or DARPA, and its attempts to secure the internet from hackers, human traffickers and other criminals. One of the DARPA efforts the program highlighted — and did so even more in an unaired segment for the web — is a project called Memex, which is essentially a search engine for the deep web and the dark web.

The technology looks pretty amazing in a number of ways, including its scale, its speed and its interface. Of course, it’s also tackling a horrible and often under-appreciated problem, which is the illegal trafficking of women and girls as sex objects. Asked why DARPA is concerned with sex trafficking, Memex inventor Chris White explained that people willing to take part in that endeavor are often more likely to take part in other endeavors — including things like weapons or drug trafficking — that could have national security implications.

A Memex-generated map of sex trafficking.

I wrote briefly about Memex last month, as part of a post about DARPA-funded research into machine learning algorithms — including computer vision and text analysis algorithms — for extracting even more info from deep web content.

The work DARPA is doing is part of a larger effort, which also includes tech companies like Google and Palantir, to identify and map instances of human trafficking around the world. It’s one of many problems that have existed for a long time, but that the internet has made easier to engage in. However, these efforts and others also show how the internet is making it easier for law-enforcement agencies to track and prosecute these crimes, provided the right analytical techniques are in place.

The 60 Minutes segment also featured DARPA innovation head Dan Kaufman, who spoke about web security at our Structure conference last June.

http://youtu.be/VXnFNd9WAAk

Password manager Dashlane now offers automatic password changing

The password manager Dashlane, which competes with the likes of LastPass and 1Password, just gained a new trick. Through the acquisition of a New York-based startup called PassOmatic, Dashlane is now able to offer an automated password-changing feature.

Password Changer does what it says on the box. Like most password managers, Dashlane’s software already included a password generator — now, users can automatically change passwords for chosen services with a single click, making it less likely that they’ll use the same password for long periods of time. The firm is touting this as a good counter-measure against security disasters like Heartbleed, where passwords have found their way into the wrong hands.
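The generation half of such a feature is straightforward; the sketch below uses Python’s secrets module to produce a strong random password. The per-site “changer” step, actually submitting the new password to each service, is service-specific automation and is not shown.

```python
# Minimal sketch of random password generation using the standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```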

During the beta phase that launched on Tuesday, Password Changer requires a small amount of manual intervention, but in future it will gain the ability to automatically change passwords at set intervals. It’s already compatible with sites such as Amazon, Facebook, Google, eBay and PayPal.

Like some other password managers, Dashlane’s service sees users store their passwords on the company’s servers to enable cross-device syncing (for which Dashlane charges $39.99 per year). The files are encrypted in the user’s client beforehand, though, and Dashlane maintains that it cannot read anything without the user’s master password, which it does not have.
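As a rough illustration of that pattern, client-side encryption keyed by a master password the service never stores, the sketch below derives a key with PBKDF2 and encrypts a vault entry with the cryptography library’s Fernet. This is a generic example, not Dashlane’s actual scheme; the iteration count and library choices are assumptions.

```python
# Generic client-side encryption sketch; NOT Dashlane's actual implementation.
# Requires the third-party 'cryptography' package.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the master password on the client;
    only ciphertext and the salt would ever leave the device."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)  # assumed iteration count
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

vault_entry = b'{"site": "example.com", "password": "hunter2"}'
ciphertext = Fernet(key).encrypt(vault_entry)   # what a sync server would store
plaintext = Fernet(key).decrypt(ciphertext)     # only possible with the master password
assert plaintext == vault_entry
```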

Asked whether law enforcement or intelligence agencies would be able to access anything, Dashlane CEO Emmanuel Schalit told me via email that agencies could only get encrypted files from the firm if it were subpoenaed, and the password would need to come from the user “as the grade of encryption used by Dashlane makes these encrypted documents very hard to attack.”

Some rivals such as 1Password don’t store any user data on their servers, and do make it possible (with some effort) to synchronize data between devices without the need for a cloud-based service. However, Dropbox remains the most flexible way to synchronize 1Password data, and that service is itself probably approachable by agencies.

It all really comes down to how much you trust the encryption, and whether you count your main threat as agencies or (far more likely) criminals. For general protection, everyone should be using a password manager, and the automated nature of what Dashlane is now offering does seem attractive, as does anything that makes it easier for people to adopt sensible security measures.

New York and France-based Dashlane raised a $22 million Series B round back in May and, while the PassOmatic acquisition didn’t come with any announced numbers, this is how Dashlane is using its cash. PassOmatic CEO Chana Kalai, who has now joined Dashlane along with two colleagues, said in a statement that it was “obvious to us that the solution made even more sense when combined with a password manager, and we clearly saw Dashlane as the leading and most innovative company in that field today.”

6 ways big data is helping reinvent enterprise security

What’s true in the rest of the world is true for security software, as well: more data means more intelligence. Thanks to the emergence of new techniques for storing, collecting and analyzing data, there’s a new wave of security companies looking smarter than ever.

An algorithm for tracking viruses (and Twitter rumors) to their source

A team of Swiss researchers thinks it has created an algorithm capable of tracking almost anything — from computer viruses to terrorist attacks to epidemics — back to the source using a minimal amount of data. The trick is focusing on time to figure out who “infected” whom.
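As a toy illustration of that idea, the sketch below scores each candidate source by how consistently its hop distances to a handful of monitored nodes explain the observed infection times. It is a deliberate simplification of the published estimator, with a synthetic graph and noiseless observations.

```python
# Toy source localization from sparse infection-time observations.
import networkx as nx

# Stand-in contact graph; this generator guarantees a connected graph.
G = nx.connected_watts_strogatz_graph(50, 4, 0.3, seed=2)
true_source = 7
hop_delay = nx.single_source_shortest_path_length(G, true_source)

observers = [3, 12, 25, 40]                            # the few nodes we can monitor
observed_times = {o: hop_delay[o] for o in observers}  # noiseless toy observations

def score(candidate: int) -> float:
    """Lower is better: squared error between the candidate's hop distances
    (up to a best-fit constant offset) and the observed infection times."""
    d = nx.single_source_shortest_path_length(G, candidate)
    diffs = [observed_times[o] - d[o] for o in observers]
    offset = sum(diffs) / len(diffs)
    return sum((x - offset) ** 2 for x in diffs)

estimate = min(G.nodes, key=score)
print("estimated source:", estimate, "| true source:", true_source)
```

With noisy timestamps and sparse observers the real estimator does considerably more work, but the core idea of exploiting differences in arrival times is the same.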

6 ways to keep your data safe in the cloud

These suggestions might seem like common sense, but the more we use cloud services, the more we put ourselves at risk of identity theft and other negative effects of cybercrime. Here are six ways to at least make it more difficult to steal your data.

DNSChanger shutdown: 5 ‘doomsdays’ from the internet’s past

As you’ve likely seen by the many blazing headlines, thousands of people may lose access to the Internet on July 9, in what some are calling an “Internet doomsday.” But it’s not the first time a single day has held apocalyptic fascination for the Web.