Businesses Must Get Better at Breach Detection

Information system breaches are bad enough, but breaches that go undetected prove to be far worse. Take, for example, Yahoo’s revelation that it had uncovered a data breach affecting as many as one billion user accounts. While the breach itself was significant, more significant is that it occurred back in August of 2013 and took years for Yahoo to discover. Ultimately, the issues surrounding the breach hampered Yahoo’s acquisition by Verizon, costing Yahoo some $350 million.
Unfortunately, Yahoo isn’t alone in failing to detect breaches quickly. According to the latest Verizon Data Breach Investigations Report, dwell time (how long it takes to discover a breach) averages more than 200 days. The reasons are numerous, ranging from a lack of tools to a lack of technical expertise.
Nonetheless, experts agree that something must be done. Faizel Lakhani, president and COO of SS8, a Milpitas, CA-based breach detection company, said, “Despite the best efforts of a barrage of perimeter, network and endpoint security defenses, breaches have continued and will continue to occur.” It’s a statement validated by the company’s 2016 Threat Rewind Report, which shows that the potential for breaches is on the rise and that breaches are becoming much more sophisticated.
In Lakhani’s view, it all comes down to improving detection. He said, “Humans in any organization will make mistakes that allow cyber intrusions. Companies need to accept that reality and develop methods of identifying and counteracting threats.”
To that end, SS8 has introduced technology it refers to as a Protocol Extraction Engine (PXE), which can be thought of as a deep packet inspection engine that correlates and understands network traffic in real time. Lakhani added, “The idea here is to intelligently automate the detection process to a point where even tunneling or obfuscation techniques can be detected, removing that burden from InfoSec professionals.”
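To make the idea of automated obfuscation detection concrete, here is a minimal, hypothetical sketch of the kind of heuristic such an engine might apply to one common technique, DNS tunneling, by flagging query names that look like encoded payloads. This is purely illustrative and is not SS8’s implementation; the function names, thresholds and test values are my own assumptions.

```python
# Hypothetical DNS-tunneling heuristic; illustrative only, not SS8 code.
# Flags query names whose subdomain labels look like encoded payloads
# (unusually long or unusually high-entropy strings).
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(query_name: str,
                         max_label_len: int = 40,
                         entropy_threshold: float = 3.8) -> bool:
    """Heuristic: very long or high-entropy subdomain labels suggest encoded data."""
    labels = query_name.rstrip(".").split(".")
    subdomain_labels = labels[:-2] if len(labels) > 2 else []
    for label in subdomain_labels:
        if len(label) > max_label_len:
            return True
        if len(label) >= 16 and shannon_entropy(label) > entropy_threshold:
            return True
    return False

# An encoded-looking label trips the heuristic; a normal host name does not.
print(looks_like_tunneling("nf2gs43vnfzsa2ldovzw63lfnz2a9x7q.exfil.example.com"))  # True
print(looks_like_tunneling("www.example.com"))                                     # False
```

A real engine would of course combine many such signals across protocols and correlate them over time; the point of the sketch is simply that this kind of pattern matching can be automated rather than left to an analyst’s eyeballs.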
In other words, SS8 is looking to remove human inefficiencies from the breach detection process, something sorely needed to counter the more advanced, blended threats that are becoming all too common. PXE is one piece of the company’s offerings, which fall under the umbrella of its BreachDetect platform.
BreachDetect is aimed at solving the primary problem facing InfoSec professionals: gaining visibility into the traffic that traverses complex IT infrastructures and application environments, as well as the numerous IoT devices connected to today’s enterprise networks. “The average breach goes undetected for more than 200 days, so it has become essential to understand the full life cycle of an attack, from reconnaissance, to command and control, to data exfiltration. That is the most prudent way to identify the systems and data that have been compromised,” Lakhani told GigaOM. “Obtaining this level of information has been a challenge due to a lack of visibility into network and application activity, and the lack of forensic expertise available to investigate attacks.”
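As a rough illustration of the life-cycle view Lakhani describes, the sketch below (a hypothetical example, not part of BreachDetect) tags timestamped network events with kill-chain stages and computes dwell time as the gap between the earliest suspicious activity and the moment of detection. The events and dates are invented for the example.

```python
# Hypothetical illustration of tracking an attack life cycle; not BreachDetect code.
from datetime import datetime

# Timestamped events as an analyst (or tool) might label them after the fact.
events = [
    {"time": datetime(2016, 1, 4), "stage": "reconnaissance",
     "detail": "port scan from external host"},
    {"time": datetime(2016, 1, 9), "stage": "command-and-control",
     "detail": "beaconing to unknown domain"},
    {"time": datetime(2016, 6, 2), "stage": "exfiltration",
     "detail": "large outbound transfer at 3 a.m."},
]
detected_on = datetime(2016, 8, 1)

# Dwell time: how long the intrusion sat undiscovered.
first_activity = min(e["time"] for e in events)
dwell_days = (detected_on - first_activity).days
print(f"Dwell time: {dwell_days} days")   # 210 days in this example

# Group events by kill-chain stage to see the full arc of the attack.
by_stage = {}
for e in events:
    by_stage.setdefault(e["stage"], []).append(e["detail"])
for stage, details in by_stage.items():
    print(stage, "->", details)
```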
Regardless of which tool an enterprise chooses for breach detection, Lakhani’s advice, to comprehend the full chain of attack and understand the implications of a breach, is valuable to any organization looking to get a handle on the problem.

NaaP (Network as a Platform) is the Latest Acronym to Spell Growth

For the enterprise and service provider crowd, Riverbed is looking to bring the agility of the cloud to existing infrastructure by adding a new abstraction layer that automates and simplifies thorny tasks such as provisioning, scaling, and overall services management.

It’s hard being HPE these days

I’m sitting here in the blogger lounge at HPE Discover 2016 in London (while also keeping an eye on the news coming from AWS re:Invent in Las Vegas)… and the only thing that comes to my mind at the moment is something I read on Twitter a few days ago: “the software is eating the hardware and the cloud is eating the software”.

More Focus, but…

HPE is much more focused than it was a year ago (thanks to the split from HP Inc.) and is trying to remain relevant in a world that is seeing major changes in how datacenters are designed and infrastructure is consumed. The company is trying to stay part of the conversation and bring credible solutions to the table, but it’s quite obvious it is struggling.
Take Synergy, for example: a brilliant idea… had it been launched five to seven years ago. Now we have the cloud, the “Nutanixes” and all the software-defined-ish stuff you may need… why should anyone buy a configurable piece of hardware from a company that doesn’t even own the whole stack and can’t deliver an iPhone-like experience? If only the rumours were true and SimpliVity were the target of an acquisition… that would be something to talk about (and it would probably turn out as well as the 3PAR acquisition has).

Server? Storage? What else?

Yes, HPE is the worldwide leader in server revenue and shipments. But this is a declining market. It’s not about the quality of the servers; it’s about the cloud eating up everything.
The biggest infrastructure investments are now made in large datacenters, with SMEs looking to offload as much as they can to the cloud. Large datacenters are all about efficiency: they try to squeeze everything they can out of every resource, which is why they are doing more with fewer servers. What’s more, they don’t want any form of lock-in. Hyper-scalers are designing their own equipment to take efficiency to the next level, but they also want to make hardware vendors irrelevant. In fact, if you own the specification, and it’s an open standard, there is no way for the hardware manufacturer to differentiate, except on price…
And what about storage? It’s even worse. Even though 3PAR is still doing well (compared to the rest of the external storage market), external enterprise storage shipments and revenues are showing trends comparable to those of servers, and the cause is the same. Big datacenters look for very large, cheap, capacity-driven infrastructures (massive scale-out systems) and very low-latency solutions (they want storage as close to the CPU as possible).
The HPE 3PAR team is working on a very nice and seamless adoption of 3D XPoint (as is everyone else in this industry?!). It will be delivered as a cache first and then, later, as a tier (but isn’t that what happened with flash adoption a few years back? Yes, it is!). They are still evolving the same platform, which is still good for traditional applications and workloads but not enough for next-generation infrastructures. A 3D XPoint-powered 3PAR can deliver 250-microsecond latency, while a 3D XPoint PCIe card can give you a tenth of that!
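For readers less familiar with the distinction, the sketch below (hypothetical, and nothing to do with 3PAR’s actual implementation) illustrates the difference between using a fast medium as a cache, which holds copies of data in front of slower storage, and as a tier, which becomes the authoritative home for hot data.

```python
# Hypothetical sketch of "cache" vs. "tier" data placement; not 3PAR code.

class CachedVolume:
    """Fast media holds copies; the slow backend remains the authoritative store."""
    def __init__(self, backend: dict):
        self.backend = backend   # block -> data on slow media (authoritative)
        self.cache = {}          # block -> data on fast media (copies only)

    def read(self, block):
        if block in self.cache:          # cache hit: fast-media latency
            return self.cache[block]
        data = self.backend[block]       # cache miss: pay slow-media latency
        self.cache[block] = data         # keep a copy for next time
        return data


class TieredVolume:
    """Hot blocks live on fast media; cold blocks live on slow media."""
    def __init__(self):
        self.fast_tier = {}   # authoritative home for hot blocks
        self.slow_tier = {}   # authoritative home for cold blocks

    def read(self, block):
        if block in self.fast_tier:      # hot data: its only copy is on fast media
            return self.fast_tier[block]
        return self.slow_tier[block]     # cold data stays put until promoted

    def promote(self, block):
        """Move (not copy) a block to the fast tier when it becomes hot."""
        if block in self.slow_tier:
            self.fast_tier[block] = self.slow_tier.pop(block)
```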
Storage as we know it today will become less and less interesting and relevant. In-memory (and CPU-driven) storage, ephemeral or persistent, is the future. Several startups are already working on it, and most of them are delivering software solutions (not hardware; software!). When hardware does come into play, it will be commodity stuff. Yes, HPE has shown us a prototype of what it calls “The Machine”, but it’s just another science project built from components that won’t be available for years… while large end users are looking at these problems now.
I won’t comment on networking (it’s not my field of interest), but I suspect the situation there isn’t that different from what I’ve seen in other fields.

And what about the cloud?

Well, the fact that HPE invested so much in OpenStack and is now divesting just as quickly is sad. They say they’re going to partner with SUSE to build a common platform, but the truth is that they’re selling everything to SUSE (Helion, the Cloud Foundry stuff, engineers and all). Eucalyptus has disappeared too, although it had some potential to help with the “Hybrid IT” that HPE loves to mention in every speech here… though I wonder whether Eucalyptus could have kept up with the incredible pace set by AWS in terms of innovation, new services and improvements. Having to chase someone else is always a tough job.
HPE is partnering with everyone but Amazon… and even though one of the partners is Docker (which is everybody’s partner now), I can’t see much here. Yes, HPE servers now ship with Docker on board, but I don’t think anyone cares.

Closing the circle

HPE is now more focused than in the past. It has a leaner organization, but it’s also clear that a lot has to be done for the company to become exciting again… and I certainly wouldn’t want to be the one in charge of coming up with the right strategy to achieve that…
I’m not saying that HPE will disappear anytime soon, but the difference between a server from HPE and one from Dell (or anyone else) is minimal and doesn’t justify all that effort, especially for large datacenter customers… and the same goes for storage and, probably, networking. Everything is moving up the stack.
Software could make a difference, a huge difference, but HPE has never been as successful with software as it thinks… and it is totally missing the fact that HCI is 100% software. Embracing the cloud (I mean embracing it seriously) could be too much for HPE; they aren’t ready, and they might not be able to cannibalize themselves to become a totally different company (the IBM story could tell us a lot about this). Options do exist, though, and I think HPE should try to repeat the 3PAR story with cloud instead of storage this time around (with SimpliVity, perhaps).
I’m afraid I don’t have a formula or a suggestion for HPE… it’s just that we’re living in a different world now, and even though the products it makes are good and still at the core of our infrastructures, they are no longer central to the ongoing conversations about the future of IT.

Disclaimer: I was personally invited to attend HPE Discover, with HPE covering my travel and accommodation costs; however, I was not compensated for my time. I am not required to blog about any content, and blog posts are not edited or reviewed by HPE before publication.

Review: SmartDraw Helps to Tame Wild IoT Networks

Comprehending the intricacies of the emerging IoT world takes more than looking at a static Visio diagram; it takes a tool designed to handle both virtual and physical devices, with the ability to visualize those complex interconnections dynamically.

Everything You Know About the Stack is About to Change

I am at the OpenStack Summit here in Austin, and the announcements and releases keep rolling out, illustrating that the growing OpenStack market has some real teeth and is taking a bite out of the market standbys. Even so, there is still a great deal of fear, uncertainty and doubt around the viability of clouds built upon OpenStack. The real question is whether that FUD is unfounded for today’s emerging markets.
That means taking a closer look at OpenStack is a must for businesses delving further into public, private and hybrid clouds.
The OpenStack project, which is now managed by the OpenStack Foundation, came into being back in 2010 as a joint venture between NASA and Rackspace Hosting, with the goal of bringing collaborative, open-source software to the then-emerging cloud market. Today, the OpenStack Foundation boasts that some 500 companies have joined the project, and the community now collaborates around a six-month, time-based release cycle.
OpenStack, which is essentially an open-source software platform for cloud computing, has become a viable alternative to the likes of Amazon (S3, EC2), Microsoft Azure and DigitalOcean. Recent research by the 451 Group has predicted a 40% CAGR, with the OpenStack market reaching some $3.5 billion by 2018. That’s enough of a market to make all of the players involved take notice.
However, the big news out of OpenStack Summit Austin 2016 comes in the form of product announcements, with more and more vendors aligning themselves with the platform.
For example, HPE has announced its HPE Helion OpenStack 3.0 platform release, which is designed to improve efficiency and ease private cloud development, all without vendor lock-in problems.
Cisco is also embracing the OpenStack movement with its Cisco MetaPod, an on-premises, preconfigured solution based on OpenStack.
Another solution out of the summit is the Avi Vantage Platform from Avi Networks, which promises to bring software-defined application services to OpenStack clouds, along with load balancing, analytics, and autoscaling. In other words, Avi is aiming to bring agility to the OpenStack cloud.
Perhaps the most impressive news out of the summit comes from Dell and Red Hat, with the Dell Red Hat OpenStack Cloud Solution version 5.0, which incorporates an integrated, modular, co-engineered and validated core architecture that leverages optional validated extensions to create a robust OpenStack cloud, one that integrates with the rest of the OpenStack community’s offerings.
Other vendors making major announcements at the event include F5 Networks, Datera, DreamHost, FalconStor, Mirantis, Nexenta Systems, Midokura, SwiftStack, Pure Storage, and many others. All of those announcements have one core element in common: the OpenStack community. In other words, OpenStack is here to stay, and competitors must now take the threat of the open-source cloud movement a little more seriously.