More ARM CPUs in the Datacenter for 2017?

AI, containers, cloud computing, large storage infrastructures, IoT, HPC… and probably more. To serve all of this, large service providers and enterprises are building huge datacenters where everything is designed to maximize efficiency.
The work covers all aspects of the datacenter: the facility, power and cooling, security, and compute power density as well. A lot has been done, but more is being asked for.

Failed attempts to do more (with less)

In the past many vendors tried to get more work done with less, taking approaches that failed miserably. Do you remember Sun’s SPARC T1 processor, for example? Launched in 2005: 72 watts, 32 threads (4 threads per core), a 1.4GHz clock… but it was too far ahead of its time. Most software was still single-threaded and didn’t run well on this kind of CPU.
We have also seen several attempts to push ARM CPUs into the datacenter, 32-bit processors first and 64-bit later. They all failed for the same reason that afflicted Sun’s CPU, plus a lack of optimized software in some cases.
But the number of cores has continued to increase (Intel now offers up to 24 cores in a single CPU) and software has followed the same trend, with multithreading first and microservices now. Applications organized in single-process containers are just perfect for this type of CPU.

Thank you Raspberry Pi (and others)

Now Linux on ARM and a lot of other open source software (as well as a specific version of Windows 10!) are much better optimized to run on ARM CPUs than in the past.
Raspberry Pi, the super cheap computer (starting at $5 now) launched in 2012, opened a world of opportunities for hobbyists, students and all sorts of developers at every level. Prototyping is much easier and less expensive, while the community ecosystem is growing exponentially. Raspberry Pi, and all its clones, are of course not designed for the datacenter… but it is also true that this small computer has inspired a lot of people and is at the base of some very cool projects, including HPC and Docker Swarm clusters!

The next step

ARM CPUs are particularly efficient when it comes to power consumption, and they are now becoming more and more powerful. What’s more, these CPUs are usually designed with an SoC (System-on-a-Chip) approach, which simply means that the chip already contains many of the other components needed to build a computer. In fact, multiple cores are often coupled with a GPU, network and storage controllers, and so on.
This doesn’t mean more raw compute power per se, but it does mean more compute power and less power consumption per square centimeter. And this is exactly what datacenter architects are craving!

Back to the datacenter

Unlike in the past, all the components needed to build a successful ARM-based datacenter ecosystem are now in place. ARM CPUs don’t match x86 CPUs in per-core performance, but it is also true that many applications and workloads run in a massively parallel fashion, and container-based development will reinforce this trend further. And, at the end of the day, for many workloads compute power density is becoming more important than single-core performance.
Other aspects include:

  • software, which is much better optimized than in the past,
  • 64-bit ARM CPUs, which are much more mature now,
  • automation and orchestration tools, which are now ready to handle hundreds of thousands of nodes in a single infrastructure.

Today ARM CPUs are relegated to small appliances, or serve as components of larger x86-based systems, but this could change pretty soon. I want to mention Kaleao here, a startup working on an interesting ARM-based HCI solution. This is just one example; there are many others working on ARM-based solutions for the datacenter now.

Closing the circle

ARM has potential in the datacenter, but we’ve been saying that for years now, and reality has always shown the contrary. This time around things could be different: the stars are all aligned, and if it doesn’t start happening now, I think it will only get harder in the future.
It’s also interesting to note that there is a lot of stirring around compute power when it comes to large-scale datacenters. Google designing its own specialized chips for AI, alternative CPUs and GPUs for HPC-like applications in the cloud, and quantum computing are just a few examples… ARM is one of several options on the table for building next-gen datacenters.
My last note goes to Intel, which has demonstrated multiple times that it is capable of reacting and innovating when the market changes. Its CPUs are very powerful and the instruction set has improved generation after generation. Are power consumption and density at the core of its current designs? Definitely not, and its chips don’t look like the best CPUs for future cloud applications… but who knows what’s up their sleeve!

Originally posted on Juku.it

The inefficient efficiency of Hyperconvergence (and other alternatives)

Last week I was on a call with an executive of a large HCI vendor. He asked me what I actually thought about hyperconvergence and whether, from my point of view, it would become dominant in all types of enterprises… my short answer was “yes and no”, and here is why.
When you talk with end users, the first reason they give for choosing HCI is its simplicity (which also means lower TCO, by the way). Generally speaking, HCI solutions are very good at serving “average” workloads, but when the going gets tough, the tough get going… and hyperconvergence as we know it is no longer enough.

When “good enough” is more than enough

Most modern HCI solutions scale pretty linearly, up to a point. By adding nodes to the cluster, you get more available resources almost immediately. They come in hybrid or all-flash configurations, covering a great number of different workloads, and internal compute resources are used to run guest VMs as well as the distributed storage layer. You can’t ask for too much: latency consistency and IOPS are not always top-notch, but they are good enough to satisfy the needs of end users.
The beauty of HCI lies in the fact that the sysadmin can manage all the resources in the cluster from a single interface (usually VMware vCenter), and most of the painful tasks we usually find in traditional storage are simplified or simply non-existent. In fact, a good VMware sysadmin can easily become a jack of all trades and manage the whole infrastructure without too much effort.
When the infrastructure is small or highly virtualized, hyperconvergence seems very efficient. Not because of its actual efficiency, but because it looks efficient and quite predictable.

When “good enough” is not enough

The problem today, and probably in the foreseeable future, is that you can’t ask too much from a general purpose hyperconverged infrastructure. The perfect HCI cannot run all types of workloads while delivering high performance, large capacity and decent TCO.
HCI effectiveness strictly depends on the type of organization, its size, and the kind of data and workloads managed. It can be very high in small organizations but decreases quite rapidly in larger ones, or when workloads, individually or in aggregate, have very specific characteristics that stress one particular resource more than the others.
In some cases the solution comes from specialized HCI infrastructures. For example, and this is somewhat of a stretch, you can think of a Hadoop cluster as an HCI or, even better, of solutions like HDS HSP (which adds OpenStack-based VM management to its specialized file system and is packaged as a single scale-out appliance).
Another interesting trend, especially when a lot of data is involved, is to leverage smarter storage solutions. Several startups are now working on AWS-Lambda-like functions applied to large storage repositories (usually object storage; e.g. OpenIO, NooBaa, Coho Data and, of course, Amazon AWS and Microsoft Azure), and others are embedding database engines and interfaces into the storage system (e.g. Iguaz.io). There are plenty of use cases, especially when you think about Big Data analytics or IoT. The idea is to offload specific compute tasks to a specialized data-centric infrastructure.
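To make the offload pattern concrete, here is a minimal sketch in the AWS Lambda plus S3 flavour of the idea: a small handler fired by an “object created” event that extracts metadata right where the data lives, instead of shipping the object to a separate compute cluster. The bucket layout, key names and the metadata record it writes back are hypothetical, not taken from any of the products mentioned above.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Hypothetical AWS Lambda handler fired by an S3 'object created' event.

    Runs a small compute task (extracting basic metadata) next to the data,
    which is the data-centric offload pattern described above.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    obj = s3.get_object(Bucket=bucket, Key=key)
    metadata = {
        "key": key,
        "size_bytes": obj["ContentLength"],
        "content_type": obj.get("ContentType", "unknown"),
    }

    # Store the derived record alongside the original object.
    s3.put_object(
        Bucket=bucket,
        Key=f"metadata/{key}.json",
        Body=json.dumps(metadata),
        ContentType="application/json",
    )
    return metadata
```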
In other circumstances, when consistent low latency is the most important characteristic of the storage system, in-memory storage and I/O parallelization become more and more interesting. In this space several companies are working on hardware or software products that leverage large RAM configurations and modern CPUs to achieve unprecedented results. Examples include DataCore with its Parallel I/O technology, Diablo Technologies with its Memory1 DIMMs, and Plexistor, as well as many others. In this scenario the goal is to bring data closer to the CPU and the application, to reduce latency and optimize network and storage communication.

Closing the circle

At the end of the day, Hyperconvergence is still a great solution for all those traditional workloads which don’t need large capacity storage or low latency. In other cases it becomes less effective and TCO increases rather quickly.
I don’t have exact numbers here, but it’s easy to see that in small enterprises (and highly virtualized environments) HCI is the way to go and it can potentially cover almost 100% of all needs. At the same time, with highly specialized applications accessing large amounts of data, such as for Big Data Analytics, we will need different architectures designed with high efficiency in mind.
HCI vendors are quite aware of this situation, and I wouldn’t be surprised to see vendors like Nutanix leveraging the technology they acquired from PernixData to improve their products (or build more specialized solutions) and cover a larger number of workloads over time.

Originally posted on Juku.it

Tiger (er, Shark) of the Month: Digital Ocean Makes Getting to Cloud Easy

 
Whilst at GitHub Universe last month, on my way to learn more about the conference host’s new hardware two-factor security initiative for its developers, I was sidetracked by the sight of a group of developers crowded around a small kiosk, each holding a blue smiling toy shark. Curious, I stopped by to chat with the kiosk owners, the crew of Digital Ocean, this month’s featured cloud computing “tiger.”
Why is this particular cloud provider a tiger/shark? Simply put, because they make getting onto the cloud easy for developers at a price point that won’t cause any nightmares as the customer company scales up. But it’s not as simple as calling Digital Ocean an “AWS lite” either because at the moment it’s a very different offering. And particularly for companies that were born pre-cloud, or for the largest unicorns like Uber, DO does not have all the features and support that you would need — at least not yet.
From 0 to 230 Countries in Three Years
What Digital Ocean does offer to new companies and other cloud-first entities is a facilitated and positive user experience. And developers love them — 700,000+ of them, representing 8+ million cloud servers. From a 2012 $3.2 million seed round to a March 2014 $37.2 million Series A to its most recent Series B round in July 2015 for $83 million (Access Industries, Andreessen Horowitz, others), the company has found exponential success with a customer-first approach — delivering a streamlined UX with a straightforward, transparent, no-B.S., no-hard-upsell attitude.
DigitalOcean currently reaches 230 countries and territories. After its Series A the company added datacenters in Singapore, London and Frankfurt. Another facility was added in Toronto after its Series B, and the company expects to onboard India and South America in 2016.
Starter Web Services 
Digital Ocean has established itself in the web services arena primarily by capturing the entry market. If you look at the three aspects of being in the cloud — computing, networking and storage — DO really only serves the first leg of the stool. This means the company can only handle small-company or early-stage requirements — non-dynamic content web pages, etc. So a chunk of the developers currently using its services eventually outgrow them. *But* flush with cash and building momentum, the company launched its first networking service in late October — floating IPs, which solve the problem of reassigning IPs to any droplet in the same datacenter — and may be able to offer a comprehensive cloud solution as early as next year.
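As a side note, that reassignment is a single API call. Below is a minimal sketch against DigitalOcean’s v2 REST API as I understand it from the public floating-IP documentation; the DO_API_TOKEN environment variable, the IP address and the droplet ID are placeholders, not real values.

```python
import os

import requests

API = "https://api.digitalocean.com/v2"
# Placeholder: a personal access token is expected in this environment variable.
HEADERS = {
    "Authorization": f"Bearer {os.environ['DO_API_TOKEN']}",
    "Content-Type": "application/json",
}


def assign_floating_ip(floating_ip: str, droplet_id: int) -> dict:
    """Point an existing floating IP at another droplet in the same datacenter."""
    resp = requests.post(
        f"{API}/floating_ips/{floating_ip}/actions",
        headers=HEADERS,
        json={"type": "assign", "droplet_id": droplet_id},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Example: move the public entry point from a failed droplet to a standby one.
    print(assign_floating_ip("203.0.113.10", 12345678))
```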
The Hidden Barrier to Cloud is Actually HR
What sets Digital Ocean’s products apart from the rest is that you don’t necessarily need a senior engineer to get your company’s web presence set up. So DO is not only cheaper and easier to use, but a platform that gets you to launch faster. This gives the business some breathing room as it develops, expanding the base of potential staff who can handle the website. This has made DO’s Droplet a popular service not just for newcos, but also for discrete web projects and microsites like the launch of Beyoncé’s secret album and Universe.com (owned by Ticketmaster).
Customer Service Your Way (It’s All About the Love) 
DO’s UX goes far beyond the product UI. As strange as it sounds coming from a web infrastructure company, one of the company’s core values is “Love”, and it is practiced throughout the company. I would characterize this love as a “passion for helping others” and a “joie de vivre” that infuses the organization and is transferred to its customers. Duly noted that their mascot shark is a smiling, happy one.

How exactly does this Love manifest itself in the business? Zachary Bouzan-Kaloustian, Digital Ocean’s Director of Support describes its IaaS in this way: “Our entire platform is self-managed, which means that the customer is responsible for what runs on their Droplets. Our Platform Support Specialists provide free support if something’s not working with the infrastructure. One way that we demonstrate our core value of love is to ensure we reply quickly, and our 30-day average response time for 1,000+ tickets / day is under 30 minutes! Of course, our 24/7 team doesn’t stop there. We often do extensive troubleshooting with a customer to diagnose the issue, even if it involves parsing server logs. This involves extensive experience, and great relationship skills as we don’t have access to our customer’s Droplets for security reasons.”

But is this love scalable? Maybe not, but the desire for love amongst developers (and all of us) is certainly strong, so no doubt there is no shortage of demand for DO’s particular brand of customer relations.

Building the Next Generation Infrastructure 
By getting in with developers early, Digital Ocean has set itself up to take advantage of the tipping point of the Internet of Everything — when not only all major services but customer adoption for them reaches critical mass worldwide — likely well within the next 5 years. While newcos are signing up with Digital Ocean today, the company is fortifying and expanding its technical and services staff — growing from 150 to 200 employees in the past quarter alone.
And the big fish are taking notice: Google, Microsoft and Amazon have sliced their prices 3x since Digital Ocean launched, and prices continue to drop. So, increasingly, the companies will begin to compete on volume — of customers and of services used.
Fast forward 5 years and DO will have all the pieces of the cloud stool well established as well as worldwide presence. If DO can maintain its vision of making web services simple to consume, and successfully build out its offerings so that it can scale with its customers, the company is well positioned to become the go-to web services company for the post-millennial generation. Considering that there are some 20-30 million potential developer customers out there — it wouldn’t surprise me to see Digital Ocean as the most distributed — if not the biggest — and certainly the most beloved fish in the sea by 2020.
**This post was updated at 1:38pm on November 11, 2015 to reflect factual corrections. Access Industries, not Accel Partners, is a lead investor in Digital Ocean. 

Dropbox Paper is a Wolf in Sheep’s Clothing

Last week, I wrote about the commoditization of the enterprise file sharing market and how pure play vendors are being forced to evolve their offerings to stay alive. My post focused on Hightail (originally YouSendIt) and its announcement of Spaces – a specialized file sharing, annotating and publishing offering for creative professionals.
Dropbox also made a product announcement last week, albeit quietly. The company has expanded beta testing of Paper, a new offering that was first released in a highly limited beta in March under the name Notes. Like Hightail’s new offering, Dropbox’s illustrates how these companies are responding to the functional parity vendors have achieved in basic file sharing and to its rapid downward price movement.

Yet Another Collaborative Authoring Tool?

Most commentators, including Gigaom’s Nathaniel Mott in his article from last week, described Paper as “a collaborative writing tool”. They compared it to Google Docs, Microsoft Office (especially its Word and OneNote components) and startup Quip. For sure, Paper has similar functionality to those products, and it allows people to write and edit documents together in real-time. However, I don’t believe that is the main point of Dropbox’s beta product. Instead, Paper is intended to be used as a lightweight case management tool.
Case Management is a discipline that brings resources, including relevant content, related to a single instance of a business process or an initiative into a common place – the case folder. While many think of Case Management as a digital technology, its principles were established in business activities that were wholly paper-based.
Think of an insurance claim years ago, where a customer filled out a paper claim form, and it  was then routed throughout the insurance company in a paper folder. As the process continued, additional paper documents, perhaps even printed photographs, were added to the folder. The last documents to go into the folder were the final claim decision letter to the customer and a copy of the check, if a payment was made on the claim.
Today, that same insurance claim process is likely to generate and use a mix of paper-based and electronic documents, although insurance companies are slowly moving as much of the process online as possible. However, the concept of organizing information related to the claim into a single folder remains, although the folder is now likely to be an electronic artifact, not a paper one.

A Wolf in Sheep’s Clothing

Take another look at Dropbox’s beta Paper. Do you see it? Paper is a single point of organization for new content, files stored in Dropbox (and other repositories), existing Web content and discussions on all of those things. It’s a meta-document that acts like a case folder.
Paper enables lightweight case management, not the industrial-strength, production kind needed to handle high-volume, transactional business processes like insurance claims. Paper is case management for small teams, whose work might follow a pattern over time, but does not conform to a well-defined, repeatable process.
Working on a new software product at an early-stage startup with only a few coworkers? Start a new document in Paper, then add the functional and technical requirements, business projections, marketing assets, sales collateral, even the code for the software. Everything relevant to the product is in one place, where it can be shared, viewed, commented on, discussed, edited and used for decision making. Just like a case folder in Case Management.
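Just to make the case-folder idea concrete, here is a toy data-model sketch of that startup scenario. The class and field names are mine, for illustration only; they are not anything Dropbox exposes.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class CaseItem:
    """One artifact attached to the case: a spec, a mockup, a code link, a comment..."""
    kind: str       # e.g. "requirements", "marketing_asset", "code", "comment"
    title: str
    reference: str  # URL or repository path of the underlying content
    added_at: datetime = field(default_factory=datetime.utcnow)


@dataclass
class CaseFolder:
    """A lightweight case folder: one place aggregating everything about one initiative."""
    subject: str
    items: List[CaseItem] = field(default_factory=list)

    def add(self, kind: str, title: str, reference: str) -> None:
        self.items.append(CaseItem(kind, title, reference))

    def by_kind(self, kind: str) -> List[CaseItem]:
        return [item for item in self.items if item.kind == kind]


# Usage: the early-stage product example from the text.
product = CaseFolder("New software product")
product.add("requirements", "Functional spec v1", "dropbox://specs/functional-v1.docx")
product.add("code", "Prototype repository", "https://example.com/prototype")
print([item.title for item in product.by_kind("requirements")])
```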

A New Way of Working

Still not convinced? Dropbox Product Manager Matteus Pan recently said:
“Work today is really fragmented…teams have really wanted a single surface to bring all of [their] ideas into a single place.” “Creation and collaboration are only half the problem,” he said. “The other half is how information is organized and retrieved across an entire company.”
That sounds like case management to me, but not the old-school type that you are likely more familiar with. Instead, Paper reflects the newer principles of Adaptive Case Management.
Adaptive Case Management (ACM) is a newer technology set that has been evolving from Production Case Management (PCM) over the last few years. ACM helps people deal with volatile processes by including collaboration tools alongside the workflow tools that are the backbone of PCM.
Dropbox Paper may be viewed as an extreme example of ACM, one which relies completely on the manual control of work rather than automating parts of it. In that regard, Paper takes its cues from enterprise social software, which is also designed to enable human coordination of emergent work, rather than the automation of stable processes. As Paper is more widely used in the current beta and beyond, it will be interesting to see if its adoption is stunted by the same obstacles that have limited the wholesale changes to established ways of working that social software requires.

Crashing Waves

I have not yet seen a demo of Dropbox Paper, but the screenshots, textual descriptions and comments from Dropbox employees that I have absorbed are enough to reveal that the product is more than just another collaborative authoring tool. If I were asked to compare Paper to another existing or previous tool, I would say that it reminds me of Google Wave, not Docs or Microsoft Office. Like Wave, Paper is a blank canvas on which you can collaborate with team members and work with multiple content types related to a single idea or business process in one place.
Google Wave was a powerful, but unintuitive tool that failed to get market traction. Will Paper suffer the same fate? Perhaps, but Dropbox hopes that the world is now ready for this new way to work. In fact, Dropbox is, in some regards, staking its continued existence on just that, as it tries to differentiate itself from other purveyors of commoditized file sharing services.

Brocade Pokes Cisco in the Eye, Switches for IBM

nullCisco Systems’ (s CSCO) decision to launch servers targeting the data center market has turned its allies against the company. To date, the biggest beneficiary of the Cisco-server blowback has been Juniper Networks (s JNPR) — now Brocade Communications (s BRCD) is now moving to take advantage of it as well. IBM, which already sells certain Brocade storage networking products, will now rebrand and sell Brocade switches under the IBM brand. Brocade, when it acquired Foundry Networks, gained a portfolio of IP switches; the IBM deal includes products that range from 10 GB switches to top-of-the-rack switching devices. Read More about Brocade Pokes Cisco in the Eye, Switches for IBM