Amazon Web Services’ Achilles heel: Complexity

Amazon Web Services leads the league in public cloud services. By a lot. Even after big strides by Microsoft and Google, Gartner analyst Lydia Leong estimated that AWS alone still has five times more IaaS compute capacity than the aggregate total of the other 14 cloud providers Gartner tracked.
That’s not to say AWS is invulnerable. Even devout fans acknowledge that Amazon’s resources are unwieldy to track and monitor. Attendees of Google’s cloud event in March included many AWS users who were enthusiastic about Google’s sustained-usage pricing model, which applies discounts automatically when certain utilization levels are met.
That sure beats having to futz with spreadsheets to track AWS usage, according to the developer sitting next to me at the Google event. He wasn’t ready to jump to Google yet, but he acknowledged that Google Cloud was certainly worth a look.
As one AWS watcher put it: “Amazon wants to sell you the Big Mac, but it charges separately for the bun, the ketchup, the onions, the pickles and the secret sauce. Many people just want to buy the Big Mac, all-inclusive.”
And many developers are seeing the advantage of using more all-inclusive resources like DigitalOcean “droplets” that bundle SSD-backed storage, memory, CPU cores and data transfer for a set price ranging from $5 to $80 per month. On last week’s Structure Show, DigitalOcean CEO Ben Uretsky said DigitalOcean has never had to discount that pricing.

AWS challenge: Making complexity simple

Having said all that, anyone who thinks Amazon is standing still doesn’t know Amazon. The concept of CPU credits, unveiled along with the new t2 “burstable” EC2 instances, shows that it will continue to tweak its services.
In an interview last week, Amazon’s GM of data science Matt Wood would not comment on plans for more automation, but he did note that one design goal of the CPU credits, which show up in the customer’s CloudWatch console, is that they kick in without the user having to think about them.
Per AWS evangelist Jeff Barr’s blog post:

“Your ability to burst is based on the concept of ‘CPU Credits’ that you accumulate during quiet periods and spend when things get busy. You can provision an instance of modest size and cost and still have more than adequate compute power in reserve to handle peak demands for compute power.”
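
Barr’s description is essentially a token bucket: credits accumulate at a fixed baseline rate while the instance is quiet and are spent to burst above that baseline when it gets busy. Here is a minimal sketch of that accounting — the rates and starting balance are illustrative placeholders, not AWS’s actual figures:

```python
# Illustrative token-bucket model of CPU credits. The earn rate, spend
# accounting and starting balance are hypothetical, not official AWS rates.

def simulate(minutes, demand, earn_rate=0.2, start=30.0):
    """demand: per-minute CPU demand as a fraction of one core.
    earn_rate: baseline share of a core; unused baseline banks credits.
    Returns a list of per-minute (delivered CPU, credit balance) pairs."""
    credits = start
    out = []
    for d in demand[:minutes]:
        if d <= earn_rate:
            # Below baseline: run the work, bank the unused share.
            credits += earn_rate - d
            delivered = d
        else:
            # Bursting: each extra core-minute above baseline costs credits.
            need = d - earn_rate
            burst = min(need, credits)
            credits -= burst
            delivered = earn_rate + burst
        out.append((delivered, credits))
    return out
```

Run against a quiet-then-busy demand profile, the balance climbs during the quiet stretch and drains during the spike — the “accumulate during quiet periods and spend when things get busy” behavior Barr describes.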

So Amazon knows it needs to get more automated and simpler, but it has a long way to go. Even the new t2 instances add complexity, according to Brian Adler, principal cloud architect at RightScale, which provides multi-cloud management and monitoring tools.
In a blog post, Adler wrote:

“[A]lthough the new t2.small instances are 40 percent cheaper than the m1.small instances, the m1.small instances include storage, while the new T2 instances do not. In addition, the T2 family operates as a shared CPU model. For example, at baseline, the t2.small instance gives you only 20 percent of a CPU core.”

RightScale’s comparison of small AWS EC2 instances. ECU stands for EC2 compute unit.

And, Adler added:

“While the T2 family has 1 or 2 vCPUs, you are sharing the vCPU so only get a percentage of the vCPU power. AWS also provides an ECU measure (EC2 Compute Unit) that describes the relative measure of processing power for different instance types. However, in the case of T2 instances, the published ECU ratings are ‘Variable’ due to the bursting capabilities, which makes it difficult to compare with other instance types.”
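
Adler’s two caveats, taken together, make the headline discount easy to misread. Under the simplifying assumption that an m1.small delivers one full vCPU of sustained compute (a rough stand-in — real ECU ratings differ), the relative sustained-use cost works out like this:

```python
# Back-of-the-envelope math from Adler's figures, in relative units
# (no real dollar prices; assumes, simplistically, that an m1.small
# provides one full vCPU of sustained compute).
m1_price = 1.0                 # normalized m1.small price
t2_price = 0.6 * m1_price      # t2.small is "40 percent cheaper"
t2_baseline = 0.2              # t2.small baseline: 20% of one core

m1_cost_per_core = m1_price / 1.0
t2_cost_per_core = t2_price / t2_baseline  # sustained load, credits spent

print(round(t2_cost_per_core / m1_cost_per_core, 1))  # prints 3.0
```

So for a workload that needs the CPU flat-out, the 40 percent sticker discount inverts into roughly a 3x premium per sustained core — which is why Adler cautions against comparing T2 instances head-to-head with other instance types on price alone.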

Expect Amazon to roll out more automation in the service of simplicity going forward. That could start as early as this Thursday at the AWS Summit in New York, or if not there, definitely at its annual AWS re:Invent conference in November.