Report: Total experience quality: integrating performance, usability, and application design

Our library of 1700 research reports is available only to our subscribers, but we occasionally release a report for our broader audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports, and video coverage from our 2016 and 2017 events.
Total experience quality: integrating performance, usability, and application design by Rich Morrow:
As the number of consumption models in the digital delivery landscape has grown, so has the burden on application designers. From desktop to web to phone to tablet and beyond, many designers create an entirely new user experience (UX) for each target platform, but often in a vacuum. Despite a growing acceptance of responsive design principles and the improvement of cross-platform tools, designers frequently target one primary platform.
One result is that decisions made during design can be barriers to performance down the road. When new platforms launch, performance is typically an afterthought to be optimized later. This is short-sighted: Lack of performance out of the gate can quickly doom a web or mobile-based app (both referred to here as apps). It is imperative that performance considerations play a front-seat role in the entire UX equation.
User experience is heavily dependent on underlying technical structure, and those structural choices are often dictated by usability. To navigate the possibilities of native, hybrid, and responsive designs and the myriad backend services that support them, UX designers must have an intimate knowledge of the limits to which they can push app performance. This report will help designers make educated, value-driven decisions about app experiences.
To read the full report, click here.

The enterprise view of cloud, specifically Private Cloud, is confusing

Enterprise organizations are actively looking for ways to leverage cloud computing. Cloud presents the single largest opportunity for CIOs and the organizations they lead. The move to cloud is often part of a larger CIO strategy of shifting to a consumption-first paradigm. As the CIO charts a path along the cloud spectrum, private cloud provides a significant opportunity.

Adoption of private cloud infrastructure is anemic at best. Look deeper into the problem and the reason becomes painfully clear: the marketplace is heavily fractured and quite confusing, even to the sophisticated enterprise buyer. After reading this post, one could question the feasibility of private cloud. The purpose of this post is not to present a case for avoiding private cloud, but rather to expose the challenges to adoption and build awareness toward solving them.

Problem statement

Most enterprises take a varied approach to cloud adoption. Generally, there are two categories of applications and services:

  1. Existing enterprise applications: These may include legacy and custom applications. The vast majority were never designed for virtualization, let alone cloud. Even where there is interest in moving to cloud, the cost and risk of moving (read: rewriting) these applications are extreme.
  2. Greenfield development: New applications or those modified to support cloud-based architectures. Within the enterprise, greenfield development represents a small percentage compared with existing applications. On the other hand, web-scale and startup organizations are able to leverage almost 100% greenfield development.

 

Private Cloud Market Mismatch


The disconnect is that most cloud solutions in the market today suit greenfield development, not existing enterprise applications. Ironically, most of the marketing buzz today is geared toward solutions that serve greenfield development, leaving existing enterprise applications in the dust.

Driving focus to private cloud

The average enterprise organization faces a cloud conundrum. Cloud, theoretically, is a major opportunity for enterprise applications. Yet private cloud solutions are a mismatched potpourri of offerings that are difficult to compare. In addition, private cloud may take different forms.

 

Private Cloud Models

 

Keep in mind that within the overall cloud spectrum, this is only private cloud. At the edges of private cloud, colocation and public cloud present a whole new set of criteria to consider.

Within the private cloud models, it would be easy if the only criteria were compute, storage and network requirements. The reality is that a myriad of other factors are the true differentiators.

The hypervisor and OpenStack phenomenon

The de facto hypervisor in enterprises today is VMware, but not every provider supports it. Private cloud providers may support VMware along with other hypervisors such as Hyper-V, KVM and Xen. Yes, it is possible to move enterprise workloads from one hypervisor to another. That is not the problem. The problem is the amount of work required to address the intricacies of the existing environment. Unwinding that ball of yarn is not a trivial task and presents yet another hurdle. On the flip side, there are advantages to leveraging other hypervisors plus OpenStack.

Looking beyond the surface of selection criteria

There are about a dozen different criteria that often show up when evaluating providers. Of those, hypervisor, architecture, location, ecosystem and pricing models are just some of the top-line criteria.

In order to truly evaluate providers, one must delve further into the details to understand the nuances of each component. It is those details that can make the difference between success and failure, and each nuance is unique to the specific provider. As someone recently stated, “Each provider is like a snowflake.” No two are alike.

The large company problem

Compounding the problem is a wide field of providers trying to capture a slice of the overall pie. Even large, incumbent companies are failing miserably to deliver private cloud solutions. There are a number of reasons companies are failing.

Time to go!

For all of these reasons, one may choose to hold off on considering private cloud solutions. That would be a mistake. Sure, there are a number of challenges to adopting private cloud solutions today. Yes, the marketplace is highly fractured and confusing. However, with work comes reward.

The more enterprise applications and services move to private cloud solutions, the more opportunities open up for the CIO. The move to private cloud does not preclude alternatives such as public cloud and SaaS-based solutions. It does, however, provide greater agility and focus for the IT organization compared with traditional infrastructure solutions.

New Relic boosts revenue growth in first post-IPO earnings

New Relic’s first earnings report since going public last December seemed to please investors, as the application-performance and analytics company took in $29 million in revenue in what it considers its fiscal third quarter of 2015. That’s a 14 percent quarter-over-quarter increase from the second quarter of 2015 and a 69 percent year-over-year increase from the third quarter of 2014.

The San Francisco-based company also said it now has 11,270 paid business accounts as of December 31, 2014, which is up from the 10,590 paid business accounts it had as of September 30, 2014, as disclosed in an SEC filing.

New Relic also signed on some new customers during the quarter, including Capital One Services, Hootsuite Media and Walgreens Boots Alliance.

Seventy-five percent of New Relic’s customer base is made up of small to medium-size businesses, with the other 25 percent coming from companies with over 100 employees. However, those bigger clients account for roughly half of the company’s revenue, said New Relic CFO Mark Sachleben in a conference call.

New Relic sees its recently launched Insights real-time analytics product as its main differentiator from competitors. It is also part of the company’s “land and expand” strategy, which involves selling one product line to a client and then persuading it to purchase more, explained Sachleben.

The company has also seen “quite a bit of success” in migrating clients from monthly billing cycles to up-front annual payments, which is something larger enterprises are more prone to do, said Sachleben.

In an interview with Gigaom after the conference call, New Relic CEO Lew Cirne wouldn’t say which of its many product lines has been the fastest growing in the past quarter, but he did say that the company is looking to boost staff in Dublin and London as it attempts to grow its market share in those regions. Cirne said 34 percent of New Relic’s business comes from outside the U.S., but the company doesn’t currently have a large global salesforce. So far, the plans are to expand outside the U.S. starting with Europe, but Cirne said the company has “nothing yet to share beyond those markets” at this time.

Here are some of the numbers from the company’s earnings report:

  • Revenue for the third quarter of 2015 was $29 million, which is a 14 percent increase from the second quarter of 2015 and a 69 percent increase from the third quarter in 2014.
  • New Relic took $15.6 million in GAAP loss from operations for the third quarter of 2015, which was an increase from the $11.7 million GAAP loss from operations it took in the third quarter of 2014.
  • The company ended up raising $119.9 million in net proceeds during its IPO.
  • For the fourth quarter of fiscal 2015, New Relic is projecting revenue between $30.0 million and $30.5 million and expects a non-GAAP loss from operations ranging between $11.0 million and $12.0 million.

Time’s up! Changing core IT principles

There is a theme gaining ground within IT organizations, and a number of examples support it. This theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:

Just because you (IT) can do something does not mean you should.

Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. But times have changed, and so must the principles that govern the IT organization.

Apply it to the customization of applications and you get this:

Just because IT can customize applications to the nth degree does not mean they necessarily should.

A great example of this is the configuration and customization of applications. Just because IT could customize the heck out of an application, should it have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. The reality, however, is that it comes at a cost, sometimes a very significant and real one.

Making it real

Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but because they could be. Fast-forward a year or two. Now it is time to upgrade XYZ. The costs are significantly higher because of the customizations, requiring more planning, more testing and more work all around. Were those costs justified by the benefit of the customizations? Typically not.

Now it is time to evaluate alternatives to XYZ. ACME builds a requirements document based on XYZ (including the myriad customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting the alternatives.

These examples are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson here is not that customizations should be avoided. The lesson is to limit customizations to those that are necessary and provide significant value.

And the lesson goes beyond configuration to understanding IT’s true value based on what it should and should not do.

Leveraging alternative approaches

Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?

Every good strategy starts with a firm understanding of the ecosystem of the business: how the company operates and its interactions. A good target that many are finding success with sits furthest from core company operations and is therefore the hardest to explain in true business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.

Preparing for the future today

Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers on a mobile platform in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many of the core systems and infrastructure are nowhere near ready to take on the changing requirements.

Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Is a DevOps methodology engaged? What about container-based architectures?

These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.

Google is adding a private registry to its Docker arsenal

Google, continuing its investment in containers and cluster management, is swiftly building a private Docker registry offering for its customers. Given the importance of security and compliance, enterprises have been reluctant to use publicly accessible Docker repositories. Private registries enable secure and rapid storage and retrieval of Docker images. We will be testing this out in the coming weeks.

Google was one of the first public cloud providers to offer container hosting and cluster management capabilities. It started with Container Optimized VMs, followed by Managed VMs, Kubernetes and finally Google Container Engine (GKE). Despite these improvements, customers still had to store Docker images on the public Docker Hub or run a private registry in one of their own VMs.

This process will be eliminated when Google unveils Google Container Registry, hosted on Google Cloud Platform. DevOps teams will be able to pull and push images from the registry on the same infrastructure. Google Container Registry is integrated with Google Accounts. It exposes an HTTP endpoint at gcr.io that is accessible from within its cloud platform or from on-premises infrastructure. Container images are stored in a Google Cloud Storage bucket: when an image is pushed for the first time, a dedicated bucket is created within the same Google account to store the image.

Owners and admins of the project can pull and push images, while users with project viewer permission can only pull them. The Google Cloud Platform command-line utility, gcutil, is updated to support pull and push operations. Images stored in Google Container Registry can be used from Container Optimized VMs, Managed VMs, Kubernetes, and Google Container Engine pods.
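To make that workflow concrete, here is a minimal sketch of how a team might push a locally built image to the registry and pull it back. It assumes the Docker CLI is installed and already authenticated against gcr.io (for example, via the Google Cloud SDK); the project ID "my-project" and image name "myapp" are hypothetical placeholders rather than names from Google's documentation.

```python
# Minimal sketch: re-tag a local image under gcr.io and push it.
# Assumes Docker is installed and gcr.io credentials are already configured
# (e.g., through the Google Cloud SDK). All names here are hypothetical.
import subprocess

PROJECT_ID = "my-project"          # hypothetical Cloud Platform project
LOCAL_IMAGE = "myapp:latest"       # image already built on this machine
REMOTE_IMAGE = f"gcr.io/{PROJECT_ID}/myapp:latest"

def push_to_registry():
    # Tag the local image into the gcr.io/<project> namespace...
    subprocess.run(["docker", "tag", LOCAL_IMAGE, REMOTE_IMAGE], check=True)
    # ...then push it; on first push the registry stores the layers in a
    # Cloud Storage bucket created under the same Google account.
    subprocess.run(["docker", "push", REMOTE_IMAGE], check=True)

def pull_from_registry():
    # Owners and admins can push and pull; project viewers can only pull.
    subprocess.run(["docker", "pull", REMOTE_IMAGE], check=True)

if __name__ == "__main__":
    push_to_registry()
```

The same remote name can then be referenced from Container Optimized VMs, Managed VMs, Kubernetes manifests, or GKE pods.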

Google Container Registry - Source: Gigaom Research

Other vendors serious about Docker and containers are also investing in private registries. CoreOS acquired Quay.io, a hosted private Docker repository company, and Tutum, a Docker hosting platform, also offers a private registry. Docker, Inc. acquired Koality to augment its enterprise hub offering; Koality’s speciality was continuous integration and deployment of containerized applications. By integrating CI/CD with its native registry, Docker, Inc. can attract enterprise customers.

Docker Hub Enterprise (DHE) was announced at DockerCon Europe 2014. DHE delivers workflow capabilities for developers and sysadmins managing a dynamic lifecycle behind the enterprise firewall. DHE is a drop-in solution that allows enterprise developers to focus on creating multi-container distributed applications behind-the-firewall. DHE’s first release comes with an installer, GUI configuration, resumable push/pull of images, flexible storage capability with support for local filesystem, in-memory and Amazon S3.

AWS, IBM, and Microsoft host DHE on their respective public cloud offerings. IBM pledged to integrate DHE with SoftLayer and Bluemix, while Microsoft will host DHE natively on Azure. AWS will offer DHE as an appliance through its Test Drive program and may eventually list it in AWS Marketplace. While this seems like just another partnership announcement, there is more to it: Google is conspicuously missing from the list. Google had clear plans to build a complete container platform with a private registry as the cornerstone of its strategy, which led it to opt out of the DHE partnership.

The Gigaom Research Perspective

It is clear that Google has a dual strategy when it comes to containers.

1) Embrace Docker – Google has been running containers for a long time. Instead of exposing its internal toolchain for managing the lifecycle of containers, it decided to support Docker, which has a vibrant community and ecosystem of developers. It then open sourced Kubernetes, a cluster management and orchestration tool that has enjoyed huge popularity among Docker users. Meanwhile, Google started adding native Docker support to App Engine and Compute Engine, making it easy for developers to launch and manage containers on its public cloud. Google wants its cloud platform to be the best public cloud for running Docker containers.

2) Monetize Container Building Blocks – Docker is the most successful open source project after Linux. There are over a hundred startups building tools and components around Docker, but it is still not clear how these startups will eventually make money. Docker, Inc. is busy assembling all the key building blocks to make its platform complete for enterprise customers. With early investments made in containers, Google doesn’t want to miss the opportunity to commercialise its intellectual property. While Kubernetes is open source and available on a variety of cloud platforms, Google Container Engine abstracts it further, delivering a simplified experience of deploying and managing clusters. When developers use GKE, they indirectly consume compute, storage, and database services. The container registry is an important step toward supporting technologies such as Rocket and LXD on its platform, and it will certainly impact Docker, Inc. and its ecosystem.

Changing the CIO conversation from technology to business

For many years, traditional IT thinking has served the IT function well. Companies have prospered from both the technological advances and consequent business improvements. Historically, the conversation typically centered on some form of technology. It could have been about infrastructure (data centers, servers, storage, network) or applications (language, platform, architectures) or both.

Today, we are seeing a marked shift in the conversations happening with the CIO. Instead of centering on the latest bells and whistles, they increasingly involve business enablement and growth. The changes did not happen overnight. For any IT leader, it takes time to evolve the conversation. Not only does the IT leader need to evolve, but so do their team and fellow business leaders. Almost two years ago, I wrote about the evolution of these relationships in Transforming IT Requires a Three-Legged Race.

Starting the journey

For the vast majority of IT leaders, the process is not an end state but a journey of evolution that has yet to start in earnest. For many I have spoken with, there is interest, but not a clear path to take.

This is where an outside perspective is helpful. It may come from mentors, advisors or peers. It needs to come from someone who is trusted and objective. This is key, as the change itself will touch the ethos of the IT leader.

The assessment

Taking a holistic assessment of the situation is critical here. It requires a solid review of IT leadership, organizational ability, process maturity and the current state of technology. The context for the assessment ties back to the core business strategy and objectives.

Specific areas of change are those that are clearly not strategic or differentiating in support of the company’s strategy and objectives. A significant challenge for IT organizations will be: Just because you can manage it does not mean you should manage it.

Quite often, IT organizations get too far into the weeds and lose sight of the bigger picture. To fellow business leaders, this is often perceived as a disconnect between IT and Line of Business (LoB) leaders. It alienates IT leaders and makes it harder to foster stronger bonds between them.

Never lose sight of the business

It is no longer adequate for the CIO to be the only IT leader familiar with the company’s strategy and objectives. Any IT leader today needs to fully understand the ecosystem of how the company makes and spends money. Without this clarity, the leader lacks the context in which to make healthy, business-centric decisions.

The converse is an IT leader who is well versed in the business perspective outlined above. This IT leader will gain greater respect among business colleagues and will have the context to understand which decisions are most important.

Kicking technology to the curb

So, is IT really getting out of the technology business? No! Rather, think of it as an opportunity to focus: on what is important and what is not, and on what is strategic for the company and what is not. Is moving to a cloud-centric model the most important thing right now? What about shifting to a container-based application architecture? Maybe. Maybe not. There is plenty of ripe, low-hanging fruit to be picked. And just as with fruit, the degree of ripeness changes over time. You do not want to pick spoiled fruit, nor do you want to pick it too soon.

One area of great interest these days is in the data center. I wrote about this in detail with CIOs are getting out of the Data Center business. It is not the only area, but it is one of many areas to start evaluating.

The connection between technology divestiture and business

By assessing which areas are not strategic and divesting them, IT gains greater focus and the ability to apply resources to more strategic functions. Imagine if those resources were redeployed to provide greater value to the company’s strategy and business objectives. Divesting non-strategic areas frees IT to move into other areas and conversations.

By changing the model and using the business as the context, IT changes the tone, tenor and impact it can have on a company. The changes will not happen overnight. Moving from technology discussions to business discussions takes vision, perseverance, and a strong internal drive toward change.

The upside is a change in culture that is both invigorating and liberating. It is also a model that supports the dynamic changes required for today’s leading organizations.

With $10M, HashiCorp launches its first commercial product

Building applications in today’s world involves a lot of work assembling, managing and monitoring all of the various components that need to come together across myriad environments. To help with this chore, HashiCorp is rolling out an application development hub called Atlas, its first commercial product, based on its various open-source technologies. The startup is also announcing a $10 million series A funding round from Mayfield Fund, GGV Capital and True Ventures (see disclosure).

HashiCorp’s biggest claim to fame is its open-source Vagrant tool that helps developers quickly spin up virtual environments so they can build and test their software projects before they see the light of day.

Over time, the startup developed other open-source tools to help coders with all aspects of the software-development process: Serf, which handles cluster management and makes sure those developer environments don’t fail, and Consul, which helps users discover and configure all the services running in their coupled-together applications.
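As a rough illustration of the kind of service discovery Consul provides, here is a minimal sketch that registers a service with a local Consul agent over its HTTP API and then looks it up. It assumes an agent is running on its default port 8500; the "web" service name, port, and health-check URL are hypothetical examples, not values from HashiCorp's documentation.

```python
# Minimal sketch: register a service with a local Consul agent, then query it.
# Assumes a Consul agent is listening on localhost:8500; the "web" service,
# port, and health check below are hypothetical examples.
import json
import urllib.request

CONSUL = "http://127.0.0.1:8500"

def register_service():
    payload = {
        "Name": "web",
        "Port": 8080,
        "Check": {"HTTP": "http://127.0.0.1:8080/health", "Interval": "10s"},
    }
    req = urllib.request.Request(
        f"{CONSUL}/v1/agent/service/register",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

def discover_service(name="web"):
    # Ask the catalog which nodes are advertising the service.
    with urllib.request.urlopen(f"{CONSUL}/v1/catalog/service/{name}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    register_service()
    print(discover_service())
```

Atlas layers a hosted dashboard and workflow on top of primitives like these, so teams do not have to stitch the individual tools together themselves.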

Atlas diagram

With Atlas, the startup is bundling up all of its open-source software into one package and throwing in a dashboard that will supposedly let coders see how their application is performing in both public and private clouds or hybrid environments.

The Atlas software-as-a-service is now available in beta and will be available to the public in the first quarter of 2015; the company will announce pricing by then and unveil an on-premises version.

Diagram provided by HashiCorp

Disclosure: HashiCorp is backed by True Ventures, a venture capital firm that is an investor in the parent company of Gigaom.

Microsoft starting to lay out the plan for open-source .NET

Microsoft is making good on its plans to open source the .NET framework and has revealed new details about .NET Core, a fork of .NET developed to make the framework more approachable for modern software development, the company explained in a blog post on Thursday. As .NET matured over the years since its inception, coders created many variants of the framework to make sure it could function across numerous devices and environments. The new open-source .NET Core essentially removes the need for multiple versions of .NET by providing “a single code base that can be used to build and support all the platforms, including Windows, Linux and Mac OSX,” the post explained.