Businesses striving towards digital transformation need to get their applications, infrastructure and cloud initiatives synchronized in order to be successful. TrackVia’s low-code application development platform brings order to the chaos of developing cloud-enabled applications, while also adding in critical mobile device support.
Our library of 1,700 research reports is available only to our subscribers. We occasionally release one for our broader audience to benefit from; this is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
Total experience quality: integrating performance, usability, and application design, by Rich Morrow:
As the number of consumption models in the digital delivery landscape has grown, so has the burden on application designers. From desktop to web to phone to tablet and beyond, many designers create an entirely new user experience (UX) for each target platform — but often in a vacuum. Despite a growing acceptance of responsive design principles and the improvement of cross-platform tools, designers frequently target one primary platform.
One result is that decisions made during design can be barriers to performance down the road. When new platforms launch, performance is typically an afterthought to be optimized later. This is short-sighted: Lack of performance out of the gate can quickly doom a web or mobile-based app (both referred to here as apps). It is imperative that performance considerations play a front-seat role in the entire UX equation.
User experience is heavily dependent on underlying technical structure, and those structural choices are often dictated by usability. To navigate the possibilities of native, hybrid, and responsive designs and the myriad backend services that support them, UX designers must have an intimate knowledge of the limits to which they can push app performance. This report will help designers make educated, value-driven decisions about app experiences.
To read the full report, click here.
Heroku, the Salesforce-owned company that powers the application-development process of hot startups like Lyft and Upworthy, announced a new product line Thursday called Heroku Enterprise. It’s geared for big companies that want to develop the kind of modern applications seen at startups while providing the type of features that many large enterprises want, including security features and access control.
Essentially, the product line claims that large enterprises can now have it both ways: a way to make the type of applications that are typically derived from an agile-development process (with access to trendy technology like containers and new database services) all while being monitored under the iron fist of the enterprise. Kudos to Heroku if it can pull that off.
With Heroku Enterprise, organizations can supposedly now monitor all their developers, applications and resources under one interface. Companies can keep tabs on what applications are in production, which developers are working on an app and how each app is eating up resources, according to a Heroku blog post detailing the announcement.
From the blog post:
[blockquote person=”Heroku” attribution=”Heroku”]Heroku Enterprise introduces a new kind of application-level access control called a privilege. Privileges strike a balance between fine-grained permissions that are too hard to manage and coarse-grained, all-or-nothing flags that won’t do the job. In this initial release, we are introducing three app level privileges in beta: deploy, operate and manage. [/blockquote]
The new product line also comes packed with Heroku Connect, which can link up a company’s Salesforce data to the Heroku platform. [company]Salesforce[/company] said that pricing for Heroku Enterprise will be based on how many resources a company consumes.
Of course, developing the types of applications seen at Lyft and Instacart requires a developer mindset that can contrast with the old waterfall style of development seen at big enterprises, in which releases don’t come as often and the development lifecycle at large is more sequential.
Even with a new product, it’s important for companies to realize that development is not just tool-centric, but also requires a bit of a culture shift.
Enterprise organizations are actively looking for ways to leverage cloud computing. Cloud presents the single largest opportunity for CIOs and the organizations they lead, and the move is often part of a larger strategy of shifting to a consumption-first paradigm. As the CIO charts a path along the cloud spectrum, private cloud provides a significant opportunity.
Adoption of private cloud infrastructure is anemic at best. Look deeper into the problem and the reason becomes painfully clear: the marketplace is heavily fractured and quite confusing, even to the sophisticated enterprise buyer. After reading this post, one could question the feasibility of private cloud. The purpose here is not to present a case for avoiding private cloud, but rather to expose the challenges to adoption and help build awareness toward solving them.
Most enterprises have a varied strategy with cloud adoption. Generally there are two categories of applications and services:
- Existing enterprise applications: These may include legacy and custom applications. The vast majority were never designed for virtualization, let alone cloud. Even if there is an interest in moving to cloud, the cost and risk of moving (read: re-writing) these applications is extreme.
- Greenfield development: New applications or those modified to support cloud-based architectures. Within the enterprise, greenfield development represents a small percentage compared with existing applications. On the other hand, web-scale and startup organizations are able to leverage almost 100% greenfield development.
The disconnect is that most cloud solutions in the market today suit greenfield development, but not existing enterprise applications. Ironically, most of the marketing buzz today is geared toward solutions that serve greenfield development, leaving existing enterprise applications in the dust.
Driving focus to private cloud
The average enterprise organization faces a cloud conundrum. Cloud, theoretically, is a major opportunity for enterprise applications. Yet private cloud solutions are a mismatched potpourri of offerings, which makes them difficult to compare. In addition, private cloud may take different forms.
Keep in mind that within the overall cloud spectrum, this is only private cloud. At the edges of private cloud, colocation and public cloud present a whole new set of criteria to consider.
Within the private cloud models, it would be easy if the only criteria were compute, storage and network requirements. The reality is that a myriad of other factors are the true differentiators.
The hypervisor and OpenStack phenomenon
The de facto hypervisor in enterprises today is VMware, but not every provider supports it. Private cloud providers may support VMware along with other hypervisors such as Hyper-V, KVM and Xen. Yes, it is possible to move enterprise workloads from one hypervisor to another. That is not the problem. The problem is the amount of work required to address the intricacies of the existing environment. Unwinding the ball of yarn is not a trivial task and presents yet another hurdle. On the flip side, there are advantages to leveraging other hypervisors plus OpenStack.
Looking beyond the surface of selection criteria
There are about a dozen different criteria that often show up when evaluating providers. Of those, hypervisor, architecture, location, ecosystem and pricing models are just some of the top-line criteria.
In order to truly evaluate providers, one must delve further into the details to understand the nuances of each component. It is those details that can make the difference between success and failure, and each nuance is unique to the specific provider. As someone recently stated, “Each provider is like a snowflake.” No two are alike.
The large company problem
Compounding the problem is a wide field of providers trying to capture a slice of the overall pie. Even large, incumbent companies are failing miserably to deliver private cloud solutions. There are a number of reasons companies are failing.
Time to go!
With all of these reasons, one may choose to hold off considering private cloud solutions. That would be a mistake. Sure, there are a number of challenges to adopting private cloud solutions today. Yes, the marketplace is highly fractured and confusing. However, with work comes reward.
The more enterprise applications and services move to private cloud solutions, the more opportunities open for the CIO. The move to private cloud does not circumvent alternatives from public cloud and SaaS-based solutions. It does, however, help provide greater agility and focus for the IT organization compared to traditional infrastructure solutions.
The new Node.js Foundation — which will include the founding members of Joyent, [company]IBM[/company], [company]Paypal[/company], [company]Microsoft[/company], Fidelity and The Linux Foundation — is the next logical step after the establishment of the Node.js advisory board in October 2014 and it will help take the load off of Joyent’s plate. Joyent has been the corporate steward of the Node.js project for the last five years, explained Joyent CEO Scott Hammond.
Node.js is typically used to build low-latency applications, which are the type of apps that can gather and exchange data between the server environment and the front-end in real time. [company]PayPal[/company], [company]Dow Jones[/company], [company]Walmart[/company] and [company]LinkedIn[/company] are some of the companies that are public with their use of the framework.
“If you are writing a walkie-talkie app for the mobile phone, you want that to be real-time communication,” said Hammond in reference to the types of applications one can build using Node.js.
The framework was created in 2009 by Joyent employee Ryan Dahl, and the project has relied on Joyent to foster it since then. In that time, Node.js’s popularity has ballooned and it now counts over 2 million downloads per month. Even GoDaddy has been a supporter of the framework, and it looks like the web-hosting company is close to buying a Node-centric PaaS called Nodejitsu.
However, there’s been a bit of commotion within the Node.js community over the way the project has been managed by Joyent, specifically regarding the long wait times between new releases that have contributed to user discontent.
A group of disgruntled Node.js contributors forked the open-source project and put up their own version, called io.js, on GitHub in December, but apparently both sides are open to eventually joining forces again, said Hammond.
“We have kept them abreast with what we are doing,” said Hammond. “I think we are both interested in aligning those projects.”
The new governance board will be responsible for handling the project’s finances, fundraising, trade shows, marketing, code of conduct and all the other details that encompass running an open-source project, explained Hammond. The point is to free Joyent from having the final say as to what goes on with Node.js.
“It helps distribute the work of overseeing the project instead of us being solely responsible for a lot of the decisions going on,” said Hammond. “So it’s not just Joyent making a certain decision, it is the community speaking.”
Joyent turned to the Linux Foundation for help in creating an open-source foundation “that is unique to Node.js” and spent the last few months working on a game plan for how it would do so.
What the new foundation will look like
The new foundation will be split into two separate groups: the board of directors and the technical committee, for which there is already a core team in place, as described on the Node.js website.
It’s still to be determined which organizations will be part of the board of directors, but the plan is to have a three-tiered system in which the board consists of representatives from companies that sign up for either a platinum, gold or silver membership, explained Hammond.
At the platinum level, members will be expected to pay $250,000 a year, which includes having a company representative as a board member. Gold members will pay on a sliding scale of $50,000 to $100,000 a year based on their employee headcount and they will also have to vote on who from those gold-member organizations will become board members; one out of every three gold-level organizations will end up having a representative on the board, said Hammond.
Silver members will pay on a sliding scale of $5,000 to $25,000 and like gold members, they will have to vote on which people from other silver-member companies will become board members; Hammond said that one out of every ten silver-member companies will have a board representative.
The technical committee will also vote on a technical committee member to become a board member, explained Hammond.
Joyent will be granted a gold membership, but will not have to pay a fee for at least a couple of years because it has been running the Node.js project and is now “moving the project into the community,” said Hammond.
When asked how the foundation can ensure that everyone on the board of directors has an equal say and that this doesn’t turn into a “pay-to-play” situation, Hammond said that won’t be the case and that “the organizations that are writing checks to sponsor the foundation are the ones who are adding the money to the budget to further the development of the Node committee.” This means the board members all contribute cash into a general fund, which is then used to fund projects like tradeshows, meet-up groups, API validation and other Node.js-related management issues.
Taking a load off of Joyent’s plate
Hammond is hoping that with the new Node.js foundation, coders will eventually see fewer gaps between feature releases and a more streamlined way of managing the project, which has grown so much that the fledgling Joyent stands to benefit from not having to oversee such an endeavor alone.
For Joyent, it gives the company a chance to buckle down and focus on its core cloud business, which, banking on the popularity of containers, the company hopes can compete with cloud giants like [company]Amazon[/company], [company]Microsoft[/company] and [company]Google[/company], not to mention legacy companies like [company]IBM[/company] and [company]HP[/company].
The company recently open-sourced its SmartDataCenter cloud-management platform and Manta object-storage system in an attempt to attract developer momentum and interest to its cloud. However, it also saw the departures of two well-known higher-ups, Mark Cavage and Ben Rockwood, who decamped to Oracle and Chef, respectively.
Joyent will still play a large role in the Node.js community, Hammond said, as the Joyent public cloud uses Node.js and many of the company’s technologies, like Manta, are written using the framework. With the cloud company recently launching the Node.js incubator program, which will see Joyent giving selected participants up to $25,000 in Joyent Cloud hosting credits to develop Node.js applications on the Joyent cloud, it’s clear that the company sees Node.js as a way to lure coders to its own cloud.
Whether the formation of the new open-source foundation will please Node.js users who may have turned to io.js remains to be seen, but it seems that Joyent is trying to do something to remedy the situation — it just can’t do it all on its own.
There is a theme gaining ground within IT organizations, and a number of examples support it. This theme will change the way solutions are built, configured, sold and used. Even the ecosystems and ancillary services will change. It also changes how we think, organize, lead and manage IT organizations. The theme is:
Just because you (IT) can do something does not mean you should.
Ironically, there are plenty of examples in the history of IT where the converse of this principle served IT well. Well, times have changed and so must the principles that govern the IT organization.
Apply it to the customization of applications and you get this:
Just because IT can customize applications to the nth degree does not mean they necessarily should.
A great example is the configuration and customization of applications. Just because IT could customize the heck out of an application, should it have? The argument often made here is that customization provides some value, somewhere, either real or (more often) perceived. The reality is that it comes at a cost, sometimes a very significant and real one.
Making it real
Here is a real example that has played out time and time again. Take application XYZ. It is customized to the nth degree for ACME Company. Preferences are set, not necessarily because they should be, but rather because they could. Fast-forward a year or two. Now it is time to upgrade XYZ. The costs are significantly higher due to the customizations done. It requires more planning, more testing, more work all around. Were those costs justified by the benefit of the customizations? Typically not.
Now it is time to evaluate alternatives for XYZ. ACME builds a requirements document based on XYZ (including its myriad customizations). Once the alternatives are matched against the requirements, the only solution that really fits the need is the incumbent. This approach gives significant weight to the incumbent solution, thereby limiting alternatives.
These are not fictitious scenarios. They are very real and have played out in just about every organization I have come across. The lesson is not that customizations should be avoided; it is to limit customizations to those that are necessary and provide significant value.
And the lesson goes beyond configurations to understanding IT’s true value based on what it should and should not do.
Leveraging alternative approaches
Much is written about the value of new methodologies and technologies. Understanding IT’s true core value opportunity is paramount. The value proposition starts with understanding how the business operates. How does it make money? How does it spend money? Where are the opportunities for IT to contribute to these activities?
Every good strategy starts with a firm understanding of the ecosystem of the business: how the company operates and its interactions. A good target, and one many are finding success with, sits furthest from core company operations and is therefore the hardest place to articulate true business value…in business terms. For many, it starts with the data center and moves up the infrastructure stack. For a bit more detail: CIOs are getting out of the data center business.
Preparing for the future today
Is your IT organization ready for today? How prepared are your organization, processes and systems to handle real-time analytics? As companies consider how to engage customers on mobile platforms in real time, the shift from batch-mode to real-time data analytics quickly takes shape. Yet many core systems and infrastructure are nowhere near ready to take on the changing requirements.
Beyond data, are the systems ready to respond to the changing business climate? What is IT’s holistic cloud strategy? Is a DevOps methodology engaged? What about container-based architectures?
These are only a few of the core changes in play today…not in the future. If organizations are to keep up, they need to start making the evolutionary turn now.
For many years, traditional IT thinking has served the IT function well. Companies have prospered from both the technological advances and consequent business improvements. Historically, the conversation typically centered on some form of technology. It could have been about infrastructure (data centers, servers, storage, network) or applications (language, platform, architectures) or both.
Today, we are seeing a marked shift in the conversations happening with the CIO. Instead of talking about the latest bell-and-whistle, it is increasingly more apt to involve topics about business enablement and growth. The changes did not happen overnight. For any IT leader, it takes time to evolve the conversation. Not only does the IT leader need to evolve, but so does their team and fellow business leaders. Almost two years ago, I wrote about the evolution of these relationships in Transforming IT Requires a Three-Legged Race.
Starting the journey
For the vast majority of IT leaders, this is not an end-state but a journey of evolution that has yet to start in earnest. For many I have spoken with, there is interest, but not a clear path to take.
This is where an outside perspective is helpful. It may come from mentors, advisors or peers. It needs to come from someone that is trusted and objective. This is key, as the change itself will touch the ethos of the IT leader.
Taking a holistic assessment of the situation is critical here. It requires a solid review of the IT leadership, organizational ability, process state and technological situational analysis. The context for the assessment is back to the core business strategy and objectives.
Specific areas of change are those that are clearly neither strategic nor differentiating in support of the company’s strategy and objectives. A significant challenge for IT organizations will be: just because you can manage it does not mean you should manage it.
Quite often, IT organizations get too far into the weeds and lose sight of the bigger picture. To fellow business leaders, this is often perceived as a disconnect between IT and line-of-business (LoB) leaders. It alienates IT leaders and makes it harder to foster stronger bonds between the two.
Never lose sight of the business
It is no longer adequate for the CIO to be the only IT leader familiar with the company’s strategy and objectives. Any IT leader today needs to fully understand the ecosystem of how the company makes and spends money. Without this clarity, the leader lacks the context in which to make healthy, business-centric decisions.
The converse is an IT leader that is well familiar with the business perspective as outlined above. This IT leader will gain greater respect amongst their business colleagues. They will also have the context in which to understand which decisions are most important.
Kicking technology to the curb
So, is IT really getting out of the technology business? No! Rather, think of it as an opportunity to focus on what is important and what is not. What is strategic for the company and what is not? Is moving to a cloud-centric model the most important thing right now? What about shifting to a container-based application architecture? Maybe. Maybe not. There are many areas of ripe, low-hanging fruit to be picked, and just as with fruit, the degree of ripeness changes over time. You do not want to pick spoiled fruit, nor do you want to pick it too soon.
One area of great interest these days is in the data center. I wrote about this in detail with CIOs are getting out of the Data Center business. It is not the only area, but it is one of many areas to start evaluating.
The connection between technology divestiture and business
Assessing which areas are not strategic and divesting those areas gives IT greater focus and the ability to apply resources to more strategic functions. Imagine if those resources were redeployed to provide greater value to the company strategy and business objectives. Divesting non-strategic areas frees IT up to move into other areas and conversations.
By changing the model and using the business as the context, IT changes the tone, tenor and impact it can have for a company. The changes will not happen overnight. The evolution from technology to business discussions takes vision, perseverance and a strong internal drive toward change.
The upside is a change in culture that is both invigorating and liberating. It is also a model that supports the dynamic changes required for today’s leading organizations.
This week, Yang came on the Structure Show podcast to talk about her work, her future, Jeeves’ future and the unfortunate realities of sexism. Here are some highlights from a very interesting interview.
[soundcloud url=”https://api.soundcloud.com/tracks/186115767″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]
Why Jeeves matters
Essentially, Yang explained, projects like Jeeves matter because privacy policies are becoming an integral part of applications, but managing them is still a task most programmers would like to do without.
“Right now, if you change the policy or if you change the code, you have to go through and update this spaghetti of policy and code and cross your fingers and hope that everything’s happening right,” she said. “. . . If someone else wrote that code and you’re just maintaining it, well, good luck.”
She continued: “The idea is, with our model … if we’re starting from scratch, we want to be able to specify the policies here, the stuff that uses the policies over there. So if programmers want to make a change to the policies, they can just go update the policies and rely on the enforcement to do everything else, and if they want to change the code they don’t have to touch the policies.”
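The separation Yang describes can be sketched in a few lines of Python. This is not the actual Jeeves API (which uses a more sophisticated faceted-execution model); it is a minimal illustration of the idea that a sensitive value carries its own policy, so enforcement happens in one place instead of being scattered through application code:

```python
class Sensitive:
    """A value bundled with a visibility policy and a public fallback.

    Illustrative only -- the class and method names are invented for
    this sketch, not taken from the Jeeves library.
    """

    def __init__(self, secret, public, policy):
        self.secret = secret    # the real value
        self.public = public    # what unauthorized viewers see
        self.policy = policy    # function: viewer -> bool

    def reveal(self, viewer):
        # Enforcement lives here, once, instead of as ad hoc checks
        # sprinkled throughout the application.
        return self.secret if self.policy(viewer) else self.public


# The policy is declared in one place ...
gpa = Sensitive(3.9, "hidden", policy=lambda viewer: viewer == "alice")

# ... and the code that uses the value never mentions it.
print(gpa.reveal("alice"))  # 3.9
print(gpa.reveal("bob"))    # hidden
```

Changing who may see the value now means editing only the `policy` argument; the rest of the code is untouched, which is the maintenance win Yang is describing.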
Is it ready for primetime?
Right now, Jeeves is a fine research project and even has a Python library that rewrites code into Jeeves on the fly. There’s also an extension for the Django web framework that works with both application and database code. But it’s probably a few years away from being ready to be pushed into industry and hardened for production workloads, Yang said.
“Right now, we’ve run a small conference management system using our web framework,” she explained. “But, you know, it’s a small conference management system — we’re not building Facebook with it. I think in order to build a more realistic system using it, we’d have to really look at the scaling issues, and there are some good research issues there. … It turns out that carrying the policies around with the data is pretty expensive.”
Confronting the trolls on Reddit
Yang said she was warned about the risk of sexist comments when she told people she and her peers would be doing the AMA, but she wasn’t about to let fear — or the trolls — win. And besides, she still wanted to interact with the kinder community members and answer legitimate questions about computer science education.
“I think it’s kind of bogus that women are kept out of certain physical and online spaces because of the threat of harm,” Yang said. “And I think that there’s this perception that if a woman goes out into the internet she’s going to be harassed or something like that, and I really wanted to test it for myself and show people that it’s not that big of a deal.”
When misogyny hits home
While some warned Yang and her cohorts about sexism and advised against doing the AMA, others didn’t seem to think it would be a problem. She said they seemed surprised, after reading the piece on Wired, that people would act that way.
Certainly they had heard about “Gamergate,” she said, but “maybe they think, ‘Oh, gamers, that’s not part of our culture’ or something like that — ‘That’s a subculture.’ But to see it affect people who they didn’t see as part of some niche subculture maybe hit them differently.”
[soundcloud url=”https://api.soundcloud.com/tracks/182147043″ params=”color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /]
It’s safe to say that Docker has had a momentous year, with the container-management startup gaining a lot of developer interest and scoring support from big tech companies like Amazon, Google, VMware and Microsoft.
Docker CEO Ben Golub came on the Structure Show this week to talk about Docker’s year and what he envisions the company becoming as it continues to grow (hint: it’s aiming for something similar to [company]VMware[/company]). Golub also talks about Docker’s raft of new orchestration features and shares his thoughts on the new CoreOS container technology and how it fits in with Docker.
In other news, Derrick Harris and Barb Darrow kick things off by looking at how Hortonworks and New Relic shares are holding up, and the good news is they’re doing pretty well at the ripe old age of one week.
Also on the docket, [company]IBM[/company] continues its cloud push by bringing a pantload of new data centers online — in Frankfurt (for the all-important German market) as well as Mexico City and Tokyo. In October, IBM said it was working with local partner Tencent to add cloud services for the Chinese market, which reminds us that Amazon Web Services’ Beijing region remains in preview mode.
Hosts: Barbara Darrow, Derrick Harris and Jonathan Vanian
Building applications in today’s world involves a lot of work assembling, managing and monitoring the various components that need to come together across myriad environments. To help with this chore, HashiCorp is rolling out an application-development hub called Atlas, its first commercial product based on its various open-source technologies. The startup is also announcing a $10 million Series A funding round from Mayfield Fund, GGV Capital and True Ventures (see disclosure).
HashiCorp’s biggest claim to fame is its open-source Vagrant tool that helps developers quickly spin up virtual environments so they can build and test their software projects before they see the light of day.
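For readers who haven’t used it: a Vagrant environment is described in a small Ruby configuration file called a Vagrantfile, and `vagrant up` builds the matching virtual machine. A minimal sketch, using standard Vagrantfile syntax (the box name, port numbers and provisioning command here are illustrative, not from HashiCorp’s announcement):

```ruby
# Vagrantfile -- describes a reproducible development VM.
Vagrant.configure("2") do |config|
  # Base image ("box") to build the VM from (example box name).
  config.vm.box = "hashicorp/precise64"

  # Forward the app's port so it is reachable from the host (illustrative).
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # One-off provisioning step run inside the VM on first boot.
  config.vm.provision "shell", inline: "apt-get update -y"
end
```

Because the whole environment is captured in this one file, every developer on a team can spin up an identical VM, which is the reproducibility Vagrant is known for.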
Over time, the startup developed other open-source tech to help coders with all aspects of the software-development process; from Serf, which handles cluster management and makes sure those developer environments don’t fail, to Consul, which helps users discover and configure all the services running in their coupled-together applications.
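To make Consul’s role concrete: services announce themselves to Consul with a small JSON definition that the tool then uses for discovery and health checking. A sketch in Consul’s service-definition format (the service name, port and check command are examples, not taken from HashiCorp’s announcement):

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "script": "curl -s localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Other services can then look up “web” through Consul’s DNS or HTTP interface instead of hard-coding addresses, which is what keeps coupled-together applications configurable.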
With Atlas, the startup is bundling up all of its open-source software into one package and throwing in a dashboard that will supposedly let coders see how their application is performing in both public and private clouds or hybrid environments.
The Atlas software-as-a-service is now available in beta and will be available to the public in the first quarter of 2015; the company will announce pricing by then and will unveil an on-premises version.
Diagram provided by HashiCorp
Disclosure: HashiCorp is backed by True Ventures, a venture capital firm that is an investor in the parent company of Gigaom.