On Value Stream Management in DevOps, and Seeing Problems as Solutions

You know that thing when a term emerges and you kind of get what it means, but you think you’d better read up on it to be sure? Well, so it was for me with Value Stream Management, as applied to agile development in general and DevOps in particular. So, I have done some reading up.

Typically for an emerging term, there is some disagreement around what the phrase means. A cursory Google of “value stream DevOps” suggests that “value stream mapping” is the term du jour; however, a debate continues on the difference between “value stream mapping” (ostensibly about removing waste, per lean best practice) and “value streams”. For once, and for illustration purposes only, I refer to Wikipedia: “While named similarly, Lean value stream mapping is a process-based practice that seeks to identify waste, whereas value streams provide a higher-level overview of how a stakeholder receives value.”

Value streams are also seen (for example in this 2014 paper) as different from (business) processes, in that they are about making sure value is added, rather than about how things are done. This differentiation may help business architects, who (by nature) like precision in their terminology. However, the paper also references Hammer & Champy’s 1993 definition of a process, which specifically mentions value: “Process is a technical term with a precise definition: an organized group of related activities that together create a result of value to the customer.” Surely a process without value is no process at all?

Meanwhile, analysts such as Forrester have settled on Value Stream Management, which they reference as “an emerging market” even though at least some of the above has been around for a hundred years or so. Perhaps none of the terminological debate matters, at least to the people trying to do things with whatever the term means. Which is what, precisely? The answer lies in the restating of a problem as a solution: if value stream management is the answer, the challenge it responds to is a recognition that things are not working as well as they could be, and are therefore not delivering as much value as they should.

In the specific instance of DevOps, VSM can be seen as a direct response to the challenge of DevOps Friction, which I write about in this report. So, how does the pain manifest itself? The answer is twofold. For people and organisations that are already competent at DevOps, particularly those cloud-native organisations that are DevOps-by-default (and might wonder what other approach could exist), the challenge is knowing whether specific iterations, sprints and releases are of maximum benefit, delivering something of use as efficiently as possible.

In this instance, the discipline of value stream management acts as Zen master, asking why things are as they are and whether they can be improved. Meanwhile the ‘emerging market’ of VSM refers to tooling which smooths and simplifies development and operational workflows, enabling the discipline to be implemented and hopefully maximising value as a result. Which gives us another “problem-as-solution” flip — while many of the tools available today are API-based, enabling their integration into workflows, they have not always been built with end-to-end value delivery in mind.

A second group feeling the pain concerns organisations that see DevOps as an answer, but have yet to harness it in a meaningful way beyond individual initiatives; many traditional enterprises fall into this category, and we’ve held various webinars about helping organisations scale their DevOps efforts. For these groups, value stream management offers an entry point: it suggests where effort should be focused, treating DevOps not as an end in itself but as a means of delivering increased, measurable value from software.

In addition, it creates a way of thinking about DevOps as practical workflows, enabled by automation tools, as opposed to ‘just’ a set of philosophical constructs. The latter are fine, but without some kind of guidance, organisations can be left with a range of tooling options and no clear idea of how to make sure they are delivering. It’s for this reason that I was quite keen on GitHub’s announcement around Actions, a couple of weeks ago: standardisation, around not just principles but also processes and tools, is key to efficiency.

The bottom line is that, whatever the terminology, we are moving away from thinking ‘DevOps is the answer’ and towards ‘implementing the right kind of DevOps processes, with the right tools, to deliver higher levels of value’. Whether about principles or tooling, value stream management can therefore be filed in the category of concepts that, when they are working right, cease to exist. Perhaps this will become true in the future, but right now we are a long way from that point.

Afterword: If you want to read up on the notions of value management as applied to business outcomes, I can recommend this book by my old consulting colleague Roger Davies.

GitHub Actions: The Best Practice Game Changer

“GitHub? That’s a code repository, right?” said a friend, when I mentioned I was in San Francisco. GitHub Universe, the company’s annual conference, is small but perfectly formed: 1,500 delegates fill a hall but don’t overwhelm it. And yes, developers, engineers and managers are here because they are pulling files from, and pushing to, one of the largest stores of programming code on the planet.

Nonetheless, GitHub representatives would likely dispute the “just a code repo” handle. I imagine they would point to the collaboration mechanisms and team management features on the one hand, and the 30-plus million developers on the other. “It’s an ecosystem,” they might say. I haven’t asked, because the past two days’ announcements may have made the question somewhat moot. Or one announcement in particular: GitHub Actions.

In a nutshell, GitHub Actions allow you to do something based on a triggering event: they can be strung together to create (say) a set of tests when code is committed to the repository, or to deploy to a target environment. The “doing something” bit runs in a container on GitHub’s servers; and a special command (whose name escapes me…wait: RepositoryDispatch) means external events can trigger actions.
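
To make the external-trigger part concrete, here is a minimal sketch, in Python, of how an outside system might raise such a dispatch event through GitHub’s REST API. The endpoint and headers reflect the current public API rather than anything specific to the beta, and the owner, repository, event type and payload are all illustrative:

    # A minimal sketch: firing a repository dispatch event from an external
    # system, so that a workflow in the target repository can react to it.
    # Assumes GitHub's current public REST endpoint; names are illustrative.
    import os

    import requests

    GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # a personal access token with repo scope

    def trigger_workflow(owner: str, repo: str, environment: str) -> None:
        """Raise a repository_dispatch event so a workflow in owner/repo can react."""
        response = requests.post(
            f"https://api.github.com/repos/{owner}/{repo}/dispatches",
            headers={
                "Accept": "application/vnd.github+json",
                "Authorization": f"Bearer {GITHUB_TOKEN}",
            },
            json={
                "event_type": "deploy",  # the workflow declares which event types it handles
                "client_payload": {"environment": environment},  # free-form context
            },
            timeout=10,
        )
        response.raise_for_status()  # GitHub answers 204 No Content on success

    if __name__ == "__main__":
        trigger_workflow("example-org", "example-repo", "staging")

A workflow configured to respond to that event type would then run its chain of actions (tests, deployment steps and so on) on GitHub’s infrastructure.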

That’s kind of it, so what makes GitHub Actions so special? Or, to put it another way, what is causing the sense of unadulterated glee, across both the execs I have spoken to and those presenting from the main stage? “I can feel the hairs on the back of my neck as I talk about this,” I was told, not in some faux ‘super-excited’ way but with genuine delight.

The answer lies in several converging factors. First, as tools mature, they frequently add rules-based capabilities; we saw it with enterprise management software two decades ago, and indeed with ERP and CRM before that. Done right, event-driven automation is always a feature to be welcomed, increasing efficiency and productivity while enforcing policy and governance.

Second: what happens when you switch on such a feature for a user base as large, and as savvy, as the GitHub community? Automation is a common element of application lifecycle management tooling, and multiple vendors exist to deliver on this goal. But few if any have the ability to tell millions of open source developers, “let’s see what you got.”

Which brings us to a third point: right now, we are in one of those fan-out technology waves. In my report on DevOps, I name-checked 110 vendors; I left out many more. Choosing a best-of-breed set of tools for a pipeline, or indeed deciding the pipeline itself, involves a complex, uncertain and fraught set of decisions. And many enterprises will have built their own customisations on top.

As I wrote in the report’s introduction, “In the future, it is likely that a common set of practices and standards will emerge from DevOps; that the market landscape for tools will consolidate and simplify; and that infrastructure platforms will become increasingly automated.” The market desperately needs standardisation and simplification: every day, organisations reinvent and automate the same practices, which, frankly, is not a good use of their time.

For there to be a better way requires a forum — an ecosystem, if you will — within which practices can be created, shared and enhanced. While there may be a thousand ways to deploy a Ruby application, most organisations could probably make do with one or two, based on constraints which will be similar for their industry peers. With a clear day, a following wind and the right level of support, GitHub Actions could provide the platform for this activity.

Will this put other continuous automation and orchestration vendors out of business? Unlikely, as there’s always more to be done (and no organisation is going to switch off existing automations overnight). However it could create a common language for others to adopt, catalysing standardisation still further; it also creates opportunities for broader tooling, for example helping select a workflow based on specific needs, or bringing in plugins for common actions.

It’s also notable that GitHub Actions is only being released as a beta at this point (you can sign up here). Questions remain over how to authorise and authenticate access, what criteria GitHub will set over “acceptable” Action workloads, and indeed, how Actions will work within a GitHub enterprise installation. Cliché it may be, but the capability creates as many questions as it does answers, which is perhaps just as well.

Above all perhaps, the opportunity for GitHub Actions is defined by its lack of definition. Methodologists could set out workflows based on what they thought might be appropriate; but the bigger opportunity is to let the ecosystem decide what is going to be most useful, by creating Actions and seeing which are adopted. And yes, these will go way beyond the traditional dev-to-ops lifecycle.

One thing is for sure: the capability very much changes the raison d’être of the organisation behind it. “Just a code repository” GitHub may have been, in the eyes of some; but with the adoption of GitHub Actions, a collaborative hub for best practice is what it will undoubtedly become. No wonder the sense of suppressed glee.

Seven lessons from writing the report, Scaling DevOps in the Enterprise

Over the past couple of months I’ve been collating a report about DevOps, which I hope will be out in August (all being well, with a following wind). I’ve taken briefings, had interviews and conversations, and generally made a nuisance of myself. The goal was, and remains, to go beyond “DevOps is great, come on board” evangelism, and address a simple yet profound question: how to scale DevOps from small initiatives towards making it work across the enterprise?

Despite my background in various areas of dev and ops, and the many reports, articles and research notes I have written on the topic, I confess to having started the process with a soupçon of imposter syndrome: what if I were to find this was a non-question? “Oh, come on, man! We sorted it. You know, these days, it just… works!”

Over the period, I have learned that my fears were unfounded; or rather, the challenges were just as big as I thought they might be (and ever were). I’ve also learned a number of lessons about the nature and reality of DevOps, which I thought I would share:

1. It’s not (just) about DevOps. Don’t get me wrong, breaking down the wall between development and operations is a worthy goal and a laudable achievement; however, it isn’t an end in itself. We’ve ended up with a lot of stakeholders trying to crowbar their own interests into the DevOps title, resulting in clunky terms like DevSecOps, whereas perhaps the focus should be elsewhere completely. To wit:

2. It is all about business value delivery. Customer-centricity, done right, gives more to customers and therefore, modelled right, a greater return on investment to the business. DevOps can bring speed and responsiveness, and therefore result in more innovative, higher-value solutions. But the drivers for innovation come from the customer, by way of the business. There is no point in meeting the wrong need, however quickly.

3. Reality is the biggest bottleneck to DevOps. Channeling my inner Scooby Doo villain, DevOps would have been just fine if it wasn’t for all those pesky real world challenges. Testing and quality management, security, database and information management, governance, collaboration and so on keep getting in the way, but this is looking at things the wrong way around. To flip it, the question is, how can enterprise reality be made more efficient? This leads to:

4. Man, is there a crapload of DevOps vendors. We are in an apparent fan-out phase, in which hundreds of tools and service companies claim to have some kind of DevOps solution. They are all right, while at the same time being a symptom of, rather than a solution to, the DevOps scaling challenge. We will see a massive wave of consolidation and subsumption, triggered when an enterprise-focused software company cracks the code and sets off a buying spree.

5. Cloud is cause, catalyst and now consequence of the DevOps stalemate. Speak to digital-native startups, who have built their infrastructures on the public cloud, and they wonder why DevOps is even a thing. Speak to cloud companies and they say, rightly, that a wholesale move to the cloud would enable a simpler world in which DevOps could thrive. Speak to enterprises, however, and you find a continued preference for hybrid models, rendering such simple rhetoric pointless.

6. Enterprises know where they want to end up, but are stymied. Cloud and software vendors present the current smorgasbord of service options as a good thing, but the gleeful fan-out of innovation is getting in the way of enterprise progress. Companies that serve millions of people in complex ways can’t simply change everything wholesale, and would really rather the tech industry commoditised a bit — or a lot — so they could get on with becoming learning organisations without all that distraction. Which means:

7. Tech could start by turning some of that smartness onto itself. Enterprises don’t need a thousand different DevOps pipelines, enabling a thousand thousand different ways of addressing what should be a solved problem. The tech industry tells other verticals about the power of data, of automation, of machine learning and AI: it will have succeeded if it can come up with a business-led DevOps process that all organisations can bank on, one which is enough of a standard to enable data-driven, predictive automation.

There’s an eighth point of course: it’s a crap name. I’m not a fan of changing labels willy-nilly, but the fact is, DevOps is the kind of name a techie (or two) would come up with, and what enterprises need is a technical basis upon which the business can innovate. No, no, and three times no, this should not be called BizOps or any other derivation. DevOps emerged as a touchstone, but it risks becoming a millstone.

The discipline currently known as DevOps has a way to run, as organisations learn to benefit from new ways of delivering faster. But, as the business moves into the driving seat, so it should also be given the remit to define what success looks like, and the terminology that goes with it. Watch that space.

Follow Jon Collins on Twitter.

Five questions for: Mike Burrows of Agendashift

My travels around the landscape of DevOps brought me to Mike Burrows, and the work he was doing around what he terms Agendashift, an outcome-based approach to continuous transformation. While these words could be off-putting, I was more intrigued by the fact that Mike had set up a Slack site to articulate, test and improve his experience-based models – as he says, there are 500 people on the site now, and as I have experienced, it’s very participative. So, what’s it all about – is there life beyond prescriptive lean and agile approaches? I sat down with Mike (in the virtual sense) to find out the background of, and hopes and dreams for, Agendashift.
1. What led you to write a book about lean/agile/Kanban — what was being missed?
Good question! I’m one of those people that laments the rise of prescription in the Lean-Agile space, and though I found it easy to find people who were in sympathy with my view, I didn’t find a lot of constructive alternatives. I myself had developed a consistent approach, but calling it “non-prescriptive” only told people what it wasn’t, not what it was! Eventually, I (or perhaps I should say “we”, because I had collaborators and a growing community by this time) landed on “outcome-oriented”, and suddenly everything became a lot clearer.
2. How would you explain Agendashift in terms a layperson might understand?
The central idea is principle #2 (of 5 – see agendashift.com/principles): Agree on outcomes. It seems kinda obvious that change will be vastly easier when you have agreement on outcomes, but most of us don’t have the tools to identify, explore, and agree on outcomes, so instead we jump to solutions, justify them, implement them over other people’s resistance, and so on. I believe that as an industry we need to move away from that 20th century model of change management, and that for Agile it is absolutely essential.
Around that central idea, we have 5 chapters modelled on the 5 sessions of our workshops, namely Discovery (establishing a sense of where we are and where we’d like to get to), Exploration (going down a level of detail, getting a better sense of the overall terrain and where the opportunities lie), Mapping (visualising it all), Elaboration (framing and developing our ideas), and Operation (treating change as real work). Everything from a corporate ambition to the potential impact of an experiment is an outcome, and we can connect the dots between them.
3. You went through an interesting development process, care to elucidate?
Two key ingredients for Agendashift are to be found in the last chapter of my first book, Kanban from the Inside (2014). The first is the idea of “keeping the agenda for change visible”, a clue to where the name “Agendashift” came from, and it seemed worthwhile to develop further how one might populate and visualise such a thing (I took inspiration not just from Kanban, but also from Story Mapping). The second was the kind of bullet-point checklist you see at the end of a lot of books.
A few others around the world (Matt Phillip most notably) and I independently realised that we had the basis for an interesting kind of assessment tool here, organised by the values of transparency, balance, collaboration and so on (the values model that was the basis for my book). In collaboration with Dragan Jojic, we went through several significant iterations, broadening the assessment’s scope, removing jargon, eliminating any sense of prescription, and so on. We found that the more we did that, the more accessible it became (we now have experience using it outside of IT), and yet also more thought-provoking. Interesting!
Other collaborators – most notably Karl Scotland and Andrea Chiou – helped move Agendashift upstream into what we call Discovery, making sure that when we come to debrief the assessment, we’re already well grounded in business context and objectives. The unexpected special ingredients there have been Clean Language (new to me at the time, and a great way to explore outcomes) and Cynefin (already very familiar to me as a model, but now also very practical once we had the means to create lots of fragments of narrative – outcomes, in Agendashift’s case).
4. Who is the Agendashift book aimed at, is it appropriate for newcomers, journeymen or masters?
I do aim in my writing for “something for everyone”. I accept though that the complete newcomer to Lean-Agile or to coaching and facilitation may find that it assumes just a bit too much knowledge on the part of the reader. My third book (working title “Right to Left: The digital leader’s guide to Lean-Agile”, due 2019) will I think have the broadest possible appeal for books in this space. We’ll see!
5. How do you see things progressing – is nirvana round the corner or is that the wrong way to think about it?
We’re coming up to the 2 year anniversary of the public launch of the Agendashift partner programme, 2 years into what I’m told is likely a 3-year bootstrap process (I have some fantastic collaborators but no external investment). General interest is definitely growing – more than 500 people in the Agendashift Slack for example – and I’m seeing a significant uptick in demand for private workshops, either directly from corporates or via partner companies. Its potential as a component of leadership development and strategy deployment is gaining recognition too, so we’re not dependent only on Agile transformation opportunities. I believe that there is potential for Agendashift in the digital and DevOps spaces too.
There is a lot of vested interest in imposed Agile, and in all honesty I don’t see that changing overnight – in fact I tell people that I can see the rest of my career (I’m 53) being devoted to outcomes. Over time though, I believe that we will see more success for transformations that are based on genuine engagement, which can only be good for the likes of Agendashift, OpenSpace Agility, and so on. Eventually, the incongruity of imposed Agile will be exposed, and nirvana will be achieved 🙂
 
My take: Not the weapon, but the hand
I’m all for methodologies. Of course, I would say that – I used to run a methodology group, I trained people in better software delivery and so on. From an early stage in my career, however, I learned that it is not enough to follow any set of practices verbatim: sooner or later, edge cases or a changing world will cause you to come unstuck (as I did), which goes a long way to explaining why best practices seem to be in a repeated state of reinvention.
I was also lucky enough to have some fantastic mentors. Notably Barry McGibbon, who had written books about OO, and Robin Bloor, whose background was in data. Both taught me, in different ways, that all important lesson we can get from Monty Python’s Holy Grail: “It’s only a model.”
Models exist to provide a facade of simplicity, which can be an enormous boon in this complex, constantly changing age. At the same time however, they are not a thing in themselves; rather, they offer a representation. As such, it is important to understand where and when they are most suited, but also how they were created, because, quite simply, sometimes it may be quicker to create a new one than use something ill-suited for the job.
And so it is for approaches and methods, the steps we work through to get a job done. Often they are right, sometimes less so. A while back, Barry, others and I worked with Adam and Tim at DevelopmentProcess to devise a dashboard tool for developers. So many options existed that the thought of creating something generic seemed insurmountable…
… until the epiphany came, that is: while all processes require the same types of steps, their exact form, and how they are strung together, can vary. This was more than an “Aha! That’s how they look!” moment, as it also put the onus onto the process creator to decide which types of step were required, and in which order.
Because of this, among many other reasons, I think Mike is on to something. In another recent conversation, Tony Christensen, DevOps lead at RBS, said the goal had become to create a learning organisation, rather than transforming into some nirvanic state. True nirvana, in this context at least, is about understanding the mechanisms available, and having the wherewithal to choose between them.
 
Image: Agendashift
 

5 questions for… Electric Cloud

As I am working on a DevOps report at the moment, I’m speaking to a lot (and I mean a lot) of companies involved in and around the space. Each, in my experience so far, is looking to address some of the key IT delivery challenges of our time – namely, how to deliver services and applications at a pace that keeps up with the rate of technology change?
One such organisation is Electric Cloud. I spoke to Sam Fell, VP of Marketing, to understand how the company sees its customers’ main challenges, and what it is doing to address them – not least, the complexity of working at enterprise scale.
 

  1. Where did Electric Cloud come from, and what need did it set out to deal with?

Electric Cloud has been automating and accelerating software delivery since 2002, from code check-in to production release. Our founders looked to solve a huge bottleneck: development teams’ agile pace of software delivery and new technology adoption has outstripped the ability of operations teams to keep up. This cadence and skills mismatch limits the business and can jeopardize transformation efforts, putting teams in a constant state of what we call “release anxiety.”
The main challenges we see are:

  • The ability to predictably deploy any application to any environment, at any scale.
  • The ability to manage release pipelines and dependencies across multiple teams, point tools, and infrastructures.
  • A comprehensive but simple way to plan, schedule, and track releases across their lifecycles.

In response, we developed an Adaptive Release Orchestration platform called ElectricFlow to help organizations like E*TRADE, HPE, Huawei, Intel and Lockheed Martin confidently release new applications and adapt to change at any speed demanded by the business, with the analytics and insight to measure, track, and improve their results along the way.

  2. Where’s the ‘market for DevOps’ going, from a customer perspective?

Nearly every industry is taking notice of, or participating in, the DevOps space – from FinServ and government to retail and entertainment – and nearly every market, across nearly all geographies, recognizes DevOps as a way forward. The technology sector is still at the forefront, but you’d be surprised how quickly industries like transportation are catching up.
One thing we find invaluable is learning what critical factors are helping our customers drive their own businesses forward. A theme we hear over and over is how to adapt to business needs on a continuous basis.
But there is an inherent dichotomy in how companies are expected to achieve the business goals set by leadership. For example, they need to implement fast and adapt easily to their changing environment, including support for new technologies like microservices and serverless. The challenge is how to do this reliably and efficiently – how to shift practices like security left without creating more technology debt or outages in the process.
Complexity is inevitable and the focus needs to be on how to adapt. Ways that we know work in addressing this complexity are:

  • Organizations that learn how to fix themselves will ultimately be high performers – resiliency is the child of adaptability (credit: Rob England).
  • Companies that automate what humans aren’t good at – mundane, repeatable tasks that don’t require creativity – are ultimately set up for success. Keep people engaged on high-value tasks, with a focus on creating high performance for themselves.
  • Organizations that continuously scrutinize their value streams, and align the business to the value stream, will be more successful than the competition. Improvements in one value stream may well create bottlenecks in others.
  • Companies that measure impact and outcomes, not just activities, will gain context into how ideas can transform into business value metrics such as customer satisfaction.
  • Understanding that there is no “one way” to solve a problem. If companies empower their teams to learn fast, the above may very well take care of itself.

  3. What’s the USP for Electric Cloud in a pretty crowded space?

Electric Cloud sees the rise in DevOps and modern software delivery methods as an opportunity to emphasize that collaboration, visibility and auditability are key pillars in ensuring fast delivery works for everyone involved. Eliminating silos and reducing management overhead is easier said than done, but with a scalable, secure and unified platform, anything is possible.
We’re proud to say we’re the only provider of a centralized platform that can provide all of the following in one simple package:

  • model-based automation techniques to replace brittle scripting with reusable abstract models;
  • process-as-code through a Groovy-based domain-specific language (DSL) to onboard apps quickly, so they are versionable, testable, reusable and refactorable (see the illustrative sketch below);
  • a self-service library of best-practice automation techniques for consistency across the organization;
  • a vast set of plugins and integrations to support enterprise governance of any tool your company uses;
  • role-based access control and approval tracking for every change in the pipeline;
  • an impenetrable, agent-based architecture to support communications for scalability, fault tolerance and security.

And all at enterprise scale, with unlimited clustering architecture and efficient processing for high availability and low latency across concurrent deployments.
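
As an illustration of what “process-as-code” means in practice: ElectricFlow’s own DSL is Groovy-based, so the sketch below is a language-neutral rendering in Python of the general idea (a pipeline modelled as ordinary, versionable code rather than brittle scripting), with hypothetical names throughout; it does not reflect ElectricFlow’s actual syntax.

    # Hypothetical sketch of process-as-code: the release pipeline is a plain,
    # versionable data structure, not ElectricFlow's actual Groovy DSL.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Stage:
        name: str
        run: Callable[[], None]                             # the work this stage performs
        approvers: List[str] = field(default_factory=list)  # role-based gate, if any

    @dataclass
    class Pipeline:
        name: str
        stages: List[Stage]

        def execute(self) -> None:
            for stage in self.stages:
                if stage.approvers:
                    print(f"[{self.name}] '{stage.name}' gated on approval by {stage.approvers}")
                print(f"[{self.name}] running '{stage.name}'")
                stage.run()

    # Because the model is plain code, it can be reviewed, unit-tested,
    # refactored and kept in version control like any other source file.
    release = Pipeline(
        name="web-app-release",
        stages=[
            Stage("build", run=lambda: print("compiling...")),
            Stage("test", run=lambda: print("running test suite...")),
            Stage("deploy", run=lambda: print("deploying..."), approvers=["release-manager"]),
        ],
    )

    if __name__ == "__main__":
        release.execute()

The point is not this particular structure, but that the process definition itself becomes versionable, testable and refactorable, as described above.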

  4. How does Electric Cloud play nice, and where does it see its most important integrations?

Every company’s software delivery process is unique, and touches many different tools, integrations and environments. We provide centralized management and visibility of the entire software delivery pipeline – whatever those tools and environments might be – to improve developer productivity, streamline operations and increase efficiency.
To that end, Electric Cloud works with the most popular tools and infrastructure on the planet, and allows our customers to add a layer of automation and governance to the tools they already use. You can find a list of our plugins here.

  5. I’m also interested to know more about (Dev)SecOps, and I would say PrivOps but the name is taken!

We definitely think securing the pipeline, and the application, is very important in software production. We have been talking about it a lot recently; you may find these resources helpful:

  • We recently held an episode of Continuous Discussions (#c9d9) to dive into how DevSecOps helps teams “shift left,” building security and quality into the process by making EVERYONE responsible for security at every stage. http://electric-cloud.com/blog/2018/05/c9d9-podcast-e87-devsecops/
  • Prior to that, we held a webinar with John Willis – an Electric Cloud advisor, co-author of the “DevOps Handbook” with Gene Kim, and an expert in security and DevOps. You can view the webinar here.
  • We also participated in the RSA DevOps Connect event. At the show, we took a quick booth survey and the results may (or may not) surprise you: http://electric-cloud.com/blog/2018/04/security-needs-to-shift-left-too/

 
My take: Moving beyond the principle
The challenges that DevOps set out to address are not new: indeed, they are perhaps as old as technology delivery itself. While we talk about removing barriers, increasing automation and so on, the ultimate goal is to deliver complex systems at scale. Some, who we might call ‘platform natives’, may never have had to run through the mud of corporate and infrastructure inertia and may wonder what all the fuss is about; for others, the challenges may appear insurmountable.
Vendors in the crowded DevOps space may have cut their teeth working for the former, platform-based group, who use containers by default and who see serverless models as a logical extension of their keep-it-simple infrastructure approach. Many, if not all, see enterprise environments as both the biggest opportunity and the greater challenge. Whoever can cut the Gordian knot of enterprise convolution stands to take the greatest prize.
Will it be Electric Cloud? To my mind, the astonishing number of vendor players in this space is a symptom of how quickly it has grown to date, creating a situation ripe for massive consolidation – though it is difficult to see any enterprise software vendor actively looking to become ‘the one’: consider IBM’s outsourcing of Rational and HPE’s divestiture of its own software business to Micro Focus as examples of companies running in the opposite direction.
However, the market opportunity remains significant, despite the elusiveness of the prize. I have no doubt that the next couple of years will see considerable industry consolidation, and who knows at this stage which brands, models and so on will pervade. I very much doubt that the industry will go ‘full serverless’ any time soon, for a raft of reasons (think: IoT, SDN, data, state, plus everything we don’t know about yet), but I remain optimistic that automation and orchestration will deliver on their potential, enabling and enabled by practices such as DevOps.
Now I shall get back on with my report!
 

TrackVia’s Low-code Platform is the Secret Sauce of Digital Transformation

Businesses striving towards digital transformation need to get their applications, infrastructure and cloud initiatives synchronized in order to be successful. TrackVia’s low-code application development platform brings order to the chaos of developing cloud-enabled applications, while also adding in critical mobile device support.

Report: Total experience quality: integrating performance, usability, and application design

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
Total experience quality: integrating performance, usability, and application design by Rich Morrow:
As the number of consumption models in the digital delivery landscape has grown, so has the burden on application designers. From desktop to web to phone to tablet and beyond, many designers create an entirely new user experience (UX) for each target platform, but often in a vacuum. Despite a growing acceptance of responsive design principles and the improvement of cross-platform tools, designers frequently target one primary platform.
One result is that decisions made during design can be barriers to performance down the road. When new platforms launch, performance is typically an afterthought to be optimized later. This is short-sighted: Lack of performance out of the gate can quickly doom a web or mobile-based app (both referred to here as apps). It is imperative that performance considerations play a front-seat role in the entire UX equation.
User experience is heavily dependent on underlying technical structure, and those structural choices are often dictated by usability. To navigate the possibilities of native, hybrid, and responsive designs and the myriad backend services that support them, UX designers must have an intimate knowledge of the limits to which they can push app performance. This report will help designers make educated, value-driven decisions about app experiences.
To read the full report, click here.

Report: Bringing Hadoop to the mainframe

Our library of 1700 research reports is available only to our subscribers. We occasionally release ones for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports and video coverage from our 2016 and 2017 events.
Bringing Hadoop to the mainframe by Paul Miller:
According to market leader IBM, there is still plenty of work for mainframe computers to do. Indeed, the company frequently cites figures indicating that 60 percent or more of global enterprise transactions are currently undertaken on mainframes built by IBM and remaining competitors such as Bull, Fujitsu, Hitachi, and Unisys. The figures suggest that a wealth of data is stored and processed on these machines, but as businesses around the world increasingly turn to clusters of commodity servers running Hadoop to analyze the bulk of their data, the cost and time typically involved in extracting data from mainframe-based applications becomes a cause for concern.
By finding more effective ways to bring mainframe-hosted data and Hadoop-powered analysis closer together, the mainframe-using enterprise stands to benefit from both its existing investment in mainframe infrastructure and the speed and cost-effectiveness of modern data analytics, without necessarily resorting to relatively slow and resource-expensive extract, transform, load (ETL) processes that endlessly move data back and forth between discrete systems.
To read the full report, click here.

Storage product development is getting worse

Storage product development is becoming less conservative than in the past. This has its pros and cons, but if it means poorer quality in the final product and an increased risk of data loss, then it’s not the way to go.

Bad behavior

I stumbled on this article, which talks about all the problems, bugs and mistakes made by Maxta with one of its (now former) customers. I won’t talk about Maxta and this particular case (also because there are different versions of this story), but it is an example of how some vendors, especially small startups, set the bar too high and then struggle to deliver.
Data loss is the worst-case scenario, but it’s quite common now to hear about storage startups in trouble when the game gets tough. Sometimes they miserably fail to scale despite promising “unlimited scalability”; sometimes performance is far lower than expected, or some features don’t actually work as documented.
This time round it has happened to Maxta, but I’m sure that many others could make the same mistakes.

DevOps-izing data storage is dangerous

In the last couple of years, I’ve been hearing a lot about the drastic change in the development process of storage systems. Most vendors are adopting new agile development processes, and some of them have been openly talking about a DevOps-like approach.
I’ve always been keen on this type of development approach: it is modern, fast, and produces results quickly. But… I can appreciate it in my smartphone apps, not in my storage system. I can imagine a continuous refinement of the UI or management features, but not of the core performance or data protection aspects of the product.
Whatever happened to the golden rule “if it works, leave it alone”? I’m not saying to apply it literally, but couldn’t more time be spent on testing and QA instead of releasing a new version every other week? Do we really need a storage software update every fortnight? I don’t think so.

Fierce competition

It’s all about competition in the end. In the past, a single good feature was enough to make a product, establish a new market and find success (take Data Domain, for example). It took time for others to follow, and the development cycle was not as fast as today. Now everything is much more complicated: things have accelerated, and ongoing product evolution is needed to keep pace with competitors. Look at hyperconvergence or all-flash, for example: in many cases it is really difficult to find a differentiator now, and end users want all the features that are taken for granted (and the list is very long!). What is now considered table stakes is already hard to achieve, on top of which you have to promise more to be taken seriously.

Closing the circle

I know times have changed and everything runs at a faster pace… but when it comes to data and data storage, data protection, availability and durability are still at the top of the list, aren’t they?
Standing out in a crowd is much harder now than in the past. Even established vendors are much quicker in reacting to market changes. Lately, when a new potential market segment is discovered, they’ve shown their ability to buy out a startup or come out with their own product pretty quickly and successfully (take VMware vSAN, for example). First movers, like Nutanix, have an advantage (a vision) and can aim at successful exits, but for a large part of the me-too startups it’s tough, because the lack of innovation and differentiation puts them in an awkward position: they are constantly trying to catch up with the leaders.
Software-defined or not, product quality is still fundamental, especially when dealing with storage. I’d like to see more storage vendors talk about how thoroughly they test their products and how long they keep them in beta before going into production… instead of how many new releases they are able to provide per month!
And please, find some time to write better documentation too!

Originally posted on Juku.it