5 questions for… ADLINK. Edge-Fog-Cloud?

One of the most interesting and dynamic areas of technology today is also one of the hardest to define: that is, the space between the massively scalable infrastructure of the cloud, and the sensor-based devices that are proliferating all around us. To understand this better, I spoke to Steve Jennis, corporate VP and Head of Global Marketing at hardware platform and connectivity provider ADLINK.
 
1. What do you mean by Edge Computing, with respect to Cloud and Fog Computing?
“Edge Computing” is still in the eye of the beholder. Cloud Service Providers use the Edge as a reference to any computing resources on their users’ premises, the telecoms industry sees it as the end-node of their proprietary network (as in Multi-access Edge Computing), and corporate users often think of the term as the boundary between operational technology (OT) and information technology (IT). All are valid: Edge Computing is computing at the edge of a network, but the user’s context is obviously important.
By comparison, the term “Fog Computing” represents the continuum between data sources, Edge Computing and Cloud Services. As such, Edge Computing is a subset of Fog Computing, ‘the meat in the sandwich’ between the IoT’s “Things” and Cloud-based Services. Fog Computing is about exploiting compute resources anywhere in an end-to-end system, to add the most (business) value overall.
Meanwhile we have “Cloud Computing”: despite its enormous growth it has some well-understood limitations. For example, the cost of exclusively using Cloud Services can be unacceptable if you are generating terabytes of data every day. That drives IT people – who tend to think top-down – to consider Edge Computing as a complement to Cloud Computing.
Simultaneously, we have the “hard-hat” guys in their OT world, working at the sharp end of data collection, analysis and control, with a need for real-time systems that require 24/7 fault-tolerant operation. These guys think bottom-up; they often look suspiciously at the corporate IT world and feel that IT people don’t understand their production computing needs. They see Edge Computing as the boundary between themselves and their counterparts in IT, a boundary where their ‘northbound’ data is processed (ingested, normalized, aggregated, analyzed, abstracted, etc.) before it enters the traditional corporate IT domain.
2. If it’s all about enabling end-to-end IoT solutions, how do you see the opportunity?
A few years ago we were in the proof-of-concept era: the main question around IoT was, would the technology work end-to-end? That question is answered today (although concerns around security remain, see below), as are questions around business value: the business case for exploiting the IoT ‘tool kit’ is now broadly accepted, whether for operational excellence reasons (e.g. predictive maintenance) or to support new business models (e.g. post-sale connectivity, services and subscription models).
Therefore, the focus has now shifted to: how do I get started, how do I deploy, and how do I manage the new risks? End-to-end IoT solutions are, by their nature, multi-technology, multi-vendor and multi-standard. They also almost always include both greenfield (new) and brownfield (legacy) data sources and data sinks. All of which is moving the focus to IoT systems integration. The bigger vendors (the traditional leaders in IT or OT) have a single-vendor, one-stop-shop culture, but IoT solutions aren’t like that. So, who do you turn to to take end-to-end responsibility for these heterogeneous IoT solutions – an in-house multi-disciplinary team, your trusted local services supplier or a major SI? A lack of good options is the biggest bottleneck in IoT solutions deployment today, across vertical markets.
We’re also seeing two models of IoT deployment. Bottom-up, equipment providers are adding IoT technologies into their product lines, enabling evolutionary adoption by end users within normal technology cycles. And top-down, strategic digitization opportunities and threats are getting a lot of attention, so turning IoT threats into opportunities is a growing concern for company boards, CEOs and CIOs. Both models lead to greater enterprise digitization in pursuit of operational excellence (the cost line) and support for new revenue opportunities (the top line).
Putting the two together, we see machine providers increasingly supporting as-a-service models, marrying the IT and OT worlds to optimize post-sales performance and provide customers with new services. Right now, the bottom-up (evolutionary adoption) model is the most prevalent, but five years from now it will become more balanced, as supporting new business models increasingly drives IoT investments: as one vendor introduces a new service, its competitors will have to quickly follow suit to remain competitive.
3. How is ADLINK responding to this market evolution?
20 years ago, ADLINK was a traditional electronics manufacturer, building boards and modules, often to customer designs. Soon thereafter, ADLINK started designing its own innovative (analog and digital, hence AD-LINK) embedded computing products and building its own brand globally. This business has been consistently successful over more than two decades. Then, a few years ago, Jim Liu, our CEO, added to our corporate strategic vision and ADLINK entered the emerging market for industrialized IoT products and solutions, essentially enabling “connected embedded computing”.
As we started to think about the elements of end-to-end IoT solutions, we quickly realized how much of an opportunity existed at the Edge. Edge Computing really is virgin territory, where no incumbent vendors dominate, making it a great growth opportunity. But, as mentioned, the channels-to-market for deploying IoT solutions are relatively immature. As a result, we offer what we call “Digital Experiments as-a-Service” (DXS), where we partner with customers who are looking to improve their operations or prove out new business models and revenue streams.
As we help our customers and learn more about the best opportunities-of-scale (both in terms of the size of deployments and the number of potential customers) this solutions view also helps drive the way we embed IoT tech into all our enabling products: platforms, data connectivity and advanced application enablement (e.g. AI-at-the-Edge). Through this top-down (DXS IoT solutions) and bottom-up (enabling products) approach, we support our customers in their embrace of IoT technologies and also help our systems integration channel partners to respond to the huge IoT solutions opportunity in front of them.
4. Where are you seeing the most maturity and growth in end user adoption of IoT?
Overall, we are looking to address two specific questions: Firstly, how to identify customers and solutions with the biggest potential upside, i.e. which users and Use Cases will deliver the best ROIs? And secondly, how to identify applications that can really scale, in terms of system size per customer and/or the number of potential customers? The answers define the sweet-spots in the overall market for us.
In addressing these questions, we prioritize engagements with forward-looking, entrepreneurial and innovative customers rather than any specific vertical markets. It is the customer’s culture and attitude that is more important than their application domain.
That said, we are spending a lot of time with manufacturers (particularly in terms of enabling smart factories) and with a wide range of machinery makers, who now see their products as valuable data sources in an IoT context (in addition to their traditional functionality). But we range across many verticals, and engagement depends mostly on the forward-thinking and innovation culture of the customer. So, in summary, the customer’s willingness to innovate, experiment and explore is more important than the vertical market in which they operate.
5. Where do integrators fit in the end-to-end IoT solutions ecosystem?
For IoT solutions to work end-to-end you need the right team of players, including both users (domain experts) and specialist partners (complementary services providers). So, we’re working with a broad set of partners, both major – such as Intel’s market-ready solutions programme – and smaller – like specialist systems integrators and local services providers – to reduce the user’s barriers to deployment of IoT technologies.
We still see a channel bottleneck in terms of skills and experience in many vertical domains, so by working with innovative partners we can learn together how best to create new business value from IoT solutions. ADLINK will continue to act as a multi-vendor, end-to-end solutions advisor and provider, working with preferred systems integrators to develop the IoT solutions ecosystem, and thus overcome the “getting started” and then the large-scale deployment issues that end users and machinery makers face today.
 
My take: There’s substance behind the fog: watch out, cloud providers
I confess, I’m not a great fan of the term “fog computing”, as it focuses more on the problem than the solution. However, it offers a relatively accurate description of IoT’s current state of affairs: a lack of clarity pervades, alongside a more general lack of agreement on standards and norms. These are symptoms of where we are, rather than inherent problems, and they will be treated over time.
The foggy nature of things is also a smoke screen for what could be one of the most exciting areas of technology in years to come. I don’t want to overstate this, as it starts to sound like hype, but let’s think about the pervading architectural model: cloud.
Right now, we have a layer of rhetoric which assumes that all processing and storage will centralise to a small number of providers: this is variously termed “the journey to the cloud.” Perhaps it may take decades, goes the thinking, but it will happen. Hybrid architectures are a stop-gap, a Canute-like attempt to stave off the inevitable.
Fog computing, a.k.a. highly distributed and self-orchestrated processing systems, flies directly in the face of the hyper-centralised cloud model. In the foggy world, technology is moving rapidly from a set of standardised boxes and stacks to a situation where anything goes. The mobile phone or the home wireless hub is just as able to integrate sensors and processing as any custom-built device. And they will.
When we do achieve a level of standardisation (and move away from this wild west), we can expect to see an explosion in both innovation and uptake. Organisations that have built their businesses on the centralised models will no doubt adjust the rhetoric to suggest that the cloud has extended right out to the sensors. But they will have their work cut out keeping up with the new competitors that will emerge, out of the fog, to take market-leading positions seemingly from nowhere.
 

Lambda is an AWS internal efficiency driver. So why no private serverless models?

I’ve been in a number of conversations recently about Functions as a Service (FaaS), and more specifically, AWS’ Lambda instantiation of the idea. For the lay person, this is where you don’t have to actually provide anything but program code — “everything else” is taken care of by the environment.

You upload and press play. Sounds great, doesn’t it? Unsurprisingly, some see application development moving inexorably towards a serverless, i.e. FaaS-only, future. As with all things technological however, there are plusses and minuses to any such model. FaaS implementations tend to be stateless and event-driven — that is, they react to whatever they are asked to do without remembering what position they were in.

This means you have to manage state within the application code (or push it out to an external store). FaaS frameworks are vendor-specific by nature, and tend to add transactional latency, so they are good for doing small things with huge amounts of data, rather than lots of little things each with small amounts of data. For a more detailed explanation of the pros and cons, check Martin Fowler’s blog (HT Mike Roberts).
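To make the statelessness point concrete, here is a minimal Python sketch of a Lambda-style function. The (event, context) handler signature is AWS Lambda's documented convention; the DynamoDB table name and the idea of keeping a running counter there are purely illustrative assumptions, not anything prescribed by the platform.

```python
# Minimal sketch of a stateless, event-driven function (FaaS style).
# Nothing survives between invocations, so any state the function needs
# must arrive in the event or be fetched from an external store.
import json

import boto3  # assumed to be available in the function's runtime

# Hypothetical external store for state; the table name is illustrative only.
STATE_TABLE = boto3.resource("dynamodb").Table("example-invocation-state")


def handler(event, context):
    """Entry point, following AWS Lambda's (event, context) convention."""
    order_id = event.get("order_id", "unknown")

    # Read whatever state exists from outside the function, not from memory.
    previous = STATE_TABLE.get_item(Key={"order_id": order_id}).get("Item", {})
    count = int(previous.get("count", 0)) + 1

    # Persist the updated state externally before returning.
    STATE_TABLE.put_item(Item={"order_id": order_id, "count": count})

    return {"statusCode": 200,
            "body": json.dumps({"order_id": order_id, "seen": count})}
```

The specific store does not matter; the point is that the platform remembers nothing between invocations, so the application has to.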

So, yes, horses for courses as always. We may one day arrive in a place where our use of technology is so slick, we don’t have to think about hardware, or virtual machines, or containers, or anything else. But right now, and as with so many over-optimistic predictions, we are continuing to fan-out into more complexity (cf the Internet of Things).

Plus, each time we reach a new threshold of hardware advances, we revisit many areas which need to be newly understood, re-integrated and so on. We are a long way from a place where we don’t have to worry about anything but a few lines of business logic.

A very interesting twist on the whole FaaS thing is around its impact on server efficiency. Anecdotally, AWS sees Lambda not only as a new way of helping customers, but also as a model which makes better use of spare capacity in its data centres. This merits some thought, not least that serverless models are anything but.

From an architectural perspective, these models involve a software stack which is optimised for a specific need: think of it as a single, highly distributed application architecture which can be spread over as many server nodes as it needs to get its current jobs done. Unlike relatively clunky and immobile VMs, or somewhat less flexible containers, serverless capabilities can be orchestrated much more dynamically, to use up spare headroom in your server racks.

Which is great, at least for cloud providers. A burning question is: why aren’t such capabilities available for private clouds or, indeed, traditional data centres? In principle, the answer is, well, there should be. Despite a number of initiatives, such an option has yet to take off. Which raises a very big question: what’s holding them back?

Don’t get me wrong, there’s nothing wrong with the public cloud model as a highly flexible, low-entry-cost outsourcing mechanism. But nothing technological exists that gives AWS, or any other public cloud provider, some magical advantage over internal systems: the same tools are available to all.

As long as we live in a hybrid world, which will be the case as long as it keeps changing so fast, we will have to deal with managing IT resources from multiple places, internal and external. Perhaps, like the success story of Docker, we will see a sudden uptake in internal FaaS, with all the advantages — not least efficiency — that come with it.

Where We’re Going With Unified Communications

A business is not an island. Businesses must be in constant communication with customers, clients, vendors, contractors, employees, partners, and more — which means businesses need comprehensive communications systems. At the beginning of the decade, communications providers began offering unified communications solutions, which brought together voice, video, instant messaging, email, and other methods into a single, synchronous service. It was a much-needed revolution in business communication.

Yet, since then, communication behavior has shifted. Technology is dramatically more advanced, and workforces are overwhelmingly mobile; traditional unified services simply no longer cover modern business communication needs.
Fortunately, this isn’t the end of unified communications solutions. The industry is shifting alongside business and consumer behavior. Read on to learn more about the future of unified communications.

Mobile Capability

Since their introduction, mobile devices have taken over the workplace. Many employers offer company mobile devices to high-ranking leaders, so they can stay connected wherever they go. Other employees have taken the initiative to bring their own mobile tech. At last count, more than 42 percent of organizations have a BYOD policy in place, yet 87 percent of companies believe their employees use personal devices for work while away from the office.
Like it or not, the workplace is going mobile — and communications needs to keep up. The unified communications solutions of the future must integrate the gamut of mobile devices to be effective at uniting a workforce’s communications systems. Cisco Unified Communications Systems already allows mobile devices to access the corporate network, so businesses that place a high priority on mobile integration should consider transitioning to this progressive unified communications provider.

Cloud Compatibility

The cloud is spreading into every corner of business, so it should be no surprise that unified communications has caught a whiff. Most unified communications providers offer a bevy of cloud solutions — but not all of them are valuable to all businesses.
For example, startups might benefit from fully cloud-based communications, in which case it is critical that unified communications remain compatible with other business applications, like customer relationship management solutions. Meanwhile, larger enterprises with established unified communications might prefer cloud communications features that offer enhanced agility.
For businesses that have yet to connect to the cloud, unified communication systems offer an accessible entry point. As long as business leaders find trustworthy communications providers with strong, secure clouds, there is little risk in trusting the cloud for communications solutions. In fact, the cloud could be the only communications tool of the future.

Collaboration Tools


Because the workforce is more mobile than ever before and because the cloud makes digital solutions simple, collaboration tools have become vital for bringing teams together to accomplish tasks. Applications like Google Docs and Slack make it easier to organize projects, brainstorm, and carry out responsibilities in groups, but without broader, more flexible communications solutions, collaboration can still be a chore.
Thus, unified communications solutions must provide collaboration options — or else be compatible with an organization’s existing collaboration systems. Already, some communications providers offer UCC, or unified communications and collaboration, which is software designed to coordinate collaborative efforts and communication tech. However, it is vital that business leaders understand the resources required by UCC solutions before attempting to add them to their communications infrastructure. UCC can place extreme stress on aging networks, causing latency, lag, and sometimes network failure. UCC might be the future, but to reach that future, some organizations might need to update other aspects of their tech architecture.

Scalability

Scalability has long been an important issue associated with unified communications. However, now that the economy is booming and businesses are growing, it is especially critical that organizations equip themselves with communications solutions that will continue to serve them as they expand.
Unfortunately, many business leaders harbor misconceptions regarding scalability and unified communications. For example, plenty of leaders assume that all communications solutions are infinitely scalable. This is only true in theory; in practice, most systems have upper limits on the number of devices they can service. Businesses that invest in a solution without knowing those limits will either suffer downtime or waste money upgrading to a new system in the near future.
Any time a business considers a new solution, it must balance its current needs with its future projections. In the case of unified communications, this is especially true. The future of unified communications is upon us, and businesses should be ready to build bridges to these necessary technologies — or perish, alone, on their deserted islands.
Jackie is a content coordinator and contributor who creates quality articles on topics like technology, business, home life, and education. She studied business management and is continually building positive relationships with other publishers and the internet community.

Know Your Embedded Database Cost

This post is sponsored by NuoDB.
All thoughts and opinions are my own.

In a software vendor’s move to a SaaS business model, database costs can escalate if the architecture and license model are not built for the cloud.
This blog concludes the “Mount Rushmore” series on choosing a database for software vendors moving to a SaaS business model. I have previously covered SQL functionality, true elasticity, and ACID compliance.
The database will be replicated throughout your customer base unless it’s shared in a multi-tenant application, which means you’ll be dealing with one large, and very critical, database. Regardless, it is not an exaggeration to say your database vendor choice will significantly define the cost structure to serve your software. If the functionality is covered (for now and the future), it comes down to cost. You will seek a low and predictable TCO for your database, all things being equal.  
License pricing can be complex and unpredictable, even for the same product – and over time. You want to deal with a vendor that is very transparent about costs and promotes its pricing at the Mount Rushmore level.
One common pricing model is per-storage or per-server/per-core. A similar model that database vendors offer is user-based pricing, which can be distinguished by user class. There are also tiered per-user models, where you fit your profile into one of a few user buckets, with descending per-user costs that work out to a fixed, recalculated cost per year.
Others use a data-source model. I don’t prefer, or recommend for my end clients, cost models that can unduly influence architecture. Architecture should, and can, exist outside of the cost structure for the underlying software. Source pricing penalizes software companies for database count, and this model also creates cost barriers to ecosystem growth.
However, the database architecture can drive pricing by providing options. While any database can be placed in the cloud, there are certain cloud features that customers have come to expect from cloud software, yet may not be getting from their database of choice. One key feature necessary to be cloud-enabled is the ability to independently scale compute and storage by specifying nodes with an emphasis on one aspect over the other. This avoids compromises in resource allocation and paying for costly, unused resources.
You only pay for what you use, when you use it. Compute and Storage are independently priced. Database software from an on-premises age assumes those are all tightly coupled. In that approach, you are forced to pay for compute and storage even when you aren’t using it.
A solution built for the cloud should allow you to use and pay independently for the storage and compute you need. This arrangement is important to have in the foundation of the cloud database.
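As a back-of-the-envelope illustration of why this matters, here is a small Python sketch comparing the two models. Every rate and usage figure below is hypothetical, chosen only to show the shape of the calculation, not to represent any vendor’s actual pricing.

```python
# Hypothetical comparison of coupled vs. decoupled compute/storage pricing.
# All rates and usage figures are invented for illustration.

HOURS_PER_MONTH = 730

storage_gb = 2_000                # data kept all month
compute_hours_needed = 80         # compute is only busy during batch windows
compute_rate_per_hour = 0.50      # $/node-hour (hypothetical)
storage_rate_per_gb_month = 0.10  # $/GB-month (hypothetical)

# Coupled model: because the storage lives on the compute nodes,
# the nodes must run (and be paid for) all month.
coupled = (HOURS_PER_MONTH * compute_rate_per_hour
           + storage_gb * storage_rate_per_gb_month)

# Decoupled model: storage is paid for continuously, but compute
# is paid for only during the hours it is actually used.
decoupled = (compute_hours_needed * compute_rate_per_hour
             + storage_gb * storage_rate_per_gb_month)

print(f"coupled:   ${coupled:,.2f}/month")    # $565.00
print(f"decoupled: ${decoupled:,.2f}/month")  # $240.00
```

The absolute numbers are meaningless; the point is that the gap between the two grows with how uneven your compute demand is.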
NuoDB takes this approach with its pricing model. Compute, storage and pricing scale independently. Pricing is lower on a per-core basis as well. Compare list prices to Oracle and SQL Server here.
This architecture also works well if you have seasonality to your data and compute needs. With true elasticity, you get the right compute, storage and pricing for what you need.
If you are creating new product lines or converting an existing product portfolio, the database is one of the most important decisions in the transition. This series has given you the top four database requirements for a software vendor moving to a SaaS business model: full SQL functionality, true elasticity, ACID compliance and, finally, a costing model that provides low TCO through the ability to separate compute and storage. Manage your clients’ “gold” – their data – and therefore your success, according to these four. Customers will see the value and you will see the agility and the valuation result.

William McKnight is a contributing Analyst at Gigaom. Read Bio »

ACID is instrumental in the move to SaaS

This post is sponsored by NuoDB.
All thoughts and opinions are my own.

When moving an on-premises app to the cloud, software vendors may find the lack of ACID in the database – and therefore the need to code data management functionality into the application – an insurmountable challenge.
This blog continues the series on the top four considerations for choosing the database for software vendors moving to a SaaS business model. In addition to the importance of SQL functionality and true elasticity, there is the importance of ACID. For certain types of applications, eliminating ACID and having to code data management functionality into the application instead of letting the database take care of it creates substantial complications.
ACID stands for atomicity, consistency, isolation, and durability. Compliance with ACID means that a system supports transactions by guaranteeing that either all or none of the operations in a transaction are committed. This ensures the state of the system is always consistent. In a banking scenario, if I commit only the deposit and the associated withdrawal does not complete, not only is the system inconsistent, but it could be difficult to restore consistency.
Expanding on this (since ACID stands for four distinct properties, not one), we have…

  • Atomicity – All operations or no operations of a transaction committed – no in between
  • Consistency – The database only has valid and committed transactions
  • Isolation – No views into components of uncommitted transactions
  • Durability – Committed transactions are permanent, and a restore will contain all committed transactions
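
To make the banking example above concrete, here is a minimal sketch using Python’s built-in sqlite3 module, chosen only because it ships with Python; any ACID-compliant SQL database behaves the same way. Either both legs of the transfer commit, or neither does.

```python
# Minimal sketch of atomicity: a transfer either fully commits or fully rolls back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()


def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            balance = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                   (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # forces rollback of both legs
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # the withdrawal above was undone along with everything else


transfer(conn, "alice", "bob", 150)  # fails: neither account changes
transfer(conn, "alice", "bob", 60)   # succeeds: both legs commit together
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 40, 'bob': 60}
```

The same guarantee is what lets an application trust the database with the hard cases instead of re-implementing them in application code.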

A significant number of the offerings in the NoSQL and commercial open source (OSS) camps are fundamentally different from what a Fortune 50 company may be used to when it comes to ACID compliance. So is the lack of ACID compliance a fault in the design, or is there a time and place for tools that stop short of ACID compliance?
Nearly the entire NoSQL movement has emerged by playing up the benefits of non-ACID compliance (and taking advantage of the speed of development), such that NoSQL is almost by definition not ACID compliant. I do note that some claim ACID compliance while relying on eventual consistency, and others promote processes and ACID-light features like “lazy writes” to minimize inconsistencies.
The NoSQL movement plays to the importance of the applications it serves. It serves a different class of applications than has been tapped to date with relational database technologies.
Moreover, many relational applications rely on a strict level of consistency not offered by even the ACID-compliant, eventually consistent NoSQL databases. We should be careful that non-ACID and eventually consistent databases do not step over the line into areas the technology is not suited for. One way to sum this up is the confession of a leading engineer on one such product that he wouldn’t use NoSQL for “money.”
ACID compliance has a psychological side as well. ACID provides a greater comfort level to developers and managers, who know that data will not be lost no matter what and that all kinds of data can be entrusted to the database.
As operational applications (especially those dealing with valuable and even business-critical data) increasingly move to the cloud, they often need a database that can combine strict ACID compliance with the elastic scale-out and availability advantages that caused people to turn to NoSQL in the first place. Finding an elastic SQL database – like NuoDB – that preserves ACID while providing that elasticity and availability is a valuable alternative for a software vendor moving to SaaS.

Moving to SaaS: Scale With True Elasticity

This post is sponsored by NuoDB. All thoughts and opinions are my own.
The database needs to be able to scale out and in without disruption or delay.
Software vendors moving to a SaaS business model are usually optimistic about the prospects of the product(s) and the company. Making an investment in a change like this implies a hope for growth.
This dynamic has actually been true for the vast majority of the data projects I’ve been involved with at clients. Other than projects done just for regulatory reasons, the hope is that the project takes off and supports sales, new market penetration, product line expansion, etc.
Projecting the growth is exciting, yet difficult, and there is a clear knock-on effect of this on projecting the data growth. Growth estimates are usually wildly off. Projects tend to out-perform expectations (a good problem) or underwhelm and eventually get shut down. Out-performing expectations means more data for the database than anticipated. With scalable systems and proper monitoring, database professionals have been able to adapt, although this has imposed work cycles on the organization.
More importantly, by definition an on-premises system must be over-specified so resources do not run out. On an automatically monitored system with access to practically unlimited and ready resources such as a cloud, the over-specification – and hence your wasted costs – is negligible. This is the promise of the cloud – immediate access to the resources you need.
The exact level of resources necessary should not only be what is provisioned; it should also be the cost basis.
A cloud database needs to be able to scale out and in without disruption or delay. If it takes hours or days to scale (in a typical relational database, this means up or down) or creates disruption for migration or repartitioning efforts, one of the key benefits is lost. The more granular the growth of the clusters, and the less of a “step ladder” approach to resources, the more elastic the solution is.
Deciding ahead of time what the next rung on the ladder looks like – or negotiating it in haste – is not true elasticity. The more proactive and involved a customer has to be in the process of resource determination, the less elastic the solution is.
If cluster expansion requires either downtime or a series of manual data copies to sync data from an old cluster to a new one, or if database storage capacity is fixed to the node type (compute-dense or storage-dense), it’s not fully elastic.
This elasticity extends to upgrades. The cloud database software, like NuoDB for example, should be able to be upgraded without any downtime, performance impact or interruption of service.
Software vendors moving to a SaaS business model are in a unique position to appreciate avoiding the over-commitment of resources, along with cost savings that are leveraged throughout the customer base, and should be sure to seek true elasticity in their database of choice.

William McKnight is a contributing Analyst at Gigaom and President of McKnight Consulting Group. Read Bio »

AWS meets the enterprise head on

Amazon Web Services must have been a very interesting company to work for over recent years. My conversations with AWS senior executives have sometimes been fraught — not because of any conflict or contention, but rather due to a pervading feeling that discussion gets in the way of activity. The organisation has been so busy doing what it is doing (and making a pretty reasonable fist of it) that it barely has time to stop to talk.

Any thoughts or feedback about how AWS might do things differently, about how the needs of the enterprise could be better met, have been met with flummoxed consternation. It’s all completely understandable: in a company which measures success by the number of new features achieved or services shipped, any question of whether it is doing enough is liable to jar. But still, the question needs to be asked.

Against this background, watching the feet is a far better option than watching the mouth. AWS has come a long way since its early stance of offering an out-and-out alternative to in-house enterprise IT processing and storage, and indeed it continues to work on delivering ‘the’ technology platform for digital-first organisations that need, and indeed desire, little in the way of infrastructure.

From an enterprise perspective however, and despite some big wins, many decision makers still treat the organisation as the exception rather than the norm. In part this is through no fault of AWS; more that you can’t just rip and replace decades’ worth of IT investments, even if you wanted to. In many cases, the cheaper (in both money and effort) option is to make the most of what you have: the age-old blessing and curse of legacy systems.

In addition, as IT staffers from CIOs to tape operatives are only too aware, technology is only one part of the challenge. Over the years, Enterprise IT best practice has evolved to encompass a wide variety of areas, not least how to develop applications and services in a sustainable manner, how to maintain service delivery levels, how to pre-empt security risks and assure compliance, how to co-ordinate a thousand pools of data.

And, above all, how to do so in what sometimes feels like a horseless cart careering down a hill, even as the hill itself is going through convulsions of change, just one slope in a wide technology landscape that shimmers and twists to adapt to what is being called the ‘digital wave’ of user-led technology adoption. Within which AWS itself is both driving the cause of constant change, and feeling its effect.

So what? Well, my perception is that as AWS matures, its philosophy is becoming more aligned to these, very real enterprise needs. This can only be a perception: if you asked AWS execs whether they cared about security, they would look askance, because of course the organisation would not exist without pretty strong security built in. Similarly, the AWS platform is built with the needs of developers front and centre. And so on.

What’s changing is how these areas are being positioned, to incorporate a more integrationist, change-aware, even enterprise-y foundation. For example, development tools are evolving to support the broader needs of integrated configuration and delivery management, DevOps automation and so on. Security teams are not only delivering on security features, but are broadening into areas such as policy-based management and, for example, how to reduce the time to resolution should a breach occur.

The seal on this deal is AWS’ recently announced Managed Services (previously codenamed Sentinel) offering, which brings ITIL-type features — change management, performance management, incident management and so on — into the AWS portfolio. The toolset originally appeared on the radar back in June last year but wasn’t launched until December, perhaps in recognition of the fact that it had to be right. It’s also available both to end-user organisations and service providers or outsourcing organisations.

By incorporating ITIL best practice, AWS has kicked into touch any idea that it doesn’t ‘get’ the IT challenges faced by larger organisations. Meanwhile many other areas of AWS’ evolving catalogue of capabilities, and indeed its rhetoric, reinforce a direction that takes into account the fact that enterprise IT is really, really hard and requires a change-first mindset. AWS’ confirmation that the world will be hybrid for some time yet, the expansion of its Snowmobile storage movement product to a 100 Petabyte shipping container, and indeed simple remarks like “many customers don’t know what they have,” all illustrate this point.

Such efforts are a work in progress: plenty remains for AWS to deliver internally, in terms of how products integrate, how features are provided and to whom: this will always be the case in a rapidly changing world. Nonetheless the organisation is a quick learner which is moving beyond seeing cloud-based services as something ‘out there’ that need to be ‘moved to’, and towards an understanding that it can provide a foundation the enterprise can build upon, offering not only the right capabilities but also the right approach.

With this understanding, AWS can engage with enterprise organisations in a way the latter understand, even as enterprises look to make the kinds of transformations AWS and other technology providers enable. Finally the cloud vendor can earn the right to partner with traditional enterprises, alongside the cloud-first organisations it has preferred to highlight thus far.

Moving to SaaS: Start with SQL Functionality

This post is sponsored by NuoDB. All thoughts and opinions are my own.
To leverage existing SQL tools and skills when moving to the cloud without significant rework, a solution should support ANSI-standard SQL, not a partial or incompatible variant.
If you’re a software vendor moving to a SaaS business model, either by creating new product lines (from scratch or by adding cloud characteristics to existing products) or by converting an existing product portfolio, the transition will impact every aspect of the company, right down to its DNA. New software companies typically start with a SaaS model rather than on-premises software, so this is most often a consideration for legacy software companies today. Customers see the value and software companies see the agility and the valuation result.
 
Ultimately, there are major architectural changes that will be required to succeed. It is a good time to re-evaluate all major architectural components of the solution, including the underlying database, along with hosting plans, customer onboarding procedures, billing and pricing, security and regulatory compliance, monitoring and the assorted challenges associated with the move to SaaS.
 
In these posts, I will address the top four considerations for choosing the database in the move. The database selection is critical and acts as a catalyst for all other technology decisions. The database needs to support both the immediate requirements as well as future, unspecified and unknown requirements. Ideally the DBMS selection should be one of the first technology decisions made for the move.
There are severe consequences to making an inappropriate DBMS selection, including long development cycles related to acquiring new skillsets or converting existing application code, as well as cost and support expansion.
SQL is the long-standing common language of the database, supported by thousands of tools and known by millions of users. Backward compatibility with core SQL is essential, particularly for operational applications that rely on the ACID compliance that usually comes hand-in-hand with SQL databases. SQL is essential as you move to the cloud, and it needs to be standard SQL that works everywhere and scales all of the time, for all queries.
To do this, modern databases (such as NuoDB) should support ANSI-standard SQL for both reads and writes, not limited or partial SQL or an incompatible variant as many NoSQL and NewSQL databases do. The SQL 2011 standard is the latest revision and added improved support for temporal databases, time period definitions, temporal primary keys with referential integrity, and system versioned tables, among other enhancements.
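For a sense of what the standard’s temporal support looks like, here is an illustrative SQL:2011-style definition of a system-versioned table, held in Python strings only for presentation. The column names are arbitrary, exact data types and syntax details vary by product, and no claim is made here about any particular vendor’s level of support.

```python
# Illustrative only: SQL:2011-style system versioning. Column names are
# arbitrary; syntax details differ between products implementing the standard.
SYSTEM_VERSIONED_DDL = """
CREATE TABLE accounts (
    account_id  INTEGER NOT NULL PRIMARY KEY,
    balance     DECIMAL(12,2) NOT NULL,
    sys_start   TIMESTAMP(12) GENERATED ALWAYS AS ROW START,
    sys_end     TIMESTAMP(12) GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (sys_start, sys_end)
) WITH SYSTEM VERSIONING
"""

# Querying the table as of a point in time, per the same standard:
AS_OF_QUERY = """
SELECT account_id, balance
FROM accounts
FOR SYSTEM_TIME AS OF TIMESTAMP '2017-01-01 00:00:00'
"""
```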
SQL remains the most viable and useful method for managing and querying data; it will be a primary language for the foreseeable future and should be the foundation for a software vendor’s move to SaaS today.