Moscow-based Runa Capital has invested €3 million ($3.4 million) in MariaDB, the open-source database company that offers what began as a MySQL fork (Google and Wikipedia are big-name users). Runa, which is headed up by founders of Acronis and Parallels, is already a backer of the Nginx web server and platform-as-a-service outfit Jelastic. In a statement, MariaDB CEO Patrik Sallner said his firm was looking forward to collaborating with Runa and its other open-source portfolio companies in its enterprise push.
Heroku, the Salesforce-owned company that powers the application-development process of hot startups like Lyft and Upworthy, announced a new product line Thursday called Heroku Enterprise. It’s geared for big companies that want to develop the kind of modern applications seen at startups while providing the type of features that many large enterprises want, including security features and access control.
Essentially, the product line claims that large enterprises can now have it both ways: a way to make the type of applications that are typically derived from an agile-development process (with access to trendy technology like containers and new database services) all while being monitored under the iron fist of the enterprise. Kudos to Heroku if it can pull that off.
With Heroku Enterprise, organizations can supposedly now monitor all their developers, applications and resources under one interface. Companies can keep tabs on what applications are in production, which developers are working on an app and how each app is eating up resources, according to a Heroku blog post detailing the announcement.
From the blog post:
[blockquote person="Heroku" attribution="Heroku"]Heroku Enterprise introduces a new kind of application-level access control called a privilege. Privileges strike a balance between fine-grained permissions that are too hard to manage and coarse-grained, all-or-nothing flags that won’t do the job. In this initial release, we are introducing three app level privileges in beta: deploy, operate and manage.[/blockquote]
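The privilege model described in the quote above could be sketched roughly like so. This is a hypothetical Python illustration; the class, method names, and grant logic are invented for the sake of the example and do not reflect Heroku's actual API:

```python
# Hypothetical sketch of app-level privileges: deploy, operate, manage.
# Names and logic are illustrative only, not Heroku's implementation.

PRIVILEGES = {"deploy", "operate", "manage"}

class AppAccess:
    """Tracks which privileges each collaborator holds on one app."""

    def __init__(self):
        self.grants = {}  # user -> set of privileges

    def grant(self, user, privilege):
        if privilege not in PRIVILEGES:
            raise ValueError(f"unknown privilege: {privilege}")
        self.grants.setdefault(user, set()).add(privilege)

    def can(self, user, privilege):
        return privilege in self.grants.get(user, set())

access = AppAccess()
access.grant("dev@example.com", "deploy")
print(access.can("dev@example.com", "deploy"))  # True
print(access.can("dev@example.com", "manage"))  # False
```

The middle ground the quote describes shows up in the small, fixed set of privileges: coarser than per-resource permissions, finer than a single all-or-nothing flag.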
The new product line also comes packed with Heroku Connect, which can link up a company’s Salesforce data to the Heroku platform. [company]Salesforce[/company] said that pricing for Heroku Enterprise will be based on how many resources a company consumes.
Of course, developing the types of applications seen at Lyft and Instacart requires a developer mindset that can contrast with the old waterfall style of development seen at big enterprises, in which releases come less often and the development lifecycle at large is more sequential in nature.
Even with a new product, it’s important for companies to realize that development is not just tool-centric, but also requires a bit of a culture shift.
GoDaddy is continuing its effort to upgrade its infrastructure into something more modern by acquiring Nodejitsu, a Node.js-centric platform-as-a-service provider, the web-hosting company announced today. GoDaddy did not disclose the financial terms of the deal, which was rumored to close this week, but a spokesperson said that four Nodejitsu employees will be coming on board. Given the popularity of Node.js, which just got its own open-source foundation, the deal makes sense for GoDaddy, which has spent the past couple of years modernizing its technology through acquisitions, mobile development and even the creation of its own content-delivery network.
Apprenda, which started out as a .NET-and-Windows-focused Platform-as-a-Service but has since opened up to other languages and technologies, continues to broaden its horizon. A new partnership with Piston Cloud gives it an entry into the OpenStack camp.
Here’s the PR spin from the announcement:
Together, Apprenda and Piston will deliver a tightly integrated solution that enables agile software development teams to build Java and .NET cloud applications and microservices faster in a true hybrid cloud environment. With more enterprise developers turning to both PaaS and OpenStack solutions than ever before, it makes sense to deliver a powerful joint solution.
Piston co-founder Chris MacGown said the deal makes sense given that both Piston CloudOS and Piston OpenStack are meant to be vendor agnostic in terms of underlying hardware. “We believe that developers want to consume platform-as-a-service, and that there’s not yet a one-size fits all approach to meet that need. This means we need to provide, integrate, and partner with PaaS solutions tailored for specific use-cases,” he said via email.
Since Cloud Foundry never focused on the Windows arena, this partnership brings a .NET-focused platform to Piston OpenStack, he added.
If you follow vendor shenanigans, this is an interesting turn because Piston and its other co-founder, OpenStack pioneer Joshua McKenty, have been fairly tightly aligned with Cloud Foundry, the open-source PaaS backed by Pivotal, [company]IBM[/company], [company]HP[/company] and others. In fact, McKenty recently left Piston for Pivotal, which offers PivotalCF, a commercial version of Cloud Foundry.
Cloud Foundry claims to bring PaaS capabilities to your public cloud of choice, and Apprenda CEO Sinclair Schuller has been openly dismissive of public PaaS adoption in general. Apprenda paints itself as the enterprise-class private PaaS, a contention that the PivotalCF folks can’t stand.
So it’s fair to say that Apprenda and Pivotal are not tight. Apprenda recently plastered a billboard on the side of Pivotal’s San Francisco headquarters building (pictured above).
Back to Piston and Apprenda: If this partnership delivers what it promises, customers can get — as Piston CEO Jim Morrisroe put it in a statement — “a scalable turnkey [OpenStack] IaaS and [Apprenda] PaaS out of the box.”
Disclosure: Piston is backed by True Ventures, a venture capital firm that is an investor in the parent company of Gigaom.
Note: This story was updated at 5:55 a.m. PST, December 24 with Chris MacGown’s comments.
It’s 1:00 a.m. You get an email from an “application migration manager” (an automation tool) that says your inventory-control application containers successfully migrated from your AWS instances to your new Google instances. What’s more, the connection to the on-premises database has been reestablished, and all containers are up and running with security and governance services restored.
This happened without any prompting from you — it is an automatic process that compares the cost of running the application containers on AWS versus the cost of running them on Google. The latter proved more cost-effective at that time, so the auto-migration occurred based on predefined policies and moved the containers from one public cloud to another. Of course, the same concept also works with private-to-public or private-to-private clouds, too.
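A policy engine behind such an auto-migration might boil down to a cost comparison like the following. This is a simplified sketch with made-up prices, thresholds, and function names; a real engine would also weigh latency, data-egress fees, and compliance policies:

```python
# Illustrative policy check: migrate containers to whichever cloud is
# currently cheaper, if the savings clear a policy-defined threshold.
# All prices and names here are hypothetical.

def should_migrate(current_cost_per_hour, candidate_cost_per_hour,
                   min_savings_pct=15.0):
    """Return True if the candidate cloud is cheaper by at least the
    policy threshold (expressed as a percentage of current cost)."""
    if current_cost_per_hour <= 0:
        return False
    savings_pct = 100.0 * (current_cost_per_hour - candidate_cost_per_hour) \
        / current_cost_per_hour
    return savings_pct >= min_savings_pct

# e.g., AWS at $0.40/hr vs. Google at $0.30/hr -> 25% savings -> migrate
print(should_migrate(0.40, 0.30))  # True
print(should_migrate(0.40, 0.38))  # False (only 5% savings)
```

The threshold matters: without one, small price fluctuations would trigger constant churn between providers, which is exactly the kind of behavior predefined policies are meant to prevent.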
While these scenarios might sound like science fiction today, the associated capabilities are coming, and fast. The ability to mix and match containers and automate the migration and localization of those containers could change the way we think about cloud development and what private and public PaaS and IaaS platforms provide.
The trouble with existing approaches to cloud computing, including IaaS and PaaS, is that they have a tendency to come with platform lock-in. Once an application is ported to a cloud-based platform such as Google, AWS, or Microsoft, it’s tough, risky, and expensive to move that application from one cloud to another. This is not by design; rather, it’s the result of a market moving so quickly that public and private cloud providers do not yet do a good job of building portability into their platforms. Currently it isn’t in their best interest to do so, and market demand has not yet forced the issue.
Enter new approaches based on old ones — namely, containers — and thus the open-source project Docker. The promise is to provide a common abstraction layer that allows applications to be localized within the container and then ported to other public and private cloud providers that support the container standard. Most do — or will very soon.
Finding new value
At the center of all this is a cloud-orchestration layer that can both provision the infrastructure required to support the containers and perform the live migration of the containers, including monitoring their health after the migration occurs (see the figure below).
Using containers is not a new procedure: They certainly predate Docker. However, auto-provisioning and auto-migration are concepts that were often promoted but remained largely elusive in practice. The use of Docker to turn these concepts into reality has a few basic features and advantages, including:
- The ability to reduce complexity by leveraging container abstractions. The containers remove the dependencies on the underlying infrastructure services, which reduces the complexity of dealing with those platforms. They are truly small platforms that support an application or an application’s services that sit inside of a very well-defined domain: the containers.
- The ability to leverage automation with containers to maximize their portability, and with it their value. Through automation, we’re essentially scripting tasks we could also do manually, such as migrating containers from one cloud to another. This could also mean reconfiguring communications between the containers, such as tiered services or data-service access. Today it’s much harder to guarantee portability and the behavior of applications when using automation. Indeed, automation often relies upon many external dependencies that can break at any time. This remains a problem, but, fortunately, one that is solvable.
- The ability to provide better security and governance services by placing those services around rather than within containers. In many instances, security and governance services are platform-specific, not application-specific. The ability to place security and governance services outside of the application domain provides better portability and less complexity during implementation and operations.
- The ability to provide better-distributed computing capabilities, considering that an application can be divided into many different domains, all residing within containers. These containers can be run on any number of different cloud platforms, including those that provide the most cost and performance efficiencies. So applications can be distributed and optimized according to their utilization of the platform from within the container. For example, one could place an I/O-intensive portion of the application on a bare-metal cloud that provides the best performance, place a compute-intensive portion of the application on a public cloud that can provide the proper scaling and load balancing, and perhaps even place a portion of the application on traditional hardware and software. All of these elements work together to form the application, and the application has been separated into components that can be optimized.
- The ability to provide automation services with policy-based optimization and self-configuration. None of this works without an automation layer that “auto-magically” finds the best place to run each container and deals with configuration changes and other specifics of the cloud platforms where the containers reside.
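The placement idea in the list above, matching each container's workload profile to the platform that suits it best, can be sketched as a simple scoring function. The platform names, workload profiles, and suitability scores below are invented stand-ins for the cost and performance data a real policy engine would use:

```python
# Toy scheduler: pick the platform best suited to each container's
# dominant workload profile. Platforms and suitability scores are
# hypothetical, standing in for a real policy engine's cost/perf data.

SUITABILITY = {
    "bare-metal":   {"io": 3, "compute": 1, "general": 1},
    "public-cloud": {"io": 1, "compute": 3, "general": 2},
    "on-premises":  {"io": 2, "compute": 1, "general": 3},
}

def place(profile):
    """Return the platform with the highest score for this profile."""
    return max(SUITABILITY, key=lambda p: SUITABILITY[p][profile])

containers = {"db-shard": "io", "batch-worker": "compute", "web-ui": "general"}
placement = {name: place(profile) for name, profile in containers.items()}
print(placement)
# {'db-shard': 'bare-metal', 'batch-worker': 'public-cloud',
#  'web-ui': 'on-premises'}
```

This mirrors the example in the list: the I/O-intensive piece lands on bare metal, the compute-intensive piece on a scalable public cloud, and the rest on traditional infrastructure, with the application assembled from the distributed parts.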
While this may seem like distributed-application Nirvana, and certainly a better way to utilize emerging cloud-based platforms, there are many roadblocks here.
The industry must consider the fact that today’s automation and orchestration technology can’t yet provide this type of automation. While it can certainly manage machine instances and even containers using basic policy and scripting approaches, automatically moving containers from cloud to cloud using policy-driven automation, including auto-configuration and auto-localization, is not yet a reality.
Also, we’ve only just begun our Docker container journey. There is a lot we don’t understand about the potential of this technology and its limitations. Taking a lesson from the use of containers and distributed objects years ago, the only way this technology can provide value is through coordination among the clouds that support those containers. Yes, having a standard here is a great thing, but history shows us that vendors and providers have a tendency to march off in their own proprietary directions for the sake of market share. If that occurs, all is lost.
The final issue is that of complexity. While we seemingly make things less complex, the reality over time is that the use of containers as the platform abstraction means that applications will morph toward architectures that are much more complex and distributed. Moving forward, it may not be unusual to find applications that exist within hundreds of containers running on dozens of different models and brands of cloud computing. The more complex these things become, the more vulnerable they are to operational issues.
All things considered, this could still be a much better approach to building applications on the cloud. PaaS and IaaS clouds will still provide the platform foundations and even development capabilities. These, however, will likely commoditize over time, moving from true platforms to good container hosts. It will be interesting to see if the larger providers want to take on that role. Considering the interest in Docker, that could be the direction.
The core question now: If this is the destination of this technology, and of application hosting on cloud-based platforms, should organizations redirect their resources toward this new vision? I suspect that most enterprises aren’t far enough along in cloud computing to make that change. Indeed, the great cloud migration should continue. However, know that we’ll get better at cloud application architectures using approaches that account more for both automation and portability, and we’ll all eventually land here.
Joshua McKenty, one of the early architects of OpenStack while at NASA and a co-founder of OpenStack startup Piston, has joined Pivotal as field CTO for Cloud Foundry. He hopes to make Cloud Foundry, running on OpenStack, into what NASA envisioned several years ago.
While Oracle has a cloud platform that it’s been trying to spread to the masses for a few years, the biggest deal regarding the new cloud upgrade is supposedly the ability for people to use Oracle’s database in the cloud or on-premises in their own environment.
Users of Cask’s open-source big data PaaS will have three options: CDAP Free, CDAP Standard and CDAP Enterprise.
At the Pivotal Summit, the company trots out new dashboards and a way to achieve better redundancy for its version of Cloud Foundry.
Heroku has been at the platform-as-a-service game for a long time now, and it’s still trying to bridge the gap between its web-application-centric roots and the enterprise developers it desires. CEO Tod Nielsen explains how it’s going about that and why he thinks it will succeed.