It’s time to shake up the enterprise software market

Kristoffer is chief executive at Starcounter.
The enterprise software industry as we know it is changing significantly. Recently, it has seen a rise in ‘best of breed’ apps, which in turn has contributed to the fall of the enterprise monopoly. Three factors are fueling this change: the reality of the enterprise problem, the collective app economy, and new technology enabling the next generation of business applications.

The reality of the enterprise problem

Over $600 billion is spent on enterprise software each year. And to date, modern enterprises have had two options: buy big, unfocused packages that excel in nothing; or buy small, focused ‘best of breed’ apps and risk an integration nightmare.
First off, the major software platforms purchased by IT departments provide tools to make thousands of different features for thousands of different types of users. These products are cumbersome because they are overloaded, often perceived as hostile by users, and difficult to learn. What’s more, moving them to the cloud in an attempt to keep up with changing trends does not remove this obstacle. But since these products seem to solve everything, they’ve been successful for quite some time and have become the status quo in the software market.
In the meantime, ‘best of breed’ apps have become the preferred choice of business executives, and they are increasingly winning support among employees. With the consumerization of enterprise technology, products now enter a company from the bottom up rather than the top down.
In the private space, you can easily compose your own suite of software tools to accomplish a task. To throw a birthday party, for example, go to the app store and get a calendar app, a social app, an invitation app and you’re set.
Those who have grown up with the ease of an app store are now entering positions at enterprises. They want their daily business tools, such as accounting and smart marketing tools, to be as accessible as messaging apps and car navigators. Enterprises must support this by delivering efficient tools quickly and easily to employees who need specific software to accomplish a specific task.
Although these have been the two prevalent options among enterprises, the reality is that 68 percent of IT projects fail. These failures are forcing CIOs and CEOs at enterprises to seek change from the status quo.

The collective app economy

When multiple small apps band together, they can upend the status quo – this is the new collective app economy. Today, new technology is allowing this to happen:

  • Allowing enterprises to mix and match best of breed apps without the need for complex and expensive integration projects.
  • Allowing each vendor to be on top of its own food chain.
  • Allowing new business models to outperform the existing ones in the enterprise software business.
  • Allowing app stores, for example, to deliver interoperating apps and to be truly polylithic.

The ‘best of breed’ app vendors often have an innovative edge because of their focus and domain knowledge. Still, they cannot check off as many boxes as the big vendors with their ecosystems of piggy-backers, and buyers shun complex, expensive, lengthy and risky integration projects.
The incumbent software platforms will not be replaced by a single app, no matter how elegant, intuitive, and powerful the app is. That’s because a single app can never replace the full functionality of the large players’ platforms. The best of breed apps must be able to be mixed-and-matched without the need for complex integration projects.
A single mobile payment app, for example, cannot replace a full retail solution. However, combining best of breed apps for accounting, product stock and point of sale results in a virtual retail suite that can outperform the larger players. These new apps are promising, but they must meet two critical conditions to gain any significant market share:

  1. It must be possible to run multiple independent business apps from different vendors side by side in a modern web user interface. Together they create your virtual business solution.
  2. The apps must operate on the same set of data without knowing about each other, as the sketch below illustrates. Requiring every app to be aware of every other app simply does not scale. APIs have been the peer-to-peer glue between apps for 50 years, yet independent business apps still do not operate on a single set of data.
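To make the second condition concrete, here is a minimal, hypothetical sketch (the app roles and the Order type are illustrative, not any vendor’s actual product or API): two independent apps persist to the same store and share only the data model, never calling each other.

    // A shared data model that both apps persist to the same store.
    // Everything here is illustrative; no real vendor API is implied.
    interface Order {
      id: string;
      amount: number;
      paid: boolean;
    }

    // Minimal data-access layer standing in for the shared database.
    interface OrderStore {
      all(): Promise<Order[]>;
      save(order: Order): Promise<void>;
    }

    // "Point of sale" app: writes orders. Knows nothing about accounting.
    export async function recordSale(store: OrderStore, id: string, amount: number) {
      await store.save({ id, amount, paid: true });
    }

    // "Accounting" app: reads the same orders. Knows nothing about point of sale.
    export async function dailyRevenue(store: OrderStore): Promise<number> {
      const orders = await store.all();
      return orders.filter((o) => o.paid).reduce((sum, o) => sum + o.amount, 0);
    }

Neither app imports or invokes the other; the only contract between them is the shared data, which is the “shared data” half of the problem described below.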

As new technology now solves the vital components, if not all, of the “shared data and shared screen” problem, it will have a major impact on enterprise software vendors and enterprises alike.

The next generation of business applications

Technology comes before revolution. Always. Though not typically obvious when first introduced, the importance of truly novel technology is not something reserved for tech geeks; in fact, it is and has always been vital for the evolution of the human race. Modems and computers were not invented to power the web. Rather, the web was discovered because we had computers and modems.
The new generation of in-memory application platforms will not only transform the database industry; it will forever change the enterprise software industry. How? By combining an in-memory database engine, data processing and an application server into a single business operating system. This technology can deliver the capabilities of a huge data center in a single server. It can allow many small, independent applications in combination to replace a monolithic application. It can remove the need for a single application to be the master of all others. Plus, the speed, radical efficiency, low costs and openness will help drive fast and wide adoption.

Time to band together

The enterprise software landscape will change dramatically in the near future. The new software landscape will be simultaneously fragmented and connected, with new business models created by collaboration among many small vendors.
It is time for vendors and talents to band together in order to provide businesses with the freedom to use the best tools needed to meet their goals, instead of being locked into a single vendor that is a jack of all trades, but master of none.

How hybrid will reshape the entire cloud market

Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.
When the phrase “hybrid cloud” is mentioned, some technologists will tell you it is the eventual end state of cloud computing, while other technologists chuckle. Those that chuckle typically view hybrid as a phrase used by vendors and customers who have no cloud strategy at all. But hybrid is real and here to stay. Not only is it here to stay, but the hybrid cloud will also reshape cloud computing forever.
People today imagine the public cloud to be an “amorphous, infinitely scalable computing ether.” They think moving to the cloud rids them of the need to deal with computing specifics, and that cloud makes them location, risk and model independent. They think enterprises that move to the cloud no longer need to depend on pesky IT departments or deal with the risks associated with centralized computing. This perception of computing independence and scale couldn’t be further from the truth.
The promise of cloud is one where anyone who needs compute and storage can get it in an available, as-needed, and robust manner. Cloud computing providers have perfected availability to the point where, even with occasional mass outages, they outperform the service-level agreements (SLAs) of internal IT departments. This does come at a cost, however.
Cloud computing is arguably the largest centralization of technology the world has ever seen and will see. For whatever reason, many people don’t immediately realize that the cloud is centralized, something that should be heavily scrutinized, possibly because the marketing behind cloud is vague and lacks any description of a tangible “place.” Don’t be fooled.
When an enterprise selects a cloud vendor, it is committing to that provider in a meaningful way. As applications are built for or migrated to a cloud, switching costs get very high. The nature of this market is driven by a network effect: assuming all else is equal, each prospective customer of a cloud provider (AWS, Microsoft, etc.) benefits by consuming a cloud that has many customers over one that has fewer, since a larger customer base indicates lower risk and helps drive the economies that make a given cloud attractive.
If we play this future out, we’ll likely see the cloud infrastructure market collapse to just a few massive, global providers. This will be driven partly by the success of individual providers and partly by the consolidation of smaller players who have great technology but simply can’t compete at that scale. For a recent example, look at EMC’s acquisition of Virtustream just prior to Dell’s acquisition of EMC.
A look at recent market share estimates shows exactly that, with Amazon, Microsoft, IBM and Google accounting for 50 percent of the global cloud infrastructure market. One day, these few vendors will likely account for 80 percent of the market. Compare that to the often-criticized banking world, where despite the massive size of today’s banks, the list of banks that hold 50 percent of global deposits is far longer than a handful. If we applied the same standard to cloud computing, we’d certainly be infuriated and demand that these “too big to fail” computing providers be broken up.
To be clear, I’m not suggesting that what’s happening is bad or that public cloud is bad; rather, I’m pointing out the realistic state of cloud computing and the risk created by centralizing control in just a few providers. Cloud would likely never have succeeded without a few key companies making massive bets. The idea of a truly decentralized, global cloud would likely have been the wrong starting point.
Let’s explore the idea that a global decentralized cloud, or at least something more decentralized than what we have now, is the likely end state. Breaking up cloud providers isn’t necessary or optimal. Unlike banking, technology is capable of building layers of abstraction that mitigate these sorts of centralization risks.
Most large enterprises looking to adopt cloud are making two large considerations in their decision process:

  1. They can’t shut down their entire IT department and replace it with cloud. There are many practical reasons why this is unlikely.
  2. Many are keenly aware of the risks associated with depending on a single vendor for all their cloud computing needs.

The first consideration makes it difficult to adopt a public cloud without at least considering how to reconcile the differences with on-premises systems, and the second makes it difficult to commit to one provider in a way that is incompatible with another. The combination of centralization among public cloud providers and the search for symmetry between off-premises and on-premises computing strategies is driving enterprises to explore (and in some cases demand) hybrid capabilities in layers that abstract away infrastructure. In fact, hybrid has become synonymous with multi-cloud.
Technical layers like enterprise PaaS software platforms and cloud management platforms have evolved to provide multi-cloud capabilities and cater to a model where resources are abstract. Over the coming years, we’ll likely see the multi-cloud features in these technology layers lead to a much more decentralized computing model, where something like a PaaS layer fuses resources from public clouds, on-premises infrastructure, and regional infrastructure providers into logical clouds.
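As a rough sketch of what such an abstraction layer implies (the interface and class names here are hypothetical, not any vendor’s actual API), a PaaS or cloud management layer can treat every pool of infrastructure, public or on-premises, as an interchangeable provider behind a single interface:

    // Hypothetical abstraction: each infrastructure pool, public or private,
    // implements the same minimal interface.
    interface ComputeProvider {
      name: string;
      provision(cpus: number, memoryGb: number): Promise<string>; // returns an instance id
    }

    // A "logical cloud" fuses several providers and hides which one is used.
    class LogicalCloud {
      private next = 0;

      constructor(private providers: ComputeProvider[]) {}

      // Naive placement policy for illustration: pick providers round-robin.
      async provision(cpus: number, memoryGb: number): Promise<string> {
        const provider = this.providers[this.next % this.providers.length];
        this.next++;
        const id = await provider.provision(cpus, memoryGb);
        return `${provider.name}:${id}`;
      }
    }

The customer’s control point is the layer itself rather than any single provider underneath it, which is the decentralized end state described below.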
At least in the enterprise space, “private clouds” will really be an amalgam of resources and will behave as the single “amorphous ether” that we tend to ascribe to cloud in the first place. The cloud market will not be one where a few vendors control all the compute and customers are at their mercy. Instead, cloud will be consumed through multi-cloud layers that protect customers from the inherent centralization risk. The end state is a decentralized model with control points owned by the customer through software, a drastic reshaping to say the least.

Mobile video ads are poised to explode—so what’s holding them back?

Josh is chief executive officer at AerServ.
Video has always been one of advertisers’ most powerful engagement tools. And as digital video in general has become a regular aspect of our daily media consumption, advertisers are looking for new ways to leverage it to boost results and gain an advantage over competitors.
Enter mobile video. Last year, a study from digital ad firm Undertone and market research company Ipsos ASI determined that high-impact ads deliver the best brand recall. Not exactly a revelation, but this year, those firms tested the theory on mobile devices and discovered that the same was true: High-impact ads still deliver the highest engagement and brand recall, regardless of screen size. This is good news for both publishers and advertisers, who need not worry about banners and other traditional formats on mobile; they can still grab attention on a small screen by leveraging targeting technology and compelling video content.
Not only that, but mobile also affords advertisers the ability to target specific users at specific locations like no other medium can. Using device IDs, location data, demographic data, browsing behavior and more, advertisers can target at an extremely granular level to home in on the most qualified audiences and boost efficiency and overall results.
So with all those capabilities and benefits, why hasn’t mobile video spending exploded? Why aren’t advertisers, who are constantly looking for new ways to engage today’s ever-connected consumer, absolutely desperate to get into the mobile video game?
Well, like any emerging technology, mobile video still faces certain challenges that need to be overcome, or at least mitigated, before risk-averse agencies and brands will be eager to take the plunge. A fairly common concern about mobile video is its newness; even the industry old guard is learning about it along with everyone else. And that lack of familiarity among the experts and decision makers, as well as its relative lack of field-testing, is enough to keep it off the budget for another year.
Another key challenge associated with being a relatively recent innovation is the fragmentation of the mobile video market. Creative formats vary across devices and platforms and there is a general lack of standards and best practices for both advertisers and publishers. TV buyers want to reuse their TV spots and desktop buyers want to reuse their desktop videos, neither of which is going to seamlessly fit a mobile platform. Desktop Flash VPAID creatives don’t work in mobile environments either. Mobile VPAID, while standardized via IAB, is still emerging and actually varies wildly per vendor.
And boy are there a lot of vendors. And a lot of devices. The mobile space is getting more complex with every release of a new phone, tablet, operating system or feature upgrade. For advertisers and agencies, it becomes extremely difficult to plan for every possible combination of device, OS and ad format, and it can cause significant hiccups in deploying creative.
Further challenges arise with regard to targeting; while mobile offers highly advanced targeting capabilities, it does also lack transparency with regard to context. Specifically, advertisers cannot necessarily be certain that their ads are not running alongside inappropriate or irrelevant video content. Context is extremely important when it comes to brand integrity and getting your money’s worth out of the ad buy, and some advertisers want to see improvement in this area before investing.
Anyone who has been working in the ad-tech space for more than a few years won’t be surprised by what is perhaps the largest hurdle that mobile video must overcome in order to establish itself in the marketer’s standard arsenal: measurement. Measurement challenges have plagued every traditional and digital advertising medium for years, but mobile engagement actually can be tracked through multiple metrics including clicks, views, leads, installs, purchases, foot traffic, etc.
That said, mobile video measurement is not always a piece of cake. Viewability is the industry’s current favorite metric, but advertisers and publishers alike struggle with what it actually means: at what point is a video ad considered viewable, or counted as an impression? Some define a viewable impression as three seconds onscreen, but there is still no industry standard. A concept that seems so simple is still being debated due to new technologies like in-feed or “native” video. In addition, common analytics vendors from the desktop space are not yet mature in the in-app space.
For agencies and marketers concerned with justifying their ad spend with tangible, understandable results, the murkiness of mobile video measurement is a significant stumbling block. For vendors and solutions providers, the chief concern is attribution, or making sure they are appropriately credited and compensated for each conversion. It’s not easy for anyone to take a risk on an emerging tech solution when you don’t even have a clear, validated way of gauging its effectiveness and communicating it to the holders of the purse strings.
Mobile video absolutely can be an advertiser’s secret weapon; it is just crucial that those issues be addressed and clarified where possible. Luckily, there are several ways forward and a plethora of mobile tools available to help get there. Take advantage of tools like the rich interactive video experiences of VPAID and custom video, and, if viewability is a concern, many vendors offer in-app viewability tools for comfort and peace of mind. To deal with creative challenges, it can be useful to partner with a company boasting a solid history of mobile campaign execution, such as Telemetry or Sizmek, while data providers and DMPs such as Neustar and Factual are working every day to improve targeting accuracy and contextual transparency.
Any nascent technology has its growing pains, but the benefits of mobile video are well worth the effort of overcoming those challenges to be at the forefront of an important shift in the industry. More and more tools are emerging for advertisers and publishers to navigate the still somewhat murky waters of mobile video, and soon enough, consumer response will dictate the industry’s rate of advancement in this area. Your consumers have cut the cord — to the TV, to the desktop — and they are holding devices in their purses and pockets capable of delivering the same positive user experience and even better engagement results. The mobile market is ready; you should get ready, too.

Turning data scientists into action heroes: The rise of self-service Hadoop

Mike is chief operating officer at Altiscale.
The unfortunate truth about data science professionals is that they spend a shockingly small amount of time actually exploring data. Instead, they are stuck devoting significant amounts of time wrangling data and pouring resources into the tedious act of prepping and managing it.
While Hadoop excels at turning massive amounts of data into valuable insights, it’s also a notorious culprit for sucking up resources. In fact, these hurdles are serious bottlenecks to big data success, with research firm Gartner predicting that through 2018, 70 percent of Hadoop deployments will not meet cost savings and revenue generation objectives due to skills and integration challenges.
Whether it’s time stuck in a queue behind higher-priority jobs or time spent functioning as a Hadoop operations person (building their own clusters, accessing data sources, and running and troubleshooting jobs), data scientists are wasting time on administrative tasks. Sure, some heavy lifting is necessary to successfully analyze data. But it isn’t the best use of a data scientist’s time, and it’s a drain on an organization’s resources.
That said, how can data scientists stop serving as substitute Hadoop administrators and become analytics action heroes?
Just as the business intelligence industry has moved to a more self-service model, the Hadoop industry is also moving to a self-service model. Operational challenges are moving to the background, so that data scientists are liberated to spend more time building models, exploring data, testing hypotheses, and developing new analytics.
Self-service Hadoop solutions simplify, streamline, and automate the steps needed to create a data exploration environment. Self-service is achieved when a provider (one who runs and operates a scalable, secure Hadoop environment) delivers a data science platform for the analytics team.
With a self-service environment, data scientists can focus on the data analysis, while being confident that the data and Hadoop operations are well taken care of. And these environments can be kept separate from production environments, ensuring that test data science jobs don’t interfere with a production Hadoop environment that is core to business operations, thereby reducing risk of operational mishaps.
As we see a rise in self-service Hadoop, organizations will realize the benefits of analytics action heroes and their super power contributions. Here are a few reasons why:

  • Faster understanding of trends and correlations that drive business action: Self-service tools eliminate the complex and time-consuming steps of procuring and provisioning hardware, installing and configuring Hadoop and managing clusters in production. By automating issues that customers run into in production, such as job failures, resource contention, performance optimization and infrastructure upgrades, data analytics projects run with more ease and speed.
  • Freedom to take risks with more agile data science and analytics teams: Using the latest self-service technology in the Hadoop ecosystem, organizations can gain a competitive edge not previously possible. Teams can experiment with advanced technology in a production environment, without the overhead associated with maintaining an on-premise solution. This allows data scientists to develop cutting-edge products that leverage features in the most advanced software available.
  • Increased time for Hadoop experts to focus on value-added tasks: Operational stability frees up internal resources so Hadoop experts can focus on unearthing data insights and other value-added tasks such as data modeling insights. Simply put, with more time spent on examining the data rather than wrangling it, organizations can uncover insights that drive business forward — and deliver on the true promise of big data.

Hadoop has unlimited potential to drive business forward. Yet it can quickly become a drain on internal operational resources when running in production and at scale. To fully realize big data’s potential, organizations need to devote more time to data science and less to Hadoop infrastructure; self-service tools make this a reality.

Why mass market VR won’t come soon

Gilles is the CEO and founder of News Republic.
A little more than a year ago, Facebook acquired Oculus VR, a virtual reality helmet designer and manufacturer, for $2 billion – kicking off a series of investments and start-ups in the burgeoning field.
All manufacturers – from Intel to Samsung – are following suit and contributing to this new hype. Content studios, equipment developers, study tours, cameras and developers are all rushing into the race. (If you have time, read this Medium article, which I find extremely insightful.) Of course, in a self-fulfilling-prophecy fashion, forecasts are flying through the roof, betting on a market that will reach $150 billion in the next five years!
But the hype ignores some key facts that, if explored closely, suggest that 3D, augmented reality or even VR 1.0 will most likely fail. Virtual reality will continue to be virtual for many years – at least for the mass market – just as augmented reality and 3D have. Here are a few big reasons why.

It offers a pathetic user experience.

I’m not always sure that everyone writing about VR has actually tested the equipment and user experience. Setting up and using a virtual reality helmet is pretty painful for the following reasons:

  • The set-up is convoluted. Except for the Samsung VR helmet, the level of expertise needed is far above that of the average tech user. In fact, the technical skills needed to use most available VR products mean that only those with a strong tech background will be able to use them.
  • The screen resolution. For good immersion, you need a decent resolution, and the fact that you watch a screen so close to your eyes raises the bar even higher. A 4K resolution is the minimum needed, and yet none of the VR helmets offers it.
  • Smooth movement. Once you have great definition, you need a high screen refresh rate, or FPS (frames per second), to offer a smooth display when you move around. The correct level is around 90 to 120 FPS.
  • Hardware console. To drive a 4K picture at 90 to 120 FPS properly, you need a high-tier, boosted PC, a configuration that starts at around $2,000 (see the rough calculation after this list). As a point of reference, even the latest console, the PS4, is not powerful enough to run a 4K game.
  • Leash. It is never shown in photos of VR, but you always need a cable linking your helmet to the console or PC (the only exception being the Samsung VR, which is a plug-and-play solution with a low resolution). It’s pretty uncomfortable after a while.
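A rough back-of-the-envelope calculation shows why the hardware requirement is so steep. Assuming “4K” means 3840 × 2160 and comparing it with a common 1080p/60 gaming target (illustrative numbers, not benchmarks of any particular helmet):

    // Pixels the GPU must render per second for a given resolution and frame rate.
    function pixelsPerSecond(width: number, height: number, fps: number): number {
      return width * height * fps;
    }

    const console1080p60 = pixelsPerSecond(1920, 1080, 60); // ≈ 124 million
    const vr4k90 = pixelsPerSecond(3840, 2160, 90);         // ≈ 746 million
    const vr4k120 = pixelsPerSecond(3840, 2160, 120);       // ≈ 995 million

    console.log(vr4k90 / console1080p60);  // ≈ 6x the pixel throughput
    console.log(vr4k120 / console1080p60); // ≈ 8x

Driving roughly six to eight times the pixel throughput of a typical console game is what pushes the required PC into the $2,000 range.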

There’s no customer.

Just because something is innovative, cool, and has massive financial backing doesn’t mean it will be inherently successful. The most successful products solve a problem for customers or fulfil a need. At its core, virtual reality does neither. For the moment, there are only three potential types of users that seem to be candidates for the VR market.
The first of these user types? The gamer. At first glance, the gamer sounds like VR’s number one customer. And it is true that VR provides real added value to video games, especially in the first-person shooter and racing categories. Plus, gamers are usually technologically savvy and open to investing. Nevertheless, the segment is very demanding. The innovation around 3D games and 3D TV targeted these same people as a natural audience, yet mass-market 3D never took off, even for gamers. There is a risk that the same thing will happen with VR.
Then there’s the porn watcher. While consumers of adult content would seem like an obvious major target, there are several, seemingly insurmountable obstacles in the way of making virtual porn a reality. For one, the equipment to provide feedback and interaction for the user doesn’t exist for the mass market. In addition, this market isn’t as fanatical as the gaming market, making it less likely that they’d be willing to shell out the big bucks required to develop the products necessary.
And finally, there’s the entertainment enthusiast. The ability to attend a concert, a show, or a game in virtual reality is pretty attractive. Even if it is a real market, the question is: will it be big enough to support a full industry?

It’s too expensive for the user.

The price of a helmet is around $400 to $600. But for it to work properly, you need a PC that costs more than $1,000, so the full package comes close to $2,000. While some enthusiastic geeks will invest at that price, this range keeps VR from becoming a mass-market product.
Running a sanity check against previous, similar innovations such as 3D, Google Glass and augmented reality, factoring in the operational constraints that virtual reality imposes on the consumer, and taking into account the technical challenges and the potential markets, I have raised many open questions about the future of VR. While it undoubtedly has potential for today’s businesses and civil organizations, betting on it for the mass-market segment is a high-risk venture in the short run.
Nevertheless, with the evolution of the underlying technologies – especially on the hardware side – virtual reality should be ready for the mass market in five to 10 years. Will companies be patient enough to wait that long? One thing is for sure: Facebook will.
Gilles Raymond is the CEO and founder of News Republic, a mobile news syndicator. Gilles wrote a master’s thesis called “Virtual Reality Myth and Reality” in 1994, and has been following the market ever since.

Why unikernels might kill containers in five years

Sinclair Schuller is the CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.
Container technologies have received explosive attention in the past year – and rightfully so. Projects like Docker and CoreOS have done a fantastic job at popularizing operating system features that have existed for years by making those features more accessible.
Containers make it easy to package and distribute applications, which has become especially important in cloud-based infrastructure models. Being slimmer than their virtual machine predecessors, containers also offer faster start times and maintain reasonable isolation, ensuring that one application shares infrastructure with another application safely. Containers are also optimized for running many applications on single operating system instances in a safe and compatible way.
So what’s the problem?
Traditional operating systems are monolithic and bulky, even when slimmed down. If you look at the size of a container instance – hundreds of megabytes, if not gigabytes – it becomes obvious that there is much more in the instance than just the application being hosted. Having a copy of the OS means that all of that OS’s services and subsystems, whether they are necessary or not, come along for the ride. This massive bulk conflicts with trends in the broader cloud market, namely the move toward microservices, the need for improved security, and the requirement that everything operate as fast as possible.
Containers’ dependence on traditional OSes could be their demise, leading to the rise of unikernels. Rather than needing an OS to host an application, the unikernel approach allows developers to select just the OS services from a set of libraries that their application needs in order to function. Those libraries are then compiled directly into the application, and the result is the unikernel itself.
The unikernel model removes the need for an OS altogether, allowing the application to run directly on a hypervisor or server hardware. It’s a model where there is no software stack at all. Just the app.
There are a number of extremely important advantages for unikernels:

  1. Size – Unlike virtual machines or containers, a unikernel carries with it only what it needs to run that single application. While containers are smaller than VMs, they’re still sizeable, especially if one isn’t careful about the underlying OS image. An application that had an 800MB container image could easily come in under 50MB as a unikernel. This means moving application payloads across networks becomes very practical. In an era where clouds charge for data ingress and egress, this could save not only time but also real money.
  2. Speed – Unikernels boot fast. Recent implementations have unikernel instances booting in under 20 milliseconds, meaning a unikernel instance can be started inline to a network request and serve the request immediately. MirageOS, a project led by Anil Madhavapeddy, is working on a new tool named Jitsu that allows clouds to quickly spin unikernels up and down.
  3. Security – A big factor in system security is reducing surface area and complexity, ensuring there aren’t too many ways to attack and compromise the system. Given that unikernels compile only what is necessary into the application, the surface area is very small. Additionally, unikernels tend to be “immutable,” meaning that once one is built, the only way to change it is to rebuild it. No patches or untrackable changes.
  4. Compatibility – Although most unikernel designs have focused on new applications or code written for specific stacks capable of compiling to this model, technologies such as Rump Kernels offer the ability to run existing applications as unikernels. Rump kernels work by componentizing various subsystems and drivers of an OS and allowing them to be compiled into the app itself.

These four qualities align nicely with the development trend toward microservices, making discrete, portable application instances with breakneck performance a reality. Technologies like Docker and CoreOS have done fantastic work to modernize how we consume infrastructure so microservices can become a reality. However, these services will need to change and evolve to survive the rise of unikernels.
The power and simplicity of unikernels will have a profound impact during the next five years, which at a minimum will complement what we currently call a container, and at a maximum, replace containers altogether. I hope the container industry is ready.

How the internet of things will power the Intelligence Age

We’re currently shifting from the Information Age to the Intelligence Age. The Intelligence Age will be characterized by autonomous communication between intelligent devices that are sensitive to a person’s presence and respond by performing a specific task that enhances that person’s lifestyle. The shift is driven by the consumer’s desire for efficiency, particularly in connection with everyday tasks that can be easily automated. And the costs associated with connected devices are no longer prohibitive, so companies of all sizes are able to bring products to market.

Consumer desire

Consumers are infatuated with technology that uses connectivity and machine learning to track and analyze everyday habits. They’re willing to let products track their locations, conversations, steps, eating, spending and other behavior because the product creates a seamless experience that couldn’t be achieved otherwise. Google Now, for example, incorporates data from a user’s calendar, web searches and location to present her with relevant information and suggestions throughout the day. Google Now alerts the consumer to weather, traffic and restaurants nearby and delivers location-based reminders to her phone. It’s a personal assistant that uses data to make a consumer’s life more efficient.

Cost of innovation

Five years ago, it was extremely expensive to manufacture the necessary parts for the connected devices that exist today. However, the rise of smartphones and tablets that use similar components created an increase in the production of components, which led to a rise in the number of manufacturers and an array of price points for varying specifications or quality of product. This made it feasible for companies to purchase radios, sensors, cameras and other materials at reasonable prices.

Once cost was no longer prohibitive, innovation began, and today even the smallest startups can afford (with the help of online crowdfunding in many cases) to build an idea. Planet Labs, for example, is leveraging access to these components to create the next generation of earth-imaging satellites at a fraction of the cost and time it takes to build traditional satellites. By using basic smartphone components, Planet Labs has launched 71 satellites into orbit in the last 16 months. These satellites produce affordable, real-time images that the government and agricultural industry can use to evaluate geological occurrences. 

Ambient intelligence

Ambient intelligent devices sense a user’s presence, movement and behavior, analyze that data in order to learn about that user, and then make an intelligent decision to perform a task based on the data. For example, Nest learns about a user’s schedule and uses that information to automate climate control. New companies like Zuli, Iotas and Spire are all entering the market over the next six months and will focus on using data to enable their products to make intelligent decisions based on user habits. Zuli is developing a recommendation engine based on a person’s presence in a specific room, for instance, that will allow for adjustment of the room’s temperature, lighting and music.

Market opportunity

As you move through your everyday life, be conscious of the moments in which your repetitive actions have limited or no tangible effect on your environment. These moments are examples of where, at some point in the near future, the Intelligence Age will deliver enhanced experiences that turn the mundane into the remarkable. They are opportunities to develop new products and services that will create the next economic boom in America and worldwide. The companies that capitalize on these opportunities will be the first publicly traded companies to be valued at a trillion dollars.

Mark Spates is head of Logitech’s smart home platform, founder of iotlist.co and president of the Internet of Things Consortium.

One big way that book publishing startups can succeed now

It’s been more than seven years since the introduction of the first Kindle. Ebook market share seems to be stabilizing at around one third of total books sold in the U.S., according to the latest reports. But ebooks are just the beginning–the detonator, in a way, of a decade-long disruption of the traditional publishing landscape.

Publishers and agents have certainly “adapted,” but have largely failed to carry innovation forward; distribution channels have been disrupted, but the creative process around books and the business model of publishing remain, for now, unchanged.

As often happens when technology erupts in a non-tech-heavy industry, numerous opportunities have emerged for smaller players: namely authors, freelancers, and startups. To take advantage of the changing industry landscape, however, those small players will have to master the delicate mix of strong technology and intuitive user experience (UX) needed to succeed in a tech-unsavvy industry.

Publishers and “tech”

At the Frankfurt Book Fair last October, startup founder John Pettigrew from Futureproofs noted that “Until now, publishing companies, as any other big corporations, have been adopting several softwares that came with ‘how-to’ manuals.” Pettigrew was identifying the lack of technological innovation in the publishing industry, which continues to rely on the same old technology despite readers’ and authors’ changing needs.

Case in point? HarperCollins, considered the most forward-thinking publisher out there, has introduced Bookperk — its latest digital product, which just happens to be a glorified email listserv. Distributors like Amazon, Kobo or B&N have been offering customers specials and customized recommendations via email for years. But publishers have been held back by the limitations of outdated technology, along with an understandable reluctance to invest heavily in digital (after all, most of their revenue still comes from print books and bookstores).

That leaves room for individual authors to take advantage of digital formats that bring control of the publishing value chain into their hands (i.e., selling directly to readers). And in turn, authors have created opportunities for startups by generating a market wholly nonexistent until the early 2010s: independent publishing services.

Addressing real needs with strong technology

Many startups that have thrown their hats in the ring have confronted one of two challenges: they know the market’s needs but are unable to build the technology, or they come with great technology but don’t know how to “geek it down.” Let’s give a couple of examples: Editorially and Net Minds.

Editorially was trying to solve an obvious problem: the vast majority of authors are still writing on Microsoft Word, software that’s not made for writing books and stories, and generates formatting issues when converting to EPUB and MOBI files.

Editorially created a beautiful collaborative writing tool and editing platform, and received VC funding most startups only dream of. But it went under because it “failed to attract enough users to be sustainable.” The technology behind Editorially was great, but for authors to embrace a new editing tool, it needs to look and feel like what they’ve been using for decades — only simpler and more effective. That’s what good UX means in the publishing world.

Net Minds had the opposite problem. It had the awesome vision that authors could share royalties with the editors, designers, and marketers who worked to bring their books to life. The founders had knowledge of the market, as well as a good network thanks to CEO Tim Sanders, a bestselling author and speaker. However, it failed for the same reason many startups out there fail: the founders didn’t get along. Or more precisely, the tech founders didn’t get along with the non-tech ones.

Creating the right UX

User Experience, in my opinion, is one of the top factors that will ultimately dictate any success or failure in this industry. Be it a marketplace, an online writing tool, or a distribution channel–and be it aimed at publishers, authors or other industry professionals–emerging tech needs to feel intuitive to its users.

One of the most impactful examples of UX carrying the day is Smashwords, the startup founded by Mark Coker in 2008. “The rise of Smashwords is the story of the rise of self-publishing,” Coker wrote in August last year.

Smashwords basically allows authors to convert their manuscripts to the right electronic formats, then distributes them across all major e-retailers, aggregating the right metadata so authors only have to enter it once. Though some competitors offer more features and flexibility, Smashwords’ superior UX condemns these competitors to a narrower segment of the market.

There are few other industries out there as exciting and full of opportunities as publishing. It’s up to smaller players to inject the book industry with new vitality and carry on the disruption started by Amazon.

Ricardo Fayet is a co-founder of Reedsy, an online marketplace that enables authors to directly access the wealth of editing and design talent that has started leaving major publishers over the past few years. A technology and startup enthusiast, he likes to imagine how small players will build the future of publishing.

Why 2015 is the year of encryption

During a visit to Silicon Valley earlier this month, President Obama described himself as “a strong believer in strong encryption.” Some have criticized the president for equivocating on the issue, but as “strong believers” ourselves, we’ll take him at his word. Obama isn’t alone; everyone is calling for encryption, from activists to engineers, and even government agencies tasked with cybersecurity.

In the past, using encryption to secure files and communication has typically only been possible for technically sophisticated users. It’s taken some time for the tech industry and the open source community to ramp up their efforts to meet the call for widespread, usable encryption, but the pieces are in place for 2015 to be a turning point.

Last fall, Apple and Google announced that the newest versions of iOS and Android would encrypt the local storage of mobile devices by default, and 2015 will be the year this change really starts to take hold. If your phone is running iOS 8 or Android 5.0 Lollipop, photos, emails and all the other data stored on your device are automatically secure against rummaging by someone who happens to pick it up. More important, even the companies themselves can’t decrypt these devices, which is vital for protecting against hackers who might otherwise attempt to exploit a back door.

Of course the protection from these updated operating systems relies on user adoption, either by upgrading an old device or buying a new one with the new OS preinstalled. Gigaom readers might be on the leading edge, but not everyone rushes to upgrade. Based on past adoption trends, however, a majority of cell phone users will finally be running one of these two operating systems by the end of 2015. As the Supreme Court wrote last year, cell phones are a “pervasive and insistent part of modern life.” The world looks a whole lot different when most of those phones are encrypted by default.

There are two more developments involving encryption that might not make the front page this year, but they’re just as important as the moves by Apple and Google, if not more so.

First, this month saw the finalization of the HTTP/2 protocol. HTTP/2 is designed to replace the aging Hyper-Text Transfer Protocol (HTTP), which for almost two decades has specified how web browsers and web servers communicate with one another. HTTP/2 brings many modern improvements to a protocol that was designed back when dial-up was king, including compression, multiplexed data transfers, and the ability for servers to preemptively push content to browsers.

HTTP/2 was also originally designed to operate exclusively over encrypted connections, in the hope that this would lead to the encryption of the entire web. Unfortunately that requirement was watered down during the standards-making process, and encryption was deemed optional.

Despite this, Mozilla and Google have promised that their browsers will only support encrypted HTTP/2 connections—which means that if website operators want to take advantage of all the performance improvements HTTP/2 has to offer, they’ll have to use encryption to do so or else risk losing a very large portion of their audience. The net result will undoubtedly be vastly more web traffic being encrypted by default.

But as any sysadmin can tell you, setting up a website that supports encryption properly can be a huge hassle. That’s because in order to offer secure connections, websites must have correctly configured “certificates” signed by trusted third parties, or Certificate Authorities. Obtaining a certificate can be complicated and costly, and this is one of the biggest issues standing in the way of default use of HTTPS (and encrypted HTTP/2) by websites.
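For a concrete sense of what that configuration involves, here is a minimal sketch of a TLS-enabled server using Node’s standard https module (written in TypeScript; the file paths are hypothetical, and in practice the certificate must be issued by a trusted Certificate Authority for the site’s domain):

    import { readFileSync } from "fs";
    import { createServer } from "https";

    // Hypothetical paths: the private key stays on the server, and the
    // certificate (signed by a CA) is what browsers verify.
    const options = {
      key: readFileSync("/etc/ssl/private/example.com.key"),
      cert: readFileSync("/etc/ssl/certs/example.com.crt"),
    };

    createServer(options, (req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Hello over HTTPS\n");
    }).listen(443);

Generating the key, proving control of the domain to a CA, installing the files, and keeping the certificate renewed is exactly the overhead the project described below aims to remove.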

Fortunately, a new project launching this summer promises to radically lower this overhead. Let’s Encrypt will act as a free Certificate Authority, offering a dramatically sped-up certificate process and putting implementation of HTTPS within the reach of any website operator. (Disclosure: Our employer, the Electronic Frontier Foundation, is a founding partner in Let’s Encrypt.)

Of course there are sure to be other developments in this Year of Encryption. For example, both Google and Yahoo have tantalizingly committed to rolling out end-to-end encryption for their email services, which could be a huge step toward improving the famously terrible usability of email encryption.

Finally, we’d be accused of naiveté if we didn’t acknowledge that despite President Obama’s ostensible support, many high-level law enforcement and national security officials are still calling for a “debate” about the balance between encryption and lawful access. Even putting aside the cold, hard fact that there’s no such thing as a “golden key,” this debate played out in the nineties in favor of strong encryption. We’re confident that in light of the technical strides like the ones we’ve described, calls for backdoored crypto will come to seem increasingly quaint.

Andrew Crocker is an attorney and fellow at the Electronic Frontier Foundation. Follow him on Twitter @AGCrocker.

Jeremy Gillula is a staff technologist at the Electronic Frontier Foundation. Prior to EFF, Jeremy received his doctorate in computer science from Stanford, and a bachelor’s degree from Caltech.

Cloud hosting services and the future of lock-in

For a software product, identifying product–market fit, getting to market, and then scaling to meet business demands need to happen instantaneously. Time is still money, and we’re now thinking in terms of dozens of deploys per day. Optimizing your time and staying focused on your core product is essential for your business and your customers.

With that increased need for moving fast, many teams started using the cloud to get a large infrastructure off the ground quickly. But those cloud services, and especially the latest iteration of services built on top of the cloud, can lead to a level of lock-in that many teams aren’t looking for.

Cloud revolution, service evolution

Over the past few years, cloud and virtualization have revolutionized how we build applications and the speed with which we can build them. These innovations have leveled the playing field and enabled small development teams to build massive applications; however, this has also pressured them to build, ship and deliver products more quickly than their competitors.

Using the cloud and deploying applications on immutable servers increased overall performance and drastically reduced the time many teams spend maintaining infrastructure. Because systems scale as needed, we are able to stop thinking about the individual server and instead focus on the experience that the collection of servers provides our customers. The individual server is irrelevant: it’s short-lived, specialized for one task and replaced constantly.

We’re now entering the next evolutionary phase of infrastructure, the service. While services to host our applications, like Heroku and AppEngine, have been around for a while, they’ve now started to hide more of the complexities of running infrastructure. For example, the recently launched Amazon Web Services Lambda and EC2 container services, as well as lightweight VM-like containers like Docker, hide many of the complexities of running background tasks.

These services promise to make building a large infrastructure from micro-services much simpler. Getting those micro-services up and running without having to think about the underlying infrastructure and communication between those services dramatically decreases the time needed to build your systems. Furthermore, it limits the code you would have to write to the absolute minimum, thus making sure you stay focused on building your product, without the distraction of building and maintaining infrastructure.

Lock-in

The biggest concern I’ve heard when discussing this topic is potential lock-in to a specific provider. I see three different lock-in scenarios in this kind of infrastructure.

Moving infrastructure takes a base effort

When you change providers, there is always work involved that locks you into the current provider. Even with an infrastructure built on standard tools and frameworks, you’ll have to transfer your data, change DNS, and test the new setup extensively. Better tooling can make this easier, but it can’t remove the effort entirely.

Code-level lock-in

Google App Engine is an example of deep code-level lock-in. It requires you to build your application in a very specific way tailored to their system. This can give you major advantages because it’s very tightly integrated into the infrastructure, but for many teams this deep lock-in is too risky.

Architectural lock-in

An example of a service with minimal code-level lock-in but major architectural lock-in is Amazon Web Services Lambda. In the first iteration of Lambda, you write Node.js functions that are invoked either through the API or in response to specific events in S3, Kinesis or DynamoDB.
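To illustrate the shape of such a function, here is a small TypeScript sketch of an event handler in the style Lambda encourages (the event payload is simplified and the processing logic is hypothetical; this is not the exact 2015 Lambda signature):

    // Simplified version of the event payload S3 sends to a function.
    interface S3Event {
      Records: Array<{
        s3: { bucket: { name: string }; object: { key: string } };
      }>;
    }

    // One tiny, single-purpose function: the platform decides when to run it
    // and how many copies to run.
    export async function handler(event: S3Event): Promise<void> {
      for (const record of event.Records) {
        const { name } = record.s3.bucket;
        const { key } = record.s3.object;
        // Business logic goes here: resize an image, index a document, etc.
        console.log(`Processing ${key} from ${name}`);
      }
    }

The code itself is plain Node-style TypeScript and easy to move elsewhere, but the wiring that triggers and scales it exists only inside AWS, which is where the architectural lock-in described below comes from.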

For any sufficiently complex infrastructure, this could lead to dozens or hundreds of very small functions that aren’t complex by themselves and don’t carry major lock-in at the code level. But you can’t take those Node functions and run them on another server or hosting provider. You would need to build your own system around them, which means high architectural lock-in.

On the plus side, there is now a lot of infrastructure we simply don’t have to deal with anymore. Events are fired somewhere in your infrastructure and your functions will be executed and scaled automatically.

Heroku, AWS and other cloud providers have seen the writing on the wall and are decreasing code-level lock-in while providing new services that create architectural lock-in.

It’s up to every team to decide which of those lock-in scenarios is an acceptable trade-off. A microservice-oriented architecture built on technology you can use with a variety of providers (e.g., frameworks like Rails or Node) can offset some of that trade-off. You can build on top of the services in the beginning and move parts of your infrastructure somewhere else for more control in the future. But it does require a different approach to building infrastructure than we have today.

Flo Motlik is the cofounder and CTO of Codeship, a continuous integration and deployment service. Follow him on Twitter @flomotlik or @codeship.