Why BI’s shift to stream intelligence is a top priority for CAOs

Nova is co-founder and CEO at Bottlenose.
A quick search on LinkedIn reveals that thousands of professionals in the United States now hold the recently established title of Chief Analytics Officer (CAO). Analytics officers have ascended to the C-suite amongst a constellation of new roles and responsibilities that cut across departmental lines at Fortune 1000 companies. These positions are driven by the influx of data with which companies now need to contend, even in industries that were not previously data-oriented.
The CAO’s role most closely aligns with business intelligence, leveraging data analytics to create real business value and inform strategic decisions. Yet, the CAO’s responsibilities also encompass discovering the various and constantly changing threats and opportunities impacting a business.
The most dramatic shift in data-driven business intelligence that has necessitated this role is the sheer volume, variety, and velocity of data now available to the enterprise. Data is no longer just static or historical, but real-time, streaming, unstructured, and abundant from both public and proprietary sources.
Unstructured data is the fastest growing category of data, and within it, stream data – time-stamped series of records – is the fastest growing sub-category. Stream data spans messaging, social media, mobile data, CRM, sales, support, IT data, sensor and device data such as the emerging internet of things, and even live video and audio.
The CAO’s charge is to enable the enterprise to deal with all of this data and generate timely, actionable intelligence from it – increasingly in real-time. I’ve been calling this process of business intelligence for streaming data “stream intelligence” for a while now. Among the dozens of CAOs I’ve spoken with recently, moving from classical business intelligence on static data to stream intelligence is one of their biggest priorities for 2016. This emerging form of BI creates unique problems for enterprise companies, but it also creates unique opportunities for those companies to discern and discover trends early, while there is still time to act on them.

Understanding BI 3.0

Thomas Davenport is a professor at Babson College, a research fellow at the MIT Center for Digital Business, and a senior advisor to Deloitte Analytics. He has written eloquently about these topics since 2010 and offers a framework for thinking about the past, present, and future of analytics.
For Davenport, BI 1.0 was about traditional analytics, providing descriptive reporting from relatively small internally sourced data. It was about back-room teams and internal decision reports.
BI 2.0 was about complex, much larger unstructured data sources. It was also about new computational capabilities that ran on top of traditional analytics. With big data, we saw data scientists first emerge, alongside several waves of new data-based products and services. This is where we are today.
BI 3.0 is about rapid, agile insight delivery – analytical tools at the point of decision, and decision making at scale. Today, analytics are considered a key asset enabling strategic decision making, not merely a mirror reflecting an organization’s past and present.
The “how” of accomplishing this vision amounts to balancing support for the “three V’s” of data — volume, variety, velocity — in the enterprise analytics stack. Most big data and BI technologies to date were engineered to solve volume and variety, with very little emphasis placed on the velocity of data and analytics. This has to change.
Analysts are already drowning in volume, variety, and velocity of data. To make matters worse, the rate at which new analysts are being trained is far less than the growth rate of demand for analysts and data scientists. In fact, the gap between the supply of analyst hours and the demand for analyst cycles is growing exponentially. This means that there will never be enough data scientists to cope with the rise of unstructured stream data in the enterprise.
To solve the growing “analyst gap” we either need to figure out how to make exponentially more analysts, or we have to figure out how to make the finite supply of analysts exponentially more productive. I prefer the latter solution, but to accomplish it, analysts need automation.
Manual analysis by humans is still possible for structured data, but not for streaming data. Streaming data is just too complex, and changes too fast for human analysts to keep up with on their own. Automation is the only practical way to keep up with changing streams of big data.
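To make that concrete, here is a minimal sketch of the kind of automation involved — a rolling z-score detector that flags anomalous points in a metric stream without a human reviewing every record. The function name, window size, and threshold are all hypothetical choices for illustration, not a specific product's method.

```python
from collections import deque
import math

def detect_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) for points that deviate from the rolling
    window mean by more than `threshold` standard deviations."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((x - mean) ** 2 for x in history) / window
            std = math.sqrt(var) or 1e-9  # avoid dividing by zero on flat signals
            if abs(value - mean) / std > threshold:
                yield (i, value)
        history.append(value)

# A steady signal with one spike: only the spike is flagged.
signal = [10.0] * 100 + [50.0] + [10.0] * 20
print(list(detect_anomalies(signal)))  # [(100, 50.0)]
```

The window and threshold trade recall for noise: a shorter window adapts faster to changing streams, while a higher threshold reduces the alerts an analyst must triage.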
BI 3.0 emphasizes real-time business impact and makes use of automation in the analytics process. This will increasingly be achieved with a seamless blend of traditional analytics and big data. BI 3.0 analytics are now integral to running the business day-to-day and hour-to-hour.

Following the flow of big data investment

I’ll close by talking about where big data investment dollars are starting to go. In short, real-time stream data is now a major priority, and historical data is now riding in the back seat.
According to Harvard Business Review, 47 percent of big data expenditures are directed toward process improvement, 26 percent toward accommodating a greater variety of data, and 16 percent toward handling a greater volume of data. Velocity of data today represents a very small slice, at three percent of overall investment, but that slice will grow quickly in 2016.
In fact, organizations that have prioritized real-time data are outpacing all others, according to Aberdeen Group. Companies that are competent across volume, variety, and velocity alike have seen 26 percent growth in their pipelines, a 15 percent increase in cash generated, and a 67 percent reduction in operational costs.
It’s hard to argue with those numbers. CAOs understand that BI 3.0 is happening now, which is why it’s become a top priority for 2016.

Facing threat of extinction: How pro services can compete in the age of AI

Alastair is president at Huddle.
The professional service industry — any third-party consultancy or support service that helps businesses operate better — is massive. In the United States it comprises almost 20 million people, roughly 13 percent of the total American workforce. Those workers are in grave danger of finding themselves jobless in as little as 10 years.
The latest jobs reports from the Bureau of Labor Statistics show that the American services industry is currently thriving, but it is starting to come under extreme threat. Burgeoning markets overseas, which used to be industrial economies, are transitioning into service-based economies. That flood of new (oftentimes extremely inexpensive) labor means incredible new competition for almost every aspect of the service industry, including professionals like lawyers and paralegals, accountants, and business consultants.
At the same time, businesses and professional service workers around the world face an even greater common threat: replacement. The rise of Artificial Intelligence (AI) and automation means that professional service businesses don’t only have to worry about human competitors; they have to worry about machines as well. Today, algorithms help people solve their business problems. Software like QuickBooks and Xero attracts customers whom third-party accountants used to own exclusively, without anyone (or rather, anything) to contest them. And there is a program in the works that can instantly analyze the legitimacy of a legal claim. You can already see the impact of automation at work in other industries like administration and executive assistance; for example, x.ai says that its product — a robotic assistant called Amy — can fully replace a real human aide.
The professional services sector will face an onslaught of new competition, and when it does, service-based businesses need to be able to set themselves apart and prove their value – or die trying.

The calm before the storm

To get an idea of how quickly professional service businesses can shift from complacency into chaos, look no further than the disintegrating world of European tax audit firms. In 2014, in the wake of a series of public fraud claims, the European Union instituted a mandatory audit firm rotation rule, which required all European companies to switch to a different firm every ten years. Instead of doubling down to compete on the quality of their products and services, audit firms have responded by lowering their fees. This has created an environment where competition is based on price, not quality of work. The results have been ugly, and it is only a matter of time before new competition from overseas and from technology swoops in and wins on price, and maybe even on quality.
That experience of sudden, constant competition isn’t something that will stay limited to auditors. As technology advances and the world shifts, every professional services business will be put under the microscope by its clients and forced to distinguish itself. Unfortunately, the fields where differentiation is most critical can also be the ones that are hardest to showcase. Often there aren’t metrics to point to, and firms can’t compete with algorithms on price.
To be truly successful in the new marketplace, businesses need to find new ways to set their services apart to save themselves.

Keeping the services industry relevant with new rules

One of the biggest value-adds that any professional service can offer is its knowledge—industry expertise, experience, and understanding of the clients themselves are hugely important, and aren’t something that other players can easily replicate. That said, knowledge at firms can easily become siloed if resources are limited. Even the smallest companies these days often need to operate globally, so access to knowledge from all over the world is also a massive asset, lending firms extremely valuable insight that competitive alternatives might lack. One way to diversify knowledge is to alter corporate structure; for example, Grant Thornton UK moved to a shared enterprise format to give voice to all of its employees. Firms should also take a closer look at software to engage and interact with remote teams and leverage their expertise, without relying on workflow-slowing email back-and-forth.
Another way professional services firms can distinguish themselves is to highlight their creativity. Creative contributions can’t be outsourced, automated, or devalued. A company’s creativity will always be a powerful leveraging point with clients. Finding the areas where a firm’s creativity flourishes and learning how to showcase it can be a huge boon. For example, this might mean hiring diversely, providing creative resources, or even acquiring a whole new company, like PricewaterhouseCoopers did last year. Creativity is one of the few assets that is completely irreplaceable, and therefore priceless.
Most importantly, service-based companies must learn to dissolve the wall between themselves and their clients. Good service experiences ultimately boil down to relationships—they’re what make or break any new client engagement. The single most important factor for developing those relationships is communication and transparency. By opening channels of communication through consistent collaboration, firms can break down the barrier between themselves and their client, becoming less of a commoditized service provider and more of a trusted partner.
Businesses today are deeply concerned with maintaining accountability (and with good reason, as illustrated by the fraud behind the European rotation rule). If companies can offer clients clear insight into what they’re working on, they can win their confidence and retain their business. Some companies grow closer with their clients by working on projects in conjunction. Others partner on new initiatives or share joint resources. Whatever a company can do to bring itself closer to its clients is a step toward becoming indispensable to them.
When business is going well, it’s easy to sit back and be complacent. But truly successful organizations are constantly evolving, with an eye to how things might change in the future. Professional services are incredibly nuanced. They add indispensable value. And the more they can clearly demonstrate this, the more they’ll thrive — even in the face of a more competitive environment.

Health care’s future is data driven

Darren is the chief executive at Apixio.
The U.S. spends more than $3 trillion a year on health care, yet we do not have a way to easily access your complete medical history. The average hospital reinvests in MRI machines every five years or so at a cost of millions of dollars. Yet comparatively little has been directed toward unlocking some of the most valuable information health insurers, physicians, and hospitals already have about you.
The U.S. produces 1.2 billion clinical care documents annually, but nearly 80 percent of the data they contain is unstructured. This information is difficult for health care organizations to understand and use. The medical chart contains a record of your health care — visits with doctors and hospitals, treatments, procedures, medications, diagnoses, and the results of workups. It is the key to understanding your health and improving the care provided to you. Yet the challenge of accessing and making that information usable is immense.
The typical medical chart is stored in various fragments across different locations and systems. Your primary care physician has their record of you but not the record from your cardiologist or gynecologist or from the emergency department doctor you saw six months ago for bronchitis, for example. Imagine your entire medical record as a jigsaw puzzle in which the pieces are scattered and stored in different locations and different types of boxes, each of which is hard to open. No wonder people feel as if they are repeating themselves every time they visit a medical facility.
Luckily, technologies that make sense of the immense amount of data and preserve the patient narrative are rapidly emerging. With the rise of cognitive computing, natural language processing (NLP), and data science in health care, we now have the power to unlock untold value in health care data and drive proactive, targeted health care.

Enabling insights

The first step is being able to make sense of the rich narrative in the medical chart, whether it comes from a primary care physician or from a specialist practicing in a different organization, a different region, or both. While your doctor is familiar with your record, the medical system as a whole is not, so medical care continues to be reactive rather than proactive.
This is where cognitive computing and NLP enter the picture. NLP tools can help extract data from free text found in the patient record, creating valuable material for big data technologies to analyze. Cognitive computing platforms use NLP along with pattern recognition models and data mining techniques to simulate human thought processes in a self-learning computerized system. A cognitive computing platform refines the way it looks for patterns as well as the way it processes data, so it becomes capable of anticipating new problems and modeling possible solutions.
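As a toy illustration of that extraction step — not a real clinical NLP pipeline — the sketch below pulls two kinds of structured facts out of free-text prose using simple patterns. The patterns and field names are invented for this example; production systems rely on trained models, medical ontologies such as SNOMED CT, and negation handling.

```python
import re

# Toy patterns, purely illustrative of turning free text into structure.
MEDICATION = re.compile(r"\b(\w+)\s+(\d+)\s*mg\b", re.IGNORECASE)
DIAGNOSIS = re.compile(r"\bdiagnosed with ([\w\s]+?)(?:\.|,|$)", re.IGNORECASE)

def extract_facts(note: str) -> dict:
    """Pull simple structured facts out of free-text clinical prose."""
    return {
        "medications": [(m.group(1), int(m.group(2)))
                        for m in MEDICATION.finditer(note)],
        "diagnoses": [m.group(1).strip()
                      for m in DIAGNOSIS.finditer(note)],
    }

note = "Patient diagnosed with type 2 diabetes. Started metformin 500 mg twice daily."
print(extract_facts(note))
# {'medications': [('metformin', 500)], 'diagnoses': ['type 2 diabetes']}
```

Once facts like these are in structured form, they become the "valuable material" downstream analytics can aggregate across millions of charts.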
Once doctors have access to patient data, the question is how they can use it to accurately predict which treatments are most effective. Data science gives rise to a better understanding of the relationships between treatments, outcomes, and patients. Health care organizations now have the tools to combine data from different sources and paint a more complete picture of the patient to personalize treatment.

A data-rich health care future

These technologies give health care organizations the ability to access the previously untapped 80 percent of health care data so providers have real-time access to information and a deeper understanding of patients. If doctors know more about patients, then they can make more intelligent decisions that will result in quicker recoveries, fewer readmissions, lower infection rates and fewer medical errors. Ultimately, it will supercharge the value of care.
Beyond benefiting individual patients, access to this data will also create a living laboratory of clinical data to better inform health care decisions. Now that information about clinical care can be machine read, physicians can access it and base research on the everyday clinical care of millions of patients. Rather than depending on narrowly designed studies that do not directly relate to individual patients, health care organizations and researchers can learn about health care delivery from everyday real-world data.
Big data technologies can make use of information that is locked up in our medical charts in different systems and locations so we can transform how we look at and interpret patient health. With access to the untapped 80 percent of patient data and tools that put the data to use, we can change the delivery and consumption of health care as we know it, and usher in a new data revolution in health care that will improve patient care and result in high-quality outcomes.

Why CIOs must pursue ‘eventual symmetry’ for their cloud strategies

Sinclair is CEO and cofounder of Apprenda.

The idea that hybrid cloud is the end state of enterprise computing is no longer controversial. Nearly all technologists, IT executives, and analysts subscribe to the idea that public cloud and on-premises computing both have a place in modern enterprise IT strategy.
A hybrid end state isn’t a bridging tactic or a strategic consolation prize, but a desirable outcome. In fact, a strong case could be made that a hybrid model allows for specialized optimization based on use cases – there are many scenarios both now and in the future that may map best to on-premises or public cloud.
There are two primary ways to implement a hybrid end state: asymmetric and symmetric.

1. Asymmetric – In asymmetric orientations, an enterprise consumes public cloud as one endpoint and builds an on-premises cloud that is a distinctly separate, second endpoint. For example, we could look at the Infrastructure-as-a-Service (IaaS) layer and say that an enterprise could use OpenStack on-premises and AWS in the public cloud, and use processes, operations, and a brokering abstraction across the two endpoints to help normalize the consumption of IaaS regardless of what side of the firewall it comes from.

In asymmetric hybridity, the technology used on-premises differs from that used in the public cloud, resulting in the need for reconciliation and the need to accept a lossy factor (i.e. the two technologies may have different features and evolutionary paths), since points of differentiation between the two need to be ignored or marginalized to ensure consistency.

2. Symmetric – Symmetric hybridity means that an enterprise’s on-premises assets and public assets use the same technology, and that the technology reconciles the assets on both sides of the firewall into a single endpoint. An example of this would be a Platform-as-a-Service (PaaS) layer installed on-premises that could use local OSes and OSes from one or more public clouds, all under one logical instance of the PaaS.

The PaaS hides the fact that resources are coming from disparate providers and only exposes that fact where appropriate (e.g. at the policy definition level to shape deployments). In this case, the PaaS is the single endpoint where interaction happens, and resources on both sides are used as resource units by the PaaS. Any organizational processes and consumption processes would be ignorant of the idea that a border exists in the resource model.

Pros and cons of symmetric and asymmetric models

Symmetric models guarantee that anyone within the enterprise consuming cloud infrastructure is shielded from the distinction between on- and off-premises resources and capabilities. If the end user of cloud infrastructure (e.g. a developer or data scientist) is required to acknowledge any asymmetry, they will have to deal with it in their project. This explicit need to deal with a fractured cloud creates an immediate “tax” on consuming infrastructure, and it will generate consumption biases.
For example, if one side of an asymmetric deployment is easier to consume than the other, an end user will prefer that side even if the harder side is better aligned with the project, a choice that will cause IT a number of headaches when it comes to operations on that project.
It’s important to understand that symmetry doesn’t mean the on-premises and public cloud sides of a hybrid deployment must be equal. Certainly, workloads may need on-premises or public assets to satisfy certain requirements the other side couldn’t possibly satisfy.
What symmetry guarantees is that a workload that is indifferent to on-premises or public cloud is never exposed to those concepts. Symmetry also ensures that a workload with requirements that can only be satisfied by one part of a hybrid cloud or another is never exposed to the technical divide between the clouds. Instead, a workload communicates its preference in the language of requirements.
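As a sketch of what "the language of requirements" could look like in practice — the endpoint names and capability tags below are hypothetical, not any vendor's API — a placement policy can match a workload's declared requirements against endpoint capabilities, so the workload never names a cloud directly:

```python
# Hypothetical endpoints and capability tags, purely illustrative.
ENDPOINTS = {
    "on_premises": {"local_data_residency", "legacy_network_access"},
    "public_cloud": {"gpu", "burst_scaling"},
}

def place(required: set) -> str:
    """Return the first endpoint offering every capability the workload requires."""
    for name, capabilities in ENDPOINTS.items():
        if required <= capabilities:
            return name
    raise ValueError(f"no endpoint satisfies {required}")

print(place({"local_data_residency"}))  # on_premises
print(place({"gpu"}))                   # public_cloud
print(place(set()))                     # an indifferent workload never sees the divide
```

The point of the design is that the workload's author writes requirements, and only the policy layer knows which side of the firewall satisfies them.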

Eventual symmetry

Asymmetric models might be good starting points, or appropriate for certain layers of the infrastructure stack, but they’re not ideal as a final end state. Symmetric models are clearly superior in almost every other respect.
In response to this, CIOs should pursue a strategy of ‘Eventual Symmetry.’ Eventual symmetry means that any cloud strategy must:

  1. Choose symmetric models over asymmetric models where possible
  2. If asymmetric is the only possible approach, ensure that the implementation lends itself to eventually being replaced by a symmetric model, or that processes and technology are used to abstract the asymmetry into a perceived symmetric model

By establishing eventual symmetry as a core pillar of cloud strategy, a CIO can guarantee that any disjointedness in that strategy will be resolved. He or she can also ensure consumers of IT resources are abstracted away from the details of on-premises and off-premises infrastructure.

Why publishers can’t afford to ignore deliverability

Tony is CEO of PostUp.
From Lena Dunham to TechCrunch, it seems like everyone is getting into email newsletters these days. It should come as no surprise that email is becoming the new darling among publishers, since mobile users love to kick back and read a curated digest of great content every day.
As newsletters take off, it is important that publishers not just focus on click-through rates, but keep deliverability, inbox placement and readability top of mind. These factors could be the most important details in email marketing, yet are often overlooked by publishers who are blinded by clicks. However, an email that is not delivered to a subscriber’s inbox is effectively worthless; if a subscriber doesn’t get it, they can’t click through.
Engagement is one of the key factors that impact deliverability. ISPs look at what you are sending and who you are sending it to in order to determine whether it should be placed in the inbox or some other folder. The fact is that you don’t get placed in the inbox if people don’t open and click your messages. If your recipients are engaging with the communications you send because they are well-formatted and provide valuable information, then ISPs are far more likely to deliver your messages to the inbox. Engaged recipients reinforce the idea that the message is relevant and deserves good placement.
Creating a strategy around engagement will improve deliverability, which will lead to a more successful email program. To boost engagement, you should consider the following points:

  • Have a clear email-specific strategy. Don’t just dump web content in an email and call it a day. While publishers are very good at creating content, they often struggle to determine which content makes the most sense in an email. Publishers need to make content decisions based on who is signing up and on subscriber behavior. Unfortunately, by not considering deliverability, publishers often end up with emails formatted for their website rather than designed to drive traffic back to it. Instead, publishers should adopt a well-thought-out communication strategy for the email itself to improve deliverability and inbox placement.
  • Design for mobile & execute campaign previews. As mobile engagement continues to grow, previewing becomes a necessary task before you hit send. Viewing how an email will look across webmail clients and mobile devices using rendering reports can help publishers optimize the creative before sending, which in turn makes the message more clickable and increases the chances of driving engagement. Better engagement equals better delivery rates.
  • Use targeting techniques to increase engagement. Unfortunately, a lot of publishers have a “batch and blast” mentality and send emails to anyone who has ever signed up for their list, and the emails become irrelevant for many. This can negatively impact deliverability rates. For example, if you have a million recipients on your list and you always send to the entire list, you could hurt your delivery if an overwhelming majority of those recipients never engage. It tells the ISPs that you aren’t sending relevant communications and could eventually lead to spam folder delivery. Because engagement affects inbox placement, targeting plays a role in deliverability, and publishers that want to land in the inbox should consider targeting their messaging.
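As an illustrative sketch of engagement-based targeting — the record fields and the 90-day window are assumptions for this example, not a specific email platform's API — a sender might suppress recipients who haven't opened recently before a send:

```python
from datetime import date, timedelta

# Hypothetical subscriber records; in practice this engagement data
# (opens and clicks per recipient) comes from your email platform.
subscribers = [
    {"email": "a@example.com", "last_open": date(2016, 1, 10)},
    {"email": "b@example.com", "last_open": date(2015, 3, 2)},
    {"email": "c@example.com", "last_open": None},  # never opened
]

def engaged_segment(subs, today, window_days=90):
    """Keep only recipients who opened within the engagement window."""
    cutoff = today - timedelta(days=window_days)
    return [s["email"] for s in subs
            if s["last_open"] is not None and s["last_open"] >= cutoff]

print(engaged_segment(subscribers, today=date(2016, 1, 15)))  # ['a@example.com']
```

Sending only to the engaged segment (and re-permissioning or retiring the rest) keeps open and click rates high, which is exactly the signal ISPs use for inbox placement.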

Publishers have to send relevant, engaging content that people will open and click. If you don’t consider these elements, you are not likely to find the inbox, because poor engagement leads to poor delivery rates. Don’t forget: if the email doesn’t make it to the inbox, no one will open it, no one will click on it, and you won’t make money.

The not-so-distant future of mobile and video

Galia is COO and head of sales for Taptica.
The mobile video advertising ecosystem is headed toward a major eruption. By the end of next year, the industry will take colossal strides to combat fraud and eliminate bogus inventory, while simultaneously working to meet advertisers’ growing demand for mobile video ads. Savvy businesses will continue to buy or build platforms that allow them to offer a complete mobile video ad solution — the content, the audience, and the ad tech — and increase their revenue streams. As with any massive overhaul (and it will be massive), there will be casualties. But, the end result will be more effective and more secure mobile video advertising opportunities to help feed the already burgeoning demand.
Here’s what we know. Advertisers and agencies are scrambling to create or fine-tune their mobile strategies and to contend with the now irrefutable fact that consumers are spending more time on their mobile devices. We’ve been talking about a “mobile-first” media landscape for a while; now it’s time for marketing strategies to catch up with user behavior. Simultaneously, consumer and brand interest in video is mounting, and for basically the same reason: high-quality video is engaging. Neither mobile nor video is without its challenges, but marrying the two helps alleviate some of the issues. Video mobile ads are more engaging than mobile banner ads. Couple that with the rich data we can harness through mobile devices, and you open up video to a host of new marketing use cases. Now we can monitor viewers’ subsequent behaviors and better measure the ROI of our investments. Video isn’t just for brand awareness anymore.
Within the next few years, it’s basically inevitable that the bulk of ad buying, mobile included, will be done programmatically. Spending levels are already soaring. Programmatic buying is lauded for its benefits, which include increased efficiency, more accurate targeting capabilities, and easy scalability. But it’s also rife with fraud. According to research from the Association of National Advertisers and White Ops, 23 percent of video ad views are actually from bots. This is simply not acceptable. As an increasing number of brands and agencies look to embrace video advertising, they’ll clamor for more stringent regulations and better protection against fraudulent traffic. They’ll also take a hard look at programmatic trading, the practice of buying media and then reselling it for profit. 2016 will be the year brands demand more transparency across the board. The writing is already on the wall. BrightRoll has begun cleaning up its traffic and taking steps to educate the market about the ubiquity of fraud. AppNexus, which has been a big player in programmatic trading, made huge strides this year by working to improve the quality of its inventory, limiting media arbitrage and increasing viewability. Perhaps predictably, it’s also shifting its focus toward video.

Mobile media consolidation

Even with the growing amount of fraudulent mobile video traffic, there is still a pressing supply and demand issue. There’s simply not enough inventory to meet advertisers’ needs. We’re going to have to rethink our definition of “premium” and become more open to running video ads in-stream and alongside user-generated content, a change that’s already underway—just look at the buzz Snapchat’s video ad product is generating.
The increased demand for mobile video will continue to drive up prices for publishers, which is part of the reason why a growing number of companies are working to buy or create quality video content. Mobile media has already begun consolidating, as content distributors and content creators realize they’re more powerful together. Look at AOL’s acquisition of Adap.tv, a programmatic video ad platform, and Vidible, a content syndication solution for publishers. (Expect to see an increase in the syndication of quality content as one solution to the challenge of meeting expanding demand quickly.) With these moves, AOL now has the audience, content, and technology needed to offer advertisers a full-service ad solution. And don’t forget, it’s owned by Verizon, which recently announced a plan to launch its own Hulu/Netflix-like app (Go90) for streaming mobile video, and also revealed that its focus is on mobile video.
RTL Group’s purchase of SpotXchange last year and Facebook’s acquisition of LiveRail, the third-largest online video advertising management platform, are also harbingers of industry-wide change. Facebook can now extend its video ad reach beyond its platform, and LiveRail is already working to improve the quality of its exchange by cutting out providers who don’t work directly with advertisers. If they manage to improve their supply and leverage Facebook’s in-depth data, the results will be what marketers’ dreams are made of: the ability to reach their target audience via quality mobile video traffic while gaining precise insight into what type of creative is resonating.
These types of intelligent mergers will continue as the industry works to reduce fraud, improve viewability, and better harness data. These changes will punish suppliers with less-than-stellar practices, but the net result will be an industry that’s more mature, regulated, and effective. Mobile video advertising will be a no-brainer investment.

If you want millennial TV viewers, it’s interact or bust

Zane is CEO and cofounder of Watchwith.
It’s pretty much taken as gospel these days that millennials are cord-cutters, eager to abandon television as we know it — torching broadcast and cable business models along the way.
The reality, though, is a lot more nuanced. Yes, millennials are more likely to opt out of subscriptions to traditional cable-TV bundles. But they’re cord-cutting in only the most narrow sense—swapping one delivery system (linear TV) for another (on-demand streaming) and one type of hardware (cable-fed TVs) for others (mobile devices, and TVs and PCs rigged with over-the-top solutions).
They’re still watching TV shows — lots of TV shows — and consuming plenty of programming generated by the “traditional” TV industry. They’re just doing it on their own terms.
At Watchwith, our team of entertainment, technology, and advertising experts has been studying video consumption patterns since 2012. We’re particularly interested in understanding the psychology of what works and what doesn’t for the new generation of TV viewers, particularly when it comes to advertising messages. Here are three broad findings from that work…

Mobile pre-roll feels like a personal violation for millennials

All of us tend to be deeply connected to and dependent on our mobile devices, but for digital-native millennials, omnipresent smartphones are almost like an extension of their bodies — and of their personalities. That’s why pre-roll and mid-roll advertising that might be acceptable (or at least tolerable) in a desktop setting becomes absolutely rage-inducing on mobile.
We’ve heard again and again from millennial consumers about their incredible feeling of frustration while watching pre-roll on a phone. There’s a profound mismatch between the pre-roll experience and a personal device like a smartphone. Pre-roll advertising in a desktop browser tab comes off as interruptive, but the same pre-roll on mobile feels domineering, like viewers have been temporarily deprived of the sense of control they take for granted when using their beloved devices.
According to a study just released by eMarketer, “young adults ages 18 to 29 are more likely to own a mobile phone or smartphone than a desktop or laptop, pointing to how mobile is becoming an all-purpose device that users are increasingly relying on.” But in a phenomenon that MaryLeigh Bliss of youth-market research firm Ypulse calls “ad A.D.D.,” millennials are turning a blind eye to traditional ads on their favorite platform. According to Ypulse research, says Bliss, “when we ask young consumers which type of advertising they usually ignore or avoid, 62 percent say online ads, like banner and video ads, and 68 percent say mobile in-app ads. In other words, online marketing — you’re doing it wrong. It’s not enough to be where they are. You have to be where they are, and match your message to their behavior.”

‘Interactive TV’ is entirely intuitive on mobile for millennials

Many digital-native millennials grew up, or at least came of age, thinking of media consumption as a tactile experience. It’s entirely natural — indeed, second nature — for them to feel their way through media on mobile devices. And that’s even more true for post-millennials, aka Generation Z; witness the various videos on YouTube like “Baby Works iPad Perfectly,” and “9 Month Old Baby Using iPad.”
All those years of people fumbling with remotes to navigate through cable guides and various iterations of “interactive TV” have given way to being able to touch, tap, and swipe — in the process instantly controlling their content-consumption destiny. That’s part of what’s behind the explosive growth of millennial-favorite streaming-gaming platform Twitch, which Amazon acquired last year for $970 million and which prioritizes real-time interaction on its mobile apps.
The very culture surrounding streaming video on both mobile and desktop engenders seamless interaction. Consider the billions of shares, likes/dislikes, and channel subscriptions in the YouTube ecosystem alone.

Millennials are poised to interact—in context, in program

Millennials are incredibly distracted consumers of content. They’re media multitaskers. So, programming that allows them to multi-task in-program helps them satisfy their natural desire to touch, tap, and swipe their way through their content-consumption journey.
In a recent study Watchwith conducted in collaboration with Magid, we found that more than half of 18- to 24-year-olds are more likely to watch more episodes of a show if it has in-program (i.e., non-interruptive) ads. We also found that in-program ads have higher levels of unaided recall compared to traditional TV ads. The point is to let the show continue flowing, but still get the ad message across.
In other words, while millennials may be inveterate multitaskers, when they’re actually in the flow of content-consumption, they’re ready and willing to interact with brand messages on their own terms.

The BI conundrum: Delivering trust and transparency at speed

Brad is chairman and chief product officer at Birst.
Historically, the pendulum of the business intelligence (BI) market has moved between centralized governance of data and self-service and agility. Today, the industry is swaying back and forth, as CIOs struggle to find the right balance of control, transparency, and truth.
Recently, the pendulum has swung too far in the direction of data discovery. While data discovery tools let users explore and manipulate data quickly, they can create analytical silos that hinder users’ ability to make decisions with confidence.
These self-service capabilities create challenges for CIOs, according to Gartner, Inc. Without proper processes and governance in place, self-service tools can introduce multiple versions of the truth, increase errors in analysis and result in inconsistent information.

Is ‘imperfect but fast’ a fair trade-off?

Even so, CIOs have come to accept data inconsistency as the price to pay – a tradeoff to achieve speed – to give business users the ability to analyze data without depending on a central BI team. Both parties seem to have adopted the maxim “imperfect but fast is better than perfect but slow.” But is this price too high? The answer is worth delving into.
The downsides include siloed and inconsistent views of key metrics and data across groups. For instance, lead-to-cash analysis requires data from three different departments (Marketing, Sales, and Finance) and three separate systems (marketing automation, CRM, and ERP). A consistent and reliable view of the information across departments and systems – one that provides a common definition of “Lead” or “Revenue” – is necessary to avoid confusion and conflicting decisions.
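To make the “common definition” point concrete, here is a minimal, hypothetical sketch (the field names and threshold are illustrative, not from any real system): one governed predicate for what counts as a qualified lead, applied uniformly to records exported from the marketing and CRM systems, so every department counts leads the same way.

```python
# Hypothetical governed metric: a single shared definition of "qualified lead"
# used by Marketing, Sales, and Finance, instead of three ad-hoc versions.
def is_qualified_lead(record: dict) -> bool:
    """A lead is qualified if its score clears the governed threshold
    and it has not been explicitly disqualified."""
    return record.get("score", 0) >= 50 and record.get("status") != "disqualified"

# Records exported from two different source systems (illustrative data).
marketing_export = [{"score": 80, "status": "open"}, {"score": 30, "status": "open"}]
crm_export = [{"score": 65, "status": "disqualified"}, {"score": 90, "status": "won"}]

# Because both exports pass through the same predicate, the count is consistent.
qualified = [r for r in marketing_export + crm_export if is_qualified_lead(r)]
print(len(qualified))  # 2
```

The point is not the logic itself but where it lives: one certified definition, applied to every source, rather than each team re-deriving “Lead” in its own discovery tool.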
Finding consistency with data-discovery tools requires the daunting task of manually delivering a truly governed layer of data without a comprehensive understanding of core business logic. This means building and testing integrated data models, performing extraction, transformation, and loading (ETL) routines across corporate systems, propagating enterprise-wide metadata, and enforcing governance-centric business procedures, all of which burden CIOs.

The end goal: Transparency and speed

Trusted data does not have to be synonymous with restrictive access and long wait times. By implementing transparent governance, CIOs can enable local (decentralized) execution with global (centralized) consistency, reconciling speed with trust at enterprise scale.
But to deliver the agility of data discovery with enterprise governance, CIOs must work to do the following:

  • Adopt a data-driven culture. CIOs need to create a team approach to BI that balances the use of skilled resources and the development of more localized business skills to deliver ongoing success.
  • Enable data access for all business users. Creating protocols to access new datasets ensures that all business users can identify opportunities that add value. Multiple layers of protection during discovery and consumption are crucial to uphold security. With simple and secure access, users can more easily derive insights.
  • Create a consistent understanding and interpretation of data. Certify and manage key input datasets and governing information outputs to help align organizational accountability for data discovery. A single view of governed measures and dimensions, for users in both decentralized and centralized use cases, ensures consistency.

Following these points to deliver trust and transparency yields big gains. According to Dresner Advisory Services, organizations that view data as a single truth with common rules are nearly 10 times more likely to achieve BI success than organizations with multiple inconsistent sources.
There is a powerful and direct correlation between business success and having a trusted view of enterprise data. Companies evaluating BI solutions must look for modern architectures that support transparent governance at business speed and deliver a unified view of data without sacrificing end-user autonomy. The companies that do will continue to win.

Why monolithic apps are often better than microservices

Sinclair is CEO and cofounder of Apprenda, a leader in enterprise Platform as a Service.
With all of the talk these days about microservices and distributed applications, monolithic applications have become the scourge of cloud systems design. Normally, when a new technical trend emerges to replace a previous one, it is due (at least in part) to evolved thinking. The odd thing with monolithic application architecture, however, is that nobody ever proposed it as a good idea in the first place.
The idea of loosely coupled services with clear boundaries has been around for decades in software engineering. So, how did we end up with so many apps “designed” as monoliths? In a word – convenience.
The fact is, in many use cases, monolithic architectures come with some non-trivial and durable benefits that we can’t simply discount because they don’t adhere to a modern pattern. Conversely, microservices can introduce significant complexity to application delivery that isn’t always necessary.
As a fan of microservices, I fear enterprises are blindly charging forward and could be left disappointed with a microservices-based strategy if the technology is not appropriately applied. The point of this post isn’t to pour FUD onto microservices. It’s about understanding tradeoffs and deliberately selecting microservices based on their benefits rather than technical hype.

Debugging and testing

Generally speaking, monolithic applications are easier to debug and test when compared to their microservices counterparts. Once you start hopping across process, machine, and networking boundaries, you introduce many hundreds of new variables and opportunities for things to go wrong – many of which are out of the developer’s control.
Also, the looser the dependency between components, the harder it is to determine when compatibility or interface contracts are broken. You won’t know something has gone wrong until well into runtime.
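One common mitigation for this runtime-breakage problem is a consumer-driven contract check: the consumer pins the response shape it depends on, so a renamed or retyped field fails in CI rather than deep into runtime. A minimal sketch, with a hypothetical “billing” service and made-up field names:

```python
# Hypothetical consumer-side contract for a billing microservice response.
# The consumer declares the fields and types it relies on; if the provider
# renames or retypes a field, this check fails at test time, not in production.
BILLING_CONTRACT = {"invoice_id": str, "amount_cents": int, "currency": str}

def satisfies_contract(payload: dict, contract: dict) -> bool:
    """True if every contracted field is present with the expected type.
    Extra fields are allowed: providers may add, but not break."""
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in contract.items()
    )

ok = {"invoice_id": "inv-1", "amount_cents": 4200, "currency": "USD", "extra": True}
broken = {"invoiceId": "inv-1", "amount_cents": 4200, "currency": "USD"}  # renamed field

print(satisfies_contract(ok, BILLING_CONTRACT))      # True
print(satisfies_contract(broken, BILLING_CONTRACT))  # False
```

Real teams typically use a contract-testing tool rather than hand-rolled checks, but the principle is the same: make the implicit interface explicit so loose coupling doesn’t mean silent breakage.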


Performance

If your shiny new mobile app is taking several seconds to load each screen because it’s making 30 API calls to 30 different microservices, your users aren’t going to congratulate you on this technical achievement. Sure, you can add some clever caching and request collapsing, but that’s a lot of additional complexity you just bought yourself as a developer.
If you’re talking about a complicated application being used by hundreds of thousands or millions of users, this additional complexity may well be worth the benefits of a microservices architecture. But, most enterprise line-of-business applications don’t approach anything near that scale.
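To illustrate the kind of complexity that request collapsing buys you, here is a minimal sketch of the idea, assuming an async Python service layer (the class and fetch function are hypothetical): concurrent identical calls are coalesced so that N callers share one in-flight backend request.

```python
import asyncio

class RequestCollapser:
    """Coalesce concurrent identical calls so N callers share one backend request."""

    def __init__(self, fetch):
        self._fetch = fetch    # the expensive call, e.g. a downstream microservice
        self._inflight = {}    # key -> asyncio.Task currently fetching that key

    async def get(self, key):
        task = self._inflight.get(key)
        if task is None:
            # First caller for this key starts the real request...
            task = asyncio.ensure_future(self._fetch(key))
            self._inflight[key] = task
            # ...and the entry is cleared once it completes.
            task.add_done_callback(lambda _: self._inflight.pop(key, None))
        # Every other concurrent caller awaits the same in-flight task.
        return await task

async def demo():
    backend_calls = []

    async def fetch_profile(key):
        backend_calls.append(key)      # count real backend hits
        await asyncio.sleep(0.01)      # simulate network latency
        return {"user": key}

    collapser = RequestCollapser(fetch_profile)
    results = await asyncio.gather(*(collapser.get("user42") for _ in range(30)))
    return backend_calls, results

backend_calls, results = asyncio.run(demo())
print(len(backend_calls))  # 1 backend call serves all 30 concurrent callers
```

Even this small sketch has to reason about in-flight bookkeeping and cleanup; in a monolith, the equivalent “call” is an in-process function invocation and none of this machinery exists.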

Security and operations

Fortune 500 enterprises I work with struggle to manage even the relatively coarse-grained application security models IT departments use today. If you’re going to break up your application into lots of tiny services, you’re going to have to manage the service-to-service entitlements that accompany this plan. While managing “many as one” has time-tested benefits, it’s also contrary to the motivation behind microservices.

Planning and design

Microservices have a higher up-front design cost and can involve complicated political conversations across team boundaries. It can be tricky to explain why your new “pro-agile” architecture is going to take weeks of planning for every project to get off the ground. There’s also a very real risk of “over-architecting” these types of distributed solutions.

Final thoughts

Having said all of this, microservices can absolutely deliver significant benefits. If you’re building a complicated application and/or work across multiple development teams operating in parallel and iterating often, microservices make a ton of sense.
In fact, in these types of situations, monolithic applications simply serve as repositories of technical debt that ultimately becomes crippling. There is a clear tipping point here where each of the advantages of monolithic applications I described earlier becomes a liability: they become too large to debug without understanding how everything fits together, they don’t scale, and the security model isn’t granular enough to expose segments of functionality.
One way to reduce, and in some cases even eliminate, the technical “tax” associated with microservices is to pair them with an enterprise Platform as a Service (PaaS). A proper enterprise PaaS is designed to stitch together distributed services, taking deployment, performance, security, integration, and operational concerns off developers’ and operators’ plates.

The next information revolution will be 100 times bigger than the Internet

Ambarish is cofounder and CEO of Blippar. You can follow him on Twitter.
Every day I see something I want to know more about, something I can experience at a deeper level, and share with my friends and family. I’m hardly alone in that; the average citizen of any connected country is an avid consumer, seeker, and sharer of information — driving over 5.7 billion Google searches each day. But what happens when you see something you can’t describe? Or when you encounter something you can’t accurately communicate to a friend, let alone a search engine?
Sadly, the platforms and tools of the current age of information aren’t much help here: they restrict our ability to learn more about things we cannot describe with words. And while the Internet has powered a new era of human networking and intelligence, the first information revolution fell short of realizing the potential of technology to provide us with the keys we need to fully unlock the world around us in any given moment. This isn’t a new development. Throughout history, our ability to express curiosity about the world around us has been limited only by the technology available.
In today’s age of information, mobile devices and global connectivity have brought an impressive amount of knowledge to our very fingertips. The Internet and powerful text search tools have enabled us to discover nearly everything about anything we can describe with words – any text that can be typed into a search engine. But words cannot express the entirety of the human experience. For all our advancements, the human experience remains largely driven by sight, as it has for millennia. Unsurprisingly, our eyes have always had a shorter path to the brain than any other sense organ, and our ability to quickly derive information and make decisions based on visual data evolved long before our ability to understand language and invent the alphabet.
When the next information revolution arrives, it must open the door to the physical, visual world and enable people to quickly discover contextual information about the objects and images around them. The future of discovery will be pointing at things we’re curious about and learning relevant information without even having to ask a question. This revolution will transform how we access shared knowledge and impact nearly every aspect of our lives at home and in the workplace.

Revolution is coming, and soon

Fortunately (and excitingly), this revolution is going to happen much more quickly than many realize. New technologies like image recognition, wearable hardware, machine learning, and augmented/virtual reality have created an ecosystem capable of bringing us closer to a world in which information isn’t just at our fingertips, but accessible through every shape and form around us.
This is the “Internet on Things” — an environment in which information is autonomously accessed in real-time, immediately upon encountering and interacting with something in the world. Unlike the often referenced “Internet of Things,” technology from the Internet on Things isn’t embedded within an object. Instead, the object itself is the key that allows another platform to find and deliver associated data, unlocking relevant information and experiences.
The potential applications for such technology are undoubtedly exciting. But are we ready? A revolution, after all, is inherently disruptive — in the truest sense of the word, not the buzzword bandied about today’s tech community. While the Internet on Things will undoubtedly have a positive, transformative effect on the lives of average consumers, it will pull the rug out from under a range of established companies and create new business practices in virtually every industry.
To get a sense of the far-ranging implications of a new information revolution, we can consider the massive shift the search business drove in the wake of mainstream Internet adoption. As PCs became cheaper and connectivity improved, millions of consumers needed a better way to access the wealth of information that was now available within their homes and offices. In meeting that need, the search industry established the infrastructure that is today continuing to disrupt everything from print advertising to brick & mortar retail.
The best example of the long-term ramifications of an information revolution is, of course, Google.
Google is a microcosm of innovation and disruption. The company took advantage of a vast “Blue Ocean” opportunity created by new technologies and changing consumer preferences, and rode a tidal wave of change that let it grab an increasingly large share of the technology industry at large. Today that includes long-term ripples of the first information revolution, such as YouTube, drones, and self-driving cars.
Putting aside the potential for a “new Google” to hatch in the wake of the next information revolution (one that would further transform advertising, e-commerce and more), the Internet on Things will also cause disruption through the infrastructure required to support it.
Powering this new model will mean indexing all of the world’s visual data — every object and image — and building machines smart enough to return the right, context-sensitive information to an end user. This will require massive investments in technology — both software and hardware — and will be the first barrier for companies seeking to exploit this dynamic space.
There are enormous opportunities on the horizon, but they come alongside a host of challenges. Today we stand on the cusp of a revolution that will reimagine how we interact with the physical world and disrupt industries in every market across the globe. If we can rise to the occasion and overcome the hurdles in front of us, tomorrow we will stand in a world awash with information, an environment in which every person can acquire relevant knowledge about their surroundings in microseconds. Empowered with such ability, what will humanity accomplish next?