Report: Hadoop in the enterprise: how to start small and grow to success

Our library of 1,700 research reports is available only to our subscribers. We occasionally release reports for our larger audience to benefit from. This is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports, and video coverage from our 2016 and 2017 events.
Hadoop in the enterprise: How to start small and grow to success by Paul Miller:
Hadoop-based solutions are increasingly encroaching on the traditional systems that still dominate the enterprise-IT landscape. While Hadoop has proved its worth, neither its wholesale replacement of existing systems nor the expensive and unconstrained build-out of a parallel and entirely separate IT stack makes good sense for most businesses. Instead, Hadoop should normally be deployed alongside existing IT and within existing processes, workflows, and governance structures. Rather than initially embarking on a completely new project in which return on investment may prove difficult to quantify, there is value in identifying existing IT tasks that Hadoop can demonstrably perform better than the existing tools. ETL offload from the traditional enterprise data warehouse (EDW) represents one clear use case in which Hadoop typically delivers quick and measurable value, familiarizing enterprise-IT staff with the tools and their capabilities, persuading management of their demonstrable value, and laying the groundwork for more-ambitious projects to follow. This paper explores the pragmatic steps to be taken in introducing Hadoop into a traditional enterprise-IT environment, considers the best use cases for early experimentation and adoption, and discusses the ways Hadoop can then move toward mainstream deployments as part of a sustainable enterprise-IT stack.
To read the full report, click here.
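To make the ETL-offload pattern concrete, here is a minimal sketch in PySpark (an illustration of the general idea, not code from the report): raw data lands in Hadoop, the heavy transformation runs on the cluster, and only the small, conformed result set is handed to the EDW. All paths, column names, and the aggregation itself are hypothetical.

```python
# Sketch of an EDW ETL-offload job in PySpark. Paths and columns are
# hypothetical; the point is where the transformation work happens.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edw-etl-offload").getOrCreate()

# Land raw event data in Hadoop instead of staging it in the warehouse.
raw = spark.read.json("hdfs:///landing/web_events/2017-01-01/")

# Do the heavy lifting on the cluster: clean, filter, aggregate.
daily = (
    raw.filter(F.col("status") == 200)
       .withColumn("day", F.to_date("timestamp"))
       .groupBy("day", "page")
       .agg(F.count("*").alias("views"),
            F.countDistinct("user_id").alias("unique_visitors"))
)

# Write the small, conformed result; only this gets loaded into the EDW.
daily.write.mode("overwrite").parquet("hdfs:///conformed/daily_page_views/")
```

The measurable value comes from the shape of the work: the warehouse no longer burns expensive cycles on staging and transformation, and the comparison against the incumbent ETL job gives a concrete before-and-after number for management.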

Report: Bringing Hadoop to the mainframe

Bringing Hadoop to the mainframe by Paul Miller:
According to market leader IBM, there is still plenty of work for mainframe computers to do. Indeed, the company frequently cites figures indicating that 60 percent or more of global enterprise transactions are currently undertaken on mainframes built by IBM and remaining competitors such as Bull, Fujitsu, Hitachi, and Unisys. The figures suggest that a wealth of data is stored and processed on these machines, but as businesses around the world increasingly turn to clusters of commodity servers running Hadoop to analyze the bulk of their data, the cost and time typically involved in extracting data from mainframe-based applications become a cause for concern.
By finding more-effective ways to bring mainframe-hosted data and Hadoop-powered analysis closer together, the mainframe-using enterprise stands to benefit from both its existing investment in mainframe infrastructure and the speed and cost-effectiveness of modern data analytics, without necessarily resorting to relatively slow and resource-expensive extract, transform, load (ETL) processes that endlessly move data back and forth between discrete systems.
To read the full report, click here.
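One simple pattern for bringing the two worlds closer, sketched below for illustration (it is not the report's prescribed method), is to read mainframe-hosted DB2 tables directly into a Hadoop cluster over JDBC and analyze them there, rather than running a recurring unload-and-reload cycle. This assumes the data is exposed through DB2 for z/OS with a reachable JDBC endpoint; the hostname, credentials, table, and partition bounds are all hypothetical.

```python
# Sketch: pulling mainframe-hosted DB2 data into Hadoop over JDBC for
# analysis in place. Connection details and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mainframe-db2-pull").getOrCreate()

transactions = (
    spark.read.format("jdbc")
         .option("url", "jdbc:db2://mainframe.example.com:50000/PRODDB")
         .option("driver", "com.ibm.db2.jcc.DB2Driver")
         .option("dbtable", "ACCOUNTS.TRANSACTIONS")
         .option("user", "hadoop_reader")
         .option("password", "***")
         .option("numPartitions", 8)           # read in parallel
         .option("partitionColumn", "TXN_ID")  # numeric column required
         .option("lowerBound", 1)
         .option("upperBound", 100000000)
         .load()
)

# Analyze on the cluster; results stay in HDFS rather than round-tripping.
transactions.groupBy("ACCOUNT_TYPE").count() \
            .write.mode("overwrite").parquet("hdfs:///analytics/txn_by_type/")
```

The design choice here is to move the data once, in parallel, and keep subsequent analytical passes on the commodity cluster, which is where the cost and speed advantages the report describes actually accrue.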

Pentaho changes ETL license for big data push

Pentaho is moving its business intelligence tools to the Apache License to make them more compatible with big data technologies that already operate under that license. Pentaho’s Kettle extract, transform, load (ETL) technology was previously available under the LGPL, or GNU Lesser General Public License.