Report: Bringing Hadoop to the mainframe

Our library of 1,700 research reports is available only to our subscribers. We occasionally release one for our wider audience to benefit from, and this is one such report. If you would like access to our entire library, please subscribe here. Subscribers will have access to our 2017 editorial calendar, archived reports, and video coverage from our 2016 and 2017 events.
Bringing Hadoop to the mainframe by Paul Miller:
According to market leader IBM, there is still plenty of work for mainframe computers to do. Indeed, the company frequently cites figures indicating that 60 percent or more of global enterprise transactions are currently undertaken on mainframes built by IBM and remaining competitors such as Bull, Fujitsu, Hitachi, and Unisys. The figures suggest that a wealth of data is stored and processed on these machines, but as businesses around the world increasingly turn to clusters of commodity servers running Hadoop to analyze the bulk of their data, the cost and time typically involved in extracting data from mainframe-based applications becomes a cause for concern.
By finding more effective ways to bring mainframe-hosted data and Hadoop-powered analysis closer together, the mainframe-using enterprise stands to benefit from both its existing investment in mainframe infrastructure and the speed and cost-effectiveness of modern data analytics, without necessarily resorting to relatively slow and resource-expensive extract, transform, load (ETL) processes that endlessly move data back and forth between discrete systems.
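To make the ETL overhead concrete, here is a minimal, hypothetical sketch of the extract-and-transform step such a pipeline runs before data ever reaches a Hadoop cluster; the fixed-width EBCDIC record layout and field names are illustrative assumptions, not details taken from the report.

```python
# Illustrative only: decode fixed-width EBCDIC records exported from a
# mainframe dataset into UTF-8 CSV rows, ready to be loaded into HDFS.
# The record layout (account: bytes 0-10, amount: bytes 10-18) is made up.
import csv
import sys

RECORD_LAYOUT = [("account", 0, 10), ("amount", 10, 18)]  # (field, start, end)
RECORD_LENGTH = 18

def transform(raw_record: bytes) -> dict:
    """Decode one EBCDIC (code page 037) record into a dict of text fields."""
    text = raw_record.decode("cp037")
    return {name: text[start:end].strip() for name, start, end in RECORD_LAYOUT}

def extract_transform(input_path: str) -> None:
    """Stream fixed-width records from a mainframe export and emit CSV on stdout."""
    writer = csv.DictWriter(sys.stdout, fieldnames=[f[0] for f in RECORD_LAYOUT])
    writer.writeheader()
    with open(input_path, "rb") as export:
        while record := export.read(RECORD_LENGTH):
            writer.writerow(transform(record))

if __name__ == "__main__":
    extract_transform(sys.argv[1])
```

Every round trip of this kind consumes cycles on the mainframe and bandwidth on the way to the cluster, which is exactly the overhead the report argues can be reduced by bringing the two systems closer together.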
To read the full report, click here.

More ARM CPUs in the Datacenter for 2017?

AI, containers, cloud computing, large storage infrastructures, IoT, HPC… and probably more. To serve all of this, large service providers and enterprises are building huge datacenters where everything is designed to maximize efficiency.
That work covers every aspect of the data center: the facility itself, power and cooling, security, and compute power density. A lot has been done, but even more is being asked for.

Failed attempts to do more (with less)

In the past, many vendors tried to get more work done with approaches that failed miserably. Do you remember Sun's SPARC T1 processor, for example? Launched in 2005: 72 watts, 32 threads (4 threads per core), and a 1.4GHz clock… but it was too far ahead of its time. Most software was still single-threaded and didn't run well on this kind of CPU.
We have also seen several attempts to push ARM CPUs into the datacenter, 32-bit processors first and 64-bit later. They all failed for the same reason that afflicted Sun's CPU… plus, in some cases, the lack of optimized software.
But the number of cores has continued to increase (Intel can now put up to 24 cores in a single CPU) and software has followed the same trend, first with multithreading and now with microservices. Apps organized as single-process containers are just perfect for this type of CPU.
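As a rough illustration of why this matters, here is a minimal Python sketch of the pattern that favors many modest cores over a few very fast ones: spreading independent units of work across every available core. The worker function and task sizes are arbitrary assumptions made for the example.

```python
# Illustrative only: fan independent tasks out across all available cores,
# the usage pattern that benefits from core count more than per-core speed.
from multiprocessing import Pool, cpu_count

def handle_task(n: int) -> int:
    """Stand-in for one independent unit of work (a request, a container job)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [100_000] * 1_000  # many small, independent jobs
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(handle_task, tasks)
    print(f"processed {len(results)} tasks across {cpu_count()} cores")
```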

Thank you Raspberry Pi (and others)

Now Linux on ARM and a lot of other open source software (as well as a specific version of Windows 10!) are much better optimized to run on ARM CPUs than in the past.
Raspberry Pi, the super cheap computer (starting at $5 now) launched in 2012, opened a world of opportunities for hobbyists, students, and developers of all levels. Prototyping is much easier and less expensive, while the community ecosystem is growing exponentially. Raspberry Pi and all its clones are not designed for the datacenter, of course… but it is also true that this small computer has inspired a lot of people and is at the base of some very cool projects, including HPC and Docker Swarm clusters!

The next step

ARM CPUs are particularly efficient when it comes to power consumption and now they are becoming more and more powerful. What’s more, these CPUs are usually designed with a SoC approach (System-on-a-Chip), which simply means that the CPU already contains a lot of other components needed to build a computer. In fact, multiple cores are often coupled with a GPU, network and storage controllers and so on.
It doesn't mean more compute power per se, but it does mean more compute power, and less power consumption, per square centimeter. And this is what datacenter architects are craving!

Back to the datacenter

Unlike in the past, all the components needed to build a successful ARM-based datacenter ecosystem are now available. ARM CPUs don't offer the same per-core performance as x86 CPUs, but it is also true that many applications and workloads run in a massively parallel fashion, and the shift toward containers will reinforce this trend further. At the end of the day, for many workloads, compute power density is becoming more important than single-core performance.
Other aspects include:

  • software, which is much better optimized than in the past;
  • 64-bit ARM CPUs, which are much more mature now;
  • automation and orchestration tools, which are now ready to handle hundreds of thousands of nodes in a single infrastructure.

Today ARM CPUs are relegated to small appliances, or serve as components of larger x86-based systems, but this could change pretty soon. I want to mention Kaleao here, a startup working on an interesting ARM-based hyper-converged infrastructure (HCI) solution. This is just one example, and there are many others working on ARM-based solutions for the datacenter now.

Closing the circle

ARM has potential in the datacenter, but we've been saying that for years now, and reality has always shown the contrary. This time around things could be different: the stars are all aligned, and if it doesn't start happening now, I think it will only get harder in the future.
It's also interesting to note that there is a lot of ferment around compute power when it comes to large-scale datacenters. Google designing its own specialized chips for AI, alternative CPUs and GPUs for HPC-like applications in the cloud, and quantum computing are just a few examples… ARM is one of multiple options on the table for building next-gen datacenters.
My last note goes to Intel, which has demonstrated multiple times that it is capable of reacting and innovating when the market changes. Its CPUs are very powerful and the instruction set has improved generation after generation. Are power consumption and density at the core of its current compute designs? Definitely not, and its chips don't look like the best CPUs for future cloud applications… but who knows what's up its sleeve!

Originally posted on Juku.it

The consumer IoT standards wars

On the same day during the second week of November, both Bluetooth and Thread announced major updates and roadmaps for their respective network-layer protocols. It may have been mere coincidence, but what isn't hard to believe is just how much the battle to become the dominant communications protocol in consumer IoT is drawing in every big IT player, from Apple to Samsung to ARM.
To review, what's going on here is that there are a number of competing protocols in consumer IoT. Bluetooth and Thread are just two. There's also Apple's HomeKit, which is really a made-for-iOS certification program, as well as Google's newly announced Weave, IoTivity (backed by Intel), and the open source AllJoyn.
The protocols differ on many levels. Some of the differentiators include reported power characteristics, whether they enable true mesh networking, proprietary vs. open source, and IP requirements intrinsic to the alliances they’ve formed.
It's obvious that not all will survive, though what may be less obvious to those backing each protocol is that the proliferation of certification programs and competing protocols will actually move folks further away from the dream goal for the smart home: true device-to-device interoperability that's easy to enable. At least that's true in the short term. If a dominant networking protocol emerges in the home, as WiFi did a decade ago, it could provide the stable foundation that helps the overall market.
In terms of the news at hand, Bluetooth's announcements were the most compelling. Its 2016 tech roadmap includes four times the range, mesh networking, and double the speed without increasing power draw.
I’ve been slowly following the Bluetooth renaissance ever since the introduction of Bluetooth 4.0, also known as Bluetooth Low Energy. Bluetooth LE solved a lot of the annoying pairing issues and power problems associated with previous versions of Bluetooth. And for those reasons I saw it popping up in a lot of novel but compelling consumer IoT products like connected bike locks, where easy syncing with a smartphone was needed.
In terms of the smart home, Bluetooth has always had problems because its range is limited and it's not a mesh network, two requirements for a really robust smart home where data can seamlessly pass throughout the entire house. But that could all change in the future, and the fact that Bluetooth chips are cheap and in everyone's smartphone translates into a very large installed base.
Thread’s announcement heralded the opening of its program for device certification, in the same vein as Apple’s HomeKit certification. Over 30 products and components have now been submitted to the consortium, which includes Samsung, Nest, Freescale, Silicon Labs, Qualcomm and others.
The Thread protocol runs atop 6LoWPAN (IPv6 over Low-power Wireless Personal Area Networks), and can work with existing 802.15.4 hardware wireless devices with a software update. 802.15.4 is the basis for ZigBee. One of the major reported advantages of Thread is that it’s mesh network works well and that it’s self healing. Imagine a home with 10 or 20 Thread enabled devices. If the battery dies on one, another communications point could be quickly found so that the flow of data continues. It’s fair to say that in terms of developing and focusing on a mesh networking capability, Thread has been ahead of the competition and has truly been a protocol designed for consumer IoT.
What's at stake for all of these players exists on two levels. On one level they want to preserve their position in the market. Freescale, for example, already offers a pre-certified software stack for Thread and is expecting full certification for its microcontrollers, microprocessors, sensors, and communications options in the near future. If Thread gains traction, companies like Freescale want to be the go-to vendors for pre-certified components.
But the second layer of what's at stake relates to the value of the overall market. In Bluetooth's press release, Toby Nixon made sure to reiterate that the IoT market's potential could run as high as $11.1 trillion. I've honestly never seen a projection that high, despite the huge buzz around IoT, but there's a different consideration here.
If that very large market is ever to materialize, a secure, mesh-networked protocol with a large installed base will need to emerge. Much of the value of consumer IoT in places like the smart home revolves around consumers having positive experiences with products that work well with other smart home products. Point-application products, be they a thermostat or a smart lock, have incrementally more value in the market if they play well with other smart home devices.
The first wave of successful smart home products have been point-application products like the Sonos wireless speakers or the Nest thermostat. But the future of the smart home will have to be bigger than that. It'll have to be about a context-aware, integrated experience where developers are given the power to figure out creative applications that leverage the hardware resources across multiple home devices and multiple sensor systems.
I don't believe we even understand the full potential of the smart home yet, because we haven't given developers a means to build apps atop all of the hardware in a home. The sooner we settle on a robust protocol, the quicker we'll get to that smart home vision.

ARM launches a faster, more efficient chip design for smartphones

Nearly every single smartphone sold last year uses a processor originally designed by ARM. On Tuesday, the British company announced new processor designs that will likely end up in devices in 2016.

ARM announced a new CPU chip design and a new GPU chip design. The new CPU is going to be called the Cortex-A72, and it should replace the Cortex-A15 and Cortex-A57 as the "big" CPU for high-performance smartphones and tablets.

Remember that ARM encourages its customers — chipmakers — to lay out its processor cores in what it calls a "big.LITTLE" configuration. Fast, power-hungry cores handle jobs when single-core performance is important, and other tasks are delegated to the "little" cores, which use less power. The A72 will be a "big" core for most of ARM's customers, and will likely be paired with the Cortex-A53 design as its "little."
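On a Linux-based device you can often see the big.LITTLE split simply by comparing each core's maximum clock frequency. The following is a small sketch, assuming a Linux system that exposes the standard cpufreq sysfs files (exact paths and availability vary by kernel and vendor):

```python
# Illustrative only: group CPU cores by the maximum frequency reported by
# cpufreq. On a big.LITTLE SoC this usually separates the two clusters.
import glob
import re
from collections import defaultdict

def cores_by_max_freq():
    """Map max frequency (kHz) -> list of core ids, read from sysfs."""
    clusters = defaultdict(list)
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
        cpu_id = int(re.search(r"cpu(\d+)", path).group(1))
        with open(path) as f:
            clusters[int(f.read().strip())].append(cpu_id)
    return clusters

if __name__ == "__main__":
    for max_khz, cpus in sorted(cores_by_max_freq().items(), reverse=True):
        print(f"{max_khz / 1_000_000:.2f} GHz cluster: cores {sorted(cpus)}")
```

On a design pairing A72 and A53 clusters you would typically see two distinct groups, while a homogeneous CPU usually reports a single one.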

Currently, devices sporting ARM's A57 design are just starting to hit the market, usually in devices with a Qualcomm Snapdragon 810 chip. Many of last year's high-end devices use the A15. According to ARM, the A72 boasts performance 3.5 times better than the A15. More importantly for mobile devices, the A72 will use 75 percent less energy than the A15 on the same workload and will integrate with ARM's other designs, such as those for GPUs, display controllers, and video controllers.

“For our customers that do want to take all the pieces, it will all glue together and will be optimized in a very good way,” Ian Ferguson, ARM VP for marketing, said.

ARM says it's optimized the A72 design to be fabricated on TSMC's 16nm process, although other fabs — like Samsung, which is bragging about a new 14nm process — will also be able to produce the design. Ten chipmakers have already licensed the A72 design, including MediaTek, Rockchip, and HiSilicon. The A72 is a 64-bit chip, but 32-bit apps can run on it without modification.

ARM's new GPU design is called the Mali-T880, and it promises nearly double the performance of the Mali-T760, which is included in devices on sale today, while using 40 percent less energy on the same jobs. There's also a security feature called TrustZone, which eliminates backdoors for devices decrypting streamed 4K content.

"If studios are going to trust the streaming of data to these devices at the same time premium content is appearing in theaters, that content has to be secured," Ferguson said. "With TrustZone, as the information comes down in encrypted form on the handset, it will go to the display without any backdoors to pull off that content and use it in other ways."

ARM believes that mobile GPUs will soon be used for certain non-graphics computational tasks like speech recognition locally on smartphones. “We’re approaching the time for [general processing] GPU computing. That world is coming,” Ferguson said.

Although these new designs are available to license today, ARM hasn't yet discussed specific technical details; it promises that information is coming in April.

Confirmed: Amazon is buying Annapurna Labs

Amazon has indeed agreed to purchase Annapurna Labs, a super-stealthy Israeli company that is reportedly working on new chip technology. Talks were first reported in Israeli financial newspaper Calcalist and picked up by Reuters and others.

An Amazon spokesperson confirmed the acquisition via email Thursday afternoon but provided no detail.

Annapurna Labs was privately owned by Avigdor Willenz, who founded Galileo Technology (later acquired by Marvell), with additional investment from ARM, the British chip maker, and Walden International, a VC firm, according to the original report. The purchase price was reportedly $350 million.

According to its LinkedIn page, Annapurna Labs:

is a cutting-edge technology startup, established in 2011 by industry veterans. We are well funded, with sites in Israel and Silicon Valley. We are operating in stealth mode and can’t share much about our company, but we’re hiring on an exclusive basis, seeking smart, aggressive, multi-disciplinary engineers and business folks, with focus on teamwork in a group of highly talented team.

It would make sense for Amazon to invest in cutting-edge chip technology given that its Amazon Web Services arm is always on the hunt for faster, more efficient infrastructure.