While we waste four cores, scientists use a million at a time

A group of Stanford researchers recently ran a complex fluid dynamics workload across more than a million cores on the Sequoia supercomputer. It’s an impressive feat and might foretell a future where parallel programming becomes commonplace even on our smartphones.

Facebook open sources Corona — a better way to do webscale Hadoop

Facebook has open sourced a new system called Corona for scheduling and managing Hadoop jobs. Corona aims to eliminate many of the problems that come with massive-scale Hadoop operations, and Facebook plans to use it to soon take its Hadoop deployment beyond just MapReduce.

5 trends that are changing how we do big data

In just a few years, big data has turned from a buzzword and concept best left for large web companies into a force that drives much of our digital lives. Here are five technological trends that will change how data is processed and consumed going forward.

Because Hadoop isn’t perfect: 8 ways to replace HDFS

Hadoop is on its way to becoming the de facto platform for the next generation of data-based applications, but it’s not without some flaws. Ironically, one of Hadoop’s biggest shortcomings right now is also one of its biggest strengths going forward — the Hadoop Distributed File System.

All aboard the Hadoop money train

Market research firm IDC released the first legitimate market forecast for Hadoop on Monday, claiming the ecosystem around the de facto big data platform will generate almost $813 million in software sales by 2016. But Hadoop’s actual economic impact is likely much, much larger.

Oracle faces big data, cloud, hardware triple whammy

For years, Oracle has wowed Wall Street with fat software margins: large companies that depend on Oracle relational databases pay whatever it takes to keep them up and running. It’s unclear whether Oracle can carry that dominance over into the big data era, however.