IBM said today that it will develop two new supercomputers for the U.S. Department of Energy that are based on IBM’s new Power servers and will contain NVIDIA GPU accelerators and Mellanox networking technology. The new supercomputers, to be named Summit and Sierra, will be ready to roll in 2017; IBM will end up scoring a cool $325 million in government contracts.
MaidSafe’s project is absurdly ambitious — a serverless network system that offers free storage, repels surveillance and effectively constitutes a distributed supercomputer. But maybe, just maybe, it might work.
Chris Fenton and Andras Tantos decided they wanted a model of the famed supercomputer for their desk. It turned out to be a more complicated project than expected.
Researchers have simulated 1 second of real brain activity, on a network equivalent to 1 percent of an actual brain’s neural network, using the world’s fourth-fastest supercomputer. The results aren’t revolutionary just yet, but they do hint at what will be possible as computing power increases.
A group of Stanford researchers recently ran a complex fluid dynamics workload across more than a million cores on the Sequoia supercomputer. It’s an impressive feat and might foretell a future where parallel programming becomes commonplace even on our smartphones.
Early attempts at cloud-based video gaming were a flop. Roy Bahat, of OUYA, says it’s still a worthy pursuit, but should be based on a new generation of games built specifically to take advantage of the cloud’s supercomputing strengths.
When you mix a researcher, a massive online encyclopedia and a supercomputer, the result is a collection of insights and visualizations into what Wikipedia looks like mapped across time and space. It looks a lot like what our history books might look like if they were merged and graphed.
An effort to build a telescope that can see back 13 billion years to the creation of the universe is prompting a five-year, €32 million ($42.7 million) effort to create a low-power supercomputer and the networks to handle the data the new telescope will generate.
Los Alamos National Laboratory is trying to build an exascale computer, which could process one billion billion calculations per second. The man in charge of executing that vision, however, sees a big obstacle to building it. That problem, discussed at Structure:Data, is resilience.
Looks like Oracle has some competition when it comes to selling big iron for big data. On Wednesday, Cray, the Seattle-based company best known for building some of the world’s fastest supercomputers, announced it’s getting into the big data game.