MIT researchers claim they have a way to make faster chips

A team of MIT researchers has discovered a possible way to make multicore chips a whole lot faster than they currently are, according to a recently published research paper.

The researchers’ work involves the creation of a scheduling technique called CDCS, which stands for computation and data co-scheduling. The technique distributes both data and computations throughout a chip; on a 64-core chip, the researchers claim, it increased computational speed by 46 percent while cutting power consumption by 36 percent. The boost in speed is important because multicore chips are becoming more prevalent in data centers and supercomputers as a way to increase performance.

The basic premise behind the new scheduling technique is that data needs to be near the computation that uses it, and the best way to achieve that is with a combination of hardware and software that distributes both the data and the computations throughout the chip more effectively than before.

Current techniques like nonuniform cache access (NUCA), which basically involves storing cached data near the computations that use it, have worked so far, but they don’t take into account the placement of the computations themselves.

The new research touts an algorithm that places the data and the computations together, rather than optimizing the placement of the data alone. This algorithm allows the researchers to anticipate where the data will need to be located.

“Now that the way to improve performance is to add more cores and move to larger-scale parallel systems, we’ve really seen that the key bottleneck is communication and memory accesses,” said MIT professor and author of the paper Daniel Sanchez in a statement. “A large part of what we did in the previous project was to place data close to computation. But what we’ve seen is that how you place that computation has a significant effect on how well you can place data nearby.”

The CDCS hardware added to the chip takes up about 1 percent of its available space, but the researchers believe that cost is worth it for the performance increase.
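To get a feel for why co-placing computation and data matters, here is a deliberately simplified sketch in Python. It is not the researchers’ CDCS algorithm: the 4x4 tile mesh, the thread-to-data access counts, and the naive “cluster the threads first, then place the data” heuristic are all invented for illustration. It compares a NUCA-style policy that only moves data toward already-pinned threads against one that also chooses where the threads run before placing the data.

```python
from itertools import product

MESH = 4  # hypothetical 4x4 mesh of tiles, each with a core and a cache bank

def hops(a, b):
    """Manhattan distance between two tiles: a rough proxy for on-chip latency."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Invented workload: how often each thread touches each shared data block.
accesses = {
    "t0": {"A": 90, "B": 10},
    "t1": {"A": 80, "C": 20},
    "t2": {"C": 70, "B": 30},
    "t3": {"B": 60, "A": 5},
}
blocks = ("A", "B", "C")
tiles = list(product(range(MESH), repeat=2))

def total_cost(thread_tile, data_tile):
    """Access-weighted hop count for a complete placement of threads and data."""
    return sum(n * hops(thread_tile[t], data_tile[d])
               for t, touched in accesses.items()
               for d, n in touched.items())

def best_tile_for(block, thread_tile):
    """Tile that minimizes the weighted distance from this block to its users."""
    return min(tiles, key=lambda tile: sum(
        touched.get(block, 0) * hops(thread_tile[t], tile)
        for t, touched in accesses.items()))

# (a) NUCA-style: threads stay wherever they were pinned (here, the four
# corners); only the data gets moved close to them.
fixed_threads = {"t0": (0, 0), "t1": (3, 3), "t2": (0, 3), "t3": (3, 0)}
data_only = {b: best_tile_for(b, fixed_threads) for b in blocks}

# (b) Co-placement flavor: first cluster the threads on neighboring tiles,
# then place each data block near the threads that use it. Because the
# threads sit close together, the data can sit close to all of them.
co_threads = {t: tiles[i] for i, t in enumerate(accesses)}
co_data = {b: best_tile_for(b, co_threads) for b in blocks}

print("data-only placement cost:", total_cost(fixed_threads, data_only))
print("thread + data co-placement cost:", total_cost(co_threads, co_data))
```

In this toy setup the co-placed layout ends up with a much lower access-weighted hop count than the data-only layout, which is the intuition behind Sanchez’s point that where you put the computation determines how well you can place the data nearby.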

US weather agency to boost supercomputers to 2.5 petaflops each

The National Oceanic and Atmospheric Administration (NOAA) plans to increase the capacity of its two supercomputers roughly tenfold by October 2015, the agency said Monday. The agency hopes the upgrade will deliver more accurate and timely weather forecasts.

The supercomputer upgrade comes courtesy of a $44.5 million contract with IBM, which is subcontracting with Seattle-based supercomputer maker Cray Inc. to improve the systems. Of that $44.5 million, NOAA said that $25 million “was provided through the Disaster Relief Appropriations Act of 2013 related to the consequences of Hurricane Sandy.”

The National Weather Service (part of NOAA) will reap the benefits this month, when the two supercomputers double their current total capacity from 0.776 petaflops to 1.552 petaflops as the first step of the overhaul. With the bump in power, the National Weather Service will be able to run an upgraded version of its Global Forecast System with better resolution and longer-range forecasts.

When the upgrade is finished, each supercomputer should deliver 2.5 petaflops of capacity, for a total of 5 petaflops.

While that’s a sizable increase in capacity, it’s still well short of the world’s fastest supercomputer, China’s Tianhe-2, which can deliver 55 peak petaflops.

In November, IBM announced that it would build two new supercomputers based on IBM’s OpenPower technology for the U.S. Department of Energy. Those new supercomputers should be functional by 2017 and will supposedly deliver more than 100 peak petaflops.

IBM to build two supercomputers for the U.S. Department of Energy

IBM said today that it will develop two new supercomputers for the U.S. Department of Energy that are based on IBM’s new Power servers and will contain NVIDIA GPU accelerators and Mellanox networking technology. The new supercomputers, to be named Summit and Sierra, will be ready to roll in 2017; IBM will end up scoring a cool $325 million in government contracts.

Amazon details how it does networking in its data centers

Amazon shed some light on what goes on with networking inside its many data centers on Wednesday at AWS re:Invent 2014 in Las Vegas. James Hamilton, vice president and distinguished engineer at Amazon Web Services, laid out the networking details during a conference session that also touched on data centers and databases.