Researchers hope deep learning algorithms can run on FPGAs and supercomputers

The NSF has funded projects that will investigate how deep learning algorithms run on FPGAs and across systems connected by high-performance RDMA (remote direct memory access) interconnects. Another project, led by Andrew Ng and two supercomputing experts, aims to run the models on supercomputers and give them a Python interface.

Grad students fuse flash and FPGAs for fast data processing

A pair of MIT graduate students is building a system they believe can speed up data analysis without loading the data into expensive DRAM. The project stores the data on a cluster of flash drives, each connected to a field-programmable gate array, or FPGA. The FPGA is the key piece: it can perform calculations on the data in place before sending it over the network to the main processor. The architecture could potentially underpin a functional interactive database system for budget-conscious, data-heavy fields such as science.
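To make the idea concrete, here is a minimal Python sketch of the near-data-processing pattern described above, not the students' actual design: each storage node filters and aggregates its own shard in place, so only a small result crosses the network instead of the raw records. The StorageNode class, the pushdown_filter_sum method, and the generated data are hypothetical stand-ins for the flash-plus-FPGA hardware.

import random

class StorageNode:
    """Models one flash drive paired with an FPGA that can compute in place."""

    def __init__(self, records):
        # Data resident on flash; in this pattern it is never copied to host DRAM.
        self.records = records

    def naive_read_all(self):
        # Conventional path: ship every record to the host for processing.
        return list(self.records)

    def pushdown_filter_sum(self, threshold):
        # Near-data path: the "FPGA" filters and aggregates locally,
        # returning only a single number over the network.
        return sum(r for r in self.records if r >= threshold)


def main():
    random.seed(0)
    # A small cluster of storage nodes, each holding its own shard of the data.
    cluster = [StorageNode([random.randint(0, 100) for _ in range(100_000)])
               for _ in range(4)]

    threshold = 90

    # Conventional approach: move all records, then compute on the host.
    shipped = [r for node in cluster for r in node.naive_read_all()]
    host_result = sum(r for r in shipped if r >= threshold)

    # Pushdown approach: each node returns one partial sum.
    partials = [node.pushdown_filter_sum(threshold) for node in cluster]
    pushdown_result = sum(partials)

    assert host_result == pushdown_result
    print(f"records moved (conventional): {len(shipped)}")
    print(f"values moved (pushdown):      {len(partials)}")
    print(f"result: {pushdown_result}")


if __name__ == "__main__":
    main()

The payoff shows up in the final counts: the pushdown path moves four partial sums where the conventional path moves 400,000 records, which is the basic argument for computing next to the flash rather than shipping everything to host DRAM first.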