Intel’s new Xeon E5 v3 family of processors comes with up to 18 cores per socket, DDR4 memory support and software that can help orchestrate the allocation of resources in a data center.
The NSF has funded projects that will investigate how deep learning algorithms run on FPGAs and across systems connected by high-performance RDMA interconnects. Another project, led by Andrew Ng and two supercomputing experts, aims to put the models on supercomputers and give them a Python interface.
Coming off last month’s announcement at Structure of a new customizable chip loaded with an FPGA, Intel has shipped a version of its Xeon E7 v2 processor that links Oracle’s software to its hardware.
Microsoft has been experimenting with its own custom chip effort to make its data centers more efficient, and these chips aren’t centered on ARM-based cores, but rather on FPGAs from Altera.
A pair of MIT graduate students is working on a system they think can speed up data analysis without putting the data on expensive DRAM. The project stores the data on a cluster of flash drives, each connected to a field-programmable gate array, or FPGA. The FPGA is the key: it can perform calculations on the data in place, before anything is sent over the network to the main processor. The architecture could underpin an affordable interactive database system for budget-conscious, data-heavy fields such as science.
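The payoff of computing at the storage layer is that only results, not raw data, cross the network. A minimal sketch of that idea in Python (the function names and sample data here are hypothetical, not from the MIT project):

```python
# Near-data processing sketch: each "storage node" stands in for a flash
# drive with an attached FPGA. It applies the computation where the data
# lives, so the host processor only ever receives matching records.

def storage_node_scan(records, predicate):
    """Stand-in for the FPGA on one flash drive: filter in place,
    emit only the records that match."""
    return [r for r in records if predicate(r)]

def host_query(nodes, predicate):
    """Stand-in for the main processor: it aggregates pre-filtered
    results rather than scanning raw data itself."""
    results = []
    for records in nodes:
        results.extend(storage_node_scan(records, predicate))
    return results

# Example: three "flash drives" holding sensor readings; the host asks
# for readings above a threshold and never sees the rest of the data.
nodes = [[3, 87, 12], [55, 91], [7, 64, 99]]
hot = host_query(nodes, lambda x: x > 50)
print(hot)  # [87, 55, 91, 64, 99]
```

In this toy version only 5 of 8 values ever reach the host; on real hardware the same push-down of work is what lets cheap flash plus FPGAs approximate the interactivity of an all-DRAM system.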