In case anyone was wondering, financial services institutions are hungry to feed a ceaseless demand for lower latency in their trading systems. I had a chance to sit down with Lou Modano, head of global infrastructure for NASDAQ OMX, yesterday at Cisco Live, where he shared some of the details on how NASDAQ continually shaves microseconds off the time it takes for trades to take place.
When you rely entirely on electronic trading, said Modano, a faster system is key to adding liquidity to the market. That means performance is king, even at the expense of higher costs. Over the past four years, NASDAQ has cut roundtrip latency on the NASDAQ Stock Exchange from 1 millisecond to less than 100 microseconds, and the only way to sustain that pace is to constantly evaluate cutting-edge technologies for improving computing performance. For a recent upgrade of the network infrastructure at its Philadelphia exchange, that meant loading up on Cisco switches to improve latency, as well as scalability.
Network latency at that exchange is now less than 10 microseconds, Modano said, a 50 percent improvement.
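For a sense of scale, roundtrip times this small have to be measured with high-resolution timers rather than wall-clock timestamps. Here is a minimal, illustrative Python sketch that times one-byte roundtrips over a loopback TCP socket in microseconds; the function name and setup are my own for illustration, not anything NASDAQ uses, and loopback numbers say nothing about a real exchange network.

```python
import socket
import time

def measure_roundtrip_us(n_trials=1000):
    """Median roundtrip latency over a loopback TCP socket, in microseconds."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.create_connection(("127.0.0.1", port))
    conn, _ = server.accept()
    # Disable Nagle's algorithm so each 1-byte message is sent immediately
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    samples = []
    for _ in range(n_trials):
        start = time.perf_counter_ns()
        client.sendall(b"x")   # client sends a one-byte "order"
        conn.recv(1)           # server receives it...
        conn.sendall(b"x")     # ...and sends back a one-byte "ack"
        client.recv(1)
        samples.append((time.perf_counter_ns() - start) / 1000)  # ns -> us

    client.close(); conn.close(); server.close()
    samples.sort()
    return samples[len(samples) // 2]  # median is more stable than the mean

if __name__ == "__main__":
    print(f"median loopback roundtrip: {measure_roundtrip_us():.1f} us")
```

Even on loopback, kernel scheduling and the TCP stack typically push the median into the tens of microseconds, which is why exchanges chasing single-digit figures lean on specialized switches and kernel-bypass networking.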
The upgrade was ahead of the normal refresh cycle, but Modano said NASDAQ wanted to capitalize on increased share in options trading by making its network even faster immediately. Not that NASDAQ is an all-Cisco shop, though, or even has any particular allegiance to the company. Modano said his team will use whatever software, servers and networking gear provide the best performance when it’s ready to buy. That was Cisco networking gear — this time — but Modano said NASDAQ doesn’t run Cisco servers, although he wouldn’t specify what servers do populate its data centers.
And it’s always looking for the next big thing, especially when it comes to hardware. Modano said NASDAQ evaluates almost every product claiming to boost processor or network performance, whether it’s from a freshly minted startup or an established Fortune 500 vendor, adding that there is “a lot of opportunity for new players.” Not surprisingly, he declined to divulge any particularly exciting new technologies that NASDAQ is looking at.
He noted, however, that new hardware advances tend to be quite a bit ahead of the current state of software development (something we’ve seen even in mainstream data centers with the advent of multicore processors), so it takes a tight alliance among all the various infrastructure teams to ensure NASDAQ is squeezing as much performance as possible out of any new investment.
Financial institutions are known early adopters of tactics such as co-processing (offloading certain tasks to GPUs or even IBM’s short-lived Cell processor) and massive in-memory data fabrics, so who knows what they’re experimenting with now. My colleague Stacey Higginbotham noted in January that we might be in the midst of a hardware renaissance, so there’s plenty of new high-performance gear to choose from. I can only imagine how often words like terabit and nanosecond are bandied about without a hint of sarcasm as NASDAQ and its peers discuss their newest strategies for processing even more transactions in less time.
Image courtesy of Delta Computer Products.