Remember when CPU clock speeds were the driving force behind new computers? Going from a 500 MHz to a 1 GHz and then a 2 GHz machine meant noticeable improvements. Then chip vendors started adding more cores. But for the style of computing consumers use today, it’s not about the CPU anymore.
It’s all about graphics processors. Thanks to today’s visually intensive style of computing, a good GPU can improve the user experience far more than a fast CPU. In the data center, certain tasks are moving from commodity CPU boxes to GPUs, meaning that over the next year or two, more GPUs will be sold for corporate computing use.
That’s why Intel is pushing graphics chips such as Larrabee, while AMD is set to unveil integrated chipsets that combine CPUs with GPUs, the result of its acquisition of ATI in 2006. All of this was driven home for me during a trip to Nvidia a few weeks ago, where I saw, side-by-side, the difference between a computer with a super-fast CPU and a computer with a slower CPU but a high-end GPU.
Of course, the demo was optimized for graphics-intense programs (I didn’t see any spreadsheets), but the movies, games and transcoding were all impressive, and more akin to the things I use my laptop for nowadays anyhow. And then the Nvidia guys dropped a bomb on me.
All PDF documents now run through the graphics processor, they told me, as does Google Earth and multiple other web applications. The same goes for PowerPoint slides, Word and other parts of Microsoft Office, starting with Office 2007. On Macs, the visual interface on the file system is handled through the GPU, which makes flipping through thousands of photos and movies much easier. On the consumer side, the rise of such graphical interfaces helps people visually navigate through ever-increasing amounts of information.
Nvidia and AMD probably have the most to gain from this shift in the consumer field, but Intel won’t be sitting out. It’s on the enterprise side, however, where a GPU might offer even more value when it comes to rapid information processing. GPUs are good for applications that require a processor to crunch a lot of data in parallel; they’re not good for step-by-step processes that require decision-making at each step.
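That distinction can be sketched in plain Python — this is just a CPU-side illustration of the *shape* of each workload, not actual GPU code, and the numbers are made up for the example:

```python
# Data-parallel work: the same operation applied independently to every
# element. This is the shape of work a GPU spreads across thousands of
# threads (simulated here with a list comprehension).
pixels = [0.1, 0.5, 0.9, 0.3]
brightened = [min(p * 1.2, 1.0) for p in pixels]  # no element depends on another

# Step-by-step work: each iteration's result feeds the next decision,
# so the loop cannot be split across parallel threads. This is where a
# conventional CPU still wins.
balance = 100.0
for rate in [0.02, -0.01, 0.03]:
    balance = balance * (1 + rate)  # must run strictly in order
```

The first loop could process every pixel simultaneously; the second is inherently serial, which is why the two chip types complement rather than replace each other.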
So Nvidia doesn’t actually want to kill CPUs so much as have its GPUs shoulder some of the load in corporate data centers that provide transcoding services and run database queries and Monte Carlo simulations. This heterogeneous computing environment will be more expensive than the Google-like x86 server farms, but certain industries have already shown they will pay for specialized processing. Financial institutions, for example, have deployed servers using Sun’s Niagara chips or Azul Systems’ many-core boxes for high-end computing, paying a premium for faster processing.
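Monte Carlo simulations are a natural fit for GPUs because every sample is computed independently. A minimal sketch of the idea, run serially on the CPU here for illustration (the `estimate_pi` function and its parameters are this post’s own toy example, not anyone’s production code):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by throwing random points at the unit square and
    counting how many land inside the quarter circle.

    Each sample is independent of every other, which is exactly why
    Monte Carlo workloads map so well onto a GPU's thousands of
    parallel threads: on a GPU, each thread would evaluate its own
    batch of samples, and only the final count is combined.
    """
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # converges toward pi as samples grow
```

A bank pricing derivatives runs the same pattern with millions of simulated market paths instead of random points, which is where the speedup starts to justify specialized hardware.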
As the large content vendors and even carriers try to deliver media content in multiple formats to televisions, personal computers and mobile phones over IP networks, they’ll either have to pay more to store those multiple versions or pay for real-time transcoding, either in the data center or on the network. The increasing delivery of visual media over IP networks and the growing amount of electronic data stored in corporate databases both represent an opportunity for GPUs — one that could move the chips out of the graphics niche.