Has Intel Reached the Limits of x86?

Technology has enabled us to bring more brains and connectivity to devices ranging from digital picture frames to servers crunching scenes for the latest animated movie. But as we ask computers to take on more jobs, are we counting on a general-purpose architecture to deliver on tasks that may be best suited to specialized chips? Intel (s intc) may have run into this problem with today's announcement that it will issue a software development kit, a week after delaying its own Larrabee graphics chip.

For Intel, the question becomes: How far can the x86 architecture stretch? The Larrabee delay suggests that using x86 to build a decent graphics processor may work, but not well enough to compete against specialty GPUs. Intel owns the corporate and personal computing markets, with AMD (s amd) a distant second. But mobile devices demand lower-power silicon, and our increasingly visual world relies on graphics processors for faster performance, so Intel has introduced Atom at the low end and pushed to launch Larrabee for high-end graphics, with both using the x86 instruction set.

Abraham Maslow famously said, “If all you have is a hammer, then every problem looks like a nail.” Intel, with its x86 monopoly, has looked at the problems of low-power cell phone chips and high-performance graphics processors and decided that both look like nails for its x86 hammer. But chipmakers pushing the ARM (s armh) architecture for the mobile and embedded markets clearly disagree, while Nvidia (s nvda), AMD and even IBM (s ibm) have fled the confines of x86 when it comes to delivering graphics.

So far the jury is still out on Intel’s success in unseating ARM among mobile and embedded vendors. And Intel’s delay of Larrabee reflects its inability to deliver comparable performance for the price when stacked up against the GPU vendors, said Jon Peddie, who tracks the GPU market. However, he’s less worried about the limits of the x86 architecture. Peddie said Intel plans to take what it has learned from Larrabee and develop a coprocessor for the high-performance computing market, where accelerators in the form of GPUs and processors like IBM’s Cell have gained traction.

However, Intel’s Larrabee decision drives home worries that as compute jobs fragment, we’re moving into a post-x86 world. If that’s true, should Intel expand beyond x86 to ensure its growth, or should it make sure it owns the huge swath of middle ground where it’s hard to imagine x86 losing? “Intel has experimented with every competitive architecture that’s been built, and it keeps circling back to x86,” Peddie said. “They know, ‘We do x86 really well, so let’s just do it.'”

This means we’re likely to see some tweaks to Larrabee for the HPC market, and Intel will keep trying to move research out of the labs and into the market, such as the low-power 48-core chip for highly virtualized environments it announced last week. After all, a hammer is a pretty essential tool to own.