The iPhone Makes Semiconductors Fun Again!

For a while there, covering the chip industry was like covering a race run by a rabbit and a cheetah. AMD was the rabbit, while Intel — with its much larger market cap and greater profits — was the cheetah. Every now and then the rabbit would fool you into thinking he was going to pull ahead, but we all knew who was going to win. In the past few years, however, two things have brought more runners and more diversity to the course: a challenge to the x86 architecture, and the iPhone.

I could probably find a way to credit the iPhone for changing the furniture industry if I tried hard enough (it could be the new Six Degrees of Kevin Bacon game for tech journalists). But in this case the iPhone pushed the real Internet — as opposed to a carrier-defined portal — out to mobile consumers and showed them how compelling such access could be. That made clear to carriers that data usage, which was already on the rise, could become a huge revenue booster if consumers were given the right type of devices. That, in turn, prompted chip makers to see gold in the form of the 33.2 million high-end handsets sold around the world.

That pushed the chip world into viewing these devices as mini computers requiring their very own processors. Obviously these processors need to be small, use very little energy and still cycle fast enough to load and display web pages, pictures and other mobile computing tasks. Chip firms had been thinking about those functions for years, but the success of the iPhone showed how important the mobile computing experience could be. So Intel begat Atom, a chip designed not for a mobile phone but for a smaller laptop that Intel calls a mobile Internet device.

Other chip firms aren’t standing still, either. Via Technologies, which for a long time had the handheld computer market to itself, is refreshing its line of chips. Qualcomm now has Snapdragon, and Texas Instruments is offering OMAP chips. The dark horse in all of this frenzy is Nvidia’s Tegra offering, which is really compelling in demos. But Nvidia has an uneven record of supporting its products, so it remains to be seen if the real-life experience can meet the high expectations set by the demos.

Nvidia is also making my chip coverage fun with its efforts to knock out the x86 architecture. Intel and AMD dual-, triple- and quad-core chips will never go away, but both Nvidia and IBM are pushing credible alternatives for high-end processing. Nvidia’s dressing up its graphics processing chips (GPUs) to run scientific queries, visually intensive tasks and repetitive problems that can be done in parallel, such as video decoding and encoding. The influx of digital media is creating a need for such capabilities in an increasing number of data centers.

IBM, meanwhile, is pushing its Cell processor — which was designed with Sony and Toshiba eight years ago for the PlayStation 3 — for enterprise servers and high-performance computing. In many ways it’s attacking the same problems Nvidia’s GPUs are, with encoding and Monte Carlo simulations showing off the Cell’s specially designed, nine-core architecture. IBM may have an advantage over Nvidia because of its enterprise focus. It offers an enterprise-ready Cell-based blade server, while Nvidia sells its chips to firms such as Atrato and Rackable for corporate consumption.

So the two-company race that was never all that competitive has turned into several races with multiple players. Ironically, AMD doesn’t have a mobile processor yet, and isn’t really pushing its GPUs into jobs other than running graphics. Perhaps it believes that if it stays the PC course it can pass the cheetah while Intel focuses on Atom and smaller devices.