What happens when multiple-core processors are no longer sufficient for our demands? Stuart Andrews looks at the future of CPUs.

‘I think we’re actually on the verge of a revolution,’ says Phil Emma of IBM’s Thomas J Watson Research Center. ‘The bad news is that classical scaling isn’t going to work in quite the same way anymore. But the good news is that when we had 20-odd years of this scaling theory working for us, it allowed us as an industry to become complacent – now we have to look to extend things in different ways, and that’s going to call for a lot more ingenuity.’
Phil Emma has a point. There’s life in Moore’s Law yet; processor complexity still doubles every 18 to 24 months, performance continues to improve, and the shift from 90nm to 65nm technology has improved speed while reducing heat and power consumption. However, there’s a growing sense that the party is over – that progress can’t continue without real innovations in materials and manufacturing. CPU architects understand that getting high performance in tomorrow’s applications will require more than a few additional instructions and a die-shrink or dollop of cache; it will require a rethink of the way CPUs work.
You can see this process at work in the processors emerging today. Take a look at the Intel Core 2 Duo. It’s a native dual-core chip, not two processors bolted together on a single die, and its shared Level 2 cache can be allocated dynamically, so that if one core is doing all the work it gets the lion’s share of the resources. It’s a design built for the way real applications work in practice, not how Intel might like them to work in theory. And in the next 12 months, we can expect quad-core workstation/server and desktop variants (codenamed Clovertown and Kentsfield) to push its architecture even further.
AMD’s next generation may arrive later, but it’s the biggest departure for the company since Athlon 64. Expected to ship in mid-2007, the K8L (as it’s widely, if unofficially, known) is a native quad-core architecture, designed to work more efficiently with larger memory spaces and new SIMD instructions, with double the SSE and floating-point resources of the current Athlon 64 line. Like Core 2 Duo, it can handle 128-bit vector operations in a single cycle.
So where are we headed from here? Well, take these trends and stretch them out much further: think multiple processors, wider execution pipelines, larger, smarter caches and beefy, out-of-order prefetch and scheduling hardware. In its own future-looking documents, Intel describes processors ‘that will have dozens and even hundreds of cores in some cases’. AMD is thinking along similar lines. K8L and its successors will be modular designs, making it easier to integrate more cores and transports as the market demands.
Expect the execution hardware to get bigger and learn to handle more specialised tasks. AMD senior fellow Chuck Moore explains: ‘If you look at a processor today – on a floor plan diagram, for example – and say, “Okay, where are the instructions actually executed?”, what you see is that the execution units aren’t that big.’ This, Moore says, will change, with future AMD processors doing more to widen out the SSE data paths so that there’s more execution density for media-processing. ‘We’re going to see more media processing capability being built right into these general-purpose processors.’