Gordon Moore has a lot to answer for. His prediction in the now seminal 'Cramming more components onto integrated circuits' article from 1965 evolved into Intel's corporate philosophy and has driven the semiconductor industry forward for 45 years. The prediction, as it is commonly paraphrased today, is that the number of transistors on a CPU doubles roughly every 18 months.
This self-fulfilling prophecy has driven CPU design into the realm of multicore. A decade ago Intel and AMD could keep pace with Moore's law by using their growing transistor budgets to double performance on that 18-month scale. But as clock speeds went up, so did both energy requirements and heat output, which forced a rethink of how to spend the doubling transistor count.
More space means more cores
While multicore solutions were initially firmly placed in the server and workstation space, they were deemed the future of desktop computing as well. Unfortunately, at the time the vast majority of consumer-focused products were single threaded, and while multicore processors did make multitasking a bit smoother, they didn't provide much of a noticeable benefit over single core solutions.
Even now there are few applications that take full advantage of multicore processors. For those that do the benefits are huge, but some applications just don't work well in a multithreaded environment, and others prefer to offload tasks involving parallel data sets to the GPU.
Despite this, CPU manufacturers decided that more cores were the future of processors. We have seen a progression from dual core to quad core and now hexa core products. But what we notice in the majority of our benchmarking is that, no matter how many cores a CPU has, results tend to be limited more by the speed of the cores than by their number (there are some multicore tests that do show a difference, but they are the exception rather than the rule).
Single core performance is still king
What this has led to is the rise of CPU technology designed to speed up single core performance when an application doesn't use the other cores. Intel's version of the technology is called Turbo Boost, while AMD's is called Turbo Core.
Intel's core by core solution
These technologies are similar, but implemented in different ways. Turbo Boost is available on Intel's Core i5 and Core i7 CPUs (both mobile and desktop versions). What it essentially does is manage core speed based upon a combination of load and TDP (Thermal Design Power). The TDP rating effectively provides a budget for the amount of power the CPU can draw and the heat it can dissipate. If cores aren't needed, Turbo Boost will clock them down and use the freed-up budget to speed up the active cores.
Intel has implemented Turbo Boost on its Core i7 and Core i5 CPUs
This is all handled automatically and works well. Unless you are specifically running multithreaded applications there will inevitably be a core or two going unused. Turbo Boost allows end users to get more out of their CPU for the single threaded programs that dominate day-to-day use, and means that when programs actually are multithreaded they will still get the most out of the CPU.
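The budget-shuffling idea can be captured in a few lines. The sketch below is a deliberately simplified model of a TDP-budget boost scheme, not Intel's actual algorithm; all the wattage and clock figures are hypothetical, chosen only to show the mechanism of redistributing an idle core's power allowance to the busy ones.

```python
# Hypothetical numbers for a four-core part; none of these values
# come from Intel documentation.
TDP_BUDGET_W = 95.0           # total power envelope for the package
BASE_POWER_PER_CORE = 23.75   # watts each core draws at its base clock
BASE_CLOCK_GHZ = 2.8
GHZ_PER_WATT = 0.02           # assumed extra clock gained per spare watt

def boosted_clock(core_loads):
    """core_loads: per-core utilisation figures (0.0 to 1.0).
    Idle cores are clocked down, and their share of the power budget
    is redistributed to the active cores as extra frequency."""
    active = [load for load in core_loads if load > 0.1]
    idle_count = len(core_loads) - len(active)
    if not active:
        return BASE_CLOCK_GHZ
    spare_watts = idle_count * BASE_POWER_PER_CORE
    per_core_bonus_w = spare_watts / len(active)
    return BASE_CLOCK_GHZ + per_core_bonus_w * GHZ_PER_WATT

# One busy core, three idle: the lone active core inherits the whole
# spare budget and clocks well above base.
single_threaded = boosted_clock([1.0, 0.0, 0.0, 0.0])
# All four cores busy: no spare budget, everything stays at base clock.
fully_loaded = boosted_clock([1.0, 1.0, 1.0, 1.0])
```

The point of the model is the asymmetry: a single-threaded workload earns a large clock bonus, while a fully loaded chip gets none, which matches the behaviour described above.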
Of course, this does bring up some interesting scenarios, such as our recent experience with the Core i7 powered MacBook Pro in the PC Authority Labs. Because TDP is treated as a performance budget to be spent, rather than as an upper limit rarely reached, the CPU is much more likely to be running at a high heat output all the time, not just when all cores are working.
AMD does things by halves
AMD recently implemented its own technology, Turbo Core, on its Phenom II X6 processors. Turbo Core is in many ways a less elegant solution than Intel's. This is most likely because while Turbo Boost is ingrained in the design of the Nehalem architecture upon which Core i5 and i7 CPUs are based, Turbo Core has been added to the K10 architecture that has underpinned AMD's processors for some time now.
AMD's Turbo Core is featured on the new 'Thuban' core Phenom II X6 CPUs
Turbo Core works by treating the hexa-core design as two tri-core blocks. If no more than three cores are under load, the processor will clock the other three down to 800MHz and speed up the block that is being used. As with Intel, the amount of extra performance is determined by the TDP of the processor. Unlike Intel, this isn't determined on the fly; instead, the Turbo Core speed is fixed per processor model. For example, the top end Phenom II X6 1090T runs at 3.2GHz, but when Turbo Core is active the busy cores run at 3.6GHz while the unused cores drop to 800MHz.
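Because the Turbo Core speeds are fixed per model rather than computed on the fly, the whole scheme reduces to a simple lookup. The sketch below models the two-block behaviour using the 1090T clocks cited above; the function itself is our simplification for illustration, not AMD's firmware logic.

```python
# Clock figures for the Phenom II X6 1090T, as cited in the text.
BASE_MHZ = 3200     # stock clock
TURBO_MHZ = 3600    # fixed per-model Turbo Core clock
PARKED_MHZ = 800    # clock for the downclocked tri-core block

def turbo_core_clocks(loaded_cores):
    """loaded_cores: number of cores under load (0 to 6).
    Returns (clock for loaded cores, clock for the other cores),
    treating the six cores as two blocks of three."""
    if 0 < loaded_cores <= 3:
        # Three or fewer busy cores: boost them, park the other block.
        return TURBO_MHZ, PARKED_MHZ
    # Four or more busy cores (or fully idle): base clock across the board.
    return BASE_MHZ, BASE_MHZ
```

The all-or-nothing block structure is exactly why the approach is less granular than Intel's: loading a fourth core drops every core back to the base clock, rather than trading off budget core by core.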
Better single threaded performance all round
This is a much less granular process than Intel's, a byproduct of it being a late addition to the K10 CPU architecture. Just like Intel's solution, Turbo Core will be active the vast majority of the time, purely because very few applications will tax three cores, let alone six.
Intel's Turbo Boost is by far the more elegant way of balancing single core performance with TDP, thanks largely to the fact that it is built into the DNA of the Nehalem line of processors. AMD, on the other hand, has shoehorned a less elegant but functionally similar technology on top of its now ageing K10 architecture. We expect that the next AMD CPU design, codenamed Fusion, will feature something more akin to Intel's Turbo Boost.
Other PC technologies you've probably read about in our reviews:
Optimus technology: learn how NVIDIA's new technology balances laptop battery life and graphical performance
Also, how quiet or noisy is your PC? John Gillooly remembers the painful whine that used to be the hallmark of loud computing gear.