At some point in the next couple of years we should be able to balance a billion transistors on the tip of a finger. Intel has already shown off an Itanium processor with 500 million transistors, and if Moore's Law holds, no later than 2006 we'll see a single processor with twice that number.
In many ways this is seen as the next significant milestone in processor technology, after the 1GHz mark was passed in 2000. It also marks a shift in the industry's emphasis away from frequency and towards features, because the prospect of a billion transistors beavering away raises an interesting question - what to do with them all?
One thing you can do with a lot of transistors is build a single large processor whose pipeline is broken into many simple individual stages, each of which can then be run at very high speed. This has been Intel's approach up to now with its 32-bit processors based around the IA32 architecture, such as the Pentium 4 and Xeon. Each successive generation has seen its pipeline increase in length, which means more instructions can be in flight simultaneously - although if one instruction fails or has to be discarded, the disruption ripples through the rest of the long pipeline, causing it to stall.
This creates another job for all those transistors: cache. The more high-speed memory available to the processor, the less impact pipeline stalls have, and the better the performance of the processor overall. For this reason, each generation of IA32 processor has also seen significant increases in cache built into the processor, to the point where around half of the transistors in these processors are now taken up by L1 and L2 cache.
This philosophy has worked well up to now, but Intel and AMD are beginning to hit its ceiling. Even though more transistors can now be crammed into a small space, doing so creates other problems, such as power consumption and excessive heat. These two factors alone have caused more problems in processor design over the last few years than any other. Intel, for example, has specially designed software that rearranges the layout of a processor die specifically to avoid 'hot spots' - localised areas of activity so intense they can destroy the processor.
Changing usage models
Luckily (or conveniently) for Intel and AMD, this approach is also yielding diminishing returns in terms of performance. This is not because the processors are slowing down relative to their size - by all accounts they're speeding up - but because of the way they are being used. There was a time when a spreadsheet would tax a processor's maths capabilities, leaving the user waiting while computations were performed. These days the tables have turned: it's now the computer that spends the vast majority of its time waiting for the user.
While contemporary CPUs are more than capable of running conventional applications, they are less suited to the kinds of tasks it is envisioned we'll be running in a few years' time. One of the industry's big themes at the moment is the digital home - which at this stage exists more in hype than in reality, but we're confident of its inevitability. This is where your PC becomes one hub in a network of devices that all communicate and share information: your personal video recorder and digital television, your mobile phone and home phone, your stereo, notebook, PDA and any other PCs you have around the house. In the not too distant future we can expect all these devices to run over a seamless network, so you can access the information and content of any device from any other. For example, you may have an internet connection streaming radio to one room in the house, while your PVR is streaming a pre-recorded digital television show in MPEG-2 format into another room, all while someone sits at the PC and plays an online game. In this scenario, today's processors just can't keep up.
Two cores are better than one
This is where the CPU manufacturers have turned to the high-end world of workstations and servers for advice, as this is where the digital home usage model is already being implemented. Servers, for example, have to manage multiple simultaneous requests and network connections without skipping a beat. They do this by splitting the work into individual threads, passing each thread off to one processor to work on while the other processors in the system handle other tasks.
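The thread-per-task model those servers rely on can be sketched in a few lines. The following is a minimal Python illustration (our own, not from any vendor): worker threads pull tasks off a shared queue and process them independently, the way a server hands each incoming request to a spare processor. The doubling step is just a stand-in for real request handling.

```python
import threading
import queue

tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    # Each worker pulls tasks until it sees a sentinel (None),
    # handling each one independently of the other workers.
    while True:
        item = tasks.get()
        if item is None:
            break
        with results_lock:
            results.append(item * 2)  # stand-in for real request handling

# Two workers, standing in for the two processors in a small server.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for request in range(10):      # ten "requests" arrive
    tasks.put(request)
for _ in threads:              # one sentinel per worker to shut down
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results))
```

On a genuine multiprocessor (or dual-core) machine the operating system can schedule these worker threads onto different processors at once; on a single core they simply take turns.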
So now the CPU manufacturers have something to do with all those transistors - put more than one processor core into each CPU. This also alleviates some of the pressures relating to power and heat, as you no longer have one mammoth core sucking up all the juice. Power and heat are still significant challenges, however, and represent probably the biggest hurdle to overcome before dual-core processors hit the streets.
Both Intel and AMD have demonstrated their new dual-core designs, and they share many similarities, although there are a few key differences. Both feature essentially two complete processors on a single die, including discrete L1 and L2 caches. Both come in conventional packaging so they will not require a new processor socket. Both vendors are also transitioning their entire processor lines to dual-core, including everything from server to desktop to mobile.
The main differences come from the two different 'HTs' the vendors use. Intel claims it has been moving towards a dual-core future with its HyperThreading technology, which makes a single processor act like two processors. As Graeme Tucker, Intel Australia technical manager, states: 'HyperThreading technology was an important first step in the move to multi-core processing because it provided incentive for software developers to design applications capable of processing information in parallel for greater efficiency.'
Furthermore, when Intel does release its dual-core processors, they will also feature HyperThreading. This means a single dual-core CPU will appear as four logical processors to your operating system, as each core will present itself as two processors.
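What the operating system sees, in other words, is logical processors rather than physical cores: a dual-core chip with HyperThreading enabled reports four of them. A quick, hedged way to check what your own system reports, using nothing more than Python's standard library:

```python
import os

# os.cpu_count() returns the number of logical processors the OS
# exposes - e.g. 4 on a dual-core CPU with HyperThreading enabled -
# or None if the count cannot be determined.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

Note that this count says nothing about how many physical cores or sockets are present; distinguishing those requires OS-specific queries.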
This raises a challenge for the software development community, however. The vast majority of software for the desktop has been single threaded up until now, meaning only a single stream of instructions and data is sent to the processor. Multithreaded applications, where the workload is split into multiple independent streams, have typically been the realm of workstations and servers. Now software developers have to take multithreading into account when they code their desktop software if they want to see a performance benefit from the latest dual-core processors. This is a big enough issue with two cores; with HyperThreading presenting four logical processors, the problem only grows. Intel is working hard with developers to have a significant body of multithreaded software available at launch, although it will likely be a few years at least before the majority of desktop software ships with multithreaded capabilities built in.
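The basic structure of a multithreaded desktop workload is straightforward to sketch: split the data, give each thread its own independent slice, and combine the partial results at the end. Below is a minimal Python illustration of that pattern (whether the threads actually run in parallel depends on the runtime and the number of cores - in CPython, for instance, CPU-bound threads are serialised by the global interpreter lock, so this shows the structure rather than a guaranteed speedup):

```python
import threading

def partial_sum(nums, out, idx):
    # Each thread works on its own slice and writes to its own output
    # slot, so the threads share nothing and need no locking.
    out[idx] = sum(nums)

data = list(range(1_000_000))
mid = len(data) // 2
out = [0, 0]

# One thread per core on a dual-core processor.
t1 = threading.Thread(target=partial_sum, args=(data[:mid], out, 0))
t2 = threading.Thread(target=partial_sum, args=(data[mid:], out, 1))
t1.start()
t2.start()
t1.join()
t2.join()

total = out[0] + out[1]
print(total)
```

The hard part for developers is not this mechanical split but finding workloads that divide cleanly - which is exactly why retrofitting multithreading onto existing desktop software takes time.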
AMD's implementation differs slightly from Intel's. The Opteron was built from the outset to handle two cores, and has had the required interconnects built into the silicon from day one. The only things missing were the second core itself and its dedicated L2 cache. In fact, AMD claims it should be able to produce dual-core processors without a significant increase in power consumption and heat compared to single-core designs. According to AMD Australia's Michael Apthorpe, this is possible because the dual-core Opterons will be manufactured on a 90nm process, smaller than the current 130nm process.
Neck and neck
It'll probably be a close race as to which manufacturer can deliver dual-core CPUs to market first. AMD has the advantage of a relatively simple architecture to implement, although it is behind Intel when it comes to manufacturing technology and capacity. For example, AMD has only just begun moving to a 90nm process, while Intel is already demonstrating its 65nm process. In any case, who comes first is fairly academic; the more interesting question for us is whether the first generation of dual-core processors is going to deliver any real benefit to begin with. For those of us without a digital home setup, the benefit from dual-core hinges on whether there is multithreaded software available at launch. So far it looks like there'll be a few major packages, but not a heck of a lot to choose from. Thankfully Windows XP Home and Professional will support dual-core right away, although Home will likely see only two processors even with HyperThreading enabled.
Once dual-core is firmly entrenched in the market, though, we'll start to see a whole host of other applications come to the surface that just weren't possible under the old single-processor paradigm. Both Intel and AMD are touting hardware security systems that will use encryption and protected hardware to lock your system down. Both are also talking about virtualisation, which is sure to be the next big buzzword in a year or so. Virtualisation is not a new concept: it is essentially the splitting of hardware resources between multiple discrete software systems, such as running multiple operating systems on one machine. It has other applications beyond this, though, such as isolating one system from another, or allowing two operating systems to serve two users running from one box.
Regardless of which dual-core system prevails, we can be certain that how we use PCs in the next few years is going to change, and Intel and AMD are both fighting for the lead.