We explain the principles of benchmarking and show you how to test your own PC's performance.
How fast is your PC? It’s a simple question, but there’s no easy answer. When we talk about computer speed, we’re often referring to the clock frequency of the processor – that doesn’t necessarily tell us much about its performance.
CPU architectures are continually being improved and refined, so a recent Intel Core i7-860 processor running at 2.8GHz will run rings around an older Intel Core 2 Duo E7400, even though the two chips share the same clock speed. And that comparison doesn’t take into account the number of cores: for multithreaded tasks, the quad-core i7 processor has twice the processing capability of the dual-core Core 2 Duo.
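You can see the effect of extra cores for yourself by timing the same CPU-heavy job with one worker and then with several. The sketch below (our illustration, not anything from a real benchmark suite) hashes a large buffer, because CPython releases its interpreter lock while hashing big blocks of data, so the threads genuinely run on separate cores; the exact speedup you see will depend on your machine.

```python
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

# 16MB buffer; hashing it is CPU-bound work
DATA = b"\x00" * (16 * 1024 * 1024)

def hash_once(_):
    # hashlib releases the GIL for large buffers, so multiple
    # threads can use multiple cores for this workload
    return hashlib.sha256(DATA).hexdigest()

def timed_run(workers, tasks=8):
    """Time `tasks` hash jobs spread across `workers` threads."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(hash_once, range(tasks)))
    return time.perf_counter() - start

single = timed_run(1)
multi = timed_run(4)
print(f"1 thread: {single:.2f}s   4 threads: {multi:.2f}s")
```

On a quad-core machine the four-thread run should finish in roughly a quarter of the time; on a single-core machine the two figures will be much the same, which is exactly the point about clock speed alone not telling the whole story.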
Focusing on the CPU additionally ignores the numerous other components in your system that contribute to overall performance. The quantity of memory in your PC can have a huge impact, as can the speed of your hard disk and, for 3D purposes, the capabilities of your graphics card.
In this feature, we’ll discuss the nuances involved and show you how to benchmark your own system.
What is a benchmark?
Historically, computer benchmarks have been used to define a standard unit of performance, so as to compare systems with different hardware. One of the first benchmarks to be developed was Whetstone in 1972. It carried out a series of commonplace mathematical tasks and, based on how long they took, gave a performance score in instructions per second. This benchmark could be used to compare the performance of many different types of computers from different manufacturers.
Information such as this has always been useful to businesses considering upgrading their systems, as it gives a clear measurement of the performance benefit of new hardware. But benchmarks aren’t only for businesses. At home, benchmarking can help you to compare the performance of your current PC with one you’re considering buying – and if you upgrade a component, benchmarking can expose the performance benefit (or lack thereof).
Approaches such as Whetstone have their limits. Back when Whetstone was created, the mathematical operations it carried out might have been a reasonable simulation of the workload a typical business computer would face. But they were only that – a simulation. Whetstone allowed you to estimate how quickly a computer would perform in real use, but what it really told you was how quickly the computer ran Whetstone. In other words, Whetstone was what’s called a synthetic benchmark – an artificial test designed to tax a computer in a very precise and particular way.
Because synthetic benchmarks tax the system in such a controlled, repeatable way, they can determine the capability of a particular piece of hardware to do a specific job. Whetstone targets the CPU, but there are plenty of other benchmarks that measure hard disk performance, memory performance and so on.
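The essence of a synthetic CPU benchmark is simple: run a fixed, artificial batch of arithmetic and report how many operations complete per second. Here’s a minimal sketch in that spirit – the workload is an arbitrary mix of floating-point operations we’ve made up for illustration, not the actual Whetstone suite.

```python
import time

def synthetic_fp_benchmark(iterations=1_000_000):
    """Whetstone-style idea: time a fixed batch of floating-point
    work and report a rate. The arithmetic mix here is illustrative,
    not the real Whetstone workload."""
    x = 1.0
    start = time.perf_counter()
    for _ in range(iterations):
        x = (x * 1.000001 + 0.5) / 1.25  # arbitrary multiply/add/divide mix
    elapsed = time.perf_counter() - start
    # Score: loop iterations completed per second
    return iterations / elapsed

score = synthetic_fp_benchmark()
print(f"{score:,.0f} iterations per second")
```

Because the workload is fixed, the score is directly comparable between two machines – but, as discussed below, it only tells you how fast each machine runs this particular loop.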
The strength of synthetic benchmarking is that it gives you a measure of specific aspects of your hardware’s performance without other factors interfering. Arguably, though, this is also a weakness.
In 1972, business computers probably did spend much of their time performing a single type of operation. Today, however, a multitasking desktop PC is likely to be doing all sorts of things at once. And when you’re using your computer you don’t have the luxury of disabling or ignoring other processes or conditions. So the results you get from a synthetic benchmark may be a world apart from the actual experience of using your PC.
If you want to assess the real-world performance of a computer system, the only way to get a meaningful measurement is by timing real-world tasks. This is the approach we use here at PC & TA: our Real World Benchmarks use popular software such as Microsoft Office and Adobe Photoshop to carry out everyday tasks, and time how long these operations take.
When we want to test the performance of a hard disk or flash drive, we do it by using Windows to copy files to and from the drive, as you would in actual use. As we noted above, this approach means our results may be affected by plugins, or by the amount of available memory – or by the version of Windows we’re using, or by a dozen other factors. But those issues would all affect you if you were using these applications for real, so our results reflect the actual performance of the system.
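The file-copy approach is easy to reproduce yourself: write a test file, time how long it takes to copy, and work out a throughput figure. A rough sketch (our own, using Python’s standard library rather than any particular benchmarking tool) might look like this – note that operating-system caching can flatter the numbers, which is itself an example of real-world conditions leaking into the result.

```python
import os
import shutil
import tempfile
import time

def time_file_copy(size_mb=64):
    """Write a test file of size_mb megabytes, then time copying it.
    Returns throughput in MB/s. OS caching will affect the result,
    just as it would in real use."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "source.bin")
        dst = os.path.join(tmp, "copy.bin")
        chunk = os.urandom(1024 * 1024)  # 1MB of random data
        with open(src, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
        start = time.perf_counter()
        shutil.copyfile(src, dst)
        elapsed = time.perf_counter() - start
        return size_mb / elapsed

print(f"{time_file_copy():.1f} MB/s")
```

To measure a specific drive rather than wherever your temporary folder lives, you’d point the source and destination paths at that drive instead.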
Real-world benchmarking may seem an improvement over synthetic testing, but it too has its limitations. Although we refer to our “Real World Benchmarks”, there’s still a synthetic aspect to the process. A preset workload may be designed to resemble real work as closely as possible, but no two people use a computer in exactly the same way.
If you do a huge amount of 3D work, and very little word processing, a benchmark that gives equal weight to both tasks won’t be particularly helpful. Even when a benchmark script does exactly the same job as a human, it typically does so without taking breaks or pausing to think about what it’s doing, which can increase the load on your memory and disks, and bog the system down unrealistically. And since the system is tested holistically, real-world benchmarking may conceal bottlenecks.
For example, your PC might have a super-fast processor but very little memory. In a situation such as this you’d see poor performance, and might wrongly conclude your CPU was underpowered. A synthetic approach, which isolates each component, would show that this wasn’t the case. Ultimately, if you’re determined to get a representative, all-round picture of your computer’s real-world performance and capabilities, you’ll need to use a combination of real-world tests and synthetic benchmarks.