You can’t argue with numbers. Mostly...
Benchmarking the performance of systems has been part of product reviews for a couple of decades. Whether you’re looking at a PC, smartphone, NAS or tablet, one of the easiest ways to compare products is to run standardised tests across different devices in a class.
Apple’s recent battery life and performance woes were revealed after benchmarks were carried out on devices before and after a battery replacement. This led to the revelation that Apple was throttling device performance as battery health degraded.
We saw a different type of benchmark, emissions testing, “gamed” by Volkswagen in 2015, when the German car-maker rigged a range of vehicles to produce false results that made the cars appear to emit fewer nitrogen oxides into the atmosphere.
Look at reviews in PC & Tech Authority and you’ll see we use benchmarks to help you compare different systems. Without them, it becomes very difficult to evaluate the differences between computers with different hard drives, processors or memory. Further, they can help you make decisions on components. If the system you have your eye on will be used for disk-intensive tasks, then benchmarks can help identify the best drive to install.
However, it’s important to understand benchmarks aren’t perfect. They represent a set of results based on some very specific activities that may not be representative of how you use a system. And it’s important to look beyond the final result when comparing systems.
When comparing systems, you may find the final benchmark score is heavily influenced by the results in one or two specific areas. A system with a faster hard drive can appear to out-perform a more highly specified machine simply because its final score is boosted by disk performance.
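To see how one fast component can swing an overall result, here’s a minimal sketch of a weighted composite score. The weights and sub-scores are entirely hypothetical, not taken from any real benchmark suite; the point is only that the single headline number can hide where the points came from.

```python
# Hypothetical composite benchmark score: a weighted average of
# per-category sub-scores. Weights and numbers are illustrative only.

def composite_score(subscores, weights):
    """Weighted average of the per-category sub-scores."""
    total_weight = sum(weights.values())
    return sum(subscores[k] * weights[k] for k in subscores) / total_weight

weights = {"cpu": 0.4, "disk": 0.4, "memory": 0.2}

# System A: modest CPU, but a very fast SSD
system_a = {"cpu": 100, "disk": 220, "memory": 110}
# System B: stronger CPU and memory, slower disk
system_b = {"cpu": 140, "disk": 120, "memory": 120}

print(composite_score(system_a, weights))  # 150.0 -- A "wins" overall
print(composite_score(system_b, weights))  # 128.0 -- despite B's faster CPU
```

Here System A’s headline score beats System B’s purely on the back of its disk result, even though System B is quicker in the CPU and memory categories, which is exactly why the per-category breakdown matters.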
And if your computer is mainly going to be used for office work, or working with Adobe’s tools, look at the benchmark results for those types of activities. High-end gaming performance, while great to have “just in case”, might be overkill and add to your price-tag unnecessarily.
One type of benchmark I’m always wary of is battery life. The amount of working time you can get out of a full charge is highly dependent on what you are doing. If you’re comparing systems, ensure the battery tests are performed identically. If one test has Wi-Fi enabled and connected to a network but the other doesn’t, you can end up with significantly different results.
My preference is for battery tests to be a log of real use, with the tester recording everything they do from a full charge through to shutdown. That way, you can make your own judgement as to how the battery will perform in the real world.
I’d also suggest, where possible, comparing the results of different benchmarks. Some computers will rank higher on one type of benchmark than another because different suites test things in different ways.
Benchmarks represent a system’s performance at a specific moment in time. Computer makers will often make seemingly minor hardware changes that can influence results. For example, different brands of memory can perform differently even though they might seem to be the same. Even the room temperature and other environmental conditions during the test can be influential depending on the effectiveness of cooling systems.
For comparing systems, benchmarks are a very useful tool. But be aware of their limitations and take the time to really understand what they’re testing and look beyond the bottom line result.