Pundits have been predicting the demise of the mechanical hard disk for years, but so far there’s been no revolution. Slowly, though, the first truly credible rival to the conventional hard disk – the flash-based solid-state disk – is creeping up on the old guard.
What’s the big idea?
Conventional hard disk drives store data by encoding it onto spinning metal disks, but solid-state disks store it in memory chips. That’s an approach with plenty going for it. Spinning a weighty set of magnetic platters at high speed takes a fair bit of power, even on the advanced, low-friction fluid bearings modern hard disks use. An average 3.5in desktop hard disk might typically be rated at an idle power consumption of 8W.
An SSD, by comparison, merely needs to push some electrons around – and electrons weigh next to nothing, making SSDs enormously power-frugal. Intel’s 510 series SSDs are rated for an idle power consumption of 100mW – that’s 0.1W, or one-eightieth the power draw of many mechanical drives – and even when active they consume only 0.38W.
An even more interesting benefit is speed. When a mechanical disk wants to read a file, the read/write heads must be physically moved to the correct position. Then they need to wait for the platter to rotate to the correct point. Once that happens, the rate at which they can stream data off the platter is limited by the drive’s rotational speed.
With an SSD, there are no limits imposed by the physical momentum of mechanical components. Electronic signals move at a significant fraction of the speed of light, so SSDs are capable of finding data almost instantly and transferring it at vastly greater speeds than conventional disks. Not only do electrons move quickly, they do it silently: SSDs make no sound.
A final advantage is durability. Hard disks are physically fragile: they can be killed with a simple, relatively light mechanical shock. SSDs, with no delicate moving parts, are largely immune to being knocked or mishandled.
It isn’t all good news
That’s not to say an SSD can’t ever fail or wear out. In fact, one of the fundamentals of SSD engineering is a system known as wear levelling. It’s the first of a few areas blighting the theoretical utopia of the SSD, and one reason why they’re still some way off winning the mass-storage war.
Wear levelling is required because the flash memory cells used by SSDs don’t last forever. You can read their contents as many times as you like without worrying, but each cell can have its contents changed on only a finite number of occasions before it wears out – typically, around 100,000 times.
For a USB flash drive or SD card, this limited lifetime isn’t a problem, because these devices simply aren’t used that much. But if flash technology is used to replace a PC hard disk, the situation changes. Even a PC sitting idle tends to access the hard disk a great deal. The Windows swap file can be written to thousands of times in a session. Things quickly approach the point at which flash life can have an impact on reliability.
Wear levelling tries to mitigate this by distributing data evenly throughout the SSD’s cells, and avoiding concentrated rewriting of any particular area of the disk. If a certain physical area of an SSD’s flash memory is being accessed and rewritten a great deal, the system will reallocate the data to a different area of the drive. But doing this while maintaining the structural integrity of the filing system, and continuing to service other read and write operations, has performance implications.
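For the programmatically minded, the idea can be sketched in a few lines. This is a toy model, not any vendor’s actual firmware – real controllers use far more sophisticated mapping tables and also move static data around – but it shows the core trick: remap each write to whichever physical block has been erased the fewest times.

```python
# Toy sketch of dynamic wear levelling (illustrative only; real SSD
# controllers are far more sophisticated). Each write to a logical
# page is remapped to the least-worn physical block.

class WearLevelledDisk:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.mapping = {}                      # logical page -> physical block

    def write(self, logical_page):
        # Pick the block with the fewest erasures so far,
        # spreading wear evenly across the whole drive
        target = min(range(len(self.erase_counts)),
                     key=lambda b: self.erase_counts[b])
        self.mapping[logical_page] = target
        self.erase_counts[target] += 1
        return target

disk = WearLevelledDisk(num_blocks=4)
for _ in range(100):
    disk.write(logical_page=0)   # hammer one "hot" logical page

# The wear is spread across all four blocks instead of burning out one
print(disk.erase_counts)   # -> [25, 25, 25, 25]
```

Without the remapping step, all 100 rewrites would land on a single block – which, at a 100,000-cycle limit, is exactly how a hot spot wears out early.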
All solid-state memory isn’t equal
Even when running at maximum efficiency, SSDs aren’t as fast as you might think. There’s a pervasive tendency to imagine that the flash memory chips in an SSD are as fast as the chips in your PC’s main system RAM. Even Windows Vista and Windows 7 fall into this trap with their ReadyBoost feature, which purports to use a USB flash drive as a faster-than-hard-disk cache to speed up disk operations. But it doesn’t work in practice, because the flash memory in commercial USB flash drives is usually slower than a regular hard disk.
The truth is that the NAND flash memory you’ll find in an SSD works differently to the DRAM that makes up the system memory in your PC. DRAM cells are very simple, and very fast. They consist of a single transistor and a single, small capacitor. The charge on the capacitor (or lack thereof) stores one binary digit: no charge indicates a binary 0; fully charged equals a binary 1. Since DRAM capacitors are so small, they hold only a small amount of energy, meaning they can be charged and drained quickly. This is the fundamental design aspect that gives DRAM its extreme speed.
A NAND flash memory cell isn’t such a simple device. For reading sequential data, it can be nearly as fast as DRAM, but when it comes to writing, a much more complicated (and hence slower) electrical process is required. What’s more, there’s more than one type of write operation that the controller must choose from, depending on the usage history of the device. This brings us to the major limiting factor of SSD performance.
SSDs’ Achilles heel
It would be great if the controller could directly address the individual bytes stored in the NAND flash cells. Unfortunately, engineering constraints mean the data must be divided into “pages” and “blocks”. A single SSD page is the smallest chunk of data that can be accessed and written to; if you have a drive with a page size of 4KB (a typical figure, and one that conveniently matches the default cluster size of modern NTFS-formatted disks), even a 1KB file will take up 4KB on the disk. A block is a large group of pages – there are 128 pages per block in a standard SSD architecture.
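The arithmetic behind those figures is worth spelling out. Assuming the typical geometry quoted above – 4KB pages, 128 pages per block – a quick calculation shows where the “half a megabyte” comes from, and why a small file still occupies a full page:

```python
import math

# Typical SSD geometry, as assumed in the text above
PAGE_SIZE = 4 * 1024                        # 4KB: smallest writable unit
PAGES_PER_BLOCK = 128
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK    # smallest erasable unit

print(BLOCK_SIZE // 1024)                   # -> 512 (KB per block)

# Even a 1KB file consumes a whole 4KB page:
file_size = 1 * 1024
pages_used = math.ceil(file_size / PAGE_SIZE)
print(pages_used * PAGE_SIZE // 1024)       # -> 4 (KB consumed on disk)
```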
While it’s straightforward to write a new 4KB page to disk, it isn’t possible to delete or overwrite data with the same degree of precision. The physics of flash mean that in order to change the contents of a single page, you must erase its entire block, and then write back all the data you wish to keep. This means all block-level operations need to manipulate a fair chunk of data – half a megabyte, in fact.
The upshot is that overwriting only one byte of data on an SSD means reading and caching all 524,288 bytes in the relevant block, resetting each cell in the block, modifying the block’s data to reflect the new state, and writing the half-megabyte back into the block. Not only does the length of this process slow performance, it also greatly increases the overall wear on the drive. In our example, a write operation of a single byte causes 524,287 unnecessary bytes to be erased and rewritten. This is known as write amplification.
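That read-modify-write cycle can be modelled directly. The sketch below is a toy simulation – real controllers cache, coalesce and remap to soften the blow – but it makes the amplification factor concrete under the 4KB-page, 128-page-block geometry used in our example:

```python
# Toy model of the read-modify-write cycle behind write amplification.
# Assumes the article's geometry: 4KB pages, 128 pages per block.
PAGE_SIZE = 4 * 1024
PAGES_PER_BLOCK = 128
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK    # 524,288 bytes

def overwrite_one_byte(block, offset, value):
    """Change a single byte; the whole block must be erased and rewritten."""
    cached = bytearray(block)        # 1. read the entire block into cache
    cached[offset] = value           # 2. modify the one target byte
    erased = bytearray(BLOCK_SIZE)   # 3. erase (reset) every cell in the block
    erased[:] = cached               # 4. write all the data back
    bytes_physically_written = BLOCK_SIZE
    return bytes(erased), bytes_physically_written

block = bytes(BLOCK_SIZE)            # a freshly erased block
block, written = overwrite_one_byte(block, offset=0, value=0xFF)

amplification = written / 1          # the intended write was one byte
print(written, amplification)        # -> 524288 524288.0
```

A one-byte logical write triggers a half-megabyte physical write – an amplification factor of over half a million in this worst case.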
Conventional drives can’t match the consistent speeds of solid state
Despite these technical hurdles, new engineering techniques and systems such as TRIM have brought SSD technology to the point where an SSD can now be a viable replacement for a mechanical drive. Indeed, in some cases it can be faster. Intel’s 510 Series SSD, in its 120GB guise, gave sustained sequential read speeds of a scarcely believable 390MB/sec in our Labs tests. Even Kingston’s “value” 128GB SSDNow 100V achieved 239MB/sec – way beyond the 100-140MB/sec that most modern mechanical drives can manage.
But, thanks to the page-level and block-level limitations, SSDs still fall behind when it comes to non-sequential operations on small files. Even the smallest read or write operation involves transferring at least one whole 4KB page. This means small, random-access operations often take far longer on an SSD than they do on a conventional spinning-platter drive. Our A-Listed mechanical drive, for instance, the Samsung Spinpoint F3, can hit over 80MB/sec with random reads, whereas the 120GB Intel 510 drops to a paltry 20MB/sec. Unfortunately, such operations are the bread and butter of Windows: both the operating system itself and the applications running on it continually read and write small chunks of housekeeping, configuration and user data.
If you’re considering using an SSD as your system drive in Windows, you should also bear in mind that hard disk speed is a fairly minor factor when it comes to real-world productivity. To be sure, the latest and greatest solid-state drive will enable programs and files to open more quickly, and your computer will feel smoother and more responsive. But every time we’ve pitted an SSD against a mechanical drive in our real-world benchmarks, we’ve seen no difference in results. Unless your work is heavily focused on loading and saving large files, an SSD won’t make you more productive.
So do I really need one?
For regular desktop use, SSDs are still a long way from being a no-brainer. Not only is the performance benefit equivocal, they’re much more expensive than conventional drives. The 120GB Intel 510 works out at $3.07 per gigabyte, making it a lot more expensive than the conventional Samsung Spinpoint F3, which costs $59 for 1TB – around six cents per gigabyte.
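The price gap is stark when you run the numbers from the article’s figures (list prices at the time of our tests; the ratio is the point, not the exact dollars):

```python
# Cost-per-gigabyte comparison using the article's quoted prices
ssd_price_per_gb = 3.07      # Intel 510, 120GB
ssd_capacity_gb = 120
hdd_price = 59.0             # Samsung Spinpoint F3, 1TB
hdd_capacity_gb = 1000

# Total cost of the SSD, and the hard disk's cost per gigabyte
print(round(ssd_price_per_gb * ssd_capacity_gb))            # -> 368 ($ total)
print(round(hdd_price / hdd_capacity_gb, 3))                # -> 0.059 ($/GB)

# Per gigabyte, the SSD costs roughly fifty times more
print(round(ssd_price_per_gb / (hdd_price / hdd_capacity_gb)))  # -> 52
```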
Yet, given SSDs’ low power usage, reduced weight, ruggedness and silent operation, they’re becoming popular in laptops – particularly in luxury and business-class ultraportables. In addition, when it comes to specialist desktop applications involving big files, SSDs can repay their cost in short order. HD video work with multiple streams can be transformed by the speed advantages of an SSD, and anyone editing multilayer Photoshop images all day will appreciate the performance boost every time they browse a folder full of huge images or hit Ctrl-S to save their work.
For general Windows performance, though, the argument is far from won. In a few years’ time, SSDs will probably be a realistic alternative to a mechanical drive. For now, however, unless you need the particular qualities of a solid-state disk, we’d recommend that you stick with a conventional hard disk.