I’ve told several groups I’ve spoken to recently that “disk storage hasn’t gotten faster in 15 years.” Often that statement is met with some disbelief, so I thought I’d take a few paragraphs to explain my reasoning.
First, let’s cover some of the timeline of the evolution of spinning disk storage:
- 7200 RPM HDD introduced by Seagate in 1992
- 10,000 RPM HDD introduced by Seagate in 1996
- 15,000 RPM HDD introduced by Seagate in 2000
- Serial ATA introduced in 2002
- Serial Attached SCSI introduced in 2004
- 15,000 RPM SAS HDD ships in 2005
So, my argument starts with the idea that this is 2015, and the “fastest” hard disk I can buy today is still only 15,000 RPM, and those have been shipping since 2000. Yes, capacities have gotten larger, data densities greater, but they have not increased in rotational speed, and hence have not significantly increased in terms of IOPS.
To be fair, the performance of a drive is a function of several variables. Rotational latency (the time for a platter to complete one revolution) is just one measure. Head seek time is another, as is the linear bit rate: the number of bits that pass under the head(s) per second.
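The relationship between RPM and rotational latency is simple arithmetic. Here is a rough sketch (back-of-the-envelope figures, not vendor specs):

```python
def rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: on average the head waits half a revolution."""
    full_revolution_ms = 60_000 / rpm  # 60,000 ms per minute / revolutions per minute
    return full_revolution_ms / 2

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: avg rotational latency {rotational_latency_ms(rpm):.2f} ms")
```

At 15,000 RPM the platter completes a revolution in 4 ms, so the average wait is 2 ms. That floor hasn’t moved since 2000.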
Greater data densities increase the amount of data on a given cylinder, and thus the amount of data that can be read or written per revolution. So you could argue that throughput has increased as a function of greater density, but only if you don’t have to re-position the head, and only if you are reading most of a full cylinder. I also submit that the greater densities have led to drives having fewer platters and thus fewer heads. This leads to my conclusion that the reduction in drive size mostly offsets any significant throughput gained from the greater densities.
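The density argument can be sketched numerically. Assuming a best-case sequential read of one full track per revolution (the sector counts below are made-up round numbers, not real drive geometry):

```python
SECTOR_BYTES = 512

def sequential_mb_per_s(rpm: int, sectors_per_track: int) -> float:
    """Best-case sequential rate: one full track read per revolution, no seeks."""
    revolutions_per_s = rpm / 60
    return revolutions_per_s * sectors_per_track * SECTOR_BYTES / 1e6

# Doubling areal density roughly doubles sectors per track,
# and with it the best-case sequential rate:
print(sequential_mb_per_s(15_000, 1_000))  # 128.0 MB/s
print(sequential_mb_per_s(15_000, 2_000))  # 256.0 MB/s
```

Random IO sees none of this benefit, because every re-positioning of the head throws the best case away.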
Today we’re seeing a tendency towards 2.5″ and sometimes even 1.8″ drives. These form factors have the potential to increase IO by decreasing head seek times: the smaller drive has a shorter head stroke distance and thus can potentially move the head between tracks in less time. The theory is sound, but unfortunately the savings in seek time are capped by rotational latency; the head gets there faster, yet is still waiting for the proper sector to arrive as the disk spins.
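A hypothetical comparison makes the point. The seek times below are illustrative values, not measured figures: shaving a millisecond of seek still leaves the 2 ms average rotational wait of a 15K drive.

```python
def avg_access_ms(seek_ms: float, rpm: int) -> float:
    """Average access time: seek plus an average half-revolution of waiting."""
    return seek_ms + (60_000 / rpm) / 2

# A 3.5" 15K drive vs a shorter-stroke 2.5" 15K drive (hypothetical seek times):
print(avg_access_ms(3.5, 15_000))  # 5.5 ms
print(avg_access_ms(2.5, 15_000))  # 4.5 ms: rotation still sets the floor
```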
Interestingly, some manufacturers used to take advantage of the variable number of sectors per track, recognizing that the outer tracks hold more sectors. They would use the outer third of the platter for “fast track” operations, minimizing head seek time and maximizing sequential throughput. Again a sound theory, but the move from 3.5″ to 2.5″ drives eliminates this faster third of the platter, negating any gains we may have made.
Another interesting trend in disk storage is a movement to phase out 15,000 RPM drives. These disks are much more power hungry, and thus produce more heat, than their slower (10,000 RPM and 7,200 RPM) counterparts. Heat eventually equates to failure. Likewise, the tolerances in the faster drives are much tighter, so the faster drives have shorter service lives and cost more. For those reasons (and the availability of flash memory) many storage vendors are looking to discontinue shipping 15,000 RPM disks. Yet a 10K drive has only about 66% of the IOPS potential of a 15K drive.
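That 66% figure falls directly out of the rotational latencies. As a simplification that ignores seek time:

```python
def theoretical_iops(rpm: int) -> float:
    """Upper bound on small random IOPS from rotational latency alone (no seek)."""
    avg_rotational_ms = (60_000 / rpm) / 2
    return 1_000 / avg_rotational_ms

ratio = theoretical_iops(10_000) / theoretical_iops(15_000)
print(f"{ratio:.0%}")  # 67%
```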
So I submit that any gains we’ve had in the last 15 years in spinning disk performance have largely been offset by the changes in form factor. Spinning disk hasn’t gotten faster in 15 years. The moves towards 2.5″ and 10K drives could arguably suggest that disks are actually getting slower.
Meanwhile, IO performance demands keep growing. VDI, big data analytics, consolidation, and other trends demand more data and faster response times. How do we address this? Many would say the answer is flash memory, often in the form of the Solid State Disk (SSD).
SSD storage is not exactly new:
- 1991 SanDisk sold a 20MB SSD for $1000
- 1995 M-Systems introduced Flash based SSD
- 1999 BiTMICRO announced an 18GB SSD
- 2007 Fusion IO PCIe @ 320GB and 100,000 IOPS
- 2008 EMC offers SSD in Symmetrix DMX
- 2008 SUN Storage 7000 offers SSD storage
- 2009 OCZ demonstrates a 1TB flash SSD
- 2010 Seagate offers Hybrid SSD/7.2K HDD
- 2014 IBM announces X6 with Flash on DIMM
But flash memory isn’t without its flaws.
We know that a given flash device has a finite lifespan measured in write-cycles. This means that every time you write to a flash device you’re wearing it out. Much like turning on a light bulb, each time you change the state of a bit you’ve consumed a cycle. Do it enough and you’ll eventually consume them all.
Worse, the smaller the flash storage cells (and thus the greater the memory density), the shorter the lifespan. This means the highest capacity flash drives will sustain the fewest writes per cell. Of course they have more cells, so there is an argument that the drive may actually sustain a larger total number of writes before all the cells are burned out.
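The trade-off can be sketched with ballpark program/erase (P/E) cycle counts. The numbers below are illustrative orders of magnitude, not any specific product’s spec:

```python
def total_write_capacity_tb(capacity_gb: int, pe_cycles: int) -> float:
    """Total data the drive can absorb before wear-out, assuming ideal wear leveling."""
    return capacity_gb * pe_cycles / 1_000

# A smaller drive with hardier cells vs a denser drive with fewer cycles per cell:
print(total_write_capacity_tb(200, 10_000))   # 2000.0 TB
print(total_write_capacity_tb(1_000, 3_000))  # 3000.0 TB
```

The denser drive wears out each cell sooner, yet its extra cells can still give it more total write endurance.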
But… Flash gives us fantastic performance. And in terms of dollars per IOP flash has a much lower cost than spinning disk.
DRAM (volatile memory) hasn’t gone anywhere either; in fact it keeps increasing in density and dropping in cost per GB. DRAM has neither the wear limits of flash nor the latencies of disk. However, it cannot store data without power: if DRAM doesn’t have its charges refreshed periodically (every few milliseconds) it will lose whatever it’s storing.
Spinning disk capacities keep growing, and getting cheaper. In December of 2014 Engadget announced that Seagate was now shipping 8TB hard disks for $260.
So the ultimate answer (for today) is that we need to use Flash or DRAM for performance and spinning disk (which doesn’t wear out from being written to, or forget everything when the lights go out) for capacity and data integrity. Thus the best overall value comes from solutions which combine technologies to their best use. The best options don’t ask you to create pools of storage of each type, but allow you to create unified storage pools which automatically store data optimally based on how it’s being used.
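As a toy sketch of what such a unified pool does, assuming a naive access-count promotion policy (real arrays use far more sophisticated heat maps and demotion logic):

```python
class TieredPool:
    """Toy hybrid pool: hot blocks get promoted to flash, cold blocks stay on disk."""

    def __init__(self, promote_threshold: int = 3):
        self.tier = {}            # block id -> "flash" or "disk"
        self.access_count = {}    # block id -> number of reads seen
        self.promote_threshold = promote_threshold

    def read(self, block: str) -> str:
        self.access_count[block] = self.access_count.get(block, 0) + 1
        if self.access_count[block] >= self.promote_threshold:
            self.tier[block] = "flash"           # hot: serve from flash from now on
        else:
            self.tier.setdefault(block, "disk")  # cold: stays on the capacity tier
        return self.tier[block]

pool = TieredPool()
for _ in range(3):
    hot_tier = pool.read("hot-block")
print(hot_tier)                 # "flash" after repeated reads
print(pool.read("cold-block"))  # "disk"
```

The point of the sketch: the application addresses one pool, and placement follows usage rather than an administrator’s guess.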
This is the future of storage.