cloudive.com and I/O measurement
SSD rear guard
IOPS, or input/output operations per second, is a popular parameter, one that vendors love to boast about. Of course, hosting providers rarely manufacture storage devices themselves, so the boasting is seldom truly earned. Yet these magic numbers are posted and referred to everywhere. In the VPS benchmarks on this site (see links above), IOPS are measured by performing that many input/output operations, each handling 4 kB (4096 bytes) of data.
Cloudive.com contacted me to run a series of tests: how do SSDs behave when placed into RAID arrays of different types? Note that unless the RAID controller recognizes SSDs and can issue SSD-specific commands such as TRIM, RAID actually provides no SSD-specific advantages.
The testing showed me that the well-known SSD Guard feature is, in fact, of little real use when it comes to boosting disk I/O. For those not familiar with it, I quote the manufacturer's explanation:
SSD Guard™, unique to MegaRAID, increases the reliability of SSDs by automatically copying data from a drive with potential to fail to a designated spare or newly inserted drive. A predictive failure event notification, or S.M.A.R.T command, automatically initiates this rebuild to preserve the data on an SSD whose health or performance falls below par. This new feature will greatly benefit users employing a RAID 0 configuration due to the added data protection.
When enabled in MSM, SSD Guard will protect any and all logical volumes built using SSD devices (figure 6). In figure 7 and figure 8, we see a MegaRAID adapter with a RAID 0 volume built from two solid state disk drives. Should one of these drives fail, data loss would occur. However, since SSD Guard is enabled, the MegaRAID adapter is actively monitoring the status of both SSDs. Should a failure appear to be imminent, the MegaRAID adapter will automatically begin rebuilding data onto a third SSD hot spare. If a hot spare is not present or not assigned to the RAID 0, MSM will recommend that the user insert a hot spare drive into an available slot. Once the drive is inserted, copyback will begin.
Now let's talk about I/O measurements.
I/O: bigger, faster, …?
I will briefly introduce several tools I use to benchmark I/O capacities.
ioping is a well-known workhorse for quick and simple I/O tests. As far as I understand, this tool can't reliably guarantee it uses direct I/O, so whatever it measures reflects the «current usage» perspective: cached access to data, which depends heavily on what the system is doing at the moment. Nonetheless, it's a good tool, useful in many typical situations.
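For orientation, a couple of typical invocations (assuming ioping is installed; the flag choices below are illustrative, not a prescribed methodology):

```shell
# Quick latency check: issue ten requests against the current directory.
ioping -c 10 .

# The -D flag requests direct I/O, bypassing the page cache where the
# platform supports it.
ioping -c 10 -D .
```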
fio is a sophisticated I/O measurement tool that can use many engines to perform the actual data exchange. I suggest reading the documentation at the link above to appreciate the tool's capabilities. This is the tool I use (in concurrent direct-I/O libaio mode) to measure the IOPS figures I provide in the benchmarks listed on this site. An excellent low-level multipurpose tool under active development.
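A sketch of the kind of fio run described above; the exact parameters (file size, queue depth, runtime) are my illustrative assumptions, not the author's precise job definition:

```shell
# 4 kB random reads, direct I/O, libaio engine, with several outstanding
# requests to exercise concurrency; reports IOPS when finished.
fio --name=randread-4k --ioengine=libaio --direct=1 --rw=randread \
    --bs=4k --size=1G --iodepth=32 --runtime=60 --time_based \
    --filename=fio-testfile
```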
The iozone filesystem benchmarking tool is also under active development, supports a great many platforms and testing modes, and can even produce graphs, which is very convenient for presentations. I use this tool both in its automated mode and in 4 kB access mode on files 1 MB, 10 MB, 100 MB and 1000 MB long, to get a «software viewpoint» impression of database-style access. Recently I started using both direct and default (cached) testing methods, to provide both IOPS benchmarking data (direct access) and software-viewpoint figures.
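Hedged examples of both modes just mentioned (the flag combinations are my assumptions about a reasonable setup, not the author's exact commands):

```shell
# Automated mode: iozone sweeps a range of file and record sizes itself.
iozone -a

# 4 kB records on a 1000 MB file; -i selects tests
# (0 = write/rewrite, 1 = read/reread, 2 = random read/write).
iozone -i 0 -i 1 -i 2 -r 4k -s 1000m -f /tmp/iozone.tmp

# Adding -I requests O_DIRECT, producing the direct (uncached) figures.
iozone -I -i 0 -i 2 -r 4k -s 1000m -f /tmp/iozone.tmp
```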
There are other nice I/O measurement tools; the ones mentioned are multiplatform and can handle almost any measurement scenario you can imagine. I also recommend studying the Phoronix Test Suite and its huge collection of tests, especially the dbench and tiobench disk benchmarks. Note, however, that Phoronix tests can be extremely disk-intensive.
Now that you know my choice of I/O benchmarking tools, let's return to Cloudive.
Cloudive: real-life tests for SSD disks in RAID
If you know the 64U and Backupsy hosting services, Cloudive is another service from the same experts, and I suggest considering it when choosing your hosting. With Cloudive's permission, I post some results of testing how SSDs behave when combined into different types of RAID.
Hardware specifications of the computer where all the tests were performed: HP DL160 G6, 144 GB of RAM, 8 × Intel 520 240 GB SSDs on 3 Gbps links, LSI 9265-8i card + BBU.
Note that this card doesn't support TRIM. That will be clearly visible in the results.
I give only a small slice here: measurements of random reads/writes from/to a 1024 MB file, in 4 kB and 128 kB chunks.
| RAID level | Block size | Random read, IOPS (cached) | Random write, IOPS (cached) | Random read, IOPS (direct) | Random write, IOPS (direct) |
|---|---|---|---|---|---|
| RAID 0 | 4 KB | 1405778 | 782721 | 27720 | 25903 |
| RAID 0 | 128 KB | 76120 | 38095 | 8228 | 9069 |
| RAID 5 | 4 KB | 1455317 | 775933 | 30170 | 28008 |
| RAID 5 | 128 KB | 72816 | 39624 | 10483 | 12198 |
| RAID 10 | 4 KB | 1497953 | 785010 | 29646 | 27007 |
| RAID 10 | 128 KB | 80119 | 39624 | 8538 | 9648 |
Pay attention to the strange results in direct access mode for RAID 5/RAID 10. The iozone tests are also good for learning which chunk size is optimal for better performance; it looks like reading in 128/256 KB chunks is optimal in this case. You can download the whole set of results using the link below; the iozone site also has directions on turning the generated data into graphs.
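Fewer IOPS at a larger block size can still mean much higher throughput, which is why 128 KB reads look optimal here. A quick shell sanity check, using the direct-mode RAID 0 random-read numbers from the table:

```shell
# Throughput = IOPS × block size. Divide by 1048576 to get MiB/s.
echo $((27720 * 4096 / 1048576))    # 4 kB blocks → prints 108 (MiB/s)
echo $((8228 * 131072 / 1048576))   # 128 kB blocks → prints 1028 (MiB/s)
```

So the 128 kB run moves roughly ten times more data per second despite showing far fewer IOPS.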
I look forward to testing TRIM-capable RAID controllers with SSDs; without TRIM, solid-state drives cannot demonstrate their full capabilities.
Here's a link to download the full set of Cloudive's RAID experiments for study: cloudive.com-raid-tests.zip.