
cloudive.com and I/O measurement


SSD rear guard

IOPS, or input/output operations per second, is a popular parameter, good enough to feel exceptionally proud about. Of course, hosting providers rarely manufacture storage devices themselves, so the pride in question is rarely well-earned. Yet these magic numbers are always posted and referred to. In the VPS benchmarks on this site (see links above), IOPS is measured by performing that many input/output operations, each handling 4 kB (4096 bytes) of data.

Cloudive.com contacted me to run a series of tests: how would SSD disks behave when placed into RAID arrays of different types? Note that unless the RAID controller recognizes SSDs and can issue SSD-specific commands such as TRIM, an SSD doesn't actually provide any specific advantages when used in a RAID array.
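
For the curious: on a Linux box, a quick (illustrative) way to see whether a block device stack exposes discard (TRIM) support is lsblk; a hardware RAID controller that hides the SSDs behind a logical volume will typically report zeroes here.

    # non-zero DISC-GRAN / DISC-MAX values mean the device accepts discard (TRIM)
    lsblk --discard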

The testing performed has proven to me that the well-known SSD Guard is, in fact, of little real use when it comes to boosting disk I/O performance. For those not familiar with it, I cite the manufacturer's explanation:

SSD Guard™, unique to MegaRAID, increases the reliability of SSDs by automatically copying data from a drive with potential to fail to a designated spare or newly inserted drive. A predictive failure event notification, or S.M.A.R.T command, automatically initiates this rebuild to preserve the data on an SSD whose health or performance falls below par. This new feature will greatly benefit users employing a RAID 0 configuration due to the added data protection.

When enabled in MSM, SSD Guard will protect any and all logical volumes built using SSD devices (figure 6). In figure 7 and figure 8, we see a MegaRAID adapter with a RAID 0 volume built from two solid state disk drives. Should one of these drives fail, data loss would occur. However, since SSD Guard is enabled, the MegaRAID adapter is actively monitoring the status of both SSDs. Should a failure appear to be imminent, the MegaRAID adapter will automatically begin rebuilding data onto a third SSD hot spare. If a hot spare is not present or not assigned to the RAID 0, MSM will recommend that the user insert a hot spare drive into an available slot. Once the drive is inserted, copyback will begin.

Now let's talk about I/O measurements.

I/O: bigger, faster, ...?

I will briefly introduce several tools I use to benchmark I/O performance.

ioping is a well-known workhorse for performing quick and simple I/O tests. As far as I understand, this tool can't reliably guarantee it will indeed use direct I/O, so whatever it measures reflects the «current usage perspective»: cached access to data, which depends heavily on what the system is doing at the moment. Nonetheless, it's a good tool, useful in many typical situations.
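
For reference, a minimal ioping session could look like this (the target paths are just examples):

    # 10 requests of 4 kB each against the current directory (cached access by default)
    ioping -c 10 -s 4k .
    # the same with direct I/O requested via -D
    ioping -c 10 -s 4k -D .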

fio is a sophisticated I/O measurement tool that can use many engines to perform the actual data exchange. I suggest reading the information at the above link to get an impression of the tool's capabilities. This is the tool I use (in concurrent direct I/O libaio mode) to measure the IOPS figures I provide in the benchmarks listed on this site. An excellent low-level multipurpose tool under active development.
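
To give an idea, a run roughly matching the mode described above (concurrent direct random 4 kB reads through libaio) can be expressed on fio's command line as follows; the file name, size, queue depth and runtime here are illustrative, not the exact settings behind the published numbers:

    fio --name=randread-4k --filename=/tmp/fio.test --size=1g \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=60 --time_based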

iozone is a filesystem benchmarking tool, also under active development, that supports a great many platforms and testing modes and can even create graphs, very convenient for presentations. I use this tool both in its automated mode and in 4 kB access mode on files 1 MB, 10 MB, 100 MB and 1000 MB long, to get an impression of database access from the «software viewpoint». Recently I started to use both direct and default (cached) testing methods, to provide both IOPS benchmarking data (direct access) and software-viewpoint figures.
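
As an illustration, a direct-I/O random read/write pass over a 1 GB file in 4 kB records, similar to the slices shown below, could be invoked like this (-I requests O_DIRECT; -i 0 creates the test file, -i 2 runs the random read/write phase; the file name is an example):

    iozone -I -i 0 -i 2 -r 4k -s 1g -f /tmp/iozone.tmp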

There are other nice I/O measurement tools; the ones mentioned are multiplatform and can handle almost any measurement scenario you can fathom. I would also recommend studying the Phoronix test suite and its huge collection of tests, especially the dbench and tiobench disk benchmarks. Note, however, that Phoronix tests can be extremely disk-intensive.
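
If you have the Phoronix suite installed, an individual disk test is a one-liner (exact test names may differ between suite versions):

    phoronix-test-suite benchmark pts/dbench
    phoronix-test-suite benchmark pts/tiobench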

Now that you know my choice of I/O benchmarking tools, let's return to Cloudive.

Cloudive: real-life tests for SSD disks in RAID

If you know the 64U and Backupsy hosting services, Cloudive is another service provided by the same experts, and I suggest considering it when choosing your hosting. With Cloudive's permission, I post some results of testing SSD disks' behaviour when joined into different types of RAID.

Hardware specifications of the computer on which all the tests were performed: HP DL160 G6, 144 GB of RAM, 8 x Intel 520 240 GB SSD @ 3 Gbps links, LSI 9265-8i card + BBU.

Note that the mentioned card doesn't support TRIM. This will be clearly visible in the results.

I only give a small slice here: measurements for random reading/writing from/to a 1024 MB file, in 4 kB and 128 kB chunks.

RAID level  Block size  Random read  Random write  Random read  Random write
                        (cached)     (cached)      (direct)     (direct)
0           4 KB        1405778      782721        27720        25903
0           128 KB      76120        38095         8228         9069
5           4 KB        1455317      775933        30170        28008
5           128 KB      72816        39624         10483        12198
10          4 KB        1497953      785010        29646        27007
10          128 KB      80119        39624         8538         9648

All values are IOPS.

Pay attention to the strange results for direct access mode in the RAID 5/RAID 10 cases. iozone tests are also good for learning which chunk size is optimal for better performance; reading in 128/256 KB chunks looks optimal in the given case. Download the whole set of results using the link below; also, on iozone's site you can find directions on how to make graphs out of the generated data.
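
As a hint for the presentation-minded: iozone's -b option writes results into an Excel-compatible spreadsheet, a convenient starting point for graphs (if memory serves, the iozone sources also ship gnuplot helper scripts):

    iozone -a -b results.xls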

I look forward to testing TRIM-enabled RAID controllers with SSDs; otherwise, solid state drives are unable to demonstrate their full capabilities.

Here's a link to download the full set of Cloudive's RAID experiment results, to study at your leisure: cloudive.com-raid-tests.zip.
