
Storage Performance

This effort is part of understanding the OpenStackPlusCeph fabric and our ability to build solutions on this infrastructure.

Update 2014-01-27: The latest numbers for storage performance and a general framework for maintaining this data are being recorded in ticket:245.

Resources

The research computing system includes a variety of storage platforms, some large scale and others traditional locally attached disks in a single computer. The major storage systems are our Hitachi SAN, which backs the NFS-accessible home directories across the cluster; our Lustre distributed storage system running on a DDN hardware fabric; and our newest member, a cloud storage solution, Ceph, which is part of the OpenStackPlusCeph fabric.

Goals

Knowing the operating parameters of the different storage subsystems will help identify what uses they are best for, help recognize when they are performing sub-optimally, and help us compare technologies on which to build applications.

The initial data set below offers some preliminary performance data points to guide development and further research into full scale operations monitoring.

Methods

The performance measures will be based on simple reads and writes of raw data to the next layer in the storage fabric. The tools of choice are dd and time. If there are multiple layers in a fabric, each layer should have corresponding performance numbers so we can notice differences at a specific layer boundary. Because our tools work at the file system level, the lowest layer will inherently be the underlying file system on a locally attached system disk.

The current tests center on reading and writing a 10 gigabyte file. The tests create the 10GB file with dd and then read that file multiple times (to identify any caching effects). 10GB is a good size: it is a decently large file, typical of individual files in larger data sets, and it is big enough to provide a stable performance measure without being so big that a test takes forever to run, since the tests are run interactively for now.

The current working directory for these commands is assumed to be a writable portion of the file system whose performance is being measured. With dd's default 512-byte block size, count=20M produces 20M x 512 B = 10 GiB (reported by dd as 11 GB in decimal units). The command used to create the file is:

time dd if=/dev/zero of=bigfile count=20M

The command used to read the created file is:

time dd if=bigfile of=/dev/null

If multiple files are needed, say to test cache performance for reads of two distinct files or some other read/write tests, an index can be added to bigfile.
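
For convenience, the individual commands above can be wrapped in a small script that performs one write pass and several read passes in the current directory. This is only a minimal sketch of the procedure described above; the file name and the number of read passes are placeholders that can be adjusted per test.

#!/bin/bash
# Minimal sketch of the write/read sequence; run from a writable directory
# on the file system under test. TESTFILE and the pass count are placeholders.
set -e
TESTFILE=bigfile

# Write a 10GB file (20M blocks at dd's default 512-byte block size).
time dd if=/dev/zero of=$TESTFILE count=20M

# Read it back several times to expose any caching effects.
for pass in 1 2 3; do
    time dd if=$TESTFILE of=/dev/null
done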

For example, in our stock NFS NAS configuration we have "local" system disks (LUNs) in our NAS server that are attached via fibre channel to the back-end SAN. This local disk has an ext4 file system on it, and we perform our baseline performance test at this layer. Each client of the NAS connects to the server via NFS across a GigE network, so we will run additional, distinct performance tests to measure the same write commands across the NFS channel, as sketched below.
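
A minimal sketch of the client-side NFS test, assuming a hypothetical export path and mount point (both are placeholders, not our actual configuration):

# Hypothetical client-side NFS run: mount the NAS export over GigE and
# repeat the same dd write/read commands. Paths below are placeholders.
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs nas-02:/export/home /mnt/nfs-test
cd /mnt/nfs-test
time dd if=/dev/zero of=bigfile count=20M
time dd if=bigfile of=/dev/null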

In cases like Ceph, where the local system disks are directly connected to the SATA fabric in the server and then represented as higher-level abstractions to the next client, which in turn presents block storage to the operating system, we will test each layer where an exposed file system allows use of our tools. In the case of Ceph, that is at the server disk layer (ext4 file systems on the OSD disks) and at the file system imprinted on the virtual block device attached to a client (either via OpenStack or RBD).

Results

Summary

The raw data is below but here are the rough performance characteristics:

  • A Ceph block store attached to a VM in OpenStack has a write speed of 250-290 MB/s and a read speed of ~100 MB/s. The read speed is consistent and doesn't change with repeated reads, so there doesn't appear to be any caching. The reason for this surprising write/read difference is not yet clear; we assume it comes from some cache setting.
  • Ceph storage is implemented on Dell 720xd boxes with 12 x 3TB 7200RPM disks. The performance of this underlying disk subsystem gauges the maximum performance we can expect at the Ceph layer and beyond. Write speeds for new files are in the range we see at the OpenStack virsh layer, between 250-300 MB/s. Write speeds for re-written files (i.e. reusing the same file name) are much slower, 100-125 MB/s, but that appears to be an ext4 behavior on reuse of blocks. Read speeds on these disks are very good at over 1 GB/s, so the ~100 MB/s we see through the virtual fabric represents a sizable performance hit.
  • A Hitachi LUN attached to the nas-02 NAS in RCS has a wide range of write speeds: one sample was 117 MB/s, another 211 MB/s. More samples are needed. Read performance was more predictable, around 110 MB/s on a first read of the file, but subsequent reads of the same file saw nearly 10x gains, presumably because of high-quality caching in the SAN fabric.
  • The Lustre file system, attached via QDR IB to the head node, saw more consistent results across the handful of tests: write performance around 50 MB/s and read performance around 150 MB/s.

In general, the number of tests needs to be increased considerably before any real conclusions can be drawn, but the results do provide a baseline for some preliminary investigations (e.g. the read differential in Ceph) and offer baseline expectations for what a NAS solution built on top of Ceph block devices should achieve.

Ceph Block Storage

The Ceph tests are focused on the block device interfaces because this is the layer that provides storage permanence in our OpenStack fabric and is targeted for our NAS solution. Block devices created under a tenant account in OpenStack exist in the Ceph RBD pool and can be attached to a VM. For the following tests, a 100GB block store was created in OpenStack and attached to an Ubunutu 12.04.2 VM instantiated in OpenStack.
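
The volume was created and attached through OpenStack. A minimal sketch of the equivalent steps with the cinder and nova command line clients of that era is shown below; the volume name is illustrative and <volume-id> stands for the ID reported by cinder.

# Sketch only: create a 100GB volume (backed by the Ceph RBD pool) and
# attach it to the test VM as /dev/vdc. Names and IDs are placeholders.
cinder create --display-name perf-test-vol 100
nova volume-attach jpr-test3 <volume-id> /dev/vdc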

After the volume was attached to the VM an ext4 file system was put on the block device and mounted into the VM file system.

ubuntu@jpr-test3:~$ sudo sfdisk -l /dev/vdc

Disk /dev/vdc: 208050 cylinders, 16 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/vdc: unrecognized partition table type
No partitions found
ubuntu@jpr-test3:~$ sudo mkfs.ext4 /dev/vdc
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks 
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done   
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

ubuntu@jpr-test3:~$ sudo mkdir /mnt3
ubuntu@jpr-test3:~$ sudo mount /dev/vdc /mnt3
ubuntu@jpr-test3:~$ ls /mnt3
lost+found
ubuntu@jpr-test3:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.9G  780M  8.6G   9% /
udev            998M  8.0K  998M   1% /dev
tmpfs           401M  220K  401M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1002M     0 1002M   0% /run/shm
/dev/vdb         20G  173M   19G   1% /mnt
/dev/vdc         99G  188M   94G   1% /mnt3

With the block device mounted, the space for the tests was created and then tests run:

ubuntu@jpr-test3:~$ sudo chown ubuntu /mnt3
ubuntu@jpr-test3:~$ cd /mnt3
ubuntu@jpr-test3:/mnt3$ time dd if=/dev/zero of=bigfile count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 43.3103 s, 248 MB/s

real    0m43.321s
user    0m3.652s
sys     0m32.914s
ubuntu@jpr-test3:/mnt3$ ls -lh  
total 11G
-rw-rw-r-- 1 ubuntu ubuntu 10G May 30 11:24 bigfile
drwx------ 2 root   root   16K May 30 11:22 lost+found
ubuntu@jpr-test3:/mnt3$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 40.6147 s, 264 MB/s

real    0m40.643s
user    0m3.316s
sys     0m30.938s
ubuntu@jpr-test3:/mnt3$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 43.1802 s, 249 MB/s

real    0m44.003s
user    0m3.512s
sys     0m31.958s
ubuntu@jpr-test3:/mnt3$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 38.3618 s, 280 MB/s

real    0m39.173s
user    0m3.128s
sys     0m29.326s
ubuntu@jpr-test3:/mnt3$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 106.008 s, 101 MB/s

real    1m46.017s
user    0m5.256s
sys     0m24.614s
ubuntu@jpr-test3:/mnt3$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 105.53 s, 102 MB/s

real    1m45.564s
user    0m5.644s
sys     0m24.230s
ubuntu@jpr-test3:/mnt3$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 104.697 s, 103 MB/s

real    1m44.756s
user    0m5.600s
sys     0m24.374s

Ceph Storage Node

Our Ceph storage is implemented on Dell 720xd boxes with 12 x 3TB 7200RPM disks. Knowing the performance of the underlying disk subsystem helps gauge the maximum performance we can expect at the Ceph layer and beyond. It is immediately clear from the results that write speeds for new files are in the range we see at the OpenStack virsh layer, between 250-300 MB/s. Write speeds for re-written files (i.e. reusing the same file name) are much slower, 100-125 MB/s, but that appears to be an ext4 behavior on reuse of blocks. Read speeds on these disks are very good at over 1 GB/s, so the ~100 MB/s we see through the virtual fabric is taking a big performance hit.

These initial results are for the same dd reads and writes on the Ceph storage machine with IP address 172.16.x.5, writing to the /var/tmp directory, which in the current crowbar configuration is on the same type of disk as the Ceph OSDs.

crowbar@da0-36-9f-0e-2b-88:~$ df -h
Filesystem                                Size  Used Avail Use% Mounted on
/dev/mapper/da0--36--9f--0e--2b--88-root  2.7T   42G  2.5T   2% /
udev                                       48G   12K   48G   1% /dev
tmpfs                                      19G  432K   19G   1% /run
none                                      5.0M     0  5.0M   0% /run/lock
none                                       48G     0   48G   0% /run/shm
/dev/sda2                                 229M   27M  190M  13% /boot
/dev/sdl1                                 2.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-37
/dev/sdk1                                 2.0T  3.3G  2.0T   1% /var/lib/ceph/osd/ceph-39
/dev/sdj1                                 2.0T  2.3G  2.0T   1% /var/lib/ceph/osd/ceph-41
/dev/sdi1                                 2.0T  3.7G  2.0T   1% /var/lib/ceph/osd/ceph-43
/dev/sdh1                                 2.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-45
/dev/sdf1                                 2.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-27
/dev/sdg1                                 2.0T  3.0G  2.0T   1% /var/lib/ceph/osd/ceph-30
/dev/sdd1                                 2.0T  2.6G  2.0T   1% /var/lib/ceph/osd/ceph-13
/dev/sde1                                 2.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-22
/dev/sdb1                                 2.0T  2.6G  2.0T   1% /var/lib/ceph/osd/ceph-5
/dev/sdc1                                 2.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-8

Disk Caching On

crowbar@da0-36-9f-0e-2b-88:/var/tmp$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 41.5029 s, 259 MB/s

real    0m41.516s
user    0m2.792s
sys     0m35.146s
crowbar@da0-36-9f-0e-2b-88:/var/tmp$ mkdir disk-perf
crowbar@da0-36-9f-0e-2b-88:/var/tmp$ mv bigfile1 disk-perf/
crowbar@da0-36-9f-0e-2b-88:/var/tmp$ cd disk-perf/
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ ls
bigfile1
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.2557 s, 1.2 GB/s

real    0m9.258s
user    0m2.000s
sys     0m7.256s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 35.8028 s, 300 MB/s

real    0m35.805s
user    0m2.964s
sys     0m31.930s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.74008 s, 1.1 GB/s

real    0m9.742s
user    0m1.936s
sys     0m7.804s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.52678 s, 1.1 GB/s

real    0m9.529s
user    0m2.000s
sys     0m7.528s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.70357 s, 1.1 GB/s

real    0m9.706s
user    0m2.292s
sys     0m7.412s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile count=20M

20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 86.0955 s, 125 MB/s

real    1m27.837s
user    0m2.736s
sys     0m37.678s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ 
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 86.915 s, 124 MB/s

real    1m28.900s
user    0m2.752s
sys     0m38.054s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile2 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 36.252 s, 296 MB/s

real    0m36.254s
user    0m2.868s
sys     0m32.518s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile3 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 36.6563 s, 293 MB/s

real    0m36.659s
user    0m2.976s
sys     0m33.102s
crowbar@da0-36-9f-0e-2b-88:/var/tmp/disk-perf$ time dd if=/dev/zero of=bigfile3 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 95.4214 s, 113 MB/s

real    2m39.178s
user    0m2.768s
sys     0m43.095s

Disk Caching Off

Rerun the local disk tests with caching taken out of the picture. The high read numbers above were inflated by the OS page cache. The following runs drop the Linux page cache before each read and reveal that realistic uncached read speeds are closer to 150 MB/s. This represents only about a 50 MB/s differential with the read performance seen on the Ceph storage volume.
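
For reference, the uncached read measurement used below boils down to dropping the Linux page cache immediately before timing the read:

# Drop clean and reclaimable page/dentry/inode caches, then time a read
# that cannot be served from memory (the sequence used in the transcript).
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
time dd if=bigfile3 of=/dev/null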

crowbar@ceph-node:~$mkdir -p project/disk-perf
crowbar@ceph-node:~$cd project/disk-perf/
crowbar@ceph-node:~/project/disk-perf$time dd if=/dev/zero of=bigfile3 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 36.1735 s, 297 MB/s

real    0m36.176s
user    0m2.460s
sys     0m33.598s
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 10.0036 s, 1.1 GB/s

real    0m10.006s
user    0m2.084s
sys     0m7.920s
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.63817 s, 1.1 GB/s

real    0m9.641s
user    0m1.936s
sys     0m7.700s
crowbar@ceph-node:~/project/disk-perf$cat /proc/sys/vm/drop_caches
0
crowbar@ceph-node:~/project/disk-perf$sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
crowbar@ceph-node:~/project/disk-perf$cat /proc/sys/vm/drop_caches
3
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 72.2676 s, 149 MB/s

real    1m12.270s
user    0m4.356s
sys     0m22.221s
crowbar@ceph-node:~/project/disk-perf$cat /proc/sys/vm/drop_caches 
3
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.77653 s, 1.1 GB/s

real    0m9.779s
user    0m2.156s
sys     0m7.620s
crowbar@ceph-node:~/project/disk-perf$sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 68.5844 s, 157 MB/s

real    1m8.785s
user    0m4.048s
sys     0m21.281s
crowbar@ceph-node:~/project/disk-perf$time dd if=bigfile3 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 9.45352 s, 1.1 GB/s

real    0m9.456s
user    0m2.152s
sys     0m7.300s
crowbar@ceph-node:~/project/disk-perf$sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
crowbar@ceph-node:~/project/disk-perf$time dd if=/dev/zero of=bigfile4 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 45.9953 s, 233 MB/s

real    0m46.033s
user    0m2.728s
sys     0m36.902s

Existing NAS

The existing NAS solution has fibre channel connections to an enterprise Hitachi SAN fabric. The numbers below are taken from tests run on nas-02 in the $HOME directory of user jpr. This gives a decent measure of the performance of a single LUN containing an ext4 file system.

[jpr@cheaha ~]$ ssh nas-02
Last login: Mon Mar 18 13:24:13 2013 from 172.20.0.10

================================================================
This system is configured and operated by the UAB IT Research Computing
group.

Use of this resource is governed by the UAB Acceptable Use Policy for
Computer and Network Resources. Please review these policies on-line:

http://www.uabgrid.uab.edu/aup

Unauthorized use of this resource is prohibited!
================================================================

[jpr@nas-02 ~]$ cd tmp
[jpr@nas-02 tmp]$ time dd if=/dev/zero of=bigfile count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 97.651 s, 110 MB/s

real    1m37.821s 
user    0m5.062s  
sys     1m2.859s  
[jpr@nas-02 tmp]$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 92.9336 s, 116 MB/s

real    1m32.955s 
user    0m4.863s  
sys     0m58.291s 
[jpr@nas-02 tmp]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 61.0101 s, 176 MB/s

real    1m1.032s  
user    0m4.478s  
sys     0m20.866s 
[jpr@nas-02 tmp]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 16.1595 s, 664 MB/s

real    0m16.163s 
user    0m2.603s  
sys     0m11.845s 
[jpr@nas-02 tmp]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 13.0625 s, 822 MB/s

real    0m13.066s 
user    0m2.122s  
sys     0m10.909s 
[jpr@nas-02 tmp]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 13.0438 s, 823 MB/s

real    0m13.046s 
user    0m2.077s  
sys     0m10.933s 
[jpr@nas-02 tmp]$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 139.184 s, 77.1 MB/s

real    2m19.197s 
user    0m4.043s  
sys     0m29.113s 
[jpr@nas-02 tmp]$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 91.1934 s, 118 MB/s

real    1m31.212s 
user    0m4.293s  
sys     0m25.520s 
[jpr@nas-02 tmp]$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 30.166 s, 356 MB/s

real    0m30.181s 
user    0m3.592s  
sys     0m17.273s 
[jpr@nas-02 tmp]$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 12.6146 s, 851 MB/s

real    0m12.620s 
user    0m2.167s  
sys     0m10.404s 
[jpr@nas-02 tmp]$ time dd if=bigfile of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 13.326 s, 806 MB/s

real    0m13.328s 
user    0m1.923s  
sys     0m11.369s 
[jpr@nas-02 tmp]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 89.86 s, 119 MB/s

real    1m29.868s 
user    0m4.330s  
sys     0m23.345s 

Lustre on DDN

These tests were run from the cluster head node (cheaha) in the user's Lustre scratch directory ($USER_SCRATCH), which sits on the DDN-backed Lustre file system attached via QDR InfiniBand.

[jpr@cheaha ~]$ cd $USER_SCRATCH 
[jpr@cheaha jpr]$ cd projects/disk-perf/
[jpr@cheaha disk-perf]$ time dd if=/dev/zero of=bigfile1 count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 216.505 seconds, 49.6 MB/s

real    3m36.525s
user    0m9.802s
sys     3m9.933s
[jpr@cheaha disk-perf]$ time dd if=/dev/zero of=bigfile count=20M
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 207.89 seconds, 51.6 MB/s

real    3m27.893s
user    0m9.634s
sys     3m12.125s
[jpr@cheaha disk-perf]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 74.7547 seconds, 144 MB/s

real    1m14.757s
user    0m8.753s
sys     1m5.302s
[jpr@cheaha disk-perf]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 72.9918 seconds, 147 MB/s

real    1m12.994s
user    0m8.704s
sys     1m3.926s
[jpr@cheaha disk-perf]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 71.92 seconds, 149 MB/s

real    1m11.936s
user    0m8.722s
sys     1m2.685s
[jpr@cheaha disk-perf]$ time dd if=bigfile1 of=/dev/null
20971520+0 records in
20971520+0 records out
10737418240 bytes (11 GB) copied, 67.9619 seconds, 158 MB/s

real    1m7.964s
user    0m9.379s
sys     0m58.345s