...
The table below summarizes SPC-1 and SPC-2 specifics.
|  | SPC-1 | SPC-2 |
| --- | --- | --- |
| Typical applications | ... | ... |
| Workload | Random I/O | Sequential I/O (1+ streams) |
| Workload variations | ... | ... |
| Reported metrics | ... | ... |
Good practices and tips for benchmarking
...
Because iperf is a client-server application, you have to install it on both machines involved in the tests. Make sure that you use iperf 2.0.2 (built with pthreads) or newer, because older versions have multi-threading issues. You can check the version of the installed tool with the following command:
    [user@hostname ~]$ iperf -v
    iperf version 2.0.2 (03 May 2005) pthreads
Because in many cases low network performance is caused by high CPU load, you should measure CPU usage at both ends of the link during every test round. In this HOWTO we use the open-source vmstat tool, which you probably already have installed on your machines.
...
To enable MTU 9000 on a machine's network interface, you can use the ifconfig command:
    [root@hostname ~]# ifconfig eth1 mtu 9000
Alternatively, you can put these settings into the interface configuration scripts, e.g. /etc/sysconfig/network-scripts/ifcfg-eth1 (on RHEL, CentOS, Fedora etc.).
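For reference, a minimal ifcfg-eth1 fragment might look like this (the device name and the other settings are assumptions; adapt them to your interface):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 (hypothetical example)
DEVICE=eth1
ONBOOT=yes
MTU=9000
```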
If jumbo frames are working properly, you should be able to ping one host from the other using a large MTU:
    [root@hostname ~]# ping 10.0.0.1 -s 8960
In the example above, we use 8960 instead of 9000 because the ping option -s expects the frame size minus the frame header, whose length is equal to 40 bytes. If you cannot use jumbo frames, set the MTU to the default value of 1500.
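The arithmetic behind the -s value can be sketched as follows (the 40-byte header length is the figure quoted above):

```shell
# ping payload size = MTU minus the 40-byte frame header
mtu=9000
header=40
echo $((mtu - header))   # prints 8960
```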
To tune your link you should measure the average Round Trip Time (RTT) between the machines; the time reported by the ping command is already the round trip time. Once you have measured the RTT, you can set the TCP read and write buffer sizes. There are three values you can set: the minimum, initial and maximum buffer size. The theoretical value (in bytes) for the initial buffer size is BPS / 8 * RTT, where BPS is the link bandwidth in bits/second and RTT is in seconds. Example commands that set these values for the whole operating system are:
    [root@hostname ~]# sysctl -w net.ipv4.tcp_rmem="4096 500000 1000000"
    [root@hostname ~]# sysctl -w net.ipv4.tcp_wmem="4096 500000 1000000"
It is probably best to start with values computed using the formula mentioned above and then tune them according to the test results.
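As a sketch of the formula mentioned above, with hypothetical numbers (a 1 Gbit/s link and a 4 ms RTT), the initial buffer size works out as follows:

```shell
# Bandwidth-delay product sketch with assumed values:
# a 1 Gbit/s link and a 4 ms round trip time.
BPS=1000000000   # link bandwidth in bits/second
RTT_MS=4         # measured round trip time in milliseconds
# BPS / 8 * RTT, with RTT converted from milliseconds to seconds
echo $((BPS / 8 * RTT_MS / 1000))   # prints 500000
```

A buffer of this order matches the 500000-byte initial value used in the tcp_rmem/tcp_wmem examples.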
You can also experiment with maximum socket buffer sizes:
    [root@hostname ~]# sysctl -w net.core.rmem_max=1000000
    [root@hostname ~]# sysctl -w net.core.wmem_max=1000000
Other options that may boost performance are:
    [root@hostname ~]# sysctl -w net.ipv4.tcp_no_metrics_save=1
    [root@hostname ~]# sysctl -w net.ipv4.tcp_moderate_rcvbuf=1
    [root@hostname ~]# sysctl -w net.ipv4.tcp_window_scaling=1
    [root@hostname ~]# sysctl -w net.ipv4.tcp_sack=1
    [root@hostname ~]# sysctl -w net.ipv4.tcp_fack=1
    [root@hostname ~]# sysctl -w net.ipv4.tcp_dsack=1
The meaning of these parameters is explained in the Linux documentation (see the sysctl command and the kernel's IP sysctl documentation).
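Note that settings applied with sysctl -w are lost at reboot. To make them persistent you can put the same settings into /etc/sysctl.conf, e.g. (values taken from the examples above):

```
net.core.rmem_max = 1000000
net.core.wmem_max = 1000000
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
```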
...
To perform the test, we should run iperf in server mode on one host:
    [root@hostname ~]# iperf -s -M $mss
On the other host we should run a command like this:
    [root@hostname ~]# iperf -c $serwer -M $mss -P $threads -w ${window} -i $interval -t $test_time
Here is a description of the names and symbols used in the command line:
...
To generate a key pair, we use the following command:
    [root@hostname ~]# ssh-keygen -t dsa
Then we copy the public key to the remote server and add it to the authorized keys file:
    [root@hostname ~]# cat identity.pub >> /home/sarevok/.ssh/authorized_keys
Now we can log in to the remote server without a password, so we can also run programs remotely, e.g. from a bash script.
...
Here is a simple shell script to run the iperf test:
    #!/bin/sh
    file_size=41
    dst_path=/home/stas/iperf_results
    script_path=/root
    curr_date=`date +%m-%d-%y-%H-%M-%S`
    serwer="10.0.1.1"
    user="root"
    test_time=60
    interval=1
    mss=1460
    window=1000000
    min_threads=1
    max_threads=128
    for threads in 1 2 4 8 16 32 64 80 96 112 128 ; do
        ssh $user@$serwer $script_path/run_iperf.sh -s -w ${window} -M $mss &
        ssh $user@$serwer $script_path/run_vmstat.sh 1 vmstat-$window-$threads-$mss-$curr_date &
        vmstat 1 > $dst_path/vmstat-$window-$threads-$mss-$curr_date &
        iperf -c $serwer -M $mss -P $threads -w ${window} -i $interval -t $test_time >> $dst_path/iperf-$window-$threads-$mss-$curr_date
        ps ax | grep vmstat | awk '{print $1}' | xargs -i kill {} 2>/dev/null
        ssh $user@$serwer $script_path/kill_iperf_vmstat.sh
    done
The run_iperf.sh script can look like this:
    #!/bin/sh
    # pass all command-line arguments through to iperf and run it in the background
    iperf "$@" &
The run_vmstat.sh script can contain:
    #!/bin/sh
    vmstat $1 > $2 &
kill_iperf_vmstat.sh may look like this:
    #!/bin/sh
    ps -elf | egrep "iperf" | egrep -v "egrep" | awk '{print $4}' | xargs -i kill -9 {}
    ps -elf | egrep "vmstat" | egrep -v "egrep" | awk '{print $4}' | xargs -i kill -9 {}
To start the test script so that it ignores hangup signals, you can use the nohup command:
    [stas@worm ~]$ nohup script.sh &
This command keeps the test running even after you close the session with the server.
...
Here, we present how to build a software RAID structure using the Linux md tool. To create a simple RAID level from the devices sda1, sda2, sda3 and sda4 you should use the following command:
    mdadm --create --verbose /dev/md1 --spare-devices=0 --level=0 --raid-devices=4 /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
Where:
- /dev/md1 - the name of the created RAID device,
- spare-devices - the number of drives to be spares,
- level - the RAID level you want to create (currently, Linux supports LINEAR (disk concatenation), RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6 and RAID10 md devices),
- raid-devices - the number of devices you want to use to build the RAID structure.
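For --level=0 the usable capacity is simply the sum of the member devices. A small sketch (the per-device size is an assumption, chosen so that four devices give the array size shown in the mdadm output in this section):

```shell
# RAID-0 usable capacity = number of devices * per-device capacity.
device_kb=1709334368   # assumed capacity of one member device, in kB
devices=4
echo $((devices * device_kb))   # prints 6837337472
```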
When you create a RAID structure you should be able to see some RAID details similar to the information shown below:
    [root@sarevok bin]# mdadm --detail /dev/md1
    /dev/md1:
            Version : 00.90.03
      Creation Time : Mon Apr  6 17:41:43 2009
         Raid Level : raid0
         Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 3
        Persistence : Superblock is persistent

        Update Time : Mon Apr  6 17:41:43 2009
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

         Chunk Size : 64K

     Rebuild Status : 10% complete

               UUID : 19450624:f6490625:aa77982e:0d41d013
             Events : 0.1

        Number   Major   Minor   RaidDevice State
           0       65      16        0      active sync   /dev/sda1
           1       65      32        1      active sync   /dev/sda2
           2       65      48        2      active sync   /dev/sda3
           3       65      64        3      active sync   /dev/sda4
When performance is considered, the chunk size of the md device may be an important parameter to tune. The -c option of the mdadm command can be used to specify the chunk size in kilobytes. The default is 64 kB; however, it should be set according to factors such as:
...
To create a file system on the md device and then mount it in some directory, we use commands like these:
    mkfs.ext3 /dev/md1
    mount /dev/md1 /mnt/md1
Again, there are some filesystem parameters that are interesting from the performance point of view. One of them is the block size. It should be set taking into account the application characteristics and the underlying storage components. One rule of thumb is to use a block size equal to the size of the RAID stripe. You can set the block size using mkfs's -b parameter. It is also possible to influence the filesystem behaviour using mount command options.
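The stripe size referred to by that rule of thumb can be sketched as follows (the chunk size and disk count are assumptions matching the RAID-0 example in this HOWTO):

```shell
# RAID stripe size = chunk size * number of data disks.
# Assumed values: 64 kB chunks, 4-disk RAID-0 (no parity disks).
chunk_kb=64
data_disks=4
stripe_kb=$((chunk_kb * data_disks))
echo "stripe size: ${stripe_kb} kB"   # prints "stripe size: 256 kB"
```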
...
The idea of dd is to copy a file from the 'if' location to the 'of' location. Using this tool to measure disk devices requires a trick: to measure the write speed, you read data from /dev/zero and write it to a file on the tested device; to measure the read performance, you read the data from a file on the tested device and write it to /dev/zero.
This way we avoid measuring more than one storage system at a time. To measure the time of reading or writing the file we use the time tool. Example commands to write and read 32 GB of data are:
for writing performance:
    [root@sarevok ~]# time dd if=/dev/zero of=/mnt/md1/test_file.txt bs=1024M count=32
and for reading performance:
    [root@sarevok ~]# time dd if=/mnt/md1/test_file.txt of=/dev/zero bs=1024M count=32
where:
- if - input file/device path
- of - output file/device path
- bs - size of a chunk of data to copy
- count - how many times a chunk defined by bs is copied
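From the elapsed time reported by time you can derive the throughput yourself; a quick sketch of the conversion, with a hypothetical run of 32 GiB copied in 100 seconds:

```shell
# Throughput sketch with assumed numbers: 32 GiB copied in 100 s.
bytes=$((32 * 1024 * 1024 * 1024))
seconds=100
echo "$((bytes / seconds / 1024 / 1024)) MiB/s"   # prints "327 MiB/s"
```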
...
To perform one round of the test, we can use the command:
    iozone -T -t $threads -r ${blocksize}k -s ${file_size}G -i 0 -i 1 -i 2 -c -e
where:
- -T - use POSIX pthreads for throughput tests
- -t - how many threads to use for the test
- -r - chunk size used for the test
- -s - test file size. Important: this is the file size PER THREAD, because each thread writes to or reads from its own file.
- -i - test modes; we choose 0 - write/rewrite, 1 - read/reread and 2 - random write/read
- -c - include close() in the timing calculations
- -e - include flush (fsync, fflush) in the timing calculations
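Because -s is per thread, the total amount of data a run touches grows with the thread count; a small sketch with assumed values:

```shell
# Total data = per-thread file size (-s) * number of threads (-t).
# Assumed values: 16 threads, 4 GB per thread.
threads=16
file_size_gb=4
echo "total data: $((threads * file_size_gb)) GB"   # prints "total data: 64 GB"
```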
...
To automate the testing, we can write a simple shell script like this:
    #!/bin/sh
    dst_path=/home/sarevok/wyniki_test_iozone
    curr_date=`date +%m-%d-%y-%H-%M-%S`

    file_size=128
    min_blocksize=1
    max_blocksize=32

    min_queuedepth=1
    max_queuedepth=16

    mkdir $dst_path
    cd /mnt/sdaw/

    blocksize=$min_blocksize
    while [ $blocksize -le $max_blocksize ]; do
        queuedepth=$min_queuedepth
        while [ $queuedepth -le $max_queuedepth ]; do
            vmstat 1 > $dst_path/vmstat-$blocksize-$queuedepth-$curr_date &
            /root/iozone -T -t $queuedepth -r ${blocksize}k -s ${file_size}G -i 0 -i 1 -c -e > $dst_path/iozone-$blocksize-$queuedepth-$curr_date
            ps ax | grep vmstat | awk '{print $1}' | xargs -i kill {} 2>/dev/null
            queuedepth=`expr $queuedepth \* 2`
            file_size=`expr $file_size / 2`
        done
        blocksize=`expr $blocksize \* 2`
    done
To start the test script so that it ignores hangup signals, you can use the nohup command:

    [root@sarevok ~]# nohup script.sh &
...