sdb is a simple disk benchmark tool: it reads or writes sequential data to a disk or file. It is not comparable to tools like bonnie++; its main purpose is to give a hint of the maximal performance of a disk, which is why it runs much faster than a full benchmark.
To achieve this, a memory buffer of size blocksize (option -b) is used. This buffer is written to, or read in from, the file filename (option -f) until the given filesize (option -s) is reached.
Further options are -D for O_DIRECT and -S for O_SYNC. Both avoid buffering of the data and give more precise write-performance results. It is also possible to fill the buffer with random values before writing to a file (option -R).
sdb is similar to dd, but there are some differences. dd's primary focus is not benchmarking a disk: its reported time covers everything, i.e. initializing the memory buffer, reading data into this buffer, and writing it to the file. It also reports sizes in multiples of 1000.
Here is a simple example that illustrates this. We read 64 MB from /dev/zero and write them to /dev/null:
$ dd if=/dev/zero of=/dev/null bs=64M count=1
1+0 records in
1+0 records out
67108864 bytes (67 MB) copied, 0.139788 s, 480 MB/s

You can see that the 64 MB are reported as 67 MB. With sdb you have to use two runs: one to get the read performance of /dev/zero:
$ sdb -r -f /dev/zero -b 64m -s 64m
67108864 Bytes copied per thread, block size 67108864
Thread 0 (CPU: 0): 0.035224 seconds -> 1.77 GB/s

and a second one to get the write performance for /dev/null:
$ sdb -w -f /dev/null -b 64m -s 64m
67108864 Bytes copied per thread, block size 67108864
Thread 0 (CPU: 1): 0.000004 seconds -> 13982.10 GB/s
One can see why dd is not well suited for disk benchmarks...
To test the write performance of an SSD it can be useful to initialize the buffer with random values. Some SSDs compress and decompress data on the fly, so a benchmark that writes only zeros to the disk may produce misleading results.
Here is a simple example with such an SSD; we write 256 MB to the disk:
$ sdb -w -f test -b 64m -s 256m -D -S
268435456 Bytes copied per thread, block size 67108864
Thread 0 (CPU: 1): 1.250620 seconds -> 204.70 MB/s in 4 loops
Now we do the same with a randomized buffer:
$ sdb -w -f test -b64m -s 256m -D -S -R
Filling buffer with randomized values...
Randomization took 6.742207 seconds -> 9.49 MB/s
268435456 Bytes copied per thread, block size 67108864
Thread 0 (CPU: 0): 2.427590 seconds -> 105.45 MB/s in 4 loops
Note: Depending on the size of blocksize it might take quite a long time to fill the buffer with random values from /dev/urandom.
If you have more than one processor it might be a good idea to write or read several files at the same time: the memory bandwidth of a single core can be a limit. This is especially an issue with really fast RAID systems or with memory-backed file systems like tmpfs or ramfs.
With the option -j several threads can be used; the thread index is then appended to the given filename. The option -T displays some timing information for the functions memset(), memcpy(), bcopy() and memmove(). This gives you a hint of how fast memory can be written or read. For accurate values you should use dedicated tools like bandwidth.
The current version is 0.8.0 and can be found here: sdb-0.8.0.tar.gz
A Debian package for amd64 can be found here: sdb_0.8.0-1_amd64.deb
The most significant change is the calculation of IOPS, for sequential and random access, both writing and reading.
The README is available online, too, as is the manual page: sdb.1
Unpack the source archive:
$ tar xvzf sdb-0.7.5.tar.gz
Change to the directory:
$ cd sdb-0.7.5
and compile the source with
$ make

and finally you can install it with
$ make install
This will copy the program and the manual page to /usr/local/bin/ and /usr/local/man/man1/.
You can also run the program from the build directory without installing it, by giving the path, e.g.:
$ sdb-0.7.5/sdb -h
If you write to a raw disk, e.g. /dev/sda, or to a partition, e.g. /dev/sda1, be sure it is the right one! Writing directly to it destroys all data on it, and you will have to reformat or repartition it afterwards. So if you choose the wrong disk, all its data is lost! It is safer to use a file, but then the file system is also involved in your results.
The software is licensed under the GPLv2.
Here is a screenshot of writing 4 files in parallel:
Here is a screenshot of reading the 4 files:
Here is a screenshot of version 0.7 calculating IOPS for reading within one file using 32 threads. The tests were done on a RAID-0 consisting of 5 SSDs:
This screenshot of version 0.7 is similar to the one above, but for writing to the file:
If you have any comments, bugs, problems, or hints: write me an email!
Dirk Geschke, email@example.com