IO benchmarking for MySQL, part 1
A while ago I started a project that will be heavily IO-bound on the MySQL server. The test machine allocated for this had a DAS with 15 disks (although I only used 14), connected via external SAS (standard 3Gb/s half-duplex or 6Gb/s full-duplex, over two ports).
I used sysbench for the tests, both fileio and oltp, although the results here are based on the fileio runs. The disks were set up in RAID10 (7 RAID 1 sets, striped together) and later in RAID50 (2 RAID 5 sets of 7 disks each, striped together), with the latter yielding better results.
Let’s take a look at 1, 2, 4, 8, 16 and 128 concurrent clients with different IO schedulers, all on XFS.
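For reference, switching IO schedulers on a running system is a one-liner (a sketch; `/dev/sdb` is a placeholder for the array device, and which schedulers are available depends on your kernel build):

```shell
# Show the available schedulers; the active one is shown in brackets
cat /sys/block/sdb/queue/scheduler

# Switch the device to noop (takes effect immediately, no remount needed)
echo noop > /sys/block/sdb/queue/scheduler
```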
The RAID controller was configured with write-back caching, cached access and advanced read-ahead, backed by 512MB of battery-backed cache.
Also, I tested both sequential reads (SEQRD in the graphs) and random read/write (RNDRW in the graphs).
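As a sketch of how the fileio tests were driven (sysbench 0.4-style options; the file size, thread count and run time below are illustrative placeholders, not the exact values I used):

```shell
# Prepare test files larger than RAM so the page cache can't hide the disks
sysbench --test=fileio --file-total-size=64G prepare

# Sequential read pass (SEQRD)
sysbench --test=fileio --num-threads=8 --file-total-size=64G \
         --file-test-mode=seqrd --max-time=300 --max-requests=0 run

# Random read/write pass (RNDRW)
sysbench --test=fileio --num-threads=8 --file-total-size=64G \
         --file-test-mode=rndrw --max-time=300 --max-requests=0 run

# Remove the test files when done
sysbench --test=fileio --file-total-size=64G cleanup
```

Repeating the `run` step while varying `--num-threads` gives the concurrency curve discussed above.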
So, in random access they were all quite bad, especially considering that I had expected better performance, but at least they were very close to each other. Sequential reading, however, was where noop did a much better job: all the other schedulers started dropping off after 128 clients, while with noop I reached a peak around 384 concurrent clients (with a throughput of ~518Mb/sec).
Now, back in August Vadim had a post about EC2/EBS IO benchmarking, and the results you will see below are better, but still not good enough in my mind, since I know that IO access on EC2/EBS is quite slow.
Anyway, let’s take a look at requests/s and response time in ms, with 8 threads doing random read-write, to keep the benchmark comparable with Vadim’s.
The latency stayed pretty constant here, and interestingly it was almost identical across the tests even as I scaled up the number of threads.
One thing worth noting: with bonnie++ I would get similar single-threaded results in the read test to what I got with sysbench. It turns out that Linux messes around with device read-ahead management even though, in this case, it is better left to the controller.
Take a look at this graph, which shows how much improvement you can get by using blockdev to change the read-ahead setting (--setra) on Linux, even though the RAID controller should be responsible for that:
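The knob in question looks like this (a sketch; `/dev/sdb` again stands in for the array, and the read-ahead value is illustrative rather than a recommendation):

```shell
# Read the current read-ahead setting, in 512-byte sectors
blockdev --getra /dev/sdb

# Raise it: 16384 sectors = 8MB of kernel read-ahead on this device
blockdev --setra 16384 /dev/sdb
```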
In the next part, I will focus on finding the limits of each scheduler and on seeing what this array can really deliver, but I am still waiting for additional hardware to arrive before I can do so.
As for which scheduler I recommend: for the workloads I tried with the oltp test, noop had the best results. Those results will be published in the next part as well.