RAID 10 slower than single drive?

thexder2
Posts: 5
Joined: 2018/07/05 15:56:31

RAID 10 slower than single drive?

Post by thexder2 » 2018/07/05 16:08:41

I have been setting up a new server and am seeing some strange results when running I/O tests with fio. The system has 4 high-speed NVMe drives. When I test a single drive I get pretty much the rated performance, but when I put the 4 drives into a RAID 10 array (using md, LVM, or even ZFS) I get far lower speeds than from a single drive. For instance, a single drive with fio run directly against the device file gives 802k IOPS / 3.2 GB/s bandwidth on random read and 400k IOPS / 1.6 GB/s on random write, but md RAID 10, again directly against the device file, gives 577k IOPS / 2.3 GB/s on the same random read test and 129k IOPS / 0.5 GB/s on the random write test. I see a similar effect when there is a filesystem on top and I read from and write to a file, and also with LVM RAID 10.
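
For reference, the random read test is 4k random reads straight against the raw device, roughly along the lines of the command below (the queue depth and job count here are illustrative, not my exact job file):

    fio --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread \
        --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 \
        --runtime=60 --time_based --group_reporting

For the RAID tests the --filename just points at /dev/md0 (or the LVM/ZFS volume) instead.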

Has anyone else experienced this? Does anyone have an explanation or a fix for it? This is all running CentOS 7 with the latest updates on a fresh install.

TrevorH
Site Admin
Posts: 33218
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: RAID 10 slower than single drive?

Post by TrevorH » 2018/07/05 16:10:25

How are the NVMe devices attached to the system? What sort of system is it? Hardware details.
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

thexder2
Posts: 5
Joined: 2018/07/05 15:56:31

Re: RAID 10 slower than single drive?

Post by thexder2 » 2018/07/05 18:02:16

It is a Dell PowerEdge R740xd with two Xeon 6148s and 768 GB of RAM. I am not sure exactly how the drives are attached, other than that they are the 2.5" Samsung PM1725a drives.

thexder2
Posts: 5
Joined: 2018/07/05 15:56:31

Re: RAID 10 slower than single drive?

Post by thexder2 » 2018/07/05 18:05:42

Oh, also: RAID 0 does give a speed increase, but something seems to be limiting it to around 800k IOPS, or 3.2 GB/s. Two drives show an increase on sequential write and random write, but not on random read, and going all the way up to 4 drives in RAID 0 shows no improvement over 2 drives. Running the same tests on the drives individually, but on all 4 drives at the same time, shows a similar limitation. And any RAID level above 0 is still slower than a single drive in every test.
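
The four-drives-at-once test is just one fio job per device in a single run, something like this (device names illustrative, not necessarily my exact invocation):

    fio --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting \
        --name=d0 --filename=/dev/nvme0n1 \
        --name=d1 --filename=/dev/nvme1n1 \
        --name=d2 --filename=/dev/nvme2n1 \
        --name=d3 --filename=/dev/nvme3n1

The combined total across the four jobs still tops out at about the same 800k IOPS / 3.2 GB/s.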

TrevorH
Site Admin
Posts: 33218
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: RAID 10 slower than single drive?

Post by TrevorH » 2018/07/05 18:08:10

Well, I am guessing that your drives are contending with each other for bandwidth and, as a result, slowing each other down. The output from lshw might help to show how everything is laid out.
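
Something along these lines should show the PCI topology and what link width each NVMe device actually negotiated (the bus address is only an example, substitute your own from lspci):

    lshw -class disk -class storage
    lspci -tv
    lspci -s 3b:00.0 -vv | grep -E 'LnkCap|LnkSta'

If LnkSta reports fewer lanes than LnkCap, or all four drives hang off the same upstream port, that would be your bottleneck.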
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

thexder2
Posts: 5
Joined: 2018/07/05 15:56:31

Re: RAID 10 slower than single drive?

Post by thexder2 » 2018/07/05 19:19:15

Thanks for pointing me in the right direction. It turns out the people who built the server connected all 4 drives to a PCIe switch that connects to the motherboard over an x4 link, which is most likely where the limits we are seeing come from.
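
If the uplink really is a PCIe 3.0 x4 link, that would explain the ceiling nicely (rough numbers, assuming gen3 lanes):

    4 lanes x 8 GT/s x 128/130 / 8  ≈ 3.94 GB/s raw
    minus packet/protocol overhead  ≈ 3.2-3.5 GB/s usable

which is right about where both the RAID 0 tests and the four-drives-at-once tests were topping out.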

thexder2
Posts: 5
Joined: 2018/07/05 15:56:31

Re: RAID 10 slower than single drive?

Post by thexder2 » 2018/07/09 19:47:42

After fixing the connection to the drives there still seem to be some limits on speed. I have tuned a few parameters on the drives and in the tests, but I am still not getting the expected numbers. I am waiting for the next round of RAID 10 tests to see what those show, but RAID 0 is looking better, even if it is not quite what I would expect.
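
For anyone following along, the RAID 10 array under test is a plain md array created with something like the following (device names and chunk size are illustrative, not necessarily what I used):

    mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=512 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1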
