chemal wrote: ↑ 2019/09/10 04:49:43
Marvell? That's fake RAID, isn't it? (I see you mentioned Marvell before, I must have read over it.)
A single 860 EVO writes at about 500 MB/s (sequentially).
Edit: Google says a Marvell 9230 is h/w RAID but really low-end. It has two PCIe 2.0 lanes for a theoretical max of 1 GB/s, of which you can get ~800 MB/s in reality.

Yeah, it's 2x PCIe 2.0 lanes, which should be 10 Gbps.
At this point though, it's hard to tell whether the bottleneck is the Marvell controller itself or whether the system really is limited to ~10 Gbps overall, mostly because the two numbers happen to coincide.
The only way I'd be able to test that would be to get something like an Avago/Broadcom/LSI 12 Gbps SAS MegaRAID controller (e.g. the MegaRAID 9341-8i) and see whether it alleviates some of the bandwidth limitation, because at least that's a PCIe 3.0 x8 card (64 Gbps).
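For reference, here's a rough sketch of the link math for both host interfaces, using the standard PCIe per-lane rates and encoding overheads (these are generic PCIe figures, not anything specific to either card):

[code]
# Rough PCIe link-bandwidth math (generic PCIe figures, not card-specific).
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding, so only 80% of the raw rate carries data.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, so ~98.5% of the raw rate carries data.

def pcie_bandwidth(gt_per_lane, lanes, encoding_efficiency):
    raw_gbps = gt_per_lane * lanes                 # raw line rate (1 GT/s ~ 1 Gbps per lane)
    usable_gbps = raw_gbps * encoding_efficiency   # after encoding overhead
    return raw_gbps, usable_gbps, usable_gbps / 8  # last value is GB/s

# Marvell 9230 host link: PCIe 2.0 x2
print(pcie_bandwidth(5.0, 2, 8 / 10))      # (10.0, 8.0, 1.0)    -> ~1 GB/s theoretical
# MegaRAID 9341-8i host link: PCIe 3.0 x8
print(pcie_bandwidth(8.0, 8, 128 / 130))   # (64.0, ~63.0, ~7.9) -> ~7.9 GB/s theoretical
[/code]

The ~1 GB/s result is the same 'theoretical max of 1 GB/s' chemal quoted for the 9230, which is also why the 10 Gbps number is really a raw line rate rather than usable bandwidth.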
Also, interestingly enough, there AREN'T any Avago/Broadcom/LSI PCIe 3.0 x16 RAID controllers at ALL. They have x16 host bus adapters, but not x16 RAID controllers, which also means that if I were to use a non-RAID HBA, the RAID would have to be software RAID, which, of course, comes with its own set of issues.
I'm still debating whether I want to switch over to the MegaRAID 9341-8i, because doing so will max out my system's PCIe lane supply/demand. (The Core i7-4930K supplies 40 PCIe 3.0 lanes; 16 are taken up by the Mellanox NIC and 16 by the GTX Titan, which leaves 8 for the MegaRAID HBA if I go with it.)
I'm also undecided as to whether I want to go with a SAS/SATA/NVMe 12 Gbps MegaRAID, a SAS/SATA-only 12 Gbps card (MegaRAID 9341-8i), or a SAS/SATA 6 Gbps card (9271-8i).
Again though, at 768.67 MB/s for the RAID array, that works out to 192.1675 MB/s per drive, which is a far cry from the ~500 MB/s sequential write speed.
192.1675 MB/s is only about 1.5 Gbps per drive, and each drive sits on a 6 Gbps SATA interface.
Combined, the array only musters 6.14936 Gbps out of the 10 Gbps for the Marvell HW RAID controller's own PCIe 2.0 x2 link, which suggests that the PCIe 2.0 x2 interface isn't the limiting factor unless the controller is only about 61.5% efficient at using that link.
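To lay that arithmetic out in one place (using the 768.67 MB/s aggregate figure from the test run; everything else is derived from it):

[code]
# Measured aggregate sequential-write throughput from the test run, in MB/s.
measured_aggregate_mb_s = 768.67
drives = 4

per_drive_mb_s = measured_aggregate_mb_s / drives     # 192.1675 MB/s per drive
per_drive_gbps = per_drive_mb_s * 8 / 1000            # ~1.54 Gbps per drive (vs. a 6 Gbps SATA link)
aggregate_gbps = measured_aggregate_mb_s * 8 / 1000   # ~6.15 Gbps total
link_utilization = aggregate_gbps / 10                # fraction of the 10 Gbps PCIe 2.0 x2 raw rate

print(per_drive_mb_s, per_drive_gbps, aggregate_gbps, link_utilization)
# 192.1675 1.53734 6.14936 0.614936
[/code]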
I would have expected that if I were maxing out the controller's interface, I'd be seeing something closer to 10 Gbps with the testing methodology that you suggested, so I'm still confused; something still isn't adding up for me, either with maxing out the PCIe 2.0 x2 HW RAID controller's interface bandwidth or with the suggested testing methodology.
With the hardware that I've got, and if the advertised specs are to be believed, I should be pushing upwards of 16.64 Gbps.
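Working backwards, that 16.64 Gbps is just the advertised per-drive sequential write speed multiplied across the array (16.64 Gbps over 4 drives implies 520 MB/s per drive, which lines up with the ~500 MB/s figure quoted above):

[code]
# Expected aggregate throughput if every drive hit its advertised sequential write speed.
# 520 MB/s per drive is what the 16.64 Gbps total implies (16.64 Gbps / 8 / 4 drives).
advertised_write_mb_s = 520
drives = 4

expected_aggregate_mb_s = advertised_write_mb_s * drives      # 2080 MB/s
expected_aggregate_gbps = expected_aggregate_mb_s * 8 / 1000  # 16.64 Gbps
print(expected_aggregate_mb_s, expected_aggregate_gbps)       # 2080 16.64
[/code]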
The suggested testing methodology, I think, takes care of the buffering that might be inflating the results (which is fair), but then I'm getting results that are closer to the maximum random write speeds (based on the max IOPS rating * 4 drives) than to the advertised sequential speeds.
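For the random-write side of that comparison, the conversion is just IOPS x block size; the 4 KiB block size and the per-drive IOPS value below are placeholders for illustration, not the drives' actual spec-sheet numbers:

[code]
# Convert a random-write IOPS rating into throughput: MB/s = IOPS * block size.
# NOTE: the IOPS value here is a hypothetical example, not the 860 EVO's actual rated figure.
example_random_write_iops = 45_000   # placeholder per-drive rating
block_size_bytes = 4096              # assuming 4 KiB random writes
drives = 4

per_drive_mb_s = example_random_write_iops * block_size_bytes / 1_000_000
array_mb_s = per_drive_mb_s * drives
print(per_drive_mb_s, array_mb_s)    # 184.32 737.28 (with this example value, near the measured 768.67 MB/s)
[/code]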
Again, maybe it's just me, but the data and the results don't make sense given the hardware.