
NVMe RAID0 read (and write) performance is sometimes worse than the performance of a single underlying drive.

hdparm -t /dev/md0:

Timing buffered disk reads: 7054 MB in  3.00 seconds = 2350.46 MB/sec

hdparm -t /dev/nvme0n1:

Timing buffered disk reads: 7160 MB in  3.00 seconds = 2386.32 MB/sec

This is a bare-metal server hosted in a datacenter; the drives and the server are cooled properly, so there is no thermal throttling from high temperatures.
The drives sit at about 45 °C on average, as reported by nvme smart-log /dev/nvmeX.
Something is wrong with those low array speeds and I want to understand what. These are the full specs:

EPYC 7502P [2nd-gen Rome, 32 cores/64 threads]
128 GB DDR4 (4x 32 GB) 3200 MHz
3x 3.8 TB Samsung NVMe PCIe 3.0 enterprise SSDs
Software RAID0 created with mdadm (rough commands below)
XFS filesystem on top of md0, created with defaults: mkfs.xfs /dev/md0
Debian 10 64-bit with latest updates and stock kernel (no tinkering)
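For context, the array and filesystem were created roughly like this (the exact mdadm invocation wasn't saved, so assume the default chunk size):

mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mkfs.xfs /dev/md0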

What am I doing wrong? What am I not doing at all? I know there are diminishing returns for any RAID when it comes to speed, but this is bad.

PS: off the top of your head, what RAID0 read speed would you expect from that system?


1 Answer


Michael Hampton and Stuka are right: use fio for benchmarking. For example:

fio --filename=/dev/md0 --direct=1 --rw=read --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=seq_read
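For a like-for-like baseline, the same test can be pointed at a single member drive (device name assumed here; adjust to your layout):

fio --filename=/dev/nvme0n1 --direct=1 --rw=read --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=seq_read_single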
