My goal is to obtain good write performance on NVMe SSDs (benchmarking tools seem to report numbers for specific, optimized contexts that are far from what we obtain in real life).
My server:
CPU: 32x Intel(R) Xeon(R) Silver 4110 @ 2.10GHz
RAM: 256 GB DDR4
HD: 2x 1.5 TB NVMe SSD Micron 9200 => RAID0
HD: 1x 1.5 TB NVMe SSD Micron 9200 => DISK1
OS: Debian 9.5
File system: XFS
I am using cp and dd to measure the bandwidth of file copies from a RAMDISK (30 GB of /dev/urandom data) to DISK1 and to RAID0. I monitor the write operations with iostat. With dd I use bs=1M and the oflag option.
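For reference, this is roughly the procedure I use (the mount points /mnt/ramdisk, /mnt/raid0 and /mnt/disk1 are placeholders for my actual paths):

    # prepare a 30 GB test file on a tmpfs ramdisk
    mount -t tmpfs -o size=40G tmpfs /mnt/ramdisk
    dd if=/dev/urandom of=/mnt/ramdisk/test.bin bs=1M count=30720

    # copy it to the target, optionally bypassing the page cache
    dd if=/mnt/ramdisk/test.bin of=/mnt/raid0/test.bin bs=1M oflag=direct

    # in another terminal, watch per-device throughput every 2 seconds
    iostat -xm 2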
a) with dd from the RAMDisk to RAID0, I get 15 Gb/s without oflag=direct and nearly 20 Gb/s with oflag=direct. GOOD!
b) with dd from the RAMDisk to DISK1, I get 15 Gb/s without oflag=direct and 13 Gb/s with it. I am not sure I understand this inversion.
c) with cp from the RAMDisk to RAID0, performance is poor, and iostat (sampling every 2 s) shows gaps of about 4 seconds with no writes to the RAID0 between each burst of writes.
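A minimal sketch for reproducing run c) from a clean state, and for including the page-cache flush in the timing (same placeholder paths as above):

    # flush dirty pages and drop caches so each run starts from the same state
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # time the copy including the final flush to disk
    time sh -c 'cp /mnt/ramdisk/test.bin /mnt/raid0/test.bin && sync'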
What is the best way to measure write performance of a RAID0 of NVMe SSDs in a realistic way, without these fluctuations?
Kind regards