SSD7000 Series Performance Test Guide (Linux)
Linux Platforms
Step 1 Download the Performance Test tool
We recommend using the fio utility to test the NVMe RAID array’s performance in a Linux environment.
- Download fio (the following example uses an Ubuntu 20.04 system):
# apt-get install fio
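fio is packaged for most major distributions; on non-Debian systems the equivalent commands would typically be one of the following, and the install can be verified by printing the version:
# dnf install fio
# zypper install fio
# fio --version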
Step 2 Check the PCIe Lane assignment
WebGUI:
- Start the WebGUI management software and click the Physical tab; the PCIe lane assignment for Enclosure 1 will be displayed.
- SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally;
- SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally;
- SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.
- If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.
CLI:
1. Open a command terminal and enter the following command to start the CLI:
# hptraidconf
2. Enter the following command to check the PCIe Lane assignment:
HPT CLI> query enclosures
- SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally;
- SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally;
- SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.
3. If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.
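The negotiated PCIe link can also be cross-checked at the OS level with the standard lspci utility. A minimal sketch (the PCI address 21:00.0 is hypothetical; locate the controller's actual address in the output of a plain lspci first):
# lspci -s 21:00.0 -vv | grep -i lnksta
LnkSta: Speed 16GT/s, Width x16
A PCIe 4.0 x16 link reports Speed 16GT/s, Width x16; a PCIe 3.0 link reports Speed 8GT/s.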
Step 3 Configure the RAID Array (e.g. RAID 0)
1. Create a RAID array using the WebGUI or CLI:
WebGUI:
- To configure the NVMe RAID array, access the WebGUI management software, and click the Logical tab.
- Click Create Array and configure the NVMe SSDs as a RAID 0 array.
CLI:
- Open a command terminal and enter the following command to start the CLI:
# hptraidconf
- Enter the following command to create the RAID array:
HPT CLI> create RAID0 disks=* capacity=* init=quickinit bs=512K
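Before formatting, the CLI's query command can be used to verify that the new array was created (the exact output layout varies by CLI version):
HPT CLI> query arrays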
2. Format the RAID array; use the following command:
# mkfs.ext4 /dev/hptblock0n* -E lazy_itable_init=0,lazy_journal_init=0
3. Mount the disk:
# mount /dev/hptblock0n* /mnt
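To confirm the filesystem is mounted and has the expected capacity, standard utilities can be used:
# df -h /mnt
# lsblk /dev/hptblock0n*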
Step 4 Start the Performance Test (e.g. RAID 0)
- From a command terminal, run the performance test scripts that correspond to the number of physical CPUs (sockets) in the system, as checked below.
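The socket count can be checked with lscpu; sample output from a hypothetical dual-socket system is shown below. A single-socket system uses the scripts that follow directly, while multi-socket systems should follow the Multi-CPU procedure:
# lscpu | grep -i socket
Core(s) per socket:  12
Socket(s):           2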
Single CPU performance test
2M sequential read performance test script:
# fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-read
2M sequential write performance test script:
# fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-write
4K random read performance test script:
# fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-read
4K random write performance test script:
# fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-write
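For repeated runs, the same tests can be kept in a fio job file so the shared parameters are written once. The following is a minimal sketch (the file name perf-test.fio and the section names are placeholders, not from this guide); a single section is run with fio's --section option, and the write variants follow the same pattern with rw=write and rw=randwrite:
[global]
filename=/mnt/test1.bin
direct=1
ioengine=libaio
size=10G
runtime=60
time_based
group_reporting

[seq-read]
rw=read
bs=2m
iodepth=64
numjobs=1

[rand-read]
rw=randread
bs=4k
iodepth=64
numjobs=8

# fio --section=seq-read perf-test.fio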
Multi-CPU performance test
- First, confirm which CPU corresponds to the slot the card is installed in, then pin the performance test to that CPU.
- Use the following command to view the NUMA node corresponding to each CPU, and confirm the cpus values that belong to each CPU:
# numactl -H
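On a hypothetical dual-socket system with 24 cores/48 threads, the relevant lines of the output would look like this (exact values depend on the hardware):
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47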
The node corresponding to CPU1 is 0, and the node corresponding to CPU2 is 1;
The cpus corresponding to CPU1 are: 0-11,24-35
The cpus corresponding to CPU2 are: 12-23,36-47
- Confirm which PCIe slot of the motherboard the HighPoint NVMe RAID controller is installed in. If that slot corresponds to CPU1, specify CPU1's cpus when running the performance test. The scripts below pin fio to those cpus with taskset; the number of cpus pinned matches the number of fio workers (numjobs).
2M sequential read performance script:
# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-read
2M sequential write performance script:
# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based=1 --group_reporting --name=test-seq-write
4K random read performance script:
# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-read
4K random write performance script:
# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-write
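As an alternative to taskset, numactl can bind both the fio processes and their memory allocations to the NUMA node local to the controller, which may produce more consistent results on multi-socket systems. A sketch assuming the controller sits on node 0:
# numactl --cpunodebind=0 --membind=0 fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based=1 --group_reporting --name=test-rand-read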