SSD7000 Series Performance Test Guide (Linux)

Written by Support Team

Updated at December 9th, 2020


Linux Platforms

Step 1 Download the Performance Test tool

We recommend using the fio utility to test the NVMe RAID array’s performance in a Linux environment.

  1. Download and install fio (the following example uses an Ubuntu 20.04 system):

#apt-get install fio
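
fio is also available from the default repositories of most other Linux distributions; the package name is typically fio (for example, on RHEL/CentOS):

#yum install fio

You can verify the installation with:

#fio --version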

 

Step 2 Check the PCIe Lane assignment

WebGUI:

  1. Start the WebGUI management software, click the Physical tab, and select Enclosure 1 to view the PCIe Lane assignment.

  • SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally.
  • SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally.
  • SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.

  2. If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.

CLI:

1. Open a command terminal and enter the following command to start the CLI:

   #hptraidconf

2. Enter the following command to check the PCIe Lane assignment:

   HPT CLI>query enclosures


 

 

  • SSD7100 Series RAID Controllers require a dedicated PCIe 3.0 x16 slot in order to perform optimally.
  • SSD7200 Series RAID Controllers require a dedicated PCIe 3.0 x8 slot in order to perform optimally.
  • SSD7500 Series RAID Controllers require a dedicated PCIe 4.0 x16 slot in order to perform optimally.


 

 

3. If you are configuring a Cross-Sync RAID array, repeat this procedure for Enclosure 2 to check the PCIe Lane assignment.
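
The negotiated PCIe link speed and width can also be cross-checked directly from the operating system with the standard lspci utility. First identify the controller's bus address in the plain lspci listing; the address 17:00.0 below is only a placeholder:

#lspci -s 17:00.0 -vv | grep -E 'LnkCap|LnkSta'

LnkSta should report the link width (x8 or x16) and speed (8GT/s for PCIe 3.0, 16GT/s for PCIe 4.0) expected for your controller.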

 

Step 3 Configure the RAID Array (e.g. RAID 0)

1. Create a RAID array using the WebGUI or CLI:

WebGUI:

  1. To configure the NVMe RAID array, access the WebGUI management software and click the Logical tab.
  2. Click Create Array and configure the NVMe SSDs as a RAID 0 array.


 

 

CLI:

  1. Open a command terminal and enter the following command to start the CLI:

         #hptraidconf

  2. Enter the following command to create the RAID array:

         HPT CLI> create RAID0 disks=* capacity=* init=quickinit bs=512K

 

2. Format the RAID array; use the following command:

#mkfs.ext4 /dev/hptblock0n* -E lazy_itable_init=0,lazy_journal_init=0

3. Mount the disk:

#mount /dev/hptblock0n* /mnt
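
Optionally, confirm that the array is mounted before starting the test; df is a standard Linux tool, and /mnt is the mount point used above:

#df -h /mnt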


 

 

Step 4 Start the Performance Test (e.g. RAID 0)

  1. Use a command terminal to select the performance test script that corresponds with the number of physical CPUs (sockets) installed in the motherboard.

Single CPU performance test

2M continuous read performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting --name=test-seq-read

 

2M continuous write performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting --name=test-seq-write

 

4K random read performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based --group_reporting --name=test-rand-read

     

 

4K random write performance test script:

# fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based --group_reporting --name=test-rand-write

Multi-CPU performance test

  1. First, confirm which CPU corresponds with the slot the card is installed into, and then specify this CPU for the performance test.
  2. Use the following command to view the node corresponding to each CPU, and confirm the cpus values that correspond with each CPU:

#numactl -H

In this example, the node corresponding to CPU1 is 0, and the node corresponding to CPU2 is 1:

The cpus values corresponding to CPU1 are 0-11,24-35

The cpus values corresponding to CPU2 are 12-23,36-47

  3. Confirm which motherboard PCIe slot the HighPoint NVMe RAID Controller is installed in. If that slot corresponds to CPU1, specify a cpu value belonging to CPU1 during the performance test; the scripts below use several workers to correspond with the number of cpus specified (see the sketch after this step for one way to confirm the node).
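
If you are unsure which node a given slot belongs to, the kernel exposes it through the PCIe device's numa_node attribute in sysfs; the address 0000:17:00.0 below is only a placeholder for your controller's bus address as reported by lspci:

#cat /sys/bus/pci/devices/0000:17:00.0/numa_node

In the example above, a result of 0 would indicate the slot is attached to CPU1 (node 0), so cpus values from 0-11,24-35 should be passed to taskset.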

 

2M continuous read performance script:

# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=read --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting --name=test-seq-read

 

2M continuous write performance script:

# taskset -c 0 fio --filename=/mnt/test1.bin --direct=1 --rw=write --ioengine=libaio --bs=2m --iodepth=64 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting --name=test-seq-write


 

 

4K random read performance script:

# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based --group_reporting --name=test-rand-read

 

4K random write performance script:

# taskset -c 0-7 fio --filename=/mnt/test1.bin --direct=1 --rw=randwrite --ioengine=libaio --bs=4k --iodepth=64 --size=10G --numjobs=8 --runtime=60 --time_based --group_reporting --name=test-rand-write
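
When testing is complete, the test file and mount point can be cleaned up; these commands assume the array was mounted at /mnt as described in Step 3:

#rm /mnt/test1.bin

#umount /mnt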

 

