How can I use all six CPU cores to improve the write speed of NVMe?

Hi,
I tested NVMe SSD write speed on the TX2 4GB module and found that the write speed is highest when the CPU operating mode is MAXP CORE ARM; when I switch the mode to MAXN, the speed drops and is much lower than on the TX2 8GB. How can I use all six CPU cores to improve the NVMe write speed?
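
(For reference: the power modes can also be queried and switched from the command line with nvpmodel, which ships with L4T. This is a minimal sketch; the exact mode numbers map to the entries in /etc/nvpmodel.conf on your release:)

sudo nvpmodel -q                      # show the current power mode
sudo nvpmodel -m 0                    # switch to MAXN (all 4 A57 + 2 Denver cores online)
sudo jetson_clocks                    # optionally lock clocks at the maximum for this mode
cat /sys/devices/system/cpu/online    # confirm which CPUs are actually online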

Could you share which NVMe SSD you are using?
And also the test steps.

I used a Samsung 970 EVO SSD for testing.

My system version is L4T 32.2.1. In the upper-right corner of the Ubuntu desktop you can select the power mode directly. The test steps are:
Select each power mode (0-4) in turn, then use the following command to test the SSD write speed:

time dd if=/dev/zero of=/mypath bs=1M count=1024

As a result, the write speed is not the highest when power mode 0 (MAXN) is selected.
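
(A note on methodology: with this command dd writes through the page cache, so the result partly measures RAM and flush behavior rather than the SSD itself. Two variants worth trying, as a sketch, with /mypath standing in for your actual test file:)

time dd if=/dev/zero of=/mypath bs=1M count=1024 oflag=direct   # bypass the page cache (O_DIRECT)
time dd if=/dev/zero of=/mypath bs=1M count=1024 conv=fsync     # flush to disk before dd reports the rate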

Here are the results from a SATA SSD doing the same test:

0:MAXN
drew@drew-tx2:~/testpath$ time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.55095 s, 164 MB/s

real	0m6.561s
user	0m0.000s
sys	0m4.572s

1:MAXQ
drew@drew-tx2:~/testpath$ time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.1677 s, 174 MB/s

real	0m6.648s
user	0m0.000s
sys	0m4.284s

2:MAXP CORE ALL
drew@drew-tx2:~/testpath$ time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.88789 s, 121 MB/s

real	0m9.305s
user	0m0.020s
sys	0m7.764s

3:MAXP CORE ARM
drew@drew-tx2:~/testpath$ time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.28549 s, 203 MB/s

real	0m5.638s
user	0m0.008s
sys	0m2.888s

4:MAXP CORE DENVER
drew@drew-tx2:~/testpath$ time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.32985 s, 115 MB/s

real	0m9.920s
user	0m0.012s
sys	0m6.312s
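
Since MAXP CORE ARM is the fastest and MAXN (both clusters enabled) is slower, one hypothesis is that in MAXN the dd work sometimes lands on a Denver core. You could test that without leaving MAXN by pinning the benchmark to one cluster with taskset. A sketch; on the TX2 the Denver cores are usually CPU1 and CPU2, but verify the numbering with lscpu first:

taskset -c 0,3-5 dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024   # A57 cores only
taskset -c 1,2 dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024     # Denver cores only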

Perhaps try with higher priority:

sudo nice --adjustment=-10 time dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024

A “nice” of “-10” should provide a lot of priority without starving more important processes. If the speed tends to improve, that would imply competition from other processes; if the improvement is only tiny, the bottleneck is more likely in the chain of dd and the storage device itself.
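
In the same spirit, I/O priority can be adjusted with ionice. A sketch; note this only has an effect with I/O schedulers that support priorities (e.g. CFQ/BFQ), so with an NVMe queue using none or mq-deadline it may be a no-op:

sudo ionice -c2 -n0 dd if=/dev/zero of=/home/drew/testpath/testpath bs=1M count=1024   # best-effort class, highest priority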