Unable to run benchmark on Jetson Orin NX 16GB

Description:
I encountered an error while running the benchmark script on NVIDIA Jetson Orin NX using JetPack 6.2.1.

Steps to Reproduce:

  1. Jetson Orin NX is set to MAX performance mode.

  2. Execute the following command:
    sudo python3 benchmark.py --all --csv_file_path benchmark_csv/orin-nx-16gb-benchmark.csv

Observed Error:

Traceback (most recent call last):
  File "/home/jetson_nx/jetson_benchmarks/benchmark.py", line 130, in <module>
    main()
  File "/home/jetson_nx/jetson_benchmarks/benchmark.py", line 28, in main
    system_check.run_set_clocks_withDVFS()
  File "/home/jetson_nx/jetson_benchmarks/utils/utilities.py", line 47, in run_set_clocks_withDVFS
    self.set_clocks_withDVFS(frequency=self.gpu_freq, device='gpu')
  File "/home/jetson_nx/jetson_benchmarks/utils/utilities.py", line 77, in set_clocks_withDVFS
    self.set_frequency(device=device, enable_register=self.freq_register, freq_register=self.freq_register, frequency=frequency)
  File "/home/jetson_nx/jetson_benchmarks/utils/utilities.py", line 88, in set_frequency
    self.write_internal_register(freq_register, frequency)
  File "/home/jetson_nx/jetson_benchmarks/utils/utilities.py", line 108, in write_internal_register
    with open(register, 'w') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/bpmp/debug/clk/nafll_gpc1/rate'

Additional Info:

  • JetPack Version: 6.2.1

  • Device: Jetson Orin NX (16GB)

  • Benchmark Script: jetson_benchmarks

Please advise if this is a known issue or if there is a workaround/fix.

Hello,

Thanks for visiting the NVIDIA Developer Forums.
To ensure better visibility and support, I’ve moved your post to the Jetson category, where it’s more appropriate.

Cheers,
Tom

Hi,

The error indicates the benchmark script fails while trying to enable maximum performance mode.
Could you try adding --jetson_clocks to see if that works?
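In script form, the suggested run would look like the sketch below (paths taken from the original report; this just builds and prints the same command line, with the actual `subprocess.run` call left commented out since it only makes sense on the device):

```python
import subprocess

# The same benchmark invocation as reported above, with the suggested flag appended.
cmd = [
    "sudo", "python3", "benchmark.py",
    "--all",
    "--csv_file_path", "benchmark_csv/orin-nx-16gb-benchmark.csv",
    "--jetson_clocks",  # suggested flag: use jetson_clocks instead of direct DVFS writes
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment when running on the Jetson itself
```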

Thanks.

Thank you for the suggestion. After adding --jetson_clocks, the benchmark script executes, but all models fail to build and report:

Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

The FPS remains 0 for all models. The detailed log shows:

FAILED TensorRT.trtexec [TensorRT v100300]
super_resolution_bsd500-bs1.onnx --explicitBatch --inputIOFormats=int8:chw4+chw32 --int8 ...

Additionally, we see:

sh: 1: cannot create /sys/devices/platform/pwm-fan: Is a directory

It appears the script is trying to access deprecated sysfs paths for clocks and fan control, which are no longer available in JetPack 6.x. Could you confirm:

  1. Does the current jetson_benchmarks repository officially support JetPack 6.2.1 and TensorRT 10.x?

  2. If not, is there an updated branch or recommended workaround (e.g., replacing DVFS register writes with jetson_clocks and updating ONNX models for TensorRT 10.x)?

Any guidance on compatibility or patches would be greatly appreciated.
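For anyone else hitting this, a minimal sketch of the workaround suggested in point 2, assuming the debugfs path from the traceback above (`choose_clock_strategy` and `set_max_performance` are hypothetical helpers, not part of the repository):

```python
import os
import subprocess

# Legacy DVFS register from the traceback; it no longer exists on JetPack 6.x.
DVFS_GPU_RATE = "/sys/kernel/debug/bpmp/debug/clk/nafll_gpc1/rate"

def choose_clock_strategy(path_exists=os.path.exists):
    """Return 'dvfs' when the legacy debugfs register is present, else 'jetson_clocks'."""
    return "dvfs" if path_exists(DVFS_GPU_RATE) else "jetson_clocks"

def set_max_performance():
    if choose_clock_strategy() == "dvfs":
        # Legacy path (JetPack 5.x and earlier): write the target frequency directly.
        ...
    else:
        # JetPack 6.x: let the official utility manage the clocks instead.
        subprocess.run(["sudo", "jetson_clocks"], check=True)
```

The `path_exists` parameter is only there so the decision logic can be exercised without a Jetson present.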

Hi,

Only ONNX models work with TensorRT 10.3.
Could you share the console output so we can get more details?

Thanks.

Hi @AastaLLL

As requested, please find a snippet of the console log below. I have also attached the logs for one model run.

sudo python3 benchmark.py --all --csv_file_path benchmark_csv/orin-nx-16gb-benchmarks.csv --model_dir models/ --jetson_clocks
Please close all other applications and Press Enter to continue…
Setting Jetson orin in max performance mode
Jetson clocks are Set
Running all benchmarks.. This will take at least 2 hours…
------------Executing inception_v4------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: inception_v4
FPS:0.00


------------Executing vgg19_N2------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: vgg19_N2
FPS:0.00


------------Executing super_resolution_bsd500------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: super_resolution_bsd500
FPS:0.00


------------Executing unet-segmentation------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: unet-segmentation
FPS:0.00


------------Executing pose_estimation------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: pose_estimation
FPS:0.00


------------Executing yolov3-tiny-416------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: yolov3-tiny-416

inception_v4_b32_ws1024_dla1.txt (26.3 KB)

inception_v4_b32_ws1024_dla2.txt (26.3 KB)

inception_v4_b32_ws2048_gpu.txt (26.2 KB)

FPS:0.00


------------Executing ResNet50_224x224------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: ResNet50_224x224
FPS:0.00


------------Executing mobilenet_v1_ssd------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: mobilenet_v1_ssd
FPS:0.00


------------Executing ssd_resnet34_1200x1200------------

---------------------- 0 0 0

Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
Error in Build, Please check the log in: models/
We recommend to run benchmarking in headless mode

Model Name: ssd_resnet34_1200x1200
FPS:0.00


                Model Name  FPS
0             inception_v4    0
1                 vgg19_N2    0
2  super_resolution_bsd500    0
3        unet-segmentation    0
4          pose_estimation    0
5          yolov3-tiny-416    0
6         ResNet50_224x224    0
7         mobilenet_v1_ssd    0
8   ssd_resnet34_1200x1200    0
sh: 1: cannot create /sys/devices/platform/pwm-fan: Is a directory

Hi,

The trtexec binary has changed some of its arguments, so the script won’t work by default.
You will need to resolve these compatibility issues if you want to run it on JetPack 6.
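The removed flags can also be translated mechanically; a rough sketch of that translation (`rewrite_trtexec_args` is a hypothetical helper, and the `--memPoolSize` replacement for `--workspace` is our reading of the TensorRT 10 trtexec help, not something the script does today):

```python
def rewrite_trtexec_args(args):
    """Translate legacy trtexec flags (TensorRT 8.x era) for TensorRT 10.x.

    - --explicitBatch is dropped: explicit batch is the only mode in TRT 10.
    - --workspace=N is rewritten using the --memPoolSize syntax (N in MiB).
    """
    out = []
    for arg in args:
        if arg == "--explicitBatch":
            continue  # implicit-batch support was removed, so the flag is gone
        if arg.startswith("--workspace="):
            mib = arg.split("=", 1)[1]
            out.append(f"--memPoolSize=workspace:{mib}M")
            continue
        out.append(arg)
    return out
```

This is the same change the diff below makes by hand (commenting the flags out) rather than rewriting them.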

For example, we can get the yolov3-tiny-416 result with the steps below:

Change:

diff --git a/benchmark.py b/benchmark.py
index ebde123..97a54d1 100755
--- a/benchmark.py
+++ b/benchmark.py
@@ -125,6 +125,6 @@ def main():
                 model.remove()
 
     system_check.clear_ram_space()
-    system_check.set_jetson_fan(0)
+    #system_check.set_jetson_fan(0)
 if __name__ == "__main__":
     main()
diff --git a/utils/load_store_engine.py b/utils/load_store_engine.py
index de36356..8ffa2f6 100644
--- a/utils/load_store_engine.py
+++ b/utils/load_store_engine.py
@@ -30,19 +30,19 @@ class load_store_engine():
                 self.device = 'dla'
                 model_base_path = self._model2deploy()
                 dla_cmd = str('--useDLACore=' + str(device_id - 1))
-                workspace_cmd = str('--workspace=' + str(self.ws_dla))
+                #workspace_cmd = str('--workspace=' + str(self.ws_dla))
+                _model = str(os.path.splitext(self.model_name)[0]) + '_b' + str(self.batch_size_dla)+'_ws'+str(self.ws_dla) + '_' + str(self.device) + str(device_id)
                 _model = str(os.path.splitext(self.model_name)[0]) + '_b' + str(self.batch_size_dla)+'_ws'+str(self.ws_dla) + '_' + str(self.device) + str(device_id)
                 engine_CMD = str(
-                    './trtexec' + " " + model_base_path + " " + in_io_format + " " +'--allowGPUFallback'+ " " + precision_cmd + " " + " " + dla_cmd + " " +
-                    workspace_cmd)
+                    './trtexec' + " " + model_base_path + " " + in_io_format + " " +'--allowGPUFallback'+ " " + precision_cmd + " " + " " + dla_cmd)
             else:
                 self.device = 'gpu'
                 model_base_path = self._model2deploy()
-                workspace_cmd = str('--workspace=' + str(self.ws_gpu))
+                #workspace_cmd = str('--workspace=' + str(self.ws_gpu))
                 _model = str(os.path.splitext(self.model_name)[0]) + '_b' + str(self.batch_size_gpu) + '_ws' + str(
                     self.ws_gpu) + '_' + str(self.device)
                 engine_CMD = str(
-                    './trtexec' + " " + model_base_path + " " + in_io_format + " " + precision_cmd + " " +workspace_cmd)
+                    './trtexec' + " " + model_base_path + " " + in_io_format + " " + precision_cmd)
             cmd.append(engine_CMD)
             model.append(_model)
             
@@ -83,13 +83,14 @@ class load_store_engine():
                 batch_cmd = str('--batch=' + str(self.batch_size_dla))
             return str(_model_output + " " + _out_io_format + " " + _model_base+ " " + batch_cmd)
         if self.framework == str('.onnx'):
-            batch_cmd = str('--explicitBatch')
+            #batch_cmd = str('--explicitBatch')
             model_name_split = os.path.splitext(self.model_name)[0]
             if self.device == 'gpu':
                 model_onnx = str(model_name_split+'-bs'+str(self.batch_size_gpu)+self.framework)
             if self.device == 'dla':
                 model_onnx = str(model_name_split+'-bs'+str(self.batch_size_dla)+self.framework)
-            return str('--onnx=' + str(os.path.join(self.model_path, model_onnx))+ " " + batch_cmd)
+            return str('--onnx=' + str(os.path.join(self.model_path, model_onnx)))
+            #return str('--onnx=' + str(os.path.join(self.model_path, model_onnx))+ " " + batch_cmd)
         if self.framework == str('.uff'):
             _model_input = str('--uffInput='+str(self.model_input))
             _model_output = str('--output='+str(self.model_output))

Step:

$ git clone https://github.com/NVIDIA-AI-IOT/jetson_benchmarks.git
$ cd jetson_benchmarks/
$ mkdir models
$ sudo sh install_requirements.sh
$ sudo python3 utils/download_models.py --model_name tiny-yolov3 --csv_file_path benchmark_csv/orin-nx-16gb-benchmarks.csv --save_dir [/path/to/jetson_benchmarks/models/]
$ sudo nvpmodel -m 0
$ sudo jetson_clocks 
$ sudo python3 benchmark.py --model_name tiny-yolov3 --csv_file_path benchmark_csv/orin-nx-16gb-benchmarks.csv --model_dir [/path/to/jetson_benchmarks/models/] --jetson_clocks
...
Please close all other applications and Press Enter to continue...
Setting Jetson orin in max performance mode
Jetson clocks are Set
------------Executing yolov3-tiny-416------------

---------------------- 97.24255333333333 97.8186533333333 99.05181333333334
--------------------------

Model Name: yolov3-tiny-416 
FPS:979.27 

--------------------------

Wall Time for running model (secs): 1076.3251011371613

For JetPack 6.2, we recommend benchmarking with the LLM use case instead.

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.