Orin Nano benchmark on R36.4.3 (JetPack 6.2)

Hi

I would like to run the benchmarks on an Orin Nano 4GB devkit using R36.4.3 (JetPack 6.2).
I flashed the official BSP image with jetson-orin-nano-devkit-super.conf and followed the instructions from the NVIDIA-AI-IOT/jetson_benchmarks repository on GitHub.

All of the FPS values showed 0 when I ran the benchmark in MAXN_SUPER power mode.

p@p-desktop:~/jetson_benchmarks$ nvpmodel -q
NV Power Mode: MAXN_SUPER
2
p@p-desktop:~/jetson_benchmarks$ sudo python3 benchmark.py --all --csv_file_path /home/p/jetson_benchmarks/benchmark_csv/orin-nano-benchmarks.csv --model_dir /home/p/jetson_benchmarks/models/ --jetson_clocks
Please close all other applications and Press Enter to continue...
Setting Jetson orin in max performance mode
Jetson clocks are Set
Running all benchmarks.. This will take at least 2 hours...
------------Executing inception_v4------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: inception_v4
FPS:0.00

--------------------------

------------Executing vgg19_N2------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: vgg19_N2
FPS:0.00

--------------------------

------------Executing super_resolution_bsd500------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: super_resolution_bsd500
FPS:0.00

--------------------------

------------Executing unet-segmentation------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: unet-segmentation
FPS:0.00

--------------------------

------------Executing pose_estimation------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: pose_estimation
FPS:0.00

--------------------------

------------Executing yolov3-tiny-416------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: yolov3-tiny-416
FPS:0.00

--------------------------

------------Executing ResNet50_224x224------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: ResNet50_224x224
FPS:0.00

--------------------------

------------Executing mobilenet_v1_ssd------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: mobilenet_v1_ssd
FPS:0.00

--------------------------

------------Executing ssd_resnet34_1200x1200------------

---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------

Model Name: ssd_resnet34_1200x1200
FPS:0.00

--------------------------

                Model Name  FPS
0             inception_v4    0
1                 vgg19_N2    0
2  super_resolution_bsd500    0
3        unet-segmentation    0
4          pose_estimation    0
5          yolov3-tiny-416    0
6         ResNet50_224x224    0
7         mobilenet_v1_ssd    0
8   ssd_resnet34_1200x1200    0
sh: 1: cannot create /sys/devices/platform/pwm-fan: Is a directory
p@p-desktop:~/jetson_benchmarks$

Why are the FPS all 0? Is there anything I missed?

Thx
Yen

Hi,

Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/

JetPack 6.2 upgrades TensorRT to 10.3, and the models used in jetson_benchmarks are no longer supported.
To test performance with JetPack 6.2, we recommend checking our Jetson AI Lab benchmarks instead:

Thanks.

Hi

What version of pytorch and torchvision does jetpack6.2 support?

Thx
Yen

Hi,

Does this benchmark support the Orin Nano 4GB?

It showed this message and stopped running:

TVMError: Check failed: (output_res.IsOk()) is false: Insufficient GPU memory error: The available single GPU memory is 3061.683 MB, which is less than the sum of model weight size (3894.758 MB) and temporary buffer size (2260.074 MB).
1. You can set a larger "gpu_memory_utilization" value.
2. If the model weight size is too large, please enable tensor parallelism by passing `--tensor-parallel-shards $NGPU` to `mlc_llm gen_config` or use quantization.
3. If the temporary buffer size is too large, please use a smaller `--prefill-chunk-size` in `mlc_llm gen_config`.
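The numbers in the error message above already show why the 4GB module fails: the model weights plus the temporary buffer far exceed the available GPU memory. A quick check, using the values copied from the message:

```python
# Values reported by the TVM/MLC error message above, in MB.
available_mb = 3061.683   # usable single-GPU memory on the Orin Nano 4GB
weights_mb = 3894.758     # model weight size
buffer_mb = 2260.074      # temporary buffer size

required_mb = weights_mb + buffer_mb
print(f"required: {required_mb:.3f} MB, available: {available_mb:.3f} MB")
# The weights alone already exceed the available memory, so tuning
# gpu_memory_utilization or prefill-chunk-size cannot help here; a smaller
# or more aggressively quantized model is needed on the 4GB board.
```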

Thx
Yen

Hi

I ran the Benchmarks from NVIDIA Jetson AI Lab on an Orin Nano 8GB, and it generated a CSV file.

| timestamp | hostname | api | model | precision | input_tokens | output_tokens | prefill_time | prefill_rate | decode_time | decode_rate | memory |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 20250123 22:15:34 | p-desktop | mlc | HF://dusty-nv/Llama-3.2-1B-Instruct-q4f16_ft-MLC | MLC | 18 | 128 | 0.014327838666666669 | 1279.335587153889 | 1.4803616997795275 | 86.46544902858676 | 1043.664063 |
| 20250123 22:19:14 | p-desktop | mlc | HF://dusty-nv/Llama-3.2-3B-Instruct-q4f16_ft-MLC | MLC | 18 | 128 | 0.035887901 | 512.0359004611038 | 3.103482484913386 | 41.24417035343086 | 1179.164063 |
| 20250123 22:24:23 | p-desktop | mlc | HF://dusty-nv/Llama-3.1-8B-Instruct-q4f16_ft-MLC | MLC | 18 | 128 | 0.081089014 | 225.96868431727432 | 7.450760754729659 | 17.179611467985183 | 1300.171875 |
| 20250123 22:29:09 | p-desktop | mlc | HF://dusty-nv/Llama-2-7b-chat-hf-q4f16_ft-MLC | MLC | 20 | 128 | 0.077425003 | 258.37890573666994 | 6.615822372283465 | 19.348280803406197 | 1052.753906 |
| 20250123 22:31:22 | p-desktop | mlc | HF://dusty-nv/Qwen2.5-0.5B-Instruct-q4f16_ft-MLC | MLC | 13 | 128 | 0.012667305666666668 | 939.9833818106796 | 1.0144985008713912 | 126.25101352975531 | 1125.824219 |
| 20250123 22:33:57 | p-desktop | mlc | HF://dusty-nv/Qwen2.5-1.5B-Instruct-q4f16_ft-MLC | MLC | 7 | 128 | 0.018227635 | 344.7333369862025 | 2.042400282540682 | 62.68086095143252 | 1141.128906 |
| 20250123 22:38:09 | p-desktop | mlc | HF://dusty-nv/Qwen2.5-7B-Instruct-q4f16_ft-MLC | MLC | 19 | 128 | 0.076040863 | 249.8755119737155 | 7.1531067852513655 | 17.848014193749258 | 1230.257813 |
| 20250123 22:41:46 | p-desktop | mlc | HF://mlc-ai/gemma-2-2b-it-q4f16_1-MLC | MLC | 13 | 107 | 0.09058522 | 121.84590641516716 | 3.3439286345660126 | 31.906005262055395 | 1344.152344 |
| 20250123 22:44:54 | p-desktop | mlc | HF://dusty-nv/Phi-3.5-mini-instruct-q4f16_ft-MLC | MLC | 17 | 128 | 0.052318219 | 330.42664697018733 | 3.9348300010498685 | 32.533198247801536 | 990.6914063 |
| 20250123 22:47:56 | p-desktop | mlc | HF://dusty-nv/SmolLM2-135M-Instruct-q4f16_ft-MLC | MLC | 20 | 128 | 0.016438428 | 1237.344605801351 | 0.7126507593910761 | 179.61644806780384 | 1106.058594 |
| 20250123 22:51:06 | p-desktop | mlc | HF://dusty-nv/SmolLM2-360M-Instruct-q4f16_ft-MLC | MLC | 20 | 108 | 0.017826257 | 1140.535032516803 | 0.7288978749680666 | 147.86020868770822 | 1139.847656 |
| 20250123 22:53:55 | p-desktop | mlc | HF://dusty-nv/SmolLM2-1.7B-Instruct-q4f16_ft-MLC | MLC | 20 | 128 | 0.022672298000000004 | 896.7286762108669 | 2.151499843863517 | 59.494318459047896 | 1034.472656 |

Could you explain how these values are calculated?

Thx
Yen

Hi,

Sorry for the late update.

Please check the decode rate in the output CSV file.
It represents the LLM throughput in tokens per second.

Thanks.
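For example, using the Llama-3.2-1B row from the CSV above, the decode rate is simply the output token count divided by the decode time. This is a sketch based on the values in that row; the prefill columns are presumably analogous, though the tool may average them over multiple runs:

```python
# Decode rate from the benchmark CSV: output tokens divided by decode time.
# Values copied from the Llama-3.2-1B-Instruct row above.
output_tokens = 128
decode_time = 1.4803616997795275  # seconds

decode_rate = output_tokens / decode_time  # tokens per second
print(f"{decode_rate:.2f} tokens/s")  # ~86.47, matching the decode_rate column
```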

Hi

Is it normal to see this message when running an AI benchmark on the Orin Nano devkit, or does it indicate a potential issue with the board?
The power mode is MAXN_SUPER.

Thx
Yen

Hi

I’d appreciate your feedback on this question.

Thx
Yen

Hi

What version of pytorch and torchvision does jetpack6.2 support?

Thx

Hi,

Sorry for missing your question earlier.
You can find prebuilt PyTorch and TorchVision wheels at the link below:

https://pypi.jetson-ai-lab.dev/jp6/cu126

It’s expected to see the overcurrent warning when running a model in performance mode.
This mechanism protects the device under heavy load.

You can also turn off the notification and let it run in the background.

Thanks.

Hi

Does this AI benchmark support orin nx?
I keep encountering `FileNotFoundError: [Errno 2] No such file or directory: '/data/prompts/completion_16.json'` when I run the benchmark.

How can I fix this problem?

Thx
Yen

Hi,

I tried the AI benchmark on the Orin Nano again, and it shows the same error message: `FileNotFoundError: [Errno 2] No such file or directory: '/data/prompts/completion_16.json'`

Could you explain how to run this benchmark correctly?

Thx
Yen

Hi,

Any updates?

Thx
Yen

Hi,

Do you mean jetson_benchmarks, or which AI benchmark are you referring to?

Thanks.