I would like to run benchmarks on an Orin Nano 4GB devkit using R36.4.3 (JetPack 6.2).
I flashed the official BSP image with jetson-orin-nano-devkit-super.conf and followed the instructions from GitHub - NVIDIA-AI-IOT/jetson_benchmarks: Jetson Benchmark
All the FPS values showed 0 when I ran the benchmark in MAXN_SUPER power mode.
p@p-desktop:~/jetson_benchmarks$ nvpmodel -q
NV Power Mode: MAXN_SUPER
2
p@p-desktop:~/jetson_benchmarks$ sudo python3 benchmark.py --all --csv_file_path /home/p/jetson_benchmarks/benchmark_csv/orin-nano-benchmarks.csv --model_dir /home/p/jetson_benchmarks/models/ --jetson_clocks
Please close all other applications and Press Enter to continue...
Setting Jetson orin in max performance mode
Jetson clocks are Set
Running all benchmarks.. This will take at least 2 hours...
------------Executing inception_v4------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: inception_v4
FPS:0.00
--------------------------
------------Executing vgg19_N2------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: vgg19_N2
FPS:0.00
--------------------------
------------Executing super_resolution_bsd500------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: super_resolution_bsd500
FPS:0.00
--------------------------
------------Executing unet-segmentation------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: unet-segmentation
FPS:0.00
--------------------------
------------Executing pose_estimation------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: pose_estimation
FPS:0.00
--------------------------
------------Executing yolov3-tiny-416------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: yolov3-tiny-416
FPS:0.00
--------------------------
------------Executing ResNet50_224x224------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: ResNet50_224x224
FPS:0.00
--------------------------
------------Executing mobilenet_v1_ssd------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: mobilenet_v1_ssd
FPS:0.00
--------------------------
------------Executing ssd_resnet34_1200x1200------------
---------------------- 0 0 0
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
We recommend to run benchmarking in headless mode
--------------------------
Model Name: ssd_resnet34_1200x1200
FPS:0.00
--------------------------
Model Name FPS
0 inception_v4 0
1 vgg19_N2 0
2 super_resolution_bsd500 0
3 unet-segmentation 0
4 pose_estimation 0
5 yolov3-tiny-416 0
6 ResNet50_224x224 0
7 mobilenet_v1_ssd 0
8 ssd_resnet34_1200x1200 0
sh: 1: cannot create /sys/devices/platform/pwm-fan: Is a directory
p@p-desktop:~/jetson_benchmarks$
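Since every run points at logs under the model directory, here is the small helper I used to dump the tail of each build log (a sketch: the `*.log` glob and the `tail_build_logs` name are my own assumptions, not part of the benchmark tool; adjust the pattern to whatever files benchmark.py actually writes there):

```python
from pathlib import Path

def tail_build_logs(model_dir, n=20):
    """Print the last n lines of every *.log file under model_dir.

    model_dir should match the --model_dir passed to benchmark.py.
    The *.log glob is an assumption; adjust it to the files the
    tool actually wrote there.
    """
    for log in sorted(Path(model_dir).glob("*.log")):
        lines = log.read_text(errors="replace").splitlines()
        print(f"--- {log.name} (last {min(n, len(lines))} lines) ---")
        for line in lines[-n:]:
            print(line)

# On the devkit:
# tail_build_logs("/home/p/jetson_benchmarks/models/")
```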
Why are all the FPS values 0? Is there anything I missed?
Error in Build, Please check the log in: /home/p/jetson_benchmarks/models/
JetPack 6.2 upgrades TensorRT to 10.3, and the models used in jetson_benchmarks are no longer supported.
To test performance with JetPack 6.2, we recommend checking our Jetson AI Lab instead:
TVMError: Check failed: (output_res.IsOk()) is false: Insufficient GPU memory error: The available single GPU memory is 3061.683 MB, which is less than the sum of model weight size (3894.758 MB) and temporary buffer size (2260.074 MB).
1. You can set a larger "gpu_memory_utilization" value.
2. If the model weight size is too large, please enable tensor parallelism by passing `--tensor-parallel-shards $NGPU` to `mlc_llm gen_config` or use quantization.
3. If the temporary buffer size is too large, please use a smaller `--prefill-chunk-size` in `mlc_llm gen_config`.
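If I read the numbers in the error right, the model simply does not fit in the ~3 GB of GPU-visible memory on the 4 GB module. A quick check of the figures copied from the log above:

```python
# Figures reported by the TVMError above (MB)
available_mb = 3061.683   # GPU memory MLC sees on the 4 GB Orin Nano
weights_mb   = 3894.758   # model weight size
buffers_mb   = 2260.074   # temporary buffer size

required_mb  = weights_mb + buffers_mb
shortfall_mb = required_mb - available_mb
print(f"required {required_mb:.3f} MB vs available {available_mb:.3f} MB "
      f"-> short by {shortfall_mb:.3f} MB")
# The weights alone already exceed the available memory, so a smaller
# --prefill-chunk-size cannot fix this by itself; quantization (option 2
# in the error message) or a smaller model would be needed.
```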
Is it normal to see this message when running the AI benchmark on the Orin Nano devkit, or does it indicate a potential issue with the board?
The power mode is MAXN_SUPER.
It’s expected to see the overcurrent warning when running a model in performance mode.
This mechanism protects the device under heavy load.
You can also turn off the notification and let it run in the background.
Does this AI benchmark support the Orin NX?
Now when I run the benchmark, I keep encountering FileNotFoundError: [Errno 2] No such file or directory: '/data/prompts/completion_16.json'
I tried the AI benchmark on the Orin Nano again and it shows the same error message: FileNotFoundError: [Errno 2] No such file or directory: '/data/prompts/completion_16.json'
Could you explain how to run this benchmark correctly?