Hello,
I am using Jetson AGX Orin with JetPack 5.1.1.
I am measuring processing time with basic_usage.py from NanoSAM, but I am not getting the performance described on GitHub.
Specifically, I cannot reach the stated 8 ms for the ResNet18 image encoder; at best I see around 20 ms.
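For reference, this is roughly how I am timing it (a minimal sketch based on basic_usage.py; the engine paths, test image, and iteration counts are my own choices):

import time
import PIL.Image
from nanosam.utils.predictor import Predictor

# Engine paths follow the layout used in the NanoSAM README (assumption)
predictor = Predictor(
    "data/resnet18_image_encoder.engine",
    "data/mobile_sam_mask_decoder.engine",
)

image = PIL.Image.open("assets/dogs.jpg")

# Warm up so one-time initialization cost is excluded
for _ in range(5):
    predictor.set_image(image)

# Average the image-encoder call over repeated runs
n = 50
t0 = time.perf_counter()
for _ in range(n):
    predictor.set_image(image)
t1 = time.perf_counter()
print(f"set_image: {(t1 - t0) / n * 1000:.1f} ms")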
How was this performance achieved?
I would appreciate it if you could let me know.
Hi,
The performance numbers are gathered with the device running at maximum performance.
Have you maximized the device clocks first?
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
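You can verify the settings with:
$ sudo nvpmodel -q
$ sudo jetson_clocks --show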
Thanks.
Both commands have already been executed.
Hi,
Thanks for the feedback.
We will check this with our internal team and share more info with you.
Thanks.
I am considering running jetson_benchmarks to check whether the Jetson itself is performing adequately. However, an engine file is required to run it. Do I need to generate one from the downloaded ONNX files myself?
Hi,
The model needs to be converted into a TensorRT engine on the target device so that TensorRT can select the optimal tactics for that hardware.
But this can be done with a few steps, as described in our GitHub:
Thanks.
Sorry, I couldn't find any information regarding the generation of the engine file. Should I use the trtexec command? I would appreciate it if you could provide specific instructions.
Hi,
It is done automatically by the Python script (benchmark.py).
Please follow the instructions on the GitHub page:
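That said, if you ever want to build an engine manually, trtexec can also do it; for example (the ONNX filename here is just a placeholder):
$ /usr/src/tensorrt/bin/trtexec --onnx=models/model.onnx --saveEngine=models/model.engine --fp16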
Thanks.
I executed the benchmark, but the following message appears for all models:
Error in Build, Please check the log in: models/
The log file shows an error indicating that the engine file could not be found.
Do you know what the cause might be?
I am attaching the log file generated in the models/ directory.
ResNet50_224x224_b32_ws1024_dla1.txt (4.7 KB)
Hi,
Yes, the error is related to a missing model file.
Have you run python3 utils/download_models.py ... first?
Could you share the command you used so we can check?
Thanks.
Yes, I executed the command python3 utils/download_models.py ...
As a result, the ONNX files and Prototxt files for each model were generated.
The command I used is as follows:
python3 utils/download_models.py --all --csv_file_path benchmark_csv/orin-benchmarks.csv --save_dir models
Next:
sudo python3 benchmark.py --all --csv_file_path benchmark_csv/orin-benchmarks.csv --model_dir models
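For completeness, this is how I confirmed that the model files were present after the download step:
$ ls models/*.onnx models/*.prototxt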
Returning to the earlier question: could you tell me what code was used to measure the NanoSAM performance posted on GitHub?
Was the encoding size, etc., changed from 1024?
Hi,
Please find the link below for the info:
Thanks.
Thank you for sharing the link. I will use it as a reference.
However, what I would like to know is whether the encoding size was changed when the performance figures on GitHub were measured. I would like enough information to be able to reproduce that level of performance.
Hi,
The performance values are measured with the default settings in the repo.
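If you want to separate the raw engine latency from the Python-side overhead, you can also time an engine directly with trtexec (the engine path shown is an example):
$ /usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet18_image_encoder.engine
This reports the GPU compute time on its own, which is typically lower than an end-to-end measurement in Python.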
Thanks.
Thank you for the information. So the performance issue could potentially be related to the configuration of my AGX Orin, correct?
Could you please help with the issue where I am unable to run benchmark.py, so that I can conduct a performance evaluation?
https://forums.developer.nvidia.com/t/i-am-not-getting-the-performance-i-expected-with-nanosam/307338/13?u=satou.d