How can I generate a cal.bin on the Jetson Orin?

Hardware Platform: Jetson Orin
DeepStream Version: 6.3
JetPack Version: 5.1.2
TensorRT Version: 8.5.2
Issue Type: question

To generate a calibration cache file, I run `tao-deploy yolo_v4 gen_trt_engine` on my server, passing the .etlt file and the calibration images folder (`--cal_image_dir`). The generated cal.bin is then used to export the engine in INT8 mode on the Jetson. This worked without problems with DeepStream 6.0 on a Jetson Xavier. However, when I use that same calibration file on the Jetson Orin, the inferences are not correct. I suspect the problem is that the cal.bin was generated with a different TensorRT version.
I want to generate the cal.bin on the Jetson Orin itself, but I can't find an option for this in tao-converter or any other tool. Can you please help me with the command to generate the calibration cache file on the Jetson?
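For reference, the server-side export I use looks roughly like this. Paths, the encryption key, and the batch settings are placeholders for my actual values, and the exact flag names may vary by TAO Deploy version:

```shell
# Sketch of the server-side INT8 export that produces cal.bin.
# All paths and $MODEL_KEY are placeholders for my real setup.
tao-deploy yolo_v4 gen_trt_engine \
  -m yolov4_model.etlt \
  -k "$MODEL_KEY" \
  -e experiment_spec.txt \
  --data_type int8 \
  --cal_image_dir /workspace/calibration_images \
  --cal_cache_file cal.bin \
  --engine_file yolov4_int8.engine
```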

Moving to TAO forum.

Could you set up the Jetson Orin with the same environment (TensorRT version, etc.) as the Jetson Xavier and retry?

I am asking for the command to generate the calibration file on Jetson. Could you please provide it?

We still suggest using your server to generate the calibration cache file.
For example, you can pull the TAO Deploy 5.0 docker (its TensorRT version is 8.5.3):
docker run -it nvcr.io/nvidia/tao/tao-toolkit:5.0.0-deploy /bin/bash
Other versions of the deploy docker can be found in TAO Toolkit | NVIDIA NGC.
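A fuller sketch of the container workflow, assuming docker with GPU support and an NGC login; the host mount path is a placeholder so the generated cal.bin persists outside the container:

```shell
# Pull and enter the TAO Deploy 5.0 container (bundles TensorRT 8.5.3).
# /path/on/host is a placeholder; mount it so cal.bin survives the container.
docker pull nvcr.io/nvidia/tao/tao-toolkit:5.0.0-deploy
docker run -it --rm --gpus all \
  -v /path/on/host:/workspace \
  nvcr.io/nvidia/tao/tao-toolkit:5.0.0-deploy /bin/bash
```

Inside the container, the same `tao-deploy yolo_v4 gen_trt_engine` command can be run against the mounted calibration images.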

If you still want to generate the calibration cache file on the Jetson, you can follow tao_deploy/README.md at main · NVIDIA/tao_deploy · GitHub to install tao-deploy on the Jetson.
For example, JetPack 5.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel should work.
You can also download other versions of the l4t-tensorrt docker.
But please do not flash JetPack 6.0 to the Jetson, due to this thread.
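The on-Jetson route can be sketched as below. This assumes the NVIDIA container runtime is configured on the Jetson; the mount path is a placeholder, and `nvidia-tao-deploy` is the pip package name the tao_deploy README points to (check the README for the version matching your TensorRT):

```shell
# On the Jetson (JetPack 5.x), enter the matching l4t-tensorrt container.
# /path/on/host is a placeholder for wherever your model and images live.
docker run -it --rm --runtime nvidia \
  -v /path/on/host:/workspace \
  nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel /bin/bash

# Inside the container, install tao-deploy per the tao_deploy README,
# then run gen_trt_engine there to produce cal.bin natively on the Orin.
pip install nvidia-tao-deploy
```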

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.