To generate a calibration cache file, I run "tao-deploy yolo_v4 gen_trt_engine" on my server, pointing it at the .etlt file and the folder of calibration images (--cal_image_dir). The generated cal.bin is then used on the Jetson to build the engine in INT8 mode. This worked without problems with DeepStream 6.0 on a Jetson Xavier. However, when I use the same calibration file on a Jetson Orin, the inferences are not correct. My guess is that the problem is the cal.bin being generated with a different version of TensorRT.
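For reference, the command I run looks roughly like this (the key, spec file, and paths are placeholders, and exact flag names can vary between TAO versions, so check gen_trt_engine --help):

    # INT8 calibration on the server: reads sample images from --cal_image_dir
    # and writes the calibration cache (cal.bin) to --cal_cache_file
    tao-deploy yolo_v4 gen_trt_engine \
        -m /workspace/models/yolov4.etlt \
        -k $ENCODING_KEY \
        -e /workspace/specs/yolo_v4_spec.txt \
        --data_type int8 \
        --batches 10 \
        --batch_size 8 \
        --cal_image_dir /workspace/data/cal_images \
        --cal_cache_file /workspace/export/cal.bin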
I want to generate cal.bin on the Jetson Orin itself, but I can't find an option for this in tao-converter or any other tool. Can you please help me with the command to generate the calibration cache file directly on the Jetson?
I still suggest using your server to generate the calibration cache file. The cache is sensitive to the TensorRT version that produced it, so generate it with a TAO deploy docker whose TensorRT version matches the one on your Orin.
For example, you can pull the TAO deploy 5.0 docker (its TensorRT version is 8.5.3).
5.0 docker, mounting your model and calibration images into the container:

    docker run --gpus all -it --rm \
        -v /your/local/dir:/workspace \
        nvcr.io/nvidia/tao/tao-toolkit:5.0.0-deploy /bin/bash
Other versions of the deploy docker can be found at TAO Toolkit | NVIDIA NGC.
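Once inside the container, rerun the same gen_trt_engine command as above to regenerate cal.bin with the matching TensorRT version, then copy the new cal.bin to the Orin and build the INT8 engine there with tao-converter. This is only a sketch: the encoding key, input tensor name, and min/opt/max shapes given with -p are placeholders that must match your exported model and training spec.

    # On the Jetson Orin: consume the regenerated cal.bin to build an INT8 engine.
    # -p gives the input name and min/opt/max shapes for dynamic batch size.
    ./tao-converter \
        -k $ENCODING_KEY \
        -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
        -t int8 \
        -c cal.bin \
        -e yolov4_int8.engine \
        yolov4.etlt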
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.