QAT with TRT8 and deployment in an environment with a lower TRT version

Hi there,

My goal is to use QAT to improve my YOLOv5 TRT model and deploy it in my DeepStream environment. I haven't figured everything out yet, but from what I have gathered, it seems I can do QAT in PyTorch with the newest version of TRT8 on a server, retrieve the calib.bin file, and then generate a TRT7 engine on my Jetson machine.

So my main question is: can a TRT8 calib.bin file be used in a TRT7 application?
Also, please let me know if there are any other potential issues I haven't noticed.

My Jetson setup is the following:
Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
NVIDIA GPU Driver Version 10.2

Thanks!

Hi,

You can find the answer in our TensorRT document:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#faq

Q: Are engines and calibration tables portable across TensorRT versions?

A: No. Internal implementations and formats are continually optimized and can change between versions. For this reason, engines and calibration tables are not guaranteed to be binary compatible with different versions of TensorRT. Applications should build new engines and INT8 calibration tables when using a new version of TensorRT.
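In practice, this means both the INT8 calibration table and the engine have to be produced by the same TensorRT version that will run them, i.e., directly on the Jetson with its TRT 7.1.3. A minimal sketch of such an on-device build with trtexec (the trtexec path is the JetPack default; the ONNX model, cache, and engine filenames are hypothetical placeholders, and the `--calib` cache must itself have been generated with the same local TRT version):

```shell
# Build the INT8 engine on the target device itself, so the engine and the
# calibration cache it consumes both come from the locally installed TRT 7.1.3.
# --calib loads a previously generated calibration cache; it does not create one.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov5.onnx \
    --int8 \
    --calib=calib_trt7.bin \
    --saveEngine=yolov5_int8.engine
```

The resulting yolov5_int8.engine can then be pointed to from the DeepStream nvinfer config on the same machine.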

Thanks.