Deploy a QAT model to Jetson/NX

I need some help converting a QAT-trained model to TRT.

The process of converting a “normal” model is quite well described and works well:
train → prune → retrain → convert to Int8 → copy files to the destination platform → feed model.etlt and calibration.bin to tlt-converter:

tlt-converter /home/……./model.etlt -k nvidia_tlt -c /home/………/calibration.bin -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,768,1024 -m 64 -i nchw -t int8 -e /home/……/resnet18_detector.trt -b 4

and it results in a working model.trt.

Once I enable QAT during the process, the “convert to Int8” step outputs a model.trt.int8 file.

I have no idea how to use tlt-converter to convert it to a .trt model.

Anyone?

Kind regards, Gerard

The model.trt.int8 file is already a .trt engine.
If you want to use tlt-converter to generate a TRT engine on the destination platform, copy the .etlt file along with calibration_qat.bin (generated by the “convert to Int8” step) and run a command similar to the one you mentioned above.
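In other words, the QAT command should be essentially the same as the non-QAT one, just pointing -c at the QAT calibration cache. A sketch (the paths here are placeholders, not your actual paths):

```shell
# Hypothetical example: identical to the non-QAT invocation, except -c now
# points at the calibration_qat.bin produced by the QAT export step.
# All paths are placeholders; substitute your own.
tlt-converter /path/to/model.etlt \
  -k nvidia_tlt \
  -c /path/to/calibration_qat.bin \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,768,1024 \
  -m 64 \
  -i nchw \
  -t int8 \
  -e /path/to/resnet18_detector.trt \
  -b 4
```

Note that tlt-converter must be run on the target platform (Jetson/NX), since the generated engine is specific to that device.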

Thanks Morganh!