Accuracy drop when converting .tlt model to .engine model


Hello, I used TAO to train a multitask classifier, then exported it to .etlt (FP32) with tao multitask_classifier export, and converted it to .engine (FP32) with tao converter. Both the export and the conversion were done in a docker image (v3.21.11-tf1.15.5-py3, c607b0237bc5).

I noticed a drop in accuracy when I run inference with the .engine model, and I can't find the reason. It may be that the preprocessing TAO applies differs from the preprocessing in my TensorRT script, or there may be an issue during the export / conversion.
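One quick way to test the preprocessing hypothesis is to reproduce TAO-style preprocessing explicitly before feeding the engine. A minimal sketch follows; the channel order, mean values, and input size below are common TAO classification defaults and are assumptions, not values read from your training spec, so check them against your config before relying on them:

```python
import numpy as np

def preprocess(image_rgb_u8, input_hw=(224, 224)):
    """Sketch of TAO-style classification preprocessing.

    Assumptions (verify against your training spec file):
      - model expects BGR channel order
      - per-channel mean subtraction (caffe-style), no extra scaling
      - NCHW float32 input layout
    """
    h, w = input_hw
    # Naive nearest-neighbour resize to keep the sketch dependency-free;
    # in practice match the resize/interpolation TAO used during training.
    src_h, src_w = image_rgb_u8.shape[:2]
    ys = np.arange(h) * src_h // h
    xs = np.arange(w) * src_w // w
    resized = image_rgb_u8[ys][:, xs]

    bgr = resized[..., ::-1].astype(np.float32)              # RGB -> BGR
    mean = np.array([103.939, 116.779, 123.68], np.float32)  # assumed means
    bgr -= mean
    chw = np.transpose(bgr, (2, 0, 1))                       # HWC -> CHW
    return chw[np.newaxis]                                   # add batch dim
```

If results stay random even with matched preprocessing, the problem is more likely in the export/conversion step itself.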

When I run evaluation on the .tlt model, it reaches 90% accuracy, but with the .engine model the results look random.
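Since both models are FP32, a useful sanity check is to compare the raw output tensors of the .tlt model and the .engine on one identical preprocessed input, instead of comparing end-to-end accuracy. If the raw outputs already diverge on a single input, the problem is in the export/conversion path rather than in the dataset loop. A small comparison helper (hypothetical names, numpy-only):

```python
import numpy as np

def compare_outputs(ref, test, atol=1e-3):
    """Compare raw model outputs (e.g. .tlt logits vs .engine logits)
    produced from the same preprocessed input tensor."""
    ref = np.asarray(ref, np.float32).ravel()
    test = np.asarray(test, np.float32).ravel()
    max_abs = float(np.max(np.abs(ref - test)))
    cosine = float(ref @ test /
                   (np.linalg.norm(ref) * np.linalg.norm(test) + 1e-12))
    return {"max_abs_diff": max_abs,
            "cosine": cosine,
            "match": max_abs <= atol}
```

For matching FP32 pipelines the max absolute difference should be tiny (roughly 1e-4 or less); a cosine similarity far from 1.0 points at a broken conversion or mismatched input layout.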


TensorRT Version: 8.0.1
Nvidia Driver Version: (not provided; the TAO docker image tag v3.21.11-tf1.15.5-py3 c607b0237bc5 was listed here)
CUDA Version: 11.3
Operating System + Version: Ubuntu 20
Python Version (if applicable): 3.9.7
Baremetal or Container (if container which image + tag): 21.08-py3 cc8404aefdca

Relevant Files

You'll find the Python script and models below:

Thank you in advance,


This looks more related to the TAO toolkit. We are moving this post to the TAO forum to get better help.

Thank you.

Refer to

tao-toolkit-triton-apps/ at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.