Accuracy drop when converting .tlt model to TensorRT .engine

Description

Hello, I used TAO to train a multitask classifier, exported it to .etlt (FP32) with `tao multitask_classification export`, and then converted it to a .engine (FP32) with `tao converter`. Both export and conversion were done inside the Docker image (nvcr.io/nvidia/tao/tao-toolkit-tf v3.21.11-tf1.15.5-py3 c607b0237bc5).
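For reference, this is roughly how I ran the two steps. It is only a sketch based on the TAO 3.x docs: the paths, the `$KEY` variable, the `3,80,60` input dims, and the task-head output names are placeholders taken from the TAO multitask sample, not my actual values.

```bash
# Export the trained .tlt to an encrypted .etlt (FP32)
tao multitask_classification export \
    -m /workspace/model.tlt \
    -k $KEY \
    -e /workspace/specs/train_spec.cfg \
    -o /workspace/model.etlt \
    --data_type fp32

# Convert the .etlt to a TensorRT engine on the deployment machine
tao converter /workspace/model.etlt \
    -k $KEY \
    -d 3,80,60 \
    -o base_color/Softmax,category/Softmax,season/Softmax \
    -t fp32 \
    -e /workspace/model.engine
```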

I noticed a drop in accuracy when running inference with the .engine model, and I can't find the reason. It could be that the preprocessing TAO applies during evaluation differs from the preprocessing in my TensorRT inference script, or that something went wrong during the export/conversion.
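In case it matters, a typical "caffe"-mode preprocessing pipeline for TAO classification-type models looks like the sketch below. The 80x60 input size is a placeholder, and the channel means are the ImageNet BGR means used by Keras-style preprocessing; if the training spec used different preprocessing, that should be mirrored instead.

```python
import numpy as np
from PIL import Image

# ImageNet per-channel means in BGR order (Keras "caffe"-mode convention)
CHANNEL_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(image_path, width=80, height=60):
    """Resize, mean-subtract, and lay out an image as a 1xCxHxW float32 batch."""
    image = Image.open(image_path).convert("RGB")
    image = image.resize((width, height), Image.BILINEAR)
    array = np.asarray(image).astype(np.float32)   # HWC, RGB
    array = array[:, :, ::-1]                      # RGB -> BGR
    array -= CHANNEL_MEANS                         # per-channel mean subtraction
    array = array.transpose(2, 0, 1)               # HWC -> CHW
    return array[np.newaxis, ...]                  # add batch dim: NCHW
```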

When I run evaluation on the .tlt model it reaches 90% accuracy, but the predictions from the .engine model look random.
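For completeness, here is a minimal single-batch inference sketch using the TensorRT 8 Python API and pycuda, to rule out a bug in how the engine is executed. It assumes a static-shape, explicit-batch engine; if tao-converter produced an implicit-batch engine, replace `execute_v2(bindings)` with `context.execute(batch_size=1, bindings=bindings)`.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def run_engine(engine_path, input_array):
    """Run one preprocessed batch through a serialized TensorRT engine."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    bindings, outputs = [], []
    for i in range(engine.num_bindings):
        shape = tuple(context.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host_mem = cuda.pagelocked_empty(int(np.prod(shape)), dtype)
        dev_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(dev_mem))
        if engine.binding_is_input(i):
            np.copyto(host_mem, input_array.ravel())  # host -> pinned buffer
            cuda.memcpy_htod(dev_mem, host_mem)       # pinned -> device
        else:
            outputs.append((host_mem, dev_mem, shape))

    context.execute_v2(bindings)  # synchronous inference

    results = []
    for host_mem, dev_mem, shape in outputs:
        cuda.memcpy_dtoh(host_mem, dev_mem)           # device -> host
        results.append(host_mem.reshape(shape))
    return results  # one array per output binding, e.g. one per task head
```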

Environment

TensorRT Version: 8.0.1
TAO Toolkit Container: nvcr.io/nvidia/tao/tao-toolkit-tf v3.21.11-tf1.15.5-py3 c607b0237bc5
CUDA Version: 11.3
Operating System + Version: Ubuntu 20
Python Version (if applicable): 3.9.7
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt 21.08-py3 cc8404aefdca

Relevant Files

You'll find the Python script and models below:

Thank you in advance,

Hi,

This looks more related to the TAO toolkit. We are moving this post to the TAO forum to get better help.

Thank you.

Refer to the preprocessing and postprocessing used in the TAO Triton sample apps:

https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/types/frame.py#L180
https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/postprocessing/multitask_classification_postprocessor.py
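In short, the linked postprocessor reduces to an argmax over each task head's softmax output. A minimal sketch follows; the task names and class lists below are placeholders from the multitask sample, so read yours from the training spec's class-map instead.

```python
import numpy as np

# Placeholder task/class mapping -- substitute the classes from your
# training spec (the linked postprocessor reads them from a class-map file).
TASK_CLASSES = {
    "base_color": ["black", "blue", "white"],
    "category":   ["shirt", "shoes", "bag"],
    "season":     ["summer", "winter", "fall"],
}

def postprocess(task_outputs):
    """Map each task's softmax output to its highest-scoring class label.

    task_outputs: dict of task name -> 1-D numpy array of class probabilities,
    in the same order the engine emits its output bindings.
    """
    predictions = {}
    for task, probs in task_outputs.items():
        idx = int(np.argmax(probs))
        predictions[task] = (TASK_CLASSES[task][idx], float(probs[idx]))
    return predictions
```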
