Hi, I am trying to perform inference with a TRT engine on an AWS EC2 instance.
For this I am using a Python script that loads the images and then runs inference with the TRT engine.
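In case it helps, the inference part of the script is roughly structured like the sketch below (simplified: it assumes an explicit-batch engine with fixed shapes, exactly one input and one output binding, the TRT 7/8 Python API and the usual pycuda buffer pattern; the file name is just a placeholder):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine from disk.
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer pair per binding (assuming input first, then output).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host_buf = cuda.pagelocked_empty(size, dtype)
    dev_buf = cuda.mem_alloc(host_buf.nbytes)
    host_bufs.append(host_buf)
    dev_bufs.append(dev_buf)
    bindings.append(int(dev_buf))

def infer(batch: np.ndarray) -> np.ndarray:
    """Run one preprocessed batch (NCHW float32) through the engine."""
    np.copyto(host_bufs[0], batch.ravel())
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)
    cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
    return host_bufs[1].copy()
```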
The model is an EfficientNet-B1 trained with TAO Toolkit 3.0. When I run inference with the .tlt model the results are good, but when I run it with the .trt engine through the Python script the results for the same set of images are much worse.
Do you know what could be wrong? My guess is that it is the preprocessing, since I am reading the images without applying any. However, when I trained the model in TAO I did not specify any preprocessing. Do you know what the default preprocessing is, so that I can replicate it in Python? (The sketch below shows the kind of thing I mean.)
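This is the kind of preprocessing I would add before feeding the images to the engine, but the details are guesses on my part: I am assuming Keras-style "caffe" mode (BGR channel order plus ImageNet mean subtraction) and a 240x240 input, which is the standard EfficientNet-B1 resolution. Please correct me if the TAO defaults are different.

```python
import cv2
import numpy as np

def preprocess(image_path: str, input_size=(240, 240)) -> np.ndarray:
    """Guessed preprocessing: resize, keep BGR, subtract ImageNet means, HWC -> CHW.

    Both the 'caffe'-style mean subtraction and the 240x240 input size are
    assumptions, not confirmed TAO defaults.
    """
    img = cv2.imread(image_path)                  # OpenCV already loads as BGR
    img = cv2.resize(img, input_size)
    img = img.astype(np.float32)
    img -= np.array([103.939, 116.779, 123.68], dtype=np.float32)  # BGR means
    img = img.transpose(2, 0, 1)                  # HWC -> CHW
    return np.expand_dims(img, axis=0)            # add batch dimension
```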
We have noticed that you are using an older version of TensorRT. We recommend that you use the latest TRT version. It also looks like you are using TAO. If you need further assistance, we recommend moving this post to the TAO forum to get better help.
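For reference, you can check which TensorRT version your Python script is actually picking up with:

```python
import tensorrt as trt
print(trt.__version__)
```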