TAO TensorRT model inference using Python

Hi guys,
We at avyuct have worked out how to run inference with a TensorRT model generated by the TAO Toolkit.

We were working on a classification problem using the ResNet-18 pretrained model and ran into issues when trying to deploy the TensorRT model.

For anyone stuck at the same stage, see the attachments for the configuration file used to generate the TensorRT model and the Python code to run inference with it.
tensorRT_model.py (3.0 KB)
classification_spec.cfg (1.2 KB)

See Integrating TAO CV Models with Triton Inference Server — TAO Toolkit 3.22.05 documentation: you can use the Triton server directly, or refer to its preprocessing and postprocessing code.
There are also some existing topics on classification inference:
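For reference, a minimal sketch of the kind of preprocessing and postprocessing a classification model like this typically needs, written in plain NumPy. This is an illustration, not the code from the attachment: the 224×224 channel-first input, the 1/255 scaling, and the `class_names` list are assumptions — the exact input size, channel order, and mean/scale values must match your training spec file.

```python
import numpy as np

def preprocess(image_hwc):
    """Convert an HWC uint8 image (already resized to the network's
    input size, e.g. 224x224) into an NCHW float32 batch.
    The 1/255 scaling is an assumption; match your training spec."""
    x = image_hwc.astype(np.float32) / 255.0  # assumed scale
    x = np.transpose(x, (2, 0, 1))            # HWC -> CHW
    return np.expand_dims(x, axis=0)          # add batch dim -> NCHW

def postprocess(logits, class_names):
    """Softmax over the raw network output, return top class and score."""
    e = np.exp(logits - np.max(logits))       # shift for numerical stability
    probs = e / e.sum()
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])
```

The same two functions apply whether you feed the tensor to a TensorRT execution context directly or send it to a Triton server; only the transport changes.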
Inferring resnet18 classification etlt model with python - #40 by Morganh
Error while running inference, model generated through TLT using Opencv-Python - #3 by Morganh
