Use TensorRT model with TAO Toolkit inference

• Network Type: EfficientNet B1
• TLT Version: TAO Toolkit 3-21.11

Hello,

I’ve been looking through the TAO Toolkit documentation, and in the “Running inference on a model” section there is a note that says “TensorRT Python inference can also be enabled”.

However, in the sample command, the -m parameter is described as the “path to the pretrained model (TAO model)”.

  • My question is: is it possible to use a TensorRT-generated engine to run inference with this tao command?
  • And could I also use another extension, such as .etlt?

Thanks in advance.

For inference, there are usually three ways.

  1. tao inference. Currently it can only run against a .tlt model.

  2. With DeepStream. Refer to
    Issue with image classification tutorial and testing with deepstream-app - #21 by Morganh

  3. With Python inference (a rough sketch follows this list). Refer to tao-toolkit-triton-apps/configuring_the_client.md at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps (github.com) and Issue with image classification tutorial and testing with deepstream-app - #25 by dzmitry.babrovich
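For option 3, “TensorRT Python inference” essentially means loading the serialized engine with the TensorRT Python API (or serving it through Triton, see below) instead of going through the tao command. The following is only a minimal sketch, assuming a static-shape classification engine with one input and one output binding; the engine path and the dummy input are placeholders, not values from this thread.

```python
# Minimal sketch: run a serialized TensorRT engine directly from Python.
# Assumes binding 0 is the input and binding 1 is the output, and that the
# engine was built with static shapes. Adapt the path and preprocessing to
# the engine you actually exported.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda

ENGINE_PATH = "efficientnet_b1.engine"  # placeholder path to your engine

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host buffers and device buffers for every binding.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# A random tensor stands in for a preprocessed image batch.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)

stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
stream.synchronize()

print("predicted class:", int(np.argmax(host_bufs[1])))
```

Note that the preprocessing (resize, channel order, mean/scale) has to match what the TAO classification spec used during training, otherwise the scores will be off.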

Thank you.

But what does “TensorRT Python inference can also be enabled” mean? Is it an alternative mode of the tao inference command?

As mentioned above, it should be referring to Integrating TAO CV Models with Triton Inference Server — TAO Toolkit 3.22.05 documentation.
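In case it helps future readers, the Triton route from that document comes down to a small client request once the model is deployed. This is just a rough sketch using the tritonclient package; the model name, tensor names, and input shape are assumptions here, since they depend on the config.pbtxt that the tao-toolkit-triton-apps setup generates.

```python
# Rough sketch of a Triton HTTP client call for a TAO classification model.
# "efficientnet_b1_tao", "input_1", "predictions", and the 1x3x240x240 shape
# are assumptions -- check the model's config.pbtxt for the real values.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A random tensor stands in for a preprocessed image batch.
image = np.random.rand(1, 3, 240, 240).astype(np.float32)

infer_input = httpclient.InferInput("input_1", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)
requested = httpclient.InferRequestedOutput("predictions")

response = client.infer("efficientnet_b1_tao",
                        inputs=[infer_input],
                        outputs=[requested])

scores = response.as_numpy("predictions")
print("predicted class:", int(np.argmax(scores)))
```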

Ok, thank you so much!
