• Network Type: EfficientNet B1
• TLT Version: TAO Toolkit 3-21.11
Hello,
I’ve been looking through the TAO Toolkit documentation, and in the “Running Inference on a Model” section there is a note saying that “TensorRT Python inference can also be enabled”.
However, in the sample command, the -m parameter is described as the “path to the pretrained model (TAO model)”.
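For reference, this is the kind of command I mean. It's only a sketch assuming the classification subtask (EfficientNet B1 is my backbone); the spec path, model path, image directory, and key are placeholders from my setup:

# Sample command based on "Running Inference on a Model" (classification);
# all paths and the key below are placeholders
tao classification inference -e /workspace/specs/classification_spec.cfg \
                             -m /workspace/results/weights/efficientnet_b1.tlt \
                             -k $KEY \
                             -d /workspace/data/test_images \
                             -b 16

# What I would like to try instead (hypothetical -- is this supported?):
#   -m /workspace/export/efficientnet_b1.engine   (TensorRT engine from tao-converter)
#   -m /workspace/export/efficientnet_b1.etlt     (exported .etlt model)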
- My question is: would it be possible to point -m at a TensorRT-generated engine to run inference with this tao command?
- And also, could I use a model with another extension, such as .etlt?
Thanks in advance.