• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0
• TensorRT Version: 8.6.2
I want to test TAO models with DeepStream. Currently I use YOLO in my pipelines, and the process I follow is converting the .pt file to a .onnx file in a way that supports TensorRT conversion; then I have to use a script to compile a plugin that post-processes the YOLO outputs, the custom bbox-parser (roughly the setup sketched below). From what I read, TAO models can already be converted to TensorRT out of the box, but it wasn't clear to me whether the custom bbox-parser plugin (or equivalent plugins for other tasks like segmentation) is provided by TAO. Could you clarify the process for me or point me to the exact part of the documentation or examples that explains it?
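For context, below is roughly what my current custom bbox-parser boils down to. This is a minimal sketch only: the output tensor layout (one flattened `[num_boxes x 6]` tensor of `x1, y1, x2, y2, score, class`), the function name `NvDsInferParseCustomYolo`, and the library name are placeholders that depend on how the model was exported.

```cpp
// Minimal sketch of a DeepStream custom bbox parser. Assumes a single
// flattened output tensor of [num_boxes x 6] = (x1, y1, x2, y2, score, class);
// the real layout depends on how the ONNX model was exported.
//
// Wired into nvinfer via these config keys (values are placeholders):
//   parse-bbox-func-name=NvDsInferParseCustomYolo
//   custom-lib-path=libnvds_infercustomparser_yolo.so

#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYolo(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    if (outputLayersInfo.empty())
        return false;

    const NvDsInferLayerInfo& layer = outputLayersInfo[0];
    const float* data = static_cast<const float*>(layer.buffer);
    const unsigned int numBoxes = layer.inferDims.d[0];

    for (unsigned int i = 0; i < numBoxes; ++i) {
        const float* box = data + i * 6;  // assumed per-box layout, see above
        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<unsigned int>(box[5]);
        obj.detectionConfidence = box[4];
        // Drop detections below the per-class pre-cluster threshold.
        if (obj.classId >=
                static_cast<unsigned int>(detectionParams.numClassesConfigured) ||
            obj.detectionConfidence <
                detectionParams.perClassPreclusterThreshold[obj.classId])
            continue;
        obj.left = box[0];
        obj.top = box[1];
        obj.width = box[2] - box[0];
        obj.height = box[3] - box[1];
        objectList.push_back(obj);
    }
    return true;
}

// Lets nvinfer verify the symbol's prototype when loading the library.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYolo);
```

My question is essentially whether I still need to write and compile something like this myself for TAO models, or whether TAO/DeepStream ships the equivalent parsers ready-made.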
Please provide complete information as applicable to your setup. Thanks
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks