Where can you set the PyTorch model function called by Triton for a DeepStream app?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

Where can I set or find the function that DeepStream calls in a PyTorch model that has been converted to a TensorRT model?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

DeepStream's nvinfer converts the model to a TensorRT engine before inference, but nvinfer does not support PyTorch models directly. Please refer to inputs-and-outputs; you need to convert the PyTorch model to an ONNX model first.
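For reference, here is a minimal sketch of that export step using `torch.onnx.export`. The model, input shape, and file names below are illustrative assumptions, not from this thread; substitute your own network and its expected input:

```python
import torch
import torchvision

# Any torch.nn.Module can be exported the same way; ResNet-18 is just a stand-in.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Dummy input matching the shape the network expects (NCHW).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",               # file that nvinfer will consume
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```

You then point nvinfer at the ONNX file in its config file; on the first run it builds and caches a TensorRT engine. A hedged example (the keys are standard nvinfer properties, but the file names are placeholders):

```
[property]
onnx-file=model.onnx
# Generated automatically on first run if it does not exist yet
model-engine-file=model.onnx_b1_gpu0_fp16.engine
```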
