Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU, T4, GCP
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
I have a trained PyTorch model that I want to use with DeepStream for inference, but I am running into issues converting it to a TensorRT-compatible format (ONNX, TorchScript, etc.). Is there a way to use the model directly, without converting it to any other format, for example through the Triton Inference Server or some other mechanism?
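To make the question concrete, this is roughly the kind of conversion step I have been attempting and would like to avoid. It is only a minimal sketch: torchvision's resnet18 stands in for my actual trained model, and the input shape and file names are placeholders.

```python
import torch
import torchvision

# Stand-in for my trained network (my real model and weights differ).
model = torchvision.models.resnet18(pretrained=False)
model.eval()

# Dummy input matching the expected input shape (placeholder shape).
dummy_input = torch.randn(1, 3, 224, 224)

# Trace the model into TorchScript; this is the step where my real model
# runs into problems.
traced = torch.jit.trace(model, dummy_input)

# A Triton model repository would expect this file as
# <model_repository>/<model_name>/1/model.pt (names are placeholders).
traced.save("model.pt")
```

If DeepStream (e.g. via the nvinferserver/Triton path) can consume the model some other way, I would prefer that over debugging this export.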