Using Torch model without conversion

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU, T4, GCP
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
I have a trained PyTorch model that I want to use with DeepStream for inference, but I am running into issues converting it to a TensorRT-compatible format (ONNX, TorchScript, etc.). Is there a way I can use it without converting it to any other format, for example via the Triton server or something else?

Hi,

Which platform are you using?
Could you fill in the environment information above?

Triton server supports PyTorch in a desktop environment.
However, if you are using a Jetson device, only TensorFlow and TensorRT are supported.
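
For reference, on desktop the PyTorch backend loads a TorchScript file named model.pt from a versioned model repository, together with a config.pbtxt. Below is a minimal sketch (not an official sample) that writes such a layout from Python; the model name "my_detector", the tensor names, and the dimensions are placeholders for your own network.

```python
from pathlib import Path

# Placeholder repository layout: model_repository/my_detector/1/model.pt
repo = Path("model_repository/my_detector/1")
repo.mkdir(parents=True, exist_ok=True)

# The libtorch backend expects the serialized TorchScript file to be named
# model.pt inside a numbered version directory such as "1/".
# shutil.copy("model.pt", repo / "model.pt")

# Minimal config.pbtxt; tensor names/shapes below are placeholders.
config = """
name: "my_detector"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
"""
(repo.parent / "config.pbtxt").write_text(config.strip())
```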

Thanks.

Hi @AastaLLL, I have updated it. I will not be using a Jetson device; it will run on a GPU only (cloud-based).
Does it support PyTorch models natively, or do we need to convert the PyTorch model to TorchScript?

Hi,

Sorry for the late update.

You will need to convert the model into ONNX or TorchScript.
Please check the supported formats of the (desktop) Triton server below:
https://github.com/triton-inference-server/server/blob/master/docs/model_repository.md#model-files
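
A minimal conversion sketch, assuming an image model with a 1x3x224x224 input (torchvision's resnet18 stands in for your trained network; substitute your own module and weights):

```python
import torch
import torchvision

# Stand-in for your trained network; load your own nn.Module and weights here.
model = torchvision.models.resnet18()
model.eval()

example = torch.randn(1, 3, 224, 224)  # dummy input with your real input shape

# TorchScript: the file Triton's pytorch_libtorch backend loads as model.pt
traced = torch.jit.trace(model, example)
traced.save("model.pt")

# ONNX: usable by Triton's ONNX Runtime backend, or by TensorRT/nvinfer in DeepStream
torch.onnx.export(
    model,
    example,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```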

Thanks.

@AastaLLL Thanks for the update