Torch-TensorRT with DeepStream 6.0 on Jetson

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question

Hi, will Torch-TensorRT-converted models work with DeepStream on a Jetson? My understanding is that:

  • TorchScript-converted models do not work with DeepStream on a Jetson
  • Torch-TensorRT falls back to TorchScript for operations it cannot convert to TensorRT

As such, I assume Torch-TensorRT-converted models would not work, but I just want to check/confirm. Thanks!
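
In case it helps frame the question, this is roughly the conversion path I had in mind. It is only a minimal sketch, assuming Torch-TensorRT ~1.x; resnet18, the input shape, and the output file name are placeholders, and the exact API may differ between releases. The idea is that `require_full_compilation=True` should fail loudly instead of silently falling back to TorchScript, and `convert_method_to_trt_engine` should emit a plain serialized engine rather than a hybrid TorchScript/TensorRT module:

```python
import torch
import torch_tensorrt
import torchvision

# resnet18 stands in for the real network.
model = torchvision.models.resnet18().eval().cuda()
scripted = torch.jit.script(model)

# Refuse the TorchScript fallback: compilation raises an error instead of
# silently keeping unsupported ops in TorchScript.
trt_module = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
    require_full_compilation=True,
)

# If that succeeds, a plain serialized TensorRT engine can also be produced,
# which would be the artifact a TensorRT-based pipeline consumes.
engine_bytes = torch_tensorrt.ts.convert_method_to_trt_engine(
    scripted,
    method_name="forward",
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```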

Hi,

There are two inference components in DeepStream: nvinfer and nvinferserver.

  • nvinfer is implemented with TensorRT.
    It only supports TensorRT engines and model formats that can be converted into TensorRT (see the ONNX export sketch after this list).

  • nvinferserver uses the Triton Inference Server, which supports many different backends.
    Unfortunately, Triton does not support the PyTorch backend on Jetson yet.
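
For the nvinfer path, the usual route is to export the PyTorch model to ONNX and let TensorRT build the engine, either ahead of time with trtexec or by nvinfer itself on first run. This is only a sketch; the model, input shape, opset, and file names below are placeholders:

```python
import torch
import torchvision

# Export the PyTorch model to ONNX; resnet18 is only a stand-in.
model = torchvision.models.resnet18().eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

# On the Jetson, either prebuild the engine:
#   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
# and point the nvinfer config at it (model-engine-file=model.engine), or set
# onnx-file=model.onnx and let nvinfer build the engine on first run.
```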

You can find more details below:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html
The plugin supports Triton features along with multiple deep-learning frameworks such as TensorRT, TensorFlow (GraphDef / SavedModel), ONNX and PyTorch on Tesla platforms. On Jetson, it also supports TensorRT and TensorFlow (GraphDef / SavedModel). TensorFlow and ONNX can be configured with TensorRT acceleration.
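
If you do end up with a standalone .engine file (for example one serialized by Torch-TensorRT, provided it does not depend on custom plugin libraries), a quick sanity check is to confirm that the stock TensorRT runtime can deserialize it before wiring it into nvinfer. A rough sketch, assuming the TensorRT 8.2 Python bindings on the Jetson; "model.engine" is a placeholder path:

```python
import tensorrt as trt

# Check that the serialized engine loads with the plain TensorRT runtime
# (the same runtime nvinfer uses under the hood).
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

assert engine is not None, "engine failed to deserialize"
print("bindings:", [engine.get_binding_name(i) for i in range(engine.num_bindings)])
```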

Thanks.
