Integrate Non-Detection TensorRT model into DeepStream

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) N.A.
• TensorRT Version 7.0 / 8.0
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing.) N.A.
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.) N.A.

We have a non-detection / non-classification model trained in PyTorch and converted to ONNX. The model takes one frame as input and outputs a frame of the same size.

We can run this model using the TensorRT NvOnnxParser and NvInfer APIs. Now we would like to integrate this model into our DeepStream pipeline. May I know the proper way to do it, and are there any samples?
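For context, running an ONNX model through NvOnnxParser and NvInfer typically starts with building an engine roughly as in the minimal sketch below. The file name, workspace size, and the omitted cleanup/error handling are placeholders, and this is only the standalone TensorRT part, not the DeepStream integration being asked about.

```cpp
// Minimal sketch: build a TensorRT engine from an exported ONNX file (TensorRT 7/8 API).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
};

int main() {
  Logger logger;
  auto builder = nvinfer1::createInferBuilder(logger);
  const auto flags =
      1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  auto network = builder->createNetworkV2(flags);
  auto parser  = nvonnxparser::createParser(*network, logger);

  // "model.onnx" is a placeholder path for the exported model.
  if (!parser->parseFromFile("model.onnx",
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
    std::cerr << "Failed to parse ONNX model" << std::endl;
    return 1;
  }

  auto config = builder->createBuilderConfig();
  config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB workspace, adjust as needed
  auto engine = builder->buildEngineWithConfig(*network, *config);

  // Serialize the engine so a DeepStream plugin can deserialize it at runtime.
  auto serialized = engine->serialize();
  // ... write serialized->data(), serialized->size() to disk ...
  return 0;
}
```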

What is the input layer dimension of the model? (NCHW, NHWC, or other?)

What pre-processing does the model need? What post-processing does the model need?

Input layer dimension: 3x640x640 in NCHW
Output layer dimension: 3x640x640 in NCHW
Pre-process: Channel-based mean subtraction and normalization.
Post-process: None.
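For clarity, per-channel mean subtraction and normalization on a planar NCHW float frame amounts to the following; the mean and scale arrays are placeholders for whatever values the model was trained with.

```cpp
#include <cstddef>

// Apply y = (x - mean[c]) * scale[c] per channel on a planar (NCHW) float frame.
// mean/scale are placeholders; use the statistics the model was trained with.
void preprocess_nchw(float* data, std::size_t channels, std::size_t height,
                     std::size_t width, const float* mean, const float* scale) {
  const std::size_t plane = height * width;
  for (std::size_t c = 0; c < channels; ++c) {
    float* p = data + c * plane;
    for (std::size_t i = 0; i < plane; ++i) {
      p[i] = (p[i] - mean[c]) * scale[c];
    }
  }
}
```

Inside a DeepStream plugin this step would normally run on the GPU against the mapped NvBufSurface rather than on the CPU, but the arithmetic is the same.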

Since you have already implemented TensorRT inference for your model, please refer to the nvdsvideotemplate plugin to customize your own inference plugin: Gst-nvdsvideotemplate — DeepStream 6.1.1 Release documentation

The source code is available in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdsvideotemplate
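A custom library for nvdsvideotemplate implements the interface declared in includes/nvdscustomlib_interface.hpp under that directory and is loaded by the plugin at runtime. The skeleton below is only a rough sketch of that shape; the class, method, and factory names are approximations and should be treated as assumptions, so follow the header and the customlib_impl example in your DeepStream installation for the exact interface.

```cpp
// Rough skeleton of a custom library for gst-nvdsvideotemplate.
// NOTE: the interface used here is an approximation; the authoritative definition
// is nvdscustomlib_interface.hpp in the gst-nvdsvideotemplate sources.
#include <gst/gst.h>
#include "nvdscustomlib_interface.hpp"  // IDSCustomLibrary, DSCustom_CreateParams, ...

class FrameToFrameAlgorithm : public IDSCustomLibrary {
 public:
  bool SetInitParams(DSCustom_CreateParams* params) override {
    // Deserialize the TensorRT engine and create the execution context here.
    return true;
  }
  bool SetProperty(Property& prop) override { return true; }   // customlib-props key:value pairs
  bool HandleEvent(GstEvent* event) override { return true; }
  GstCaps* GetCompatibleCaps(GstPadDirection direction, GstCaps* in_caps,
                             GstCaps* other_caps) override {
    return gst_caps_copy(in_caps);  // same resolution and format in and out for this model
  }
  BufferResult ProcessBuffer(GstBuffer* inbuf) override {
    // Map the NvBufSurface from inbuf, run pre-processing and TensorRT inference
    // (e.g. context->enqueueV2()), and write the 3x640x640 output back to the surface.
    return BufferResult::Buffer_Ok;
  }
};

// Exported factory the plugin resolves from the shared library (symbol name assumed).
extern "C" IDSCustomLibrary* CreateCustomAlgoCtx(DSCustom_CreateParams* params) {
  return new FrameToFrameAlgorithm();
}
```

The compiled shared library is then set on the element through its customlib-name property (with optional key:value pairs via customlib-props), so it takes the place nvinfer would normally occupy in the pipeline.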
