DeepStream SDK: How to use a custom multi-task (depth map, semantics, detection) TensorRT model for inference in Python?

• Hardware Platform (Jetson / GPU) dGPU (Tesla T4)
• DeepStream Version 6.1.1
• TensorRT Version 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) 515.65.01
• Issue Type (questions, new requirements, bugs) Question

I have a pre-trained multi-task (depth map, semantic segmentation, object detection) model in ONNX and TensorRT .engine format that can be loaded in a Python script for inference.
Input: cropped, resized fisheye image
Output: depth map, semantic segmentation, object detection

From the documentation, it seems like I need to use the Preprocessed Tensor Input mode in Gst-nvinfer.
But is there any reference document on how to use a TensorRT engine directly in the DeepStream SDK through the Python bindings? Any suggestions would be appreciated.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The nvinfer configuration has nothing to do with Python. Even with a Python app such as deepstream_python_apps/apps/deepstream-test1 at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com), the model is configured through the nvinfer configuration file deepstream_python_apps/dstest1_pgie_config.txt at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com).
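For reference, a minimal sketch (not a complete pipeline) of how the deepstream-test1 Python app attaches that configuration file to nvinfer; the element name and file path below are illustrative:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Create the inference element; the model itself is described in the
# nvinfer configuration file, not in the Python code.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")

# Point nvinfer at the configuration file (path is a placeholder).
pgie.set_property("config-file-path", "dstest1_pgie_config.txt")

# ... create streammux, decoder, converter, sink, add all elements to the
# pipeline and link them exactly as in deepstream-test1 ...
```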

For the engine file, you can set the "model-engine-file" parameter (see Gst-nvinfer — DeepStream 6.3 Release documentation) in the configuration file, just as in this sample: deepstream_python_apps/dstest1_pgie_config.txt at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
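As an illustration only (the file names and values below are placeholders, not taken from your model), the relevant part of such a configuration file could look like this:

```
[property]
gpu-id=0
# Pre-built TensorRT engine; if it is missing or incompatible, nvinfer
# rebuilds it from the model file given below.
model-engine-file=multitask_fp16.engine
onnx-file=multitask.onnx
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Emit the raw output tensors as metadata so custom post-processing
# (e.g. depth map and segmentation parsing) can be done in a Python probe.
output-tensor-meta=1
gie-unique-id=1
```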


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.