• Hardware Platform (Jetson / GPU): dGPU (Tesla T4)
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01
• Issue Type (questions, new requirements, bugs): Question
I have a pre-trained multi-task model (depth map, semantic segmentation, object detection) in both ONNX and TensorRT .engine format, which I can already load and run from a standalone Python script (a rough sketch of that setup follows the list below).
Input: cropped, resized fisheye image
Outputs: depth map, semantic segmentation, object detection
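For context, this is roughly how I run the .engine standalone today (a minimal sketch: the file name and the `preprocessed_image` array are placeholders, and it assumes static input shapes):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)  # must outlive the engine
with open("multitask.engine", "rb") as f:  # placeholder path
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer pair per binding: binding 0 is the image input,
# the rest are the depth / segmentation / detection outputs.
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(engine.get_binding_shape(i)), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# preprocessed_image: the cropped/resized fisheye frame as an NCHW float array
host_bufs[0][:] = preprocessed_image.ravel()
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
```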
From the documentation, it seems I need to use the preprocessed tensor input mode of Gst-nvinfer, i.e. have Gst-nvdspreprocess attach the custom fisheye crop/resize as a tensor and let nvinfer consume it. On the nvinfer side, my understanding is that the config would look roughly like the sketch below.
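This is a hedged sketch using the documented Gst-nvinfer config-file keys (the file name and tuning values are my placeholders):

```
[property]
gpu-id=0
model-engine-file=multitask.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 100 = "other": run the engine but skip nvinfer's built-in output parsing
network-type=100
# attach the raw output tensors to the buffer as NvDsInferTensorMeta
output-tensor-meta=1
```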
But is there a reference document or sample showing how to use a TensorRT engine directly in the DeepStream SDK through the Python bindings?
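From what I can tell, the deepstream-ssd-parser Python sample reads raw output tensors back in a pad probe, so I imagine something along these lines (a sketch only; the layer names and how each buffer is parsed depend on my model's three heads):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # one layer per head: depth map, segmentation, detections
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    print(layer.layerName)  # parse layer.buffer for this head here
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe would be attached to the nvinfer element's src pad, e.g. `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)`. Any suggestions would be appreciated.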