• Hardware Platform (Jetson / GPU): dGPU (Tesla T4)
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01
• Issue Type (questions, new requirements, bugs): Question
I have a pre-trained multi-task (depth map, semantics, detection) model in ONNX and TensorRT .engine format that can be loaded in a Python script for inference.
Input: cropped, resized fisheye image
Output: depth map, semantic segmentation, object detection
From the documentation, it seems like I need to use the preprocessed tensor input mode of Gst-nvinfer, i.e. have Gst-nvdspreprocess do the custom crop/resize and let nvinfer consume the tensors from metadata; my current understanding of the config is sketched below.
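If I am reading the Gst-nvinfer configuration spec correctly, the config file can point straight at my prebuilt engine. Below is my sketch; the file name, engine path, and values are placeholders from my setup, not something taken from the docs:

```ini
# multitask_pgie_config.txt -- my sketch; engine path and values are placeholders
[property]
gpu-id=0
model-engine-file=multitask_fp16.engine
batch-size=1
# 100 = "other": skip nvinfer's built-in detector/classifier post-processing
network-type=100
# attach the raw output tensors (all three heads) as NvDsInferTensorMeta
output-tensor-meta=1
```

My understanding is that network-type=100 plus output-tensor-meta=1 skips nvinfer's built-in post-processing and hands me the raw tensors for all three heads; please correct me if that is wrong.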
But is there any reference document on how to use a TensorRT engine directly in the DeepStream SDK via the Python bindings? Any suggestions would be appreciated.
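For concreteness, here is the direction I pieced together from the deepstream_python_apps samples (e.g. deepstream-ssd-parser): attach a buffer probe on the nvinfer src pad and read the raw output tensors from NvDsInferTensorMeta. This is only a sketch; the config file name and the per-head parsing are placeholders:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    """Read the raw multi-task output tensors attached by nvinfer
    when output-tensor-meta=1 is set in the config file."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # layer.layerName should identify the head
                    # (depth map / segmentation / detection); parse per head here.
                    print(layer.layerName)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Gst.init(None)
pipeline = Gst.Pipeline()
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
# placeholder name -- the config sketched above
pgie.set_property("config-file-path", "multitask_pgie_config.txt")
# consume tensors prepared by Gst-nvdspreprocess instead of nvinfer's own preprocessing
pgie.set_property("input-tensor-meta", True)
pipeline.add(pgie)
# ...source, nvstreammux, nvdspreprocess, sink etc. as in the sample apps...
pgie.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
```

As far as I can tell, the same probe pattern should work whether the tensors come from nvinfer's own preprocessing or from nvdspreprocess via input-tensor-meta, but I would appreciate confirmation.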
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.