Using TensorRT inference with NVDEC

Hi! I’m trying to implement an efficient C++ inference pipeline using YOLOv3. My plan is to decode with NVDEC, run TensorRT inference, and then use CUDA kernels for postprocessing. This needs to run on a range of devices: T4, RTX 2080 Ti, and Jetson Nano. I’m aware DeepStream might cover most of these requirements, but since it doesn’t support TensorRT 7 (or RTX devices), and since I’d like a bare-minimum implementation, it may not be an option. I couldn’t find minimal NVDEC + TensorRT samples anywhere on the forums or repos. I have both NVDEC and TensorRT 7 inference working independently, but I’m unsure how to bridge the gap between them. Are there any samples or documentation on how to do this, or examples of modifying DeepStream to achieve what I want?

Edit: I’ve since found that Jetson doesn’t have NVDEC/NVCUVID support (I tried running the Video Codec SDK samples and they failed); hardware decode there is only exposed through DeepStream and the GStreamer plugin interface. Given that DeepStream doesn’t support RTX devices and Jetson Nano doesn’t support NVCUVID, I’m wondering how I can get one interface running on all of them.
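One way I’m considering handling the x86/Jetson split is a thin decoder abstraction, so the TensorRT side only ever sees device-memory frames and the backend (Video Codec SDK on x86, V4L2 Multimedia API on Jetson) is chosen at build or run time. A minimal sketch (the backend class names are hypothetical):

```cpp
// Decoder abstraction so the same inference code runs on x86 (NVDEC via the
// Video Codec SDK) and Jetson (V4L2 Multimedia API). Each backend implements
// next_frame(); the inference loop never touches platform-specific APIs.
struct Frame {
    void* device_ptr;   // decoded frame in GPU-accessible memory
    int   width;
    int   height;
    int   pitch;        // row stride in bytes
};

class Decoder {
public:
    virtual ~Decoder() = default;
    // Fills `out` with the next decoded frame; returns false at end-of-stream.
    virtual bool next_frame(Frame& out) = 0;
};

// Hypothetical backends, each wrapping its platform's decode API:
//   class NvdecDecoder : public Decoder { ... };  // x86, Video Codec SDK
//   class V4l2Decoder  : public Decoder { ... };  // Jetson, Multimedia API
```

The inference loop then becomes `while (decoder.next_frame(f)) { preprocess(f); context->enqueue(...); }` regardless of platform.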


TensorRT Version: 7
GPU Type: RTX 2080Ti
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7

Moving to the DeepStream SDK forum so that the DeepStream team can take a look.

Please note that NVCUVID is for the x86 platform. Video processing on Jetson is based on V4L2; you can refer to the Multimedia Low-Level API. If you want one unified interface for video processing and inference on both x86 and Jetson, DeepStream would be one option for you. We have a sample for running YOLOv3 within DeepStream (under sources/objectDetector_Yolo) which may meet your requirements, and the upcoming release will add TensorRT 7.0 support.
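For orientation, the objectDetector_Yolo sample is driven by an nvinfer configuration file; a fragment along these lines is typical (key names recalled from the sample layout — verify against the config shipped with your DeepStream release):

```ini
# Sketch of a config_infer_primary_yoloV3.txt-style nvinfer config (illustrative)
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373   # 1/255 input normalization
custom-network-config=yolov3.cfg
model-file=yolov3.weights
labelfile-path=labels.txt
batch-size=1
network-mode=2                           # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=80
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

The custom library referenced at the bottom is built from the sample sources and provides the YOLOv3 layer and bounding-box parsing plugins.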
In case you need it, here is the DeepStream portal: