DeepStream: get a cv::Mat from hardware-decoded frames

I want to run a GStreamer pipeline inside OpenCV so I can use hardware decoding (e.g. nvv4l2decoder) and get image data for AI inference (my AI SDK does not run in the nvinfer plugin). What can I do to get the highest performance?

**• Hardware Platform (Jetson / GPU)** Xavier NX
• DeepStream Version 5.0

The HW decoder output is an NVIDIA customized format; it cannot be used outside the DeepStream pipeline. You may try the pipeline "video src => nvv4l2decoder => nvvideoconvert => appsink" to get decoded frames out of the GStreamer pipeline.
You don't want to use DeepStream for inference, right? So this is not a DeepStream topic.

I just want to use the HW decoder.

OpenCV can run a GStreamer pipeline, but the pipeline must be complete (from src to sink); the pipeline is closed, and no video data is available outside of it.

OpenCV also has CUDA codec interfaces for video; see the `opencv2/cudacodec.hpp` reference in the OpenCV documentation.

This is not DeepStream or even GStreamer related; you need to refer to OpenCV resources.