Hi, I am trying to use the sample (02_video_dec_cuda) to decode an H.264 file.
But how should I display the result with OpenCV?
Please help me.
This is not an answer, but I wouldn't advise using OpenCV for display on Jetson. highgui's imshow is not very efficient above 640x480@30 fps, and performance also depends on your GUI backend (GTK, Qt, ...).
From OpenCV, at higher pixel rates it is usually better to use a VideoWriter with a GStreamer pipeline to one of the various display sinks.
Better still, map your MMAPI frame directly to a GStreamer buffer, then convert, filter, infer, or display from the GStreamer framework, also leveraging HW acceleration with the NV plugins.
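A minimal sketch of the VideoWriter approach described above. The exact pipeline (videoconvert, nvvidconv, nvoverlaysink) and the frame size are assumptions for a typical Jetson setup, not taken from this thread; adjust the caps and the sink for your platform and OpenCV build:

```python
# Hedged sketch: push BGR frames from OpenCV into a GStreamer display sink
# instead of calling cv2.imshow(). Pipeline elements here are assumptions
# for Jetson; adapt them to your platform.
width, height, fps = 1920, 1080, 30

def writer_pipeline():
    # appsrc receives the BGR frames written by VideoWriter; videoconvert
    # and nvvidconv hand them off to a HW-accelerated display sink.
    return (
        "appsrc ! video/x-raw,format=BGR ! videoconvert ! "
        "nvvidconv ! nvoverlaysink sync=false"
    )

pipeline = writer_pipeline()
print(pipeline)

try:
    import cv2
    out = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0,
                          float(fps), (width, height))
    # out.write(frame)  # frame: uint8 numpy array of shape (height, width, 3)
    # out.release()
except ImportError:
    pass  # OpenCV not installed; the pipeline string above is the main point
```

This avoids imshow entirely: OpenCV only hands BGR frames to appsrc, and the display work happens inside GStreamer.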
Hi,
For OpenCV, we suggest running a GStreamer pipeline in cv2.VideoCapture() as in this sample:
Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
If you would like to develop the use case based on 02_video_dec_cuda, we suggest using NvEglRenderer or NvDrmRenderer instead of OpenCV.
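The cv2.VideoCapture() suggestion above can be sketched as follows. The pipeline assumes a raw H.264 elementary stream and uses element names (nvv4l2decoder, nvvidconv) in the spirit of the linked post; the file path is a placeholder, and the exact caps may differ on your release:

```python
# Hedged sketch: decode H.264 with a GStreamer pipeline inside
# cv2.VideoCapture(), delivering BGR frames to OpenCV via appsink.
# Element names and caps are assumptions; verify against your L4T release.
def capture_pipeline(path):
    return (
        f"filesrc location={path} ! h264parse ! "
        "nvv4l2decoder ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! appsink"
    )

pipeline = capture_pipeline("sample.h264")
print(pipeline)

try:
    import cv2
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()  # frame: BGR numpy array, ready for your algorithm
        if not ok:
            break
        # process frame here ...
    cap.release()
except ImportError:
    pass  # OpenCV not installed; the pipeline string above is the main point
```

Note the BGRx-to-BGR conversion step: nvvidconv cannot output 3-channel BGR directly, so videoconvert does the final drop of the padding channel on the CPU.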
Because my algorithm only supports BGR format, and GStreamer's extensibility is not good, I want to develop it with MMAPI. Can you give me some suggestions?
Hi,
For MMAPI + cv::GpuMat, please refer to this patch:
LibArgus EGLStream to nvivafilter - #7 by DaneLLL
For MMAPI + cv::Mat, please refer to:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL