I’m trying to decode H.264 and H.265 streams coming from RTSP cameras, exploiting NVCUVID. To achieve that I’m using OpenCV 3.4.3 with GStreamer 1.16. I have an NVIDIA RTX 2080 card.
As a start I would like to decode the frames and gain access to the raw RGB data directly on the CPU for further image-processing operations (the next step would be to perform some OpenCV operations directly on the GPU).
The best pipeline that I have managed to put together is:
gst-launch-1.0 rtspsrc location="rtsp://root:email@example.com:554/axis-media/media.amp?videocodec=h264&resolution=3840x2160&fps=25" protocols=GST_RTSP_LOWER_TRANS_TCP latency=0 ! rtph264depay ! h264parse ! nvdec ! gldownload ! fpsdisplaysink sync=false
It shows only completely green frames, but the FPS counter increases, the framerate is consistent with the camera’s FPS, and nvidia-smi shows that the process is using some GPU memory.
Usually, when I have a GStreamer pipeline that works, replacing fpsdisplaysink with appsink is enough to gain access to the RGB values in OpenCV. In this case, however, I only get green frames, and OpenCV doesn’t even open the VideoCapture; it just gets stuck. Is the pipeline wrong? My guess is that I’m missing something in the passage from GPU memory to CPU memory.