Writing H.264 video with OpenCV through a GStreamer pipeline using the NVENC engine

Hardware : Xavier NX
JetPack: 5.1.2
Python : 3.8.10
gst-launch-1.0 : 1.16.3

Hello,

My main goal is to efficiently record videos in H.264 format with GPU hardware acceleration, using a GStreamer pipeline as shown here:

gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=output.mp4"
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, fps, frame_size)

This pipeline works well in some cases, but not in others. Let me explain.

When the source is a video file read with cv2.VideoCapture, as done below, the following code works very well:

import cv2

class VideoEncoder:
    def __init__(self):
        self.cap = cv2.VideoCapture("input.mp4")

        gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=output.mp4"
        self.writer = cv2.VideoWriter(gst_out,
                                      cv2.CAP_GSTREAMER,
                                      self.cap.get(cv2.CAP_PROP_FPS),
                                      (int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

    def encode_video(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            self.writer.write(frame)

    def release(self):
        self.cap.release()
        self.writer.release()

if __name__ == "__main__":
    encoder = VideoEncoder()
    encoder.encode_video()
    encoder.release()

However, in my application, I use an RTSP source initially read like this:

gst_in = "rtspsrc location=rtsp://ip:port/mystream protocols=tcp latency=10 ! rtph264depay ! h264parse ! avdec_h264 ! videorate ! video/x-raw,framerate=25/1 ! videoconvert ! appsink max-buffers=1 drop=true"
cap = cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER)

When I use this source, initializing the VideoWriter emits these warnings:

FFMPEG: tag 0x00000708/‘???’ is not found (format ‘mp4 / MP4 (MPEG-4 Part 14)’)
(python3:144948): GStreamer-WARNING **: 13:50:13.313: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvidconv.so’: /lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block
(python3:144948): GStreamer-WARNING **: 13:50:13.328: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so’: /lib/aarch64-linux-gnu/libGLdispatch.so.0: cannot allocate memory in static TLS block
[ WARN:0] global /opt/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (1654) open OpenCV | GStreamer warning: error opening writer pipeline: no element “nvvidconv”

And no video is recorded. Here’s the code I use:

import cv2

class VideoEncoder:
    def __init__(self):      
        gst_in = "rtspsrc location=rtsp://ip:port/mystream protocols=tcp latency=10 ! rtph264depay ! h264parse ! avdec_h264 ! videorate ! video/x-raw,framerate=25/1 ! videoconvert ! appsink max-buffers=1 drop=true"

        self.cap = cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER)

        if not self.cap.isOpened():
            print("Error: Could not open RTSP stream")
            exit()

        gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=output.mp4"
        self.writer = cv2.VideoWriter(gst_out, 
                                      cv2.CAP_GSTREAMER, 
                                      self.cap.get(cv2.CAP_PROP_FPS),
                                      (int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

    def encode_video(self):
        start_time = cv2.getTickCount()

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            self.writer.write(frame)

            current_time = cv2.getTickCount()
            elapsed_time = (current_time - start_time) / cv2.getTickFrequency()
            
            if elapsed_time > 10:
                break

    def release(self):
        self.cap.release()
        self.writer.release()

if __name__ == "__main__":
    encoder = VideoEncoder()
    encoder.encode_video()
    encoder.release()

I have several questions:

  • Why does using a different source affect the VideoWriter?
  • What should I do to be able to record videos from an RTSP stream?
  • Is it possible to accelerate the VideoCapture of the RTSP stream using the GPU?

The problem seems to stem from using a GStreamer pipeline for both reading and writing, which causes these memory allocation conflicts. When I read with OpenCV's default backend and write through GStreamer, it's fine. Similarly, reading through GStreamer and writing with OpenCV's default backend works. The static TLS allocation errors occur only when both operations go through GStreamer.
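For what it's worth, a workaround often suggested for the "cannot allocate memory in static TLS block" error on Jetson is to preload the offending library before Python starts, so its static TLS slot is reserved early. I have not confirmed that this fixes my case; the path below is the one from my logs:

```shell
# Workaround sketch (unverified here): preload libGLdispatch so its static
# TLS allocation happens at process start, before GStreamer dlopen()s the
# NVIDIA plugin libraries that pull it in.
export LD_PRELOAD=/lib/aarch64-linux-gnu/libGLdispatch.so.0
echo "LD_PRELOAD=$LD_PRELOAD"
# then launch the application from this shell, e.g.: python3 my_app.py
```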

Hi,
You may try removing max-buffers=1 drop=true from the appsink and give it a try. I'm not sure, but it looks like with that setting only one buffer is allocated.
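A minimal sketch of the capture pipeline with that suggestion applied. The stream URL is the placeholder from the original post, and the GPU-decode variant using nvv4l2decoder is an assumption (its availability depends on the JetPack install), not something verified in this thread:

```python
# CPU decode, with max-buffers=1 drop=true removed from the appsink:
gst_in = (
    "rtspsrc location=rtsp://ip:port/mystream protocols=tcp latency=10 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videorate ! video/x-raw,framerate=25/1 "
    "! videoconvert ! appsink"
)

# GPU-decode variant (assumption, untested): decode with nvv4l2decoder,
# then bring frames back to system memory with nvvidconv before the
# final videoconvert to BGR for OpenCV:
gst_in_gpu = (
    "rtspsrc location=rtsp://ip:port/mystream protocols=tcp latency=10 "
    "! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvidconv ! video/x-raw,format=BGRx "
    "! videoconvert ! video/x-raw,format=BGR ! appsink"
)
```

Either string would then be passed to cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER) as in the original code.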
