Jetson Nano - Saving video with Gstreamer and OpenCV

Hello everyone,

I’m new to GStreamer and I need to save a video stream from an e-con See3CAM_CU135 camera to an .avi file. I have successfully streamed the video to a small window, so I then tried to use OpenCV in Python to save the frames to the file, but I think my pipeline is not configured correctly.

This is how I configured VideoCapture and VideoWriter:

import cv2
from datetime import datetime

if __name__ == '__main__':
    width = 4208
    height = 3120
    fps = 20
    gst_in = f"v4l2src device=/dev/video0 ! image/jpeg, width={width}, height={height}, framerate={fps}/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! appsink max-buffers=1 drop=true"
    gst_out = f"appsrc ! video/x-raw, format=BGR ! avimux ! filesink location=video_{datetime.timestamp(datetime.now())}.avi"
    stream = cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER)
    writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, fps, (int(width), int(height)))

At first I tried to use videoconvert before the appsink in the VideoCapture pipeline, but it performed very badly and a lot of frames were being dropped. So I removed it, but now I get this error:
[ WARN:1] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1629) writeFrame OpenCV | GStreamer warning: cvWriteFrame() needs images with depth = IPL_DEPTH_8U and nChannels = 3.
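
For reference, a quick check like this should show what the capture is actually handing to the writer (just a sketch using the variables from my script above):

ret, frame = stream.read()
if ret:
    # the writer expects an 8-bit, 3-channel BGR image: dtype uint8, shape (height, width, 3)
    print(frame.dtype, frame.shape)
else:
    print('Failed to read a frame')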

And I have no idea how to fix it; I couldn’t find anything about it online. If anyone can give me a solution or some advice, I’d really appreciate it.

Thank you in advance!

You may be missing the H264 encoding part. Try:

gst_out= f"appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! avimux ! filesink location=video_{datetime.timestamp(datetime.now())}.avi "

Or try the FFmpeg backend (CPU only, so it may be slow for a high pixel rate):

cv::VideoWriter ocv_h264_writer ("test-ocvh264-writer.avi",
                                 cv::CAP_FFMPEG,
                                 cv::VideoWriter::fourcc ('X', '2', '6', '4'), fps,
                                 cv::Size (width, height));

# Or in Python, something like:
output_video_file = 'test_ffmpeg_h264.avi'
fourcc = cv2.VideoWriter_fourcc(*'X264')
out = cv2.VideoWriter(output_video_file, cv2.CAP_FFMPEG, fourcc, fps, (frame_width, frame_height))

Also note that in all cases, fps is expected to be a float value:

fps = float(20)

Hello, thank you for the reply.

First of all, changing the fps to a float also gave me an error, “can’t link v4l2src0 and nvv4l2decoder0” (I guess because the float ended up in the framerate caps of the capture string), so I changed it back.

Now, I tried both of your suggestions and this is what I got:

  1. At first I tried this configuration:
gst_in = f"v4l2src device=/dev/video0 ! image/jpeg, width={width}, height={height}, framerate={fps}/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! appsink max-buffers=1 drop=true"
	gst_out= f"appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! avimux ! filesink location=video_{datetime.timestamp(datetime.now())}.avi "

But it gave the same error as before (about cvWriteFrame()).

  2. Then I tried the second option:
gst_in = f"v4l2src device=/dev/video0 ! image/jpeg, width={width}, height={height}, framerate={fps}/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! appsink max-buffers=1 drop=true"
output_video_file='test_ffmpeg_h264.avi '
fourcc = cv2.VideoWriter_fourcc(*'X264')        
writer = cv2.VideoWriter(gst_out, cv2.CAP_FFMPEG, fourcc, fps, (width, height))

This time it did not give an error, but no output file was created at all, not even an empty one, and I have no idea why.
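
If it helps, I assume a check like this would at least show whether the writer actually opened (just a sketch, passing the filename directly):

writer = cv2.VideoWriter(output_video_file, cv2.CAP_FFMPEG, fourcc, float(fps), (width, height))
if not writer.isOpened():
    # if this prints, FFmpeg never opened the file, so write() would silently do nothing
    print('VideoWriter failed to open')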

I suspect that the first error is happening because the output of appsink isn’t compatible
with the input of appsrc? Is there a way to make the output correct without using videoconvert?

Thanks again for the help.

Sorry, I missed that: you would also need to convert the input into BGR for OpenCV processing. Try this:

import cv2

gst_in = "v4l2src device=/dev/video0 ! image/jpeg, width=640,height=480,framerate=30/1,format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
cap = cv2.VideoCapture(gst_in, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print('Failed to open camera')
    exit(-1)

gst_out= "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! avimux ! filesink location=test_h264.avi "
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 30.0, (640, 480))  
if not writer.isOpened():
    print('Failed to open writer')
    cap.release()
    exit(-2)

while True:
    ret, frame = cap.read()
    if not ret:
        print('Failed to read from camera')
        break

    # Your processing on frame

    writer.write(frame)

writer.release()
cap.release()

Looks like it runs better than before, but I think there are still some missing frames. I ran the loop for 30 seconds and stored the frames, but the output file was only 22 seconds long. Can something in the pipeline cause that?

I also had to change the fps from 30 to 120, otherwise the program would crash with an internal error. Perhaps all of this is caused by the camera?

You can check what video modes (format + resolution + framerate) are available from your camera through the V4L2 driver using:

v4l2-ctl -d0 --list-formats-ext

Be sure that you are using one of the listed modes in the caps after the v4l2src plugin. You may also try setting the io-mode property of that plugin to 2 (mmap).
Also be sure to connect the camera directly to a devkit port (no hub) and use only one camera, with just a keyboard and mouse as other USB devices.

Another thing to try would be removing the max-buffers=1 property from the OpenCV appsink.
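
For reference, this is roughly what the capture string could look like with both of those changes (io-mode=2 on v4l2src, no max-buffers=1 on the appsink), keeping the same 640x480 @ 30 fps MJPG mode as the example above:

gst_in = ("v4l2src device=/dev/video0 io-mode=2 ! "
          "image/jpeg, width=640, height=480, framerate=30/1, format=MJPG ! "
          "nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! "
          "videoconvert ! video/x-raw,format=BGR ! appsink drop=true")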

You can also test without OpenCV, using a pure GStreamer pipeline recording 10 s @ 30 fps:

# Recording
gst-launch-1.0 -ev v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg, width=640,height=480,framerate=30/1,format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! timeoverlay ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! avimux ! filesink location=test_h264.avi

# displaying FPS
gst-launch-1.0 -ev v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg, width=640,height=480,framerate=30/1,format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! timeoverlay ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! fpsdisplaysink text-overlay=0 video-sink=fakesink
