GStreamer pipeline for accelerated streaming of USB camera

Hi. I’ve been trying to get my Jetson Nano to use GPU acceleration in OpenCV to read frames from a USB webcam. I have compiled the newest OpenCV (4.6.0) from source with CUDA support. Every resource I found says to use GStreamer. The camera uses MJPG compression to achieve 720p@30FPS, and that’s the mode I’m trying to get. The problem is that none of the pipelines I’ve tried work, and I have no idea what to do.

Can anybody help with that?

Jetsons have a dedicated hardware engine (NVDEC) for video decoding, so it’s better to keep your GPU free for something else.
Assuming your USB/MJPG camera is video node /dev/video1 and that v4l2-ctl -d1 --list-formats-ext reports a 1280x720@30 mode in MJPG format, you can read from the camera, decode, and display with:

gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! autovideosink
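When that runs, you can also check the achieved framerate by wrapping the display sink in fpsdisplaysink (a standard element from gst-plugins-bad, included with JetPack). A sketch, assuming the same device node:

```shell
# Wrap the sink to report the measured fps (fpsdisplaysink is in gst-plugins-bad);
# -v prints the fps measurements to the terminal
gst-launch-1.0 -v v4l2src device=/dev/video1 \
  ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 \
  ! nvv4l2decoder mjpeg=1 ! nvvidconv \
  ! fpsdisplaysink text-overlay=0 video-sink=autovideosink
```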

When that works, make sure you’ve built OpenCV with GStreamer support and installed it into your Python env if using Python:

import cv2
print(cv2.getBuildInformation())
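A quick way to check is to filter the build summary for the GStreamer line. A minimal sketch (has_gstreamer is just an illustrative helper name, not part of cv2):

```python
# OpenCV's build summary contains a line like "GStreamer: YES (1.14.5)"
# when the backend was compiled in, or "GStreamer: NO" otherwise.
def has_gstreamer(build_info: str) -> bool:
    # Find the "GStreamer" line and make sure it says YES
    return any("GStreamer" in line and "YES" in line
               for line in build_info.splitlines())

# Usage on the Jetson:
#   import cv2
#   print(has_gstreamer(cv2.getBuildInformation()))
```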

If so, you may try using an OpenCV VideoCapture with the GStreamer backend:

    import cv2

    # Capture BGRx frames into OpenCV
    cap = cv2.VideoCapture("v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! appsink drop=1", cv2.CAP_GSTREAMER)

    # Or capture BGR frames into OpenCV
    #cap = cv2.VideoCapture("v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)

    if not cap.isOpened():
        print("failed to open video capture")
        exit(-1)

    cv2.namedWindow("mjpgCam", cv2.WINDOW_AUTOSIZE)

    frames = 0
    while frames < 3000:
        ret_val, img = cap.read()
        if not ret_val:
            break
        frames += 1

        cv2.imshow('mjpgCam', img)

        if cv2.waitKey(1) == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

Don’t worry about the warning, a live source has no duration.
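If you want to confirm that you actually get about 30 fps into OpenCV, a small timing helper can measure it. A sketch (measure_fps is a hypothetical helper; for the capture above you would pass lambda: cap.read()[0]):

```python
import time

def measure_fps(read_frame, n_frames=120):
    """Average achieved frames/sec over up to n_frames calls.

    read_frame is any callable returning True while frames keep
    arriving, e.g. lambda: cap.read()[0] for a cv2.VideoCapture.
    """
    t0 = time.monotonic()
    got = 0
    for _ in range(n_frames):
        if not read_frame():
            break  # stream ended or read failed
        got += 1
    dt = time.monotonic() - t0
    return got / dt if dt > 0 else 0.0
```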

Thanks a lot for your time; this pipeline does look like it should work in theory.
However, with every camera I have, I get the same error.
Note that I haven’t touched the GStreamer installation that comes with JetPack.

Setting pipeline to PAUSED …
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.009225324
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

What does your camera driver report as V4L2 supported modes? (The v4l2-ctl command is provided by the apt package v4l-utils.)

ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : Motion-JPEG
        Size: Discrete 1280x720
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 800x600
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2
        Size: Discrete 1280x720
            Interval: Discrete 0.100s (10.000 fps)
        Size: Discrete 800x600
            Interval: Discrete 0.100s (10.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.040s (25.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)

This is the output. All of the tested cameras report similarly.

You may try adding io-mode=2 (mmap) to v4l2src properties:

cap = cv2.VideoCapture("v4l2src device=/dev/video1 io-mode=2 ! image/jpeg,...

Or try raw video capture:

cap = cv2.VideoCapture("v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=720,framerate=10/1 ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! appsink drop=1", cv2.CAP_GSTREAMER)

# Or
cap = cv2.VideoCapture("v4l2src device=/dev/video1 io-mode=2 ! video/x-raw,format=YUY2,width=1280,height=720,framerate=10/1 ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! appsink drop=1", cv2.CAP_GSTREAMER)
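As a further sanity check, a pure-CPU decode path can show whether the problem is in the NVMM elements or in the camera itself. A sketch using the standard jpegdec and videoconvert elements (no NVDEC, so for diagnosis only):

```python
# Diagnostic pipeline: decode MJPG on the CPU, bypassing NVDEC entirely.
# If this opens but the nvv4l2decoder pipeline does not, the issue is in
# the NVMM path rather than in the camera or v4l2src.
cpu_pipeline = (
    "v4l2src device=/dev/video1 "
    "! image/jpeg,width=1280,height=720,framerate=30/1 "
    "! jpegdec ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
)
# cap = cv2.VideoCapture(cpu_pipeline, cv2.CAP_GSTREAMER)
```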

Also, it’s better to first try with only one camera connected, and later check what happens with several devices.
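If it still fails, more verbose GStreamer logging usually shows which element rejects the caps. A sketch using the standard GST_DEBUG environment variable with the original pipeline:

```shell
# Diagnostic only: raise GStreamer log verbosity; look for "not-negotiated"
# and the caps listed just before the error to see where negotiation breaks
GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video1 \
  ! image/jpeg,format=MJPG,width=1280,height=720,framerate=30/1 \
  ! nvv4l2decoder mjpeg=1 ! nvvidconv ! autovideosink
```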

It seems that I made some mistake when configuring things, maybe with the OpenCV compilation, though I’m not sure why that would affect GStreamer. Anyway, I did something and the original pipeline works fine now.

However, could you also add a pipeline for CSI cams like the IMX219?
Just for future reference.
Thank you!

    # Capture BGRx frames into OpenCV
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! appsink drop=1", cv2.CAP_GSTREAMER)

    # Or capture BGR frames into OpenCV
    #cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)
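Two properties that may also be useful with CSI cams: sensor-id on nvarguscamerasrc selects the CSI port on multi-camera carriers, and flip-method on nvvidconv rotates the image (2 means 180 degrees) for upside-down mounted modules. A sketch with example values:

```python
# Variant with real nvarguscamerasrc/nvvidconv properties (values are examples):
# sensor-id=0 picks the first CSI port, flip-method=2 rotates 180 degrees
csi_pipeline = (
    "nvarguscamerasrc sensor-id=0 "
    "! video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 "
    "! nvvidconv flip-method=2 ! video/x-raw,format=BGRx ! appsink drop=1"
)
# cap = cv2.VideoCapture(csi_pipeline, cv2.CAP_GSTREAMER)
```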

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.