Recording video streams from USB cameras

Colleagues puzzled me with the need to come up with Python code for simultaneous recording from multiple USB cameras.
After checking the Accelerated GStreamer User Guide, it appears that the pipeline below should work, probably wrapped into some Python or C++ code, right?

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)YUY2" ! filesink location="test2.raw" -e

This way you wouldn’t record H264 but rather a raw YUY2 file, which is much bigger. If you can afford H264 compression, you may try instead:

gst-launch-1.0 v4l2src device=/dev/videoX ! 'video/x-raw,format=(string)YUY2' ! videoconvert ! omxh264enc bitrate=8000000 SliceIntraRefreshEnable=true ! 'video/x-h264,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testX.mov

You may adjust H264 bitrate for your resolution and framerate.
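
For a sense of how much bigger the raw recording is, here is a quick back-of-the-envelope comparison in Python (the 1920x1080 @ 30 fps figures are assumptions; adjust them to your camera):

# Rough data-rate comparison: raw YUY2 capture vs. the 8 Mbit/s H264 encode above.
# Assumes 1920x1080 @ 30 fps; YUY2 stores 2 bytes per pixel.
width, height, fps = 1920, 1080, 30

raw_bytes_per_sec = width * height * 2 * fps    # ~124 MB/s of raw YUY2
h264_bytes_per_sec = 8000000 / 8                # 8 Mbit/s is about 1 MB/s

print("raw YUY2: %.1f MB/s (%.1f GB per minute)"
      % (raw_bytes_per_sec / 1e6, raw_bytes_per_sec * 60.0 / 1e9))
print("H264    : %.1f MB/s (%.2f GB per minute)"
      % (h264_bytes_per_sec / 1e6, h264_bytes_per_sec * 60.0 / 1e9))

So at 1080p30 the raw stream is roughly two orders of magnitude larger than the H264 encode.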

@Honey_Patouceul: Thank you.
Maybe you know if the Argus sample supports USB cameras?

And how would one incorporate it into Python?

Hi Andrey, Argus (libargus) only supports MIPI CSI cameras. For USB cameras, you would want to use the V4L2 interface, as you have in your GStreamer pipeline there.

Thank you for letting me know!
Maybe there is a sample Python script that records to a file from a USB camera device?
It may not be the best design, but folks seem to want to be able to start recording remotely somehow using Python, as far as I can tell. I can think of a Python notebook service or an SSH sequence for that, though I am not sure what such designs typically look like. Eventually I came across a rather sophisticated design at How-to stream live videos | Cloud Video Intelligence API Documentation | Google Cloud that could probably be hooked up with GStreamer somehow and terminated at a Google Cloud endpoint for some other purpose.
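
For the simple remote-start case, what I have in mind is roughly the following subprocess wrapper around the H264 pipeline above (an untested sketch; the device node, caps and output filename are placeholders to adapt):

#!/usr/bin/env python
# Untested sketch: start/stop a gst-launch recording from Python.
# /dev/video0, the caps and test0.mov are placeholders.
import signal
import subprocess
import time

PIPELINE = ("gst-launch-1.0 -e v4l2src device=/dev/video0 ! "
            "video/x-raw,format=YUY2 ! videoconvert ! "
            "omxh264enc bitrate=8000000 ! video/x-h264,stream-format=byte-stream ! "
            "h264parse ! qtmux ! filesink location=test0.mov")

def start_recording():
    # gst-launch runs as a child process; this call returns immediately
    return subprocess.Popen(PIPELINE.split())

def stop_recording(proc):
    # SIGINT plus the -e flag make gst-launch send EOS, so qtmux finalizes the file
    proc.send_signal(signal.SIGINT)
    proc.wait()

if __name__ == "__main__":
    proc = start_recording()
    time.sleep(10)   # record for about 10 seconds, or wait for a remote stop command
    stop_recording(proc)

The same script could be started over SSH or from a notebook cell, which is all the remote triggering I would need for now.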
However, I do not have a USB camera to test the GStreamer pipeline with, only a Stereolabs ZED that tends to fail due to a parameter mismatch. Folks who tried it with a USB camera said it threw some errors, which will be attached. What would a workable GStreamer recording without H264 look like?

Could someone verify that the sample works on the Nano with a USB camera, please?

gst-launch-1.0 v4l2src device="/dev/video0" ! \
"video/x-raw, width=640, height=480, format=(string)YUY2" ! \
xvimagesink -e

Source: page 24 of the attached guide.

Accelerated_GStreamer_User_Guide.pdf (594 KB)

I have no Nano for checking, but for launching from Python, this adapted example seems to work on my TX2 (not using video0, which is the onboard camera):

#!/usr/bin/env python

import sys, os
import gi
gi.require_version('Gst', '1.0')
gi.require_version('Gtk', '3.0')
gi.require_version('GstVideo', '1.0')   # for set_window_handle() on a video sink
gi.require_version('GdkX11', '3.0')     # for get_xid() on the GTK drawing area window
from gi.repository import Gst, GObject, Gtk, GstVideo, GdkX11

class GTK_Main:
    def __init__(self):
        window = Gtk.Window(Gtk.WindowType.TOPLEVEL)
        window.set_title("CamRecorder")
        window.set_default_size(100, 100)
        window.connect("destroy", Gtk.main_quit, "WM destroy")
        vbox = Gtk.VBox()
        window.add(vbox)
        self.movie_window = Gtk.DrawingArea()
        vbox.add(self.movie_window)
        hbox = Gtk.HBox()
        vbox.pack_start(hbox, False, False, 0)
        hbox.set_border_width(10)
        hbox.pack_start(Gtk.Label(), False, False, 0)
        self.button = Gtk.Button("Start")
        self.button.connect("clicked", self.start_stop)
        hbox.pack_start(self.button, False, False, 0)
        self.button2 = Gtk.Button("Quit")
        self.button2.connect("clicked", self.exit)
        hbox.pack_start(self.button2, False, False, 0)
        hbox.add(Gtk.Label())
        window.show_all()

        # Set up the gstreamer pipeline
        self.player = Gst.parse_launch(
            "v4l2src device=/dev/video1 ! video/x-raw,format=YUY2 ! nvvidconv ! "
            "omxh264enc bitrate=8000000 SliceIntraRefreshEnable=true ! "
            "video/x-h264,stream-format=byte-stream ! h264parse ! qtmux ! "
            "filesink location=video1.mov "
            "v4l2src device=/dev/video2 ! video/x-raw,format=YUY2 ! nvvidconv ! "
            "omxh264enc bitrate=8000000 SliceIntraRefreshEnable=true ! "
            "video/x-h264,stream-format=byte-stream ! h264parse ! qtmux ! "
            "filesink location=video2.mov")
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def start_stop(self, w):
        if self.button.get_label() == "Start":
            self.button.set_label("Stop")
            self.player.set_state(Gst.State.PLAYING)
        else:
            self.player.send_event(Gst.Event.new_eos())
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")

    def exit(self, widget, data=None):
        Gtk.main_quit()

    def on_message(self, bus, message):
        t = message.type
        if t == Gst.MessageType.EOS:
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")
        elif t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(Gst.State.NULL)
            self.button.set_label("Start")

    def on_sync_message(self, bus, message):
        # Only relevant if the pipeline contains a video sink (e.g. xvimagesink);
        # the recording-only pipeline above never emits this message.
        struct = message.get_structure()
        if not struct:
            return
        message_name = struct.get_name()
        if message_name == "prepare-window-handle":
            # Assign the viewport (GStreamer 1.0 name; 0.10 used "prepare-xwindow-id")
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_window_handle(self.movie_window.get_property("window").get_xid())

GObject.threads_init()
Gst.init(None)
GTK_Main()
Gtk.main()

This may depend on your camera. If it doesn’t provide this resolution for this format, it may still work if v4l2 is able to convert/scale, or it may not.
First check what formats/resolutions/framerates your camera provides, or try without specifying caps and look at the auto-negotiated caps in the verbose output of gst-launch:

v4l2-ctl -d /dev/video0 --list-formats-ext
    
gst-launch-1.0 -ev v4l2src device=/dev/video0 ! xvimagesink

Yes, it plays with:

gst-launch-1.0 -ev v4l2src device=/dev/video0 ! xvimagesink

and the ARC camera shown in lsusb seems to map to two /dev/video devices: one supports H.264 and the other MJPEG/YUYV, according to the queries below

lsusb
Bus 002 Device 002: ID 0bda:0411 Realtek Semiconductor Corp. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 018: ID 05a3:9422 ARC International
v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'MJPG' (compressed)
        Name        : Motion-JPEG
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x360
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 352x288
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)

        Index       : 1
        Type        : Video Capture
        Pixel Format: 'YUYV'
        Name        : YUYV 4:2:2
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x360
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 352x288
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
v4l2-ctl -d /dev/video1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'H264' (compressed)
        Name        : H.264
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 800x600
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 640x360
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 352x288
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 320x240
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.040s (25.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)

Then I am running

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=(string)YUY2' ! videoconvert ! omxh264enc bitrate=8000000 SliceIntraRefreshEnable=true ! 'video/x-h264,stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=testX.mov

and get the attached file, which I play with

mplayer testX.mov

and get a noisy image (attached).
The Python code writes something similar, as per the attached test0.mov.

testX.mov (21.6 MB)
test0.mov (1.68 MB)

what works:

gst-launch-1.0 v4l2src num-buffers=1 ! jpegenc ! filesink location=test4.jpg
gst-launch-1.0 v4l2src device=/dev/video4 do-timestamp=true num-buffers=1 ! image/jpeg,width=1920,height=1080,framerate=25/1 ! jpegparse ! filesink location=camera.jpg

what works, but at a lower resolution [800x600] than required:

gst-launch-1.0 v4l2src device=/dev/video0  num-buffers=10 ! jpegenc ! multifilesink location="frame%d.jpg"

what works with the required 1920x1080 resolution for the image sink:

gst-launch-1.0 v4l2src device=/dev/video4 do-timestamp=true num-buffers=10 ! image/jpeg,width=1920,height=1080,framerate=25/1 ! jpegparse ! multifilesink location=test_%03d.jpeg

what seems to work, but may need enhancement or optimization:

gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=3 ! video/x-h264,width=1920,height=1080,framerate=25/1 ! h264parse ! qtmux ! filesink location=camera.mov

what works with two cameras but fails with three cameras:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true num-buffers=1 ! image/jpeg,width=1920,height=1080,framerate=25/1 ! jpegparse ! multifilesink location=camera0_%03d.jpeg  v4l2src  device=/dev/video4 do-timestamp=true num-buffers=1  ! image/jpeg,width=1920,height=1080,framerate=25/1 ! jpegparse ! multifilesink location=camera4_%03d.jpeg
gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=3 ! video/x-h264,width=1920,height=1080,framerate=25/1 ! h264parse ! qtmux ! filesink location=camera1.mov v4l2src device=/dev/video3 num-buffers=3 ! video/x-h264,width=1920,height=1080,framerate=25/1 ! h264parse ! qtmux ! filesink location=camera3.mov

what does not work:

gst-launch-1.0 v4l2src device=/dev/video0 do-timestamp=true num-buffers=10 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! qtmux ! filesink location=movie_%03d.mov

what executes without errors but produces output that is difficult to read, interpret, or play:

gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1 ! video/x-h264,width=1920,height=1080,framerate=25/1 ! h264parse ! filesink location=camera.mov

that is an attempt to write a 1920x1080 video file; since there is no muxer, the output is presumably a raw H.264 elementary stream that most players will not open as a .mov.

misc:
found triggers:

v4l2-ctl --device /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
v4l2-ctl --device /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=MJPG

to be updated.

@Dusty_nv:
Could you clarify whether it is possible to write to file from multiple cameras with the camera_v4l2_cuda sample of the MMAPI, please?

./camera_v4l2_cuda 
nvbuf_utils: Could not get EGL display connection

	Usage: camera_v4l2_cuda [OPTIONS]

	Example: 
	./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f YUYV -n 30 -c

	Supported options:
	-d		Set V4l2 video device node
	-s		Set output resolution of video device
	-f		Set output pixel format of video device (supports only YUYV/YVYU/UYVY/VYUY)
	-r		Set renderer frame rate (30 fps by default)
	-n		Save the n-th frame before VIC processing
	-c		Enable CUDA aglorithm (draw a black box in the upper left corner)
	-v		Enable verbose message
	-h		Print this usage

Maybe the recording to files could also be done with the DeepStream SDK? Does it support that?
What would be a better way to record three USB cameras to files concurrently? A GStreamer pipeline like the one above?
Thanks.

It works with 2 cameras in my case. It doesn’t work with 3 cameras in my environment [Nano/Xavier], though.
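
One workaround that might be worth trying (untested on Nano/Xavier) is to launch an independent recording process per camera from Python, so that each camera gets its own pipeline instead of sharing a single gst-launch process. The sketch below reuses the per-camera H.264 pipeline from above; the device list is an assumption, substitute your three H.264-capable nodes:

#!/usr/bin/env python
# Untested sketch: one gst-launch recording process per camera.
import signal
import subprocess
import time

DEVICES = ["/dev/video1", "/dev/video3", "/dev/video5"]   # placeholder device nodes

def pipeline_for(dev, index):
    # Same per-camera pipeline as above: the node already outputs H.264,
    # so it is muxed into a .mov without re-encoding.
    return ("gst-launch-1.0 -e v4l2src device=%s ! "
            "video/x-h264,width=1920,height=1080,framerate=25/1 ! "
            "h264parse ! qtmux ! filesink location=camera%d.mov" % (dev, index))

procs = [subprocess.Popen(pipeline_for(dev, i).split())
         for i, dev in enumerate(DEVICES)]

time.sleep(30)                  # record for about 30 seconds

for p in procs:                 # SIGINT plus -e triggers EOS, so qtmux finalizes each file
    p.send_signal(signal.SIGINT)
for p in procs:
    p.wait()

Whether three simultaneous 1080p USB streams fit depends on the cameras sharing USB bandwidth, so this may still hit the same limit.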