How to capture video simultaneously with two cameras using gstreamer?

How can I capture video simultaneously with two cameras using gstreamer (or another tool)?
I tried a bash script with two gstreamer pipelines running in the background (using ampersands). This works if I ssh into the TX1 and run the bash script, but it doesn't work if I run ssh nvidia@x.x.x.x "sudo ./bash_script.sh" from a host PC.


Hi bob20001,

Could you share bash_script.sh? The issue may be due to the DISPLAY/X11 environment; you can run the pipeline as:

DISPLAY=:0 gst-launch-1.0 ...
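If bash_script.sh launches the two pipelines in the background, a minimal sketch of it might look like this (the videotestsrc pipelines are placeholders for your actual camera pipelines):

#!/bin/bash
# Export DISPLAY so the video sinks can reach the local X server
# even when the script is started from a non-interactive ssh session.
export DISPLAY=:0

# Placeholder pipelines; substitute your real camera pipelines here.
gst-launch-1.0 videotestsrc ! xvimagesink &
gst-launch-1.0 videotestsrc pattern=ball ! xvimagesink &

# Wait for both background pipelines so the ssh session
# does not exit (and tear them down) immediately.
wait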

Regards,
-Adrian

Hi,
You just need to put both pipelines in a single gst-launch call, for example:

gst-launch-1.0 \
videotestsrc ! xvimagesink \
videotestsrc pattern=ball ! xvimagesink

This displays two videotestsrc pipelines, one with the default test pattern and the other with the ball pattern.

Note that if you do it this way, the pipelines are dependent on each other: you can't pause, stop, or play one without the other.

A more elegant solution is to use GStreamer Daemon (a framework for controlling GStreamer through TCP messages). With gstd you can do something like this:

gstd    # start the GStreamer daemon
gstd-client pipeline_create p0 videotestsrc name=videotestsrc0 ! xvimagesink
gstd-client pipeline_create p1 videotestsrc name=videotestsrc1 pattern=ball ! xvimagesink
gstd-client pipeline_play p0
gstd-client pipeline_play p1

Now both pipelines can be paused or stopped independently. You can do much more with gstd, like changing element properties while a pipeline is running:

gstd-client element_set p0 "videotestsrc0 pattern checkers-8"
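For example, since the pipelines are independent, you can pause one while the other keeps playing, and tear them down separately when you are done:

gstd-client pipeline_pause p0    # p1 keeps playing
gstd-client pipeline_play p0
gstd-client pipeline_stop p0
gstd-client pipeline_stop p1
gstd-client pipeline_delete p0
gstd-client pipeline_delete p1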

If you want to learn more about gstd, please check out the wiki page:
https://developer.ridgerun.com/wiki/index.php?title=GStreamer_Daemon


Hi ACervantes and miguel.taylor, thank you for the replies.

I tried to follow miguel.taylor’s suggestion and it works.

Here’s my command line:

gst-launch-1.0 \
nvcamerasrc sensor-id=0 num-buffers=300 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor0.h264 \
nvcamerasrc sensor-id=1 num-buffers=300 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor1.h264
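For reference, each raw stream can be checked by playing it back with something like this (assuming the omxh264dec decoder is available on your L4T release):

gst-launch-1.0 filesrc location=./sensor0.h264 ! h264parse ! omxh264dec ! nvoverlaysink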

My only question now is how to make sure they are truly synchronized (i.e., starting and ending at the same time).

Hi bob20001,

You can use the multiqueue element. It creates sink and src pads on demand and holds buffers until all sink pads are ready. With max-size-buffers you can control the number of buffers it holds before it starts discarding them. If you set it to 1, you can use the element for pipeline synchronization:

multiqueue max-size-buffers=1 name=mqueue

With your gst-launch command it would be something like this:

gst-launch-1.0 \
multiqueue max-size-buffers=1 name=mqueue \
nvcamerasrc sensor-id=0 num-buffers=300 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! mqueue.sink_1 \
nvcamerasrc sensor-id=1 num-buffers=300 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! mqueue.sink_2 \
mqueue.src_1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor0.h264 \
mqueue.src_2 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor1.h264

The pipeline is a little messy, but the idea is the following: one nvcamerasrc starts before the other, and the multiqueue discards the buffers generated during that period until the other nvcamerasrc is ready.


At some point you may hit limitations if you decide to add a third concurrent sensor to record from.
If that is the case, you may have to reduce the frame rate, e.g. from 30 to 15 fps, to free up some bandwidth and mitigate the bottleneck.
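For example, a hypothetical three-sensor variant of the recording command above, with fpsRange and the framerate caps dropped to 15 fps (untested; the actual limit depends on your sensors and encoder load):

gst-launch-1.0 \
nvcamerasrc sensor-id=0 fpsRange="15.0 15.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)15/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor0.h264 \
nvcamerasrc sensor-id=1 fpsRange="15.0 15.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)15/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor1.h264 \
nvcamerasrc sensor-id=2 fpsRange="15.0 15.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)15/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=./sensor2.h264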

Miguel, I’m trying to do almost the same thing, and I tried your advice, but it doesn’t work like I’d expect. I would expect that at the end of the recording, each video would have the same number of frames, and that the timestamps on each frame would be within a few milliseconds of each other. However, when we stress test our system, we end up with videos that are still unsynchronized.

I explain it more in my Stack Overflow question here. Is there something I'm not understanding about how multiqueue works?

Can someone check this? c++ - Using GStreamer to extract frames on videos simultaneously - Stack Overflow

You’d better create a new topic here for better visibility, telling us how you get your camera feeds.
In short, once you get the camera feed, you would encode to JPEG, then use multifilesink to save the frames as separate JPEG images:

# From V4L cameras:
gst-launch-1.0 \
v4l2src device=/dev/video0 ! video/x-raw ! videoconvert ! jpegenc ! multifilesink location=cam0_%05d.jpg \
v4l2src device=/dev/video1 ! video/x-raw ! videoconvert ! jpegenc ! multifilesink location=cam1_%05d.jpg

# From CSI cameras accessed through Argus:
gst-launch-1.0 \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvjpegenc ! multifilesink location=cam0_%05d.jpg \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvjpegenc ! multifilesink location=cam1_%05d.jpg
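After the capture stops, a quick sanity check is to count the images from each camera and compare the totals:

ls cam0_*.jpg | wc -l
ls cam1_*.jpg | wc -l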

Thanks for answering, @Honey_Patouceul.

Will this process the videos in parallel, or will it extract all the frames from video_1 first and then from video_2?

Thanks

Tried this as well, but it seems that's not how multiqueue works. The docs also don't state that it will "hold buffers until all sink pads are ready." What I ended up doing was creating a lock-step scheme on the src pads using a semaphore.