Cannot Record 2x USB Frame-Synced MJPEG Cameras Simultaneously for Stereo

I have two USB 2.0 (UVC) cameras that output MJPEG at 1600x1200 @ 60 FPS. The two cameras are connected to each other by a separate wire (independent of USB) that synchronizes frame capture at the camera firmware level (I believe it carries an internal sync signal).

For context, I am trying to create a stereo camera pair for my robotics application.

The USB cables are connected to separate USB ports directly on the Jetson Orin Nano (5.15.122-tegra). No USB hubs are being used.

Orin Nano Info:

cat /etc/nv_tegra_release
# R36 (release), REVISION: 2.0, GCID: 35084178, BOARD: generic, EABI: aarch64, DATE: Tue Dec 19 05:55:03 UTC 2023
# KERNEL_VARIANT: oot
TARGET_USERSPACE_LIB_DIR=nvidia
TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia

I am trying to start and stop recording of the two cameras simultaneously, in MJPEG (as natively output by the cameras).

My first approach was to take both cameras, use the compositor element to place them side by side, and record that to a file.

On a desktop Ubuntu machine (Intel i7-12400), the following command performs well. The two cameras are perfectly synchronized frame-for-frame, and the file size is reasonably small (about 20 MB per second of video):

gst-launch-1.0 -v compositor name=mix \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=1 \
    sink_1::xpos=1600 sink_1::ypos=0 sink_1::alpha=1 \
    ! jpegenc \
    ! avimux \
    ! filesink location=output_2cams_1600_60fps.avi \
    v4l2src device=/dev/video0 \
    ! image/jpeg,width=1600,framerate=60/1 \
    ! jpegdec \
    ! mix.sink_0 \
    v4l2src device=/dev/video2 \
    ! image/jpeg,width=1600,framerate=60/1 \
    ! jpegdec \
    ! mix.sink_1

On the Jetson Orin Nano Dev Kit, the same command does not perform well at all: the two views are not synchronized, frames drop constantly, and the resulting video is unusable.

I have read a lot of posts on this forum and done a lot of searching, and I have tried using the NVIDIA GStreamer plugins to accelerate this:

gst-launch-1.0 -v \
    nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=1600 sink_1::ypos=0 ! \
    nvvidconv ! "video/x-raw(memory:NVMM), width=3200, height=1200, framerate=60/1, format=RGBA" ! \
    nvvidconv ! avimux ! filesink location=output.avi \
    v4l2src device=/dev/video0 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! mix.sink_0 \
    v4l2src device=/dev/video2 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! mix.sink_1

With this approach, one second of video is about 350 MB. On playback the frame drops are even worse, and there is no synchronization at all.
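(For reference, the second nvvidconv in that pipeline hands raw frames straight to avimux, so nothing re-compresses the composited output; if the negotiated format is I420, uncompressed 3200x1200 @ 60 fps works out to roughly 3200 x 1200 x 1.5 bytes x 60 ≈ 345 MB/s, which would explain the file size. Below is a rough sketch of re-compressing before muxing; it uses the software jpegenc, which will be heavy on the Orin Nano CPU at this resolution, so it addresses file size rather than the frame drops, and the queue elements and -e flag are additions I have not verified on the Nano:)

gst-launch-1.0 -e \
    nvcompositor name=mix sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=1600 sink_1::ypos=0 ! \
    nvvidconv ! 'video/x-raw, width=3200, height=1200, format=I420' ! \
    jpegenc ! avimux ! filesink location=output.avi \
    v4l2src device=/dev/video0 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! queue ! mix.sink_0 \
    v4l2src device=/dev/video2 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! queue ! mix.sink_1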

Then I thought a simpler approach would do better:

gst-launch-1.0 \
  v4l2src device=/dev/video0 ! \
  'image/jpeg, width=1600, framerate=60/1' ! \
  jpegdec ! \
  avimux ! filesink location=output1.avi \
  v4l2src device=/dev/video2 ! \
  'image/jpeg, width=1600, framerate=60/1' ! \
  jpegdec ! \
  avimux ! filesink location=output2.avi

While the two files come out roughly the same size (~140 MB for one second of video each) and are actually playable, they end up with slightly different lengths/file sizes and are therefore not frame-synced. I also don't think the files should be that big compared to what I get on the desktop.
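(One idea along these lines: since the cameras already output MJPEG natively, the streams could in principle be muxed straight into AVI with no jpegdec at all, which should keep each file near the cameras' native bitrate and take the decode work off the CPU. A rough sketch of that idea, with the caveat that the queue elements and the -e flag are additions I have not verified on the Nano, and this still does not by itself guarantee the two files start and stop on the same frame:)

gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  'image/jpeg, width=1600, framerate=60/1' ! \
  queue ! avimux ! filesink location=output1.avi \
  v4l2src device=/dev/video2 ! \
  'image/jpeg, width=1600, framerate=60/1' ! \
  queue ! avimux ! filesink location=output2.avi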

My use case is admittedly unusual, but I don't think it is impossible; I am just not sure what to try next.

(For additional context: opening both cameras in OpenCV, whether on the desktop or the Jetson, and reading frames in code also fails to give perfectly synchronized frames, even with multi-threading. Only the GStreamer compositing pipeline works, and only on the desktop.)

TL;DR - Trying to set up a stereo camera system with two USB 2.0 cameras on an Orin Nano. The composited recording works perfectly on a desktop, but on the Nano it struggles with unsynced and dropped frames, making it unusable.

Hi,
Please try the commands individually and see if you can achieve the target frame rate:

$ gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! nv3dsink sync=0
$ gst-launch-1.0 v4l2src device=/dev/video2 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! nv3dsink sync=0

Compared to an x86 desktop, the Orin Nano has less CPU capability, so performance is worse. Please switch to the hardware decoder and see if it can achieve the target frame rate. Also, since the Orin Nano does not have a hardware encoder, it may not reach 60 fps if you want to encode the frames to H.264 or JPEG. If you can achieve 60 fps in camera preview and need encoding, you may consider using an Orin NX.
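(If it helps to read off the achieved frame rate instead of judging visually, the same preview pipeline can be wrapped in fpsdisplaysink, which with -v prints rendered/dropped frame counts and the current fps; a rough sketch, not verified on this particular setup:)

$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg,width=1600,framerate=60/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! fpsdisplaysink video-sink=nv3dsink text-overlay=false sync=false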

OK, thank you. It looks like that command works, but it doesn't run well at all on the Orin Nano. I will try an Orin NX. If I find a solution to this, with or without the NX, I will report back here.
