Hello there!
I am currently working with a Jetson Nano and two IMX219 cameras, acquiring data for later stereo vision processing. I am looking for the correct commands to set up two video streams (one per camera), concatenate them into one side-by-side video, and save it to an MP4 file.
I am kind of a noob with video streams, so I don't even know if this is possible, but any help is appreciated.
Hi,
If you use GStreamer, you can leverage the nvcompositor plugin. If you use jetson_multimedia_api, you can call NvBufferComposite(). For more information, please take a look at the documents:
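For reference, a minimal sketch of how nvcompositor is typically wired up (untested here; it uses videotestsrc so it can run without cameras, and assumes nvvidconv and nvoverlaysink are available on your L4T release — on the Nano the camera sources would replace videotestsrc):

```shell
# Hedged sketch: composite two 640x480 test sources side by side with
# nvcompositor and display the 1280x480 result via nvoverlaysink.
gst-launch-1.0 -e nvcompositor name=comp \
    sink_0::xpos=0   sink_0::ypos=0 sink_0::width=640 sink_0::height=480 \
    sink_1::xpos=640 sink_1::ypos=0 sink_1::width=640 sink_1::height=480 \
  ! nvoverlaysink \
  videotestsrc ! 'video/x-raw, width=640, height=480' ! nvvidconv ! comp.sink_0 \
  videotestsrc pattern=ball ! 'video/x-raw, width=640, height=480' ! nvvidconv ! comp.sink_1
```

The nvvidconv elements move each source into NVMM memory, which nvcompositor expects on its sink pads.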
Jetson Linux API Reference: Main Page | NVIDIA Docs
Welcome — Jetson Linux Developer Guide 34.1 documentation
Thank you for the reply,
I have tested a bit, but I am unable to link the nvcompositor pipeline to a filesink. This is what I have so far:
gst-launch-1.0 nvarguscamerasrc num-buffers=600 wbmode=3 aelock=true exposuretimerange="6000000 6000000" gainrange="1 1" ispdigitalgainrange="1 1" sensor_id=1 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! nvvidconv flip-method=0 ! comp. \
nvarguscamerasrc num-buffers=600 wbmode=3 aelock=true exposuretimerange="6000000 6000000" gainrange="1 1" ispdigitalgainrange="1 1" sensor_id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! nvvidconv flip-method=0 ! comp. \
nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_1::xpos=1920 sink_1::ypos=0 sink_1::width=1920 sink_1::height=1080 ! omxh264enc bitrate=15000000 ! qtmux ! filesink location=/home/jetson/test.mp4 -e
but I get the following error:
could not link comp to omxh264enc-omxh264enc0
What am I missing?
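(Editor's note, hedged: a commonly reported cause of this particular link failure is that nvcompositor outputs RGBA buffers, while omxh264enc expects NV12/I420 input, so an explicit conversion between compositor and encoder is needed. A sketch of just the encoder tail, assuming that is the issue here — the camera branches stay as in the command above:)

```shell
# Hypothetical fix sketch (not verified on this setup): convert the
# compositor's RGBA output to NV12 in NVMM memory before the encoder.
... nvcompositor name=comp ... \
  ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' \
  ! omxh264enc bitrate=15000000 ! qtmux ! filesink location=/home/jetson/test.mp4 -e
```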
Hi,
Does it work with nvoverlaysink?
No, but I managed to do it with videomixer. I'll paste the command here for others to use as well.
gst-launch-1.0 videomixer name=mix background="white" sink_1::xpos=0 sink_1::ypos=0 sink_2::xpos=0 sink_2::ypos=1848 \
! omxh264enc bitrate=15000000 ! qtmux ! filesink location=/home/jetson/test.mp4 \
nvarguscamerasrc num-buffers=300 sensor_id=1 ! 'video/x-raw(memory:NVMM), width=3264, height=1848, format=NV12, framerate=28/1' \
! nvvidconv flip-method=0 \
! mix.sink_1 \
nvarguscamerasrc num-buffers=300 sensor_id=0 ! 'video/x-raw(memory:NVMM), width=3264, height=1848, format=NV12, framerate=28/1' \
! nvvidconv flip-method=0 \
! mix.sink_2 -e
I had to stack the video streams on top of each other, because placing them side by side ended up cropping my stream at 4096 pixels (I guess it is a hardware limitation?).
DaneLLL
October 29, 2020, 6:34am
Hi,
Yes, it is a limitation of the hardware encoder. I suggest you downscale the two sources to 1920x1080 each and stitch them into 3840x1080.
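(Editor's note: a sketch of that suggestion, adapted from the working videomixer command above — unverified, since it needs the two cameras attached. Each source is downscaled to 1920x1080 with nvvidconv, then stitched side by side into a 3840x1080 frame, which stays under the encoder's 4096-pixel width limit. The filename stitched.mp4 is illustrative.)

```shell
# Downscale each camera to 1920x1080 and stitch side by side to 3840x1080.
gst-launch-1.0 -e videomixer name=mix background="white" \
    sink_1::xpos=0 sink_1::ypos=0 sink_2::xpos=1920 sink_2::ypos=0 \
  ! omxh264enc bitrate=15000000 ! qtmux ! filesink location=/home/jetson/stitched.mp4 \
  nvarguscamerasrc num-buffers=300 sensor_id=1 \
  ! 'video/x-raw(memory:NVMM), width=3264, height=1848, format=NV12, framerate=28/1' \
  ! nvvidconv ! 'video/x-raw, width=1920, height=1080' ! mix.sink_1 \
  nvarguscamerasrc num-buffers=300 sensor_id=0 \
  ! 'video/x-raw(memory:NVMM), width=3264, height=1848, format=NV12, framerate=28/1' \
  ! nvvidconv ! 'video/x-raw, width=1920, height=1080' ! mix.sink_2
```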