I am recording two camera streams (1920x1080 at 60 fps) on a Jetson Nano B01, and I need the streams synchronized; speed is very important. I am running in 10W mode with a 5V 5A barrel-jack power supply to maximize available power, and I run jetson_clocks on startup to optimize performance.
The pipeline that I am using is:
gst-launch-1.0 -e multiqueue sync-by-running-time=true name=mqueue \
  nvarguscamerasrc sensor-id=0 sensor-mode=1 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1" ! mqueue.sink_1 \
  nvarguscamerasrc sensor-id=1 sensor-mode=1 ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)60/1" ! mqueue.sink_2 \
  mqueue.src_1 ! nvvidconv ! clockoverlay halignment=center ! timeoverlay time-mode=2 ! nvvidconv ! nvv4l2h265enc ! h265parse ! mp4mux ! filesink location=left.mp4 \
  mqueue.src_2 ! nvvidconv ! clockoverlay halignment=center ! timeoverlay time-mode=2 ! nvvidconv ! nvv4l2h265enc ! h265parse ! mp4mux ! filesink location=right.mp4
which I use within a Python script like this:
import os, signal, subprocess

# gst holds the gst-launch-1.0 command string shown above
pro = subprocess.Popen(gst, stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
# Do stuff
os.killpg(os.getpgid(pro.pid), signal.SIGINT)  # SIGINT + "-e" lets mp4mux finalize the files
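For reference, here is a minimal self-contained sketch of that start/stop pattern (own process group so the whole pipeline can be signalled at once, SIGINT so that with `-e` mp4mux gets an EOS and writes a valid file, plus a timeout fallback). The `sleep 60` command is just a placeholder standing in for the real gst-launch-1.0 string:

```python
import os
import signal
import subprocess
import time

def start_pipeline(cmd):
    """Start a shell pipeline in its own process group so we can
    signal the whole group (gst-launch-1.0 and any children) at once."""
    return subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)

def stop_pipeline(proc, timeout=10):
    """Send SIGINT to the process group; with gst-launch-1.0 -e this
    triggers EOS so mp4mux can finalize the file, then wait for exit.
    Escalate to SIGKILL if the pipeline does not exit in time."""
    os.killpg(os.getpgid(proc.pid), signal.SIGINT)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        return proc.wait()

# Placeholder command for illustration; substitute the real
# gst-launch-1.0 pipeline string here.
p = start_pipeline("sleep 60")
time.sleep(0.2)  # "Do stuff"
rc = stop_pipeline(p)
```

The timeout/SIGKILL fallback matters in practice: if a camera branch hangs, a bare SIGINT can leave the process alive and the recording script blocked.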
The clockoverlay/timeoverlay elements are needed so that I can synchronize the videos afterwards.
Can anyone suggest ways to improve/optimize this pipeline? Ideally the streams would be synchronized at capture time so that I don't have to do postprocessing using the timestamps.
Is there a cleaner way to run this pipeline from within a Python or C++ program?
Are there better (open source) alternatives that I should investigate?
Thanks