How to create a mux gst pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4

I am trying to create a plugin (s1plugin) that consumes two synchronized camera inputs. Could anyone comment on whether the following pipeline makes sense?

gst-launch-1.0 \
nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=<primary-detector-config> ! nvvideoconvert name=c102 \
nvarguscamerasrc sensor_mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=<primary-detector-config> ! nvvideoconvert name=c104 \
c102. ! s1.sink_0 s1plugin name=s1 \
c104. ! s1.sink_1 s1plugin name=s1 \
s1. ! nvdsosd ! nvegltransform ! nveglglessink

Please help. Thanks a lot.

The two streams cannot be fed to nvstreammux through the same sink pad.
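For reference, a minimal sketch of the expected wiring: each source requests its own sink pad (sink_0, sink_1) on a single nvstreammux instance, and batch-size matches the number of sources. The caps and sensor_mode values are simply carried over from the pipeline above; fakesink is only a stand-in for the rest of the pipeline.

gst-launch-1.0 \
nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_0 \
nvarguscamerasrc sensor_mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_1 \
nvstreammux name=m batch-size=2 width=1280 height=720 ! fakesink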

Thank you for your comment. How about this one?

gst-launch-1.0 \
nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_0 nvstreammux name=m batch-size=2 width=1280 height=720 \
nvarguscamerasrc sensor_mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! m.sink_1 \
m. ! nvinfer config-file-path=<primary-detector-config> ! nvvideoconvert ! s1plugin ! nvdsosd ! nvegltransform ! nveglglessink

Does this make sense? In this case, how do I guarantee that the video0 and video1 frames are in sync when consumed by s1plugin, given that the mux sits several elements earlier in the pipeline?

nvstreammux only handles inference-related information; it will not change the video timestamps. What is your sync plugin for? To align the two streams' timelines? Will it drop frames when the time gap between the two streams is too big? Do you want to sync the videos before inference?
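If the answer to the last question is yes, such a pipeline might look roughly like the sketch below. Note that syncelem is a hypothetical placeholder name for an element with two sink pads and two src pads that aligns the streams before batching; whether s1plugin itself or a separate element plays this role depends on your design.

gst-launch-1.0 \
nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! sync.sink_0 \
nvarguscamerasrc sensor_mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! sync.sink_1 \
syncelem name=sync \
sync.src_0 ! m.sink_0 \
sync.src_1 ! m.sink_1 \
nvstreammux name=m batch-size=2 width=1280 height=720 ! nvinfer config-file-path=<primary-detector-config> ! nvvideoconvert ! s1plugin ! nvdsosd ! nvegltransform ! nveglglessink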

The plugin is for computing stereo information from synchronized left and right cameras. We have two choices:

  1. run the left and right cameras at 60 fps in async mode and rely on software sync (in this case we can drop frames to align the left and right frames, as long as we meet the 30 fps real-time criterion), or
  2. run the left and right cameras at 30 fps in sync mode (in this case we cannot drop frames).

So what would be the appropriate pipeline for case 1) and case 2) respectively?

Thanks a lot for your comments.

If there is any possibility of frame dropping, it should happen before the streams go into nvstreammux.
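For case 1, a sketch along those lines is below. It assumes your camera module exposes a sensor mode that delivers 60 fps (the sensor_mode index is camera-specific); videorate drop-only=true then decimates each branch to 30 fps before the mux, so all dropping happens upstream of nvstreammux, and live-source=1 tells the mux the inputs are live cameras. Note that videorate drops per branch based on timestamps only; pairwise left/right alignment would still have to be done by your sync logic. For case 2 (30 fps sync mode, no drops allowed), the earlier two-source pipeline can be used unchanged.

gst-launch-1.0 \
nvarguscamerasrc sensor_mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=60/1, format=NV12' ! videorate drop-only=true ! 'video/x-raw(memory:NVMM),framerate=30/1' ! m.sink_0 \
nvarguscamerasrc sensor_mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=60/1, format=NV12' ! videorate drop-only=true ! 'video/x-raw(memory:NVMM),framerate=30/1' ! m.sink_1 \
nvstreammux name=m batch-size=2 width=1280 height=720 live-source=1 ! nvinfer config-file-path=<primary-detector-config> ! nvvideoconvert ! s1plugin ! nvdsosd ! nvegltransform ! nveglglessink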