Can't get nvstreammux to work with tee'd sources

• Hardware Platform (Jetson / GPU) Jetson Orin AGX
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only) 5.1

When I introduce a tee between my video source and the rest of my inference pipeline (nvstreammux ! nvinfer ! nvstreamdemux), it stops working.

I’m trying to feed multiple RTSP sources through nvinfer by way of nvstreammux. I can do this successfully on its own, but I also want to save the incoming video to files via a tee feeding into splitmuxsink, which writes the video in configurable-length chunks so I can access old chunks while still streaming new ones. To do this, I insert a tee after rtspsrc ! rtph264depay ! h264parse, which lets me write out the H.264 stream using the encoding that came off the RTSP camera instead of re-encoding on the Jetson device.

Here is the full pipeline that I think should run, but it does not pass any video. Note that if I run just fragments A, B, and C, it works fine and the files are written. If I run fragments A, B, D, and E, it also works fine and I see the video window with bounding boxes. When I run all of the fragments together, the pipeline just sits idle after the trace shows a Play request.

# Fragment A - the mux, infer, demux pipeline fragment
nvstreammux name=mux batch-size=2 width=1920 height=1080
! nvinfer config-file-path=... batch-size=2
! nvstreamdemux name=demux

# Fragment B - the rtsp source tee'd after the h264parse component
rtspsrc location='rtsp:...'
! rtph264depay
! h264parse
! tee name=t1

# Fragment C - the branch of the tee feeding splitmuxsink to write out the video chunk files
t1. ! queue ! splitmuxsink location='./test_seg_%04d.mkv' name=splitmuxsink muxer=matroskamux max-size-time=10000000000

# Fragment D - the branch of the tee feeding the mux above
t1. ! queue ! nvv4l2decoder ! queue ! mux.sink_0

# Fragment E - visualizing the video with bounding boxes coming out of inference
demux.src_0  ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA"  ! nvdsosd ! nv3dsink
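For reference, this is roughly how the fragments above combine into a single command. The script below only assembles and prints the pipeline string (the RTSP URL and nvinfer config path are placeholders); gst-launch-1.0 itself would have to be invoked on the Jetson:

```shell
#!/bin/sh
# Placeholders — substitute your camera URL and nvinfer config path.
RTSP_URL='rtsp://example.invalid/stream'
CONFIG='/path/to/config_infer_primary.txt'

# Fragments A–E assembled in the (stalling) order, h264parse upstream of the tee.
PIPELINE="
  nvstreammux name=mux batch-size=2 width=1920 height=1080
  ! nvinfer config-file-path=$CONFIG batch-size=2
  ! nvstreamdemux name=demux
  rtspsrc location=$RTSP_URL
  ! rtph264depay ! h264parse ! tee name=t1
  t1. ! queue ! splitmuxsink location=./test_seg_%04d.mkv muxer=matroskamux max-size-time=10000000000
  t1. ! queue ! nvv4l2decoder ! queue ! mux.sink_0
  demux.src_0 ! 'video/x-raw(memory:NVMM), format=NV12' ! queue ! nvvideoconvert
  ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvdsosd ! nv3dsink
"

# Print the assembled pipeline; on the device you would run:
#   gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```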

Any ideas?

I got this to work, but I don’t know why the change worked and would love to understand it. All I did to make the pipeline run was move the h264parse to the downstream side of the tee. So now I have:

gst-launch-1.0
nvstreammux name=mux batch-size=2 width=1920 height=1080
! nvinfer config-file-path=… batch-size=2
! nvstreamdemux name=demux

rtspsrc location='…'
! rtph264depay
! tee name=t1
t1.
! queue
! h264parse
! splitmuxsink location='./test_seg_%04d.mkv' name=splitmuxsink muxer=matroskamux max-size-time=10000000000
t1.
! queue
! h264parse
! nvv4l2decoder
! queue
! mux.sink_0

demux.src_0
! "video/x-raw(memory:NVMM), format=NV12"
! queue
! nvvideoconvert
! "video/x-raw(memory:NVMM), format=RGBA"
! nvdsosd
! nv3dsink

Why does this fail with h264parse on the upstream side of the tee, but work with it on the downstream side, as I have here?

Since your command line is incomplete, I can’t tell where the error lies, but I don’t believe your conclusion is accurate. Below is my command line for your reference.

I put h264parse before the tee:

gst-launch-1.0 -e
nvstreammux name=mux batch-size=1 width=1920 height=1080
! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1
! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd ! nvvideoconvert
! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4

rtspsrc location="rtsp://xxxxxx"
! rtph264depay
! h264parse
! tee name=t1
t1. ! queue ! filesink location=1.h264
t1. ! nvv4l2decoder ! queue ! mux.sink_0