GstQTDemux:qtdemux0: streaming stopped, reason not-negotiated (-4)

I’m trying to test a GStreamer pipeline with DeepStream plugins (on a Xavier NX), but it seems I’m missing something when running:

gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! tee name=t \
    t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! m.sink_0        \
    t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! m.sink_1     \
    t. ! queue ! nvvideoconvert src-crop=0:1080:1920:1080 ! m.sink_2     \
    t. ! queue ! nvvideoconvert src-crop=1920:1080:1920:1080 ! m.sink_3  \
    t. ! queue ! nvvideoconvert ! m1.sink_0  \
nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer batch-size=4 interval=1 config-file-path=config_infer_primary.txt ! nvtracker tracker-width=320 tracker-height=320 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so ! nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! nvdsosd ! nvegltransform ! fpsdisplaysink text-overlay=false video-sink="nveglglessink sync=false" -v \
uridecodebin uri=file:///home/user/test-videos/right_10_15.mp4 ! nvvideoconvert ! m1.sink_1 \
nvstreammux name=m1 batch-size=2 width=3840 height=2160 ! nvvideoconvert ! nvmultistreamtiler rows=1 columns=2 width=1280 height=720 ! nvegltransform ! fpsdisplaysink text-overlay=false video-sink="nveglglessink sync=false" -v 
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin1/GstQTDemux:qtdemux0: Internal data stream error.
Additional debug info:
qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin1/GstQTDemux:qtdemux0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...

What I am trying to do is:

uridecodebin -> nvvideoconvert -> tee -> t. queue -> nvvideoconvert crop top-left   -> |
                                         t. queue -> nvvideoconvert crop top-right  -> |
                                         t. queue -> nvvideoconvert crop btm-left   -> | nvstreammux  batch-size=4 -> nvinfer -> nvtracker -> nvmultistreamtiler -> nvdsosd -> nvegltransform -> nveglglessink
                                         t. queue -> nvvideoconvert ctop btm-right  -> |

                                         t. queue -> nvvideoconvert -> |
                                     uridecodebin -> nvvideoconvert -> | nvstreammux batch-size=2 -> nvvideoconvert ! nvmultistreamtiler ! nvegltransform ! nveglglessink

Can this pipeline be optimized further, or at least made to work?

My goal is to take stereo video (left and right) and perform inference only on the left stream, by cropping the left video into 4 parts (top-left, top-right, bottom-left, bottom-right). I also need the left and right video at the sink simultaneously, so I made a copy of the uncropped left video, decoded the right video with uridecodebin, and tried to mux them. The result should be the left original video, the right original video, and the inference output on the left video.

Both left and right videos are 3840x2160.
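One common cause of a “not-negotiated” failure in layouts like this is that different tee branches negotiate different caps toward the two muxers. A hedged experiment (a sketch, not a confirmed fix) is to pin explicit NVMM caps after each nvvideoconvert so every branch presents the same format to its mux pad. The NV12 format, the reduced two-branch layout, and the fakesink placeholder below are all assumptions for illustration:

```shell
# Sketch only: pin identical NVMM/NV12 caps on each branch feeding the mux
# so caps negotiation cannot diverge between branches. NV12 is an assumption;
# adjust to whatever the decoder actually outputs.
CAPS='video/x-raw(memory:NVMM),format=NV12'

gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 \
    ! nvvideoconvert ! tee name=t \
  t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! "$CAPS" ! m.sink_0 \
  t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! "$CAPS" ! m.sink_1 \
  nvstreammux name=m batch-size=2 width=1920 height=1080 ! fakesink
```

If this negotiates, the same capsfilter can be applied branch by branch to the full four-crop pipeline to find which link is failing.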

The same issue occurs if I try it this way:

gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! tee name=t \
    t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! m.sink_0        \
    t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! m.sink_1     \
    t. ! queue ! nvvideoconvert src-crop=0:1080:1920:1080 ! m.sink_2     \
    t. ! queue ! nvvideoconvert src-crop=1920:1080:1920:1080 ! m.sink_3  \
nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer batch-size=4 interval=1 config-file-path=config_infer_primary.txt ! nvtracker tracker-width=320 tracker-height=320 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so ! nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=3840 height=2160 ! nvvideoconvert ! m1.sink_2 \
uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! m1.sink_0 \
uridecodebin uri=file:///home/user/test-videos/right_10_15.mp4 ! nvvideoconvert ! m1.sink_1 \
nvstreammux name=m1 batch-size=3 width=3840 height=2160 ! nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=3840 height=2160 ! nvegltransform ! fpsdisplaysink text-overlay=false video-sink="nveglglessink sync=true" -v
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin2/GstQTDemux:qtdemux2: Internal data stream error.
Additional debug info:
qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin2/GstQTDemux:qtdemux2:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
[NvDCF] De-initialized
Freeing pipeline ...

What is the resolution of your “left_10_15.mp4” video? The property is defined in Gst-nvvideoconvert — DeepStream 5.1 Release documentation; the “src-crop” area must not extend outside the video.
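Since src-crop takes left:top:width:height, it is easy to sanity-check on the command line that each rectangle stays inside the frame. A plain-shell check (the 3840x2160 size and the four crop values are taken from the pipelines in this thread):

```shell
#!/bin/sh
# Verify that each src-crop rectangle (left:top:width:height) lies fully
# inside a 3840x2160 frame; an out-of-bounds crop is one reason
# nvvideoconvert can fail to negotiate.
W=3840; H=2160
for crop in 0:0:1920:1080 1920:0:1920:1080 0:1080:1920:1080 1920:1080:1920:1080; do
  left=${crop%%:*}; rest=${crop#*:}
  top=${rest%%:*};  rest=${rest#*:}
  cw=${rest%%:*};   ch=${rest#*:}
  if [ $((left + cw)) -le "$W" ] && [ $((top + ch)) -le "$H" ]; then
    echo "$crop OK"
  else
    echo "$crop OUT OF BOUNDS"
  fi
done
```

For a 3840x2160 source, all four quadrant crops above print OK, so the crop coordinates themselves are not the problem here.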

It’s 3840x2160.

By the way, we are trying to achieve this: Where to put the nvmultistreamtiler in a pipeline? - #5 by aditdoshi333

Below is the pipeline that is not working. The tracker does not do anything here; the output from nvdsosd is empty, just frames with no tracking:

gst-launch-1.0 uridecodebin uri=file://test-videos/left_10_15.mp4 ! tee name=t \
    t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! m.sink_0        \
    t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! m.sink_1     \
    t. ! queue ! nvvideoconvert src-crop=0:1080:1920:1080 ! m.sink_2     \
    t. ! queue ! nvvideoconvert src-crop=1920:1080:1920:1080 ! m.sink_3  \
nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer batch-size=4 interval=1 config-file-path=config_infer_primary.txt ! nvmultistreamtiler rows=2 columns=2 width=3840 height=2160 ! nvvideoconvert ! nvdsosd ! nvegltransform ! fpsdisplaysink text-overlay=false video-sink="nveglglessink sync=false" -v

What is your platform? The pipeline you listed works well on my platform.

Hey, thanks for the response. That pipeline now works with the tracker,

but this one is still not working:

gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! tee name=t \
    t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! m.sink_0        \
    t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! m.sink_1     \
    t. ! queue ! nvvideoconvert src-crop=0:1080:1920:1080 ! m.sink_2     \
    t. ! queue ! nvvideoconvert src-crop=1920:1080:1920:1080 ! m.sink_3  \
nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvinfer batch-size=4 interval=1 config-file-path=config_infer_primary.txt ! nvtracker tracker-width=320 tracker-height=320 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so ! nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=3840 height=2160 ! nvvideoconvert ! m1.sink_2 \
uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! m1.sink_0 \
uridecodebin uri=file:///home/user/test-videos/right_10_15.mp4 ! nvvideoconvert ! m1.sink_1 \
nvstreammux name=m1 batch-size=3 width=3840 height=2160 ! nvvideoconvert ! nvmultistreamtiler rows=2 columns=2 width=3840 height=2160 ! nvegltransform ! fpsdisplaysink text-overlay=false video-sink="nveglglessink sync=true" -v

Both videos are 4K (3840x2160), and I’m running on a Xavier NX.

It seems a pipeline with two nvstreammux elements does not work. We will check it.
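Until the two-mux case is confirmed, one hedged workaround is to drop the second nvstreammux from the pipeline and run the stereo preview as a separate process. You lose frame-exact sync between the two windows, but each pipeline stays in the single-mux layout that is reported to work above. Both commands are sketches assembled from the pipelines in this thread:

```shell
# Process 1 (sketch): the single-nvstreammux inference pipeline
# (tee -> 4 crops -> nvstreammux -> nvinfer -> tiler -> osd -> sink),
# backgrounded so the preview can run alongside it.
gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! tee name=t \
    t. ! queue ! nvvideoconvert src-crop=0:0:1920:1080 ! m.sink_0 \
    t. ! queue ! nvvideoconvert src-crop=1920:0:1920:1080 ! m.sink_1 \
    t. ! queue ! nvvideoconvert src-crop=0:1080:1920:1080 ! m.sink_2 \
    t. ! queue ! nvvideoconvert src-crop=1920:1080:1920:1080 ! m.sink_3 \
  nvstreammux name=m batch-size=4 width=1920 height=1080 \
    ! nvinfer batch-size=4 interval=1 config-file-path=config_infer_primary.txt \
    ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 \
    ! nvdsosd ! nvegltransform ! nveglglessink sync=false &

# Process 2 (sketch): plain side-by-side preview of left and right,
# no inference, using its own single nvstreammux.
gst-launch-1.0 uridecodebin uri=file:///home/user/test-videos/left_10_15.mp4 ! nvvideoconvert ! m1.sink_0 \
  uridecodebin uri=file:///home/user/test-videos/right_10_15.mp4 ! nvvideoconvert ! m1.sink_1 \
  nvstreammux name=m1 batch-size=2 width=3840 height=2160 \
    ! nvmultistreamtiler rows=1 columns=2 width=1280 height=720 \
    ! nvegltransform ! nveglglessink sync=false
```

Whether two nveglglessink windows can coexist on the Xavier NX display is itself something to verify; this only sidesteps the two-mux question, it does not answer it.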

@satyajitghana1999
Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4.1
• TensorRT Version: 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only): None