Unable to work with rtmp streams directly while using nvstreamdemux

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU - Tesla T4
• DeepStream Version 5.0.1
• TensorRT Version The version shipped with DeepStream SDK 5.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 440.33.01
• Issue Type (questions, new requirements, bugs)

My pipeline -
uridecodebin → mux → pgie → vidconv-caps1 → demuxer
→ vidconv-caps2 → osd → nvvidconv-caps → encoder-caps → h264parser → flvmux → rtmp sink
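For reference, a minimal sketch of how the per-stream branch after the demuxer might be linked in the Python app. This is an illustration only, not the actual application code; the element names (`demux`, `next_element`) and the single-source / batch-size=1 assumption are mine.

```python
# Minimal sketch, assuming batch-size=1 and that all elements in the pipeline
# above have already been created and added to the pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def link_demux_branch(demux, next_element):
    """Request src_0 from nvstreamdemux and link it to the first element
    of the per-stream branch (vidconv-caps2 in the pipeline above)."""
    # nvstreamdemux exposes src_%u request pads, one per stream in the batch.
    demux_src = demux.get_request_pad("src_0")
    if demux_src is None:
        raise RuntimeError("Unable to request src_0 pad from nvstreamdemux")

    branch_sink = next_element.get_static_pad("sink")
    if demux_src.link(branch_sink) != Gst.PadLinkReturn.OK:
        raise RuntimeError("Failed to link nvstreamdemux src_0 to the branch")
```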

I’m running a custom application. The application pauses indefinitely after 5-7 frames if I run it with:
python3 deepstream.py rtmp://13.xxx.xxx.xx/<stream_key>

But it runs perfectly fine if I copy the video using ffmpeg and then give the resulting RTMP stream as input:

1) ffmpeg -re -i rtmp://13.xxx.xxx.xx/<stream_key> -c:V copy -f flv rtmp://13.xxx.xxx.xx/<new_stream_key>
2) python3 deepstream.py rtmp://13.xxx.xxx.xx/<new_stream_key>

Note:
• The custom app works with the original RTMP stream when a tiler element is used instead of the demuxer.
• The original RTMP stream also works with the sample application deepstream-imagedata-multistream.py.

Any reason as to why this occurs?

I’ve tried the following pipeline and the FLV stream is generated continuously, so the problem is likely in your code. You will need to debug it on your side.

gst-launch-1.0 --gst-debug=v4l2videoenc:5 uridecodebin uri=rtmp://xxxxxxxxx ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 live-source=1 ! queue ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvstreamdemux name=demux demux.src_0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc bitrate=2000000 bufapi-version=TRUE ! h264parse ! flvmux ! fakesink sync=0
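To help with the "debug by yourself" step, one way to narrow down where the stall happens is to attach a buffer probe on the nvstreamdemux source pad and log how many buffers actually leave the demuxer. This is only a debugging sketch under my own assumptions; the variable `demux_src` is the src_0 pad obtained from nvstreamdemux, not something from the original app.

```python
# Debugging sketch (not from the original application): count buffers flowing
# out of the demuxer to see whether the stall is upstream or downstream of it.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

frame_count = 0


def demux_src_probe(pad, info, user_data):
    """Log every buffer that the demuxer pushes downstream."""
    global frame_count
    frame_count += 1
    print(f"demux src_0 pushed buffer #{frame_count}")
    return Gst.PadProbeReturn.OK


# demux_src is the src_0 pad requested from nvstreamdemux, e.g. via get_request_pad()
# demux_src.add_probe(Gst.PadProbeType.BUFFER, demux_src_probe, None)
```

If the counter keeps increasing while the RTMP output freezes, the problem is more likely in the encoder/flvmux/rtmp branch than in nvstreamdemux itself; if it stops after 5-7 buffers, the stall is at or before the demuxer.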