RTSP output: "reference in DPB was never decoded"

• Hardware Platform: Jetson
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.2

When using an IPC (rtsp://*.) input, the running process prints
"reference in DPB was never decoded", and then the stream's FPS drops and frames are lost. The corresponding pipeline:
nvstreammux → nvinfer → nvvideoconvert → nvmultistreamtiler → tee
tee branch 1: queue1 → nvdsosd → nvvideoconvert → capsfilter → nvv4l2h264enc → rtph264pay → udpsink
tee branch 2: queue2 → nvvideoconvert → capsfilter → avenc_mpeg4 → mpeg4videoparse → splitmuxsink

However, this message does not appear when using a local *.mp4 video file.

  1. In the pipeline, how do you get the RTSP data, and how do you decode it? Please share the element names.
  2. It is likely a network issue; decoding will fail if some packets are lost.
    If you still need to check, please dump some data with a command like this:
    gst-launch-1.0 rtspsrc location=rtsp://xx ! rtph264depay ! h264parse ! mux. mpegtsmux name=mux ! filesink location=output.ts
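Since lost RTP packets are the suspected cause, one quick sanity check on the dumped stream is to look for gaps in the RTP sequence numbers. The helper below is a minimal sketch (not from the original thread; the function name is illustrative) that counts missing packets from a list of 16-bit RTP sequence numbers, handling wrap-around at 65536:

```python
def count_lost_packets(seq_numbers):
    """Count missing RTP packets from observed 16-bit sequence numbers.

    RTP sequence numbers wrap at 2**16, so the delta between two
    consecutive observed numbers is computed modulo 65536.  A delta
    greater than 1 means packets in between were never received.
    Reordered packets are not handled in this sketch.
    """
    lost = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        delta = (cur - prev) % 65536
        if delta > 1:
            lost += delta - 1
    return lost
```

If this reports losses for the IPC stream but not for the local file, it supports the packet-loss explanation for the decoder warning.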

The stream is pulled from the IPC camera, and the message appears after it has been running for a while.

The desynchronization delay between the tiles of the multi-view video display is also serious.

Could you share the whole pipeline? I am wondering how you decoded the source.

@fanzh My complete code is as follows:

rtsp.py.txt (26.1 KB)

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

  1. Is there still a "reference in DPB was never decoded" error?
  2. Is the output video's FPS low? You can use this method to measure FPS.
  3. Noticing there are 4 models and 1 encoder in the pipeline, can you use top/nvidia-smi to check whether it is a performance issue? You can use this method to improve the performance.
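The FPS-measurement method referenced above is not reproduced in the thread, but the usual idea is to call a counter once per frame (e.g. from a GStreamer pad-probe callback) and compute a rolling rate. A minimal, self-contained Python sketch (the class name and the injectable clock parameter are illustrative assumptions, not DeepStream API):

```python
import time
from collections import deque


class FpsCounter:
    """Rolling FPS estimate from frame-arrival timestamps.

    Call tick() once per frame; fps() returns the rate measured over
    the last `window` seconds.  The clock is injectable for testing.
    """

    def __init__(self, window=5.0, clock=time.monotonic):
        self.window = window
        self.clock = clock
        self.stamps = deque()

    def tick(self):
        now = self.clock()
        self.stamps.append(now)
        # Drop timestamps that fell out of the measurement window.
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

Attaching one such counter per stream makes it easy to confirm whether the FPS drop coincides with the "reference in DPB was never decoded" messages.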

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.