I’m building a DeepStream face-detection pipeline fed by RTSP sources, but I’m hitting a problem linking a queue to `nvstreammux`. The pipeline builds successfully and the pad link call even returns `GST_PAD_LINK_OK`, yet no data flows through the mux at runtime.
Code snippet:
```python
# Request a sink pad from the mux and link the queue's src pad to it.
# (get_request_pad() is deprecated since GStreamer 1.20 in favor of
# request_pad_simple(), but both return the requested pad.)
mux_sinkpad = streammux.get_request_pad("sink_0")
queue_srcpad = queue_pre.get_static_pad("src")
link_result = queue_srcpad.link(mux_sinkpad)
print(f"Pad link result: {link_result}")
if link_result != Gst.PadLinkReturn.OK:
self.log("ERROR", "Failed queue_pre → streammux linking")
return False
```
Expected behavior:
The pipeline should successfully link all elements including the queue to the streammux, allowing the RTSP stream to flow through the inference pipeline.
Actual behavior:
- Pipeline builds without errors
- Pad link returns `GST_PAD_LINK_OK`
- Despite the `OK` return, the queue → streammux connection silently carries no data
- No video frames ever reach the inference element
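To get more signal on the silent failure, GStreamer's own debug environment variables can be raised before `Gst.init()` runs. These are standard GStreamer knobs, not DeepStream-specific; the `nvstreammux` debug category name and the `/tmp` dump directory are assumptions on my part:

```python
import os

# GStreamer reads these during Gst.init(), so set them before gi/Gst is imported.
os.environ["GST_DEBUG"] = "3,nvstreammux:5"    # warnings globally, verbose for the mux
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"  # dump .dot pipeline graphs on state changes
```

The dumped `.dot` graphs show which pads actually ended up linked and with what caps.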
Environment:
- NVIDIA Jetson platform
- DeepStream SDK
- GStreamer with Python bindings
- RTSP source input
What I’ve tried:
- Verified all elements are properly added to the pipeline
- Confirmed queue properties (max-size-buffers, leaky settings)
- Checked that streammux properties are correctly set (batch-size, dimensions)
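For reference, the mux configuration looks roughly like this (fragment only, since `nvstreammux` needs a DeepStream install to instantiate; the concrete values are illustrative). My understanding is that `live-source` and `batched-push-timeout` matter for live RTSP input, and I'm not certain mine are right:

```python
# Fragment: `streammux` is the nvstreammux element from the pipeline above.
streammux.set_property("batch-size", 1)  # single RTSP source
streammux.set_property("width", 1280)    # mux output resolution
streammux.set_property("height", 720)
# For live sources, push a batch after a timeout instead of waiting to fill it:
streammux.set_property("live-source", 1)
streammux.set_property("batched-push-timeout", 40000)  # microseconds (~one 25 fps frame)
```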
Question:
Is there a specific approach required for linking to `nvstreammux` request pads, or are there additional properties I need to set on the queue or streammux elements for RTSP sources?