@Fiona.Chen, thank you for your response.
Yes, this is exactly what I want to achieve. However, I have a few concerns.
If I later expand my pipeline to include five models, would your proposed solution look like this?
source → nvstreammux → PGIE0 → PGIE1 → PGIE2 → PGIE3 → PGIE4 → fakesink
My concern is whether chaining multiple models in a single inference pipeline would introduce significant latency across the entire pipeline. Additionally, how flexible is this approach? For example, if I need to activate only PGIE0 and PGIE1 at one moment and later switch to PGIE2, PGIE3, and PGIE4, will adjusting the interval parameter produce the behavior I described in the main question?
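For concreteness, here is a minimal sketch of the switching I have in mind, assuming the nvinfer interval property (the number of consecutive batches to skip between inferences) is the intended control; the element names pgie0 … pgie4 are placeholders:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def set_model_active(pipeline, name, active):
    # interval=0 runs inference on every batch; a very large interval
    # effectively deactivates inference for that element.
    pgie = pipeline.get_by_name(name)  # e.g. "pgie2" (placeholder name)
    pgie.set_property("interval", 0 if active else 1000000)

# Switching from {PGIE0, PGIE1} to {PGIE2, PGIE3, PGIE4}:
# for n in ("pgie0", "pgie1"):
#     set_model_active(pipeline, n, False)
# for n in ("pgie2", "pgie3", "pgie4"):
#     set_model_active(pipeline, n, True)
```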
Another question relates to metadata persistence. Each inference model has its own probe function, and some of them add metadata to the Gst Buffer. If PGIE0 and PGIE3 are both classification models that use pyds.NvDsClassifierMeta, will the metadata added by PGIE0 interfere with the metadata that PGIE3's probe function sees? Or does consuming the inference result in a probe function make the metadata unavailable to subsequent models?
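To make the concern concrete, here is roughly how I read classifier metadata in a probe (a sketch based on my understanding of the pyds API; I am assuming the unique_component_id field on NvDsClassifierMeta is what distinguishes which model attached a given result):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie3_src_probe(pad, info, user_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                # Results from PGIE0 and PGIE3 share this list;
                # unique_component_id tells them apart.
                if cls_meta.unique_component_id == 3:  # placeholder id
                    pass  # handle PGIE3's classification here
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```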
Realization About the valve Element
I discovered that the valve element does not drop frames from the entire pipeline; it only drops them in the branch where it is placed. This means that in a setup with multiple models, all inference branches must converge into a common sink to allow smooth switching between them.
For example, the following approach does not work: each branch has its own independent sink, so a frame dropped by, e.g., the valve in the PGIE0 branch never reaches its fakesink0 element, and the pipeline crashes:
source → nvstreammux → tee → valve → PGIE0 → fakesink0
                        ├── valve → PGIE1 → fakesink1
                        └── valve → PGIE2 → fakesink2
However, the following setup works because all branches merge into a single sink through a funnel element. When the valve in the PGIE0 branch keeps dropping frames, the same frame still flows through the PGIE1 or PGIE2 branch and eventually reaches the final sink, so the pipeline does not break:
source → nvstreammux → tee → valve → PGIE0 →
                        ├── valve → PGIE1 → funnel → common_fakesink
                        └── valve → PGIE2 →
The funnel element acts as an N-to-1 muxer, ensuring that frames from different branches are collected before reaching the final sink.
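For reference, this is roughly how I build the working topology (a sketch in which videotestsrc and identity stand in for the camera source and the PGIEs so it runs anywhere; each tee branch gets a queue, since tee branches generally need queues to avoid blocking):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Three valve-gated branches fan out from the tee and
# re-converge in a funnel, so no branch starves its own sink.
pipeline = Gst.parse_launch(
    "videotestsrc ! tee name=t "
    "t. ! queue ! valve name=valve0 drop=false ! identity name=pgie0 ! funnel name=f "
    "t. ! queue ! valve name=valve1 drop=true  ! identity name=pgie1 ! f. "
    "t. ! queue ! valve name=valve2 drop=true  ! identity name=pgie2 ! f. "
    "f. ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)

# Switching the active branch is then just a property change:
pipeline.get_by_name("valve0").set_property("drop", True)
pipeline.get_by_name("valve1").set_property("drop", False)
```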
Issue with valve and Live Source (nvarguscamerasrc)
I implemented this approach and used the drop property of the valve element to control which models are active. According to gst-inspect, the drop property is:
Whether to drop buffers and events or let them through
flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
Boolean. Default: false
This means that I should be able to dynamically change the drop property while the pipeline is in the PLAYING state.
While this works with non-live sources (filesrc, videotestsrc), the pipeline crashes when I use a live camera source (nvarguscamerasrc) and modify the drop property on the fly. The error message I get is:
[MY LOGGER] Current drop value: False
[MY LOGGER] After change drop value: True
Error generated. gstnvarguscamerasrc.cpp, execute:805 Failed to create CaptureSession
nvstreammux: Successfully handled EOS for source_id=0
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 277)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 379)
This only happens when I modify drop inside a probe function while using a live source.
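For reference, here is a stripped-down version of what my probe does when the crash occurs (the valve passed in as user data is a placeholder for whichever branch I am disabling):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def switch_probe(pad, info, valve):
    # valve: the valve element of the branch being disabled,
    # passed as user data via pad.add_probe(..., valve)
    print("[MY LOGGER] Current drop value:", valve.get_property("drop"))
    valve.set_property("drop", True)
    print("[MY LOGGER] After change drop value:", valve.get_property("drop"))
    return Gst.PadProbeReturn.OK
```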
Question:
Why does this error occur with nvarguscamerasrc? Is there a way to safely toggle the drop property while keeping the pipeline stable with a live source? I also tried setting the valve element to the PAUSED state before changing the drop property, but I encountered the same error.
Any insights would be greatly appreciated!