• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): -
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type (questions, new requirements, bugs): questions, bugs
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration files content, the command line used, and other details for reproducing): Customized version of runtime_source_add_delete
• Requirement details (This is for the new requirement. Including the module name which plugin or for which sample application, and the function description): -
I have implemented a DeepStream pipeline in Python. It processes multiple streams (MP4 files and RTSP streams) and publishes the metadata to a Redis broker. It is inspired by the runtime_source_add_delete sample, so stream sources should be addable to and removable from the pipeline dynamically at runtime. Here is the definition of the pipeline:
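The pipeline definition itself did not make it into the post. For context, a minimal sketch of a comparable pipeline is shown below; all element choices, names, and property values here are assumptions for illustration, not the poster's actual code (the Redis publishing, for instance, is assumed to go through nvmsgconv/nvmsgbroker):

```python
# Hypothetical sketch of the static part of a dynamic multi-stream
# DeepStream pipeline; sources (uridecodebin) are added at runtime.

PIPELINE_CHAIN = [
    "nvstreammux",   # batches frames from dynamically added sources
    "nvinfer",       # primary inference
    "nvmsgconv",     # converts metadata to a message payload
    "nvmsgbroker",   # publishes payloads (e.g. to a Redis broker)
]

def build_pipeline(batch_size=4):
    """Create the static pipeline; returns (pipeline, streammux)."""
    import gi                      # imported lazily: requires GStreamer
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.Pipeline.new("dynamic-pipeline")

    streammux = Gst.ElementFactory.make("nvstreammux", "muxer")
    streammux.set_property("batch-size", batch_size)
    streammux.set_property("batched-push-timeout", 25000)

    elements = [streammux]
    for name in PIPELINE_CHAIN[1:]:
        elements.append(Gst.ElementFactory.make(name, name))

    for elem in elements:
        pipeline.add(elem)
    for upstream, downstream in zip(elements, elements[1:]):
        upstream.link(downstream)
    return pipeline, streammux
```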
When I changed the pipeline state in the same order as that sample app, I got a segmentation fault. Then I found this topic, and based on the suggestions there, I updated the pipeline state changes as follows:
On starting the pipeline, change its state to READY and then to PAUSED.
On adding the first source element, after initializing the source, change its state to PLAYING. Then change the pipeline’s state to PLAYING. Finally, add the source to the pipeline.
After removing the last source element, return the pipeline state to READY.
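The three steps above could be sketched as follows. This is only my reading of the described order, with illustrative names; it is not the poster's actual code:

```python
# Ordered state-change plan derived from the steps above.
# Each entry is (target, action); "add-source" means pipeline.add(source).

STARTUP_STEPS = [("pipeline", "READY"), ("pipeline", "PAUSED")]
FIRST_SOURCE_STEPS = [
    ("source", "PLAYING"),      # sync the new source first
    ("pipeline", "PLAYING"),    # then start the whole pipeline
    ("pipeline", "add-source"), # finally add the source bin
]
LAST_SOURCE_REMOVED_STEPS = [("pipeline", "READY")]

def apply_steps(pipeline, source, steps):
    """Apply one of the step lists to real Gst objects (requires GStreamer)."""
    import gi                      # imported lazily: requires GStreamer
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    targets = {"pipeline": pipeline, "source": source}
    for target, action in steps:
        if action == "add-source":
            pipeline.add(source)
        else:
            targets[target].set_state(getattr(Gst.State, action))
```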
Following this approach, the segmentation fault was resolved, but now the pipeline cannot handle multiple sources: it only works with a single stream and batch-size=1. In what order should I change the pipeline state, or is there another underlying issue causing this behavior? Thank you in advance for your support.
When adding a source (uridecodebin) to the pipeline, I connect handlers for the pad-added and child-added signals, as in the runtime_source_add_delete sample app. In the pad-added handler, I request a sink pad from the nvstreammux and link the source to it. But for some reason, when batch-size is greater than 1, the pad-added callback is never invoked, so the stream is never processed.
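For reference, a pad-added handler of the kind described usually looks roughly like the sketch below. It follows the pattern of the runtime_source_add_delete sample, but the names and the caps check here are assumptions, not the poster's code:

```python
def sink_pad_name(source_id):
    """nvstreammux request pads are named sink_0, sink_1, ..."""
    return f"sink_{source_id}"

def on_pad_added(decodebin, pad, user_data):
    """pad-added handler: link a new decoder src pad to a requested
    nvstreammux sink pad. Sketch only; requires GStreamer/DeepStream."""
    import gi                      # imported lazily: requires GStreamer
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    streammux, source_id = user_data
    caps = pad.get_current_caps() or pad.query_caps(None)
    name = caps.get_structure(0).get_name()
    if not name.startswith("video"):
        return  # ignore audio pads

    sinkpad = streammux.get_request_pad(sink_pad_name(source_id))
    if sinkpad is None or pad.link(sinkpad) != Gst.PadLinkReturn.OK:
        print(f"Failed to link source {source_id} to nvstreammux")
```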
Thanks for sharing that. This approach looks more sustainable. However, since our current pipeline implementation targets DeepStream 6.4, I would prefer to make it work in this version without migrating to DeepStream 7.0.
Could you please confirm if this approach for changing the pipeline state is in the correct order?
I followed the suggestion to upgrade to DeepStream 7.0 and use the Gst-nvmultiurisrcbin element to dynamically add/remove streams via its REST API. However, I ran into a new issue when removing a stream through the /api/v1/stream/remove endpoint. The pipeline crashes under the following conditions:
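For completeness, a stream-remove call against that endpoint can be sketched as below. The endpoint path comes from the post above, but the payload fields ("key", "camera_id", "change") and the default port are my assumptions about the nvmultiurisrcbin REST schema and should be verified against the DeepStream documentation:

```python
import json
import urllib.request

def build_remove_payload(camera_id):
    """Build the JSON body for /api/v1/stream/remove.
    Field names are assumed; check the nvmultiurisrcbin REST API docs."""
    return {
        "key": "sensor",
        "value": {
            "camera_id": camera_id,
            "change": "camera_remove",  # assumed field
        },
    }

def remove_stream(camera_id, host="http://localhost:9000"):
    """POST the remove request to the nvmultiurisrcbin REST server."""
    req = urllib.request.Request(
        f"{host}/api/v1/stream/remove",
        data=json.dumps(build_remove_payload(camera_id)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```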
The pipeline includes a capsfilter used to convert buffers from NVMM format to RGB, and this element seems to be the problem. Here is the snippet I use in the probe function to convert buffers to a NumPy array for image storage:
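The snippet itself is not shown above. A probe of this kind typically follows the DeepStream Python bindings pattern sketched below; variable names and the processing step are assumptions, not the poster's code:

```python
def buffer_probe(pad, info, user_data):
    """Buffer probe: map each frame's NvBufSurface to a NumPy array.
    Requires RGBA/RGB caps upstream (hence the capsfilter); otherwise
    pyds.get_nvds_buf_surface raises
    'Currently we only support RGBA/RGB color Format'."""
    import numpy as np             # imported lazily in this sketch
    import pyds                    # DeepStream Python bindings
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                            frame_meta.batch_id)
        # Copy out of the mapped surface before the buffer is recycled.
        frame_copy = np.array(n_frame, copy=True, order="C")
        # ... save or process frame_copy here ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```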
When I removed the capsfilter from the pipeline, the API call to remove the stream worked without crashing. However, the snippet above no longer works, since it expects RGBA/RGB format, and it fails with the following error:
RuntimeError: get_nvds_buf_surface: Currently we only support RGBA/RGB color Format
It is probably related to this topic:
As suggested in the above post, I tried adding a tee element to the pipeline, but it didn’t help.
How can I re-enable the capsfilter without causing the pipeline to crash when removing streams? Are there any alternative approaches or best practices to handle format conversion?
Any suggestions or guidance would be greatly appreciated!
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks