The pipeline uses nvmultiurisrcbin (or a Flask service) to dynamically remove inference video streams, and the process crashes

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only): 525.105.17
I encountered this problem before but never found a solution. At first I assumed it was caused by the DeepStream version, because I did not hit a similar problem in the Jetson + DeepStream 5.0 environment, but I did in both Jetson + DeepStream 6.0 and GPU + DeepStream 6.0. Recently I pulled the GPU + DeepStream 5.0 image and saw no pipeline crash during testing, which made me more confident the DeepStream version is the cause.

To narrow it down, I modified the deepstream-test1 sample from the NVIDIA-AI-IOT/deepstream_python_apps repository on GitHub, using the nvmultiurisrcbin element to simulate the Flask service, and found the root of the problem. When my pipeline is nvmultiurisrcbin -> nvinfer -> nvvidconv -> fakesink, I can add and remove video streams through the REST API and the pipeline runs normally. But when the pipeline is nvmultiurisrcbin -> nvinfer -> nvvidconv -> capsfilter -> fakesink and there is only one inference stream left, removing that last stream crashes the whole process. I use the capsfilter so that I can get the video frame data in a probe and save it. My earlier forum thread on this is: https://forums.developer.nvidia.com/t/pipeline-will-stop-when-dyn-remove-source-bin-element-on-jetson-orin-with-deepstream6-1/238603/12
test_multiurisrcbin_add_remove.py (8.3 KB)
The model used in the attached code is YOLOv5; the corresponding TensorRT engine was generated following the tensorrtx repository on GitHub.
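For reference, the two pipeline variants described above can be expressed as parse-launch description strings. This is only a sketch: the nvmultiurisrcbin properties and the nvinfer config path are placeholders, not taken from the attached script.

```python
# The two pipeline variants from the report, written as Gst.parse_launch()
# description strings. Per the report, WORKING survives removal of the last
# stream via the REST API, while CRASHING exits. Property values below
# (port, max-batch-size, config path) are placeholders.

SRC = "nvmultiurisrcbin port=9000 max-batch-size=4"
INFER = "nvinfer config-file-path=yolov5_config.txt"

WORKING = " ! ".join([SRC, INFER, "nvvideoconvert", "fakesink"])
CRASHING = " ! ".join([
    SRC,
    INFER,
    "nvvideoconvert",
    'capsfilter caps="video/x-raw(memory:NVMM), format=RGBA"',
    "fakesink",
])

if __name__ == "__main__":
    print("working: ", WORKING)
    print("crashing:", CRASHING)
```

The only difference between the two descriptions is the capsfilter stage, which matches the report that the crash appears only when the capsfilter is present.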

Thanks for the information, we’ll check and get back to you.

How do you add and remove video streams through the REST API in your demo?

All requests are sent with Postman.
add_source:
{
  "key": "sensor",
  "value": {
    "camera_id": "uniqueSensorID1",
    "camera_name": "front_door",
    "camera_url": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
    "change": "camera_add",
    "metadata": {
      "resolution": "1920x1080",
      "codec": "h264",
      "framerate": 30
    }
  },
  "headers": {
    "source": "vst",
    "created_at": "2021-06-01T14:34:13.417Z"
  }
}
remove_source:
{
  "key": "sensor",
  "value": {
    "camera_id": "uniqueSensorID1",
    "camera_name": "front_door",
    "camera_url": "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
    "change": "camera_remove",
    "metadata": {
      "resolution": "1920x1080",
      "codec": "h264",
      "framerate": 30
    }
  },
  "headers": {
    "source": "vst",
    "created_at": "2021-06-01T14:34:13.417Z"
  }
}
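For scripting instead of Postman, the same requests can be sent from Python's standard library. This is a minimal sketch; the port (9000) and the endpoint paths (`/api/v1/stream/add`, `/api/v1/stream/remove`) are assumptions that should be checked against the nvmultiurisrcbin REST API documentation for your DeepStream version.

```python
import json
import urllib.request


def build_payload(change, camera_id="uniqueSensorID1",
                  url="file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4"):
    """Build the sensor add/remove payload shown above ("camera_add" or "camera_remove")."""
    return {
        "key": "sensor",
        "value": {
            "camera_id": camera_id,
            "camera_name": "front_door",
            "camera_url": url,
            "change": change,
            "metadata": {"resolution": "1920x1080", "codec": "h264", "framerate": 30},
        },
        "headers": {"source": "vst", "created_at": "2021-06-01T14:34:13.417Z"},
    }


def post(endpoint, payload, host="localhost", port=9000):
    """POST a payload to the pipeline's REST server (endpoint path and port are assumptions)."""
    req = urllib.request.Request(
        f"http://{host}:{port}{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Sending requires a running pipeline, e.g.:
    #   post("/api/v1/stream/add", build_payload("camera_add"))
    #   post("/api/v1/stream/remove", build_payload("camera_remove"))
    print(json.dumps(build_payload("camera_add"), indent=2))
```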

Sorry, we have not integrated the Flask service before. Could you first try to reproduce this problem with our demo?

sources/apps/sample_apps/deepstream-server

Were you able to reproduce the problem on your side? I just looked at deepstream-server; this demo does not provide a way to obtain video frame data and then save the image. The problem I encountered is probably caused by adding the capsfilter, but without the capsfilter I cannot call pyds.get_nvds_buf_surface() in the probe to get the image data of the video frame. Do you have any other method to obtain the video image data?

Yes, you can add a tee element in your code to implement a branch pipeline for saving images.
But we would like to reproduce the capsfilter issue in our environment. Could you help implement your scenario in our deepstream-server app?
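As an illustration of the tee suggestion, one possible branch layout is sketched below as a parse-launch description string. This is a sketch, not the verified fix: the inference branch keeps flowing, while a separate capture branch converts to RGBA (the format pyds.get_nvds_buf_surface() can read), and the nvinfer config path and queue placement are placeholders.

```python
# Sketch of a tee'd pipeline description for Gst.parse_launch().
# The main branch keeps inference output flowing to a fakesink; the capture
# branch converts to RGBA so a pad probe on its sink can read frame data
# with pyds.get_nvds_buf_surface(). Property values are placeholders.

MAIN_BRANCH = "nvinfer config-file-path=yolov5_config.txt ! tee name=t"
INFER_SINK = "t. ! queue ! fakesink sync=false"
CAPTURE_BRANCH = (
    "t. ! queue ! nvvideoconvert ! "
    'capsfilter caps="video/x-raw(memory:NVMM), format=RGBA" ! '
    "fakesink name=capture_sink sync=false"
)


def build_description(source="nvmultiurisrcbin port=9000"):
    """Assemble the full gst-launch style description string."""
    return " ! ".join([source, MAIN_BRANCH]) + " " + INFER_SINK + " " + CAPTURE_BRANCH


if __name__ == "__main__":
    print(build_description())
```

The design point is that only the capture branch is forced to RGBA, so a renegotiation or teardown problem on that branch does not have to propagate into the inference path.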

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

I just ran your code in my environment (T4, DeepStream 6.2). Since I didn’t use the server to add or delete a source, it should have had no source, but it didn’t crash.
Could you describe step by step how you use the service to add and delete the source? Also, we suggest you add nvmultistreamtiler or nvstreamdemux before the nvvidconv to merge or split the data from the batch.
You can also use our demo to add or remove the source:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmultiurisrcbin.html?highlight=nvmultiurisrcbin#add-a-new-stream-to-a-deepstream-pipeline
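The nvmultistreamtiler suggestion above can be sketched as a modified description string: the tiler composites the batch into a single frame before nvvideoconvert, so downstream elements see one stream. This is an illustration only; the rows/columns values and config path are placeholders, and it is not confirmed to avoid the reported crash.

```python
# Sketch of the suggested layout: nvmultistreamtiler merges the batched
# buffer into one composited frame before nvvideoconvert + capsfilter.
# rows/columns and the nvinfer config path are placeholder values.

TILED = " ! ".join([
    "nvmultiurisrcbin port=9000",
    "nvinfer config-file-path=yolov5_config.txt",
    "nvmultistreamtiler rows=2 columns=2",
    "nvvideoconvert",
    'capsfilter caps="video/x-raw(memory:NVMM), format=RGBA"',
    "fakesink",
])

if __name__ == "__main__":
    print(TILED)
```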

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.