A bug in DeepStream's Python bindings that makes them difficult to use in production

When I run the Python RTSP sample, I want to add and remove RTSP sources while the pipeline is running, so I followed the Python runtime_source_add_delete sample. But I found a problem with it: for example, I have 10 RTSP streams, and after one of them disconnects, the FPS of the other 9 healthy channels also drops to 0. After I added the deletion logic, the whole pipeline only recovers once the dead stream is completely removed. Why?

Then I used the deletion code from another sample:

def delete_sources(data):
    global loop
    global g_num_sources
    global g_eos_list
    global g_source_enabled

    #First delete sources that have reached end of stream
    for source_id in range(MAX_NUM_SOURCES):
        if (g_eos_list[source_id] and g_source_enabled[source_id]):
            g_source_enabled[source_id] = False
            stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        loop.quit()
        print("All sources stopped quitting")
        return False

    #Randomly choose an enabled source to delete
    source_id = random.randrange(0, MAX_NUM_SOURCES)
    while (not g_source_enabled[source_id]):
        source_id = random.randrange(0, MAX_NUM_SOURCES)
    #Disable the source
    g_source_enabled[source_id] = False
    #Release the source
    print("Calling Stop %d " % source_id)
    stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        loop.quit()
        print("All sources stopped quitting")
        return False

    return True

Link: deepstream_python_apps/deepstream_rt_src_add_del.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

However, I found that the whole pipeline only recovers (i.e. the healthy sources report FPS again) once the dead RTSP stream is completely removed, and this process has a lot of delay: in my test, when one stream breaks, it takes at least 7 seconds just to detect it.
That is unbearable.
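One way to detect a stalled source faster than waiting for the pipeline to report EOS is to watchdog each source yourself. The sketch below is just the bookkeeping idea, not DeepStream API; all names are hypothetical, and in a real pipeline `note_buffer()` would be called from a pad probe on each source bin's src pad.

```python
import time

# Hypothetical per-source watchdog. In a real DeepStream pipeline,
# note_buffer(source_id) would be invoked from a pad probe whenever a
# buffer flows on that source's pad.
class SourceWatchdog:
    def __init__(self, num_sources, stall_after_s=2.0):
        self.stall_after_s = stall_after_s
        self.last_seen = {i: time.monotonic() for i in range(num_sources)}

    def note_buffer(self, source_id):
        # A buffer arrived: refresh this source's timestamp.
        self.last_seen[source_id] = time.monotonic()

    def stalled_sources(self, now=None):
        # Return sources with no buffer for longer than the threshold.
        now = time.monotonic() if now is None else now
        return [sid for sid, t in self.last_seen.items()
                if now - t > self.stall_after_s]

wd = SourceWatchdog(num_sources=3, stall_after_s=2.0)
wd.note_buffer(0)
wd.note_buffer(1)
wd.last_seen[2] -= 10.0   # pretend source 2 last produced a buffer 10 s ago
print(wd.stalled_sources())  # source 2 is flagged as stalled
```

Polling `stalled_sources()` from a periodic timer lets you flag (and start tearing down) a dead stream within your own threshold instead of the multi-second timeout observed above.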

So I ran the C deepstream_app, and I found that the C side does not have this problem: its bus_call has a reset_sources path, and the whole recovery logic is well written, but nothing equivalent exists in Python. I tested deepstream_app.c with the same 10 RTSP streams and manually disconnected one channel. The program's FPS printout showed 0 only for the broken stream; the others stayed normal with no delay, and it even kept waiting for the stream until it recovered.
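The per-source recovery described above can be approximated in Python with some retry bookkeeping: when the bus reports an error for a source, mark it dead and retry it on a periodic timer until it comes back or a retry budget runs out. This is only a minimal sketch of that bookkeeping, independent of GStreamer; `reset_fn` stands in for whatever code would actually set the source bin to NULL and rebuild it, and `tick()` would run from a GLib timeout in a real app.

```python
# Hypothetical reset scheduler mirroring the kind of per-source retry
# logic the C app applies; none of these names come from DeepStream.
class ResetScheduler:
    def __init__(self, reset_fn, max_attempts=3):
        self.reset_fn = reset_fn
        self.max_attempts = max_attempts
        self.attempts = {}          # source_id -> resets attempted so far

    def on_source_error(self, source_id):
        # Bus error callback: start (or continue) this source's retry cycle.
        self.attempts.setdefault(source_id, 0)

    def on_source_recovered(self, source_id):
        # Buffers are flowing again: stop retrying this source.
        self.attempts.pop(source_id, None)

    def tick(self):
        # Periodic timer: retry each dead source, and return the ones
        # whose retry budget is exhausted so the caller can drop them.
        given_up = []
        for source_id in list(self.attempts):
            if self.attempts[source_id] >= self.max_attempts:
                given_up.append(source_id)
                del self.attempts[source_id]
            else:
                self.attempts[source_id] += 1
                self.reset_fn(source_id)
        return given_up
```

With this shape a dead source never blocks the others: the bus callback only updates bookkeeping, and the actual resets happen from the timer, one source at a time.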

To compare, I generated pipeline graphs for the same RTSP inference task implemented in Python and in C. I found that the Python pipeline is very simple; in particular, the source bins are not branched but are connected directly to the mux. Is this why the Python pipeline cannot be updated dynamically? Here is the Python graph:
Python:

The graph generated by the C deepstream_app is too large (about 3 MB) for me to upload, but I found a similar one online:
https://img-blog.csdnimg.cn/20210407085504353.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3UwMTA0MTQ1ODk=,size_16,color_FFFFFF,t_70#pic_center

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• NVIDIA GPU Driver Version (valid for GPU only): nvidia-480
• Issue Type (questions, new requirements, bugs): questions or bugs

Or 5.1: I have run in both of these container environments and the problem exists in both. My last post was in this environment:

xxxx:/home/Download/deepstream_python_apps/apps/deepstream-imagedata-multistream# deepstream-app --version-all
deepstream-app version 5.1.0
DeepStreamSDK 5.1.0
CUDA Driver Version: 11.2
CUDA Runtime Version: 11.1
TensorRT Version: 7.2
cuDNN Version: 8.0
libNVWarp360 Version: 2.0.1d3

What does "I have 10 rtsp streams, disconnected After all the way, the fps of the other 9 channels that have no problem also became 0." mean? What are your test steps?

It means that all 10 RTSP streams are running, and then I manually disconnect one of them, which causes the entire pipeline to stop.

These are my test steps: first feed in 10 RTSP streams at once, then from another terminal I can control whether any given RTSP stream is connected or disconnected.

Yes. We have implemented broken-stream detection and reconnection in deepstream-app, and these are very hard to implement in Python. We recommend C/C++ instead of Python for your case.

OK, thank you. But the C side is too complicated for secondary development; I want to add some ideas of my own. I still hope DeepStream's Python bindings will support these two features in the future. That would be perfect.

Thank you for the suggestion. Since it is totally open source, you may also try to implement it based on the sample.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.