After I executed state_return = source_bin.set_state(Gst.State.NULL), the program got stuck

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version ds6.3-docker
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) 550.142
• Issue Type (questions, new requirements, bugs)

I use streammux to play multiple streams at the same time. When I detect that a stream has not been updated for a long time, I shut it down manually, but when I execute state_return = source_bin.set_state(Gst.State.NULL), the call sometimes gets stuck.

This is my function for closing a source bin:

    def remove_source(self, stream_id):
        source_bin = self.source_bins.get(stream_id)

        if not source_bin:
            return False

        # Stop the source bin before detaching it from the pipeline.
        state_return = source_bin.set_state(Gst.State.NULL)

        if state_return == Gst.StateChangeReturn.ASYNC:
            # Wait up to 3 seconds for the asynchronous state change to complete.
            timeout_ns = 3 * Gst.SECOND
            state_result = source_bin.get_state(timeout_ns)
            MyLogger.debug(f" {state_result[0]}")
            if state_result[0] == Gst.StateChangeReturn.SUCCESS:
                MyLogger.info("STATE CHANGE SUCCESS")
            elif state_result[0] == Gst.StateChangeReturn.ASYNC:
                # get_state() returns ASYNC again when the timeout expires;
                # Gst.StateChangeReturn has no TIMEOUT member.
                MyLogger.warning("STATE CHANGE TIMEOUT")
            else:
                MyLogger.debug("STATE CHANGE Failed")
        elif state_return == Gst.StateChangeReturn.SUCCESS:
            MyLogger.info("STATE CHANGE SUCCESS")
        else:
            MyLogger.debug("STATE CHANGE Failed")

        # Release the request pad this stream occupies on nvstreammux.
        pad_name = f"sink_{stream_id}"
        sinkpad = self.streammux.get_static_pad(pad_name)
        if sinkpad is None:
            MyLogger.error("Failed to get sink pad")
        else:
            sinkpad.send_event(Gst.Event.new_flush_stop(False))
            self.streammux.release_request_pad(sinkpad)

        # Detach the bin and update bookkeeping and batch sizes.
        self.pipeline.remove(source_bin)
        MyLogger.debug(f"remove_source:{self.stream_paths}")
        del self.stream_paths[stream_id]
        self.number_sources -= 1
        self.streammux.set_property("batch-size", self.number_sources)
        self.pgie.set_property("batch-size", self.number_sources)
        return True
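
One common cause of a blocking set_state(Gst.State.NULL) is calling it from a GStreamer streaming thread (for example a pad probe or bus callback) or from a separate watchdog thread; that is only an assumption about this setup, not something confirmed in the thread. Below is a minimal sketch of scheduling the teardown on the GLib main loop instead, where `manager` (the object that owns remove_source) and `stale_stream_id` are hypothetical names:

    # Sketch only: run remove_source() from the GLib main loop rather than from a
    # streaming-thread callback or a watchdog thread.
    from gi.repository import GLib

    def _remove_on_main_loop(manager, stream_id):
        manager.remove_source(stream_id)   # executes in main-loop context
        return GLib.SOURCE_REMOVE          # one-shot idle callback

    def on_stream_stale(manager, stale_stream_id):
        # Called by whatever watchdog detected the stalled stream.
        GLib.idle_add(_remove_on_main_loop, manager, stale_stream_id)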

Please upgrade the DeepStream SDK to the latest version (DeepStream 7.1 GA).
RTSP reconnection handling is implemented in nvmultiurisrcbin. Please refer to /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app
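
As an illustration only, here is a minimal Python sketch of creating nvmultiurisrcbin and setting a few of its documented properties; it is not the deepstream_test5_app sample, the values are placeholders, and property names and types should be verified with gst-inspect-1.0 nvmultiurisrcbin on your DeepStream version:

    # Sketch only: create nvmultiurisrcbin and configure it.
    # Gst.util_set_object_arg() parses values from strings, like gst-launch does,
    # which avoids depending on the exact property types.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    srcbin = Gst.ElementFactory.make("nvmultiurisrcbin", "multiuri-src")
    for prop, value in {
        "ip-address": "localhost",   # REST server bind address
        "port": "9000",              # REST server port for add/remove calls
        "max-batch-size": "16",      # upper bound on concurrently muxed streams
        "width": "1920",
        "height": "1080",
        "live-source": "1",
        # Initial URIs; more streams can be added/removed at runtime via REST.
        "uri-list": "rtsp://camera-1/stream,rtsp://camera-2/stream",
    }.items():
        Gst.util_set_object_arg(srcbin, prop, value)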

My RTSP streams may change in real time. Can I also reconnect dynamically?

Yes. You can add/remove streams with the REST API at runtime. Please refer to DeepStream With REST API Server — DeepStream documentation
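
As a sketch only, an add-stream call against the REST server exposed by nvmultiurisrcbin could look like the following. The endpoint path and payload fields are written from memory of the DeepStream REST API guide and should be treated as assumptions until checked against the documentation for your release:

    # Sketch only: add an RTSP stream at runtime through the DeepStream REST API.
    # Endpoint path and payload fields are assumptions; confirm them against the
    # "DeepStream With REST API Server" documentation.
    import requests

    def add_stream(host, port, camera_id, rtsp_url):
        payload = {
            "key": "sensor",
            "value": {
                "camera_id": camera_id,    # unique sensor/stream id
                "camera_name": camera_id,
                "camera_url": rtsp_url,
                "change": "camera_add",
            },
        }
        resp = requests.post(f"http://{host}:{port}/api/v1/stream/add",
                             json=payload, timeout=5)
        resp.raise_for_status()
        return resp

    # Example: add_stream("localhost", 9000, "camera-3", "rtsp://camera-3/stream")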

My main server runs Ubuntu 20.04. Can I use the ds7.1 docker?

No. Please follow the platform compatibility requirements: Installation — DeepStream documentation

Thank you for your advice, but my server cannot be upgraded to Ubuntu 22.04 at the moment. Is there any other way to solve this problem?

The nvmultiurisrcbin is the solution, and it is open source. We still recommend upgrading to DeepStream 7.1 GA, as many bugs have been fixed there.

OK, let me try using nvmultiurisrcbin in the ds6.3 docker.

When I run

    python3 deepstream_test5.py -c /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/test5_b16_dynamic_source.yaml -s /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/source_list_dynamic.yaml

The program gives an error:

    pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see Miscellaneous - pybind11 documentation for debugging advice.
    If you are convinced there is no bug in your code, you can define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a NoneType object.
    Traceback (most recent call last):
      File "/opt/nvidia/deepstream/deepstream-7.1/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/deepstream_test5.py", line 266, in <module>
        main(args)
      File "/opt/nvidia/deepstream/deepstream-7.1/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/deepstream_test5.py", line 100, in main
        pipeline = Pipeline(name=pipeline_name, config_file=pipeline_config_file)
      File "/usr/local/lib/python3.10/dist-packages/pyservicemaker/pipeline.py", line 27, in __init__
        self._instance = _Pipeline(name, config_file) if config_file else _Pipeline(name)
    RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
    pybind11::handle::dec_ref() is being called while the GIL is either not held or invalid. Please see Miscellaneous - pybind11 documentation for debugging advice.
    If you are convinced there is no bug in your code, you can define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::dec_ref() call was triggered on a str object.
    terminate called after throwing an instance of 'std::runtime_error'
      what():  pybind11::handle::dec_ref() PyGILState_Check() failure.
    Aborted (core dumped)

Where and how did you run the sample?

    cd /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app
    python3 deepstream_test5.py -c /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/test5_b16_dynamic_source.yaml -s /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/python/pipeline_api/deepstream_test5_app/source_list_dynamic.yaml

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version ds7.1-docker
• JetPack Version (valid for Jetson only)
• TensorRT Version 10.3
• NVIDIA GPU Driver Version (valid for GPU only) 550.142
My server is Ubuntu 20.04; inside the Docker container it is Ubuntu 22.04.

Please follow the platform compatibility requirements: Installation — DeepStream documentation

And please follow the docker instructions. Docker Containers — DeepStream documentation

    FROM nvcr.io/nvidia/deepstream:7.1-<container type>
    COPY myapp /root/apps/myapp
    # To get video driver libraries at runtime (libnvidia-encode.so/libnvcuvid.so)
    ENV NVIDIA_DRIVER_CAPABILITIES $NVIDIA_DRIVER_CAPABILITIES,video

Does this have to be done? Should I run it in a terminal inside the container, or something else?

No. You don’t need to do this if you only want to run the sample.


Before switching to the 560 driver, I want to ask why I can't run the demo successfully when I follow this note, install the CUDA 12.4 toolkit, and set it as the default CUDA?

I ran into other problems when using the nvmultiurisrcbin plugin myself. Can you point out the possible causes? This is a link to the topic. Is it also a driver issue?