Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : NVIDIA GeForce RTX 3090
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version : 12.2
• NVIDIA GPU Driver Version (valid for GPU only) : 535.104.05
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

How can I make deepstream_test3.py run continuously with RTSP links?

You can pass an RTSP URI as a parameter:

python3 deepstream_test3.py "rtsp://..."
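Multiple URIs can be passed to run batched inference over several streams at once. A sketch (the camera addresses are placeholders, and depending on the bindings version the URIs may need to be passed with an -i flag instead):

python3 deepstream_test3.py "rtsp://camera-1/stream" "rtsp://camera-2/stream"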

What specifically do you mean by "continuously"? You can refer to our README.

I mean, for example: if the RTSP cameras stop sending frames for some time, will the DeepStream pipeline stop, or will it keep listening and waiting for incoming frames?

I am looking to build a production pipeline.

This depends on whether the source plugin you are using has a reconnection function.

Do you mean that DeepStream elements in the pipeline, like nvstreammux, have a reconnection function, or should I implement a custom reconnection function?

Usually, the reconnection function is in the source plugin. You can refer to our nvurisrcbin.

How do I set rtsp-reconnect-interval in deepstream_test3.py?
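For reference, a minimal sketch of where that property could go, assuming the source bin is created with nvurisrcbin (as deepstream_test3.py does when run with --file-loop); the 10-second interval is only an illustrative value:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    # sketch of create_source_bin() in deepstream_test3.py:
    # nvurisrcbin (used on the --file-loop path) supports RTSP reconnection
    uri_decode_bin = Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
    uri_decode_bin.set_property("uri", uri)  # uri: the stream address passed in
    # retry the RTSP connection every 10 s if the camera stops sending frames
    # (10 is an illustrative value, not a recommendation)
    uri_decode_bin.set_property("rtsp-reconnect-interval", 10)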

I am testing this script with 7 videos in parallel, and I added a while loop to keep the Docker container running, as below:

    while True:
        # create an event loop and feed GStreamer bus messages to it
        loop = GLib.MainLoop()
        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", bus_call, loop)

        tiler_sink_pad = tiler.get_static_pad("sink")
        if not tiler_sink_pad:
            sys.stderr.write(" Unable to get sink pad \n")
        else:
            tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)

        # list the sources
        log.info("Now playing...")
        for i, source in enumerate(args):
            log.info(f"{i} :  {source}")

        log.info("Starting pipeline \n")
        # start playback and listen to events
        pipeline.set_state(Gst.State.PLAYING)
        try:
            time_st = time.time()
            loop.run()
            time_end = time.time()
            log.info(f"frame time =  {time_end - time_st}")
        except BaseException:
            # swallow everything (including Ctrl+C) so the cleanup below runs
            pass
        # cleanup
        log.info("Exiting app\n")
        pipeline.set_state(Gst.State.NULL)
        log.info(f"RTSP streams not available, wait for {settings.PIPELINE_WAIT_TIME} seconds!")
        time.sleep(settings.PIPELINE_WAIT_TIME)

It didn't start the inference pipeline on the other 6 videos, and it gives the error below:

Error: gst-resource-error-quark: Resource not found. (3): gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:pipeline0/GstBin:source-bin-06/GstDsNvUriSrcBin:uri-decode-bin/GstURIDecodeBin:nvurisrc_bin_src_elem/GstFileSrc:source:
No such file “/opt/nvidia/deepstream/deepstream-6.3/sources/inference/configs/streams/cam7.mp4”

If you use an RTSP source, then why is the source in your pipeline a GstFileSrc reading an mp4 file?

I am planning to use RTSP, but I am testing on 7 videos.
So why did the pipeline stop when one video does not exist? I expect the pipeline to keep working on the 6 existing videos.

Also, for RTSP: if one of the sources is down, will the pipeline keep working on the other running sources, or will it stop?

I appreciate your feedback.

You need to ensure that the URI is available during startup if you use the demo.

No, it will keep trying to reconnect.

OK, it will try to reconnect to the source that is down, but will the pipeline keep working and doing inference on the other RTSP sources?

Please, I need a clear answer on this.


Yes. For the other RTSP sources, the pipeline will keep working and doing inference. You need to set an appropriate batched-push-timeout property on nvstreammux. You can try setting it to 40000 instead of the 40000000 in the code.
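Concretely, that is a one-line change. A sketch, assuming streammux is the nvstreammux element created in the script (the timeout is in microseconds, so 40000 µs = 40 ms):

    # push a (possibly partial) batch after at most 40 ms instead of
    # waiting for all sources, so one dead source does not stall the rest
    streammux.set_property("batched-push-timeout", 40000)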
