Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only):
• TensorRT Version: 12.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.104.05
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
How do I make deepstream_test3.py run continuously with RTSP links?
I mean, for example, if the RTSP cameras stop sending frames for some time, will the DeepStream pipeline stop, or will it keep listening and wait for incoming frames?
I am testing this script with 7 videos in parallel and added a while loop to keep the Docker container running, as below:
while True:
    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    tiler_sink_pad = tiler.get_static_pad("sink")
    if not tiler_sink_pad:
        sys.stderr.write(" Unable to get sink pad \n")
    else:
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)

    # List the sources
    log.info("Now playing...")
    for i, source in enumerate(args):
        log.info(f"{i} : {source}")

    log.info("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        time_st = time.time()
        loop.run()
        time_end = time.time()
        log.info(f"frame time = {time_end - time_st}")
    except Exception:
        log.exception("Main loop interrupted")

    # cleanup
    log.info("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)
    log.info(f"RTSP streams not available, wait for {settings.PIPELINE_WAIT_TIME} seconds!")
    time.sleep(settings.PIPELINE_WAIT_TIME)
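One way to keep the pipeline alive when only some sources fail is to have the bus handler track failures per source instead of quitting the loop on the first error. A minimal sketch of that idea, with the GStreamer message handling stubbed out so the logic stands alone (the names `on_source_error`, `NUM_SOURCES`, and `failed_sources` are assumptions, not part of deepstream_test3.py):

```python
# Sketch: quit the main loop only when every source has failed.
# In a real bus_call, on_source_error() would be invoked from the
# Gst.MessageType.ERROR branch after mapping message.src to a source index.

NUM_SOURCES = 7          # matches the 7 parallel streams in the test setup
failed_sources = set()   # indices of sources that have reported an error

def on_source_error(source_index, quit_loop):
    """Record a failed source; stop the loop only when all sources are down."""
    failed_sources.add(source_index)
    if len(failed_sources) == NUM_SOURCES:
        quit_loop()  # lets the outer while-loop restart after the wait period
```

With this, losing one of the 7 streams only records the failure, and `quit_loop()` (which would wrap `loop.quit()`) fires only once every stream is gone.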
I am planning to use RTSP, but I am testing on 7 video files.
So why does the pipeline stop if one video does not exist? I expect the pipeline to keep working on the existing 6 videos.
Also, for RTSP: if one of the sources is down, will the pipeline keep working on the other running sources, or will it stop?
There has been no update from you for a period, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Yes. For the other RTSP sources, the pipeline will keep working and doing inference. You need to set an appropriate batched-push-timeout property on nvstreammux. You can try setting it to 40000 instead of the 40000000 in the code.
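For reference, batched-push-timeout is set in microseconds as an nvstreammux element property in the Python sample; a sketch of the change (the variable name `streammux` is an assumption based on deepstream_test3.py, and the actual `set_property` call needs a DeepStream install, so it is shown as a comment):

```python
# batched-push-timeout is in microseconds: how long nvstreammux waits to
# fill a batch before pushing a partial batch downstream. A smaller value
# keeps live sources flowing when one stream stalls or is missing.
BATCHED_PUSH_TIMEOUT_USEC = 40000  # 40 ms, instead of the much larger sample value

# In the pipeline-setup code (requires DeepStream):
# streammux.set_property("batched-push-timeout", BATCHED_PUSH_TIMEOUT_USEC)
```

With a 40 ms timeout, a stalled source delays each batch by at most 40 ms rather than blocking the other streams for seconds.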