Hi,
I’m trying to run an inference pipeline on a Jetson Orin Nano using Python (the appsink callback below is simplified from our real use while reproducing the issue). The process hangs during termination, and the pipeline cannot exit gracefully.
The Python script:
import os
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
os.environ["USE_NEW_NVSTREAMMUX"] = "yes"
RTSP_URL = "rtsp://<username>:<password>@<ip>:<port>"
ENGINE_PATH = "<yolo_engine_file_path>"
CONFIG_PATH = "<detector_config_file_path>"
TRACKER_CONFIG_PATH = "<tracker_config_file_path>" # config content same as https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html#trafficcamnet-nvsort
Gst.init(None)
pipeline_description = """
rtspsrc drop-on-latency=True latency=3000 protocols=tcp timeout=0 tcp-timeout=0 teardown-timeout=0 !
rtph264depay ! h264parse ! tee ! queue ! decodebin ! tee ! m.sink_0 nvstreammux name=m batch-size=1 sync-inputs=True !
queue ! nvvideoconvert ! video/x-raw(memory:NVMM),format=(string)RGBA ! queue !
nvinfer batch-size=1 !
nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so gpu-id=0 display-tracking-id=1 !
queue ! tee ! appsink emit-signals=True
"""
pipeline = Gst.parse_launch(pipeline_description)
def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        print("End-of-stream")
        pipeline.set_state(Gst.State.NULL)
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print(f"Error: {err}, {debug}")
        pipeline.set_state(Gst.State.NULL)
        loop.quit()
    return True
def callback(sink):
    sample = sink.emit("pull-sample")
    return Gst.FlowReturn.OK
rtspsrc = pipeline.get_by_name("rtspsrc0")
rtspsrc.set_property("location", RTSP_URL)
nvinfer = pipeline.get_by_name("nvinfer0")
nvinfer.set_property("model-engine-file", ENGINE_PATH)
nvinfer.set_property("config-file-path", CONFIG_PATH)
nvtracker = pipeline.get_by_name("nvtracker0")
nvtracker.set_property("ll-config-file", TRACKER_CONFIG_PATH)
sink = pipeline.get_by_name("appsink0")
sink.connect("new-sample", callback)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
After running the script for a short while and then pressing Ctrl+C, the GStreamer log shows the element states being changed to shut the pipeline down, and then:
0:00:22.452147830 11861 0xaaab12b97d80 INFO GST_STATES gstbin.c:2928:gst_bin_change_state_func:<manager> child 'rtpsession0' changed state to 3(PAUSED) successfully
0:00:22.452163062 11861 0xaaab12b97d80 INFO GST_STATES gstelement.c:2806:gst_element_continue_state:<manager> completed state change to PAUSED
0:00:22.452176246 11861 0xaaab12b97d80 INFO GST_STATES gstelement.c:2706:_priv_gst_element_state_changed:<manager> notifying about state-changed PLAYING to PAUSED (VOID_PENDING pending)
0:00:22.453182739 11861 0xaaab134baf00 INFO task gsttask.c:368:gst_task_func:<queue2:src> Task going to paused
0:00:22.461647272 11861 0xaaab134baf60 INFO task gsttask.c:368:gst_task_func:<queue1:src> Task going to paused
0:00:22.468888409 11861 0xffff0c006920 ERROR default gstnvstreammux_pads.cpp:342:push:<m> push failed [-2]
0:00:22.475457079 11861 0xffff0c006920 ERROR default gstnvstreammux_pads.cpp:342:push:<m> push failed [-2]
After around twenty of these nvstreammux “push failed” lines, it got stuck at:
0:00:22.590386482 11861 0xffff0c006920 ERROR default gstnvstreammux_pads.cpp:342:push:<m> push failed [-2]
0:00:22.594176991 11861 0xffff0c006920 ERROR default gstnvstreammux_pads.cpp:342:push:<m> push failed [-2]
0:01:47.288580738 11861 0xaaab12b97d80 WARN rtspsrc gstrtspsrc.c:5734:gst_rtspsrc_loop_interleaved:<rtspsrc0> warning: The server closed the connection.
0:01:47.288759751 11861 0xaaab12b97d80 INFO GST_ERROR_SYSTEM gstelement.c:2271:gst_element_message_full_with_details:<rtspsrc0> posting message: Could not read from resource.
0:01:47.288845897 11861 0xaaab12b97d80 INFO GST_ERROR_SYSTEM gstelement.c:2298:gst_element_message_full_with_details:<rtspsrc0> posted warning message: Could not read from resource.
0:01:47.289068944 11861 0xaaab12b97d80 INFO task gsttask.c:368:gst_task_func:<task0> Task going to paused
and stays hung there without ever terminating.
The testing environment is as follows:
- Hardware Platform: Jetson Orin Nano 4G
- DeepStream Version: 7.0
- JetPack: 6.0
- L4T: 36.3.0
- TensorRT: 8.6.2.3
- CUDA: 12.2.140
- VPI: 3.1.5
- Gstreamer: 1.20.3
Some experiments and tests we have done:
- We tested that replacing appsink with fakesink in the Python script lets it terminate gracefully. A gst-launch-1.0 command using fakesink with the same pipeline also terminates normally:
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 -e -v rtspsrc location=<rtsp_url> drop-on-latency=True latency=3000 protocols=tcp timeout=0 tcp-timeout=0 teardown-timeout=0 ! rtph264depay ! h264parse ! tee ! queue ! decodebin ! tee ! m.sink_0 nvstreammux name=m batch-size=1 sync-inputs=True ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)RGBA' ! queue ! nvinfer batch-size=1 model-engine-file=<engine_path> config-file-path=<config_path> ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ll-config-file=<tracker_config> gpu-id=0 display-tracking-id=1 ! queue ! tee ! fakesink sync=True async=False enable-last-sample=False
but there seems to be no easy way to test appsink directly from a gst-launch-1.0 command.
- We also tested that removing the nvtracker element lets the Python script terminate normally, which leads us to suspect the cause lies in the newer release of nvtracker or in its interaction with appsink.
- The same Python script runs and terminates without hanging in an older testing environment we had previously:
- Hardware Platform: Jetson Orin Nano 4G
- DeepStream Version: 6.2
- JetPack: 5.1.1
- L4T: 35.3.1
- TensorRT: 8.5.2.2
- CUDA: 11.4.315
- VPI: 2.2.7
- Gstreamer: 1.16.3
Please kindly let me know what other information I can provide, and what else I can do to fix the issue. Thank you for your support!