RTSP inference

Continuing the discussion from Rtspsrc not linking with rtph264depay!:

• Hardware Platform (GPU) GPU A30
• DeepStream Version 6.4
• TensorRT Version 10.0.0.6
• NVIDIA GPU Driver Version (valid for GPU only) 535.104.12
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Run the script with an RTSP URI.

0:01:30.399822204 243874 0x563ea00956f0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Aborted (core dumped)
or
0:01:31.641875620 165387 0x559b4a1376f0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Segmentation fault (core dumped)

There is a failure warning:
0:00:06.015523251 165387 0x559b4a1376f0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine failed to match config params, trying rebuild

Additional info, from running the program with gdb python as described here:
terminate called after throwing an instance of 'std::runtime_error'
what(): Unable to read configuration

Thread 92 "pool-python" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffeb0ff9640 (LWP 849648)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140731867960896) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.

(gdb) bt -full
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140731867960896) at ./nptl/pthread_kill.c:44
tid =
ret = 0
pd = 0x7ffeb0ff9640
old_mask = {__val = {29295, 140734140520292, 140734140442660, 0, 0, 0, 0, 0, 0, 140734140330218, 0, 8589934592, 18446744073709551615, 0, 0, 0}}
ret =
#1 __pthread_kill_internal (signo=6, threadid=140731867960896) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140731867960896, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7c42476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
ret =
#4 0x00007ffff7c287f3 in __GI_abort () at ./stdlib/abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x7ffeb0ff74f0, sa_sigaction = 0x7ffeb0ff74f0}, sa_mask = {__val = {140737352152736, 1, 140737352152867, 3432, 140737350519265, 140730106392984, 10, 140737352152736, 140731867960896, 140731867954672, 140730106399536, 5138137972254386944, 140737350520515, 10, 140737352152736, 140731867960896}}, sa_flags = -137857478, sa_restorer = 0x5555558ecae8 <stderr@GLIBC_2.2.5>}
sigs = {__val = {32, 140737350215453, 140731867955728, 140737314093197, 18, 206158430322, 140737310820416, 140731867954224, 0, 140737351076052, 0, 1, 140737352152867, 1, 140730106392984, 140737350512365}}
#5 0x00007ffff56a2b9e in () at /lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x00007ffff56ae20c in () at /lib/x86_64-linux-gnu/libstdc++.so.6
#7 0x00007ffff56ad1e9 in () at /lib/x86_64-linux-gnu/libstdc++.so.6
#8 0x00007ffff56ad959 in __gxx_personality_v0 () at /lib/x86_64-linux-gnu/libstdc++.so.6
#9 0x00007ffff59cffe9 in __libunwind_Unwind_Resume () at /lib/x86_64-linux-gnu/libunwind.so.8
#10 0x00007fff3872e86d in () at /lib/x86_64-linux-gnu/libproxy.so.1
#11 0x00007fff38737827 in px_proxy_factory_get_proxies () at /lib/x86_64-linux-gnu/libproxy.so.1
#12 0x00007fff38bb4827 in () at /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
#13 0x00007ffff6d49194 in () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#14 0x00007ffff6ff1714 in () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#15 0x00007ffff6feeab1 in () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#16 0x00007ffff7c94ac3 in start_thread (arg=) at ./nptl/pthread_create.c:442
ret =
pd =
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140731901527152, 1398833823397350543, 140731867960896, 1, 140737350551504, 140731901527504, -1399275827557397361, -1398851048929194865}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#17 0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

I already have a working pipeline:
Producer

gst-launch-1.0 -v rtspsrc location=rtsp://xxxxxxx  ! decodebin ! autovideoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000

Consumer

gst-launch-1.0 udpsrc address=127.0.0.1 port=5000 ! application/x-rtp,media=video,payload=96 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! autovideosink

The point is that I want to run inference on that RTSP stream and then output it through udpsink, or via an RTSP server as in deepstream_test1_rtsp_out.py.
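
For reference, a rough sketch of what I'm aiming for, modeled on the deepstream_test1_rtsp_out.py pattern (element choices and resolutions are my assumptions, and I keep the software x264enc from my producer pipeline, since the A30 has no NVENC hardware encoder):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

def on_pad_added(decodebin, pad, streammux):
    # Link only the decoded video pad to the muxer's request pad.
    caps = pad.get_current_caps() or pad.query_caps()
    if not caps.get_structure(0).get_name().startswith("video"):
        return
    sinkpad = streammux.request_pad_simple("sink_0")  # get_request_pad() on GStreamer < 1.20
    if sinkpad is not None and not sinkpad.is_linked():
        pad.link(sinkpad)

def main(uri):
    Gst.init(None)
    pipeline = Gst.Pipeline.new("rtsp-infer")

    # uridecodebin wraps rtspsrc + depay + parse + nvv4l2decoder
    source = Gst.ElementFactory.make("uridecodebin", "source")
    source.set_property("uri", uri)

    streammux = Gst.ElementFactory.make("nvstreammux", "mux")
    streammux.set_property("batch-size", 1)
    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    streammux.set_property("batched-push-timeout", 33000)

    pgie = Gst.ElementFactory.make("nvinfer", "pgie")
    pgie.set_property("config-file-path", "dstest1_pgie_config.txt")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "conv")
    nvosd = Gst.ElementFactory.make("nvdsosd", "osd")

    # Convert back to system memory for the software encoder
    postconv = Gst.ElementFactory.make("nvvideoconvert", "postconv")
    capsfilter = Gst.ElementFactory.make("capsfilter", "caps")
    capsfilter.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
    encoder = Gst.ElementFactory.make("x264enc", "enc")
    Gst.util_set_object_arg(encoder, "tune", "zerolatency")
    pay = Gst.ElementFactory.make("rtph264pay", "pay")
    sink = Gst.ElementFactory.make("udpsink", "sink")
    sink.set_property("host", "127.0.0.1")
    sink.set_property("port", 5000)

    for e in (source, streammux, pgie, nvvidconv, nvosd,
              postconv, capsfilter, encoder, pay, sink):
        pipeline.add(e)

    source.connect("pad-added", on_pad_added, streammux)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(postconv)
    postconv.link(capsfilter)
    capsfilter.link(encoder)
    encoder.link(pay)
    pay.link(sink)

    pipeline.set_state(Gst.State.PLAYING)
    loop = GLib.MainLoop()
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main(sys.argv[1])

With this layout the existing consumer command above should keep working unchanged, since the sketch sends the same RTP/H264 payload to 127.0.0.1:5000.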

You can try to upgrade the GLib version by referring to the FAQ, or you can just try our latest version, DS 7.0.

It’s actually working now

Glad to hear that. You can post the solution so others can refer to it. Thanks.

I have followed the FAQ to update GLib 2.0, and I'm still using DeepStream 6.4.
Now the code simply works.

Thanks for the help. Now I have tried to add the tracker, which should link something like this:

# decoder src pad -> streammux request sink pad
srcpad.link(sinkpad)
# batched frames -> inference -> tracking -> conversion -> OSD -> display
streammux.link(pgie)
pgie.link(tracker)
tracker.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(sink)
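
For context, here is a sketch of how the tracker itself can be created and configured before those links (the low-level library and YAML config paths are assumptions based on the DeepStream 6.4 sample layout):

tracker = Gst.ElementFactory.make("nvtracker", "tracker")
# NvDCF expects tracker-width/height to be multiples of 32
tracker.set_property("tracker-width", 640)
tracker.set_property("tracker-height", 384)
tracker.set_property("gpu-id", 0)
# Assumed paths: the stock multi-object tracker shipped with DeepStream 6.4
tracker.set_property("ll-lib-file",
    "/opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_nvmultiobjecttracker.so")
tracker.set_property("ll-config-file",
    "/opt/nvidia/deepstream/deepstream-6.4/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml")
pipeline.add(tracker)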

A new window is created to show the output, but it suddenly closes.
The console gives the Aborted (core dumped) error.

According to the description, this is a new problem; please open a new topic to discuss it.

We are facing the same issue on DS 7.0 with RTSP streams. We did upgrade GLib, but it only works when we delete the gio/modules folder. NVIDIA's solution is not complete. Why didn't you include that in the DeepStream 7.0 image you released, anyway?
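
A possible alternative to deleting the folder, sketched from the backtrace earlier in this thread (the abort happens inside libproxy's px_proxy_factory_get_proxies, loaded through the libgiolibproxy GIO module), is to force GIO's dummy proxy resolver before gi is imported. GIO_USE_PROXY_RESOLVER is a standard GIO override, but whether it fully avoids the crash here is untested:

import os
# Must be set before "import gi": once GIO has loaded the libproxy
# module (libgiolibproxy.so), switching the resolver is too late.
os.environ["GIO_USE_PROXY_RESOLVER"] = "dummy"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst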

Do you mean the crash problem? That is fixed in DeepStream 7.0. You can open a new topic to describe your problems in detail.

It is the same problem as above. It is not fixed in DS 7.0.

Please file a new topic and describe in detail how you are running it: what platform you are on, whether in Docker or on the host, whether you use our demo or your own code, etc.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.