Weird problem with a script that originally came from a Jetson, DS 6.0

… and which is now supposed to run under DS 6.4 on an AWS T4 instance.

Verdict: It crashes.

Positive reference: deepstream-rtsp-in-rtsp-out works perfectly.

In my desperation I have now carved out most of my old pipeline code and replaced it with the sample’s pipeline handling. The crashes don’t go away.

I hesitate to throw all my past work away and just adopt the working sample merely because I’m unable to fix my own script, but at the moment I don’t know what I’m doing wrong.

Script is started with:

python3 ./inference.py rtsp://127.0.0.1:8554/stream-inference -ml resnet18-trafficcamnet -ll info

and is supposed to read from a local RTSP server stream. The same setup works with the example.
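For context, the source hookup follows the sample’s pattern: uridecodebin’s dynamic pad is linked to a requested nvstreammux sink pad, roughly like this (a simplified sketch, not the literal code):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
source.set_property("uri", "rtsp://127.0.0.1:8554/stream-inference")

def on_pad_added(decodebin, pad, mux):
    # A real implementation inspects the pad caps (video, NVMM memory)
    # before linking; that check is omitted here for brevity.
    sinkpad = mux.request_pad_simple("sink_0")  # get_request_pad() is deprecated
    pad.link(sinkpad)

source.connect("pad-added", on_pad_added, streammux)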

What I get is:

0:00:02.553421668 84899 0x555556bc36f0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files

and then

Thread 8 "pool-python3" received signal SIGABRT, Aborted.

The GDB view doesn’t make it any clearer what’s happening:

2024-04-09 13:36:25,837 inference.py         INFO    : Inference v.0.1.0
2024-04-09 13:36:25,846 validation.py        INFO    : Inference configuration file: /tmp/tmp06e8b__n
2024-04-09 13:36:25,846 validation.py        INFO    : model-engine-file does not exist, skip entry
[Detaching after vfork from child process 84907]
2024-04-09 13:36:25,849 inference.py         INFO    : DeepStream SDK version: 6.4.0

[New Thread 0x7fffec9ff640 (LWP 84908)]
/home/ubuntu/ai/inference/./inference.py:332: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  sinkpad = streammux.get_request_pad(padname)
[New Thread 0x7fffa5b5e640 (LWP 84938)]
WARNING: Overriding infer-config batch-size 30  with number of sources  1  

Starting pipeline
[New Thread 0x7fffa43ff640 (LWP 84939)]
[New Thread 0x7fffa3612640 (LWP 84940)]
0:00:02.553421668 84899 0x555556bc36f0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
[New Thread 0x7fffa2e11640 (LWP 84941)]
[New Thread 0x7fffa2610640 (LWP 84942)]
[New Thread 0x7fffa1e0f640 (LWP 84943)]
[New Thread 0x7fffa160e640 (LWP 84944)]

Thread 8 "pool-python3" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffa1e0f640 (LWP 84943)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140735909262912) at ./nptl/pthread_kill.c:44
44      ./nptl/pthread_kill.c: No such file or directory.
(gdb) thread apply all bt

Thread 9 (Thread 0x7fffa160e640 (LWP 84944) "gdbus"):
#0  0x00007ffff7d18bcf in __GI___poll (fds=0x7fff68001de0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff60d31f6 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007ffff607d2b3 in g_main_loop_run () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007ffff5e6b07a in  () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#4  0x00007ffff60aca51 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x7fffa1e0f640 (LWP 84943) "pool-python3"):
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140735909262912) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=140735909262912) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=140735909262912, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007ffff7c42476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007ffff7c287f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x00007ffff64a2692 in  () at /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffff64ad9da in __gxx_personality_v0 () at /lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00007ffff5824fe9 in __libunwind_Unwind_Resume () at /lib/x86_64-linux-gnu/libunwind.so.8
#8  0x00007fffb0c2d86d in  () at /lib/x86_64-linux-gnu/libproxy.so.1
#9  0x00007fffb0c36827 in px_proxy_factory_get_proxies () at /lib/x86_64-linux-gnu/libproxy.so.1
#10 0x00007fffec0a9827 in  () at /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
#11 0x00007ffff5e08194 in  () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#12 0x00007ffff60af6b4 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#13 0x00007ffff60aca51 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#14 0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#15 0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7fffa2610640 (LWP 84942) "dconf worker"):
#0  0x00007ffff7d18bcf in __GI___poll (fds=0x7fff70001de0, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff60d31f6 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007ffff607b3e3 in g_main_context_iteration () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007fffd3d5d33d in  () at /usr/lib/x86_64-linux-gnu/gio/modules/libdconfsettings.so
#4  0x00007ffff60aca51 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7fffa2e11640 (LWP 84941) "gmain"):
#0  0x00007ffff7d18bcf in __GI___poll (fds=0x7fff78010d50, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff60d31f6 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007ffff607b3e3 in g_main_context_iteration () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#3  0x00007ffff607b431 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#4  0x00007ffff60aca51 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#6  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 5 (Thread 0x7fffa3612640 (LWP 84940) "task0"):
#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
#1  0x00007ffff60ccb43 in g_cond_wait () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#2  0x00007ffff5e0708b in g_task_run_in_thread_sync () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#3  0x00007fffec0a9a75 in  () at /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so
#4  0x00007ffff5ded1af in  () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#5  0x00007ffff5dfe18b in g_socket_client_connect () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#6  0x00007ffff5dfeadf in g_socket_client_connect_to_uri () at /lib/x86_64-linux-gnu/libgio-2.0.so.0
#7  0x00007fffdab60880 in gst_rtsp_connection_connect_with_response_usec () at /lib/x86_64-linux-gnu/libgstrtsp-1.0.so.0
#8  0x00007fffdab61297 in gst_rtsp_connection_connect_usec () at /lib/x86_64-linux-gnu/libgstrtsp-1.0.so.0
#9  0x00007fffdb0f4070 in  () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstrtspclientsink.so
#10 0x00007fffdb0f8fb6 in  () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstrtspclientsink.so
#11 0x00007fffdb0fb698 in  () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstrtspclientsink.so
#12 0x00007ffff58ef127 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#13 0x00007ffff60af6b4 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#14 0x00007ffff60aca51 in  () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#15 0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#16 0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 4 (Thread 0x7fffa43ff640 (LWP 84939) "cuda-EvtHandlr"):
#0  0x00007ffff7d18bcf in __GI___poll (fds=0x7fff8c000c20, nfds=11, timeout=100) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff0ab8bef in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#2  0x00007ffff0b7be5f in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#3  0x00007ffff0ab1cdf in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#4  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#5  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 3 (Thread 0x7fffa5b5e640 (LWP 84938) "cuda-EvtHandlr"):
#0  0x00007ffff7d18bcf in __GI___poll (fds=0x555556bc3ee0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007ffff0ab8bef in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#2  0x00007ffff0b7be5f in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#3  0x00007ffff0ab1cdf in  () at /lib/x86_64-linux-gnu/libcuda.so.1
#4  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#5  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 2 (Thread 0x7fffec9ff640 (LWP 84908) "python3"):
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffec9feb40, op=137, expected=0, futex_word=0x7ffff49d2170 <g_nvbuf_res_storage+80>) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (cancel=true, private=0, abstime=0x7fffec9feb40, clockid=0, expected=0, futex_word=0x7ffff49d2170 <g_nvbuf_res_storage+80>) at ./nptl/futex-internal.c:87
#2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7ffff49d2170 <g_nvbuf_res_storage+80>, expected=expected@entry=0, clockid=clockid@entry=1, abstime=abstime@entry=0x7fffec9feb40, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007ffff7c942dd in __pthread_cond_wait_common (abstime=0x7fffec9feb40, clockid=1, mutex=0x7ffff49d2120 <g_nvbuf_res_storage>, cond=0x7ffff49d2148 <g_nvbuf_res_storage+40>) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_clockwait64 (abstime=0x7fffec9feb40, clockid=1, mutex=0x7ffff49d2120 <g_nvbuf_res_storage>, cond=0x7ffff49d2148 <g_nvbuf_res_storage+40>) at ./nptl/pthread_cond_wait.c:691
#5  ___pthread_cond_clockwait64 (cond=0x7ffff49d2148 <g_nvbuf_res_storage+40>, mutex=0x7ffff49d2120 <g_nvbuf_res_storage>, clockid=1, abstime=0x7fffec9feb40) at ./nptl/pthread_cond_wait.c:679
#6  0x00007ffff2de2c65 in NvBufResStorage::taskTrim(NvBufResStorage*) () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurftransform.so
#7  0x00007ffff64dc253 in  () at /lib/x86_64-linux-gnu/libstdc++.so.6
#8  0x00007ffff7c94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#9  0x00007ffff7d26850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7ffff7e71480 (LWP 84899) "python3"):
#0  0x00007ffff7ca48b0 in _int_malloc (av=av@entry=0x7ffff7e1ac80 <main_arena>, bytes=bytes@entry=904) at ./malloc/malloc.c:4382
#1  0x00007ffff7ca5139 in __GI___libc_malloc (bytes=904) at ./malloc/malloc.c:3329
#2  0x00007ffff64ae98c in operator new(unsigned long) () at /lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007fffe22737d5 in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#4  0x00007fffe2273fa7 in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#5  0x00007fffe2264590 in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#6  0x00007fffe2137681 in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#7  0x00007fffe2137629 in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#8  0x00007ffff7c99ee8 in __pthread_once_slow (once_control=0x7fffeb987760, init_routine=0x7ffff64dad50 <__once_proxy>) at ./nptl/pthread_once.c:116
#9  0x00007fffde63e21c in  () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#10 0x00007fffde3b63a5 in createInferBuilder_INTERNAL () at /lib/x86_64-linux-gnu/libnvinfer.so.8
#11 0x00007ffff03a6611 in  () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
#12 0x00007ffff0380eed in nvdsinfer::NvDsInferContextImpl::buildModel(_NvDsInferContextInitParams&) () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
#13 0x00007ffff0381c03 in nvdsinfer::NvDsInferContextImpl::generateBackendContext(_NvDsInferContextInitParams&) () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
#14 0x00007ffff0388af4 in nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
#15 0x00007ffff038946e in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () at ///opt/nvidia/deepstream/deepstream-6.4/lib/libnvds_infer.so
#16 0x00007ffff07d0d25 in  () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#17 0x00007ffff2941441 in  () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#18 0x00007ffff2941675 in  () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#19 0x00007ffff58bd73f in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#20 0x00007ffff58bdfec in gst_pad_set_active () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#21 0x00007ffff58a5155 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#22 0x00007ffff58ae36b in gst_iterator_fold () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#23 0x00007ffff591c29a in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#24 0x00007ffff58a2a56 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#25 0x00007ffff58a5410 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#26 0x00007ffff58a4729 in gst_element_change_state () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#27 0x00007ffff58a4e35 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#28 0x00007ffff587d7ec in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#29 0x00007ffff58d0796 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#30 0x00007ffff58a4729 in gst_element_change_state () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#31 0x00007ffff58a476f in gst_element_change_state () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#32 0x00007ffff58a4e35 in  () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#33 0x00007ffff6634e2e in  () at /lib/x86_64-linux-gnu/libffi.so.8
#34 0x00007ffff6631493 in  () at /lib/x86_64-linux-gnu/libffi.so.8
#35 0x00007ffff619c722 in  () at /usr/lib/python3/dist-packages/gi/_gi.cpython-310-x86_64-linux-gnu.so
#36 0x00007ffff619a826 in  () at /usr/lib/python3/dist-packages/gi/_gi.cpython-310-x86_64-linux-gnu.so
#37 0x00007ffff619aa7d in  () at /usr/lib/python3/dist-packages/gi/_gi.cpython-310-x86_64-linux-gnu.so
#38 0x00005555556a4a7b in _PyObject_MakeTpCall ()
#39 0x000055555569d629 in _PyEval_EvalFrameDefault ()
#40 0x00005555556ae9fc in _PyFunction_Vectorcall ()
#41 0x000055555569745c in _PyEval_EvalFrameDefault ()
#42 0x00005555556ae9fc in _PyFunction_Vectorcall ()
#43 0x000055555569726d in _PyEval_EvalFrameDefault ()
#44 0x00005555556939c6 in  ()
#45 0x0000555555789256 in PyEval_EvalCode ()
#46 0x00005555557b4108 in  ()
#47 0x00005555557ad9cb in  ()
#48 0x00005555557b3e55 in  ()
#49 0x00005555557b3338 in _PyRun_SimpleFileObject ()
#50 0x00005555557b2f83 in _PyRun_AnyFileObject ()
#51 0x00005555557a5a5e in Py_RunMain ()
#52 0x000055555577c02d in Py_BytesMain ()
#53 0x00007ffff7c29d90 in __libc_start_call_main (main=main@entry=0x55555577bff0, argc=argc@entry=7, argv=argv@entry=0x7fffffffe108) at ../sysdeps/nptl/libc_start_call_main.h:58
#54 0x00007ffff7c29e40 in __libc_start_main_impl (main=0x55555577bff0, argc=7, argv=0x7fffffffe108, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe0f8) at ../csu/libc-start.c:392
#55 0x000055555577bf25 in _start ()
(gdb) 

Here is the config read from a tmp file:

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
tlt-model-key=tlt_encode
tlt-encoded-model=/home/ubuntu/ai/inference/models/primary_detector/resnet18-trafficcamnet/resnet18_trafficcamnet.etlt
labelfile-path=/home/ubuntu/ai/inference/models/primary-detector/resnet18-trafficcamnet/labels.txt
int8-calib-file=/home/ubuntu/ai/inference/models/primary-detector/resnet18-trafficcamnet/cal_trt.bin
force-implicit-batch-dim=1
batch-size=30
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
cluster-mode=2
infer-dims=3;544;960
operate-on-class-ids=2
filter-out-class-ids=0;1;3;
[class-attrs-all]
pre-cluster-threshold=0.5
eps=0.5
group-threshold=1
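For completeness: the config is written to a temp file and handed to nvinfer via its config-file-path property, roughly like this (a simplified sketch, not the literal code):

import tempfile

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# config_text stands for the [property] block shown above.
config_text = "[property]\ngpu-id=0\n"  # truncated for the sketch

# Write the generated config to a temp file and point nvinfer at it.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(config_text)
    config_path = f.name

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", config_path)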

Please help, I’m a bit lost.

Is the JetPack version upgraded when you upgrade DeepStream from 6.0 to 6.4?

I’m running on an AWS instance now, so JetPack isn’t required (?)

OK. Can you check your OS compatibility with DeepStream 6.4, i.e. the CUDA version, TensorRT version, and driver version?
dGPU model Platform and OS Compatibility

I think we can stop here. I took the opportunity to rebuild it from scratch and it looks good so far.

Oh, I think I found the reason for the crash, and you should be able to follow along.

In my module I wasn’t using the “common” module. I don’t know how or why this can cause a crash, but do me a favour and try it with your deepstream-rtsp-in-rtsp-out sample.

Comment out the line

from common.bus_call import bus_call

and

bus.connect("message", bus_call, loop)

It should crash.
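To be explicit, the repro amounts to this fragment of the sample with the two lines commented out (a self-contained sketch, not the full sample):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline()
loop = GLib.MainLoop()

# from common.bus_call import bus_call        # <- commented out

bus = pipeline.get_bus()
bus.add_signal_watch()
# bus.connect("message", bus_call, loop)      # <- commented out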

It must have to do with this import sequence, which comes later in my script…

import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Please refer to our project and do the Python bindings first.
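That is, something like this at the very top of the main script, before anything else touches GStreamer (a minimal sketch of the recommended ordering):

# Do the PyGObject binding first, before importing any module that uses Gst.
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# ...only then build the rest of the pipeline...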

Well, I did.

  • Use your deepstream-rtsp-in-rtsp-out.py sample script
  • Comment out the line from common.bus_call import bus_call
  • Comment out the line bus.connect("message", bus_call, loop)
  • Run the script
  • It will crash

The only difference to an unchanged script (besides the fact that messages are not handled now) is that these lines

import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import Gst

are no longer executed very early (with the import of bus_call), and this seems to be a problem, even though I don’t understand why, because they are repeated in the calling script a couple of lines later. It must be some weird runtime/import-ordering problem.

But the crash happens during the construction of the pgie elements.

I’m just seeking confirmation that you see this issue too.

Can you describe why you need to comment out bus_call in your scenario? This is the mechanism Gstreamer uses to handle exceptions or various messages. Some messages may not be handled properly after you delete them.

Sure. In my script I had this as well, in my own way, not in an imported module.
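Roughly like this, an inline equivalent of common/bus_call.py (a sketch, not my actual code):

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def on_bus_message(bus, message, loop):
    # Quit the main loop on end-of-stream or error and log warnings,
    # which is essentially what common/bus_call.py does.
    t = message.type
    if t == Gst.MessageType.EOS:
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write(f"Warning: {err}: {debug}\n")
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write(f"Error: {err}: {debug}\n")
        loop.quit()
    return True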

Anyway, let’s recap: what was the reason for this post? I wasn’t able to run a script that was written for DS 5 and 6.0 on a Jetson Nano. It crashed for unknown reasons when building/loading the inference engine.

I gave up messing with this script and started from scratch with your deepstream-rtsp-in-rtsp-out.py as a template. Everything worked fine.

Then I began to remove the (unused) references to common, and boom: there was the crash again.

I couldn’t imagine how the removal of an unused module could crash the app, until I found that this bus_call.py was importing some modules at a very early stage:

import gi
import sys
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Now I had it. I just removed your common module, imported gi and Gst at the same place where common did, and there was no crash anymore.

Then I tried to give you something to reproduce. You can run your script importing common as-is and just remove the line which connects the (optional) bus_call handler: it will work. It will not crash.

But it will crash if you do not import common.bus_call where it sits now: in the first line of the script.

It could be an effect of lazy loading or something, I don’t know. But it is reproducible that way: drop common and replace it with the import statements that are in bus_call.py, and it will work. Drop common and rely on those imports happening a couple of lines later, and it will crash.

Clear?

GStreamer is based on GObject. If you want to use Python with GStreamer, you need the PyGObject module, so you have to use the gi module for the binding.

Thanks. But sorry. You are completely missing the point. Anyway. Just ignore. No issue anymore.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.