Segmentation Fault in DeepStream Application Using gRPC

Hello DeepStream Community,

Details

I have developed a DeepStream-based application that processes RTSP streams, detects specific events, and sends them to a backend over gRPC. It is based on the deepstream-imagedata-multistream test application from DeepStream 6.4 and runs in the deepstream:6.4-triton-multiarch Docker image.

The gRPC server is developed in-house and is unrelated to the gRPC endpoint used by Triton. I use gRPC only to read camera configurations when the app starts and to send detected events to the backend.
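
For context, the gRPC usage amounts to roughly the following minimal sketch; the module, service, and message names here are placeholders for our in-house proto, not the real definitions:

import grpc

# Hypothetical generated stubs; the real proto/service names differ.
import camera_config_pb2 as cfg_pb2
import camera_config_pb2_grpc as cfg_grpc

def load_camera_configs(address="backend:50051"):
    # One-shot call at startup to fetch the list of RTSP cameras.
    with grpc.insecure_channel(address) as channel:
        stub = cfg_grpc.CameraConfigStub(channel)
        response = stub.GetCameras(cfg_pb2.GetCamerasRequest(), timeout=10)
        return [(cam.id, cam.rtsp_uri) for cam in response.cameras]

def send_event(stub, event):
    # Fire-and-forget event report, called whenever a detection occurs.
    stub.ReportEvent(event, timeout=5)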

Working Environment

The application runs without issues on the following hardware:

CPU: Intel Core i7-12700K (12th Gen, 20 threads, up to 5.0 GHz)
RAM: 32 GiB
GPU: NVIDIA GeForce RTX 3090
OS: Ubuntu 22.04.4 LTS
Driver Version: 545.29.06

Problematic Environments

However, the application encounters a Segmentation Fault error on other hardware configurations such as:

Configuration 1:

CPU: Intel Core i3-10100F (4 cores, 8 threads, 3.60 GHz base, 4.30 GHz max)
RAM: 16 GiB
GPU: NVIDIA GeForce RTX 3060 Ti (8 GiB VRAM)
OS: Ubuntu 22.04.3 LTS
Driver Version: 545.29.06

Configuration 2:

CPU: Intel Core i3-10100F (4 cores, 8 threads, 3.60 GHz base, 4.30 GHz max)
RAM: 16 GiB
GPU: NVIDIA GeForce RTX 3080 (10 GiB VRAM)
OS: Ubuntu 20.04.6 LTS
Driver Version: 535.183.01

Error Log

Running the application with gRPC enabled results in the following error (gdb backtrace):

GNU gdb (Ubuntu 12.1-0ubuntu1~22.04) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python3...
(No debugging symbols found in python3)
(gdb) run
Starting program: /usr/bin/python3 app.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffeedff640 (LWP 242)]
[New Thread 0x7fffec5fe640 (LWP 243)]
[New Thread 0x7fffe9dfd640 (LWP 244)]
[New Thread 0x7fffe95fc640 (LWP 245)]
[New Thread 0x7fffe6dfb640 (LWP 246)]
[New Thread 0x7fffe25fa640 (LWP 247)]
[New Thread 0x7fffe1df9640 (LWP 248)]

[New Thread 0x7fffd2dbf640 (LWP 249)]
Loading settings from gRPC...

[New Thread 0x7fffd25be640 (LWP 250)]
[New Thread 0x7fffd1dbd640 (LWP 251)]
[New Thread 0x7fffd15bc640 (LWP 252)]
[New Thread 0x7fffd0dbb640 (LWP 253)]
[New Thread 0x7fffbbfff640 (LWP 254)]
[New Thread 0x7fffbb7fe640 (LWP 255)]
[New Thread 0x7fffbaffd640 (LWP 256)]
[New Thread 0x7fffba7fc640 (LWP 257)]
[New Thread 0x7fffb9ffb640 (LWP 258)]
[New Thread 0x7fffb97fa640 (LWP 259)]
[New Thread 0x7fffb8ff9640 (LWP 260)]
[New Thread 0x7fff9bfff640 (LWP 261)]
[Thread 0x7fffb8ff9640 (LWP 260) exited]
[Thread 0x7fffb97fa640 (LWP 259) exited]
[Thread 0x7fff9bfff640 (LWP 261) exited]
[Thread 0x7fffb9ffb640 (LWP 258) exited]
[Thread 0x7fffba7fc640 (LWP 257) exited]
[Thread 0x7fffbaffd640 (LWP 256) exited]
[Thread 0x7fffbb7fe640 (LWP 255) exited]
[Thread 0x7fffbbfff640 (LWP 254) exited]
[Thread 0x7fffd0dbb640 (LWP 253) exited]
[Thread 0x7fffd15bc640 (LWP 252) exited]
[Thread 0x7fffd25be640 (LWP 250) exited]
[Thread 0x7fffd1dbd640 (LWP 251) exited]
[New Thread 0x7fffd1dbd640 (LWP 262)]
[New Thread 0x7fffd25be640 (LWP 263)]
[New Thread 0x7fffd15bc640 (LWP 264)]
[New Thread 0x7fff9bfff640 (LWP 265)]
[New Thread 0x7fffd0dbb640 (LWP 266)]
[New Thread 0x7fffbbfff640 (LWP 267)]
[New Thread 0x7fffbb7fe640 (LWP 268)]
[New Thread 0x7fffbaffd640 (LWP 269)]
[New Thread 0x7fffba7fc640 (LWP 270)]
[New Thread 0x7fffb9ffb640 (LWP 271)]
[New Thread 0x7fffb97fa640 (LWP 272)]
[New Thread 0x7fffb8ff9640 (LWP 273)]
[Thread 0x7fffb8ff9640 (LWP 273) exited]
[Thread 0x7fffb97fa640 (LWP 272) exited]
[Thread 0x7fffb9ffb640 (LWP 271) exited]
[Thread 0x7fffba7fc640 (LWP 270) exited]
[Thread 0x7fffbaffd640 (LWP 269) exited]
[Thread 0x7fffbb7fe640 (LWP 268) exited]
[Thread 0x7fffbbfff640 (LWP 267) exited]
[Thread 0x7fffd0dbb640 (LWP 266) exited]
[Thread 0x7fff9bfff640 (LWP 265) exited]
[Thread 0x7fffd15bc640 (LWP 264) exited]
[Thread 0x7fffd25be640 (LWP 263) exited]
[Thread 0x7fffd1dbd640 (LWP 262) exited]
Creating streammux

[New Thread 0x7fffd25be640 (LWP 274)]
At least one of the sources is live

Create and link source bins to streammux

Creating source_bin for URI Name: "rtsp://camera_stream_url"

source-bin-00
Creating queue1

Creating nvvidconv

Creating queue2

Creating filter

Creating queue3

Creating Pgie

Updated pre-cluster-threshold to 0.699999988079071 and engine file to ../model_b1_gpu0_fp16.engine
[New Thread 0x7fffd1dbd640 (LWP 275)]
Creating queue4

Creating Tracker

Creating queue5

Creating nvvidconv

Creating queue6

Creating filter

Creating queue7

Creating EGLSink

Creating tiler

Creating queue8

Creating nvvidconv

Creating queue9

Creating nvosd

Creating queue10

Now playing...
rtsp://camera_stream_url"

Starting pipeline...

[New Thread 0x7fffbaffd640 (LWP 276)]
[New Thread 0x7fffbbfff640 (LWP 277)]
[New Thread 0x7fff48919640 (LWP 278)]
[New Thread 0x7fff43fff640 (LWP 279)]
[New Thread 0x7fff437fe640 (LWP 280)]
[New Thread 0x7fff42ffd640 (LWP 281)]
[New Thread 0x7fff427fc640 (LWP 282)]
[New Thread 0x7fff41ffb640 (LWP 283)]
[New Thread 0x7fff417fa640 (LWP 284)]
gstnvtracker: Loading low-level lib at libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
[New Thread 0x7fff40d59640 (LWP 285)]
[New Thread 0x7fff11ffd640 (LWP 286)]
[New Thread 0x7fff117fc640 (LWP 287)]
0:00:05.902136448   233 0x7fffccb42180 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/HDetect/model_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT classes         8400x1          

0:00:06.040096266   233 0x7fffccb42180 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/HDetect/model_b1_gpu0_fp16.engine
[New Thread 0x7fff10ffb640 (LWP 288)]
[New Thread 0x7fff09fff640 (LWP 289)]
[New Thread 0x7fff097fe640 (LWP 290)]
0:00:06.044735815   233 0x7fffccb42180 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:pgie/yolov8_config.txt sucessfully
[New Thread 0x7fff08ffd640 (LWP 291)]
[New Thread 0x7ffefbfff640 (LWP 292)]
[New Thread 0x7ffefb7fe640 (LWP 293)]
Decodebin child added: source 

[New Thread 0x7ffefaffd640 (LWP 294)]
[New Thread 0x7ffefa7fc640 (LWP 295)]
[New Thread 0x7ffef9ffb640 (LWP 296)]
[New Thread 0x7ffef97fa640 (LWP 297)]
[Thread 0x7fffe1df9640 (LWP 248) exited]
[Thread 0x7fffe25fa640 (LWP 247) exited]
[Thread 0x7fffe6dfb640 (LWP 246) exited]
[Thread 0x7fffe95fc640 (LWP 245) exited]
[Thread 0x7fffe9dfd640 (LWP 244) exited]
[Thread 0x7fffec5fe640 (LWP 243) exited]
[Thread 0x7fffeedff640 (LWP 242) exited]
[Detaching after fork from child process 298]
[New Thread 0x7fffe1df9640 (LWP 299)]
[New Thread 0x7fffe25fa640 (LWP 300)]
[New Thread 0x7fffe6dfb640 (LWP 301)]
[New Thread 0x7fffe95fc640 (LWP 302)]
[New Thread 0x7fffec5fe640 (LWP 303)]
[New Thread 0x7fffe9dfd640 (LWP 304)]
[New Thread 0x7ffef8ff9640 (LWP 305)]
[New Thread 0x7ffed3fff640 (LWP 306)]
[New Thread 0x7ffed37fe640 (LWP 307)]
[New Thread 0x7ffed2ffd640 (LWP 308)]
[New Thread 0x7ffed27fc640 (LWP 309)]
[New Thread 0x7ffed1ffb640 (LWP 310)]
Decodebin child added: decodebin0 

Decodebin child added: rtppcmadepay0 

Decodebin child added: alawdec0 

In cb_newpad:

gstname :  audio/x-raw
[New Thread 0x7ffed17fa640 (LWP 311)]
[New Thread 0x7ffed0ff9640 (LWP 312)]
[New Thread 0x7ffecbfff640 (LWP 313)]
[New Thread 0x7ffecb7fe640 (LWP 314)]
[New Thread 0x7ffecaffd640 (LWP 315)]
[New Thread 0x7ffeca7fc640 (LWP 316)]
[New Thread 0x7ffec9ffb640 (LWP 317)]
[New Thread 0x7ffec97fa640 (LWP 318)]
[New Thread 0x7ffec8ff9640 (LWP 319)]
[New Thread 0x7ffec1fff640 (LWP 320)]
[New Thread 0x7ffec17fe640 (LWP 321)]
[New Thread 0x7ffec0ffd640 (LWP 322)]
[New Thread 0x7ffeaffff640 (LWP 323)]
[New Thread 0x7ffeaf7fe640 (LWP 324)]
[Thread 0x7ffeaffff640 (LWP 323) exited]
[Thread 0x7ffeaf7fe640 (LWP 324) exited]
[Thread 0x7ffec0ffd640 (LWP 322) exited]
[Thread 0x7ffec17fe640 (LWP 321) exited]
[Thread 0x7ffec1fff640 (LWP 320) exited]
[Thread 0x7ffec8ff9640 (LWP 319) exited]
[Thread 0x7ffec9ffb640 (LWP 317) exited]
[Thread 0x7ffeca7fc640 (LWP 316) exited]
[Thread 0x7ffecaffd640 (LWP 315) exited]
[Thread 0x7ffecb7fe640 (LWP 314) exited]
[Thread 0x7ffecbfff640 (LWP 313) exited]
[Thread 0x7ffec97fa640 (LWP 318) exited]
[New Thread 0x7ffecb7fe640 (LWP 325)]
[New Thread 0x7ffec97fa640 (LWP 326)]
[New Thread 0x7ffecbfff640 (LWP 327)]
[New Thread 0x7ffeca7fc640 (LWP 328)]
[New Thread 0x7ffeaf7fe640 (LWP 329)]
[New Thread 0x7ffecaffd640 (LWP 330)]
[New Thread 0x7ffec9ffb640 (LWP 331)]
[New Thread 0x7ffec8ff9640 (LWP 332)]
[New Thread 0x7ffec1fff640 (LWP 333)]
[New Thread 0x7ffec17fe640 (LWP 334)]
[New Thread 0x7ffec0ffd640 (LWP 335)]
[New Thread 0x7ffeaffff640 (LWP 336)]
[Thread 0x7ffeaffff640 (LWP 336) exited]
[Thread 0x7ffec0ffd640 (LWP 335) exited]
[Thread 0x7ffec17fe640 (LWP 334) exited]
[Thread 0x7ffec1fff640 (LWP 333) exited]
[Thread 0x7ffec8ff9640 (LWP 332) exited]
[Thread 0x7ffec9ffb640 (LWP 331) exited]
[Thread 0x7ffecaffd640 (LWP 330) exited]
[Thread 0x7ffeaf7fe640 (LWP 329) exited]
[Thread 0x7ffeca7fc640 (LWP 328) exited]
[Thread 0x7ffecbfff640 (LWP 327) exited]
[Thread 0x7ffec97fa640 (LWP 326) exited]
[Thread 0x7ffecb7fe640 (LWP 325) exited]
[New Thread 0x7ffec97fa640 (LWP 337)]
[New Thread 0x7ffecb7fe640 (LWP 338)]
[New Thread 0x7ffecbfff640 (LWP 339)]
[New Thread 0x7ffeaffff640 (LWP 340)]
[New Thread 0x7ffecaffd640 (LWP 341)]
[New Thread 0x7ffeca7fc640 (LWP 342)]
[New Thread 0x7ffec9ffb640 (LWP 343)]
[New Thread 0x7ffec8ff9640 (LWP 344)]
[New Thread 0x7ffec1fff640 (LWP 345)]
[New Thread 0x7ffec17fe640 (LWP 346)]
[New Thread 0x7ffec0ffd640 (LWP 347)]
[New Thread 0x7ffeaf7fe640 (LWP 348)]
[Thread 0x7ffec0ffd640 (LWP 347) exited]
[Thread 0x7ffeaf7fe640 (LWP 348) exited]
[Thread 0x7ffec17fe640 (LWP 346) exited]
[Thread 0x7ffec1fff640 (LWP 345) exited]
[Thread 0x7ffec8ff9640 (LWP 344) exited]
[Thread 0x7ffec9ffb640 (LWP 343) exited]
[Thread 0x7ffeca7fc640 (LWP 342) exited]
[Thread 0x7ffecaffd640 (LWP 341) exited]
[Thread 0x7ffeaffff640 (LWP 340) exited]
[Thread 0x7ffecbfff640 (LWP 339) exited]
[Thread 0x7ffecb7fe640 (LWP 338) exited]
[Thread 0x7ffec97fa640 (LWP 337) exited]
[New Thread 0x7ffecb7fe640 (LWP 349)]
[New Thread 0x7ffec97fa640 (LWP 350)]
[New Thread 0x7ffecbfff640 (LWP 351)]
[New Thread 0x7ffeaf7fe640 (LWP 352)]
[New Thread 0x7ffecaffd640 (LWP 353)]
[New Thread 0x7ffeca7fc640 (LWP 354)]
[New Thread 0x7ffec9ffb640 (LWP 355)]
[New Thread 0x7ffec8ff9640 (LWP 356)]
[New Thread 0x7ffec1fff640 (LWP 357)]
[New Thread 0x7ffec17fe640 (LWP 358)]
[New Thread 0x7ffec0ffd640 (LWP 359)]
[New Thread 0x7ffeaffff640 (LWP 360)]
[Thread 0x7ffec0ffd640 (LWP 359) exited]
[Thread 0x7ffeaffff640 (LWP 360) exited]
[Thread 0x7ffec17fe640 (LWP 358) exited]
[Thread 0x7ffec1fff640 (LWP 357) exited]
[Thread 0x7ffec8ff9640 (LWP 356) exited]
[Thread 0x7ffec9ffb640 (LWP 355) exited]
[Thread 0x7ffeca7fc640 (LWP 354) exited]
[Thread 0x7ffecaffd640 (LWP 353) exited]
[Thread 0x7ffeaf7fe640 (LWP 352) exited]
[Thread 0x7ffecbfff640 (LWP 351) exited]
[Thread 0x7ffec97fa640 (LWP 350) exited]
[Thread 0x7ffecb7fe640 (LWP 349) exited]
Decodebin child added: decodebin1 

Decodebin child added: rtph265depay0 

Decodebin child added: h265parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

[New Thread 0x7ffec97fa640 (LWP 361)]
In cb_newpad:

gstname :  video/x-raw
features :  <Gst.CapsFeatures object at 0x7fffd2e2ce80 (GstCapsFeatures at 0x7ffe2001e100)>
Decodebin picked nvidia decoder plugin successfully!

[New Thread 0x7ffecb7fe640 (LWP 362)]
[New Thread 0x7ffecbfff640 (LWP 363)]
[Thread 0x7ffecbfff640 (LWP 363) exited]
[New Thread 0x7ffeaffff640 (LWP 364)]
[New Thread 0x7ffeca3ff640 (LWP 365)]
[New Thread 0x7ffeaca79640 (LWP 366)]
[New Thread 0x7ffea5fff640 (LWP 367)]
[New Thread 0x7ffea57fe640 (LWP 368)]
[New Thread 0x7ffea4ffd640 (LWP 369)]
[New Thread 0x7ffe91fff640 (LWP 370)]
[New Thread 0x7ffe917fe640 (LWP 371)]
[New Thread 0x7ffe90ffd640 (LWP 372)]
[New Thread 0x7ffe81fff640 (LWP 373)]
[New Thread 0x7ffe817fe640 (LWP 374)]
[New Thread 0x7ffe80ffd640 (LWP 375)]
[New Thread 0x7ffe7bfff640 (LWP 376)]
[New Thread 0x7ffe7b7fe640 (LWP 377)]
[Thread 0x7ffe7bfff640 (LWP 376) exited]
[Thread 0x7ffe7b7fe640 (LWP 377) exited]
[Thread 0x7ffe80ffd640 (LWP 375) exited]
[Thread 0x7ffe817fe640 (LWP 374) exited]
[Thread 0x7ffe81fff640 (LWP 373) exited]
[Thread 0x7ffe90ffd640 (LWP 372) exited]
[Thread 0x7ffe917fe640 (LWP 371) exited]
[Thread 0x7ffe91fff640 (LWP 370) exited]
[Thread 0x7ffea4ffd640 (LWP 369) exited]
[Thread 0x7ffea57fe640 (LWP 368) exited]
[Thread 0x7ffea5fff640 (LWP 367) exited]
[Thread 0x7ffeaca79640 (LWP 366) exited]
[New Thread 0x7ffea5fff640 (LWP 378)]
[New Thread 0x7ffeaca79640 (LWP 379)]
[New Thread 0x7ffe90ffd640 (LWP 380)]
[New Thread 0x7ffe80ffd640 (LWP 381)]
[New Thread 0x7ffea57fe640 (LWP 382)]
[New Thread 0x7ffea4ffd640 (LWP 383)]
[New Thread 0x7ffe91fff640 (LWP 384)]
[New Thread 0x7ffe917fe640 (LWP 385)]
[New Thread 0x7ffe81fff640 (LWP 386)]
[New Thread 0x7ffe817fe640 (LWP 387)]
[New Thread 0x7ffe7bfff640 (LWP 388)]
[New Thread 0x7ffe7b7fe640 (LWP 389)]
[Thread 0x7ffe7bfff640 (LWP 388) exited]
[Thread 0x7ffe7b7fe640 (LWP 389) exited]
[Thread 0x7ffe817fe640 (LWP 387) exited]
[Thread 0x7ffe81fff640 (LWP 386) exited]
[Thread 0x7ffe917fe640 (LWP 385) exited]
[Thread 0x7ffe91fff640 (LWP 384) exited]
[Thread 0x7ffea4ffd640 (LWP 383) exited]
[Thread 0x7ffea57fe640 (LWP 382) exited]
[Thread 0x7ffe80ffd640 (LWP 381) exited]
[Thread 0x7ffe90ffd640 (LWP 380) exited]
[Thread 0x7ffeaca79640 (LWP 379) exited]
[Thread 0x7ffea5fff640 (LWP 378) exited]
[New Thread 0x7ffeaca79640 (LWP 390)]
[New Thread 0x7ffea5fff640 (LWP 391)]
[New Thread 0x7ffe90ffd640 (LWP 392)]
[New Thread 0x7ffe7b7fe640 (LWP 393)]
[New Thread 0x7ffea57fe640 (LWP 394)]
[New Thread 0x7ffea4ffd640 (LWP 395)]
[New Thread 0x7ffe91fff640 (LWP 396)]

Thread 52 "queue2:src" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffefbfff640 (LWP 292)]
0x00007fffd9cde5f1 in ?? () from /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so
(gdb) bt
#0  0x00007fffd9cde5f1 in  () at /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so
#1  0x00007fffd9ce32ab in  () at /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so
#2  0x00007fffd9ce36e3 in  () at /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so
#3  0x00007fffd9899fc4 in  () at /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so
#4  0x00005555556ae10e in  ()
#5  0x00005555556a4a7b in _PyObject_MakeTpCall ()
#6  0x000055555569d629 in _PyEval_EvalFrameDefault ()
#7  0x00005555556ae9fc in _PyFunction_Vectorcall ()
#8  0x000055555569745c in _PyEval_EvalFrameDefault ()
#9  0x00005555556ae9fc in _PyFunction_Vectorcall ()
#10 0x00007ffff783954f in  () at /usr/lib/python3/dist-packages/gi/_gi.cpython-310-x86_64-linux-gnu.so
#11 0x00007ffff7fae7ec in  () at /usr/lib/x86_64-linux-gnu/libffi.so.8
#12 0x00007ffff7faf050 in  () at /usr/lib/x86_64-linux-gnu/libffi.so.8
#13 0x00007ffff6f314a6 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#14 0x00007ffff76fd626 in g_hook_list_marshal () at /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#15 0x00007ffff6f31a55 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#16 0x00007ffff6f36c32 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#17 0x00007ffff6f3718e in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#18 0x00007ffff532921f in  () at /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#19 0x00007ffff6f337cd in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#20 0x00007ffff6f36d69 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#21 0x00007ffff6f3718e in gst_pad_push () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#22 0x00007fffd0efb875 in  () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstcoreelements.so
#23 0x00007ffff6f5e127 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#24 0x00007ffff774a6b4 in  () at /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#25 0x00007ffff7747a51 in  () at /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
#26 0x00007ffff7cdeac3 in  () at /usr/lib/x86_64-linux-gnu/libc.so.6
--Type <RET> for more, q to quit, c to continue without paging--c
#27 0x00007ffff7d70a40 in  () at /usr/lib/x86_64-linux-gnu/libc.so.6

Observations

  1. The error occurs consistently across different hardware configurations, and the backtrace always points to /usr/local/lib/python3.10/dist-packages/cv2/cv2.abi3.so.
  2. The application works perfectly when it runs standalone, without gRPC, on the problematic hardware.
  3. Using a different API protocol instead of gRPC also leads to the same error.
  4. If I start the DeepStream application with camera configs read from a JSON file and only switch to gRPC afterwards for event reporting, it works fine. In other words, as long as the pipeline is already running before the gRPC service is first used, there is no problem (see the sketch after this list).
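
To make observation 4 concrete, the only ordering that works reliably on the problematic machines is the one sketched below: configs from a local JSON file, pipeline started, gRPC channel opened only afterwards. All names here are placeholders, and the real pipeline construction and generated stubs are omitted.

# Ordering sketch only; cameras.json, backend:50051 and the stub are placeholders.
import json
import grpc
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

with open("cameras.json") as f:        # local file instead of the gRPC config call
    cameras = json.load(f)

# Stand-in for the real source/pgie/tracker/tiler/osd/sink pipeline built from `cameras`.
pipeline = Gst.parse_launch("videotestsrc ! fakesink")
pipeline.set_state(Gst.State.PLAYING)

# Only now is any gRPC connection created, purely for event reporting.
channel = grpc.insecure_channel("backend:50051")
# event_stub = events_pb2_grpc.EventServiceStub(channel)   # generated stub (placeholder)

GLib.MainLoop().run()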

Suspected Cause

I suspect a thread-safety issue between the gRPC service and GStreamer (DeepStream). It appears that multiple threads may be accessing or modifying shared resources concurrently, causing a race condition that leads to the segmentation fault.
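
If that is the case, one mitigation I am considering is to keep all gRPC traffic on a single dedicated thread and have the GStreamer probe callbacks only enqueue plain Python objects, along the lines of this sketch (the ReportEvent call is a placeholder for our RPC):

import queue
import threading
import grpc

event_queue = queue.Queue()

def grpc_sender(stub):
    # The only thread that ever touches the gRPC channel.
    while True:
        event = event_queue.get()
        if event is None:                       # shutdown sentinel
            break
        try:
            stub.ReportEvent(event, timeout=5)  # placeholder RPC
        except grpc.RpcError as err:
            print("gRPC send failed:", err)

# Started once, right after the channel and stub are created:
# threading.Thread(target=grpc_sender, args=(event_stub,), daemon=True).start()

def on_event_detected(event):
    # Called from the buffer probe; never blocks on the network.
    event_queue.put(event)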

Troubleshooting Steps Taken

  • Tried removing pipeline elements such as queue2 and queue1, but the error persisted and pointed to another element in the pipeline.
  • Updated the NVIDIA driver from version 535 to 545, but the issue remained.

Request for Help

I need assistance in resolving this segmentation fault. Any advice or solution would be highly appreciated.

Thank you for your help!

Could you update DeepStream from 6.4 to 7.0 and try again?

Thank you for your suggestion.

  1. What is your diagnosis for this error, and are you sure that updating to DeepStream 7.0 will solve my problem? I want to be sure before committing to the time-consuming process of installing all dependencies and migrating everything to DeepStream 7. Could you provide more details on how DeepStream 7.0 addresses this specific issue?

  2. Is a test application like deepstream-imagedata-multistream different in DeepStream 7 compared to DeepStream 6.4? Understanding the extent of changes will help me evaluate the potential challenges I might face during the migration process.

Thank you again for your assistance.

No. This comparison can narrow down the problem. You can just use the nvcr.io/nvidia/deepstream:7.0-triton-multiarch Docker image.
Regarding your suspicion, we do have some known thread issues with DeepStream Python on the 6.4 version. You can refer to our FAQ.
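
For reference, and as a rough sketch only (the exact guidance is in the FAQ), the usual pattern in the imagedata samples is to copy the mapped frame and unmap the surface inside the probe before handing anything to OpenCV; unmap_nvds_buf_surface requires a recent pyds release:

import numpy as np
import cv2
import pyds
from gi.repository import Gst

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Mapped view into the NvBufSurface; do not keep it past the probe.
        mapped = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(mapped, copy=True, order='C')   # detach from the surface
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGR)
        # ... any OpenCV work (drawing, saving, encoding) uses frame_copy only ...
        pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK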

Thank you for the information. I will try and get back to you.

I am in the process of migrating my DeepStream-based application from nvcr.io/nvidia/deepstream:6.4-triton-multiarch to nvcr.io/nvidia/deepstream:7.0-samples-multiarch. Based on your suggestion, I am attempting to resolve this issue by upgrading to DeepStream 7.0.

Steps Taken

Following your advice, I set up the DeepStream 7.0 Docker container and performed the same installations I had done in the DeepStream 6.4 container:

cd /opt/nvidia/deepstream/deepstream
./user_additional_install.sh
./user_deepstream_python_apps_install.sh -v 1.1.11

However, I encountered a new problem: even the DeepStream Python test sample deepstream_test_3.py does not process the stream in this DeepStream 7.0 container, whereas it ran without issues in the DeepStream 6.4 container. The output log from running the sample application is as follows:

root@Company-System-Product-Name:/opt/nvidia/deepstream/deepstream-7.0/samples/streams# cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3
root@Company-System-Product-Name:/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3# python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.mp4 
{'input': ['file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.mp4'], 'configfile': None, 'pgie': None, 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Is it Integrated GPU? : 0
Creating EGLSink 

WARNING: Overriding infer-config batch-size 30  with number of sources  1  

Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
0 :  file:///opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.mp4
Starting pipeline 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


**PERF:  {'stream0': 0.0} 


Question

Are there any additional dependencies or configurations required for nvcr.io/nvidia/deepstream:7.0-samples-multiarch that differ from nvcr.io/nvidia/deepstream:6.4-triton-multiarch? If so, could you specify what needs to be installed or configured to run the sample applications successfully?

This will help me ensure my own application runs on DeepStream 7.0 and determine whether the segmentation fault I encountered is resolved.

Environment Details

  • OS: Ubuntu 22.04
  • NVIDIA Driver Version: 545.29.06

I was also wondering whether I need to run

./user_additional_install.sh
./user_deepstream_python_apps_install.sh -v 1.1.11

from /opt/nvidia/deepstream/deepstream-7.0 instead of /opt/nvidia/deepstream/deepstream.

Thank you for your assistance.

The dependencies and compatibility requirements of the two images are a little different. If you are going to develop your own app, we recommend using the nvcr.io/nvidia/deepstream:7.0-triton-multiarch Docker image.

Thank you for your response. I apologize for the typo in my previous message. I am indeed using the nvcr.io/nvidia/deepstream:7.0-triton-multiarch Docker image.

Given this, could you please help me with the issue where the DeepStream Python test sample deepstream_test_3.py does not process the stream on this DeepStream 7.0 Docker container, whereas I have no problem on the DeepStream 6.4 Docker container?

Can you first install the driver and CUDA versions we support on your host? I attached the compatibility information earlier.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Thank you for your continued support. I would appreciate it if you could keep this topic open a bit longer. I’ve recently observed some unusual behavior even after upgrading to DeepStream 7.0. I am in the process of gathering more data and observations to better understand the issue. I will provide additional details shortly.

Your assistance has been invaluable, and I hope to resolve this with your help.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.