Sending messages every frame with msgbroker causes a segmentation fault

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.6 (L4T 32.7.6)
• NVIDIA GPU Driver Version (valid for GPU only): included in JetPack 4.6.6

Hi,
I am using deepstream-test4 to test message sending to an AMQP broker (RabbitMQ) via msgbroker.
Using the sample code, messages are sent for the first object every 30 frames by default.
However, changing the message frequency from every 30 frames to every frame causes the code to core dump.
Why is this happening?

Modified code

            # Ideally NVDS_EVENT_MSG_META should be attached to buffer by the
            # component implementing detection / recognition logic.
            # Here it demonstrates how to use / attach that meta data.
-           if is_first_object and (frame_number % 30) == 0:
+           if (not (frame_number % 30)) == 0:
                # Frequency of messages to be send will be based on use case.
                # Here message is being sent for first object every 30 frames.
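As an aside, the modified condition differs from the original in two ways: it drops the `is_first_object` check (so a message is generated for every detected object), and `(not (frame_number % 30)) == 0` compares a boolean against 0, which is actually true on every frame *not* divisible by 30. A minimal illustration of the three variants in plain Python (no DeepStream required; `every_frame` is my suggested wording for "first object, every frame", not code from the sample):

```python
# Compare the sample's original gating condition, the modified one quoted
# above, and a straightforward "first object, every frame" variant.

def original(frame_number, is_first_object):
    # Sample default: message for the first object, every 30 frames.
    return is_first_object and (frame_number % 30) == 0

def modified(frame_number, is_first_object):
    # The change quoted above. `not (frame_number % 30)` is a bool, so
    # comparing it with 0 inverts the test: this is True on every frame
    # NOT divisible by 30, and for every object (the parameter is unused).
    return (not (frame_number % 30)) == 0

def every_frame(frame_number, is_first_object):
    # A simpler way to send for the first object on every frame.
    return is_first_object

frames = range(60)
print(sum(original(f, True) for f in frames))     # 2  (frames 0 and 30)
print(sum(modified(f, True) for f in frames))     # 58 (all except 0 and 30)
print(sum(every_frame(f, True) for f in frames))  # 60
```

So with the quoted change, messages are generated for every object on almost every frame, which is a much higher message rate than "first object, every frame".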

Execution Log

deepstream_test4 |
deepstream_test4 | Creating Source
deepstream_test4 |
deepstream_test4 | Creating H264Parser
deepstream_test4 |
deepstream_test4 | Creating Decoder
deepstream_test4 |
deepstream_test4 | Creating EGLSink
deepstream_test4 |
deepstream_test4 | Playing file /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264
deepstream_test4 | Adding elements to Pipeline
deepstream_test4 |
deepstream_test4 | Linking elements in the Pipeline
deepstream_test4 |
deepstream_test4 | Starting pipeline
deepstream_test4 |
deepstream_test4 |
deepstream_test4 | Using winsys: x11
rabbitmq | 2025-10-24 04:25:13.520299+00:00 [info] accepting AMQP connection (172.27.0.1:59278 -> 172.27.0.2:5672)
rabbitmq | 2025-10-24 04:25:13.524799+00:00 [info] connection (172.27.0.1:59278 -> 172.27.0.2:5672): user 'guest' authenticated and granted access to vhost '/'
deepstream_test4 | Opening in BLOCKING MODE
deepstream_test4 | 0:00:02.139953896 10 0x182d9a10 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
deepstream_test4 | 0:00:08.185753289 10 0x182d9a10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialized trt engine from :/root/models/resnet10.caffemodel_b1_gpu0_fp16.engine
deepstream_test4 | INFO: [Implicit Engine Info]: layers num: 3
deepstream_test4 | 0 INPUT kFLOAT input_1 3x368x640
deepstream_test4 | 1 OUTPUT kFLOAT conv2d_bbox 16x23x40
deepstream_test4 | 2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
deepstream_test4 |
deepstream_test4 | 0:00:08.187660092 10 0x182d9a10 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() [UID = 1]: Use deserialized engine model: /root/models/resnet10.caffemodel_b1_gpu0_fp16.engine
deepstream_test4 | 0:00:08.417175900 10 0x182d9a10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest4_pgie_config.txt sucessfully
deepstream_test4 | NvMMLiteOpen : Block : BlockType = 261
deepstream_test4 | NVMEDIA: Reading vendor.tegra.display-size : status: 6
deepstream_test4 | NvMMLiteBlockCreate : Block : BlockType = 261
deepstream_test4 | Frame Number = 0 Vehicle Count = 4 Person Count = 1
deepstream_test4 | Frame Number = 1 Vehicle Count = 4 Person Count = 1
deepstream_test4 | Frame Number = 2 Vehicle Count = 4 Person Count = 1
deepstream_test4 | Frame Number = 3 Vehicle Count = 4 Person Count = 1
deepstream_test4 | Frame Number = 4 Vehicle Count = 4 Person Count = 1
rabbitmq | 2025-10-24 04:25:21.290987+00:00 [warning] closing AMQP connection (172.27.0.1:59278 -> 172.27.0.2:5672, vhost: '/', user: 'guest'):
rabbitmq | 2025-10-24 04:25:21.290987+00:00 [warning] client unexpectedly closed TCP connection
deepstream_test4 | /root/docker-entrypoint.sh: line 30: 10 Segmentation fault (core dumped) python3 deepstream_test_4.py -i /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264 -p /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_amqp_proto.so -c cfg_amqp.txt --conn-str="localhost;5672;guest" -t topicname -s 0
deepstream_test4 exited with code 139
^CGracefully stopping... (press Ctrl+C again to force)
Stopping rabbitmq ... done

log.txt (30.8 KB)

From the logs, the client closed the connection. Can this crash issue be reproduced every time? Could you use gdb to get a crash stack?

The number of frames output varies, but the crash issue reproduces every time.
I don’t have knowledge of GDB, so please bear with me for a moment.

I’m sorry for the late reply.

Actually, we confirmed that core dumps occur in the following two patterns:
・Core dumps occurring during frame processing (SIGSEGV)
・Core dumps occurring after video processing completes (SIGABRT)
Using GDB to capture logs and conducting 30 tests, we observed the following signals:

■SIGSEGV

Thread 7 "nvtee-que2:src" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f8892d1f0 (LWP 81)]
0x0000007f500bc960 in ?? ()

■SIGABRT

terminate called after throwing an instance of 'std::bad_optional_access'
  what():  bad optional access

Thread 1 “python3” received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

Test results are as follows:
・SIGSEGV: 14 times (occurred mid-frame)
・SIGABRT: 16 times (occurred after video ended)
Note that when SIGSEGV occurs, the number of frames output will vary.

For reference, I’ve attached two types of logs and the RabbitMQ startup log.
SIGSEGV.log (7.4 KB)
SIGABRT.log (104.7 KB)
rabbimp.log (30.0 KB)

Thanks for sharing! From SIGSEGV.log, the crash is not in DeepStream. From SIGABRT.log, it seems the crash is in nvds_clear_meta_list.

  1. With the same code modification, I am not able to reproduce this issue on DS 8.0. Here is the log: log-1028.txt (81.9 KB). To narrow down this issue: can this crash issue be reproduced every time? Is the patch in the issue description the only modification? Without the code modification, can the issue be reproduced? If using "frame_number % 10", will the issue persist?
  2. The message broker library is open source; the path is /opt/nvidia/deepstream/deepstream-6.0/sources/libs/amqp_protocol_adaptor. Could you test this library separately by referring to the README? You can also modify test_amqp_proto_async.c to send more and longer messages.

Thank you for your response.

This time, I installed Jetpack 4.6 on the Jetson Nano host and am running deepstream_test_4.py via Docker.

1. Narrowing down
・can this crash issue be reproduced every time?
→Yes, it reproduces.
Testing shows it crashes 30 times out of 30 attempts.
Breakdown: SIGABRT 16 times, SIGSEGV 14 times

・Is the patch in the issue description the only modification?
→ Yes, that’s correct.
I’m sending the complete set of files used during execution (deepstream-test4.zip (15.4 KB)).
The startup command is “docker-compose up --build”.

・Without the code modification, can this issue be reproduced?
→ No, it does not occur.

・If using "frame_number % 10", will the issue persist?
→ No, the issue does not persist.
Tested 10 times without crashing.

2. Could you test this library separately by referring to the README?
I ran the test program described in the README.
The results are as follows.

root@H360-0029:/opt/nvidia/deepstream/deepstream-6.0/sources/libs/amqp_protocol_adaptor# ./test_amqp_proto_async 
Adapter protocol=AMQP , version=3.0
connection string queried= 
Connect Success
Message num 0 : send success
Message num 1 : send success
Message num 2 : send success
Message num 3 : send success
Message num 4 : send success
Message num 5 : send success
Message num 6 : send success
Message num 7 : send success
Message num 8 : send success
Message num 9 : send success
root@H360-0029:/opt/nvidia/deepstream/deepstream-6.0/sources/libs/amqp_protocol_adaptor# ./test_amqp_proto_sync
Adapter protocol=AMQP , version=3.0
connection signature queried= 
Successfully sent msg[0] : Hello world
Successfully sent msg[1] : Hello world
Successfully sent msg[2] : Hello world
Successfully sent msg[3] : Hello world
Successfully sent msg[4] : Hello world
Successfully sent msg[5] : Hello world
Successfully sent msg[6] : Hello world
Successfully sent msg[7] : Hello world
Successfully sent msg[8] : Hello world
Successfully sent msg[9] : Hello world
root@H360-0029:/opt/nvidia/deepstream/deepstream-6.0/sources/libs/amqp_protocol_adaptor#
  1. Was "closing AMQP connection" printed before every crash? If so, maybe it is related to this exception.
  2. As you know, DeepStream Python leverages Python bindings over the C version of the DeepStream SDK. To narrow down this issue, could you use /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/ with the same code modification to test sending to the broker? Please share the running log and gdb crash stack if the issue persists.
  1. Closing AMQP connection
    → Yes, it appears every time.

The RabbitMQ documentation has the following description:

Detecting High Connection Churn
High connection churn (lots of connections opened and closed after a brief period of time) can lead to resource exhaustion.

Could this be the cause of the current issue, resulting in the error
"Thread 7 'nvtee-que2:src' received signal SIGSEGV, Segmentation fault."?

  1. Testing with C Sample Application (deepstream-test4)
    → I tested it 10 times and it did not crash even once.
  1. From the crash stack, there are two kinds of error. One occurs mid-frame: the log prints "nvtee-que2:src received signal SIGSEGV," but the stack is empty.
  2. Regarding "closing AMQP connection": to rule out an AMQP client issue, you may try sending via Kafka.
  3. Regarding "I tested it 10 times and it did not crash even once": do you mean that with the same code modification (changing the message frequency from 30 frames to 1 frame) and the same input video, the C version of test4 sent to the broker normally? If so, the issue should be in the Python code. Later deepstream_python_apps versions fixed some binding and usage bugs. Since deepstream_python_apps is open source, you can port the new modifications to v1.1.1 (DS 6.0). Here are the steps:
    1> Download the v1.1.1 code and compare the two versions of deepstream_test_4.py. Replace the v1.1.1 deepstream_test_4.py and cfg_amqp.txt with the master branch versions.
    2> Find "alloc_nvds_event_msg_meta" in the master branch, port the related modifications to the v1.1.1 version, then build and install according to this doc.
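For orientation on why the porting hint centers on "alloc_nvds_event_msg_meta": to the best of my knowledge (please verify against the two deepstream_test_4.py versions), the newer binding takes the acquired user meta as an argument, so copying and freeing of the event message meta is handled inside the binding instead of by Python-side copy/release callbacks. A pseudocode-style sketch of the two probe-function patterns (names as in deepstream_test_4.py; not runnable on its own):

```
# v1.1.1 (DS 6.0) pattern: Python-side copy/release callbacks
msg_meta = pyds.alloc_nvds_event_msg_meta()
user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
if user_event_meta:
    user_event_meta.user_meta_data = msg_meta
    user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
    pyds.user_copyfunc(user_event_meta, meta_copy_func)
    pyds.user_releasefunc(user_event_meta, meta_free_func)
    pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)

# Later pattern: the acquired user meta is passed to the allocator,
# and the copy/release handling moves into the binding itself
user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
if user_event_meta:
    msg_meta = pyds.alloc_nvds_event_msg_meta(user_event_meta)
    # ... fill msg_meta fields ...
    user_event_meta.user_meta_data = msg_meta
    user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
    pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
```

If that is the actual difference, it would also explain why the crash rate scales with message frequency: the Python-side callbacks are exercised once per message.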
  1. Sorry, my mistake.
  2. Since I haven’t worked with Kafka before, it will take some time to verify. Please wait a moment.
  3. Does this mean that using the same code fix (changing the message frequency from 30 frames to 1 frame) and the same input video, the C version of test4 successfully sent to the broker?
    →Yes, that’s correct.
    Porting the new fix to V1.1.1 (DS6.0)
    →Currently verifying, but it’s taking some time. Please wait a moment.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

I’m sorry for the delayed reply.

I have replaced/migrated the files. Is this understanding correct?
・Replaced deepstream_test_4.py and cfg_amqp.txt with the master branch versions
・Searched for “alloc_nvds_event_msg_meta” and migrated it to the v1.1.1 version

deepstream_python_apps
├── bindings/
│   ├── docstrings
│   │   └── pydocumentation.h (migrated from master branch's schemadoc.h)
│   └── src
│       └── bindschema.cpp

I am following the build instructions in the v1.1.1 documentation.
I completed “2.1 Base dependencies Ubuntu - 18.04,”
but I am getting an error in “2.2 Initialization of submodules.”

root@H360-0029:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps# git submodule update --init
fatal: detected dubious ownership in repository at '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/3rdparty/gst-python'
To add an exception for this directory, call:

        git config --global --add safe.directory /opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/3rdparty/gst-python
Unable to find current revision in submodule path '3rdparty/gst-python'
root@H360-0029:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps#

I have some progress to report.

Regarding git, I executed git config --global --add safe.directory as instructed, and the issue was resolved.

Next, following the steps in 2.3 Installing Gst-python, I executed
./autogen.sh
and encountered the following error:

checking for python script directory... ${prefix}/lib/python2.7/dist-packages
checking for python extension module directory... ${exec_prefix}/lib/python2.7/dist-packages
checking for python >= 2.7... checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for GST... yes
checking for PYGOBJECT... configure: error: Package requirements (pygobject-3.0 >= 3.8) were not met:

No package ‘pygobject-3.0’ found

The detailed log is here (8.3 KB)

DS 6.0 uses GStreamer 1.14; you can use the gst-python 1.14 branch.

We are implementing this using gst-python 1.14.

From the error, it seems pygobject-3.0 is needed. Is this link helpful?

I was able to avoid the pygobject-3.0 error using the method you suggested.
After some trial and error, I confirmed that sending messages every frame now works.
Thank you very much!