Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version (comes with DeepStream 7.1 container)
• NVIDIA GPU Driver Version (valid for GPU only) 570.172.08
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Description
I am trying to attach a custom C++ structure (TinSourceInfo) to a buffer so that a modified version of nvmsgconv can produce a rich JSON payload. My goal is to have the custom fields (sensorId, sourceUri) available in the message converter logic.
I have implemented custom Python bindings for this struct. However, I am encountering a SIGSEGV (Segmentation Fault) inside my release_tin_source_info function when the pipeline attempts to clear the metadata.
Debugging Journey
Initially, I suspected a double-free issue. However, after debugging with GDB, I observed that the release function is called only once per object. The backtrace shows the crash occurs exactly at the free() call inside my custom Janitor function.
GDB Backtrace Snippet:
Thread 64 "msg-queue:src" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fff81762640 (LWP 13912)] 0x00007ffff7cf33fe in free () from /lib/x86_64-linux-gnu/libc.so.6 (gdb) bt #0 0x00007ffff7cf33fe in free () at /lib/x86_64-linux-gnu/libc.so.6 #1 0x00007ffff4801c19 in pydeepstream::release_tin_source_info(void*, void*) () at /usr/local/lib/python3.10/dist-packages/pyds.so #2 0x00007ffff40e765b in nvds_clear_meta_list () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so #3 0x00007ffff40e7727 in release_frame_meta () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so #4 0x00007ffff40e6ffd in nvds_destroy_meta_pool () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so #5 0x00007ffff40e5da5 in nvds_destroy_batch_meta () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so #6 0x00007ffff6e50be8 in gst_buffer_foreach_meta () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #7 0x00007ffff6e5612c in gst_buffer_pool_release_buffer () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #8 0x00007ffff6e56218 in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #9 0x00007ffff6e88cc5 in gst_mini_object_unref () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #10 0x00007fffed39f1f3 in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0 #11 0x00007fffed3cd258 in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0 #12 0x00007fffed39c4f0 in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0 #13 0x00007ffff6e9186d in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #14 0x00007ffff6e94e09 in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #15 0x00007ffff6e9522e in gst_pad_push () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #16 0x00007fffed3aa21f in () at /lib/x86_64-linux-gnu/libgstbase-1.0.so.0 #17 0x00007ffff6e9186d in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #18 0x00007ffff6e94e09 in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #19 0x00007ffff6e9522e in gst_pad_push () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #20 0x00007fffd01ab875 in () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstcoreelements.so #21 0x00007ffff6ebc1d7 in () at /lib/x86_64-linux-gnu/libgstreamer-1.0.so.0 #22 0x00007ffff760e384 in g_thread_pool_thread_proxy (data=<optimized out>) at ../glib/gthreadpool.c:350 #23 0x00007ffff760dac1 in g_thread_proxy (data=0x7fffac001380) at ../glib/gthread.c:831 #24 0x00007ffff7ce2ac3 in () at /lib/x86_64-linux-gnu/libc.so.6 #25 0x00007ffff7d74850 in () at /lib/x86_64-linux-gnu/libc.so.6
I then added breakpoint commands to count the calls:
(gdb) break pydeepstream::release_tin_source_info
Breakpoint 1 at 0x7ffff4801bf0
(gdb) commands
Type commands for breakpoint(s) 1, one per line.
End with a line saying just "end".
>silent
>set $cnt = $cnt + 1
>printf "\n[CALL %d] release_tin_source_info called, ptr=%p\n", $cnt, $rdi
>bt 4
>continue
>end
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
This confirmed that the function is called only once before the crash:
Starting program: /usr/bin/python3 main_c.py -i file:///videos/video2.mp4 -c /configs/ds_mainapp_config.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffe95ff640 (LWP 14232)]
============================================================
DEEPSTREAM PIPELINE STARTING
============================================================
[INIT] GStreamer initialized
[INIT] Pipeline created
============================================================
[INIT] Constructing pipeline for 1 streams
============================================================
[CREATE] Creating nvmultiurisrcbin...
[CONFIG] Set 1 stream URIs
[CONFIG] REST API enabled on port 9003
[CREATE] Creating primary inference engine...
[New Thread 0x7fffd35ff640 (LWP 14233)]
[CONFIG] Inference config: /configs/ds_mainapp_config.txt, batch-size: 1
[CREATE] Creating tiler...
[CONFIG] Tiler: 4x5 grid, 1920x1080
[CREATE] Creating video converter...
[CREATE] Creating OSD...
[CREATE] Creating messaging elements...
[CREATE] Creating H264 encoder...
Is it Integrated GPU? : 0
[CREATE] Creating RTP payloader...
[CREATE] Creating UDP sink...
[PIPELINE] Adding elements to pipeline...
[PIPELINE] All elements added successfully
[PIPELINE] Linking elements...
source_bin -> pgie
pgie -> tiler
tiler -> nvvidconv
nvvidconv -> nvosd
nvosd -> tee
/app/main_c.py:420: DeprecationWarning: Gst.Element.get_request_pad is deprecated
  tee_render_pad = tee.get_request_pad('src_%u')
tee -> render_queue -> encoder -> rtppay -> sink
tee -> msg_queue -> msgconv -> msgbroker
[PIPELINE] All elements linked successfully
[PROBES] Installing buffer probes...
PGIE sink pad probe installed for FPS/SourceID display
PGIE source pad probe installed
Tiler source pad probe installed
OSD sink pad probe installed (detailed analytics)
[INIT] Event loop and bus watch configured
[RTSP] Setting up RTSP server...
============================================================
*** Launched RTSP at rtsp://localhost:8555/ds-test ***
============================================================
[PIPELINE] Setting pipeline state to PLAYING...
[New Thread 0x7fffc49b0640 (LWP 14234)]
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending CONNECT
[New Thread 0x7fffc41af640 (LWP 14235)]
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 received CONNACK (0)
mqtt connection success; ready to send data
Failed to query video capabilities: Invalid argument
[New Thread 0x7fffc23ff640 (LWP 14236)]
[New Thread 0x7fffc1bfe640 (LWP 14237)]
[New Thread 0x7fffc13fd640 (LWP 14238)]
[New Thread 0x7fffc0bfc640 (LWP 14239)]
[New Thread 0x7fffc38b9640 (LWP 14240)]
[New Thread 0x7fffc389f640 (LWP 14241)]
[New Thread 0x7fffc3885640 (LWP 14242)]
[New Thread 0x7fffc386b640 (LWP 14243)]
[New Thread 0x7fffc35ff640 (LWP 14244)]
[New Thread 0x7fffc35e5640 (LWP 14245)]
[New Thread 0x7fffc03fb640 (LWP 14246)]
[New Thread 0x7fffc03e1640 (LWP 14247)]
[New Thread 0x7fffc03c7640 (LWP 14248)]
[New Thread 0x7fffc03ad640 (LWP 14249)]
[New Thread 0x7fffc0393640 (LWP 14250)]
[New Thread 0x7fffc0379640 (LWP 14251)]
[New Thread 0x7fffc035f640 (LWP 14252)]
[New Thread 0x7fffc0345640 (LWP 14253)]
[New Thread 0x7fffc032b640 (LWP 14254)]
[New Thread 0x7fffc0311640 (LWP 14255)]
[New Thread 0x7fffc02f7640 (LWP 14256)]
[New Thread 0x7fffc02dd640 (LWP 14257)]
[New Thread 0x7fffc02c3640 (LWP 14258)]
[New Thread 0x7fffc02a9640 (LWP 14259)]
[New Thread 0x7fffc028f640 (LWP 14260)]
[New Thread 0x7fffc0275640 (LWP 14261)]
[New Thread 0x7fffc025b640 (LWP 14262)]
[New Thread 0x7fffc0241640 (LWP 14263)]
[New Thread 0x7fffc0227640 (LWP 14264)]
[New Thread 0x7fffc020d640 (LWP 14265)]
[New Thread 0x7fffc01f3640 (LWP 14266)]
[New Thread 0x7fffc01d9640 (LWP 14267)]
[New Thread 0x7fffc01bf640 (LWP 14268)]
[New Thread 0x7fffc01a5640 (LWP 14269)]
[New Thread 0x7fffc018b640 (LWP 14270)]
[New Thread 0x7fffc0171640 (LWP 14271)]
[New Thread 0x7fffc0157640 (LWP 14272)]
[New Thread 0x7fffc013d640 (LWP 14273)]
[New Thread 0x7fffc0123640 (LWP 14274)]
[New Thread 0x7fffc0109640 (LWP 14275)]
[New Thread 0x7fffc00ef640 (LWP 14276)]
[New Thread 0x7fffc00d5640 (LWP 14277)]
[New Thread 0x7fffc00bb640 (LWP 14278)]
[New Thread 0x7fffc00a1640 (LWP 14279)]
[New Thread 0x7fffc0087640 (LWP 14280)]
[New Thread 0x7fffc006d640 (LWP 14281)]
[New Thread 0x7fffc0053640 (LWP 14282)]
[New Thread 0x7fffc0039640 (LWP 14283)]
[New Thread 0x7fffc001f640 (LWP 14284)]
[New Thread 0x7ffefbfff640 (LWP 14285)]
[New Thread 0x7ffefbfe5640 (LWP 14286)]
[New Thread 0x7ffefbfcb640 (LWP 14287)]
[New Thread 0x7ffefbfb1640 (LWP 14288)]
[New Thread 0x7ffefbf97640 (LWP 14289)]
[New Thread 0x7ffefbf7d640 (LWP 14290)]
[New Thread 0x7ffefbf63640 (LWP 14291)]
Civetweb version: v1.16
Server running at port: 9003
[New Thread 0x7ffefbf49640 (LWP 14292)]
[New Thread 0x7ffefb748640 (LWP 14293)]
Unknown group common
Unknown group person
Unknown group vehicle
[New Thread 0x7ffef9765640 (LWP 14294)]
[New Thread 0x7ffef8f64640 (LWP 14295)]
0:00:02.763268001 14217 0x555557985fa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/models/peoplenet/resnet34_peoplenet_int8_b20_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:327 [FullDims Engine Info]: layers num: 3
0 INPUT  kFLOAT input_1:0              3x544x960  min: 1x3x544x960  opt: 20x3x544x960  Max: 20x3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0   3x34x60    min: 0            opt: 0             Max: 0
2 OUTPUT kFLOAT output_bbox/BiasAdd:0  12x34x60   min: 0            opt: 0             Max: 0
0:00:02.763365249 14217 0x555557985fa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /models/peoplenet/resnet34_peoplenet_int8_b20_gpu0_int8.engine
[New Thread 0x7ffed9fff640 (LWP 14296)]
[New Thread 0x7ffed97fe640 (LWP 14297)]
[New Thread 0x7ffed8ffd640 (LWP 14298)]
0:00:02.776519501 14217 0x555557985fa0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/configs/ds_mainapp_config.txt sucessfully
[New Thread 0x7ffec1fff640 (LWP 14299)]
[PIPELINE] ✓ Pipeline is now PLAYING
============================================================
PIPELINE RUNNING - Monitor logs below
Press Ctrl+C to stop
============================================================
[BUS] Stream Added: SourceID=0 -> SensorID=CAM0
[New Thread 0x7ffec17fe640 (LWP 14300)]
[New Thread 0x7ffec0ffd640 (LWP 14301)]
[New Thread 0x7ffe8ffff640 (LWP 14302)]
Warning: gst-stream-error-quark: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)12100000000000000000000000000000, rate=(int)44100, channels=(int)2'. (6): ../gst/playback/gsturidecodebin.c(960): unknown_type_cb (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:src-bin/GstBin:src-bin_creator/GstDsNvUriSrcBin:dsnvurisrcbin0/GstURIDecodeBin:nvurisrc_bin_src_elem
[Thread 0x7fffc0bfc640 (LWP 14239) exited]
Failed to query video capabilities: Invalid argument
[New Thread 0x7fffc0bfc640 (LWP 14303)]
[New Thread 0x7ffe8f7fe640 (LWP 14304)]
[New Thread 0x7ffe8effd640 (LWP 14305)]
mimetype is video/x-raw
[New Thread 0x7ffe8e7fc640 (LWP 14306)]
[New Thread 0x7ffe8dffb640 (LWP 14307)]
[New Thread 0x7ffe8d7fa640 (LWP 14308)]
[New Thread 0x7ffe8cff9640 (LWP 14309)]
[PGIE] Primary inference started processing frames
[New Thread 0x7ffe75b4d640 (LWP 14310)]
[Thread 0x7ffe75b4d640 (LWP 14310) exited]
[New Thread 0x7ffe75b4d640 (LWP 14311)]
[Thread 0x7ffe75b4d640 (LWP 14311) exited]
[New Thread 0x7ffe75b4d640 (LWP 14312)]
[Thread 0x7ffe75b4d640 (LWP 14312) exited]
[New Thread 0x7ffe75b4d640 (LWP 14313)]
[New Thread 0x7ffe7534c640 (LWP 14314)]
[Thread 0x7ffe75b4d640 (LWP 14313) exited]
[Thread 0x7ffe7534c640 (LWP 14314) exited]
[New Thread 0x7ffe7534c640 (LWP 14315)]
[New Thread 0x7ffe75b4d640 (LWP 14316)]
[Thread 0x7ffe7534c640 (LWP 14315) exited]
[Thread 0x7ffe75b4d640 (LWP 14316) exited]
[New Thread 0x7ffe75b4d640 (LWP 14317)]
[New Thread 0x7ffe7534c640 (LWP 14318)]
[Thread 0x7ffe75b4d640 (LWP 14317) exited]
[Thread 0x7ffe7534c640 (LWP 14318) exited]
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[New Thread 0x7ffe7534c640 (LWP 14319)]
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[New Thread 0x7ffe75b4d640 (LWP 14320)]
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m1, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m2, 'pickleball/detections', ... (1578 bytes))
Publish callback with reason code: Success.
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m3, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m4, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m5, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m6, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m7, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m8, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m9, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m10, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m11, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m12, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m13, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m14, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m15, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m16, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m17, 'pickleball/detections', ... (1578 bytes))
[mosq_mqtt_log_callback] Client tin_ds_multi_cam_01 sending PUBLISH (d0, q0, r0, m18, 'pickleball/detections', ... (1578 bytes))
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
Publish callback with reason code: Success.
Publish callback with reason code: Success.
Publish callback with reason code: Success.
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[DEBUG-C++] ENTERED nvds_msg2p_generate_new | PayloadType: 257
DEBUG: Inside generate_dsmeta_message_custom
[Switching to Thread 0x7ffef9765640 (LWP 14294)]
[CALL 1] release_tin_source_info called, ptr=0x7ffeb80074e0
#0  0x00007ffff4801bf0 in pydeepstream::release_tin_source_info(void*, void*) () at /usr/local/lib/python3.10/dist-packages/pyds.so
#1  0x00007ffff40e765b in nvds_clear_meta_list () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
#2  0x00007ffff40e7727 in release_frame_meta () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
#3  0x00007ffff40e6ffd in nvds_destroy_meta_pool () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so

Thread 64 "msg-queue:src" received signal SIGSEGV, Segmentation fault.
0x00007ffff7cf33fe in free () from /lib/x86_64-linux-gnu/libc.so.6
I am currently mixing g_malloc0()/g_strdup() for allocation with standard free() for deallocation in my bindings, following patterns I found in the documentation. I also suspect there may be a conflict between how NvDsEventMsgMeta and NvDsUserMeta manage the lifecycle of the user_meta_data pointer when they are nested.
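To make the suspected mismatch concrete: GLib requires that memory obtained from g_malloc0()/g_strdup() be returned with g_free(); free() is only guaranteed to match malloc-family allocations. A minimal standalone illustration of the pairing rule (not code from my pipeline):

#include <glib.h>

// Allocator pairing rule: GLib allocations must be released by g_free().
void allocator_pairing_demo(void) {
    gchar *sensor_id = g_strdup("CAM0");        // allocated by GLib
    g_free(sensor_id);                          // correct: matched pair

    gchar *source_uri = g_strdup("file:///videos/video2.mp4");
    // free(source_uri);                        // WRONG: mixing allocators is
    //                                          // undefined behavior and can
    //                                          // crash inside libc free()
    g_free(source_uri);
}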
Implementation Details
1. Custom Binding (DeepStream Python bindings)
File: src/custom_binding/include/tinsource.h:
#ifndef _TINSOURCE_H_
#define _TINSOURCE_H_

struct TinSourceInfo {
    char* sensorId;
    char* sourceUri;
    int sourceId;
};

#endif
File: src/custom_binding/bindtinsource.cpp
#include "bind_string_property_definitions.h"
#include "include/bindtinsource.hpp"
namespace py = pybind11;
namespace pydeepstream {
// Lifecycle: Copy function for Gst Meta deep copy
void * copy_tin_source_info(void* data, void* user_data) {
NvDsUserMeta *srcMeta = (NvDsUserMeta*) data;
TinSourceInfo *srcData = (TinSourceInfo *) srcMeta->user_meta_data;
TinSourceInfo *destData = (TinSourceInfo *) g_malloc0(sizeof(TinSourceInfo));
destData->sourceId = srcData->sourceId;
if (srcData->sensorId) destData->sensorId = g_strdup(srcData->sensorId);
if (srcData->sourceUri) destData->sourceUri = g_strdup(srcData->sourceUri);
return destData;
}
// Lifecycle: Release function (The Janitor)
void release_tin_source_info(void * data, void * user_data) {
NvDsUserMeta *srcMeta = (NvDsUserMeta*) data;
if (srcMeta != nullptr && srcMeta->user_meta_data != nullptr) {
TinSourceInfo *srcData = (TinSourceInfo *) srcMeta->user_meta_data;
if (srcData->sensorId) free(srcData->sensorId);
if (srcData->sourceUri) free(srcData->sourceUri);
g_free(srcData);
srcMeta->user_meta_data = nullptr;
}
}
void bindtinsource(py::module &m) {
py::class_<TinSourceInfo>(m, "TinSourceInfo", pydsdoc::tinsourcedoc::TinSourceInfoDoc::descr)
.def(py::init<>())
// STRING_FREE_EXISTING handles the free() of old char* when new string assigned
.def_property("sensorId", STRING_FREE_EXISTING(TinSourceInfo, sensorId))
.def_property("sourceUri", STRING_FREE_EXISTING(TinSourceInfo, sourceUri))
.def_readwrite("sourceId", &TinSourceInfo::sourceId)
.def("cast", [](void *data) {
return (TinSourceInfo *) data;
}, py::return_value_policy::reference, pydsdoc::tinsourcedoc::TinSourceInfoDoc::cast);
// Function added to pyds module
m.def("alloc_tin_source_info", [](NvDsUserMeta *meta) {
auto *mem = (TinSourceInfo *) g_malloc0(sizeof(TinSourceInfo));
meta->base_meta.copy_func = (NvDsMetaCopyFunc) pydeepstream::copy_tin_source_info;
meta->base_meta.release_func = (NvDsMetaReleaseFunc) pydeepstream::release_tin_source_info;
return mem;
}, py::return_value_policy::reference, "Allocate TinSourceInfo struct and register to UserMeta");
}
}
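If the allocator mismatch alone explains the crash, the Janitor would need g_free() for anything g_strdup()'d. Below is my untested sketch of that fix. One caveat I am unsure about: strings assigned from Python via STRING_FREE_EXISTING may be created with plain strdup() rather than g_strdup(), which is exactly my question 1 below. (And per my UPDATE further down, user_meta_data may not even point at a TinSourceInfo here, so this may not be the whole story.)

// Untested sketch: release with matched GLib pairs, assuming the
// strings were allocated with g_strdup() as in my copy function.
void release_tin_source_info_fixed(void *data, void *user_data) {
    NvDsUserMeta *srcMeta = (NvDsUserMeta *) data;
    if (srcMeta == nullptr || srcMeta->user_meta_data == nullptr)
        return;
    TinSourceInfo *srcData = (TinSourceInfo *) srcMeta->user_meta_data;
    g_free(srcData->sensorId);   // g_strdup() -> g_free(); g_free(NULL) is a no-op
    g_free(srcData->sourceUri);  // g_strdup() -> g_free()
    g_free(srcData);             // g_malloc0() -> g_free()
    srcMeta->user_meta_data = nullptr;
}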
2. Metadata Attachment (Python Probe)
File: main_app.py
def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer: return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # 1. Acquire the standard Event Container
        user_event_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
        if user_event_meta:
            # alloc_nvds_event_msg_meta automatically sets up memory management
            msg_meta = pyds.alloc_nvds_event_msg_meta(user_event_meta)
            # 2. Set the "Fingerprint" so C++ knows what is inside extMsg
            msg_meta.objType = pyds.NvDsObjectType.NVDS_OBJECT_TYPE_CUSTOM
            msg_meta.objClassId = TIN_SOURCE_SUBTYPE_ID
            # 3. Populate your custom fields
            camera_info = source_to_sensor_map.get(frame_meta.source_id, {})
            custom_payload = pyds.alloc_tin_source_info(user_event_meta)
            custom_payload.sensorId = camera_info.get("sensor_id", "Unknown")
            custom_payload.sourceUri = camera_info.get("uri", "Unknown")
            custom_payload.sourceId = int(frame_meta.source_id)
            # 4. Enclose it
            msg_meta.extMsg = custom_payload
            msg_meta.extMsgSize = sys.getsizeof(pyds.TinSourceInfo)
            # 5. Attach the container to the frame
            user_event_meta.user_meta_data = msg_meta
            user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
            pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
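One thing I already doubt in the probe above: sys.getsizeof(pyds.TinSourceInfo) measures the Python type object, not the C struct, so extMsgSize is almost certainly wrong. Since I build the bindings myself, I could expose the real size from C++ (my own hypothetical helper, not a stock pyds API):

// In bindtinsource(): export the true C-struct size so the Python probe
// can set msg_meta.extMsgSize = pyds.sizeof_tin_source_info().
m.def("sizeof_tin_source_info", []() {
    return sizeof(TinSourceInfo);
});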
3. Custom Payload Generation (C++ Library Mod)
File: dsmeta_payload.cpp (Modifying nvmsgconv)
gchar*
generate_dsmeta_message_custom(void* privData, void* frameMeta)
{
    if (!frameMeta) return NULL;
    NvDsFrameMeta* frame_meta = (NvDsFrameMeta*)frameMeta;
    JsonNode* rootNode = NULL;
    JsonObject* rootObj = json_object_new(); // Create root early for easier cleanup
    JsonArray* objectsArray = json_array_new();
    stringstream ss;

    // 1. Calculate Scale Factors
    float scaleW = (frame_meta->pipeline_width == 0) ? 1.0f : (float)frame_meta->source_frame_width / frame_meta->pipeline_width;
    float scaleH = (frame_meta->pipeline_height == 0) ? 1.0f : (float)frame_meta->source_frame_height / frame_meta->pipeline_height;

    // 2. Object Loop (Optimized Stringstream)
    for (NvDsObjectMetaList* obj_l = frame_meta->obj_meta_list; obj_l; obj_l = obj_l->next) {
        NvDsObjectMeta* obj_meta = (NvDsObjectMeta*)obj_l->data;
        if (!obj_meta) continue;
        ss.str(""); ss.clear();
        float left = obj_meta->rect_params.left * scaleW;
        float top = obj_meta->rect_params.top * scaleH;
        ss << obj_meta->object_id << "|" << left << "|" << top << "|"
           << (left + (obj_meta->rect_params.width * scaleW)) << "|"
           << (top + (obj_meta->rect_params.height * scaleH)) << "|"
           << (obj_meta->obj_label ? obj_meta->obj_label : "N/A") << "|"
           << obj_meta->confidence;
        json_array_add_string_element(objectsArray, ss.str().c_str());
    }

    // 3. Root Assembly (objectsArray is now owned by rootObj)
    json_object_set_string_member(rootObj, "version", "4.0");
    json_object_set_int_member(rootObj, "frame_num", (gint64)frame_meta->frame_num);
    char ts[MAX_TIME_STAMP_LEN + 1];
    generate_ts_rfc3339(ts, MAX_TIME_STAMP_LEN);
    json_object_set_string_member(rootObj, "@timestamp", ts);
    json_object_set_array_member(rootObj, "objects", objectsArray);

    // 4. Safe Heartbeat Unboxing (Strict Size Validation)
    for (NvDsUserMetaList* l = frame_meta->frame_user_meta_list; l; l = l->next) {
        NvDsUserMeta* user_meta = (NvDsUserMeta*)l->data;
        if (user_meta && user_meta->base_meta.meta_type == NVDS_EVENT_MSG_META) {
            NvDsEventMsgMeta* msg_meta = (NvDsEventMsgMeta*)user_meta->user_meta_data;
            if (msg_meta && msg_meta->objClassId == 1001 && msg_meta->extMsg
                    && msg_meta->extMsgSize >= sizeof(TinSourceInfo)) {
                TinSourceInfo* info = (TinSourceInfo*)msg_meta->extMsg;
                JsonObject* sourceDetails = json_object_new();
                json_object_set_int_member(sourceDetails, "sourceId", info->sourceId);
                json_object_set_string_member(sourceDetails, "sensorId", info->sensorId ? info->sensorId : "unknown");
                json_object_set_string_member(sourceDetails, "uri", info->sourceUri ? info->sourceUri : "unknown");
                json_object_set_object_member(rootObj, "sourceDetails", sourceDetails);
                break;
            }
        }
    }

    // 5. Serialization
    rootNode = json_node_new(JSON_NODE_OBJECT);
    json_node_set_object(rootNode, rootObj);
    gchar* message = json_to_string(rootNode, TRUE);

    // 6. Final Cleanup
    json_node_free(rootNode);
    json_object_unref(rootObj);
    return message;
}
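A side note on ownership here: json_to_string() returns GLib-allocated memory, so whoever consumes the returned message must release it with g_free(). The hand-off in my modified nvds_msg2p_generate_new looks roughly like this (simplified sketch; g_memdup2 assumes GLib >= 2.68):

// Simplified caller sketch: copy the JSON into the NvDsPayload, then
// release the GLib-allocated string with g_free(), not free().
gchar* message = generate_dsmeta_message_custom(privData, frame_meta);
if (message) {
    payload->payloadSize = strlen(message);
    payload->payload = g_memdup2(message, payload->payloadSize + 1); // keep the NUL
    g_free(message);  // matched pair: json_to_string() -> g_free()
}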
My questions:

1. Allocator Mismatch: Should I be strictly using g_free for strings allocated via the Python bindings using STRING_FREE_EXISTING, or is free() correct here?

2. Conflict in release_func: When I call pyds.alloc_nvds_event_msg_meta(user_event_meta) and pyds.alloc_tin_source_info(user_event_meta), are they both trying to set the release_func for the same user_meta container? Does the second call overwrite the first?

3. Size Validation: Is sys.getsizeof(pyds.TinSourceInfo) the correct way to pass the size to extMsgSize, or is there a specific way to get the C-struct size in the Python bindings?

4. Best Practice: What is the recommended way to attach a custom struct specifically for use within extMsg of an NvDsEventMsgMeta to ensure the lifecycle is managed correctly without crashing during cleanup?
I am currently working on a critical project and have encountered this memory issue. I’m in a bit of a risky situation with a deadline, and I’m reaching out in hopes that someone might be able to provide some insight, even though it’s the weekend. I would truly appreciate any guidance you can offer. Thanks.
UPDATE:
I observed that the alloc_nvds_vehicle_object() style functions in pyds do not take an NvDsUserMeta parameter, whereas the official binding tutorials for custom structs suggest: m.def("alloc_custom_struct", [](NvDsUserMeta *meta) { ... })
By passing the meta pointer, the allocator registers a custom release_func. However, if I then use pyds.alloc_nvds_event_msg_meta(user_event_meta), the container’s release function is overwritten. When I nest my custom struct in extMsg, I believe I am creating a “Type Confusion” crash because the Janitor (release function) is being called with an NvDsEventMsgMeta pointer but is casting it to my custom struct.
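To spell out the type confusion I suspect, this is what I believe my Janitor effectively does at teardown (an annotated illustration of the hypothesis, not new code):

// Hypothesis: the container's release_func fires with user_meta_data
// pointing at the NvDsEventMsgMeta (the probe set
// user_event_meta.user_meta_data = msg_meta), NOT at my TinSourceInfo.
void release_tin_source_info(void *data, void *user_data) {
    NvDsUserMeta *srcMeta = (NvDsUserMeta *) data;
    TinSourceInfo *srcData = (TinSourceInfo *) srcMeta->user_meta_data;
    // "sensorId"/"sourceUri" now alias the leading fields of
    // NvDsEventMsgMeta, so these are garbage pointers -> SIGSEGV inside
    // libc free(), matching the backtrace above.
    if (srcData->sensorId) free(srcData->sensorId);
    if (srcData->sourceUri) free(srcData->sourceUri);
    g_free(srcData);
    srcMeta->user_meta_data = nullptr;
}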
I am seeking the “Best Fit” for a production pipeline where msg2p-newapi = true. I see three paths and need to know which is the “NVIDIA-intended” standard:
Path A: The “New API” Blob (NVDS_CUSTOM_MSG_BLOB)

- Implementation: Attach a JSON string directly via NvDsCustomMsgInfo to the frame meta (see my sketch after Path C below).
- Pros: Simplest lifecycle; no “Janitor” conflict.
- Cons: The default minimal schema wraps this in a customMessage array. To get my desired top-level JSON structure, I must modify dsmeta_payload.cpp to parse the blob and remount it. Is modifying the lib to re-mount blobs considered bad practice?

Path B: The “Old API” Nesting (extMsg)

- Implementation: Use NvDsEventMsgMeta and nest a custom C struct in extMsg.
- Pros: Fits the “Event” paradigm.
- Cons: Requires a “Double Janitor” (a custom release for the nested struct plus the EventMsg release). How should I register the release function for the nested struct without hijacking the UserMeta container?

Path C: The “Deep Mod” (Custom NvDsEventMsgMeta)

- Implementation: Replace the standard NvDsEventMsgMeta with a fully custom type in my Python bindings and modify the C++ library to handle this new type directly.
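For concreteness on Path A, this is my reading of the blob attachment from the deepstream-test4 newapi sample (C-style sketch; NvDsCustomMsgInfo fields as I understand nvdsmeta_schema.h, so please correct me if I have them wrong):

// Path A sketch: attach a ready-made JSON string as a custom blob.
// Everything is GLib-allocated, so one copy/release pair owns the whole
// lifecycle and there is no nested "Janitor" conflict.
static gpointer blob_copy(gpointer data, gpointer user_data) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
    NvDsCustomMsgInfo *src = (NvDsCustomMsgInfo *) user_meta->user_meta_data;
    NvDsCustomMsgInfo *dst = (NvDsCustomMsgInfo *) g_malloc0(sizeof(NvDsCustomMsgInfo));
    dst->message = g_strndup((const gchar *) src->message, src->size);
    dst->size = src->size;
    return dst;
}

static void blob_release(gpointer data, gpointer user_data) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
    NvDsCustomMsgInfo *blob = (NvDsCustomMsgInfo *) user_meta->user_meta_data;
    g_free(blob->message);
    g_free(blob);
    user_meta->user_meta_data = NULL;
}

// In a probe: wrap the JSON (json_str is my payload) and attach it.
NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
NvDsCustomMsgInfo *blob = (NvDsCustomMsgInfo *) g_malloc0(sizeof(NvDsCustomMsgInfo));
blob->message = g_strdup(json_str);
blob->size = strlen(json_str);
user_meta->user_meta_data = blob;
user_meta->base_meta.meta_type = NVDS_CUSTOM_MSG_BLOB;
user_meta->base_meta.copy_func = (NvDsMetaCopyFunc) blob_copy;
user_meta->base_meta.release_func = (NvDsMetaReleaseFunc) blob_release;
nvds_add_user_meta_to_frame(frame_meta, user_meta);

If that is the intended pattern, it keeps a single owner per UserMeta, which would sidestep the Double Janitor problem of Path B, at the cost of the dsmeta_payload.cpp remounting mentioned above.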
Specific Questions Now
1. When to use each? In a “New API” context, is NVDS_CUSTOM_MSG_BLOB the official replacement for NvDsEventMsgMeta for custom payloads? When should I use the traditional API (msg2p-newapi = false), and when should I use the new API (msg2p-newapi = true)?

2. Lifecycle Management: If I go the traditional way with extMsg, should my custom allocator avoid taking the NvDsUserMeta parameter so that it does not overwrite the container’s release function? I am confused about when to replace the user event meta with my custom type versus nesting my custom type inside extMsg.

3. Production Recommendation: For a project with high performance requirements, what is the most stable “DeepStream-native” way to pass custom sensor data to nvmsgbroker? By custom data I mean that I control the JSON response structure and can shape it however I want.