DeepStream 7.1 on Jetson (JetPack 6.2): Crash on Custom Metadata with Age Classifier (Double Free or Corruption)

• Hardware Platform (Jetson)
• DeepStream Version 7.1
• JetPack Version (6.2)
• Issue Type (Question)

Hello everyone,

I’m building a DeepStream Python pipeline based on the multi-input multi-output Python sample, with an integrated tracker. My objective is to run a GoogleNet age classifier ONNX model (Model Repo) on each face detected by a PeopleNet primary GIE. A gaze-estimation SGIE in the same pipeline works fine, but adding the age classifier causes a double free or corruption (out) crash at the point where I attach the custom age metadata.
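For context, the relevant branch of the pipeline is ordered as below (a minimal sketch; the element variable names and config file names are illustrative, not my exact code):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# PeopleNet PGIE (gie-unique-id=1) -> tracker -> nvdspreprocess (unique-id=7)
# -> age SGIE (gie-unique-id=5, operating on the face class) -> OSD
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
preprocess = Gst.ElementFactory.make("nvdspreprocess", "age-preprocess")
sgie_age = Gst.ElementFactory.make("nvinfer", "age-inference")

preprocess.set_property("config-file", "config_preprocess_age.txt")  # file below
sgie_age.set_property("config-file-path", "config_sgie_age.txt")     # file below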

This is the preprocess config file:

# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary

[property]
enable=1
target-unique-ids=
operate-on-gie-id=1
network-input-order=0
process-on-frame=0
unique-id=7
gpu-id=0
maintain-aspect-ratio=1
symmetric-padding=1
processing-width=224
processing-height=224
scaling-buf-pool-size=8
tensor-buf-pool-size=8
network-input-shape=1;3;224;224
network-color-format=1
tensor-data-type=0
tensor-name=input
scaling-pool-memory-type=0
scaling-pool-compute-hw=1
scaling-filter=1

custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
# Mean values to subtract (BGR)
offsets=104.0;117.0;123.0
pixel-normalization-factor=1.0;1.0;1.0

[group-0]
src-ids=0
operate-on-class-ids=2
process-on-all-objects=1
custom-input-transformation-function=CustomAsyncTransformation
input-object-min-width=50
input-object-min-height=50
input-object-max-width=1000
input-object-max-height=1000
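
For reference, with network-color-format=1 (BGR), offsets=104;117;123 and a unit pixel-normalization-factor, this preprocessing amounts to per-channel mean subtraction. A minimal numpy sketch of the equivalent math (illustrative only, not the plugin code):

import numpy as np

def normalize_bgr(patch):
    # out = (pixel - offset) * factor, per BGR channel, as configured above
    offsets = np.array([104.0, 117.0, 123.0], dtype=np.float32)  # B, G, R means
    factor = 1.0  # pixel-normalization-factor
    return (patch.astype(np.float32) - offsets) * factor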

and the model config file:

[property]
gpu-id=0
batch-size=1
network-type=1
network-mode=0
onnx-file=/opt/nvidia/deepstream/deepstream-7.1/solipsis/AdGaze/jetson/models/age_gender/age_googlenet.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-7.1/solipsis/AdGaze/jetson/models/age_gender/age_googlenet.onnx_b1_gpu0_fp32.engine
process-mode=2
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=2
output-tensor-meta=1
input-tensor-from-meta=1
offsets=104.0;117.0;123.0

[class-attrs-all]
threshold=0.0

This is the probe on the age SGIE’s src pad, where I read the output tensor meta and attach the custom age metadata:

def sgie_age_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    pyds.nvds_acquire_meta_lock(batch_meta)

    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list

        while l_obj:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)

            if obj_meta.class_id == PGIE_CLASS_ID_FACE:
                l_user = obj_meta.obj_user_meta_list
                age_meta_added = False

                while l_user and not age_meta_added:
                    um = pyds.NvDsUserMeta.cast(l_user.data)
                    
                    # Look for the age tensor output meta
                    if um.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                        infer_meta = pyds.NvDsInferTensorMeta.cast(um.user_meta_data)
                        
                        # Process the age tensor
                        age = process_age_tensor(infer_meta)
                        
                        if age is not None:
                            # Acquire a user meta object from the pool
                            user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
                            payload = json.dumps({
                                "age": int(age)})
                            
                            # Create the custom data structure
                            data = pyds.alloc_custom_struct(user_meta)
                            data.message = payload
                            data.structId = frame_meta.frame_num
                            
                            # Configure the meta type
                            user_meta.user_meta_data = data
                            user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_USER_META
                            
                            # Attach to the object's metadata
                            pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)
                            age_meta_added = True

                    try:
                        l_user = l_user.next
                    except StopIteration:
                        break

            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    pyds.nvds_release_meta_lock(batch_meta)
    return Gst.PadProbeReturn.OK
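
For completeness, process_age_tensor follows the standard pyds tensor-access pattern. A condensed sketch (the single output layer and the Adience-style age buckets that age_googlenet was trained on are assumptions):

import ctypes
import numpy as np
import pyds

# Assumed bucket mapping: the 8 Adience age groups used by age_googlenet
AGE_BUCKETS = [(0, 2), (4, 6), (8, 12), (15, 20),
               (25, 32), (38, 43), (48, 53), (60, 100)]

def process_age_tensor(infer_meta):
    # Read the first output layer and return the midpoint of the most
    # likely age bucket, or None if the tensor is unavailable.
    if infer_meta.num_output_layers < 1:
        return None
    layer = pyds.get_nvds_LayerInfo(infer_meta, 0)
    ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
    scores = np.ctypeslib.as_array(ptr, shape=(len(AGE_BUCKETS),)).copy()
    lo, hi = AGE_BUCKETS[int(np.argmax(scores))]
    return (lo + hi) // 2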

Finally, in the On-Screen-Display probe, I access and read the custom user metadata to draw age and gaze info.

def osd_buffer_probe(pad, info, user_data):
    global gaze_memory, global_tracking, perf_data
    frame_number = 0
    num_rects = 0
    num_arrows = 0
    obj_counter = {
        PGIE_CLASS_ID_PERSON: 0,
        PGIE_CLASS_ID_BAG: 0,
        PGIE_CLASS_ID_FACE: 0
    }

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        stream_idx = frame_meta.pad_index
        current_frame = frame_meta.frame_num
        stream_id = f"stream{stream_idx}"

        fps = max(getattr(perf_data, 'get_fps', lambda x: 30.0)(stream_idx), 1.0)

        display_metas = []
        current_display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        current_display_meta.num_labels = 0
        display_metas.append(current_display_meta)

        l_obj = frame_meta.obj_meta_list

        while l_obj:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            object_id = obj_meta.object_id
            rect = obj_meta.rect_params

            pitch = yaw = age_value = None
            found_new = False

            l_umeta = obj_meta.obj_user_meta_list
            while l_umeta:
                try:
                    um = pyds.NvDsUserMeta.cast(l_umeta.data)
                    if um.base_meta.meta_type == pyds.NvDsMetaType.NVDS_USER_META:
                        custom = pyds.CustomDataStruct.cast(um.user_meta_data)
                        msg = pyds.get_string(custom.message)
                        data = json.loads(msg)
                        if 'pitch' in data and 'yaw' in data:
                            pitch, yaw = data['pitch'], data['yaw']
                            gaze_memory[object_id] = {'pitch': pitch, 'yaw': yaw, 'expiry': current_frame + 3}
                            found_new = True
                            draw_gaze_on_frame(batch_meta, obj_meta, pitch, yaw)
                            print(f"[OSD] found obj has pitch: {pitch:.2f} yaw: {yaw:.2f} during frame {current_frame}")
                        if 'age' in data:
                            age_value = int(data['age'])
                except Exception:
                    pass
                try:
                    l_umeta = l_umeta.next
                except StopIteration:
                    break

            if not found_new:
                rec = gaze_memory.get(object_id)
                if rec and rec['expiry'] >= current_frame:
                    pitch, yaw = rec['pitch'], rec['yaw']
                else:
                    l_obj = l_obj.next
                    continue

            if obj_meta.class_id == PGIE_CLASS_ID_PERSON:
                tracker_id = object_id
                tracking = global_tracking[stream_idx].setdefault(tracker_id, {
                    'start_frame': current_frame,
                    'last_frame': current_frame,
                    'last_update': time.time()
                })
                tracking['last_frame'] = current_frame
                tracking['last_update'] = time.time()

                duration_frames = current_frame - tracking['start_frame']
                duration_seconds = duration_frames / fps

                if current_display_meta.num_labels >= 16:
                    current_display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
                    current_display_meta.num_labels = 0
                    display_metas.append(current_display_meta)

                text = current_display_meta.text_params[current_display_meta.num_labels]
                text.display_text = f"Duration: {duration_seconds:.2f}s"
                text.x_offset = int(rect.left)
                text.y_offset = int(rect.top) + 50
                text.font_params.font_size = 10
                text.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
                text.set_bg_clr = 1
                text.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
                current_display_meta.num_labels += 1

            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
            display_meta.num_labels = 1
            display_meta.text_params[0].display_text = f"P:{pitch:.2f},Y:{yaw:.2f}"
            display_meta.text_params[0].x_offset = int(rect.left)
            display_meta.text_params[0].y_offset = int(rect.top) - 10
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

            if age_value is not None:
                age_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
                age_meta.num_labels = 1
                age_meta.text_params[0].display_text = f"Age: {age_value}"
                age_meta.text_params[0].x_offset = int(rect.left)
                age_meta.text_params[0].y_offset = int(rect.top) - 30
                age_meta.text_params[0].font_params.font_size = 10
                age_meta.text_params[0].font_params.font_color.set(1.0, 1.0, 0.5, 1.0)
                age_meta.text_params[0].set_bg_clr = 1
                age_meta.text_params[0].text_bg_clr.set(0.0, 0.0, 0.0, 0.7)
                pyds.nvds_add_display_meta_to_frame(frame_meta, age_meta)

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Attach the duration labels accumulated for this frame; acquired
        # display metas are only rendered once added to the frame meta.
        for dm in display_metas:
            if dm.num_labels > 0:
                pyds.nvds_add_display_meta_to_frame(frame_meta, dm)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

I am running this in the deepstream:7.1-triton-multiarch Docker container, after running the two shell scripts required for Python development. The container was created with this command:

docker run -it \
    --network=host \
    --runtime=nvidia \
    --privileged \
    --device=/dev/video0 \
    --device=/dev/video1 \
    -e DISPLAY=$DISPLAY \
    -w /opt/nvidia/deepstream/deepstream-7.1 \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream:7.1-triton-multiarch

I suspect the crash is related to either:

  • Improper allocation or reuse of user_meta_data memory (data.message, etc.)
  • Conflict between gaze and age metadata management
  • Not using a proper cleanup/release mechanism for custom structures
  • Container access limitations to hardware

I would really appreciate guidance or a working pattern to safely attach and display metadata from multiple sources.

Thanks in advance!

Please use the gdb tool to debug it first (see forum topic 334365, reply #3).

Here’s the backtrace:

#0  0x0000fffff7b6f200 in  () at /usr/lib/aarch64-linux-gnu/libc.so.6
#1  0x0000fffff7b2a67c in raise () at /usr/lib/aarch64-linux-gnu/libc.so.6
#2  0x0000fffff7b17130 in abort () at /usr/lib/aarch64-linux-gnu/libc.so.6
#3  0x0000fffff7b63308 in  () at /usr/lib/aarch64-linux-gnu/libc.so.6
#4  0x0000fffff7b7957c in  () at /usr/lib/aarch64-linux-gnu/libc.so.6
#5  0x0000fffff7b7b694 in  () at /usr/lib/aarch64-linux-gnu/libc.so.6
#6  0x0000fffff7b7dc84 in free () at /usr/lib/aarch64-linux-gnu/libc.so.6
#7  0x0000fffff3b53a18 in  () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
#8  0x0000fffff3b53094 [PAC] in nvds_destroy_batch_meta () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
#9  0x0000fffff6c7ddec [PAC] in gst_buffer_foreach_meta () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#10 0x0000fffff6c83918 in gst_buffer_pool_release_buffer () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#11 0x0000fffff6c83a40 in  () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#12 0x0000fffff6cbb7f8 in gst_mini_object_unref () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#13 0x0000ffffdd096470 in  () at /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_multistream.so
#14 0x0000fffff6cc4a78 [PAC] in  () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#15 0x0000fffff6cc7cb8 in  () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#16 0x0000fffff6cc80e8 in gst_pad_push () at /usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0
#17 0x0000ffffdcae784c in  () at /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#18 0x0000fffff7578064 in g_thread_proxy (data=0xaaaaad250070) at ../glib/gthread.c:831
#19 0x0000fffff7b6d5c8 in  () at /usr/lib/aarch64-linux-gnu/libc.so.6
#20 0x0000fffff7bd5edc in  () at /usr/lib/aarch64-linux-gnu/libc.so.6

From the crash info, it’s due to the first cause you suspected: improper allocation or reuse of the user_meta_data memory.

You can refer to our deepstream_custom_binding_test.py to learn how to add the USER_META.
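
The key detail in that sample is that the message string is assigned and then copied back through pyds.get_string() before the meta is attached, so the allocated struct owns its own copy of the string. Adapted to object-level meta, the pattern looks roughly like this:

user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
if user_meta:
    data = pyds.alloc_custom_struct(user_meta)
    data.message = payload
    # Copy the string as deepstream_custom_binding_test.py does, so the
    # struct owns the memory instead of pointing at a transient Python string.
    data.message = pyds.get_string(data.message)
    data.structId = frame_meta.frame_num
    user_meta.user_meta_data = data
    user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_USER_META
    pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)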

You can also refer to the FAQ to add that in the native plugin and access it in Python.

Thanks for the custom metadata recommendation; it was very helpful. The issue turned out to be this setting in the SGIE config file:

input-tensor-from-meta=1

With this setting, nvdspreprocess was generating image frames, while the SGIE was expecting a tensor meta. This mismatch caused the SGIE to access memory regions that don’t exist or are not properly initialized. The correct setting was:

input-tensor-from-meta=0

This makes the SGIE use the preprocessed image data, which aligns with what nvdspreprocess provides in my pipeline.

As always, thanks for your quick support and valuable insights. Your help made resolving this much faster and smoother.