Why can't I find the metadata I attached in subsequent probes?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 7.0
I need to use multiple appsrc elements to push images for inference, and metadata must be attached to each frame. However, after I push a frame and attach metadata (reported as successful) in the appsrc signal callback, I cannot find that metadata from probes added to the src or sink pad of any subsequent element in the pipeline. I am confused: even a probe at the exit of appsrc does not find the metadata.
My pipeline is as follows:

char pipeline_str[4096];
    snprintf(pipeline_str, sizeof(pipeline_str),
             "appsrc name=appsrc0 ! jpegdec name=jpegdec0 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_0 "
             "appsrc name=appsrc1 ! jpegdec name=jpegdec1 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_1 "
             "appsrc name=appsrc2 ! jpegdec name=jpegdec2 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_2 "
             "appsrc name=appsrc3 ! jpegdec name=jpegdec3 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_3 "
             "appsrc name=appsrc4 ! jpegdec name=jpegdec4 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_4 "
             "appsrc name=appsrc5 ! jpegdec name=jpegdec5 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_5 "
             "appsrc name=appsrc6 ! jpegdec name=jpegdec6 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_6 "
             "appsrc name=appsrc7 ! jpegdec name=jpegdec7 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_7 "
             "appsrc name=appsrc8 ! jpegdec name=jpegdec8 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_8 "
             "appsrc name=appsrc9 ! jpegdec name=jpegdec9 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_9 "
             "appsrc name=appsrc10 ! jpegdec name=jpegdec10 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_10 "
             "appsrc name=appsrc11 ! jpegdec name=jpegdec11 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_11 "
             "appsrc name=appsrc12 ! jpegdec name=jpegdec12 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_12 "
             "appsrc name=appsrc13 ! jpegdec name=jpegdec13 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_13 "
             "appsrc name=appsrc14 ! jpegdec name=jpegdec14 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_14 "
             "appsrc name=appsrc15 ! jpegdec name=jpegdec15 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_15 "
             "nvstreammux name=mux batch-size=8 width=640 height=640 live-source=0 ! "
             // "nvinfer name=infer config-file-path=%s ! nvdsosd name=osd ! nvvideoconvert ! video/x-raw,format=RGBA ! appsink name=sink",
             "nvinfer name=infer config-file-path=%s ! "
             // "fakesink name=sink sync=false",
             "nvvideoconvert ! "
             "nvdsosd name=osd display-text=1 ! "
             "nvv4l2h264enc bitrate=4000000 preset-level=1 ! "
             "h264parse ! "
             "matroskamux ! "
             "filesink location=output11.mkv",
             CONFIG_PATH);

    GError *err = NULL;
    pipeline = gst_parse_launch(pipeline_str, &err);
    if (!pipeline)
    {
        g_printerr("Pipeline creation failed: %s\n", err->message);
        return -1;
    }
    printf("GStreamer pipeline in use: %s\n", pipeline_str);
    appsrc0 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc0");
    appsrc1 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc1");
    appsrc2 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc2");
    appsrc3 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc3");
    appsrc4 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc4");
    appsrc5 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc5");
    appsrc6 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc6");
    appsrc7 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc7");
    appsrc8 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc8");
    appsrc9 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc9");
    appsrc10 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc10");
    appsrc11 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc11");
    appsrc12 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc12");
    appsrc13 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc13");
    appsrc14 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc14");
    appsrc15 = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc15");
    // appsink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    GstElement *infer = gst_bin_get_by_name(GST_BIN(pipeline), "infer");

    for (int i = 0; i < 4; i++)
    {
        gchar *jpegdec_name = g_strdup_printf("jpegdec%d", i);
        GstElement *jpegdec = gst_bin_get_by_name(GST_BIN(pipeline), jpegdec_name);
        if (jpegdec)
        {
            GstPad *jpegdec_src_pad = gst_element_get_static_pad(jpegdec, "src");
            if (jpegdec_src_pad)
            {
                gst_pad_add_probe(jpegdec_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                                  jpegdec_src_probe, NULL, NULL);
                gst_object_unref(jpegdec_src_pad);
            }
            gst_object_unref(jpegdec);
        }
        g_free(jpegdec_name);
    }

    // Add a probe to the src pad of nvinfer
    GstPad *infer_src_pad = gst_element_get_static_pad(infer, "src");
    if (!infer_src_pad)
    {
        g_printerr("Unable to get src pad from nvinfer element\n");
    }
    else
    {
        gst_pad_add_probe(infer_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                          osd_sink_pad_buffer_probe, NULL, NULL);
        gst_object_unref(infer_src_pad);
    }

    // g_object_set(appsink, "emit-signals", TRUE, NULL);
    // g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);
    g_signal_connect(appsrc0, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc1, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc2, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc3, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc4, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc5, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc6, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc7, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc8, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc9, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc10, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc11, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc12, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc13, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc14, "need-data", G_CALLBACK(push_sample), NULL);
    g_signal_connect(appsrc15, "need-data", G_CALLBACK(push_sample), NULL);
    GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_signal_watch(bus);
    g_signal_connect(bus, "message", G_CALLBACK(on_bus_message), NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);

My probe functions and the appsrc push-frame signal callback are as follows:

void push_sample(GstElement *src, guint length, gpointer user_data)
{
    int source_id = -1;

    if (src == appsrc0)
        source_id = 0;
    else if (src == appsrc1)
        source_id = 1;
    else if (src == appsrc2)
        source_id = 2;
    else if (src == appsrc3)
        source_id = 3;
    else if (src == appsrc4)
        source_id = 4;
    else if (src == appsrc5)
        source_id = 5;
    else if (src == appsrc6)
        source_id = 6;
    else if (src == appsrc7)
        source_id = 7;
    else if (src == appsrc8)
        source_id = 8;
    else if (src == appsrc9)
        source_id = 9;
    else if (src == appsrc10)
        source_id = 10;
    else if (src == appsrc11)
        source_id = 11;
    else if (src == appsrc12)
        source_id = 12;
    else if (src == appsrc13)
        source_id = 13;
    else if (src == appsrc14)
        source_id = 14;
    else if (src == appsrc15)
        source_id = 15;
    else
    {
        g_printerr("Unknown appsrc address\n");
        return;
    }

    g_mutex_lock(&prepared_queue_mutex);
    // Wait for preprocessed data, with a timeout to avoid deadlock
    while (g_queue_is_empty(prepared_tile_queue) && !preprocessing_finished)
    {
        gint64 end_time = g_get_monotonic_time() + 10 * G_TIME_SPAN_MILLISECOND; // 10 ms timeout
        if (!g_cond_wait_until(&prepared_queue_cond, &prepared_queue_mutex, end_time))
        {
            printf("Timed out waiting for preprocessed data\n");
            continue;
        }
    }

    PreparedTile *prepared = g_queue_pop_head(prepared_tile_queue);
    g_mutex_unlock(&prepared_queue_mutex);

    if (!prepared)
    {
        // No more data; end the stream
        static gboolean eos_sent = FALSE;
        if (!eos_sent && preprocessing_finished)
        {
            printf("All tiles processed; sending EOS\n");
            gst_app_src_end_of_stream(GST_APP_SRC(src));
            eos_sent = TRUE;
        }
        return;
    }

    // Create the GstBuffer - use gst_buffer_fill instead of map/unmap
    GstBuffer *buffer = gst_buffer_new_allocate(NULL, prepared->jpeg_size, NULL);
    if (!buffer)
    {
        g_print("Source %d: Failed to allocate buffer\n", source_id);
        free(prepared->jpeg_data);
        free(prepared);
        return;
    }

    gst_buffer_fill(buffer, 0, prepared->jpeg_data, prepared->jpeg_size);

    // Put the TileInfo from this PreparedTile straight into the queue to guarantee strict one-to-one correspondence
    TileInfo *info = malloc(sizeof(TileInfo));
    *info = prepared->tile_info; // taken from prepared, guaranteeing exact correspondence
    g_queue_push_tail(tile_info_queue, info);

    // Add timestamp metadata
    struct timeval precise_time;
    gchar *timestamp_str = get_precise_timestamp(&precise_time);
    if (timestamp_str)
    {
        printf("push_sample Timestamp: source_id=%d, timestamp=%s, num=%d\n",
               source_id, timestamp_str, q++);

        // Allocate the timestamp data structure
        CustomTimestampData *timestamp_data = g_malloc0(sizeof(CustomTimestampData));
        if (timestamp_data)
        {
            // Copy the string safely, ensuring null termination
            g_strlcpy(timestamp_data->timestamp_str, timestamp_str, sizeof(timestamp_data->timestamp_str));
            timestamp_data->source_id = source_id;
            timestamp_data->precise_time = precise_time;

            // Add DeepStream metadata
            NvDsMeta *dsmeta = gst_buffer_add_nvds_meta(buffer, timestamp_data, NULL,
                                                        copy_timestamp_meta, release_timestamp_meta);
            if (dsmeta)
            {
                dsmeta->meta_type = NVDS_USER_META;
                printf("Source %d: Successfully added timestamp metadata\n", source_id);
            }
            else
            {
                g_print("Source %d: Failed to add timestamp metadata to frame\n",
                        source_id);
                g_free(timestamp_data);
            }
        }
        else
        {
            g_print("Source %d: Failed to allocate timestamp data for frame\n",
                    source_id);
        }

        g_free(timestamp_str);
    }
    else
    {
        g_print("Source %d: Failed to get timestamp for frame\n",
                source_id);
    }
    // Push the buffer into the pipeline - use g_signal_emit_by_name instead of gst_app_src_push_buffer
    GstFlowReturn ret;
    g_signal_emit_by_name(src, "push-buffer", buffer, &ret);

    if (ret == GST_FLOW_OK)
    {
        ImageInfo *current_img = &images_info[prepared->tile_info.image_index];
        if ((info->image_index + 1) % 10 == 0 && info->tile_index + 1 == current_img->total_tiles)
        {
            printf("Appsrc pushed image %d (of %d), tile %d (of %d), at position (%d, %d); push queue length: %d\n",
                   info->image_index + 1, total_images,
                   info->tile_index + 1, current_img->total_tiles,
                   info->tile_x, info->tile_y, g_queue_get_length(tile_info_queue));
        }
    }
    else if (ret != GST_FLOW_FLUSHING)
    {
        g_print("Source %d: Push buffer failed: %s\n", source_id, gst_flow_get_name(ret));
    }

    gst_buffer_unref(buffer);

    // Clean up the preprocessed data
    free(prepared->jpeg_data);
    free(prepared);
}

static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (sample)
    {
        gst_sample_unref(sample); // nothing to process; just release it
    }

    return GST_FLOW_OK;
}

// Probe added on the src pad of appsrc
static GstPadProbeReturn appsrc_src_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
    g_print("appsrc src buffer address: %p\n", buffer);

    // 检查元数据
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buffer);
    if (batch_meta)
    {
        g_print("Found batch meta in appsrc src!\n");
    }
    else
    {
        g_print("No batch meta in appsrc src\n");
    }

    return GST_PAD_PROBE_OK;
}

GstPadProbeReturn osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *)info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;
    // g_print("=== OSD Probe: Processing batch with %d frames ===\n", batch_meta->num_frames_in_batch);
    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
    {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
        if (!frame_meta)
            continue;

        for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user != NULL; l_user = l_user->next)
        {
            NvDsUserMeta *user_meta = (NvDsUserMeta *)(l_user->data);
            if (!user_meta || user_meta->base_meta.meta_type != NVDS_USER_META)
                continue;

            CustomTimestampData *timestamp_data = (CustomTimestampData *)user_meta->user_meta_data;
            if (timestamp_data)
            {
                g_print("  OSD Timestamp: source_id=%d, timestamp=%s\n",
                        timestamp_data->source_id, timestamp_data->timestamp_str);
            }
        }
    }
    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
    {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);

        TileInfo *tile_info = (TileInfo *)g_queue_pop_head(tile_info_queue);
        if (!tile_info)
        {
            g_printerr("Tile queue underflow!\n");
            continue;
        }

        int img_idx = tile_info->image_index;
        int tile_x = tile_info->tile_x;
        int tile_y = tile_info->tile_y;

        // Original logic, now using tile_info to update the detection info
        int count = 0;
        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next)
        {
            NvDsObjectMeta *obj = (NvDsObjectMeta *)(l_obj->data);
            if (obj->class_id < 0 || obj->class_id > 4)
                continue;
            count++;
        }

        if (count > 0)
        {
            int prev = detection_counts[img_idx];
            detection_counts[img_idx] += count;
            all_detections[img_idx] = realloc(all_detections[img_idx], detection_counts[img_idx] * sizeof(DetectionResult));

            int idx = prev;
            for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next)
            {
                NvDsObjectMeta *obj = (NvDsObjectMeta *)(l_obj->data);
                if (obj->class_id < 0 || obj->class_id > 4)
                    continue;

                all_detections[img_idx][idx].label = labels[obj->class_id];
                all_detections[img_idx][idx].confidence = obj->confidence;
                all_detections[img_idx][idx].x = tile_x + obj->rect_params.left;
                all_detections[img_idx][idx].y = tile_y + obj->rect_params.top;
                all_detections[img_idx][idx].width = obj->rect_params.width;
                all_detections[img_idx][idx].height = obj->rect_params.height;
                idx++;
            }
        }

        free(tile_info); // release after use
    }

    return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn jpegdec_src_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
    if (!buffer)
        return GST_PAD_PROBE_OK;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buffer);
    if (batch_meta)
    {
        g_print("=== JPEGDec Src Probe: Batch meta found ===\n");
    }
    else
    {
        g_print("=== JPEGDec Src Probe: No batch meta found ===\n");
    }
    return GST_PAD_PROBE_OK;
}

Looking forward to your reply, thanks

Hi,
Some GStreamer filter elements don’t copy the metadata from the sink-pad buffer into the src-pad buffer. I think nvvideoconvert is one of them in certain JetPack versions. Try adding pad probes before and after it to check. You may need to modify the element to copy the metadata manually if it’s getting dropped.

I’m sorry, I don’t quite understand; could you explain it to me again?
I previously added a probe to the src pad of appsrc, and even there no metadata was found, let alone in the elements that follow.

I see you use the DeepStream gst_buffer_get_nvds_batch_meta interface in all the probe functions to get DeepStream batch meta (MetaData in the DeepStream SDK — DeepStream documentation). That is wrong. DeepStream batch meta is generated and attached by the DeepStream element nvstreammux (Gst-nvstreammux — DeepStream documentation); before this element, the batch meta described in MetaData in the DeepStream SDK — DeepStream documentation does not exist.
appsrc, jpegdec, capsfilter, and videoconvert are all plain GStreamer elements; they have nothing to do with DeepStream and can never generate any DeepStream-related data.

Please make sure you have basic GStreamer knowledge before you start with DeepStream. Please read the DeepStream documents to understand the mechanism of DeepStream.
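As a sketch of the point above (this is only an annotation of the posted topology, not additional code): NvDsBatchMeta exists only downstream of nvstreammux.

```
appsrc -> jpegdec -> videoconvert -> nvvideoconvert -> queue -> mux.sink_N
  (plain GStreamer buffers: gst_buffer_get_nvds_batch_meta() returns NULL
   on every pad in this part of the pipeline)

nvstreammux (mux) -> nvinfer -> ...
  (NvDsBatchMeta is created by nvstreammux and is available from its src
   pad onward; probes that read batch meta belong here or later)
```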

Thank you for your reply. Can I attach DeepStream metadata to each buffer pushed into the pipeline when pushing frames from appsrc? My metadata needs to be bound to each frame. (I have done this successfully in another example pipeline before, but I don’t know why it doesn’t work in this pipeline.) For now I just add probes to the sink pads of nvinfer and nvdsosd:

static GstPadProbeReturn osd_probe_callback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buffer;

    if (info->type & GST_PAD_PROBE_TYPE_BUFFER)
    {
        buffer = GST_PAD_PROBE_INFO_BUFFER(info);
        if (!buffer)
            return GST_PAD_PROBE_OK;

        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buffer);
        if (batch_meta)
        {
            g_print("=== OSD Probe: Processing batch with %d frames ===\n",
                    batch_meta->num_frames_in_batch);

            for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
            {
                NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
                if (!frame_meta)
                    continue;

                for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user != NULL; l_user = l_user->next)
                {
                    NvDsUserMeta *user_meta = (NvDsUserMeta *)(l_user->data);
                    if (!user_meta || user_meta->base_meta.meta_type != NVDS_USER_META)
                        continue;

                    CustomTimestampData *timestamp_data = (CustomTimestampData *)user_meta->user_meta_data;
                    if (timestamp_data)
                    {
                        g_print("  OSD Timestamp: source_id=%d, timestamp=%s\n",
                                timestamp_data->source_id, timestamp_data->timestamp_str);
                    }
                }
            }
        }
    }

    return GST_PAD_PROBE_OK;
}

But this loop, for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user != NULL; l_user = l_user->next) {}, is never entered.

You can attach your customized meta by the user meta of NvDsMeta.

Please refer to the sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-gst-metadata-test

Please add nvmultistreamtiler or nvstreamdemux after nvinfer. Your pipeline is wrong.

“filesink location=output11.mkv”: this is my output.
Sorry, I don’t quite understand what you mean; could you explain it again? My current pipeline does start normally and writes inference output to the mkv file. I just can’t find the metadata I attached when probing nvdsosd. I also checked the official example, and I attach the metadata in the same way, except that I do it in the signal callback of appsrc.

You have 16 inputs and your nvstreammux batch size is 8. Please refer to Frequently Asked Questions — DeepStream documentation and Frequently Asked Questions — DeepStream documentation to set the correct batch-size values.
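As a sketch of what matching the batch sizes would look like for the posted 16-source pipeline (the numbers here are taken from the thread and are assumptions, not a verified configuration): the nvstreammux batch-size and the batch-size in the nvinfer config file would both be set to the source count.

```
nvstreammux name=mux batch-size=16 width=640 height=640 live-source=0

# and in the nvinfer config file referenced by config-file-path:
[property]
batch-size=16
```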

Your pipeline can run, but it is wrong. Your 16 inputs will not all appear in your final output. nvdsosd and nvv4l2h264enc do not support batched buffers, while you already use nvstreammux to generate a batch. You must use nvmultistreamtiler or nvstreamdemux after nvinfer to convert the batch back to frames.

Please refer to the deepstream_python_apps/apps/deepstream-test3/deepstream_test_3.py at master · NVIDIA-AI-IOT/deepstream_python_apps for a typical multiple inputs inferencing pipeline.
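For illustration, a sketch of the suggested tail of the pipeline, with nvmultistreamtiler inserted after nvinfer to composite the batch back into a single frame before nvdsosd and the encoder (the tiler rows/columns and output size are assumptions, not a tested configuration):

```
... ! nvstreammux name=mux batch-size=16 width=640 height=640 live-source=0 ! \
nvinfer name=infer config-file-path=<your_config> ! \
nvmultistreamtiler rows=4 columns=4 width=1920 height=1080 ! \
nvvideoconvert ! nvdsosd name=osd display-text=1 ! \
nvv4l2h264enc bitrate=4000000 preset-level=1 ! h264parse ! matroskamux ! \
filesink location=output11.mkv
```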

I deliberately set this to 8; 16 also works. With 16 the processing is slower than with 8, so I set it to 8. This is intentional.

// OSD probe callback
static GstPadProbeReturn osd_probe_callback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
    GstBuffer *buffer;
    
    if (info->type & GST_PAD_PROBE_TYPE_BUFFER) {
        buffer = GST_PAD_PROBE_INFO_BUFFER(info);
        if (!buffer) return GST_PAD_PROBE_OK;
        
        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buffer);
        if (batch_meta) {
            g_print("=== OSD Probe: Processing batch with %d frames ===\n", 
                   batch_meta->num_frames_in_batch);
            
            for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
                NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
                if (!frame_meta) continue;
                
                for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user != NULL; l_user = l_user->next) {
                    NvDsUserMeta *user_meta = (NvDsUserMeta *)(l_user->data);
                    if (!user_meta || user_meta->base_meta.meta_type != NVDS_USER_META) continue;
                    
                    CustomTimestampData *timestamp_data = (CustomTimestampData *)user_meta->user_meta_data;
                    if (timestamp_data) {
                        g_print("  OSD Timestamp: source_id=%d, timestamp=%s, frame=%lu\n",
                               timestamp_data->source_id, timestamp_data->timestamp_str,
                               timestamp_data->frame_number);
                    }
                }
            }
        }
    }
    
    return GST_PAD_PROBE_OK;
}
// push_sample signal callback
static void push_sample(GstElement *appsrc, guint length, gpointer user_data) {
    AppData *app_data = (AppData *)user_data;
    if (!app_data) return;
    
    gint source_id = -1;
    
    // Find the corresponding source ID
    for (int i = 0; i < NUM_SOURCES; i++) {
        if (app_data->appsrc[i] == appsrc) {
            source_id = i;
            break;
        }
    }
    
    if (source_id == -1) {
        g_print("Error: Unknown appsrc in need-data callback\n");
        return;
    }
    
    g_mutex_lock(&app_data->global_mutex);
    if (!app_data->running) {
        g_mutex_unlock(&app_data->global_mutex);
        return;
    }
    g_mutex_unlock(&app_data->global_mutex);
    
    SourceData *source = &app_data->sources[source_id];
    if (!source->frame_data) {
        g_print("Source %d: frame_data is NULL\n", source_id);
        return;
    }
    
    g_mutex_lock(&source->mutex);
    guint64 current_frame = source->frame_count;
    g_mutex_unlock(&source->mutex);
    
    // Generate a test frame
    generate_test_frame(source->frame_data, FRAME_WIDTH, FRAME_HEIGHT, current_frame, source_id);
    
    // Create the GStreamer buffer
    GstBuffer *buffer = gst_buffer_new_allocate(NULL, FRAME_SIZE, NULL);
    if (!buffer) {
        g_print("Source %d: Failed to allocate buffer\n", source_id);
        return;
    }
    
    // Fill the buffer with frame data
    gst_buffer_fill(buffer, 0, source->frame_data, FRAME_SIZE);
    
    // Set the timestamp and duration
    GST_BUFFER_PTS(buffer) = gst_util_uint64_scale(current_frame, GST_SECOND, FPS);
    GST_BUFFER_DTS(buffer) = GST_BUFFER_PTS(buffer);
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(1, GST_SECOND, FPS);
    
    // Add timestamp metadata
    struct timeval precise_time;
    gchar *timestamp_str = get_precise_timestamp(&precise_time);
    if (timestamp_str) {
        printf("push_sample Timestamp: source_id=%d, timestamp=%s, frame=%lu\n",
                source_id, timestamp_str, current_frame);
        
        // Allocate the timestamp data structure
        CustomTimestampData *timestamp_data = g_malloc0(sizeof(CustomTimestampData));
        if (timestamp_data) {
            // Copy the string safely, ensuring null termination
            g_strlcpy(timestamp_data->timestamp_str, timestamp_str, sizeof(timestamp_data->timestamp_str));
            timestamp_data->source_id = source_id;
            timestamp_data->frame_number = current_frame;
            timestamp_data->precise_time = precise_time;
            
            // Add DeepStream metadata
            NvDsMeta *dsmeta = gst_buffer_add_nvds_meta(buffer, timestamp_data, NULL, 
                                                       copy_timestamp_meta, release_timestamp_meta);
            if (dsmeta) {
                dsmeta->meta_type = NVDS_USER_META;
            } else {
                g_print("Source %d: Failed to add timestamp metadata to frame %lu\n", 
                       source_id, current_frame);
                g_free(timestamp_data);
            }
        } else {
            g_print("Source %d: Failed to allocate timestamp data for frame %lu\n", 
                   source_id, current_frame);
        }
        
        g_free(timestamp_str);
    } else {
        g_print("Source %d: Failed to get timestamp for frame %lu\n", 
               source_id, current_frame);
    }
    
    // Push the buffer
    GstFlowReturn ret;
    g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
    
    if (ret == GST_FLOW_OK) {
        g_mutex_lock(&source->mutex);
        source->frame_count++;
        if (source->frame_count % 100 == 0) {
            g_print("Source %d: pushed %lu frames\n", source_id, source->frame_count);
        }
        g_mutex_unlock(&source->mutex);
    } else if (ret != GST_FLOW_FLUSHING) {
        g_print("Source %d: Push buffer failed: %s\n", source_id, gst_flow_get_name(ret));
    }
    
    // Release the buffer reference
    gst_buffer_unref(buffer);
}

// Pipeline creation function
static GstElement* create_pipeline(AppData *app_data) {
    if (!app_data) return NULL;
    
    // Build the full pipeline string
    gchar *pipeline_str = g_strdup_printf(
        "appsrc name=appsrc0 format=time is-live=true do-timestamp=false max-bytes=0 max-buffers=0 block=true emit-signals=true ! "
        "video/x-raw,format=NV12,width=%d,height=%d,framerate=%d/1 ! "
        "nvvideoconvert name=conv0 ! "
        "queue name=queue0 max-size-buffers=0 max-size-bytes=0 max-size-time=0 leaky=0 ! "
        "nvstreammux.sink_0 "
        
        "appsrc name=appsrc1 format=time is-live=true do-timestamp=false max-bytes=0 max-buffers=0 block=true emit-signals=true ! "
        "video/x-raw,format=NV12,width=%d,height=%d,framerate=%d/1 ! "
        "nvvideoconvert name=conv1 ! "
        "queue name=queue1 max-size-buffers=0 max-size-bytes=0 max-size-time=0 leaky=0 ! "
        "nvstreammux.sink_1 "
        
        "appsrc name=appsrc2 format=time is-live=true do-timestamp=false max-bytes=0 max-buffers=0 block=true emit-signals=true ! "
        "video/x-raw,format=NV12,width=%d,height=%d,framerate=%d/1 ! "
        "nvvideoconvert name=conv2 ! "
        "queue name=queue2 max-size-buffers=0 max-size-bytes=0 max-size-time=0 leaky=0 ! "
        "nvstreammux.sink_2 "
        
        "appsrc name=appsrc3 format=time is-live=true do-timestamp=false max-bytes=0 max-buffers=0 block=true emit-signals=true ! "
        "video/x-raw,format=NV12,width=%d,height=%d,framerate=%d/1 ! "
        "nvvideoconvert name=conv3 ! "
        "queue name=queue3 max-size-buffers=0 max-size-bytes=0 max-size-time=0 leaky=0 ! "
        "nvstreammux.sink_3 "
        
        "nvstreammux name=nvstreammux batch-size=%d width=%d height=%d live-source=true batched-push-timeout=%d async=false ! "
        "nvinfer config-file-path=/opt/PlaneShipInfer/model/yolo11/DeepStream-Yolo/config_infer_primary_yolo11_plane_ship_b4.txt ! "
        "nvvideoconvert ! "
        "nvdsosd name=osd display-text=1 ! "
        "nvv4l2h264enc bitrate=4000000 preset-level=1 ! "
        "h264parse ! "
        "matroskamux ! "
        "filesink location=output_with_deepstream_timestamp.mkv",
        FRAME_WIDTH, FRAME_HEIGHT, FPS,    // appsrc0
        FRAME_WIDTH, FRAME_HEIGHT, FPS,    // appsrc1
        FRAME_WIDTH, FRAME_HEIGHT, FPS,    // appsrc2
        FRAME_WIDTH, FRAME_HEIGHT, FPS,    // appsrc3
        NUM_SOURCES, FRAME_WIDTH, FRAME_HEIGHT, BATCH_TIMEOUT  // nvstreammux
    );
    
    g_print("Pipeline string:\n%s\n", pipeline_str);
    
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(pipeline_str, &error);
    g_free(pipeline_str);
    
    if (!pipeline) {
        g_print("Failed to create pipeline: %s\n", error ? error->message : "Unknown error");
        if (error) g_error_free(error);
        return NULL;
    }
    
    // Get the appsrc elements and connect their signals
    for (int i = 0; i < NUM_SOURCES; i++) {
        gchar *appsrc_name = g_strdup_printf("appsrc%d", i);
        app_data->appsrc[i] = gst_bin_get_by_name(GST_BIN(pipeline), appsrc_name);
        g_free(appsrc_name);
        
        if (!app_data->appsrc[i]) {
            g_print("Failed to get appsrc%d from pipeline\n", i);
            gst_object_unref(pipeline);
            return NULL;
        }
        
        // Connect the need-data signal
        g_signal_connect(app_data->appsrc[i], "need-data", G_CALLBACK(push_sample), app_data);
    }
    
    // Add the OSD probe
    GstElement *osd = gst_bin_get_by_name(GST_BIN(pipeline), "osd");
    if (osd) {
        GstPad *osd_src_pad = gst_element_get_static_pad(osd, "src");
        if (osd_src_pad) {
            gst_pad_add_probe(osd_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                             osd_probe_callback, NULL, NULL);
            gst_object_unref(osd_src_pad);
        }
        gst_object_unref(osd);
    }
    
    return pipeline;
}

The code above is a test pipeline I wrote to verify batched multi-source inference with attached metadata (it is not the pipeline I asked about in my post). This test pipeline adds the metadata during the appsrc push phase, then adds a probe on nvdsosd to retrieve it, and multiple inferences succeed. The pipeline from my post attaches metadata by imitating this test pipeline; attaching reports success, but I don’t know why the metadata comes back empty when I try to retrieve it.

I can find neither the "gst_to_nvds_meta_transform_func" nor the "gst_to_nvds_meta_release_func" implementation in the code you posted. Please follow the sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-gst-metadata-test exactly, and please add nvmultistreamtiler or nvstreamdemux after nvinfer in your pipeline.

I have extracted my pipeline into the test code below. It attaches metadata mid-stream by imitating the official sample file, and it still reports that the metadata was attached successfully, yet the metadata cannot be found at nvdsosd. I keep wondering why it cannot be found there. Is there something unusual about my pipeline?

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <glib.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include "gstnvdsmeta.h"

#define CONFIG_PATH "/opt/PlaneShipInfer/model/yolo11/DeepStream-Yolo/config_infer_primary_yolo11_plane_ship_b4.txt"
#define IMAGE_WIDTH 640
#define IMAGE_HEIGHT 640
#define FRAMERATE 30
#define JPEG_FILE_PATH "/home/nvidia/PlaneShipInfer/src/TestFile/tiff/jpg_output/airplane_000.jpg" // Path to your JPEG file; adjust as needed

/** User-defined metadata type */
#define NVDS_APPSRC_CUSTOM_META (nvds_get_user_meta_type("NVIDIA.APPSRC.CUSTOM_META"))

typedef struct
{
    GstElement *pipeline;
    GstElement *appsrc[4];
    GMainLoop *loop;
    guint frame_count;
    gboolean is_live;
    guint8 *jpeg_data;
    gsize jpeg_size;
} AppData;

/** Custom metadata structure */
typedef struct _AppsrcCustomMeta
{
    guint source_id;        // which branch (0-3) the frame came from
    guint64 push_timestamp; // push timestamp in milliseconds
    guint frame_number;     // frame number
} AppsrcCustomMeta;

/** GST metadata copy function */
static gpointer appsrc_custom_meta_copy_func(gpointer data, gpointer user_data)
{
    AppsrcCustomMeta *src_meta = (AppsrcCustomMeta *)data;
    AppsrcCustomMeta *dst_meta = (AppsrcCustomMeta *)g_malloc0(sizeof(AppsrcCustomMeta));
    memcpy(dst_meta, src_meta, sizeof(AppsrcCustomMeta));
    return (gpointer)dst_meta;
}

/** GST metadata release function */
static void appsrc_custom_meta_release_func(gpointer data, gpointer user_data)
{
    g_free(data); // g_free() handles NULL
}

/** GST-to-NVDS metadata transform function */
static gpointer appsrc_gst_to_nvds_meta_transform_func(gpointer data, gpointer user_data)
{
    NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
    AppsrcCustomMeta *src_meta = (AppsrcCustomMeta *)user_meta->user_meta_data;
    AppsrcCustomMeta *dst_meta = (AppsrcCustomMeta *)appsrc_custom_meta_copy_func(src_meta, NULL);
    return (gpointer)dst_meta;
}

/** NVDS metadata release function */
static void appsrc_gst_nvds_meta_release_func(gpointer data, gpointer user_data)
{
    NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
    AppsrcCustomMeta *custom_meta = (AppsrcCustomMeta *)user_meta->user_meta_data;
    appsrc_custom_meta_release_func(custom_meta, NULL);
}

/** Get the current timestamp in milliseconds */
static guint64 get_current_timestamp_ms()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (guint64)tv.tv_sec * 1000 + (guint64)tv.tv_usec / 1000;
}

// Load the JPEG file data
static gboolean load_jpeg_file(AppData *data)
{
    FILE *file;
    long file_size;

    file = fopen(JPEG_FILE_PATH, "rb");
    if (!file)
    {
        g_print("Cannot open JPEG file: %s\n", JPEG_FILE_PATH);
        g_print("Please make sure you have a valid JPEG file in the current directory\n");
        return FALSE;
    }

    // Get the file size
    fseek(file, 0, SEEK_END);
    file_size = ftell(file);
    fseek(file, 0, SEEK_SET);

    if (file_size <= 0)
    {
        g_print("Invalid JPEG file size\n");
        fclose(file);
        return FALSE;
    }

    // Allocate memory and read the file
    data->jpeg_data = g_malloc(file_size);
    data->jpeg_size = fread(data->jpeg_data, 1, file_size, file);
    fclose(file);

    if (data->jpeg_size != (gsize)file_size)
    {
        g_print("Failed to read complete JPEG file\n");
        g_free(data->jpeg_data);
        data->jpeg_data = NULL;
        return FALSE;
    }

    g_print("Successfully loaded JPEG file: %s (size: %ld bytes)\n", JPEG_FILE_PATH, file_size);
    return TRUE;
}

// Create a JPEG buffer (from the real JPEG data) and attach custom metadata
static GstBuffer *create_jpeg_buffer_from_file(AppData *data, int source_id, int frame_num)
{
    if (!data->jpeg_data || data->jpeg_size == 0)
    {
        g_print("No JPEG data available\n");
        return NULL;
    }

    // Create a buffer and copy the JPEG data into it
    GstBuffer *buffer = gst_buffer_new_allocate(NULL, data->jpeg_size, NULL);
    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_WRITE);
    memcpy(map.data, data->jpeg_data, data->jpeg_size);
    gst_buffer_unmap(buffer, &map);

    // Set timestamps
    GST_BUFFER_PTS(buffer) = gst_util_uint64_scale(frame_num, GST_SECOND, FRAMERATE);
    GST_BUFFER_DTS(buffer) = GST_BUFFER_PTS(buffer);
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(1, GST_SECOND, FRAMERATE);

    // Create the custom metadata
    AppsrcCustomMeta *custom_meta = (AppsrcCustomMeta *)g_malloc0(sizeof(AppsrcCustomMeta));
    if (custom_meta == NULL)
    {
        g_print("Failed to allocate custom metadata\n");
        gst_buffer_unref(buffer);
        return NULL;
    }

    // Fill in the custom metadata
    custom_meta->source_id = source_id;
    custom_meta->push_timestamp = get_current_timestamp_ms();
    custom_meta->frame_number = frame_num;

    // Attach the custom metadata to the GST buffer
    NvDsMeta *meta = gst_buffer_add_nvds_meta(buffer, custom_meta, NULL,
                                              appsrc_custom_meta_copy_func,
                                              appsrc_custom_meta_release_func);
    if (!meta)
    {
        g_print("Failed to attach NvDsMeta to buffer\n");
        g_free(custom_meta);
        gst_buffer_unref(buffer);
        return NULL;
    }

    // Set the metadata type
    meta->meta_type = (GstNvDsMetaType)NVDS_APPSRC_CUSTOM_META;

    // Set the GST-to-NVDS transform and release functions
    meta->gst_to_nvds_meta_transform_func = appsrc_gst_to_nvds_meta_transform_func;
    meta->gst_to_nvds_meta_release_func = appsrc_gst_nvds_meta_release_func;

    g_print("[APPSRC%d] Attached custom metadata - Frame: %d, Timestamp: %lu ms\n",
            source_id, frame_num, custom_meta->push_timestamp);

    return buffer;
}

// need-data callback for appsrc
static void push_sample(GstElement *appsrc, guint unused_size, gpointer user_data)
{
    static int frame_counts[4] = {0, 0, 0, 0};
    AppData *data = (AppData *)user_data;
    GstBuffer *buffer;
    GstFlowReturn ret;

    // Determine which appsrc this is
    int source_id = -1;
    for (int i = 0; i < 4; i++)
    {
        if (appsrc == data->appsrc[i])
        {
            source_id = i;
            break;
        }
    }

    if (source_id == -1)
    {
        g_print("Unknown appsrc!\n");
        return;
    }

    // Create a JPEG buffer (real JPEG file plus custom metadata)
    buffer = create_jpeg_buffer_from_file(data, source_id, frame_counts[source_id]);
    if (!buffer)
    {
        g_print("Failed to create buffer for source %d\n", source_id);
        return;
    }

    // Push the buffer into the pipeline
    g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
    gst_buffer_unref(buffer);

    frame_counts[source_id]++;

    if (ret != GST_FLOW_OK)
    {
        g_print("Push buffer failed for source %d, return: %d\n", source_id, ret);
        return;
    }

    // Limit the frame count to avoid generating forever
    if (frame_counts[source_id] > 300)
    { // 10 s * 30 fps
        g_print("Sending EOS for source %d\n", source_id);
        g_signal_emit_by_name(appsrc, "end-of-stream", &ret);
    }
}

/** OSD src pad probe - reads the custom metadata */
static GstPadProbeReturn
osd_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *)info->data;
    NvDsMetaList *l_frame = NULL;
    NvDsUserMeta *user_meta = NULL;
    AppsrcCustomMeta *custom_meta = NULL;
    NvDsMetaList *l_user_meta = NULL;
    gboolean found_custom_meta = FALSE;

    g_print("[OSD PROBE] Buffer received, checking for metadata...\n");

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta)
    {
        g_print("[OSD PROBE] No batch metadata found!\n");
        return GST_PAD_PROBE_OK;
    }

    g_print("[OSD PROBE] Batch has %d frames\n", batch_meta->num_frames_in_batch);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
    {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
        g_print("[OSD PROBE] Checking frame %d for user metadata...\n", frame_meta->frame_num);

        // Check batch-level user metadata
        for (l_user_meta = batch_meta->batch_user_meta_list; l_user_meta != NULL;
             l_user_meta = l_user_meta->next)
        {
            user_meta = (NvDsUserMeta *)(l_user_meta->data);
            g_print("[OSD PROBE] Found batch user meta type: %d (looking for %d)\n",
                    user_meta->base_meta.meta_type, NVDS_APPSRC_CUSTOM_META);
            if (user_meta->base_meta.meta_type == NVDS_APPSRC_CUSTOM_META)
            {
                custom_meta = (AppsrcCustomMeta *)user_meta->user_meta_data;
                guint64 current_time = get_current_timestamp_ms();
                guint64 latency = current_time - custom_meta->push_timestamp;
                found_custom_meta = TRUE;

                g_print("[OSD PROBE] Retrieved custom metadata from BATCH level:\n");
                g_print("  ├─ Source ID: %d\n", custom_meta->source_id);
                g_print("  ├─ Frame Number: %d\n", custom_meta->frame_number);
                g_print("  ├─ Push Timestamp: %lu ms\n", custom_meta->push_timestamp);
                g_print("  ├─ Current Time: %lu ms\n", current_time);
                g_print("  └─ Processing Latency: %lu ms\n\n", latency);
            }
        }

        // Check frame-level user metadata
        for (l_user_meta = frame_meta->frame_user_meta_list; l_user_meta != NULL;
             l_user_meta = l_user_meta->next)
        {
            user_meta = (NvDsUserMeta *)(l_user_meta->data);
            g_print("[OSD PROBE] Found frame user meta type: %d (looking for %d)\n",
                    user_meta->base_meta.meta_type, NVDS_APPSRC_CUSTOM_META);
            if (user_meta->base_meta.meta_type == NVDS_APPSRC_CUSTOM_META)
            {
                custom_meta = (AppsrcCustomMeta *)user_meta->user_meta_data;
                guint64 current_time = get_current_timestamp_ms();
                guint64 latency = current_time - custom_meta->push_timestamp;
                found_custom_meta = TRUE;

                g_print("[OSD PROBE] Retrieved custom metadata from FRAME level:\n");
                g_print("  ├─ Source ID: %d\n", custom_meta->source_id);
                g_print("  ├─ Frame Number: %d\n", custom_meta->frame_number);
                g_print("  ├─ Push Timestamp: %lu ms\n", custom_meta->push_timestamp);
                g_print("  ├─ Current Time: %lu ms\n", current_time);
                g_print("  └─ Processing Latency: %lu ms\n\n", latency);
            }
        }
    }

    if (!found_custom_meta)
    {
        g_print("[OSD PROBE] No custom metadata found in this buffer\n\n");
    }

    return GST_PAD_PROBE_OK;
}

// Bus message handling callback
static gboolean bus_call(GstBus *bus, GstMessage *msg, gpointer data)
{
    // Note: the watch is registered with an AppData*, not a GMainLoop*
    GMainLoop *loop = ((AppData *)data)->loop;

    switch (GST_MESSAGE_TYPE(msg))
    {
    case GST_MESSAGE_EOS:
        g_print("End of stream\n");
        g_main_loop_quit(loop);
        break;
    case GST_MESSAGE_ERROR:
    {
        gchar *debug;
        GError *error;
        gst_message_parse_error(msg, &error, &debug);
        g_printerr("Error: %s\n", error->message);
        if (debug)
        {
            g_printerr("Debug: %s\n", debug);
        }
        g_error_free(error);
        g_free(debug);
        g_main_loop_quit(loop);
        break;
    }
    case GST_MESSAGE_WARNING:
    {
        gchar *debug;
        GError *error;
        gst_message_parse_warning(msg, &error, &debug);
        g_print("Warning: %s\n", error->message);
        if (debug)
        {
            g_print("Debug: %s\n", debug);
        }
        g_error_free(error);
        g_free(debug);
        break;
    }
    case GST_MESSAGE_STATE_CHANGED:
    {
        GstState old_state, new_state;
        gst_message_parse_state_changed(msg, &old_state, &new_state, NULL);
        if (GST_MESSAGE_SRC(msg) == GST_OBJECT(((AppData *)data)->pipeline))
        {
            g_print("Pipeline state changed from %s to %s\n",
                    gst_element_state_get_name(old_state),
                    gst_element_state_get_name(new_state));
        }
        break;
    }
    default:
        break;
    }
    return TRUE;
}

// Configure appsrc properties
static void configure_appsrc(GstElement *appsrc, int source_id)
{
    GstCaps *caps;

    // Set the caps to JPEG format
    caps = gst_caps_new_simple("image/jpeg",
                               "width", G_TYPE_INT, IMAGE_WIDTH,
                               "height", G_TYPE_INT, IMAGE_HEIGHT,
                               "framerate", GST_TYPE_FRACTION, FRAMERATE, 1,
                               NULL);

    g_object_set(G_OBJECT(appsrc),
                 "caps", caps,
                 "format", GST_FORMAT_TIME,
                 "is-live", TRUE,
                 "do-timestamp", TRUE,
                 "min-latency", G_GUINT64_CONSTANT(0),
                 "max-latency", G_GUINT64_CONSTANT(0),
                 "block", TRUE,
                 NULL);

    gst_caps_unref(caps);

    g_print("Configured appsrc%d\n", source_id);
}

int main(int argc, char *argv[])
{
    AppData data;
    GstBus *bus;
    guint bus_watch_id;
    GstPad *osd_src_pad = NULL;
    char pipeline_str[2048];

    // Initialize GStreamer
    gst_init(&argc, &argv);

    // Create the main loop
    data.loop = g_main_loop_new(NULL, FALSE);
    data.frame_count = 0;
    data.is_live = TRUE;
    data.jpeg_data = NULL;
    data.jpeg_size = 0;

    // Load the JPEG file
    if (!load_jpeg_file(&data))
    {
        g_printerr("Failed to load JPEG file. Please ensure %s exists and is a valid JPEG file.\n", JPEG_FILE_PATH);
        return -1;
    }

    // Build the pipeline string
    snprintf(pipeline_str, sizeof(pipeline_str),
             "appsrc name=appsrc0 ! jpegdec name=jpegdec0 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_0 "
             "appsrc name=appsrc1 ! jpegdec name=jpegdec1 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_1 "
             "appsrc name=appsrc2 ! jpegdec name=jpegdec2 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_2 "
             "appsrc name=appsrc3 ! jpegdec name=jpegdec3 ! videoconvert ! video/x-raw,format=RGBA ! "
             "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! queue ! mux.sink_3 "
             "nvstreammux name=mux batch-size=4 width=640 height=640 live-source=0 ! "
             "nvinfer name=infer config-file-path=%s ! "
             "nvvideoconvert ! "
             "nvdsosd name=osd display-text=1 ! "
             "nvv4l2h264enc bitrate=4000000 preset-level=1 ! "
             "h264parse ! "
             "matroskamux ! "
             "filesink location=output12.mkv",
             CONFIG_PATH);

    g_print("Pipeline: %s\n", pipeline_str);

    // Create the pipeline
    GError *error = NULL;
    data.pipeline = gst_parse_launch(pipeline_str, &error);
    if (!data.pipeline)
    {
        g_printerr("Failed to create pipeline: %s\n", error ? error->message : "unknown error");
        if (error)
            g_error_free(error);
        g_free(data.jpeg_data);
        return -1;
    }

    // Get the appsrc elements
    for (int i = 0; i < 4; i++)
    {
        char appsrc_name[32];
        snprintf(appsrc_name, sizeof(appsrc_name), "appsrc%d", i);
        data.appsrc[i] = gst_bin_get_by_name(GST_BIN(data.pipeline), appsrc_name);
        if (!data.appsrc[i])
        {
            g_printerr("Failed to get %s\n", appsrc_name);
            g_free(data.jpeg_data);
            return -1;
        }

        // Configure the appsrc
        configure_appsrc(data.appsrc[i], i);

        // Connect the need-data signal
        g_signal_connect(data.appsrc[i], "need-data", G_CALLBACK(push_sample), &data);
    }

    // Get the nvinfer element and add a src pad probe for debugging
    GstElement *nvinfer = gst_bin_get_by_name(GST_BIN(data.pipeline), "infer");
    if (nvinfer)
    {
        GstPad *nvinfer_src_pad = gst_element_get_static_pad(nvinfer, "src");
        if (nvinfer_src_pad)
        {
            gst_pad_add_probe(nvinfer_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                              osd_src_pad_buffer_probe, NULL, NULL);
            g_print("Added probe to nvinfer source pad for debugging\n");
            gst_object_unref(nvinfer_src_pad);
        }
        gst_object_unref(nvinfer);
    }

    // Get the OSD element and add a src pad probe
    GstElement *osd = gst_bin_get_by_name(GST_BIN(data.pipeline), "osd");
    if (!osd)
    {
        g_printerr("Failed to get OSD element\n");
        g_free(data.jpeg_data);
        return -1;
    }

    osd_src_pad = gst_element_get_static_pad(osd, "src");
    if (!osd_src_pad)
    {
        g_print("Unable to get OSD source pad\n");
    }
    else
    {
        gst_pad_add_probe(osd_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                          osd_src_pad_buffer_probe, NULL, NULL);
        g_print("Added probe to OSD source pad\n");
        gst_object_unref(osd_src_pad);
    }
    gst_object_unref(osd);

    // Set up the bus watch
    bus = gst_pipeline_get_bus(GST_PIPELINE(data.pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, &data);
    gst_object_unref(bus);

    // Start the pipeline
    g_print("Starting pipeline...\n");
    gst_element_set_state(data.pipeline, GST_STATE_PLAYING);

    // Run the main loop
    g_print("Running...\n");
    g_main_loop_run(data.loop);

    // Clean up resources
    g_print("Stopping pipeline...\n");
    gst_element_set_state(data.pipeline, GST_STATE_NULL);

    for (int i = 0; i < 4; i++)
    {
        if (data.appsrc[i])
        {
            gst_object_unref(data.appsrc[i]);
        }
    }
    gst_object_unref(data.pipeline);
    g_source_remove(bus_watch_id);
    g_main_loop_unref(data.loop);

    // Free the JPEG data
    if (data.jpeg_data)
    {
        g_free(data.jpeg_data);
    }

    g_print("Done.\n");
    return 0;
}

As I mentioned, appsrc, jpegdec, capsfilter, and videoconvert are all plain GStreamer elements, not DeepStream elements.

The modified code can retrieve the custom meta downstream. nvmultistreamtiler or nvstreamdemux is a MUST in a DeepStream pipeline.
meta_test.c (17.5 KB)

Do I understand you correctly? Even if I add metadata before the appsrc push, the metadata is lost as soon as it passes through non-DeepStream elements; since the metadata is re-attached later by nvstreammux, it can only be retrieved with probes on DeepStream elements downstream of nvstreammux.
Thank you for modifying the code. I understand your intention, but there is a problem: when I change the image path and the model to my own, the code crashes with a Segmentation fault (core dumped). I would like to know why the pipeline always segfaults, sometimes partway through a run. Is the source I am pushing inappropriate?

Can you provide your model, your nvinfer configuration (including the customized postprocessing code), and your image file?

model.zip (33.3 MB)

Hi, the TensorRT engine file is hardware-specific; please provide the original ONNX model.

Where is the “NvDsInferParseYolo”?

If the segmentation fault happens only after you replace the model, your customized postprocessing is the most likely cause; please debug your customized postprocessing.

parse-bbox-func-name=NvDsInferParseYolo
Sorry, I don't know which NvDsInferParseYolo you are talking about. Is it in the config file? That is not my custom code; the official example is written this way. The code I used is the code you sent me earlier; I only changed the model file and the image file to my own. Is the post-processing you mentioned a probe? I only extract metadata in a probe, which is also from the code you sent me earlier; you can see it above. Thank you!
plane_ship_yolo11s_best.pt.zip (31.0 MB)

That parser is not provided by NVIDIA either. Please get its original code and debug it yourself.

I’ve checked the image file; it is OK.