Dual-channel inference: merging the output results and displaying them all in one video window

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
My pipeline is reflected in the uploaded PDF file, and the display effect is as shown in the figure. I hope to display the inference results of the two models separately. How should I modify my pipeline?
pipelinetest.pdf (40.5 KB)

The left and right parts currently show different inference results. Do you want to show them in two different video windows?

Oh, I attached the wrong picture; that picture actually shows my desired result. My pipeline currently shows everything in one window (I attached the pipeline above). The current situation is shown in the following picture.

If you want to display the inference results of the two models separately, you need to add nvstreamdemux plugins after your tee plugin. You can refer to our demo deepstream_parallel_inference_app.
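For reference, the two-branch layout might look roughly like the sketch below. This is only an illustration; the element names, pad names, and exact placement of nvstreamdemux are assumptions on my part, so please check deepstream_parallel_inference_app for the real graph:

```
source(s) ─ nvstreammux ─ tee ─┬─ queue ─ nvinfer (model A) ─ nvstreamdemux ─ src_0 ─ nvvideoconvert ─ nvdsosd ─ sink A
                               └─ queue ─ nvinfer (model B) ─ nvstreamdemux ─ src_0 ─ nvvideoconvert ─ nvdsosd ─ sink B
```

Each tee branch then renders its own model's results into its own window instead of sharing one OSD.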


Q:
1) I want to build the pipeline myself, and the deepstream_parallel_inference_app demo is quite complex for me. Is there anything wrong with my pipeline?
2) How should the DeepStream pipeline be configured to prevent crashes caused by callback functions from different probes processing the same memory simultaneously? I added two probes that are separated by only one element. The callback functions of both probes operate on the inference results, which occasionally causes the pipeline to crash.

You need to add nvstreamdemux plugins after your tee plugin. Please refer to our demo code and the pipeline.

The two nvinfer plugins do not use the same buffer. The crash may be caused by something else; this needs further analysis.

Perhaps my description of the situation was not detailed enough. The real situation is that the pipeline I am currently running is the official deepstream_parallel_inference_app code provided by NVIDIA. The pipeline visualization can be found in the PDF uploaded this time. I added the following probe and callback function on the src pad of the nvdsmetamux plugin:

GstPadProbeReturn
yolo_seg_gie_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info,
                                  gpointer u_data)
{
  gchar *msg = NULL;
  GstBuffer *buf = (GstBuffer *)info->data;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsMetaList *l_user = NULL;
  NvDsFrameMeta *frameMetaSource0 = nullptr;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  // if (batch_meta == NULL) {
  //   PrintW("yolo_seg_gie_src_pad:: no NvDsBatchMeta \n");
  //   return GST_PAD_PROBE_OK;
  // }
  // if (!gst_buffer_is_writable(buf))
  // {
  //   PrintW("yolo_seg_gie_src_pad:: buffer not writeable\n");
  //   return GST_PAD_PROBE_OK;
  // }
  // TODO: ensure there is only one frame each for source_id 0 and source_id 1
  // guint length = g_list_length(frameMetaSource0->obj_meta_list);
  // g_print("frameMetaSource0:obj_meta_list:%d\t\n", length);
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    if (frame_meta->pad_index == 0) {
      frameMetaSource0 = frame_meta;
      NvDsBatchMeta *bmeta = frame_meta->base_meta.batch_meta;
      break;
    }
  }

  if (!frameMetaSource0) {
    return GST_PAD_PROBE_OK;
  }

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    if (frame_meta->pad_index != 0) {
      if (frame_meta->frame_num % 100 == 0) {
        PrintW("Processing frame number = %d\t\n", frame_meta->frame_num);
      }
      g_mutex_lock(&ZHYPipeline::mutex);
      nvds_copy_obj_meta_list(frameMetaSource0->obj_meta_list, frame_meta);
      g_mutex_unlock(&ZHYPipeline::mutex);
    }
    PrintW("yolo_seg_gie_src_pad_buffer_probe finished\t\n");
  }
  return GST_PAD_PROBE_OK;
}

At the same time, I added the following probe and callback function on the sink pad of the OSD plugin; it is only invoked when a communication device sends a signal:

GstPadProbeReturn
remove_meta_probe_cb(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *)info->data;
  NvDsMetaList *l_frame = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  if (batch_meta == NULL) {
    PrintW("no NvDsBatchMeta \n");
    return GST_PAD_PROBE_OK;
  }
  if (!gst_buffer_is_writable(buf))
  {
    PrintW("buffer not writeable\n");
    return GST_PAD_PROBE_OK;
  }
  g_mutex_lock(&ZHYPipeline::mutex);
  PrintW("remove_meta_probe_cb------------------------------0\t\n");
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    for (NvDsObjectMetaList *obj_l = frame_meta->obj_meta_list; obj_l; obj_l = obj_l->next) {
      nvds_clear_obj_meta_list(frame_meta, obj_l);
    }
    for (NvDisplayMetaList *display_l = frame_meta->display_meta_list; display_l; display_l = display_l->next) {
      nvds_clear_display_meta_list(frame_meta, display_l);
    }
  }
  PrintW("remove_meta_probe_cb------------------------------1111\t\n");
  g_mutex_unlock(&ZHYPipeline::mutex);
  return GST_PAD_PROBE_OK;
}

When I run the pipeline to analyze a video and the above signal is sent suddenly, my program sometimes hits segmentation faults and bus errors. I suspect there may be memory-related errors in the metadata attached to the GstBuffer. Could you please take a look?
pipeline_zhengque.pdf (51.6 KB)

Could you attach the code stack of the crash?

$ gdb --args <your_command>
(gdb) r
... after the crash ...
(gdb) bt