Dual-channel inference: merging the output results and displaying them all in one video window

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
My pipeline is shown in the attached PDF file, and the current display is shown in the figure. I want to display the inference results of the two models separately. How should I modify my pipeline?
pipelinetest.pdf (40.5 KB)

The left and right parts already show different inference results. Do you want to show them in two different video windows?

Sorry, I attached the wrong picture: that picture shows the situation I want. My current pipeline (attached above) displays everything in one window; the actual situation is shown in the following picture.

If you want to display the inference results of the two models separately, you need to add nvstreamdemux plugins after your tee plugin. You can refer to our demo deepstream_parallel_inference_app.
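As a rough gst-launch-style sketch of that layout (the URI, config file names, resolutions, and the `nv3dsink` display sink are placeholders, and the real deepstream_parallel_inference_app additionally merges metadata between branches), each tee branch gets its own nvinfer followed by an nvstreamdemux and a separate display path:

```
gst-launch-1.0 uridecodebin uri=file:///path/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! tee name=t \
  t. ! queue ! nvinfer config-file-path=model_a_config.txt ! nvstreamdemux name=d1 \
       d1.src_0 ! queue ! nvvideoconvert ! nvdsosd ! nv3dsink \
  t. ! queue ! nvinfer config-file-path=model_b_config.txt ! nvstreamdemux name=d2 \
       d2.src_0 ! queue ! nvvideoconvert ! nvdsosd ! nv3dsink
```

This is only a structural sketch of "tee, then per-branch inference, then per-branch demux/display"; consult the parallel inference demo for a working configuration.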


Q:
1) I want to create the pipeline myself, and the deepstream_parallel_inference_app demo is too complex for me. Is anything wrong with my pipeline?
2) How should I configure the DeepStream pipeline to prevent crashes caused by callback functions from different probes processing the same memory simultaneously? I added two probes separated by only one element. Both probe callbacks operate on the inference results, which occasionally causes the pipeline to crash.

You need to add nvstreamdemux plugins after your tee plugin. Please refer to our demo code and the pipeline.

The two nvinfer plugins do not use the same buffer. The crash may be caused by something else; this needs further analysis.

Could you attach the code stack of the crash?

$ gdb --args <your_command>
(gdb) r
# after the crash:
(gdb) bt

I printed the address of the detected obj_meta_list in this picture.

Because the metadata is automatically managed by a buffer pool, in theory you don't need to clear it yourself.
Can you explain why you need to add those two probes?

This clearing operation is necessary because I need to obtain analysis results at a specific time, but restarting the pipeline every time is slow, so I use this method instead.

Our latest version (DeepStream 7.1) has optimized the startup time, which is much shorter if you already have engine files. You can try upgrading.

This cannot fundamentally solve my problem, as we have strict requirements about specific moments. Could you please provide a method to address the segmentation faults?

OK. We’ll analyze this issue. Could you describe in detail what you’re doing with these two probe functions?

We recommend that you check whether some pointers in the frame_meta and obj_l are already NULL before you clear them.


Isn't a loop traversal required here? There is only one obj_meta_list in each frame, isn't there?

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    // Delete all obj_meta_list entries
    for (NvDsObjectMetaList *obj_l = frame_meta->obj_meta_list; obj_l; obj_l = obj_l->next) {
        nvds_clear_obj_meta_list(frame_meta, obj_l);
    }
}

I have modified the above code as shown below, and the problem no longer occurs.

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    obj_l = frame_meta->obj_meta_list;
    nvds_clear_obj_meta_list(frame_meta, obj_l);
    display_l = frame_meta->display_meta_list;
    nvds_clear_display_meta_list(frame_meta, display_l);
}

No. The loop traversal is implemented in the nvds_clear_obj_meta_list API.

So this is where the problem lies, isn’t it?

Thank you very, very much. Is there any way I can show my appreciation? Thank you for your help during this period; your responses have supported our development journey.

No problem. I’d be glad to help out.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.