Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
My pipeline is shown in the attached PDF file, and the current display output is shown in the figure. I would like to display the inference results of the two models separately. How should I modify my pipeline?
pipelinetest.pdf (40.5 KB)
Sorry, I attached the wrong picture; that picture actually shows the result I want. My current pipeline (posted above) shows everything in one window, and the picture below shows the current situation.
If you want to display the inference results of the two models separately, you need to add nvstreamdemux plugins after your tee plugin. You can refer to our demo deepstream_parallel_inference_app.
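As a rough illustration (not your exact pipeline, which I can only see from the PDF), the sketch below follows the same branch structure as that sample: a tee, then per branch an nvstreamdemux feeding a fresh nvstreammux, so each model works on its own batch meta and renders into its own window. The single H.264 file source, the resolutions, and the config paths model_a_config.txt / model_b_config.txt are placeholders for your own setup:

```c
/* Minimal sketch of the tee -> nvstreamdemux -> per-branch nvstreammux
 * layout, built with gst_parse_launch for readability.
 * sample.h264, model_a_config.txt and model_b_config.txt are placeholders. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      /* single source batched by the first nvstreammux, then split with tee */
      "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
      "nvstreammux name=mux batch-size=1 width=1280 height=720 ! tee name=t "
      /* branch A: demux, re-batch, run model A, draw and show in its own window */
      "t. ! queue ! nvstreamdemux name=da "
      "da.src_0 ! queue ! mux_a.sink_0 "
      "nvstreammux name=mux_a batch-size=1 width=1280 height=720 ! "
      "nvinfer config-file-path=model_a_config.txt ! "
      "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink "
      /* branch B: same structure with model B */
      "t. ! queue ! nvstreamdemux name=db "
      "db.src_0 ! queue ! mux_b.sink_0 "
      "nvstreammux name=mux_b batch-size=1 width=1280 height=720 ! "
      "nvinfer config-file-path=model_b_config.txt ! "
      "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink",
      &err);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", err ? err->message : "unknown");
    g_clear_error (&err);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* run until EOS or error */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```

The important part is that each branch re-batches after the demux; if both branches run nvinfer directly on the same teed buffer, both models typically attach their results to the same batch meta, so the two displays cannot be separated.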
Q:
1) I want to build the pipeline myself, and the deepstream_parallel_inference_app demo is quite complex for me. Is there anything wrong with my pipeline?
2) How should I configure the DeepStream pipeline to prevent crashes caused by callback functions from different probes accessing the same memory at the same time? I added two probes that are separated by only one element. Both probe callbacks operate on the inference results, which occasionally causes the pipeline to crash.
Because the metadata is automatically managed with a buffer pool, in theory you don't need to clear it yourself.
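If the occasional crash comes from the two probe callbacks walking the same metadata at the same time (for example, if the probes sit on branches with queues between them), one simple way to rule that out is to serialize the metadata access in both callbacks with a shared mutex. This is only a sketch with placeholder names (probe_a_cb, meta_lock), not your actual probe code:

```c
/* Sketch: two pad probes that both read the same inference metadata.
 * A single process-wide GMutex serializes access so the callbacks
 * never walk or modify the batch meta at the same time.
 * Probe names and attachment points are placeholders. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GMutex meta_lock;   /* shared by both probe callbacks */

static GstPadProbeReturn
probe_a_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  g_mutex_lock (&meta_lock);
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* read the detection here; do not remove or free the meta,
       * it is owned by the meta pool and released with the buffer */
      (void) obj_meta;
    }
  }
  g_mutex_unlock (&meta_lock);
  return GST_PAD_PROBE_OK;
}

/* probe_b_cb would take the same meta_lock around its own metadata walk. */
```

If the crash still happens with both callbacks serialized, the problem is more likely the clearing or removal of pool-owned metadata itself rather than concurrent access.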
Can you explain why it is necessary to add those two probes?
The clearing operation is necessary because I need to obtain the analysis results at a specific time, and restarting the pipeline each time is too slow, so I use this method instead.
This cannot fundamentally solve my problem, as we have a hard requirement to get results at specific moments. Could you please suggest a method to address the segmentation faults?
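One possible direction, sketched below with hypothetical names (BoxCopy, latest_boxes, snapshot_lock): instead of clearing the pool-owned metadata, copy only the fields you need out of the object meta inside the probe, under the same lock, and read that copy at the specific moment you need the results. The buffer and its metadata then flow downstream untouched, so nothing pool-owned is freed early.

```c
/* Sketch: snapshot the results you need instead of clearing pool-owned
 * metadata. BoxCopy, latest_boxes and snapshot_lock are hypothetical names. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

typedef struct {
  gfloat left, top, width, height;
  gint class_id;
} BoxCopy;

static GMutex snapshot_lock;
static GArray *latest_boxes;   /* array of BoxCopy, refreshed per buffer */

static GstPadProbeReturn
snapshot_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_PAD_PROBE_INFO_BUFFER (info));
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  g_mutex_lock (&snapshot_lock);
  if (!latest_boxes)
    latest_boxes = g_array_new (FALSE, FALSE, sizeof (BoxCopy));
  g_array_set_size (latest_boxes, 0);

  for (NvDsMetaList *lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
    NvDsFrameMeta *fm = (NvDsFrameMeta *) lf->data;
    for (NvDsMetaList *lo = fm->obj_meta_list; lo; lo = lo->next) {
      NvDsObjectMeta *om = (NvDsObjectMeta *) lo->data;
      BoxCopy b = { om->rect_params.left, om->rect_params.top,
                    om->rect_params.width, om->rect_params.height,
                    om->class_id };
      g_array_append_val (latest_boxes, b);
    }
  }
  g_mutex_unlock (&snapshot_lock);
  /* The buffer and its metadata flow on untouched; at the moment you
   * need the results, read latest_boxes under snapshot_lock. */
  return GST_PAD_PROBE_OK;
}
```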
Thank you very, very much. Is there any way to thank you? Thank you for all your help during this period; your responses have supported our development journey.