Hello, I was wondering if it is possible to have multiple primary inference engines in parallel?
My goal is to pass each frame through two different inference engines and then process their outputs in a separate plugin.
What do you mean by “in parallel”? Can you describe your pipeline?
What are your engines? Are they both detectors? For a detector, the outputs are objects with bboxes in the metadata. The “gie-unique-id” field in the metadata identifies which inference engine produced each output. Can you elaborate on what “process their outputs in a separate plugin” means?
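For reference, “gie-unique-id” is assigned per engine in the deepstream-app config file; a minimal sketch (the group names are standard, but the config-file paths below are placeholders):

```ini
[primary-gie]
enable=1
# This id shows up in the metadata of every object this engine emits
gie-unique-id=1
config-file=detector_pgie_config.txt

[secondary-gie0]
enable=1
gie-unique-id=2
config-file=second_model_sgie_config.txt
```

Downstream plugins can then check the id on each piece of metadata to tell the two engines' outputs apart.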
@Fiona.Chen my engines are a detector, which outputs bboxes, and a network that outputs a depth map (an HxW image). I would then extract the parts of the depth map corresponding to some of the bboxes.
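The post-processing step itself is simple once both outputs are available for the same frame; here is a minimal NumPy sketch (the function name and bbox format `(left, top, width, height)` are my own assumptions, not DeepStream API):

```python
import numpy as np

def mean_depth_per_bbox(depth_map, bboxes):
    """For each (left, top, width, height) bbox, average the depth
    values inside the corresponding region of the depth map."""
    means = []
    h, w = depth_map.shape
    for left, top, bw, bh in bboxes:
        # Clamp the bbox to the image bounds before slicing.
        x0, y0 = max(0, int(left)), max(0, int(top))
        x1, y1 = min(w, int(left + bw)), min(h, int(top + bh))
        region = depth_map[y0:y1, x0:x1]
        means.append(float(region.mean()) if region.size else float("nan"))
    return means

# Toy example: a 4x4 "depth map" and one bbox covering its top-left 2x2 block.
depth = np.arange(16, dtype=np.float32).reshape(4, 4)
print(mean_depth_per_bbox(depth, [(0, 0, 2, 2)]))  # → [2.5]
```

In a real pipeline this logic would live in a probe or custom plugin that reads the detector's bboxes and the depth tensor from the frame's metadata.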
@marmikshah the sequential pipeline would work, but I would like to know if there is already a way to run the engines in parallel in deepstream-app. I am new to this and unsure how to implement anything beyond the sequential deepstream-app examples.
What do you mean by “parallel”? Even with the second pipeline suggested by @marmikshah, the two models can work at the same time: while the first PGIE works on batch n, the second PGIE may be working on batch n-3.
And what do you mean by “process their outputs in a separate plugin.”?
I doubt there is a way to do it directly in deepstream-app.
If you really need to do something like the first example I mentioned above, you could also consider running two deepstream-app instances, since it seems you just want to process the same frame in two separate ways that have nothing in common.
So, for example, if your source is a USB camera, you will first need to create an RTSP stream so that both deepstream-app instances can consume it.
So 3 pipelines in total:

```
                                 / -> deepstream-app (with your PGIE 1 and Plugin 1)
Pipeline 1 (USB camera to RTSP)
                                 \ -> deepstream-app (with your PGIE 2 and Plugin 2)
```
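Each deepstream-app instance would then point its source group at that stream; a sketch of the `[source0]` group, assuming a local RTSP server (the URL is a placeholder):

```ini
[source0]
enable=1
# type=4 selects an RTSP source in deepstream-app configs
type=4
uri=rtsp://127.0.0.1:8554/camera
num-sources=1
```

The two instances differ only in their `[primary-gie]` section and post-processing, so they can share the rest of the config.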