Can I add dummy PGIE to DeepStream?

Please provide complete information as applicable to your setup.

• Hardware Platform Jetson AGX Orin
• DeepStream Version 7.1
• JetPack Version 6.1
• TensorRT Version 10.3

Hi Nvidia

I want to run a different nvinfer model for each of 4 cameras. Is it possible to add a dummy infer as the PGIE, and have each camera run a different SGIE? Thank you.

cam0,1,2,3 → nvstreammux → dummy PGIE → SGIE0,1,2,3 → tiler → sink

How many PGIEs and SGIEs do you have? What are the models used for? Could you elaborate on "dummy infer" — do you want the PGIE to never do inference? If so, you can set the `interval` property to a large value.
Regarding "each camera run different SGIE": you can add an nvstreamdemux plugin after the PGIE to split the sources, then send each source's data to its own SGIE.
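A rough sketch of that layout, based on the user's camera-to-model mapping (pad names such as `src_0`/`sink_0` follow the standard nvstreamdemux/nvstreammux conventions; this is an illustration, not a tested pipeline, and each demuxed branch would need its own nvinfer instance):

```
cam0,1,2,3 → nvstreammux → pgie (interval = large) → nvstreamdemux
  nvstreamdemux.src_0 → sgie0 (object detect) → streammux2.sink_0
  nvstreamdemux.src_1 → sgie0 (object detect) → streammux2.sink_1
  nvstreamdemux.src_2 → sgie0 (object detect) → streammux2.sink_2
  nvstreamdemux.src_3 → sgie1 (face detect)   → streammux2.sink_3
streammux2 → tiler → sink
```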

Hi NVIDIA Team

How many PGIEs and SGIEs do you have? What are the models used for?

I have one PGIE and two SGIEs.

Could you elaborate on "dummy infer"? Do you want the PGIE to never do inference?

Yes, I'd like the PGIE to never do inference. I have 4 cameras: camera0~camera2 use sgie0 (object detection, from DeepStream), and camera3 uses sgie1 for face detection (from YOLO).

Or maybe I can use multiple nvstreammux elements directly: streammux0 for camera0~2 running pgie0 (object detection), streammux1 for camera3 running pgie1 (face detection), then merge both branches through another streammux and finally send them through the tiler?

Set the `interval` property to 2^32-1 for the primary GIE; then it will effectively never run inference. You can also disable it entirely — the pipeline still runs without a model.
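For reference, `interval` can also be set in the nvinfer configuration file under the `[property]` group. A minimal fragment (4294967295 is 2^32-1, as suggested above; check your DeepStream version's documented range for this key, since it may be capped lower):

```
[property]
# Number of consecutive batches to skip between inference calls.
# A very large value effectively disables PGIE inference.
interval=4294967295
```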

Yes. You can also use an "nvstreammux → nvstreamdemux → nvstreammux" pipeline. Please refer to the ready-made sample deepstream_parallel_inference_app.
