• Hardware Platform (Jetson / GPU) T4 (dGPU)
• DeepStream Version nvcr.io/nvidia/deepstream:6.0.1-triton
• NVIDIA GPU Driver Version 470.63.01
We have connected two detectors back to back using the nvinfer plugin. Both the pgie and the sgie have output-tensor-meta=1 set and distinct gie-unique-id values. They work as expected with network-type=0 (object detector) and a post-processing shared library attached to each.
We want to do the post-processing in Python probes attached after the pgie and the sgie instead, so we have set network-type=100, which skips the built-in post-processing and lets us handle it in the attached probe.
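For reference, a minimal sketch of the nvinfer config properties we mean (the property names follow the standard Gst-nvinfer config file format; the id values here are illustrative):

```
# pgie config (illustrative)
[property]
gie-unique-id=1
output-tensor-meta=1
# 100 = "other": nvinfer attaches raw output tensors and skips built-in post-processing
network-type=100

# sgie config (illustrative)
[property]
gie-unique-id=2
operate-on-gie-id=1
output-tensor-meta=1
network-type=100
```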
- Link the pgie with a Python probe attached that does the post-processing.
- Fill the NvDsObjectMeta structures inside the probe.
The above steps give correct output, but then:
- Link the sgie (also with network-type=100) with its own Python probe in the pipeline. The sgie does not seem to run any inference.
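To make the probe step concrete, here is a minimal, hypothetical sketch of the kind of post-processing we do in the pgie probe. The flat tensor layout (x1, y1, x2, y2, score per detection) and the function name are assumptions for illustration; in the real probe the tensor comes from NvDsInferTensorMeta via pyds and the resulting boxes are written into NvDsObjectMeta.

```python
# Hypothetical decode step for the pgie Python probe.
# Assumption: the raw output tensor is a flat list of
# (x1, y1, x2, y2, score) tuples, one per detection.

def decode_detections(flat_output, num_dets, threshold=0.5):
    """Turn a flat detector output into (left, top, width, height) boxes."""
    boxes = []
    for i in range(num_dets):
        x1, y1, x2, y2, score = flat_output[i * 5:(i + 1) * 5]
        if score >= threshold:
            # Convert corner coordinates to the left/top/width/height
            # layout used by NvDsObjectMeta.rect_params.
            boxes.append((x1, y1, x2 - x1, y2 - y1))
    return boxes
```

Each returned box is then used to fill a fresh NvDsObjectMeta acquired from the batch pool.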
The issue only occurs with:
network-type=100 → use Python post-processing code.
It works fine with:
network-type=0 → use the C++ post-processing shared library.
Could you please help us figure out how to solve this?