• Hardware Platform: GPU
• DeepStream Version: 5.0.0
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
Hi, I want to get something done that has been discussed quite a few times in this forum but was never really answered, or the provided answer targeted an outdated DeepStream version.
My pipeline looks something like this: pgie (detector, outputs bboxes) → sgie1 (network-type=100, output-tensor-meta=1; outputs landmarks, which I parse and attach to the respective bbox meta) → sgie2
sgie2 expects a bbox that is aligned with respect to the landmarks from sgie1’s output. So the question is: how can that be realized?
I found the following topics covering this:
https://forums.developer.nvidia.com/t/image-pre-processing-between-pgie-and-sgie/111350/6
Here it is suggested to do an affine transformation on every bbox. I have no idea how this should be realized within the DeepStream API: a bbox is defined by its upper-left corner plus width and height, i.e. it is a rectangle, but applying an affine transformation to a rectangle does not necessarily yield a rectangle.
Furthermore, the piece of code that is suggested to be changed in /opt/nvidia/deepstream/deepstream-4.0/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp is no longer present in my DeepStream version.
Another suggestion is to follow the dsexample plugin and use the NPP API. However, I didn’t manage to find any example of how to use the NPP API. Additionally, it would be nice to see an example of how to connect the DeepStream and NPP APIs. Maybe someone can point me to a resource.
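For reference, this is roughly what I imagine the DS-to-NPP handoff could look like, following the gst-dsexample pattern of reading the frame pointer out of the NvBufSurface. This is an untested sketch; `alignFace`, `dstDev`, `dstStep` etc. are my own names, I assume an RGB surface already mapped on the device, and the 2x3 matrix `coeffs` would come from the sgie1 landmarks. Error handling is omitted:

```cpp
// Untested sketch: warp a device-resident RGB frame with NPP so that the
// destination buffer holds a landmark-aligned crop for sgie2.
#include <nppi_geometry_transforms.h>
#include "nvbufsurface.h"

void alignFace(NvBufSurface *surf, int idx,
               Npp8u *dstDev, int dstStep, int dstW, int dstH,
               const double coeffs[2][3])
{
    NvBufSurfaceParams &p = surf->surfaceList[idx];

    NppiSize srcSize = { (int)p.width, (int)p.height };
    NppiRect srcRoi  = { 0, 0, (int)p.width, (int)p.height };
    NppiRect dstRoi  = { 0, 0, dstW, dstH };

    // Apply the affine alignment estimated from the sgie1 landmarks.
    nppiWarpAffine_8u_C3R((const Npp8u *)p.dataPtr, srcSize, (int)p.pitch,
                          srcRoi, dstDev, dstStep, dstRoi,
                          coeffs, NPPI_INTER_LINEAR);
}
```

Is something along these lines what the dsexample suggestion has in mind, or is there a more idiomatic way to feed the aligned crop back into sgie2?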
I appreciate any comments.