Rotated bounding boxes as input for custom SGIE

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
question, new requirements
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,

I have a PGIE detector producing bounding boxes from a fisheye camera. Due to the lens distortion near the edges of the image, I would like to “rotate” each detected object by n degrees before the image of that object is passed on to a custom SGIE network.
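To illustrate what I mean: the angle I want to rotate by depends only on where the detection sits relative to the image centre (objects near the edge of a fisheye image are oriented radially). A minimal sketch of that calculation — purely illustrative, my actual calibration differs:

```python
import math

def rotation_angle_deg(bbox_cx, bbox_cy, img_w, img_h):
    """Angle (degrees) that would bring a radially-oriented object
    upright in a centred fisheye image. Illustrative only."""
    dx = bbox_cx - img_w / 2.0
    dy = bbox_cy - img_h / 2.0
    # atan2 gives the angle of the radial direction from the image
    # centre; rotating by its negative brings the object upright.
    return -math.degrees(math.atan2(dx, -dy))
```

An object straight above the centre needs no rotation; one at the right edge needs roughly -90 degrees.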

We can use NPP (NVIDIA 2D Image And Signal Performance Primitives (NPP): Rotate) to manipulate the images and produce a rotated image - however, I am curious how I can use the rotated image as input to an SGIE. While it is possible to manually produce a tensor input for a PGIE by setting input-tensor-from-meta=1, this option appears to be unavailable for an SGIE?
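For reference, the transform I have in mind is just an in-plane rotation of each object crop about its centre. Here is a CPU stand-in (NumPy, nearest-neighbour inverse mapping) for what NPP's Rotate would do on the GPU — illustrative only, not the DeepStream integration I am asking about:

```python
import numpy as np

def rotate_crop(crop, angle_deg):
    """Rotate an HxW(xC) object crop about its centre using
    nearest-neighbour inverse mapping. CPU stand-in for NPP Rotate."""
    h, w = crop.shape[:2]
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find the source pixel.
    xs_src = cos_t * (xs - cx) + sin_t * (ys - cy) + cx
    ys_src = -sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    xs_src = np.clip(np.round(xs_src).astype(int), 0, w - 1)
    ys_src = np.clip(np.round(ys_src).astype(int), 0, h - 1)
    return crop[ys_src, xs_src]
```

A 0-degree rotation returns the crop unchanged; 180 degrees flips it in both axes.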

Is there any way to skip SGIE preprocessing and attach a raw tensor as input for each object meta, or to override the input used? As briefly discussed in (Rotated Boundingboxes) Infer live video on the jetson Platform with onnx model from the ODTK - #6 by mchi, it is suggested to attach a probe on the SGIE sink pad - but that does not explain how to use the rotated image as the actual input for an object meta in a batch. @mchi

I understand that rotated bounding boxes are currently not officially supported - thus I am looking for a way to “force” an SGIE to use a specific input instead of its internal preprocessing.

Thanks in advance,

/M

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Please refer to the Gst-nvdsvideotemplate plugin, which provides a custom library hooking interface for processing single or batched video frames. A custom library implementation may apply algorithms to transform or process the input buffers depending on the use case:
Gst-nvdsvideotemplate — DeepStream 6.1.1 Release documentation.
There are some samples:
deepstream_tao_apps/apps/tao_others/deepstream-emotion-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub