Would a rotated sensor increase detection latency?

Please provide complete information as applicable to your setup.

  • Jetson AGX Xavier (bare metal, no container)
  • DeepStream 5.0
  • Jetpack 4.5

Hi,

I have more of a philosophical question than a demonstrable problem…

I’m hoping those further up the learning curve can help answer a question I’m being pressed on.

I’m just embarking on a video analysis challenge, with requirements to operate at high frame rates and low latency. There appear to be zero-copy pixel data paths from camera sensor to memory (e.g. for camera interfaces like PCIe) which can fulfill the latency constraint (1 frame delay, max, sensor to GPU memory). So far, so good. :-)

Now I’m told the camera sensor may be installed rotated 90 degrees. How worried should I be about this?

I know GPUs can rotate images at pixel-fill rates, which is pretty darn quick. But would this necessarily imply a buffer copy (i.e. harmful-to-latency double-buffering) before frames flow into the detection code?

Alternatively, perhaps we should run analysis on the rotated image, and then rotate the image (and the bounding boxes) back upright at output rendering. Would we need to train with rotated images, or do the algorithms confidently accept that objects may be lying on their side?
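
For concreteness, here’s roughly the coordinate mapping I have in mind for that second option. This is just an illustration of my mental model; the BBox struct is a stand-in, not a DeepStream type, and it assumes the captured frame needs a 90-degree clockwise rotation to look upright:

/* Illustration only: map a detection box from the as-captured (rotated)
 * frame into upright coordinates, assuming the captured frame must be
 * turned 90 degrees clockwise to look upright. BBox is a stand-in type. */
typedef struct { float left, top, right, bottom; } BBox;

/* src_height is the height of the as-captured frame. */
static BBox bbox_rotate_90_cw(BBox b, float src_height)
{
  BBox out;
  out.left   = src_height - b.bottom;  /* new columns come from old rows   */
  out.top    = b.left;                 /* new rows come from old columns   */
  out.right  = src_height - b.top;
  out.bottom = b.right;
  return out;
}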

I’m sorry not to be more specific at this point, but I’m hoping to get a little helpful advice, or assurance, from anyone with more experience and/or more imagination…

Thanks!

DeepStream has a GPU-based API for rotation.
NVIDIA DeepStream SDK API Reference: Main Page

Please refer to /opt/nvidia/deepstream/deepstream-5.0/sources/includes/nvbufsurftransform.h

typedef enum
{
  /** Specifies no video flip. */
  NvBufSurfTransform_None,
  /** Specifies rotating 90 degrees clockwise. */
  NvBufSurfTransform_Rotate90,
  /** Specifies rotating 180 degree clockwise. */
  NvBufSurfTransform_Rotate180,
  /** Specifies rotating 270 degree clockwise. */
  NvBufSurfTransform_Rotate270,
  /** Specifies video flip with respect to the X-axis. */
  NvBufSurfTransform_FlipX,
  /** Specifies video flip with respect to the Y-axis. */
  NvBufSurfTransform_FlipY,
  /** Specifies video flip transpose. */
  NvBufSurfTransform_Transpose,
  /** Specifies video flip inverse transpose. */
  NvBufSurfTransform_InvTranspose,
} NvBufSurfTransform_Flip;
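
For context, a minimal sketch of how this could be wired up in C (not a tested snippet; it assumes src and dst are NvBufSurface buffers you have already allocated, with the destination sized to the swapped dimensions):

/* Sketch only: rotate one frame 90 degrees clockwise with the
 * NvBufSurfTransform API. Assumes src and dst are pre-allocated
 * NvBufSurface buffers (a 1280x720 src needs a 720x1280 dst). */
#include "nvbufsurface.h"
#include "nvbufsurftransform.h"

NvBufSurfTransform_Error rotate_frame_90(NvBufSurface *src, NvBufSurface *dst)
{
  NvBufSurfTransformParams params = {0};

  /* Request only the flip/rotate stage; no crop rectangles, default filter. */
  params.transform_flag = NVBUFSURF_TRANSFORM_FLIP;
  params.transform_flip = NvBufSurfTransform_Rotate90;

  return NvBufSurfTransform(src, dst, &params);
}

As far as I can tell, NvBufSurfTransformSetSessionParams also lets you select the compute backend (GPU or VIC on Jetson), which may be worth benchmarking against your latency budget.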

Thanks for this vector! I’ll study this facility, and try to benchmark as well. If it yields a negligible impact on latency, then it’s a welcome solution.

Hello, how can I apply this API in a Python DeepStream program?

@pilotfdd There is no Python binding for the NvBufSurf transformation APIs.
New topic for new questions, please!

To close the loop: the answer to the question posed by the topic title is: no.

Quick testing shows the video-pipeline delay introduced by frame rotation (of 720p video) is around 13 microseconds… this is a vanishingly small fraction of the time we’re allowed to process a frame… hurrah! Thanks again.