Question about rotating an input frame

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) R32 Revision: 5.0 GCID: 25531747 Board: t186ref
• TensorRT Version 7.1.3 + CUDA 10.2

• Issue Type( questions, new requirements, bugs) The TrafficCamNet model doesn’t do a fantastic job recognizing slightly off-angle images.

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) please see below

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) please see below

Hi Deepstream team,

Please see the attached image for how the camera is positioned to capture the pictures we run inference on.

When I rotate the camera a bit for testing (in real life, I cannot), it does a great job finding the car. So I was curious whether there is a way to rotate the input. I read that nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor") can rotate images (https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/accelerated_gstreamer.html#wwpID0E0VH0HA), but nvvidconv sits in my pipeline after inference has already run, and only coarse rotation options (90-degree increments and flips) are available.

If I am looking to rotate by something like 10-20 degrees, is there a way to do it before running inference? Eventually, as I collect more images, I will run TLT to train the model on the images I want, but since I currently don’t have many images, I wanted to see if there is a way to simply feed in a rotated image.

nvvidconv is not a DeepStream plugin; it cannot work with DeepStream.
DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

There is currently no way in DeepStream to rotate by 10-20 degrees.

@Fiona.Chen

What do you mean by “nvvidconv is not a DeepStream plugin”?
https://docs.nvidia.com/metropolis/deepstream/5.0DP/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.07.html#

Isn’t this it? In my code above, I named the variable nvvidconv (following other examples), but it is created with Gst.ElementFactory.make("nvvideoconvert", "convertor").

I am just trying to double-confirm that I am not messing anything up.

Also, if DeepStream doesn’t offer 10-20 degree rotation, do you have a recommendation for my setup? (As I mentioned previously, my goal is to use TLT once I have more images.)

nvvideoconvert is a DeepStream plugin, while nvvidconv is not.
nvvideoconvert does not support rotation. Gst-nvvideoconvert — DeepStream 6.3 Release documentation

The suggestion is to train your model with images in the same orientation as your camera produces.

Wouldn’t in-place modification of the buffer with NvBufSurfTransform in a pad probe work?

@Blard.Theophile

It looks like NvBufSurfTransform only supports 90, 180, and 270 degrees (plus flips), not an arbitrary angle:
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/DeepStream_Development_Guide/baggage/group__ee__buf__surf__transform.html#ga4d079d4167761076b885cde9a1a10f3f


You’re right… Then you can try to load the NvBufSurface data into a cv::Mat in READWRITE mode. This is something I have already done successfully (based on dsexample) for drawing on the buffer. However, I don’t know whether it becomes a performance bottleneck.
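To illustrate the idea, here is a minimal CPU-side sketch of an arbitrary-angle, in-place rotation, assuming you have already obtained a writable array view of the frame in a probe upstream of nvinfer (for example via pyds.get_nvds_buf_surface() in the Python bindings, or by wrapping the mapped surface in a cv::Mat as described above). The rotate_inplace helper and the nearest-neighbour mapping are illustrative only, not a DeepStream API, and a per-frame CPU copy like this may well be the bottleneck mentioned:

```python
import numpy as np

def rotate_inplace(frame: np.ndarray, angle_deg: float) -> None:
    """Rotate an HxWxC frame about its center by angle_deg,
    nearest-neighbour, writing the result back into the same buffer.

    In a real pad probe, `frame` would be the writable view of the
    NvBufSurface (e.g. from pyds.get_nvds_buf_surface, which typically
    requires an RGBA surface on Jetson).
    """
    h, w = frame.shape[:2]
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    # Inverse mapping: for each output pixel, find its source pixel.
    ys, xs = np.indices((h, w))
    src_x = cos_t * (xs - cx) + sin_t * (ys - cy) + cx
    src_y = -sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    src_x = np.rint(src_x).astype(int)
    src_y = np.rint(src_y).astype(int)

    # Pixels whose source falls outside the frame are filled with black.
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out = np.zeros_like(frame)
    out[valid] = frame[src_y[valid], src_x[valid]]
    frame[:] = out  # write back into the mapped buffer
```

For a small 10-20 degree correction this also crops the frame corners, so in practice you may want to verify detection quality on the black-bordered output before committing to this approach.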

Are there any examples of using NvBufSurfTransform_Rotate90 on a surface in Python? I can’t find any!

There is no Python binding for it.