• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) R32 Revision: 5.0 GCID: 25531747 Board: t186ref
• TensorRT Version 7.1.3 + CUDA 10.2
• Issue Type( questions, new requirements, bugs) Question — the TrafficCamNet model doesn’t do a great job recognizing cars in slightly off-angle images.
• How to reproduce the issue ? (This is for bugs. Including which sample app is used, the configuration file content, the command line used and other details for reproducing) please see below
• Requirement details( This is for new requirement. Including the module name — for which plugin or for which sample application — and the function description) please see below
Hi Deepstream team,
Please see the attached image for how the camera is positioned to take the pictures we run inference on.
When I rotate the camera a bit for testing purposes (in real life, I cannot), it does a great job finding the car. So I was curious whether there is a way to rotate the input before inference. I read that
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor") can rotate images (https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/accelerated_gstreamer.html#wwpID0E0VH0HA), but in my pipeline nvvidconv sits after the inference is run, and only fixed 90°/180° rotations and flips are available.
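For reference, what I tried is placing the converter upstream of the inference element. A hedged gst-launch sketch of that arrangement (element names per the DeepStream samples; the config-file path and flip-method value are placeholders — flip-method only offers fixed steps such as 90° or 180°, not arbitrary angles):

```shell
# Sketch only: rotate before inference by putting nvvideoconvert
# upstream of nvstreammux/nvinfer. flip-method is limited to fixed
# steps (e.g. 2 = rotate 180°), which is the limitation described above.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! \
  nvvideoconvert flip-method=2 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary_trafficcamnet.txt ! \
  nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink
```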
If I am looking to rotate by roughly 10–20 degrees, is there a way to do it before running inference? Eventually, as I collect more images, I will use TLT to retrain the model on the images I want, but since I currently don’t have many images, I wanted to see if there is a way to simply feed in a rotated image.
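To make the idea concrete, this is the kind of arbitrary-angle pre-rotation I have in mind, sketched in plain NumPy (the rotate_frame helper is hypothetical, not a DeepStream API; a real pipeline would use something like cv2.warpAffine for speed and feed the result in via appsrc):

```python
import numpy as np

def rotate_frame(frame, angle_deg):
    """Rotate an HxW or HxWxC frame about its center by angle_deg
    (clockwise in image coordinates, y axis pointing down), using
    nearest-neighbor inverse mapping. Output pixels that map outside
    the source are filled with zeros (black)."""
    h, w = frame.shape[:2]
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Inverse mapping: for each output pixel, find its source location.
    src_x = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    src_y = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.round(src_x).astype(int)
    sy = np.round(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(frame)
    out[valid] = frame[sy[valid], sx[valid]]
    return out
```

A 10–20 degree rotation like this, applied to each frame before it reaches nvinfer, is all I am after until I have enough images to retrain with TLT.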