How to remove lens distortion from input images in Deepstream?

I am trying to remove lens distortion from the input image in a DeepStream app. In OpenCV this can be done with the undistort function, given a camera matrix and distortion coefficients, but I haven’t been able to find a standard implementation of this in DeepStream. My Jetson device comes with OpenCV installed, but without CUDA support, so using cv::cuda::remap would require rebuilding OpenCV with CUDA, which is difficult and needs a lot of storage space.
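For reference, the remap tables that OpenCV's undistort builds internally (via cv2.initUndistortRectifyMap) can be sketched in plain NumPy. This is a CPU-only illustration of the Brown-Conrady distortion model, not DeepStream code, and the function name is my own:

```python
import numpy as np

def build_undistort_map(K, dist, size):
    """Build remap tables (map_x, map_y) for the Brown-Conrady model.

    K:    3x3 camera matrix
    dist: (k1, k2, p1, p2, k3) distortion coefficients
    size: (width, height) of the output image
    """
    k1, k2, p1, p2, k3 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    w, h = size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # normalized coordinates of the (undistorted) output pixels
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # radial + tangential distortion: where each output pixel
    # should be sampled from in the distorted input image
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # back to pixel coordinates
    map_x = (x_d * fx + cx).astype(np.float32)
    map_y = (y_d * fy + cy).astype(np.float32)
    return map_x, map_y
```

With all coefficients zero the map is the identity; with real calibration values the tables would be passed to a remap/warp step (cv2.remap on CPU, or a GPU equivalent).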

I have read that the gst-nvdewarper plugin can be used to undistort 360-degree videos, but I am not sure whether it can be used for regular images. I have also tried the cameraundistort element from gst-plugins-bad, but I keep getting the error “No such element or plugin ‘cameraundistort’”.

Can anyone provide guidance on how to remove lens distortion from the input image in Deepstream? I would appreciate any help or advice on this topic. Thank you!

Please refer to Dewarp Deepstream 6 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums for the gst-nvdewarper mapping to opencv camera calibration.

Thank you very much. I am now able to use the deepstream-dewarper-app to undistort a video file.

Is there an easy way to integrate the dewarper into the deepstream-app?
Something like this in the deepstream_app_config.txt:


Or do I have to integrate the library somehow?
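For illustration, a hypothetical dewarper section in deepstream_app_config.txt might look like the sketch below. To be clear, the [dewarper] group name and its keys here are invented for the sake of the question; they are not documented deepstream-app config keys:

```
[dewarper]
enable=1
config-file=config_dewarper.txt
```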


I am also able to undistort the video using GStreamer directly. Is there any way I can build this into the deepstream-app? I can’t find anything about it in the documentation.

gst-launch-1.0 filesrc location=video.mkv ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! nvdewarper config-file=config_dewarper.txt ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 num-surfaces-per-frame=1 ! nvmultistreamtiler ! nv3dsink
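The pipeline above references config_dewarper.txt. A minimal sketch of such a file for plain (non-360) lens-distortion removal could look like the following; the key names follow the Gst-nvdewarper configuration format, but the projection-type enum value and all numeric values are placeholders that must be replaced with your own calibration results and checked against the dewarper documentation for your DeepStream version:

```
[property]
output-width=1280
output-height=720

[surface0]
# PerspectivePerspective projection is the mode typically used for plain
# lens-distortion removal; verify the enum value in the Gst-nvdewarper docs
projection-type=3
width=1280
height=720
# focal length in pixels (fx from the camera matrix) - placeholder value
focal-length=800
# distortion coefficients from calibration, semicolon-separated
# (placeholder zeros; check the expected coefficient order in the docs)
distortion=0;0;0;0;0
```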

No, currently the nvdewarper-related code is not included in the deepstream-app source code; you may need to add it yourself. DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

I have installed DeepStream 6.0 and cannot find the file: …/deepstream_app_config_parser_yaml.cpp
Any advice?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

If there is no deepstream_app_config_parser_yaml.cpp, you don’t need to modify it.

I switched to DeepStream 6.2 and an Orin NX. The implementation worked perfectly as described by yuweiw, thanks for the hint. However, I have a performance problem: without the dewarper I get about 50 fps, but with the dewarper enabled it is only 38 fps. Does the plugin run on the GPU? It is quite a bottleneck in the pipeline. Without the dewarper I have a constant 99% GPU utilization; with the dewarper, GPU utilization jumps between 35% and 99%. I also see increased CPU load.

• Hardware Platform (Jetson / GPU) Jetson Orin NX 16GB
• DeepStream Version Deepstream-6.2
• JetPack Version (valid for Jetson only) Jetpack 5.1
• TensorRT Version 5.1
• OpenCV without CUDA

I’m still looking for a solution. Can someone help me?

Can you share your nvdewarper configuration file? What command line or app have you run for your case? Could you tell us how to reproduce it?

I added the dewarper as described in: How to remove lens distortion from input images in Deepstream? - #5 by Fiona.Chen
Then I followed the steps from VPI - Vision Programming Interface: Performance Benchmark and ran:

cd /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_Yolo/
sudo deepstream-app -c deepstream_app_config_yoloV3.txt

with these files inside:
config_dewarper_1280x720.txt (2.1 KB)
config_infer_primary_yoloV3.txt (3.7 KB)
deepstream_app_config_yoloV3.txt (4.4 KB)
yolov3.weights Object-Detection---Yolov3/yolov3.weights at master · patrick013/Object-Detection---Yolov3 · GitHub
yolov3.cfg darknet/yolov3.cfg at master · pjreddie/darknet · GitHub

190 FPS with enable=0
150 FPS with enable=1
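As a rough back-of-the-envelope check (assuming serialized per-frame processing), the drop from 190 to 150 FPS corresponds to a fixed per-frame cost of about 1.4 ms added by the dewarper:

```python
# per-frame cost implied by the two FPS measurements
fps_off, fps_on = 190.0, 150.0
overhead_ms = (1000.0 / fps_on) - (1000.0 / fps_off)
print(f"dewarper adds ~{overhead_ms:.2f} ms per frame")  # ~1.40 ms
```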

I’ve tried your samples. There is an FPS drop with the dewarper enabled, but the CPU load is actually higher when the dewarper is disabled, so the performance drop is not caused by CPU load.

Is such a high performance drop normal or is there a bottleneck?

We are checking this issue. Will be back when there is any progress.

Could you add the patch in the open source code and try to see if the performance is improved in your env?

--- sources\gst-plugins\gst-nvdewarper\nvdewarper.cpp
*** 41,46 ****
--- 41,47 ----
  #include "cudaEGL.h"
  /* Dewarper #defines */
  #ifndef   M_PI
*** 245,250 ****
--- 246,252 ----
    /* Test measurement with 10 iterations */
    warper.warp(nvdewarper->stream, srcTex, dstBuffer, cuDstRowBytes);
+   cudaStreamSynchronize(nvdewarper->stream);
    warper.warp(0, srcTex, dstBuffer, cuDstRowBytes);
    cudaErr = cudaDeviceSynchronize();

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.