Undistort camera input (non 360°)

I have to undistort the camera stream before inference. How can I do that in the DeepStream pipeline?
The gst-nvdewarper plugin doesn’t seem suitable for simple camera undistortion using the camera matrix and the distortion coefficients.


You can refer to the dsexample plugin and use OpenCV to undistort the frames.

You can implement your undistortion processing in the dsexample plugin:

gst-launch-1.0 filesrc location=streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 ! dsexample processing-width=640 processing-height=480 full-frame=1 ! nvinfer config-file-path=configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nveglglessink
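For reference, the per-pixel math behind this kind of undistortion is the standard pinhole model with Brown–Conrady distortion, driven by the camera matrix K and the distortion coefficients (k1, k2, p1, p2, k3) — the same model OpenCV's cv::undistort uses. A minimal stdlib-only Python sketch of the forward mapping; the K and coefficient values below are placeholders, not calibration data from this thread:

```python
# Forward Brown-Conrady distortion: given an ideal (undistorted) pixel,
# compute where it appears in the distorted camera image.
# K and dist are placeholder values, not real calibration data.

def distort_pixel(u, v, K, dist):
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    k1, k2, p1, p2, k3 = dist
    # Normalize the pixel to camera coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Radial distortion factor
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Radial + tangential distortion
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Back to pixel coordinates
    return (fx * x_d + cx, fy * y_d + cy)

K = [[1000.0, 0.0, 960.0],
     [0.0, 1000.0, 540.0],
     [0.0, 0.0, 1.0]]

# With zero coefficients the mapping is the identity
print(distort_pixel(100.0, 200.0, K, (0, 0, 0, 0, 0)))  # → (100.0, 200.0)
```

Undistorting a frame means inverting this mapping, which cv::undistort does for you; the sketch is only to show what the camera matrix and distortion coefficients contribute.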

I know the dsexample.
But I don’t want to slow down the pipeline by copying the frames out of GPU memory. So to keep DeepStream’s performance, I need to modify the frames directly in GPU memory.

Is it right that the NvBufSurfaceTransform and NvBufSurface APIs handle the frames directly in GPU memory?

I don’t see any way to undistort the image directly in GPU memory.

Please give me a hint if I am wrong.



I think there are two issues in this topic.
The first is how the undistort module gets the camera data and passes it to the following GStreamer plugins, e.g. format conversion or inference.
The second is how to implement the undistort function itself.

The dsexample pipeline Chris demonstrated mainly addresses the first issue: you can implement an ‘undistort’ GStreamer plugin by referring to dsexample and insert it into the pipeline, so that the undistort module receives the camera data and passes the processed data on to the next plugin.
For the second issue, you can replace the processing code in dsexample with your own undistort algorithm, whether it is CPU or GPU based.
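For a GPU-based implementation, the usual design is to precompute the undistortion lookup maps once on the CPU (what OpenCV's initUndistortRectifyMap produces) and then do only a per-pixel gather on every frame — that gather is what cv::cuda::remap or a custom CUDA kernel runs over the GPU buffer. A stdlib-only Python sketch of the map precomputation, with placeholder calibration values:

```python
# Precompute per-pixel remap tables for undistortion: for every pixel of the
# OUTPUT (undistorted) image, store which source pixel of the distorted
# camera frame to sample. Per-frame work is then a pure gather, which
# parallelizes trivially in a CUDA kernel. Calibration values are placeholders.

def build_undistort_maps(width, height, K, dist):
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    k1, k2, p1, p2, k3 = dist
    map_x = [[0.0] * width for _ in range(height)]
    map_y = [[0.0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            # Normalized coordinates of the ideal output pixel
            x = (u - cx) / fx
            y = (v - cy) / fy
            r2 = x * x + y * y
            radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
            # Apply radial + tangential distortion to find the source pixel
            x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
            y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
            map_x[v][u] = fx * x_d + cx
            map_y[v][u] = fy * y_d + cy
    return map_x, map_y

K = [[500.0, 0.0, 4.0], [0.0, 500.0, 3.0], [0.0, 0.0, 1.0]]
mx, my = build_undistort_maps(8, 6, K, (0.0, 0.0, 0.0, 0.0, 0.0))
# With zero distortion every output pixel maps to itself
print(mx[3][4], my[3][4])  # → 4.0 3.0
```

Because the maps depend only on the calibration, they can live permanently in GPU memory, so the per-frame undistortion inside a dsexample-style plugin never has to copy the frame back to the host.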