Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): NVIDIA Jetson Orin 64GB
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1-b147
• TensorRT Version: 8.5.2-1+cuda11.4
• Issue Type (questions, new requirements, bugs): question
Hello,
I want to use “Gst-nvdewarper” to undistort the image from a USB RGB camera with a 3.5 mm lens and a 1/1.7" sensor. These are my camera intrinsics and distortion parameters:
Image width: 1616
Image height: 1240
fx= 776.2205796983008
fy= 775.1831242733599
cx= 779.3577422263877
cy= 630.5392491612774
k1= -1.7550965901443998
k2= 0.42233583084391374
k3= 0.45023886093988386
k4= -1.534818381528613
k5= -0.04519751751606131
k6= 0.7163137290337043
p1= -0.00043710473030367426
p2= -0.0009160066845597751
import cv2
import numpy as np

camera_matrix = np.array([
    [fx, 0, cx],
    [0, fy, cy],
    [0, 0, 1]
], dtype=np.float64)  # np.float is deprecated/removed in newer NumPy releases

# OpenCV's coefficient order for the 8-parameter rational model:
# (k1, k2, p1, p2, k3, k4, k5, k6)
distortion = np.array([[k1], [k2], [p1], [p2], [k3], [k4], [k5], [k6]], dtype=np.float64)

undistorted = cv2.undistort(image, camera_matrix, distortion, None, camera_matrix)
I have used OpenCV's undistort function in the past as shown above and it works well with these parameters, but I would like to use DeepStream for this instead.
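For comparison while I set this up, here is a minimal per-frame sketch using precomputed maps (cv2.initUndistortRectifyMap plus cv2.remap); the capture device index and the display loop are just placeholders, but the output should match cv2.undistort and gives me a reference image to compare the nvdewarper output against:

import cv2

# camera_matrix and distortion are the arrays defined above.
w, h = 1616, 1240

# Precompute the undistortion maps once, then remap every frame.
map1, map2 = cv2.initUndistortRectifyMap(
    camera_matrix, distortion, None, camera_matrix, (w, h), cv2.CV_16SC2)

cap = cv2.VideoCapture(0)  # placeholder device index for the USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    cv2.imshow("undistorted", undistorted)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()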
I am having a tough time creating the “config_dewarper_perspective.txt” file. I have taken this
config_dewarper_1280x720.txt (2.1 KB) as a reference, but it gives me weird results. Can you please help me figure out which parameter goes where in the file?
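This is my current guess at the file, pieced together from the config_dewarper_perspective.txt sample in the deepstream-dewarper-test app. The projection-type value and the exact meaning of focal-length are assumptions on my part, and since the sample's distortion key only lists five coefficients (k1;k2;k3;p1;p2) while my calibration uses OpenCV's 8-coefficient rational model, I have left the distortion values as placeholders. Please correct whatever is wrong here:

[property]
output-width=1616
output-height=1240
num-batch-buffers=1

[surface0]
# Assuming PerspectivePerspective projection; please confirm the correct
# projection-type value in nvds_dewarper_meta.h / the Gst-nvdewarper docs.
projection-type=3
surface-index=0
width=1616
height=1240
# fx and fy are nearly identical, so a single focal length in pixels:
focal-length=776.22
# Principal point from my calibration:
src-x0=779.36
src-y0=630.54
# k1;k2;k3;p1;p2 placeholders, NOT the values above: my calibration used the
# 8-coefficient rational model (k1..k6), which I don't think maps 1:1 here.
distortion=0;0;0;0;0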