VPI stereo disparity gets discretized to lower accuracy

We are using the VPI - Vision Programming Interface: Stereo Disparity sample code to generate a depth image from a RealSense camera, and we see that the depth values are discretized. We think line 138 of the Python sample (VPI - Vision Programming Interface: Stereo Disparity) causes this. See below:

 # Disparities are in Q10.5 format, so to map it to float, it gets
 # divided by 32. Then the resulting disparity range, from 0 to
 # stereo.maxDisparity gets mapped to 0-255 for proper output.
 disparity = disparity.convert(vpi.Format.U8, scale=255.0/(32*maxDisparity))
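For concreteness, here is the quantization this implies (a back-of-the-envelope sketch; the maxDisparity of 64 is a hypothetical value, substitute whatever your pipeline uses):

 # Hypothetical example: with maxDisparity = 64, the raw Q10.5 output has
 # 64 * 32 = 2048 sub-pixel levels, but the U8 conversion collapses them
 # into at most 256 levels.
 maxDisparity = 64
 raw_levels = maxDisparity * 32      # Q10.5: 32 sub-pixel steps per disparity
 step_px = maxDisparity / 255.0      # disparity step per U8 increment, ~0.25 px
 print(f"{raw_levels} raw levels squeezed into 256, one band per {step_px:.3f} px")

Once disparity is re-projected to depth, each ~0.25 px step becomes a distinct depth value, which would show up exactly as the bands described below.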

Changing it to

 disparity = disparity.convert(vpi.Format.U16, scale=255.0/(32*maxDisparity))

also does not help.
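That is expected: the scale factor still maps everything into 0-255, so the wider U16 container gains nothing. A quick NumPy simulation of the conversion (my assumption: VPI either truncates or rounds when converting, and the level count comes out the same either way) shows it:

 import numpy as np

 # Simulate the sample's raw S16 disparity in Q10.5 (values 0..maxDisparity*32).
 maxDisparity = 64
 raw = np.arange(0, maxDisparity * 32 + 1, dtype=np.int16)

 # Same scale as the sample: even in a U16 container the result only
 # spans 0..255, so at most 256 distinct values survive.
 same_scale = (raw * (255.0 / (32 * maxDisparity))).astype(np.uint16)
 print(np.unique(same_scale).size)   # 256

 # Using the full U16 range would keep every Q10.5 level distinct.
 full_range = (raw * (65535.0 / (32 * maxDisparity))).astype(np.uint16)
 print(np.unique(full_range).size)   # 2049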

See the discretization of the depth map in the image below; it shows up as bands.

Hey, I think the “convert” call normalizes the output so it can be displayed with conventional methods.
The “255.0” sets the upper bound of the values you will get.
I think you can skip the “convert” call if you want to keep the original disparity values.
Hope this helps.
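To make that concrete, here is a minimal sketch of keeping the full precision, assuming the VPI 2.x Python API (vpi.Format.F32 and rlock_cpu(); older releases may expose the CPU buffer differently) and hypothetical baseline / focal_length calibration values for the depth step:

 # Undo the Q10.5 fixed-point scaling (divide by 32) instead of quantizing
 # to U8; keep the U8 convert only for on-screen visualization if needed.
 disparity_f32 = disparity.convert(vpi.Format.F32, scale=1.0 / 32)

 with disparity_f32.rlock_cpu() as data:
     # 'data' is a numpy array of sub-pixel disparities, in pixels, at the
     # stereo matcher's full precision.
     # baseline and focal_length are hypothetical calibration values.
     depth = baseline * focal_length / (data + 1e-6)  # guard divide-by-zero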