Point cloud transformation error from depth map

Dear community,

I am using an NN to detect objects in a scene, and I transform the pixels inside each bounding box into a point cloud. Unfortunately, the resulting points are offset by about -2 cm relative to the ground. Is there any way to fix this?
The function I use is

     world_points = self._camera.get_world_points_from_image_coords(rgbCoords, depthValues)

from the camera sensor.
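
For reference, here is roughly how I build the inputs before that call (simplified sketch; `bbox` comes from the detector and `depth_map` from the camera's depth output, so treat those names as placeholders):

    import numpy as np

    def bbox_to_world_points(camera, bbox, depth_map):
        """Unproject every pixel inside a detector bounding box to world space."""
        x_min, y_min, x_max, y_max = bbox
        # Pixel grid covering the box (u = column index, v = row index)
        u, v = np.meshgrid(np.arange(x_min, x_max), np.arange(y_min, y_max))
        rgb_coords = np.stack([u.ravel(), v.ravel()], axis=-1)        # (N, 2) image coords
        depth_values = depth_map[rgb_coords[:, 1], rgb_coords[:, 0]]  # (N,) depth per pixel
        return camera.get_world_points_from_image_coords(rgb_coords, depth_values)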

The resulting point cloud looks as follows:

Hi @dgut - The deviation you’re seeing could be due to a variety of factors, including the accuracy of your neural network, the precision of your depth sensor, or the transformation you’re using to convert image coordinates to world coordinates. Here are a few suggestions:

  1. Check the accuracy of your neural network: If your neural network isn’t accurately detecting objects, this could result in inaccuracies in the resulting point cloud. You might need to retrain your network or adjust its parameters.
  2. Check the precision of your depth sensor: If your depth sensor isn’t providing accurate depth values, this could also result in inaccuracies. You might need to calibrate your sensor or use a different sensor.
  3. Check your transformation: The function you’re using to convert image coordinates to world coordinates could be introducing errors. Make sure the transformation matches your camera setup, and adjust its parameters if needed (the camera-model sketch after this list shows how to dump what the transform is actually using).
  4. Apply a correction: If you know your points are consistently off by -2 cm, you can shift them after the transformation (see the offset sketch after this list). This is a bit of a hack, though, and won’t hold up if the bias isn’t actually constant.
  5. Check the camera model: Make sure the camera model in your code matches the camera actually capturing the images. Different cameras have different intrinsic and extrinsic parameters, and using the wrong model can produce a systematic offset like this one (again, see the sketch after the list).
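
For points 3 and 5, a quick sanity check is to dump the parameters the unprojection actually relies on and compare them with your real or configured camera. This is only a minimal sketch that assumes the Isaac Sim Camera sensor API (get_intrinsics_matrix, get_world_pose, get_resolution); adjust the method names if your camera wrapper differs:

    import numpy as np

    def print_camera_model(camera):
        """Print the intrinsics and pose the image-to-world transform relies on."""
        K = np.asarray(camera.get_intrinsics_matrix())    # 3x3 intrinsic matrix
        position, orientation = camera.get_world_pose()   # world translation + quaternion
        width, height = camera.get_resolution()

        print("Intrinsics:\n", K)
        print("Resolution:", width, "x", height)
        print("World position:", position)
        print("World orientation (quaternion):", orientation)

        # Rough check: the principal point should sit near the image center
        cx, cy = K[0, 2], K[1, 2]
        if abs(cx - width / 2) > 5 or abs(cy - height / 2) > 5:
            print("Warning: principal point far from image center, check aperture offsets")

If the printed pose or intrinsics don’t match your physical setup, that alone can explain a constant height offset.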
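
For point 4, the offset sketch is trivial once you have confirmed the bias really is a constant -2 cm along the world up axis (this assumes a Z-up world and an (N, 3) NumPy array of points):

    import numpy as np

    GROUND_BIAS_M = -0.02  # measured offset of the cloud relative to the ground

    def correct_ground_bias(world_points):
        """Shift the cloud back by the measured bias (Z-up world assumed)."""
        corrected = np.asarray(world_points, dtype=float).copy()
        corrected[:, 2] -= GROUND_BIAS_M  # subtracting -0.02 m raises the points by 2 cm
        return corrected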