Occlusion/visibility of bounding box corners

Hey, I’m trying to create a simulation model for synthetic data generation in Isaac Sim 2022.2.0. There are many boxes on a pallet with randomized positioning, lighting and materials. I’m getting the corners and the center point of the top face from the 3D bounding box annotator in a custom writer with the following code, which works for a single bounding box.

import numpy as np
import omni.syntheticdata as sd

def world_to_image_pinhole(world_points: np.ndarray, camera_params: dict) -> np.ndarray:
    # Project corners to image space (assumes pinhole camera model)
    proj_mat = camera_params["cameraProjection"].reshape(4, 4)
    view_mat = camera_params["cameraViewTransform"].reshape(4, 4)
    view_proj_mat = np.dot(view_mat, proj_mat)
    world_points_homo = np.pad(world_points, ((0, 0), (0, 1)), constant_values=1.0)
    tf_points = np.dot(world_points_homo, view_proj_mat)  # row-vector convention
    tf_points = tf_points / tf_points[..., -1:]  # perspective divide
    return 0.5 * (tf_points[..., :2] + 1)  # NDC [-1, 1] -> image [0, 1]

def write(self, data: dict):
    render_product = [k for k in data.keys() if k.startswith("rp_")][0]
    bbox3ds = data["bounding_box_3d"]["data"]
    corners_3d = sd.get_bbox_3d_corners(bbox3ds)
    corners_3d = corners_3d.reshape(-1,3)
    # Top-face corners of the first box: indices 4-7 after the reshape
    points = [corners_3d[i].tolist() for i in range(4, 8)]
    # center_point is a custom helper returning the centroid of the four corners
    points.append(center_point(*points))
    corners_2d = world_to_image_pinhole(np.array(points), data["camera_params"])
    # contains top corners and center point
    corners_2d *= np.array([[self.width, self.height]])

Is there a way to also get the visibility/occlusion of these points, to check whether they are hidden behind other prims? I already tried different annotators (pointcloud, occlusion, bounding box 2D), but they return all points/objects in the scene. I would need something like a list of the points that are visible to the camera, or some way to check whether each point is hidden or visible. Is that possible?

Thanks for any help!

Hi, for anyone who’s interested in an answer, I figured it out:
You can actually use the pointcloud annotator to get the points visible in a viewport. I check whether each bounding box corner can be found in the pointcloud output; this comparison has to be done in 3D world coordinates. Since the sampled pointcloud points can be offset from the exact bbox corners, you need to check within a tolerance radius around each corner.
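As a rough sketch of that tolerance check in plain NumPy (the tolerance value and the array shapes are assumptions; the pointcloud annotator's exact output fields can differ between versions):

```python
import numpy as np

def visible_corners(corners_3d: np.ndarray, pointcloud: np.ndarray,
                    tolerance: float = 2.0) -> np.ndarray:
    """Return a boolean mask, one entry per corner, that is True when at
    least one pointcloud sample lies within `tolerance` (world units)
    of that corner."""
    # Pairwise distances: (num_corners, num_points)
    dists = np.linalg.norm(
        corners_3d[:, None, :] - pointcloud[None, :, :], axis=-1
    )
    # A corner counts as visible if any sampled surface point is close enough
    return dists.min(axis=1) <= tolerance
```

In the writer you would call this with the bbox corner array and something like the annotator's point positions (e.g. `data["pointcloud"]["data"]`, name assumed); for large pointclouds a KD-tree (`scipy.spatial.cKDTree`) would scale better than the brute-force distance matrix.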
For improved performance, I grouped the pointcloud points by their semantic labels, which can be obtained via the semantic segmentation annotator. It maps the ids found in the pointcloud data to semantic labels, which are the same as the ones in the bbox 3D annotator. That way you can make sure the visible bbox corners actually lie on the specific prim’s surface.
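A minimal sketch of that grouping step, assuming you already have the per-point semantic ids and an id-to-label mapping as plain arrays/dicts (the field names these come from are version-dependent and not shown here):

```python
import numpy as np

def group_points_by_label(points: np.ndarray, point_ids: np.ndarray,
                          id_to_labels: dict) -> dict:
    """Group pointcloud samples by semantic label so the corner visibility
    check only scans the points belonging to the matching prim."""
    grouped = {}
    for sem_id in np.unique(point_ids):
        # Fall back to the raw id string if the mapping has no entry
        label = id_to_labels.get(int(sem_id), str(sem_id))
        grouped[label] = points[point_ids == sem_id]
    return grouped
```

With the points bucketed this way, each bbox's corners are compared only against the subset of the pointcloud that carries the same semantic label as the bbox, instead of the whole scene.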