Point clouds generated by Camera.get_pointcloud() do not align with each other

Camera.get_pointcloud(), where Camera is imported from omni.isaac.sensor, is supposed to generate point clouds in the world frame based on the current depth image and camera view pose. However, when I try to use this function to build a 2D occupancy map for my scene, I find that the generated point clouds do not align with each other. The camera is attached to a robot, and when the robot rotates, the generated point clouds rotate with it.
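For reference, my setup is roughly the following (prim path, resolution, and sim_context are placeholders):

from omni.isaac.sensor import Camera
import numpy as np

# The camera prim is a child of the robot, so it moves and rotates with the robot
camera = Camera(prim_path="/World/robot/base_link/camera", resolution=(640, 480))
camera.initialize()
camera.add_distance_to_image_plane_to_frame()  # enable depth output

clouds = []
for _ in range(100):
    sim_context.step(render=True)  # sim_context: a SimulationContext instance
    points = camera.get_pointcloud()  # expected to be in the world frame
    if points is not None and len(points) > 0:
        clouds.append(points)

# Static geometry should overlap across frames, but instead the clouds
# rotate together with the robot
merged = np.concatenate(clouds, axis=0)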
After analyzing this, I think it is because the generated depth and the camera view pose are not matched. So I modified the _data_acquisition_callback and get_view_matrix_ros functions in Camera and forced the attribute self.world_w_cam_u_T to be updated at the same time as the frame data, as shown below.

import carb
import numpy as np
from omni.isaac.sensor import Camera
from pxr import Usd, UsdGeom

# Transform from the USD camera axes to the ROS camera convention
R_U_TRANSFORM = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])

class camera(Camera):
    def _data_acquisition_callback(self, event: carb.events.IEvent):
        parsed_payload = self._sdg_interface.parse_rendered_simulation_event(
            event.payload["product_path_handle"], event.payload["results"]
        )
        if parsed_payload[0] == self._render_product_path:
            self._og_controller.evaluate_sync(graph_id=self._sdg_graph_pipeline)
            frame_number = self._fabric_time_annotator.get_data()["referenceTimeNumerator"]
            current_time = self._core_nodes_interface.get_sim_time_at_swh_frame(frame_number)
            if self._previous_time is not None:
                self._elapsed_time += current_time - self._previous_time
            if self._frequency < 0 or self._elapsed_time >= self.get_dt():
                self._elapsed_time = 0
                # Cache the camera pose at the moment the frame data is read,
                # so the depth image and the view pose stay matched
                self.world_w_cam_u_T = UsdGeom.Imageable(self.prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default())
                self._current_frame["rendering_frame"] = frame_number
                self._current_frame["rgba"] = self._rgb_annotator.get_data()
                self._current_frame["rendering_time"] = current_time
                for key in self._current_frame:
                    if key not in ["rgba", "rendering_time", "rendering_frame"]:
                        self._current_frame[key] = self._custom_annotators[key].get_data()
            self._previous_time = current_time
        return
    
    def get_view_matrix_ros(self):
        """3D points in the world frame -> 3D points in the camera ROS frame.

        Returns:
            np.ndarray: the view matrix that transforms 3d points in the world frame
                        to 3d points in the camera axes with the ROS camera convention.
        """
        # The original implementation recomputed the pose from USD here, which can be
        # out of sync with the rendered depth:
        # world_w_cam_u_T = self._backend_utils.transpose_2d(
        #     self._backend_utils.convert(
        #         UsdGeom.Imageable(self.prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default()),
        #         dtype="float32",
        #         device=self._device,
        #         indexed=True,
        #     )
        # )
        # Use the pose cached in _data_acquisition_callback instead
        world_w_cam_u_T = self._backend_utils.transpose_2d(
            self._backend_utils.convert(
                self.world_w_cam_u_T,
                dtype="float32",
                device=self._device,
                indexed=True,
            )
        )
        r_u_transform_converted = self._backend_utils.convert(
            R_U_TRANSFORM, dtype="float32", device=self._device, indexed=True
        )
        return self._backend_utils.matmul(r_u_transform_converted, self._backend_utils.inverse(world_w_cam_u_T))

It works well as long as I always set render=True in SimulationContext.step(self, render), where SimulationContext is imported from omni.isaac.core.simulation_context.
However, when I render only every 10 steps, the same problem appears again. I suspect the cause is still that the generated depth and the camera view pose are not matched, but I don't know how to verify it. Please help me solve this problem, thanks!
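For reference, these are the two stepping patterns I mean (sim_context is my SimulationContext instance, num_steps a placeholder):

# Works after my modification: render on every physics step
for i in range(num_steps):
    sim_context.step(render=True)

# Fails: render only every 10th step; the clouds stop aligning again
for i in range(num_steps):
    sim_context.step(render=(i % 10 == 0))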

The Camera.get_pointcloud() method does not work for dynamic cameras, since during simulation the dynamic camera pose is not synced with USD. I recommend leveraging our Replicator API to get the point cloud data from the camera: Annotators Information — Omniverse Extensions latest documentation
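For example, a minimal sketch with the pointcloud annotator (the camera path, resolution, and the includeUnlabelled flag are placeholders/assumptions for your setup):

import omni.replicator.core as rep

# Create a render product for the existing camera prim
render_product = rep.create.render_product("/World/robot/base_link/camera", (640, 480))

# The pointcloud annotator returns points matched to the rendered frame;
# includeUnlabelled=True also includes points from prims without semantic labels
annotator = rep.AnnotatorRegistry.get_annotator("pointcloud", init_params={"includeUnlabelled": True})
annotator.attach([render_product])

# After stepping/rendering, read the data
data = annotator.get_data()
points = data["data"]  # (N, 3) array of points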

I will try it, thanks! And by the way, which API should I use if I want to get the camera pose that is synced with USD? I want to compute the corresponding point clouds from given 2D image points, just like Camera.get_world_points_from_image_coords(), so I think the correct camera pose is necessary.
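Concretely, the computation I have in mind is roughly this (a numpy sketch; K is the 3x3 intrinsic matrix and T_cam_to_world the 4x4 camera-to-world transform, both assumed given):

import numpy as np

def image_point_to_world(u, v, depth, K, T_cam_to_world):
    # Backproject pixel (u, v) with metric depth into the camera frame
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth  # depth measured along the optical axis
    # Lift to homogeneous coordinates and transform into the world frame
    p_world = T_cam_to_world @ np.append(p_cam, 1.0)
    return p_world[:3]

If T_cam_to_world is stale, every point lands in the wrong place, which is why I need the correct pose.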

You do not need to sync with USD during simulation, since that would be crazy slow. After you add the camera Xform to the SimulationContext, you can get the dynamic pose of the camera with the Isaac Sim Core API, like get_world_positions().
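For example, a sketch with XFormPrim and its get_world_pose() call (the prim path is a placeholder):

from omni.isaac.core.prims import XFormPrim

# Wrap the existing camera prim; this reads the dynamic pose without a USD sync
camera_xform = XFormPrim("/World/robot/base_link/camera")
position, orientation = camera_xform.get_world_pose()  # position (3,), quaternion (w, x, y, z)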