Accessing ToF depth map from Orbbec FemtoMega

Isaac Sim Version

4.5.0

Operating System

Ubuntu 22.04

GPU Information

  • Model: NVIDIA RTX A4500
  • Driver Version: 535.183.01

Topic Description

Detailed Description

I am trying to access the ToF depth map from the ToF module on the Orbbec FemtoMega sensor. The only way I have found is to add the “distance to image plane” or “distance to camera” annotators using the add_distance_to_image_plane_to_frame() and add_distance_to_camera_to_frame() methods on the omni.isaac.sensor.Camera object. The resulting depth maps are then accessible when calling camera.get_current_frame().

However, this seems to produce a ground-truth depth map, i.e. a perfect depth map with none of the idiosyncrasies and artifacts that would typically be produced by a sensor in the real world. We observe the same behaviour for other depth sensors such as the Intel RealSense asset in Isaac Sim, where these annotators also result in perfect ground-truth depth maps rather than showing the errors and artifacts inherent to the camera and its algorithms. Is this indeed the only approach to obtaining depth data/maps from depth sensors in Isaac Sim?
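For reference, my understanding is that these Camera helper methods wrap the standard Replicator depth annotators, so the same data can also be pulled directly through omni.replicator.core. A rough sketch of that path is below (the resolution and the explicit orchestrator step are illustrative assumptions, not values from my actual setup); as far as I can tell it yields the same idealized output.

import omni.replicator.core as rep

# Render product for the FemtoMega's NFOV ToF camera prim (same path as in the snippet below);
# the resolution here is an illustrative guess, not the sensor's calibrated one.
render_product = rep.create.render_product(
    '/World/FemtoMega/FemtoMega/MegaBody/TofSensor/tof_sensor/tof_assy/tof_ir_lens/toro_lens/solid/solid/camera_tof_nfov',
    resolution=(640, 576),
)

# Attach the depth annotator that the Camera helper presumably wraps.
depth_annotator = rep.AnnotatorRegistry.get_annotator('distance_to_image_plane')
depth_annotator.attach([render_product])

# Once at least one frame has been rendered (e.g. via rep.orchestrator.step() or the
# normal simulation/render loop), the buffer can be read back as a numpy array.
depth = depth_annotator.get_data()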

Additional Information

What I’ve Tried

This is a code snippet showing how I’m attempting to retrieve the depth maps.

from isaacsim.examples.interactive.base_sample import BaseSample
from isaacsim.core.api.objects import DynamicCuboid
from isaacsim.core.utils.prims import get_prim_at_path
from omni.isaac.sensor import Camera
from isaacsim.core.utils.stage import add_reference_to_stage
from pxr import UsdGeom, Gf
import numpy as np


class HelloWorld(BaseSample):
    def __init__(self) -> None:
        super().__init__()
        return

    def setup_scene(self):
        world = self.get_world()
        world.scene.add_default_ground_plane()

        cube = DynamicCuboid(
            prim_path='/World/Cube',
            name='main_cube',
            position=np.array([0, 0, 1.0]),
            scale=np.array([0.5, 0.5, 0.5]),
            color=np.array([0.0, 0.0, 1.0]),
        )
        world.scene.add(cube)

        femto_usd_path = 'https://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.5/Isaac/Sensors/Orbbec/FemtoMega/orbbec_femtomega_v1.0.usd'
        femto_prim_path = '/World/FemtoMega'
        add_reference_to_stage(usd_path=femto_usd_path, prim_path=femto_prim_path)

        femto_prim = get_prim_at_path(femto_prim_path)
        xform = UsdGeom.Xformable(femto_prim)
        # Update the asset's existing translate xform op (if present) to position the sensor.
        for xform_op in xform.GetOrderedXformOps():
            if xform_op.GetOpType() == UsdGeom.XformOp.TypeTranslate:
                xform_op.Set(Gf.Vec3d(4, 0, 0.7))
                break

        # Wrap the FemtoMega's NFOV ToF camera prim in a Camera sensor so the depth annotators can be attached.
        self.femto_nfov_name = 'femto_tof_nfov'
        camera = Camera(
            prim_path='/World/FemtoMega/FemtoMega/MegaBody/TofSensor/tof_sensor/tof_assy/tof_ir_lens/toro_lens/solid/solid/camera_tof_nfov',
            name=self.femto_nfov_name
        )
        camera.initialize()
        camera.add_distance_to_camera_to_frame()
        camera.add_distance_to_image_plane_to_frame()
        world.scene.add(camera)

        return

    async def setup_post_load(self):
        self.get_world().add_physics_callback('sim_step', callback_fn=self.sim_step)
        return

    def sim_step(self, step_size: float):
        world = self.get_world()
        camera = world.scene.get_object(self.femto_nfov_name)
        frame_output = camera.get_current_frame()

        # NOTE: these buffers may still be empty (size 0) for the first few steps,
        # until the render pipeline has produced a frame with the annotators attached.
        rgba = frame_output['rgba']
        distance_to_camera = frame_output['distance_to_camera']
        distance_to_image_plane = frame_output['distance_to_image_plane']
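In case it is useful, here is a small hypothetical helper (not part of my actual run) that dumps one depth frame from the dictionary above for offline inspection; the output path is arbitrary.

import numpy as np

def save_depth_frame(frame_output, path='/tmp/femto_depth.npy'):
    """Save one distance_to_image_plane frame to disk, if the buffer is populated."""
    depth = frame_output.get('distance_to_image_plane')
    if depth is None or depth.size == 0:
        # Guard against a buffer that has not been populated yet.
        return False
    # Values are in stage units (metres by default in Isaac Sim).
    np.save(path, np.asarray(depth, dtype=np.float32))
    return True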

Modeling these sensor effects for users and providing validated sensor models from manufacturers are top priorities for Isaac Sim and the OV Sensors & Rendering team in upcoming releases. Thanks.

Thank you. So just to confirm - currently it is only possible to extract ground truth depth maps from sensors in Isaac Sim?

Yes, as of Isaac Sim 4.5.0.