Creating a custom camera in the Isaac Sim App

Hi,
We are trying to configure an RGB sensor in Isaac Sim to reflect the camera we are using in real life (Intel RealSense D455) and generate synthetic data.
However, the instructions are a bit confusing to us.
In the Isaac SDK documentation (Carter Warehouse example), you can add and configure a sensor by editing carter_graph.json and carter_config.json.

Also, in the Training Jetbot example by @HaiLocLu, he mentions:
“When we initially created the camera, we used default values for the FOV and simply angled it down at the road. This initial setup did not resemble the real camera image (Figure 12). We adjusted the FOV and orientation of the simulated camera (Figure 13) and added uniform random noise to the output during training. This was done to make the simulated camera view as much like the real camera view as possible.”
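
(For context, we read "uniform random noise" as something like the following NumPy sketch applied to each rendered frame before training; the amplitude value is our own guess, not a figure from the tutorial.)

import numpy as np

def add_uniform_noise(rgb, amplitude=10.0):
    # add per-pixel uniform noise in [-amplitude, +amplitude] to a uint8 RGB frame
    noise = np.random.uniform(-amplitude, amplitude, size=rgb.shape)
    return np.clip(rgb.astype(np.float32) + noise, 0, 255).astype(np.uint8)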

However, we are not able to figure out how to configure an RGB sensor to mimic the Intel RealSense D455 camera, which has the following specs:

  • RGB frame resolution: up to 1280 × 720 [is this the cols/rows parameter?]
  • RGB frame rate: 30 fps [this has to be the Frequency parameter, right?]
  • RGB sensor technology: global shutter RGB sensor [is it possible to configure this?]
  • FOV (H × V): 90 × 65° [there is an FOV parameter, but we are not sure whether it is the horizontal or vertical one]
  • RGB sensor resolution: 1 MP [does this even matter, since we already defined the resolution above?]

We are not sure this configuration is even possible using the Isaac Sim App; the tutorials seem to point to the Isaac SDK.

Any help on how to configure this sensor is appreciated.

Hi mau,

Perhaps get_camera_params may give you some info:

import math

import numpy as np
import omni.usd

# method of Isaac Sim's synthetic data helper (note the self parameter and
# the self.generic_helper_lib reference)
def get_camera_params(self, viewport):
    """Get active camera intrinsic and extrinsic parameters.

    Returns:
        A dict of the active camera's parameters.

        pose (numpy.ndarray): camera position in world coordinates,
        fov (float): horizontal field of view in radians
        focal_length (float)
        horizontal_aperture (float)
        view_projection_matrix (numpy.ndarray(dtype=float64, shape=(4, 4)))
        resolution (dict): resolution as a dict with 'width' and 'height'.
        clipping_range (tuple(float, float)): Near and Far clipping values.
    """
    stage = omni.usd.get_context().get_stage()
    prim = stage.GetPrimAtPath(viewport.get_active_camera())
    prim_tf = omni.usd.get_world_transform_matrix(prim)
    focal_length = prim.GetAttribute("focalLength").Get()
    horiz_aperture = prim.GetAttribute("horizontalAperture").Get()
    # horizontal FOV from the pinhole model: fov = 2 * atan(aperture / (2 * focal length))
    fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
    width, height = viewport.get_texture_resolution()
    aspect_ratio = width / height
    near, far = prim.GetAttribute("clippingRange").Get()
    view_proj_mat = self.generic_helper_lib.get_view_proj_mat(prim, aspect_ratio, near, far)

    return {
        "pose": np.array(prim_tf),
        "fov": fov,
        "focal_length": focal_length,
        "horizontal_aperture": horiz_aperture,
        "view_projection_matrix": view_proj_mat,
        "resolution": {"width": width, "height": height},
        "clipping_range": (near, far),
    }
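
Note that the fov it reports is the horizontal field of view (which answers your H/V question); the vertical FOV then follows from the aspect ratio. Working backwards from that same formula, you can solve for the focalLength that yields a desired horizontal FOV. A minimal sketch (the 90° target is the D455 spec you quoted; the aperture is USD's default):

import math

horiz_aperture = 20.955           # USD's default horizontal aperture (mm)
target_hfov = math.radians(90.0)  # D455 horizontal FOV

# invert fov = 2 * atan(horiz_aperture / (2 * focal_length))
focal_length = horiz_aperture / (2 * math.tan(target_hfov / 2))
print(focal_length)  # ~10.48 mm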
  • RGB frame resolution: Up to 1280 × 720 [a camera is attached to a viewport, so this is the viewport's resolution]
    So when you create a new viewport for your camera, you can specify the resolution:
import omni.kit.viewport

# create a new viewport and attach the camera prim to it
viewport_handle_dofbot = omni.kit.viewport.get_viewport_interface().create_instance()
viewport_window_dofbot = omni.kit.viewport.get_viewport_interface().get_viewport_window(viewport_handle_dofbot)
viewport_window_dofbot.set_active_camera(prim_env_path + "/link4/Camera")
viewport_window_dofbot.set_window_pos(720, 0)

# match resolution of physical dofbot camera
viewport_window_dofbot.set_window_size(640, 480)
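
Note that set_window_size only resizes the viewport window itself. If you also want the rendered texture to match your camera, the same legacy viewport window has a setter mirroring the get_texture_resolution call used in get_camera_params above; a sketch (double-check the method name against your Isaac Sim version):

viewport_window_dofbot.set_texture_resolution(1280, 720)  # e.g. match the D455 RGB stream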

More info https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/python_snippets.html?highlight=camera#multi-camera

Or specify the resolution in the UI via the top-left cog icon, under Render Resolution.

Paul did a great tutorial on Cameras here too https://docs.omniverse.nvidia.com/app_create/common/cameras.html

Hi,

The documents that you've mentioned are not available; it seems those pages have been deleted. Would you please update the links?

https://docs.omniverse.nvidia.com/app_create/prod_materials-and-rendering/cameras.html

One tip is googling "omniverse <topic>", e.g. "omniverse create cameras".


Thanks for the tip. Omniverse is fairly new and not much can be found through a Google search, which is why I prefer to rely on the forum and the NVIDIA tutorials. But as you know, the links no longer refer to the correct places in the forum / NVIDIA tutorials. Is there any plan to address this inconsistency in the near future?

We are working with the Omniverse team to find a way to support older versions of the docs. Because of the rapid pace of development for the first few versions of Isaac Sim and Omniverse, older links became stale.

The core links to the Isaac Sim and Create docs can always be found from docs.omniverse.nvidia.com.

And each individual app's docs have a search bar for finding specific information.


On this note, I was curious whether Isaac Sim has modeled RealSense cameras that we as developers can directly import. Is that possible, or do we have to model them on our own? If the latter, is it possible to share some recommended steps?

We are working on adding a small library of sensors for our next release (2022.2.0) later this year, starting with some of the sensors from NOVA Orin | NVIDIA Developer.

But for now you have to model your own using the Camera prim.
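
As a starting point, a D455-like Camera prim can be created and parameterized directly through USD. A minimal sketch (the prim path is hypothetical, and since a pinhole camera cannot match both 90° horizontal and 65° vertical FOV exactly at a 16:9 resolution, the vertical aperture below is derived from the aspect ratio to keep pixels square):

import math
import omni.usd
from pxr import UsdGeom, Sdf, Gf

stage = omni.usd.get_context().get_stage()
camera = UsdGeom.Camera.Define(stage, Sdf.Path("/World/D455_RGB"))  # hypothetical prim path

horiz_aperture = 20.955    # keep USD's default aperture (mm)
hfov = math.radians(90.0)  # D455 horizontal FOV
focal_length = horiz_aperture / (2 * math.tan(hfov / 2))

camera.GetFocalLengthAttr().Set(focal_length)
camera.GetHorizontalApertureAttr().Set(horiz_aperture)
# square pixels at 1280 x 720: vertical aperture follows the aspect ratio
camera.GetVerticalApertureAttr().Set(horiz_aperture * 720.0 / 1280.0)
camera.GetClippingRangeAttr().Set(Gf.Vec2f(0.01, 10000.0))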
