Issues creating camera sensor in Isaac Sim 2023.1.1

I am creating a standalone application that simulates a camera with an f-number of 2.8 (and therefore f_stop = 2.8). I am following the OpenCV Rational Polynomial Example in this guide to do so; however, I am encountering a few issues.

The first issue is that the camera does not focus when set_lens_aperture is set to any value other than 0 (i.e. disabled). Below I have attached two images: the first with focusing disabled and the second with the aperture set to the value recommended in the guide. I have tried values up to 10000 and effectively get the same image as the second one every time. (A snippet for reading the USD camera attributes back, to confirm the setters are taking effect, follows the two images.)

Setting lens aperture to 0 (disabling focusing)

Setting lens aperture to 280 (f_stop * 100.0, as suggested in the documentation)
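
To confirm the values are actually landing on the USD camera prim, I read the attributes back directly. This is just a quick check using the pxr USD API against the prim path used in my code below:

from pxr import UsdGeom
import omni.usd

stage = omni.usd.get_context().get_stage()
usd_camera = UsdGeom.Camera(stage.GetPrimAtPath("/World/camera"))

# Raw USD camera attributes, to confirm the Isaac Sim setters took effect
print("focalLength:", usd_camera.GetFocalLengthAttr().Get())
print("focusDistance:", usd_camera.GetFocusDistanceAttr().Get())
print("fStop:", usd_camera.GetFStopAttr().Get())
print("horizontalAperture:", usd_camera.GetHorizontalApertureAttr().Get())
print("verticalAperture:", usd_camera.GetVerticalApertureAttr().Get())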

The second issue I am having is that a vignette is applied to the image when the distortion coefficients are set; however, this is not what the image would look like in real life.
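
For comparison, this is roughly the warp I would expect the same coefficients to produce, generated with OpenCV's rational distortion model. This is only a sketch: it reuses width, height, camera_matrix and distortion_coefficients from the code below, and pinhole_render is a placeholder for an image captured with the distortion disabled.

import cv2
import numpy as np

K = np.array(camera_matrix, dtype=np.float64)
D = np.array(distortion_coefficients, dtype=np.float64)  # k1, k2, p1, p2, k3, k4, k5, k6 (rational model)

# For each pixel of the distorted output, find the pixel it samples from in the ideal
# pinhole render: undistort to normalized coordinates, then project back through K.
u, v = np.meshgrid(np.arange(width, dtype=np.float32), np.arange(height, dtype=np.float32))
pts = np.stack([u.ravel(), v.ravel()], axis=-1).reshape(-1, 1, 2)
ideal = cv2.undistortPoints(pts, K, D).reshape(-1, 2)
map_x = (K[0, 0] * ideal[:, 0] + K[0, 2]).reshape(height, width).astype(np.float32)
map_y = (K[1, 1] * ideal[:, 1] + K[1, 2]).reshape(height, width).astype(np.float32)

# pinhole_render: an undistorted capture of the same view (placeholder)
expected = cv2.remap(pinhole_render, map_x, map_y, interpolation=cv2.INTER_LINEAR)

This produces the expected geometric distortion without any radial darkening, which is why the vignette in the simulated image surprised me.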

The third issue I am having is that the frequency parameter of the Camera class does not seem to do anything, so I am having to control the rate by adjusting the Isaac Simulation Gate nodes in the SDGPipeline graph. This is the same approach used in the example found at standalone_examples/api/omni.isaac.ros2_bridge/camera_periodic.py.
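
The workaround looks roughly like the following. Treat this as a sketch: the function name is just for illustration, and the omni.syntheticdata calls for locating the gate node follow the pattern used in the ROS camera examples, so the exact helper names should be double-checked against camera_periodic.py.

import omni.graph.core as og
import omni.syntheticdata
import omni.syntheticdata._syntheticdata as sd

def throttle_rgb_publisher(render_product_path, freq_hz, sim_steps_per_second=60):
    # The SDGPipeline inserts an IsaacSimulationGate node upstream of each ROS publisher;
    # its inputs:step attribute sets how many simulation frames pass between publishes.
    step_size = int(sim_steps_per_second / freq_hz)
    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv + "IsaacSimulationGate", render_product_path
    )
    og.Controller.attribute(gate_path + ".inputs:step").set(step_size)

I call something like this once after the camera graph has been evaluated, passing the render product path produced by the getRenderProduct node.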

Below is the code for the camera, trimmed down to a minimal standalone script.

from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": False})

import numpy as np
import omni.graph.core as og
import usdrt.Sdf
import omni.isaac.core.utils.numpy.rotations as rot_utils
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils.extensions import enable_extension
from omni.isaac.sensor import Camera

# The ROS 2 bridge extension must be enabled before building the camera graph below
enable_extension("omni.isaac.ros2_bridge")
simulation_app.update()

simulation_context = SimulationContext(stage_units_in_meters=1.0)

width, height = 612, 512
camera_matrix = [[605.4796091670199, 0.0, 305.6297619145443], [0.0, 605.6656786507847, 260.2272299567676], [0.0, 0.0, 1.0]]
distortion_coefficients = [-0.3818920627803916, 0.15178401791432322, -0.00017830161196350367, 0.0002184177735682275, 0.0, 0.0, 0.0, 0.0]

camera = Camera(
    prim_path="/World/camera",
    position=np.array([60.0, 60.0, 100]),
    frequency=None, # Doesn't seem to do anything
    resolution=(width, height),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
)
simulation_context.reset()
camera.initialize()

# Camera sensor size and optical path parameters. These parameters are not part of the
# OpenCV camera model, but they are necessary to simulate the depth of field effect.
#
# To disable the depth of field effect, set the f_stop to 0.0. This is useful for debugging.
pixel_size = 3.45 * 4  # multiplied by 4 as we have binning of 4 in x and y
f_stop = 2.8  # f-number, the ratio of the lens focal length to the diameter of the entrance pupil
focus_distance = 0.1  # in meters, the distance from the camera to the object plane
diagonal_fov = 77  # in degrees, the diagonal field of view to be rendered

((fx,_,cx),(_,fy,cy),(_,_,_)) = camera_matrix
horizontal_aperture = pixel_size * 1e-3 * width
vertical_aperture = pixel_size * 1e-3 * height
focal_length_x = fx * pixel_size * 1e-3
focal_length_y = fy * pixel_size * 1e-3
focal_length = (focal_length_x + focal_length_y) / 2  # in mm

# Set the camera parameters, note the unit conversion between Isaac Sim sensor and Kit
camera.set_focal_length(focal_length / 10.0)
camera.set_focus_distance(focus_distance)
camera.set_lens_aperture(f_stop * 100.0)
camera.set_horizontal_aperture(horizontal_aperture / 10.0)
camera.set_vertical_aperture(vertical_aperture / 10.0)

camera.set_clipping_range(0.05, 1.0e5)

# Set the distortion coefficients
camera.set_projection_type("fisheyePolynomial")
camera.set_rational_polynomial_properties(width, height, cx, cy, diagonal_fov, distortion_coefficients)

keys = og.Controller.Keys
(camera_graph, _, _, _) = og.Controller.edit(
    {
        "graph_path": "/World/camera/camera_graph",
        "evaluator_name": "push",
        "pipeline_stage": og.GraphPipelineStage.GRAPH_PIPELINE_STAGE_ONDEMAND,
    },
    {
        keys.CREATE_NODES: [
            ("onTick", "omni.graph.action.OnTick"),
            ("createViewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
            ("getRenderProduct", "omni.isaac.core_nodes.IsaacGetViewportRenderProduct"),
            ("setCamera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
            ("cameraHelperRgb", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ("cameraHelperInfo", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
        ],
        keys.CONNECT: [
            ("onTick.outputs:tick", "createViewport.inputs:execIn"),
            ("createViewport.outputs:execOut", "getRenderProduct.inputs:execIn"),
            ("createViewport.outputs:viewport", "getRenderProduct.inputs:viewport"),
            ("getRenderProduct.outputs:execOut", "setCamera.inputs:execIn"),
            ("getRenderProduct.outputs:renderProductPath", "setCamera.inputs:renderProductPath"),
            ("setCamera.outputs:execOut", "cameraHelperRgb.inputs:execIn"),
            ("setCamera.outputs:execOut", "cameraHelperInfo.inputs:execIn"),
            ("getRenderProduct.outputs:renderProductPath", "cameraHelperRgb.inputs:renderProductPath"),
            ("getRenderProduct.outputs:renderProductPath", "cameraHelperInfo.inputs:renderProductPath"),
        ],
        keys.SET_VALUES: [
            ("createViewport.inputs:viewportId", 1),
            ("setCamera.inputs:cameraPrim", [usdrt.Sdf.Path("/World/camera")]),
            ("cameraHelperRgb.inputs:frameId", "sim_camera"),
            ("cameraHelperRgb.inputs:topicName", "rgb"),
            ("cameraHelperRgb.inputs:type", "rgb"),
            ("cameraHelperInfo.inputs:frameId", "sim_camera"),
            ("cameraHelperInfo.inputs:topicName", "camera_info"),
            ("cameraHelperInfo.inputs:type", "camera_info"),
        ],
    },
)

# Run the ROS Camera graph once to generate ROS image publishers in SDGPipeline
og.Controller.evaluate_sync(camera_graph)
simulation_app.update()

Hello,

Thank you for providing a detailed explanation and code snippet! I ran Isaac Sim 2023.1.1 with a test scene and the following standalone script:

from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": False})

from omni.isaac.core.objects import DynamicCuboid
from omni.isaac.sensor import Camera
from omni.isaac.core import World
import omni.isaac.core.utils.numpy.rotations as rot_utils
import numpy as np
import matplotlib.pyplot as plt
from omni.isaac.core.utils.stage import open_stage
from omni.isaac.core.utils.nucleus import get_assets_root_path


open_stage(get_assets_root_path() + "/NVIDIA/Assets/Scenes/Templates/Outdoor/Puddles.usd")
my_world = World(stage_units_in_meters=1.0)

width, height = 612, 512
camera_matrix = [[605.4796091670199, 0.0, 305.6297619145443], [0.0, 605.6656786507847, 260.2272299567676], [0.0, 0.0, 1.0]]
distortion_coefficients = [-0.3818920627803916, 0.15178401791432322, -0.00017830161196350367, 0.0002184177735682275, 0.0, 0.0, 0.0, 0.0]


camera = Camera(
    prim_path="/World/camera",
    position=np.array([60.0, 60.0, 100]),
    frequency=20,
    resolution=(width, height),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
)

camera.initialize()

# To disable the depth of field effect, set the f_stop to 0.0. This is useful for debugging.
pixel_size = 3.45 * 4  # multiplied by 4 as we have binning of 4 in x and y
f_stop = 2.8  # f-number, the ratio of the lens focal length to the diameter of the entrance pupil
focus_distance = 0.1  # in meters, the distance from the camera to the object plane
diagonal_fov = 77  # in degrees, the diagonal field of view to be rendered

((fx,_,cx),(_,fy,cy),(_,_,_)) = camera_matrix
horizontal_aperture = pixel_size * 1e-3 * width
vertical_aperture = pixel_size * 1e-3 * height
focal_length_x = fx * pixel_size * 1e-3
focal_length_y = fy * pixel_size * 1e-3
focal_length = (focal_length_x + focal_length_y) / 2  # in mm

# Set the camera parameters, note the unit conversion between Isaac Sim sensor and Kit
camera.set_focal_length(focal_length / 10.0)
camera.set_focus_distance(focus_distance)
camera.set_lens_aperture(f_stop * 100.0)
camera.set_horizontal_aperture(horizontal_aperture / 10.0)
camera.set_vertical_aperture(vertical_aperture / 10.0)

camera.set_clipping_range(0.05, 1.0e5)

# Set the distortion coefficients
camera.set_projection_type("fisheyePolynomial")
camera.set_rational_polynomial_properties(width, height, cx, cy, diagonal_fov, distortion_coefficients)

my_world.play()
while my_world.is_playing():
    my_world.step(render=True)
    render_time = camera.get_current_frame()["rendering_time"]
    print(f"rendering time is: {render_time}")

I was able to reproduce your first issue - the camera does not focus when set_lens_aperture is set to any value other than 0.


I was unable to reproduce your second issue - after setting the distortion coefficients, I did not observe a vignette on the image, even when I set the lens aperture back to 0 (see second image).

I was also unable to reproduce your third issue (camera frequency) - the renderer may render every frame, but the data stored in the get_current_frame() dictionary is updated at 20 Hz as expected.
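
As a rough way to quantify this, you can count how many distinct rendering_time stamps appear over a fixed number of world steps. This is a quick sketch that assumes the default 60 rendering steps per simulated second:

# Count distinct camera frame updates over 120 world steps (~2 simulated seconds at 60 steps/s)
prev_time, updates = None, 0
for _ in range(120):
    my_world.step(render=True)
    t = camera.get_current_frame()["rendering_time"]
    if t != prev_time:
        updates += 1
        prev_time = t
print(f"distinct camera frame updates over 120 steps: {updates}")  # roughly 40 expected at 20 Hz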

I also went ahead and tried the same script in Isaac Sim 4.1 and got the following image:

I suspect the behavior you observed is a bug that was fixed in a later release.

Thanks for getting back to me @adevalla! Perhaps the second and third issues are related to ROS, since the ROS publishers run independently of the Camera object. I will add a ROS publisher to the code snippet you attached and get back to you on this.