Thank you very much for your detailed explanation. I have tried it, but I need you to clarify a few more issues. (For all experiments, the camera is at a resolution of 640×480, focal_length = 10, focus_distance = 400.)
1. Make camera publish rate explicit and fixed
I have also tried your approach, but I found that it only works well if the simulation runs stably at 60 Hz. I initialize the world with:
my_world = World(stage_units_in_meters=1.0, physics_dt=1 / 200, rendering_dt=1 / 60)
Additionally, I have added a robot and a scene to the environment. Depending on the scene, the FPS can be low or high, but it never settles at a single value I could rely on for the method you suggested (it might be 37, 44, 49... a number I cannot pin down), even in headless mode. With this setup, have I configured the renderer correctly to run at 60 Hz? Sometimes I even see it reach 70-75 Hz.
According to the "Publishing Camera's Data" documentation, I can change the publish rate via the step_size line. However, as I mentioned, the result still depends on the FPS of the running simulation.
My goal is a stable publish rate regardless of whether the simulation runs slow or fast. I added a Gate, and the frequency settles around 56-57 Hz, but overall it still tracks the FPS. Please let me know if I am doing something wrong. Thank you.
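To make the dependence I am describing concrete, here is a minimal sketch (plain Python, nothing Isaac-specific; the function name and the FPS values are my own illustration) of how a gate step computed from an assumed render rate turns render-rate jitter into publish-rate jitter:

```python
def publish_rate(render_fps: float, target_freq: float, assumed_fps: float = 60.0) -> float:
    """Effective publish rate when the gate step is computed from an
    assumed render rate but the renderer actually runs at render_fps."""
    step_size = max(1, int(assumed_fps / target_freq))  # e.g. 60/30 -> publish every 2nd frame
    return render_fps / step_size

# Only when the renderer really runs at the assumed 60 FPS is the target met:
print(publish_rate(60.0, 30.0))  # 30.0
# At the unstable rates I observe, the publish rate drifts with the FPS:
for fps in (37.0, 44.0, 49.0, 75.0):
    print(fps, "->", publish_rate(fps, 30.0))
```

This is exactly why I am asking: with a fixed step_size, every FPS fluctuation shows up directly in the published frequency.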
2. Cameras are synchronized
I agree with your approach of driving all cameras from the same tick, but the previous issue remains: if the FPS drops, the publish frequency changes with it. There is another problem I want to ask about: when I add 3 cameras, the FPS drops significantly if my scene contains additional objects. It becomes low but still unstable and jumps around randomly (for example, opening another application while the simulation is running also affects the render speed).
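To be precise about what I mean by "not synchronized", this is the kind of check I apply to the header stamps of the three image topics (plain Python; the helper name and the stamp values are made-up for illustration):

```python
def max_stamp_spread(stamps_per_camera):
    """For each frame index, the spread between the earliest and the latest
    camera stamp; cameras driven by the same tick should give a spread of ~0."""
    return [max(frame) - min(frame) for frame in zip(*stamps_per_camera)]

head  = [0.000, 0.033, 0.066]
left  = [0.000, 0.033, 0.066]
right = [0.016, 0.049, 0.082]  # lagging about half a frame behind the others

print(max_stamp_spread([head, left, right]))  # nonzero spread -> not synchronized
```

With my setup below, the spread I observe between the three cameras is consistently nonzero.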
With the code below, the FPS does not drop when I add the 3 cameras, but their output is not synchronized.
import omni
import numpy as np
from isaacsim.sensors.camera import Camera
import isaacsim.core.utils.numpy.rotations as rot_utils
import omni.replicator.core as rep
import omni.syntheticdata._syntheticdata as sd
import omni.graph.core as og
class CameraCfg:
    def publish_rgb(self, camera: Camera, freq):
        # Render product that backs this camera.
        render_product = camera._render_product_path
        # Publish once every step_size render frames.
        # NOTE: the 30 here assumes the renderer runs at 30 FPS; if the
        # actual render rate differs, the published frequency drifts with it.
        step_size = int(30 / freq)
        rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
            sd.SensorType.Rgb.name
        )
        writer = rep.writers.get(rv + "ROS2PublishImage")
        writer.initialize(
            frameId=camera.prim_path.split("/")[-1],  # matches what the TF tree publishes
            nodeNamespace="",
            queueSize=1,
            topicName=camera.name + "/rgb",
        )
        writer.attach([render_product])
        # Set the step input of the IsaacSimulationGate node upstream of the
        # ROS publisher to control its execution rate.
        gate_path = omni.syntheticdata.SyntheticData._get_node_path(
            rv + "IsaacSimulationGate", render_product
        )
        og.Controller.attribute(gate_path + ".inputs:step").set(step_size)
    def add_camera(self, simulation_app):
        self.head_camera = Camera(
            prim_path="/World/UpperBody/torso_link/head_yaw_link/head_pitch_link/head_camera",
            name="head_cam",
            translation=np.array([0.1, 0.0, 0.054]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 60, 0]), degrees=True),
        )
        self.left_wrist_camera = Camera(
            prim_path="/World/UpperBody/left_wrist_roll_link/left_wrist_camera",
            name="left_wrist_cam",
            translation=np.array([0.06, 0.0, -0.04]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
        )
        self.right_wrist_camera = Camera(
            prim_path="/World/UpperBody/right_wrist_roll_link/right_wrist_camera",
            name="right_wrist_cam",
            translation=np.array([0.06, 0.0, -0.04]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
        )
        simulation_app.update()
        self.head_camera.initialize()
        self.left_wrist_camera.initialize()
        self.right_wrist_camera.initialize()
        return [self.head_camera, self.left_wrist_camera, self.right_wrist_camera]
    def config_camera(self):
        # Identical optics for all three cameras except the focal length
        # (head: 7.6 mm, wrists: 12.0 mm).
        for camera, focal_length in (
            (self.head_camera, 7.6),
            (self.left_wrist_camera, 12.0),
            (self.right_wrist_camera, 12.0),
        ):
            camera.set_focal_length(focal_length)
            camera.set_focus_distance(400.0)
            camera.set_lens_aperture(0.0)
            camera.set_horizontal_aperture(20.0)
            camera.set_vertical_aperture(15.0)
            camera.set_clipping_range(0.1, 1e5)
            camera.set_projection_type("pinhole")
            camera.set_stereo_role("mono")
            camera.set_shutter_properties(0.0, 0.0)