Synchronizing multi-camera data publishing

My goal is to collect data in a simulation environment at 30 Hz: 3 cameras plus the robot’s joint_states (the robot is controlled via the /joint_commands topic using the Robot API class).

Currently, I am doing this by publishing everything through ROS 2 and recording. I followed this tutorial. However, when using 3 cameras this way, the following issues arise:

  1. The camera output frequency is unstable. (Do you have any suggestions for publishing camera data to ROS 2 at a stable frequency that is independent of the FPS?)

  2. I notice that the frames from the 3 cameras are not synchronized, and sometimes a frame from one of the cameras is lost.

I also looked into synthetic data (following this tutorial), but I find that the frequency is not high and it seems that corresponding joint_states cannot be recorded.

I would like to get answers to the two questions above. Additionally, for my goal, is there another way to achieve this?
I would greatly appreciate any feedback from everyone. Thank you very much.

Hi @nt.anh311202,

you’re basically trying to solve three things at once:

  1. Make the cameras publish at a stable 30 Hz (independent of render FPS)

  2. Ensure all 3 cameras are time-synchronized

  3. Record joint_states aligned with those images


1. Make camera publish rate explicit and fixed

Rather than letting the camera publish “whenever it renders,” you want to control the publish timing from the simulation / OmniGraph.

Steps:

  1. Go through the ROS2 Setting Publish Rates tutorial and follow the pattern it uses to decouple the publish rate from the render rate.

  2. In your Action Graph / OmniGraph:

    • Use On Playback Tick as the main trigger.

    • Use an “accumulator / modulo” or a Rate / Gate node (as in the publish-rate tutorial) so that:

      • Simulation can run at, say, 60 Hz (for smoother physics / rendering)

      • Cameras only publish every 2nd tick → 30 Hz effective publish rate

  3. Alternatively, follow the Publishing Camera’s Data tutorial and explicitly set the camera publish period to 1/30.0 seconds in the script / graph they show. Publishing Camera’s Data — Isaac Sim Documentation

That way, your camera publish rate is not implicitly tied to render FPS anymore.
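The two gating strategies can be sketched outside Isaac Sim as plain Python; the `publish` callback and the tick loop are stand-ins for the OmniGraph wiring, not Isaac Sim API:

```python
def make_tick_gate(publish_every_n):
    """Fire on every Nth tick -- the same idea as the Isaac Simulation
    Gate node's 'step' input. Only stable if the tick rate is stable."""
    state = {"tick": 0}

    def on_tick(publish):
        if state["tick"] % publish_every_n == 0:
            publish()
        state["tick"] += 1

    return on_tick


def make_time_gate(period_s):
    """Accumulate simulation time and fire whenever a full period has
    elapsed -- robust to a varying render/tick rate."""
    state = {"acc": 0.0}

    def on_tick(sim_dt, publish):
        state["acc"] += sim_dt
        if state["acc"] >= period_s:
            state["acc"] -= period_s  # keep the leftover, don't reset to 0
            publish()

    return on_tick
```

The time-based variant is the one that stays at ~30 Hz in *simulation time* even when the render rate fluctuates, which matters for the stability question above.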


2. Ensure all 3 cameras are synchronized

You want all three camera messages to:

  • Be driven by the same tick, and

  • Use the same timestamp source

On the Isaac Sim side:

  1. Make sure all 3 camera graphs:

    • Are triggered by the same On Playback Tick (or the same upstream node coming from it), not separate timers.

    • Use the same publish-rate gating logic (same “every Nth tick” node).

  2. Use Isaac Read Simulation Time (or equivalent) and feed its time output into:

    • Each ROS2 Publish Image node’s timestamp input

    • This ensures all cameras get identical header.stamp at that tick.

On the ROS 2 side:

If you later process data in ROS 2:

  • Use message_filters with ApproximateTime or ExactTime sync on the three sensor_msgs/Image topics, using header.stamp.

  • This will group the three camera frames that share the same simulation timestamp.
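As a rough illustration of what the synchronizer does (a simplified stand-in, not the actual message_filters implementation), grouping one message per topic whose stamps agree within a slop:

```python
def group_by_stamp(streams, slop=0.0):
    """Group one message per stream when all head stamps agree within
    `slop` seconds (ExactTime behavior when slop == 0).
    `streams` maps topic name -> list of (stamp, payload) tuples."""
    queues = {name: list(msgs) for name, msgs in streams.items()}
    groups = []
    while all(queues.values()):  # every stream still has messages queued
        stamps = [q[0][0] for q in queues.values()]
        if max(stamps) - min(stamps) <= slop:
            # All heads agree: emit one synchronized group.
            groups.append({name: q.pop(0) for name, q in queues.items()})
        else:
            # The oldest head can never be matched (its partner frame was
            # dropped on another stream), so discard it.
            oldest = min(queues, key=lambda n: queues[n][0][0])
            queues[oldest].pop(0)
    return groups
```

Note how a frame lost on one camera simply causes the unmatched stamps on the other two to be discarded, which is exactly the behavior you want when one camera occasionally drops a frame.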


3. Synchronize joint_states with the cameras

You also want joint_states stamped with the same simulation time so that you can associate each image triplet with a specific robot configuration.

On the Isaac Sim side:

  1. Follow the ROS2 Joint State / Manipulation tutorial to set up ROS2 Publish Joint State. ROS2 Joint Control: Extension Python Scripting — Isaac Sim Documentation

  2. Important wiring:

    • Use the same On Playback Tick as for the cameras to trigger ROS2 Publish Joint State.

    • Feed the same Simulation Time output into the timestamp input of the ROS2 Publish Joint State node.

  3. Set its publish rate to 30 Hz as well (using the same rate/gate logic as the cameras).

Now, for each simulation tick where you publish the cameras, you also publish one joint_states message with the same header.stamp.


4. Reduce dropped frames and unstable frequency

If you still see missed frames or unstable ros2 topic hz for the cameras:

  1. Lower the cost per frame:

    • Reduce camera resolution, or disable render products / annotators you don’t need.

    • Run headless if you don’t need the viewport.

  2. Adjust QoS for camera topics:

    • Use best_effort and keep_last with depth 1–2 for high-bandwidth image topics.

    • This avoids the system building up a queue of old frames and causing “bursty” behavior.

  3. Monitor bandwidth and timings:

    • ros2 topic hz /your_camera_topic

    • ros2 topic bw /your_camera_topic

    • Keep an eye on CPU & GPU usage in nvidia-smi / system monitor.
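If you record with ros2 bag, the QoS adjustment from point 2 can also be applied on the recorder’s subscriptions via a YAML override file passed with `--qos-profile-overrides-path`. A sketch (the topic names are examples; substitute your own):

```yaml
# qos_overrides.yaml -- pass to: ros2 bag record --qos-profile-overrides-path qos_overrides.yaml ...
/head_cam/rgb:
  reliability: best_effort
  history: keep_last
  depth: 2
/left_wrist_cam/rgb:
  reliability: best_effort
  history: keep_last
  depth: 2
/right_wrist_cam/rgb:
  reliability: best_effort
  history: keep_last
  depth: 2
```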


5. About synthetic data & joint_states

You mentioned trying the synthetic data pipeline:

  • Synthetic data / Replicator is great for annotated images, but often runs at lower FPS and doesn’t automatically tie in joint_states.

  • For your specific goal (3 cameras + joint_states at 30 Hz), the ROS 2 bridge with explicit publish-rate and timestamp wiring is usually the most straightforward, because:

    • Images and joint_states share the same sim time,

    • You can record everything with a single ros2 bag record.

If you really need Replicator annotations plus joint states, a typical pattern is:

  • Use Replicator / synthetic data for images + labels

  • At the same ticks, use OmniGraph/Python to query the robot articulation and log joint positions to a separate CSV/ROS bag using the same simulation timestamps.
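A minimal sketch of that CSV logging idea; the joint names and values here are dummies, and in Isaac Sim the row would be written inside the per-tick callback after reading the articulation’s joint positions and the current simulation time:

```python
import csv
import io


def log_joint_row(writer, sim_time, positions):
    """Append one row: simulation timestamp followed by joint positions.
    Rows can later be matched to images via the shared sim timestamp."""
    writer.writerow([f"{sim_time:.6f}"] + [f"{p:.6f}" for p in positions])


# Demo with an in-memory buffer and dummy values; replace io.StringIO with
# an open file, and the dummy values with articulation joint readings.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["sim_time", "shoulder_pitch", "elbow_pitch"])  # hypothetical joint names
log_joint_row(writer, 1 / 30, [0.10, -0.25])
log_joint_row(writer, 2 / 30, [0.12, -0.24])
```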

Regards,

Embedded Software Engineer
RidgeRun | https://www.ridgerun.com/

Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/

Thank you very much for your detailed explanation. I have tried it, but I would like you to clarify a few more points. (For all experiments, the cameras run at 640×480 with focal_length = 10 and focus_distance = 400.)

  1. Make camera publish rate explicit and fixed: I have also tried your approach, but I found that it only works well if your simulation runs stably at 60 Hz. I initialize a world using the command:
    my_world = World(stage_units_in_meters=1.0, physics_dt=1 / 200, rendering_dt=1 / 60)
    Additionally, I have added a robot and a scene to the environment. Depending on the scene, the FPS can be low or high, but it is not stable at any specific value, so I cannot apply the method you suggested (it might be 37, 44, 49… I cannot pin it down), even when I run in headless mode. With this setup, have I configured the renderer correctly to run at 60 Hz? Sometimes I even see it go up to 70–75 Hz.

    According to the “Publishing Camera’s Data” documentation, I can change the publish rate via the step_size line. However, as I mentioned, it still depends on the FPS of the running simulation.

    My goal is to have stability regardless of whether the simulation runs slow or fast. I have added a Gate, and the frequency is around 56–57 Hz, but overall it still depends on the FPS. Please let me know if I am doing something wrong. Thank you.

2. Camera synchronization
I agree with your approach of driving everything from the same tick, but the previous issue still remains: if the FPS drops, the frequency changes. There is another problem I want to ask about: when I add 3 cameras, the FPS drops significantly if my scene contains additional objects. It becomes low but still unstable and jumps around randomly (for example, opening another application while the simulation runs also affects the render speed).

With the code below, I notice that the FPS frequency doesn’t drop when adding the 3 cameras, but it is not synchronized.

import omni
import numpy as np
from isaacsim.sensors.camera import Camera
import isaacsim.core.utils.numpy.rotations as rot_utils
import omni.replicator.core as rep
import omni.syntheticdata._syntheticdata as sd
import omni.graph.core as og

class CameraCfg:
    def publish_rgb(self, camera: Camera, freq):
        render_product = camera._render_product_path
        # Step is in rendered frames: the gate lets every step_size-th frame through.
        step_size = int(30 / freq)

        rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
        writer = rep.writers.get(rv + "ROS2PublishImage")
        writer.initialize(
            frameId=camera.prim_path.split("/")[-1],  # matches what the TF tree publishes
            nodeNamespace="",
            queueSize=1,
            topicName=camera.name + "/rgb",
        )
        writer.attach([render_product])

        # Set the step input of the Isaac Simulation Gate node upstream of the
        # ROS publisher to control its execution rate.
        gate_path = omni.syntheticdata.SyntheticData._get_node_path(
            rv + "IsaacSimulationGate", render_product
        )
        og.Controller.attribute(gate_path + ".inputs:step").set(step_size)

    def add_camera(self, simulation_app):
        self.head_camera = Camera(
            prim_path="/World/UpperBody/torso_link/head_yaw_link/head_pitch_link/head_camera",
            name="head_cam",
            translation=np.array([0.1, 0.0, 0.054]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 60, 0]), degrees=True),
        )
        self.left_wrist_camera = Camera(
            prim_path="/World/UpperBody/left_wrist_roll_link/left_wrist_camera",
            name="left_wrist_cam",
            translation=np.array([0.06, 0.0, -0.04]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
        )
        self.right_wrist_camera = Camera(
            prim_path="/World/UpperBody/right_wrist_roll_link/right_wrist_camera",
            name="right_wrist_cam",
            translation=np.array([0.06, 0.0, -0.04]),
            frequency=30,
            resolution=(640, 480),
            orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
        )
        simulation_app.update()
        self.head_camera.initialize()
        self.left_wrist_camera.initialize()
        self.right_wrist_camera.initialize()
        return [self.head_camera, self.left_wrist_camera, self.right_wrist_camera]
    
    def config_camera(self):
        # Identical lens settings for all three cameras except focal length.
        for cam, focal_length in [
            (self.head_camera, 7.6),
            (self.left_wrist_camera, 12.0),
            (self.right_wrist_camera, 12.0),
        ]:
            cam.set_focal_length(focal_length)
            cam.set_focus_distance(400.0)
            cam.set_lens_aperture(0.0)
            cam.set_horizontal_aperture(20.0)
            cam.set_vertical_aperture(15.0)
            cam.set_clipping_range(0.1, 1e5)
            cam.set_projection_type("pinhole")
            cam.set_stereo_role("mono")
            cam.set_shutter_properties(0.0, 0.0)



Hi @nt.anh311202,

Thanks for the update and the extra details.

To understand what is still going wrong, could you please share:

  1. Your Isaac Sim version and ROS 2 distro

  2. The resolution and FPS you configured for each of the three cameras

  3. A screenshot of your Action Graph / OmniGraph that shows:

    • On Playback Tick

    • Read Simulation Time (or equivalent time node)

    • All ROS2 Publish Image nodes for the three cameras

    • The ROS2 Publish Joint State node (if present)

    • Any Rate / Gate / modulo accumulator nodes you are using

In addition, please double-check these points and let me know what you observe:

  1. All camera and joint_state publishers are triggered by the same On Playback Tick (one common tick source, not separate ones).

  2. The Simulation Time output is wired into the timestamp input of every ROS2 Publish Image node and the ROS2 Publish Joint State node, so all messages on a given tick share the same header.stamp.

  3. The rate-limiting logic (Gate or modulo) is applied once, upstream of all three cameras, so they are opened/allowed on the same ticks.

  4. On the ROS 2 side, if you run

    • ros2 topic hz on each camera topic, and

    • ros2 topic hz on joint_states,
      do they all report close to 30 Hz, or is one of them significantly lower or more jittery?

If you can attach the graph screenshot and the ros2 topic hz outputs, I can point to the exact place where the desynchronization is happening and suggest a concrete wiring change.

Regards,

Embedded Software Engineer


Thank you very much, I have solved it