ROS2 Lidar PointClouds with Per-Point Timestamps in Isaac Sim

Hi everyone,

We are currently working on a project where we use Isaac Sim to generate synthetic datasets with a mobile robot. As part of our pipeline, we aim to use KISS-ICP for odometry estimation, which requires per-point timestamps in the lidar point clouds for proper deskewing and motion compensation.

I found an earlier discussion here: Publish ROS2 lidar pointclouds with x, y, z, timestamp fields (2022), where it was mentioned that Isaac Sim’s default ROS2 point cloud publisher does not support per-point timestamps and that a workaround would involve creating a custom publisher.

Since that post was from 2022, I would like to ask:

  • Is there any official update or new feature in Isaac Sim that now supports publishing lidar point clouds with per-point timestamps via ROS2?
  • If not, are there any recommended best practices for simulating realistic timestamped lidar data within Isaac Sim today?

Any pointers, examples, or official roadmap insights would be greatly appreciated!

Thanks in advance for your support!

@manavt2000 let me reach out to our internal team about your question!

@manavt2000 Unfortunately we still don’t support this feature. We also don’t have any examples doing this. I will file a feature request to the internal team.
At the same time, you can write a customized OmniGraph node to mimic this; see Custom Python Nodes — Isaac Sim Documentation.
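To make the idea concrete, here is a minimal sketch of the per-point data layout such a custom publisher could pack, using a NumPy structured dtype. The field names (x, y, z, time) are assumptions modeled on common ROS2 lidar drivers, not an Isaac Sim API:

```python
import numpy as np

# Assumed layout: each point carries x/y/z plus a per-point time offset,
# similar to the "time" field many ROS2 lidar drivers emit.
POINT_DTYPE = np.dtype([
    ("x", np.float32),
    ("y", np.float32),
    ("z", np.float32),
    ("time", np.float32),  # seconds relative to the scan start
])

def pack_points(xyz, times):
    """Pack Nx3 coordinates and N time offsets into a PointCloud2-style byte buffer."""
    pts = np.empty(len(times), dtype=POINT_DTYPE)
    pts["x"], pts["y"], pts["z"] = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    pts["time"] = times
    # This buffer would go into PointCloud2.data, with point_step = POINT_DTYPE.itemsize
    return pts.tobytes()

xyz = np.zeros((5, 3), dtype=np.float32)
buf = pack_points(xyz, np.linspace(0.0, 0.04, 5, endpoint=False))
print(len(buf), POINT_DTYPE.itemsize)  # 80 16
```

On the ROS2 side, a matching PointCloud2 message would declare four PointField entries (one per dtype field) with the same byte offsets, so consumers like KISS-ICP can read the per-point time directly.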

You could just brute-force this: take the total number of points per complete scan and the sensor rate in Hz, then give each point a relative time within the scan.

Say you have 10 points per scan at 2 Hz. Each point gets an evenly spaced offset:

import numpy as np

def assign_timestamps(points_per_scan, scan_rate_hz):
    """Spread per-point time offsets evenly across one scan period."""
    scan_duration = 1.0 / scan_rate_hz
    return np.linspace(0, scan_duration, points_per_scan, endpoint=False)

Example:

timestamps = assign_timestamps(points_per_scan=10, scan_rate_hz=2.0)
print(timestamps)

[0. 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45]

You could also access the firing pattern of the RTX sensor config.

It would be computationally intensive, but I don’t know how much data you need there.
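As a sketch of the firing-pattern idea for a spinning lidar: assuming each point carries an azimuth and the scan sweeps azimuth linearly over one revolution, the azimuth maps directly to a time offset (example values, not an Isaac Sim API):

```python
import numpy as np

def azimuth_to_offset(azimuth_deg, scan_rate_hz):
    """Map each point's azimuth (0-360 deg) to a time offset within one revolution."""
    rev_duration = 1.0 / scan_rate_hz
    return (np.asarray(azimuth_deg) % 360.0) / 360.0 * rev_duration

offsets = azimuth_to_offset([0.0, 90.0, 180.0, 270.0], scan_rate_hz=10.0)
print(offsets)  # approximately [0, 0.025, 0.05, 0.075] seconds
```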

I don’t even think you need subframes. Just add each point’s relative offset to the sim time.
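A minimal sketch of that, with a hypothetical scan-start sim time:

```python
import numpy as np

sim_time = 12.5  # hypothetical scan-start sim time in seconds
offsets = np.linspace(0.0, 0.5, 10, endpoint=False)  # per-point offsets for a 2 Hz scan
absolute_stamps = sim_time + offsets  # absolute per-point timestamps
print(absolute_stamps)
```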

But one point to note here: is your sensor operating on a buffer or a direct scan? Something to consider.

My code might be a little off for your use case, but it’s an example. Hope this helps!

Apologies, I re-read this part:

“deskewing and motion compensation. I found an earlier discussion here: Publish ROS2 lidar”

Use subframes then. You would need to calculate the max rotation rate vs. the frame resolution for your subframe amount.

It needs valid data to extract the previously mentioned points against the velocity of the rotation.

Probably trial and error. Too many subframes would be a computational nightmare, and too few would yield inaccurate data.
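To put rough numbers on that trade-off, one way to size the subframe count is to require that the sensor rotates no more than one beam’s angular resolution between subframes (the rates and resolution below are example values):

```python
import math

def min_subframes(scan_rate_hz, angular_resolution_deg, render_rate_hz):
    """Subframes per render frame so rotation per subframe stays within the angular resolution."""
    deg_per_render_frame = 360.0 * scan_rate_hz / render_rate_hz
    return math.ceil(deg_per_render_frame / angular_resolution_deg)

# e.g. 10 Hz lidar, 0.25 deg resolution, 60 Hz render: 60 deg per frame -> 240 subframes
print(min_subframes(10.0, 0.25, 60.0))  # 240
```

Numbers like this show why it blows up quickly: halving the angular resolution doubles the required subframe count.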

I would also presume that you are using a point stream rather than a buffer frame? I’m not confident on that interpretation.

Or use SLAM and quantize the space if you have a fast enough IMU/gyro.