RTXLidar PointCloud2 timestamp field: 2-component value and odd behavior

6.0.0
5.1.0
5.0.0
4.5.0
4.2.0
4.1.0
4.0.0
2023.1.1
2023.1.0-hotfix.1
Other (please specify):

Operating System

Ubuntu 24.04
Ubuntu 22.04
Ubuntu 20.04
Windows 11
Windows 10
Other (please specify):

Hello!

I’m trying to wrap my head around the meaning of the timestamp field in the PointCloud2 message that isaacsim.ros2.bridge.ROS2RtxLidarHelper can produce. In the previous Isaac Sim 5.1 version I observed a single timestamp value (datatype=7, count=1), which was a real time in nanoseconds that only increased as the simulation progressed. In the new version it turns out to be datatype=6, count=2, with the bizarre pattern I describe below.

I suppose the UINT64 timestamp value is now simply ‘cut’ into two UINT32 values to satisfy the PointField requirements.

Here is a snippet of the print output for the PointCloud2 messages captured during the simulation, in the following format:

(x, y, z, intensity, timestamp_part_1, timestamp_part_2)

(1.5425948, -2.9681783, 1.5396805, 0.04185877, 1602963816, 4)

(3.1306713, -4.4783792, 2.7734396, 0.08034899, 1602968816, 4)

(4.5702057, -7.0407243, 4.1359954, 0.06718511, 1602973816, 4)

(4.2655025, -7.120647, 3.9457197, 0.06873772, 1602978816, 4)

(3.8933194, -7.099444, 3.7280412, 0.05663586, 1602983816, 4)

(4.667353, -6.84623, 4.0690494, 0.08138476, 1602993816, 4)

(4.5024023, -7.141394, 4.022366, 0.0678768, 1602998816, 4)

The most intriguing thing is that the first part rises up to a certain value (~4e9), then resets to 0 and starts climbing again, and this repeats in a cycle!

Could you please clarify these fields and this weird behavior?

Thanks!

Hi @j.silhouette,

Good question — what you’re seeing is actually the expected behavior in 6.0.0.

In Isaac Sim 5.1, the timestamp was encoded as a single FLOAT32 (datatype=7, count=1). This was technically lossy — FLOAT32 only has ~23 bits of mantissa, so for large nanosecond values (billions), you’d lose precision.
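To make the precision loss concrete, here is a small round-trip sketch in plain Python, using a representative nanosecond timestamp:

```python
import struct

# A representative nanosecond timestamp (~18.78 s into a simulation)
ts_ns = 18_782_833_000

# Round-trip through FLOAT32, as the old single-field encoding did
as_f32 = struct.unpack('<f', struct.pack('<f', float(ts_ns)))[0]

print(int(as_f32))          # 18782832640
print(int(as_f32) - ts_ns)  # -360 -> hundreds of nanoseconds lost per value
```

At this magnitude the FLOAT32 spacing between representable values is 2048 ns, so individual point timings below the microsecond level were silently rounded away.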

In 6.0.0, this was improved to use two UINT32 values (datatype=6, count=2), which preserves the full 64-bit precision:

  • timestamp[0] = low 32 bits of the nanosecond timestamp
  • timestamp[1] = high 32 bits of the nanosecond timestamp

To reconstruct the full timestamp:

  # Reconstruct the full 64-bit nanosecond timestamp
  timestamp_ns = int(timestamp_part_2) * (2**32) + int(timestamp_part_1)

  # Or equivalently, by repacking the raw bytes:
  import struct
  timestamp_ns = struct.unpack('<Q', struct.pack('<II', timestamp_part_1, timestamp_part_2))[0]
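For example, applying either formula to the first few (timestamp_part_1, timestamp_part_2) pairs from your dump:

```python
import struct

# (low word, high word) pairs copied from the message dump above
parts = [(1602963816, 4), (1602968816, 4), (1602973816, 4)]

def combine(low, high):
    """Reassemble the 64-bit nanosecond timestamp from the two UINT32 fields."""
    return struct.unpack('<Q', struct.pack('<II', low, high))[0]

timestamps_ns = [combine(lo, hi) for lo, hi in parts]
print(timestamps_ns)           # [18782833000, 18782838000, 18782843000]
print(timestamps_ns[0] / 1e9)  # ~18.78 seconds into the simulation
```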

Using your example data:

  4 * 4294967296 + 1602963816 = 18,782,833,000 ns ≈ 18.78 seconds into the simulation

Why part_1 “wraps”: UINT32 max is 4,294,967,295 (~4.29×10⁹). When the low 32-bit part overflows, it resets to 0 and the high part increments. With nanosecond timestamps this happens every ~4.295 seconds of simulation time. It is standard binary counter behavior, not a bug.
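A quick sketch of that wrap, splitting a timestamp the same way (assuming a plain low/high word split of the 64-bit value):

```python
def split(ts_ns):
    # Split a 64-bit nanosecond timestamp into (low word, high word)
    return ts_ns & 0xFFFFFFFF, ts_ns >> 32

before = 5 * 2**32 - 1000  # 1000 ns before a 2**32 boundary
after = 5 * 2**32 + 1000   # 1000 ns after it

print(split(before))  # (4294966296, 4) -> low word near its max
print(split(after))   # (1000, 5)       -> low word reset, high word bumped
```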

The 5000 ns delta between consecutive points in your data is the per-point firing time offset within the lidar scan rotation, which is correct.
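You can check this directly in your sample: the high word is constant across those points, so the low-word differences are the full nanosecond deltas:

```python
# Low words copied from the first five points in the dump above
low_words = [1602963816, 1602968816, 1602973816, 1602978816, 1602983816]

deltas = [b - a for a, b in zip(low_words, low_words[1:])]
print(deltas)  # [5000, 5000, 5000, 5000] -> 5000 ns between consecutive returns
```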

That said, I agree this change could use better documentation. I’ll flag this internally for a docs update.

@zhengwang Awesome, thanks a lot for the clarification!