Manipulating the scan buffer of a LiDAR

I have the following LiDAR setup:

            self.lidar = LidarRtx(...)  # constructor arguments truncated in the original post
            render_product_path = self.lidar.get_render_product_path()

            self.annotator = rep.AnnotatorRegistry.get_annotator(
                "RtxSensorCpuIsaacCreateRTXLidarScanBuffer",
                init_params={
                    "outputObjectId": True,
                    "keepOnlyPositiveDistance": False,
                    "transformPoints": True,
                    "outputTimestamp": True,
                },
            )
            self.annotator.attach([render_product_path])

and fetch the point cloud like so:

            data = self.annotator.get_data(do_array_copy=True)

Resulting in an expected scan after a single 360-degree rotation:

However, once the full scan has been collected, the sensor is programmatically translated to another location and is supposed to perform a new 360-degree scan, which results in the following:

The first scan is still in the buffer while the new one starts to appear. Moreover, running the same number of timesteps as for the first scan ends in a situation where the second scan is not finished and the third has already started: within a few simulation timesteps, the third point cloud starts to take over the buffer:

I’d like to ask two questions:

  • Is there a way to send a request to clean up the scan buffer?
  • Why could the second scan not be completed in the same time as the first?

Deleting the sensor via prims.delete_prim leads to the same behavior.
Another unexpected observation is that the points projected by the writer are visible from below the projection surface:

            writer = rep.writers.get("RtxLidar" + "DebugDrawPointCloudBuffer")

View from above the surface:

View from below the surface:

The corresponding elevation map:

A possible workaround is to delete the sensor primitive and, crucially, to reset the simulation context:
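A minimal sketch of that workaround, assuming a prim path of `/World/lidar` and a running `SimulationContext` (both hypothetical, not taken from the original post; the import guard makes the snippet a no-op outside Isaac Sim):

```python
# Hypothetical sketch: prim path and context handle are assumptions.
try:
    from omni.isaac.core import SimulationContext
    from omni.isaac.core.utils import prims
    _ISAAC_AVAILABLE = True
except ImportError:  # running outside Isaac Sim
    _ISAAC_AVAILABLE = False


def reset_lidar_buffer(prim_path="/World/lidar"):
    """Delete the sensor prim and reset the simulation context.

    Returns True if the reset was performed, False outside Isaac Sim.
    """
    if not _ISAAC_AVAILABLE:
        return False
    prims.delete_prim(prim_path)          # drop the sensor together with its buffer
    SimulationContext.instance().reset()  # crucial: clears the stale scan state
    return True
```

The sensor then has to be recreated (as in the setup code above) before the next scan.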


That said, recreating the sensor at every change in pose might not be the most efficient solution.

The IsaacCreateRTXLidarScanBuffer node keeps the values in the buffer from the previous frames. It takes 1/10 of a simulation second to fill up the buffer for a 10 Hz scanner, so depending on the fps you are running at, it can take a different number of steps: at 30 fps it should take 3 steps, and at 60 fps it would take 6.
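That relationship is a one-line computation (the figures below are the ones quoted above):

```python
def steps_to_fill_buffer(render_fps: float, scan_hz: float = 10.0) -> int:
    """Render steps needed to accumulate one full scan.

    A scan_hz scanner needs 1/scan_hz simulated seconds per revolution;
    at render_fps frames per simulated second that is render_fps / scan_hz steps.
    """
    return round(render_fps / scan_hz)


print(steps_to_fill_buffer(30))  # 3
print(steps_to_fill_buffer(60))  # 6
```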

You could likely use the timestamp and only output points that are older than some time.
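One way to apply that idea is to mask the returned arrays by timestamp, dropping points recorded before the sensor was moved. A NumPy sketch, assuming per-point timestamps are available alongside the point array (the array names here are illustrative, not the exact annotator keys):

```python
import numpy as np


def filter_by_timestamp(points, timestamps, cutoff):
    """Keep only points whose timestamp is at or after `cutoff`,
    i.e. drop stale points left over from before the sensor moved."""
    mask = timestamps >= cutoff
    return points[mask], timestamps[mask]


# Toy data: 3 stale points (t < 5.0) and 2 points from the new scan.
pts = np.arange(15, dtype=float).reshape(5, 3)
ts = np.array([1.0, 2.0, 3.0, 5.5, 6.0])
new_pts, new_ts = filter_by_timestamp(pts, ts, cutoff=5.0)
print(new_pts.shape)  # (2, 3)
```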

Or maybe your use case could use the RtxSensorCpuIsaacComputeRTXLidarPointCloud annotator instead, which only outputs the points from the current render frame?

As for seeing the points on both sides of geometry: this is expected, because the debug points are drawn on the surface itself and can therefore be seen from either side of it.