Collecting a single full scan from a LiDAR

Hi, I’m building on top of the rtx_lidar.py standalone example and want to collect the point cloud from the entire 360-degree scan. My LiDAR configuration is OS0_128ch10hz512res.json. Here is what I tried:

  • The RTX Lidar Sensor page in the Omniverse Isaac Sim documentation mentions using an annotator that is no longer available.
  • isaac_read_lidar_point_cloud_node reports "Prim is not a lidar" when I connect it to /sensor (even though the 'Sensor type' property is 'lidar').
  • Assuming my sensor runs at 10 Hz, I tried to manually collect 10 partial scans that add up to 360 degrees, using:
simulation_context = SimulationContext(
    physics_dt=FullScanDuration/10, rendering_dt=FullScanDuration/10, stage_units_in_meters=1.0
)

and later

    ComputeRTXLidarPointCloudNode = og.Controller().node(
        "/Render/PostProcess/SDGPipeline/RenderProduct_Isaac_RtxSensorCpuIsaacComputeRTXLidarPointCloud"
    )
    pointCloudData = ComputeRTXLidarPointCloudNode.get_attribute(
        "outputs:pointCloudData"
    ).get()

and later dump the array into a file. However, this either results in a sequence of nearly identical scans that appear to come from a single azimuth range, or produces a full scan with some parts missing, as in the lower left of the image below:
[image: full scan with points missing in the lower-left region]

Could you help me understand why the above doesn't work, and point me to any best practices for collecting LiDAR data?
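For clarity, the timing arithmetic and the intended accumulation can be sketched in plain NumPy. The `collect_step` helper below is hypothetical: it stands in for the per-step fetch of `outputs:pointCloudData` from the graph node, so the stacking logic is runnable anywhere.

```python
import numpy as np

ROTATION_HZ = 10.0                    # rotation rate from OS0_128ch10hz512res.json
FullScanDuration = 1.0 / ROTATION_HZ  # 0.1 s per 360-degree revolution
dt = FullScanDuration / 10            # step so each frame covers 36 degrees

def collect_step():
    # Hypothetical stand-in for reading outputs:pointCloudData from the
    # ComputeRTXLidarPointCloud node after one simulation step.
    return np.random.rand(512, 3)

partial_scans = []
for _ in range(10):                   # ten 36-degree slices = one revolution
    partial_scans.append(np.asarray(collect_step()).reshape(-1, 3))

full_scan = np.vstack(partial_scans)  # (5120, 3) stacked point cloud
np.save("full_scan.npy", full_scan)
```

The point counts and file name are illustrative; the key idea is stacking the ten per-step buffers into one array before writing it out.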

Hi @kzaitsev - Can you confirm what version of Isaac Sim you are using?

I ran the code in versions 2022.2.1 and 2023.1.0.

Hi. Running the code with the latest 2023.1.0-hotfix.1 and the RtxSensorCpuIsaacCreateRTXLidarScanBuffer annotator, I get the following:
[image: scan with empty regions on the sides]
The regions to the sides should not be empty.

The scan buffer should return a full 360° scan.
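One quick way to check whether a returned buffer really spans 360° is to histogram the azimuth of its points. This is only a sketch: it assumes the buffer is an (N, 3) XYZ array in the sensor frame, which may not match the exact layout your annotator returns.

```python
import numpy as np

def azimuth_coverage_deg(points, bins=36):
    """Return how many degrees of azimuth contain points, in 10-degree bins."""
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))  # -180..180
    hist, _ = np.histogram(az, bins=bins, range=(-180.0, 180.0))
    return np.count_nonzero(hist) * (360.0 / bins)

# A synthetic full-circle cloud covers every bin:
theta = np.linspace(-np.pi, np.pi, 1000, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(azimuth_coverage_deg(ring))
```

A partial scan like the one in the screenshot would report well under 360 here.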

If you want, I can look at your config file, and I can let you know if I see any problems.

You can also try
./python.sh standalone_examples/api/omni.isaac.debug_draw/rtx_lidar.py CONFIG_NAME

where CONFIG_NAME is the name of your .json config file. There you can watch the debug points line up with the map in real time.

I tried running rtx_lidar.py, modified to use the annotator, on 2023.1.0-hotfix.1:

# Imports as in the original rtx_lidar.py standalone example
import sys

from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": False})

import carb
import omni.kit.commands
import omni.replicator.core as rep
from omni.isaac.core import SimulationContext
from omni.isaac.core.utils import nucleus, stage
from omni.isaac.core.utils.render_product import create_hydra_texture
from pxr import Gf

assets_root_path = nucleus.get_assets_root_path()
if assets_root_path is None:
    carb.log_error("Could not find Isaac Sim assets folder")
    simulation_app.close()
    sys.exit()

simulation_app.update()
# Loading the simple_room environment
stage.add_reference_to_stage(
    assets_root_path + "/Isaac/Environments/Simple_Warehouse/full_warehouse.usd",
    "/background",
)
simulation_app.update()

lidar_config = "Example_Rotary"
if len(sys.argv) == 2:
    lidar_config = sys.argv[1]

# Create the lidar sensor that generates data into "RtxSensorCpu"
# Sensor needs to be rotated 90 degrees about X so that its Z up

# Possible options are Example_Rotary and Example_Solid_State
# drive sim applies 0.5,-0.5,-0.5,w(-0.5), we have to apply the reverse
_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config=lidar_config,
    translation=(0, 0, 1.0),
    orientation=Gf.Quatd(0.5, 0.5, -0.5, -0.5),  # Gf.Quatd is w,i,j,k
)
_, render_product_path = create_hydra_texture([1, 1], sensor.GetPath().pathString)

# Create the debug draw pipeline in the post process graph
writer = rep.writers.get("RtxLidar" + "DebugDrawPointCloud")
writer.attach([render_product_path])

simulation_app.update()
simulation_app.update()

simulation_context = SimulationContext(
    physics_dt=1.0 / 60.0, rendering_dt=1.0 / 60.0, stage_units_in_meters=1.0
)

simulation_context.play()

import omni.graph.core as og
annotator = rep.AnnotatorRegistry.get_annotator(
    "RtxSensorCpuIsaacCreateRTXLidarScanBuffer"
)
annotator.initialize(outputTimestamp=True)
annotator.attach([render_product_path])

while simulation_app.is_running():
    simulation_app.update()  # step the simulation so each read sees a new frame
    pointCloudData = annotator.get_data()['data']

Here is what I got:

For 2022.2.1 I replaced the annotator above with a direct fetch from the node:

    ComputeRTXLidarPointCloudNode = og.Controller().node(
        "/Render/PostProcess/SDGPipeline/RenderProduct_Isaac_RtxSensorCpuIsaacComputeRTXLidarPointCloud"
    )
    pointCloudData = ComputeRTXLidarPointCloudNode.get_attribute(
        "outputs:pointCloudData"
    ).get()

and got the expected full scan after a few simulation steps:
[image: expected full 360° scan]

In 2022.2.1 I'm able to collect the full scan by querying the RtxSensorCpuIsaacComputeRTXLidarPointCloud node, so the original issue is resolved for this case:
[image: full scan collected in 2022.2.1]
A likely cause of the original problem was a bug in my custom post-processing code.

However, the annotator approach on 2023.1.0-hotfix.1 from the previous reply still produces:
[image: densified point cloud covering only part of the azimuth range]
which is a densified point cloud obtained by stacking several identical consecutive scans.
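To confirm that consecutive annotator reads are repeats of the same revolution rather than new data, adjacent buffers can be compared directly. This is a sketch: the random arrays below stand in for successive `annotator.get_data()['data']` reads.

```python
import numpy as np

def count_duplicate_frames(frames):
    """Count consecutive pairs of point buffers that are exactly identical."""
    dupes = 0
    for prev, cur in zip(frames, frames[1:]):
        if prev.shape == cur.shape and np.array_equal(prev, cur):
            dupes += 1
    return dupes

# Synthetic example: the second read repeats the first, the third is new
a = np.random.rand(100, 3)
b = np.random.rand(100, 3)
print(count_duplicate_frames([a, a, b]))
```

If this counter climbs while the simulation is stepping, the reads are outpacing the sensor rather than capturing new revolutions.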

The partial scans turned out to be caused by a change in the sensor's placement. To get a result matching the one obtained from the 2022 version, the following placement works:

_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config=lidar_config,
    translation=(34, -27, 3.5),
    orientation=Gf.Quatd(0, 0, 0, 1),
)

while in 2022 the orientation component was:
orientation=Gf.Quatd(0.5, 0.5, -0.5, -0.5)
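For reference, the rotation that the 2022 quaternion encodes can be inspected with plain NumPy (a sketch; recall from the example comments that Gf.Quatd takes its components in (w, i, j, k) order):

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Rotation matrix for a unit quaternion given in (w, i, j, k) order."""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

# The 2022 orientation combines the 90-degree X rotation with the reverse
# of the Drive Sim convention mentioned in the example comments.
R = quat_to_matrix(0.5, 0.5, -0.5, -0.5)
print(R)
# Columns give the sensor axes in world coordinates: the sensor's +Y axis
# maps to world +Z, and its +Z axis maps to world -X.
```

This makes it easy to see that the two placements differ by more than a sign convention, which explains why reusing the 2022 orientation in 2023.1.0 produced partial scans.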

