Need Help with LiDAR Visualization: Toggling Visibility or Adjusting Point Colors?

I have been working on a project that utilizes multiple LiDARs. I specifically require these LiDARs for confirming position and rotation within our custom environments. I have already developed an extension that imports our environment models and uses a CSV file containing initial LiDAR positions, rotations, and configuration types.

The simulation is functioning perfectly, and I can export the updated positions and rotations to an output CSV after adjusting the LiDARs. However, I am encountering an issue due to the overlap of points rendered by multiple LiDARs in a confined space. This overlap makes it difficult to distinguish the points from the LiDARs I am currently adjusting versus those from other LiDARs.
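For context, the pose-loading step of the extension can be sketched roughly like this. The column names and units below are placeholders for illustration, not our exact CSV schema:

```python
import csv
import io

# Hypothetical CSV layout: name, x, y, z, roll, pitch, yaw (degrees), config.
# The real file's columns and units may differ.
SAMPLE = """name,x,y,z,roll,pitch,yaw,config
lidar_0,0.0,0.0,1.0,0,0,0,Example_Rotary
lidar_1,1.0,0.0,1.0,0,0,90,Example_Solid_State
"""

def load_lidar_poses(text):
    """Parse LiDAR name, position, rotation, and config type from CSV text."""
    poses = {}
    for row in csv.DictReader(io.StringIO(text)):
        poses[row["name"]] = {
            "translation": (float(row["x"]), float(row["y"]), float(row["z"])),
            "rotation_deg": (float(row["roll"]), float(row["pitch"]), float(row["yaw"])),
            "config": row["config"],
        }
    return poses

poses = load_lidar_poses(SAMPLE)
print(poses["lidar_1"]["translation"])  # → (1.0, 0.0, 1.0)
```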

To address this, I am considering adding functionality where the user can select a LiDAR and toggle its visibility in “RtxLidarDebugDrawPointCloudBuffer”.

Below is the code snippet used to attach the LiDAR to render points into the scene:

# Create and attach a render product to the lidar sensor prim
render_product = rep.create.render_product(sensor.GetPath(), [1, 1])

# Create a Replicator writer that "writes" points into the scene for debug viewing
writer = rep.writers.get("RtxLidarDebugDrawPointCloudBuffer")
writer.attach([render_product])

I have searched the forums but did not find anything relevant for this specific use case. I did find a command that detaches the writer, but it removes the points from all LiDARs in the viewport render.


Is there any way to achieve toggling of individual LiDARs?

Alternatively, if toggling isn’t possible, is there a way to change the render point colors in the viewport? Different colors for LiDAR points might also solve our visibility issue.

Any assistance would be greatly appreciated.



I am looking into detaching writers from individual render products without detaching all writers of that type. In the meantime:

Yes, this is absolutely possible! Here’s a snippet (based on our standalone_examples/api/omni.isaac.debug_draw/ script) and a screenshot:

import omni.kit.commands
import omni.replicator.core as rep
from omni.isaac.core import SimulationContext
from pxr import Gf

# Create the lidar sensors that generate data into "RtxSensorCpu"
# Possible config options are Example_Rotary and Example_Solid_State
# ORIGINAL LIDAR
_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config="Example_Rotary",
    translation=(0, 0, 1.0),
    orientation=Gf.Quatd(1.0, 0.0, 0.0, 0.0),  # Gf.Quatd is w,i,j,k
)
hydra_texture = rep.create.render_product(sensor.GetPath(), [1, 1], name="Isaac")

# NEW LIDAR (sensor_b)
_, sensor_b = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor_b",
    parent=None,
    config="Example_Rotary",
    translation=(1.0, 0, 1.0),
    orientation=Gf.Quatd(1.0, 0.0, 0.0, 0.0),  # Gf.Quatd is w,i,j,k
)
hydra_texture_b = rep.create.render_product(sensor_b.GetPath(), [1, 1], name="Isaac_b")

simulation_context = SimulationContext(physics_dt=1.0 / 60.0, rendering_dt=1.0 / 60.0, stage_units_in_meters=1.0)

# Create the debug draw pipelines in the post-process graph
# ORIGINAL LIDAR - Method #1: call initialize() with inputs to the DebugDrawPointCloud OGN node
writer = rep.writers.get("RtxLidar" + "DebugDrawPointCloud" + "Buffer")
writer.initialize(color=[1, 0, 0, 1])  # ORIGINAL LIDAR - red
writer.attach([hydra_texture])
# NEW LIDAR - Method #2: call get() and specify the inputs directly
writer_b = rep.writers.get("RtxLidar" + "DebugDrawPointCloud" + "Buffer", init_params={"color": [0, 1, 0, 1]})
writer_b.attach([hydra_texture_b])

Note that the color input is an RGBA 4-array. The screenshot came from examining the autogenerated SDG pipeline ActionGraph via Window → Visual Scripting → Action Graph → Edit (in the Action Graph window) → selecting the SDG pipeline graph, which is generated on the first frame after clicking Play.
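Since your CSV stores rotations but the create command takes a w,i,j,k quaternion, you'll need a conversion step somewhere. Here is one minimal sketch in plain Python, assuming the CSV holds roll/pitch/yaw in degrees with the common ZYX (yaw-pitch-roll) convention; adjust it to whatever convention your data actually uses, and feed the result into Gf.Quatd(w, x, y, z):

```python
import math

def euler_deg_to_quat_wxyz(roll, pitch, yaw):
    """Convert roll/pitch/yaw in degrees (ZYX convention) to a (w, x, y, z)
    quaternion, matching the w,i,j,k ordering that Gf.Quatd expects."""
    r, p, y = (math.radians(a) / 2.0 for a in (roll, pitch, yaw))
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    return (
        cr * cp * cy + sr * sp * sy,  # w
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
    )

# Identity rotation maps to Gf.Quatd(1.0, 0.0, 0.0, 0.0)
print(euler_deg_to_quat_wxyz(0, 0, 0))  # → (1.0, 0.0, 0.0, 0.0)
```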


Thank you for the reply!
Will try this🙏