Access Pointcloud data of RTX Lidar


I have recently built a robot model with lidars attached to it. I created both PhysX and RTX Lidars. With the given examples of the ROS Bridge, I was able to access the data of both lidars via ROS topics.
Now I want to access the lidar data in Python. For the PhysX Lidar I did it with this tutorial and it worked very well. But now I am stuck with accessing the point cloud data of the RTX Lidar. Can you provide me with an example or documentation on how to do so?
I also wanted to ask whether it is possible to get semantic information for RTX Lidar point clouds.

Thanks in advance!


Hi @elflocoo - Have you reviewed this document? 16. Publish RTX Lidar Point Cloud — Omniverse Robotics documentation
This might help you get more info on RTX Lidar point clouds.

Hello @rthaker - I already found this, but it was of no help.

Maybe a little deeper explanation. I already finished the ROS examples and got the data. Then I tried to get the lidar data just via python, no ROS. For the PhysX Lidar I used this code:

import numpy as np
import omni.usd
from omni.isaac.range_sensor import _range_sensor
from omni.syntheticdata import _syntheticdata

stage = omni.usd.get_context().get_stage()

lidarPath = "/World/rotate/lidar_pos/Lidar"
lidarInterface = _range_sensor.acquire_lidar_sensor_interface()  # Used to interact with the LIDAR
sd_interface = _syntheticdata.acquire_syntheticdata_interface()

pointcloud = lidarInterface.get_point_cloud_data(lidarPath)
semantics = lidarInterface.get_semantic_data(lidarPath)

This worked fine for gathering the data. Now I wanted to do something like this with the RTX Lidar, but after spawning the RTX Lidar in the scene (as shown in the tutorials), I have no idea how to get the RTX Lidar data just via Python, without ROS.


I’m having the same problem:
If I understand it correctly, the writer created in the example via rep.writers.get("RtxLidar" + "DebugDrawPointCloud") is not designed to dump point clouds into some output directory.
I was also not able to use another writer to do that.

You can access the output of the point cloud node the same way you access any node output. You can do this in the Script Editor or just in Python if, for example, you wanted to use the standalone example:
./standalone_examples/api/omni.isaac.debug_draw/

Your node can be found, and accessed in the Script Editor, with the snippet below.

import omni.graph.core as og

lidar_node_path = "/Render/PostProcess/SDGPipeline/RenderProduct_Isaac_RtxSensorCpuIsaacComputeRTXLidarPointCloud"
point_cloud = og.Controller().node(lidar_node_path).get_attribute("outputs:pointCloudData").get()


The Script Editor is opened in the UI via Window -> Script Editor, and it allows you to run Python commands that have access to the stage.
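For what it's worth, once you have the `outputs:pointCloudData` value you can process it with plain NumPy. A minimal sketch, assuming the data comes back as (or can be reshaped into) an (N, 3) array of XYZ points in the sensor frame (the sample values below are made up; check the actual shape in your setup):

```python
import numpy as np

# Hypothetical point cloud, shaped like the node output might be:
# an (N, 3) array of XYZ hit points relative to the sensor.
point_cloud = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 3.0],
], dtype=np.float32)

# Range (distance from the sensor origin) of every return.
ranges = np.linalg.norm(point_cloud, axis=1)
print(ranges)  # [1. 2. 3.]

# Keep only returns closer than 2.5 m.
near = point_cloud[ranges < 2.5]
print(near.shape)  # (2, 3)
```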

Thank you @mcarlson1. Can you also provide a way to record semantic data from the RTX Lidar?

I was able to add an Isaac Read RTX Lidar Point Data OmniGraph node and read the Object ID. Would it be possible to access semantic data for SDG?

Also, I see that the image provided here in the docs does have an outputs:semanticId, but I don't see one in my Isaac Sim:

You should be able to get the semanticId from the objectId. It may even be the same? You can get the prim path from the objectId, and from that you should be able to get information about the hit object. To get the prim path you can use this function.

from omni.syntheticdata._syntheticdata import acquire_syntheticdata_interface

def object_id_to_prim_path(object_id):
    """Given an ObjectId, get a prim path.

    Args:
        object_id (int): object id, like from an RTX Lidar return

    Returns:
        prim path string
    """
    return acquire_syntheticdata_interface().get_uri_from_instance_segmentation_id(object_id)
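Since each lidar return carries an object id, one way to avoid resolving the same id over and over every frame is to cache the id-to-label mapping. A sketch with the resolver functions injected as parameters so it also runs outside the simulator: `resolve_prim_path` would wrap `object_id_to_prim_path` above, and `resolve_label` is a hypothetical function that reads the semantic label off a prim at the given path (both names are my own, not an Isaac Sim API):

```python
def build_semantic_lookup(object_ids, resolve_prim_path, resolve_label, cache=None):
    """Map each object id to a semantic label, resolving every id only once.

    resolve_prim_path: callable(id) -> prim path, e.g. object_id_to_prim_path above.
    resolve_label: callable(prim path) -> label; hypothetical, depends on your setup.
    """
    cache = {} if cache is None else cache
    for oid in object_ids:
        if oid not in cache:
            cache[oid] = resolve_label(resolve_prim_path(oid))
    return cache


# Usage with stand-in resolvers (outside the simulator):
paths = {1: "/World/Box", 2: "/World/Sphere"}
labels = {"/World/Box": "box", "/World/Sphere": "sphere"}
lookup = build_semantic_lookup([1, 2, 1], paths.get, labels.get)
print(lookup)  # {1: 'box', 2: 'sphere'}
```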

Hello @mcarlson1

Thank you for the responses. I am able to access the data inside the Script Editor. Now I came upon another problem. I would like to start a stage, e.g. ./standalone_examples/api/omni.isaac.debug_draw/, and access the data in the script to do something else. Or start another script that accesses the RTX point cloud and processes it.
What would be the best way to do this?

So basically: load the environment and RTX Lidar like in the example, but then also continuously process the lidar data, without having to open the Script Editor manually.

Best regards :)

Ok, I found out how it is done.

For everyone who wants to use the RTX Lidar from a standalone example: simply add the code @mcarlson1 provided to the script in ./standalone_examples/api/omni.isaac.debug_draw/, inside the while simulation_app.is_running(): loop:

lidar_node_path = "/Render/PostProcess/SDGPipeline/RenderProduct_Isaac_RtxSensorCpuIsaacComputeRTXLidarPointCloud"

while simulation_app.is_running():
    point_cloud = og.Controller().node(lidar_node_path).get_attribute("outputs:pointCloudData").get()
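If you want to do more than read the data each frame, e.g. dump the point clouds to disk as asked earlier in this thread, one approach is to accumulate each frame's returns and save everything after the loop. A sketch; the file name, the (N, 3) layout of the buffer, and the call site inside the loop are assumptions about your setup:

```python
import numpy as np

frames = []  # accumulated point clouds, one (N, 3) array per frame

def accumulate(point_cloud, frames):
    """Append one frame's returns to the running list, skipping empty frames."""
    pts = np.asarray(point_cloud, dtype=np.float32).reshape(-1, 3)
    if pts.size:
        frames.append(pts)

# Inside the simulation loop you would call, each frame:
#   accumulate(point_cloud, frames)
# After the loop, dump everything to one file:
#   np.save("lidar_points.npy", np.concatenate(frames))

# Quick check with synthetic frames:
accumulate([[1.0, 0.0, 0.0]], frames)
accumulate([], frames)  # empty frame, skipped
accumulate([[0.0, 2.0, 0.0], [0.0, 0.0, 3.0]], frames)
merged = np.concatenate(frames)
print(merged.shape)  # (3, 3)
```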

But I have one more question, @mcarlson1. Suppose I start a scene with, e.g., the basic warehouse and a robot with two different RTX Lidars and their respective writers. Is it possible to access the scene and each lidar from a different Python file?

  • First Python Script:

    1. Spawn Environment
    2. Spawn Robot Model with 2 Lidars
  • Second Python Script:

    1. Read and process Lidar data from Lidar 1
  • Third Python Script:

    1. Read and process Lidar data from Lidar 2

Or do I have to do this all in one script?

I’m following this.

I have one question regarding the Script Editor.

Is it just one-shot, or can I have the Script Editor code run on loading?

I have to put in the code to start the SDG graph every time.

If I save the file once I've run the code to build the AG, it won't work the next time I load the file, so I need a blank canvas every time I build a new head / SDG pipeline.

I think we are asking the same question here.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.