Hi Everyone,
I have a requirement to record a 360 deg point cloud.
This should be possible via an RTX Lidar, which lets me imitate a lidar with a config file and publish the point cloud to ROS.
But I was hoping there would be a way to record the point cloud from the RTX Lidar render product rather than just from the camera.
For example:
# This example is not functional
import omni.kit.commands
import omni.replicator.core as rep
from pxr import Gf

_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config="Example_Rotary",
    translation=(0, 0, 1.0),
    orientation=Gf.Quatd(0.5, 0.5, -0.5, -0.5),
)
# render_product expects a camera prim path plus a resolution
render_product = rep.create.render_product(sensor.GetPath(), [1, 1])
pointcloud_annotator = rep.AnnotatorRegistry.get_annotator("pointcloud")
pointcloud_annotator.attach([render_product])
# get_data() only returns data once the app has rendered at least one frame
pointcloud_annotator.get_data()
Can someone please guide me on how I can achieve this?
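In case it helps anyone else: once the annotator (or a custom writer) hands you a batch of XYZ points per frame, recording them is just a matter of serializing to disk. Here is a minimal stdlib-only sketch that writes one frame to an ASCII PLY file; the idea that the points would come from something like `pointcloud_annotator.get_data()` is my assumption, and the function name and file name are purely illustrative.

```python
from pathlib import Path


def save_ply(points, path):
    """Write an iterable of (x, y, z) tuples to an ASCII PLY file.

    In Isaac Sim the points would come from the annotator data
    (an assumption on my part); here they are just plain tuples.
    """
    points = list(points)
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    body = "\n".join(f"{x} {y} {z}" for x, y, z in points)
    Path(path).write_text(header + "\n" + body + "\n")


# One file per frame keeps the recording loop simple.
save_ply([(0.0, 0.0, 1.0), (1.5, -2.0, 0.25)], "frame_0000.ply")
```

ASCII PLY is easy to inspect and loads directly in tools like MeshLab or Open3D; for long 360-degree recordings a binary format would be more compact, but the per-frame loop is the same.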
Although I may have misunderstood your question. If you just want to get the points to render on screen, you need a section of code like this:
# RTX sensors are cameras and must be assigned to their own render product
_, render_product_path = create_hydra_texture([1, 1], sensor.GetPath().pathString)
# Create the debug draw pipeline in the post process graph
writer = rep.writers.get("RtxLidar" + "DebugDrawPointCloud")
writer.attach([render_product_path])