I want to use the RTX lidar, rather than the current PhysX one. It has two features we really need: the ability to have non-uniform channel distributions and the ability to bounce off of dynamic objects. Our real lidar spins at 10 Hz (one full revolution every 0.1 s). To simulate our real setup, what I really want is for Isaac to publish the lidar point clouds to a ROS2 topic at 10 Hz. Every message would be one 360 degree scan.
I am trying to figure out how to achieve this.
I ran the RTX lidar publishing to ROS2 example. I did some experiments, only changing the rendering_dt of the simulation. I learned a few things.
- The number of points in each scan directly relates to the rendering_dt.
- A rendering_dt shorter than 0.1 s (i.e. rendering faster than 10 Hz) results in each message having an incomplete scan: a pie-slice-shaped circular sector of the full 360 degree scan.
- The publishing frequency (measured with ros2 topic hz /point_cloud) is pretty inscrutable. Even running rviz can decrease this rate (despite my fast workstation). Maybe it depends on the number of subscribers?
- How do I get the RTX lidar to consistently output one full scan at 10 Hz?
- Can you explain the relation between rendering frequency, lidar configuration, and publishing rate?
I’ve attached my experiments on the publishing rate w.r.t. physics / rendering frequency for completeness.
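To make the pie-slice observation concrete, here is a back-of-the-envelope calculation (plain Python, no Isaac Sim dependency; the 10 Hz rotation rate is from my setup, the frame rates are just illustrative):

```python
def scan_fraction_per_frame(rotation_hz: float, render_hz: float) -> float:
    """Fraction of a full 360-degree revolution swept between two rendered
    frames, for a lidar spinning at rotation_hz under rendering at render_hz."""
    return min(1.0, rotation_hz / render_hz)

# At 60 FPS rendering, a 10 Hz lidar sweeps only 1/6 of a revolution per frame,
# so each published message covers roughly a 60-degree pie slice.
print(scan_fraction_per_frame(10.0, 60.0) * 360.0)  # ~60 degrees

# Rendering at exactly 10 Hz yields one full revolution per message.
print(scan_fraction_per_frame(10.0, 10.0) * 360.0)  # 360 degrees
```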
Hi @jrb2 ,
Any progress on this issue?
Hi - Sorry for the delayed response. Let us know if you are still having this issue/question with the latest Isaac Sim 2022.2.1 release.
I am using Isaac Sim 2022.2.1, and it is still unclear how to limit the lidar rendering rate and publication rate.
The camera examples such as camera_periodic.py are very informative, although the part about accessing the IsaacSimulationGate of the pipelines is not explained, and I am having trouble figuring out how to apply it to the RTX Lidar.
In our case we need high-frequency physics (>= 500 Hz), but rendering can be 60 FPS or lower.
We don’t have formal control over how many points are put into the rtxsensorcpu buffer in this release. It’s possible that more points are getting sent from the renderer than are sent out by the ROS2 node. You could put some prints in the OgnIsaacReadRTXLidarData.py node code to alter the read lidar data node and see how much is actually being sent out each frame.
You could also build up a buffer of points so that a full 360 degree scan is always sent out, but that would probably require writing a custom node.
We did fix a bug that should solve the problem of scans at 10 Hz or less being incomplete.
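The buffering idea above can be sketched outside Isaac Sim. The class below accumulates per-frame pie slices (represented here only by their points and azimuth coverage) and emits a message once a full revolution has been covered. Names like ScanAccumulator are my own for illustration, not an Isaac Sim API; a real node would read points from the rtxsensorcpu buffer instead of taking them as an argument:

```python
class ScanAccumulator:
    """Accumulate partial lidar scans until a full revolution is covered."""

    def __init__(self, rotation_hz: float, render_hz: float):
        # Degrees of azimuth the lidar sweeps during one rendered frame.
        self.deg_per_frame = 360.0 * rotation_hz / render_hz
        self.covered_deg = 0.0
        self.points = []

    def add_frame(self, frame_points):
        """Add one frame's pie slice; return a full scan once complete, else None."""
        self.points.extend(frame_points)
        self.covered_deg += self.deg_per_frame
        if self.covered_deg >= 360.0:
            full_scan, self.points = self.points, []
            self.covered_deg -= 360.0
            return full_scan
        return None
```

With a 10 Hz lidar and 60 FPS rendering, add_frame returns None five times and then hands back a complete scan on every sixth frame, which is the cadence at which the node would publish.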
Thanks for the information.
And is there a way to limit the number of frames in which the lidar is rendered? In my case, I need many cameras at 15 Hz and the lidar at 20 Hz.
I can skip camera frames using the periodic camera examples, but I don’t know how to reduce the lidar frequency.
Ideally, I should lower the frequency at which the lidar renders, to minimize resource usage.
Any tips on how to achieve this?
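For frame skipping via the IsaacSimulationGate (the mechanism the camera_periodic.py example uses), my understanding is that the gate's step input decimates execution: the gated pipeline only fires every step rendered frames. Here is a small helper for picking step from the render rate and desired sensor rate (plain Python; the attribute-setting line at the end is only my guess at how camera_periodic.py wires it up, so it is left as a comment):

```python
def gate_step(render_hz: float, sensor_hz: float) -> int:
    """Rendered frames per sensor execution for a simulation-gate node.

    A step of N means the gated pipeline runs every Nth rendered frame,
    giving an effective sensor rate of render_hz / N. Rates that do not
    divide evenly are rounded to the nearest achievable step.
    """
    return max(1, round(render_hz / sensor_hz))

# At 60 FPS rendering: cameras at 15 Hz -> step 4, lidar at 20 Hz -> step 3.
print(gate_step(60.0, 15.0))  # 4
print(gate_step(60.0, 20.0))  # 3

# In Isaac Sim the step would then be applied to the gate node, e.g. (untested):
# og.Controller.attribute(gate_path + ".inputs:step").set(gate_step(60.0, 20.0))
```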
Hi @victor.lopez1 and @mcarlson1 .
I am facing similar issues here. Specifically, I would like manual control over how often the lidar is published. I can use the set(1) calls (those seem to work), but I would like to understand whether that is correct, whether the accumulated cloud would be complete, and whether there are cleaner ways to do it.
I’m also in a similar situation in which we have physics running at 240 Hz and rendering ideally at 30 Hz. However, it seems that the lidar is only published when it is rendered.
Sorry, for this release the lidar publishes at the render frame rate. We hope to change this in our next release and provide a more robust and enjoyable RTX lidar experience!
@mcarlson1 can you specify what happens if I do not render for some time? Is the cloud going to be an accumulated set of points, just the last set of points, or something else?
In Isaac Sim 2023.1.0, the new IsaacCreateRTXLidarScanBuffer node will hold an up-to-date buffer at the Hz of the lidar.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.