How to do obstacle avoidance using Lidar with Isaac Sim API


I have been following the tutorial for setting up the lidar interface via Isaac Sim API: 3. Using Sensors: LIDAR — Omniverse Robotics documentation

However, I want to implement lidar on a robot using the Isaac Sim API, and I do not see any tutorials where I can have the robot avoid obstacles with the data it has collected via the lidar. Where should I look to better understand how to control my robot with lidar? Does NVIDIA have a tutorial for implementing obstacle avoidance using the Isaac Sim Python API?


Still waiting for an update. I was able to acquire the lidar interface to show that the lidar is detected with the API. However, the next step is figuring out how to combine the ArticulationControl with the lidar data to implement obstacle avoidance. I know how to use both classes separately, but is it possible to combine the two? I hope I can reach the team or anyone for results soon! Thanks for your time and consideration.
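In case it helps anyone combining the two: the lidar side gives you ranges, and the articulation side ultimately wants per-joint velocities, so the glue is just a function mapping a body command to wheel speeds. Here is a minimal sketch of that mapping for a differential-drive robot; the wheel radius and wheel base are placeholder values (substitute your robot's geometry), and the `ArticulationAction` usage mentioned in the comment assumes the `omni.isaac.core` articulation wrapper.

```python
import numpy as np

# Hypothetical robot geometry -- substitute your robot's actual values.
WHEEL_RADIUS = 0.24  # wheel radius in meters
WHEEL_BASE = 0.54    # distance between the two drive wheels in meters

def unicycle_to_wheel_velocities(v, w):
    """Convert a body command (v m/s forward, w rad/s yaw, positive = left)
    into (left, right) wheel joint velocities in rad/s for a diff drive."""
    v_left = (v - w * WHEEL_BASE / 2.0) / WHEEL_RADIUS
    v_right = (v + w * WHEEL_BASE / 2.0) / WHEEL_RADIUS
    return v_left, v_right

# The resulting joint velocities are what you would then wrap in an
# ArticulationAction and send through the articulation controller, e.g.
#   robot.apply_action(ArticulationAction(joint_velocities=[v_l, v_r]))
# inside your per-frame physics callback.
```

So the loop becomes: read depth from the lidar interface, decide on a (v, w) command, convert it with something like the above, and apply it through the articulation API each physics step.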

New update: I am slowly getting the hang of the Isaac Sim API and found that it is quite similar to ROS with regard to reading laser data. I have modified the following function to get the linear depth data of the lidar. I was able to print individual scans versus grouped scans, and I have a better idea of how to use this to avoid obstacles. It’s not quite a solution, but it’s getting there.

    async def _set_lidar(self):
        from omni.isaac.range_sensor._range_sensor import acquire_lidar_sensor_interface

        lidarInterface = acquire_lidar_sensor_interface()
        lidarPath = "/carter/chassis_link/carter_lidar"
        # Confirm the prim at this path really is a lidar sensor
        if lidarInterface.is_lidar_sensor("/World" + lidarPath):
            print("Lidar sensor is valid")
        # Planar depth in meters, one entry per beam
        depth = lidarInterface.get_linear_depth_data("/World" + lidarPath)
        return depth
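From here, a simple way to turn that depth array into an avoidance behavior is to split the scan into sectors and steer toward the clearest one. This is only a sketch: it assumes `depth` has been flattened into a 1-D array of ranges in meters ordered from the left edge of the scan to the right, and the safe distance and speeds are placeholder values you would tune for your robot.

```python
import numpy as np

SAFE_DISTANCE = 1.0  # meters; placeholder threshold, tune for your robot

def avoidance_command(depth):
    """Return a (forward_speed, turn_rate) pair from one lidar scan.
    Positive turn_rate is assumed to mean a left turn."""
    depth = np.asarray(depth).ravel()
    third = len(depth) // 3
    # Closest hit in each of three sectors: left, front, right
    left = depth[:third].min()
    front = depth[third:2 * third].min()
    right = depth[2 * third:].min()
    if front > SAFE_DISTANCE:
        return 0.5, 0.0  # path ahead is clear: drive forward
    # Blocked ahead: rotate in place toward the side with more clearance
    return 0.0, 1.0 if left > right else -1.0
```

The returned command would then be converted to wheel velocities and applied through the articulation controller on each physics step.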

Obstacle avoidance should be part of your software stack in ROS or Python; that piece does not live in Isaac Sim. For example, the ROS navigation stack does it. Isaac Sim just provides the lidar readings.