Unable to get depth data with LIDAR sensor in headless mode

I’m trying to do RL for collision avoidance using a Jetbot and a LIDAR sensor. I can get depth data while the GUI is open, but in headless mode the depth data is not the same (it comes back as all zeros, near-zero, or garbage values).

I have tried using the LIDAR sensor in headless mode without the Omniverse Isaac Gym interface, and it seems to work fine. To test it, I made a standalone script that spawned a LIDAR sensor and one cylinder and then checked the depth values with headless mode on and off. When I do the same through the Isaac Gym component, I get different values.
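The standalone test looked roughly like this (a minimal sketch rather than the exact script; prim paths and LIDAR parameters here are illustrative):

```python
# Sketch of the standalone test: spawn a PhysX LIDAR and a cylinder,
# step the simulation, and print the depth range.
from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": False})  # flip to True to compare

import numpy as np
import omni.kit.commands
from omni.isaac.core import World
from omni.isaac.core.objects import DynamicCylinder
from omni.isaac.range_sensor import _range_sensor

world = World()
world.scene.add_default_ground_plane()
world.scene.add(
    DynamicCylinder(prim_path="/World/Cylinder", position=np.array([2.0, 0.0, 0.5]))
)

# Create the LIDAR prim (parameters are illustrative)
omni.kit.commands.execute(
    "RangeSensorCreateLidar",
    path="/Lidar",
    parent="/World",
    min_range=0.4,
    max_range=100.0,
    horizontal_fov=360.0,
    horizontal_resolution=0.4,
    rotation_rate=0.0,
    high_lod=False,
)

lidar = _range_sensor.acquire_lidar_sensor_interface()
world.reset()
for _ in range(100):
    world.step(render=True)
    depth = lidar.get_linear_depth_data("/World/Lidar")
    if depth.size > 0:
        print("Min depth:", depth.min(), "Max depth:", depth.max())

simulation_app.close()
```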

I’ve uploaded two Python files to illustrate the issue. They are modified from the Cartpole example and can be run with python.sh lidar_example.py.

When setting headless=False on the second line of lidar_example.py, the following depth values get printed to the terminal:
Max depth: [100.]
Min depth: [1.8643869]

If headless=True, I instead get completely different and wrong values, like:
Min depth: [-4.670763e+26]
Max depth: [2.604592e+30]
Min depth: [-4.670763e+26]
Max depth: [2.604505e+30]

Any help is appreciated, as headless mode is a lot faster to train with.

lidar_example.py (639 Bytes)
lidar_task.py (3.2 KB)

Hello. I am interested in this question. May I ask whether this is a problem with the accuracy of the computer vision pipeline? As I understand it, a camera and a LIDAR are both used for depth (the LIDAR working on the principle of a laser rangefinder). Since you may have been dealing with this issue for a while: do you know, or have you tried, whether it makes sense to apply an image filter during image processing?
I have an idea about applying such a filter, but I have doubts about whether it would help, since there is little information available and I have no opportunity to run an experiment.

Hi @rantala.eetu,

To use the LIDAR sensor in headless mode, you will need to make a call to env.render() to make sure the sensor updates. This can be done by overriding the step() call in VecEnvBase and adding a call to self.render() before calling self._world.step(render=self._render), or by always setting render=True when stepping the world. Alternatively, you can pass the env object into the Task class and make a call to self._env.render() from the task’s pre_physics_step method.
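For reference, the first and last options could look roughly like this (a sketch only; the VecEnvBase import path and the task constructor signature are assumptions and may differ between releases):

```python
# Option 1 (sketch): override step() so a render happens before each
# physics step, keeping the LIDAR buffers updated in headless mode.
# The import path is assumed from the Omniverse Isaac Gym envs.
from omni.isaac.gym.vec_env import VecEnvBase


class LidarVecEnv(VecEnvBase):
    def step(self, actions):
        self.render()  # update sensors even when running headless
        return super().step(actions)


# Option 3 (sketch): pass the env into the task and render from
# pre_physics_step. The constructor signature here is illustrative.
class LidarTask:
    def __init__(self, name, env, offset=None):
        self._env = env

    def pre_physics_step(self, actions):
        self._env.render()  # make sure the sensor updates before stepping physics
        # ... apply actions ...
```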

Thanks,
Kelly

