Lidar in OIGE with skrl

Hi @toni.sm !

Thanks to your efforts, we are able to use skrl to easily build reinforcement learning environments for our research.

I attached a lidar sensor to the end of my robot manipulator and am trying to perform DRL based on the lidar sensor values.

However, in the learning environment, the lidar sensor does not seem to interact with objects (the ray beams penetrate them).

In the plain Isaac Sim environment, objects and the lidar interact normally.

Could you confirm why this happens, and whether it is reasonable to use lidar depth values as observations in OIGE?

from omni.isaac.range_sensor import _range_sensor
...
self.lidarPath = "/World/envs/env_0/robot/flange/Front_lidar"
self._li = _range_sensor.acquire_lidar_sensor_interface()
...
depth = self._li.get_depth_data(self.lidarPath)  # this data will be converted into tensors

This is the code I am using.
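For reference, here is roughly how I plan to convert the depth data into tensors (just a sketch; it assumes get_depth_data returns a NumPy-compatible array of per-beam depths, and self._device is a hypothetical placeholder for the task's device):

import numpy as np
import torch

# sketch: convert the per-beam depth readings to a flat tensor observation
# (assumes get_depth_data returns a NumPy-compatible array of depths;
# self._device is a hypothetical placeholder for the task's device)
depth_np = np.asarray(self._li.get_depth_data(self.lidarPath), dtype=np.float32)
depth = torch.from_numpy(depth_np).flatten().to(self._device)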

Thank you in advance!

Not sure if this is relevant (as I am not an Isaac Sim user), but there was another thread from a few days ago regarding lidar and collision in OIGE. Just wanted to pass it along.

Thank you for your kind reply.

That issue was solved, but another one occurs in headless mode.

When I train with "headless=False", the proper depth values are returned.

However, with "headless=True", wrong values seem to be returned.

I modified my code by referring to the post below, but in my case the value is fixed at the maximum measurement distance.

My guess is that the ray beams penetrate the obstacle, so no return is measured.
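As a quick check, I count how many beams sit at roughly the maximum range (a sketch; max_range is a placeholder for my sensor's configured max range, and it assumes the returned depths are in meters):

import numpy as np

# sketch: count beams stuck at (roughly) the maximum range
# (max_range is a placeholder for my sensor's configured max range;
# assumes the returned depths are in meters)
depth = np.asarray(self._li.get_depth_data(self.lidarPath), dtype=np.float32)
max_range = 100.0
n_saturated = int(np.sum(depth >= 0.99 * max_range))
print(f"{n_saturated} / {depth.size} beams at max range")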

What steps can be taken in this case?

Thank you in advance!

Hi,

Did you visualize the point cloud information?
Maybe that would help.
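For example, something quick like this (just a sketch; it assumes the point cloud from get_point_cloud_data can be reshaped to an (N, 3) array):

import matplotlib.pyplot as plt
import numpy as np

# sketch: scatter-plot the lidar returns to see whether they hit the object
# (assumes the point cloud can be reshaped to an (N, 3) array)
pc = np.asarray(self._li.get_point_cloud_data(self.lidarPath)).reshape(-1, 3)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pc[:, 0], pc[:, 1], pc[:, 2], s=1)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()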

Hi @psh9002, thank you for your advice.

I tried get_point_cloud_data and converted the [x, y, z] data to distances with torch.sqrt(torch.sum(point_cloud_data ** 2, dim=-1)).

With Headless=False, the data was correct.

However, Headless=True still does not seem to work.
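Concretely, this is roughly what I am doing (a sketch; it assumes the point cloud can be reshaped to an (N, 3) float array):

import numpy as np
import torch

# sketch: point cloud -> per-beam distances
# (assumes the point cloud can be reshaped to an (N, 3) float array)
pc_np = np.asarray(self._li.get_point_cloud_data(self.lidarPath), dtype=np.float32)
point_cloud_data = torch.from_numpy(pc_np).reshape(-1, 3)
distance = torch.sqrt(torch.sum(point_cloud_data ** 2, dim=-1))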

I set "enable_scene_query_support": True, and overrode VecEnvBase Class as,

class _OmniIsaacGymVecEnv(VecEnvBase):
    def step(self, actions):
        # clamp actions and move them to the task's device
        actions = torch.clamp(actions, -self._task.clip_actions, self._task.clip_actions).to(self._task.device).clone()
        self._task.pre_physics_step(actions)

        for _ in range(self._task.control_frequency_inv):
            self.render()  # newly added
            self._world.step(render=True)  # default: render=self._render
            self.sim_frame_count += 1

        observations, rewards, dones, info = self._task.post_physics_step()

but it is still not working.

Could you share your experience or know-how for dealing with lidar depth data in headless mode?

Thank you in advance!

@swimpark

So far, I have been checking other aspects of the point cloud. If I succeed in headless mode, I will share my results.

(I am sorry to ask, but are you Korean? If so, would you mind chatting on KakaoTalk? My ID is psh9002.)

Hi, @swimpark

I checked the point cloud data in my environment.

I referred to the following part of kellyg’s answer:

Alternatively, you can also choose to pass the env object into the Task class, and make a call to self._env.render() from the task’s pre_physics_step method.

So, I created a self._env = env variable and then added self._env.render() to the pre_physics_step method in my environment code.

Here is an example that comes from my code.

class MovingTargetTask(RLTask):
    def __init__(self, name, sim_config, env, offset=None) -> None:
        self._sim_config = sim_config
        self._cfg = sim_config.config
        self._task_cfg = sim_config.task_config
        self.step_num = 0
        self.dt = 1 / 120.0
        self._env = env  # keep a reference to the env so methods can call render()

    ....

    def pre_physics_step(self, actions) -> None:
        self._env.render()  # render so the lidar is updated even in headless mode
        ...

Before adding self._env.render(), I could not get any point cloud data in headless mode, even though I could get it in non-headless mode. Now I can get the point cloud, and I can also check it via visualization.

Now, I have a follow-up question for @kellyg. When I use self._env.render(), does the simulator render the entire environment and just not display the result on screen, or does it only render what is needed to get the point cloud?

self._env.render() will render the entire simulation, but the results will not be visible if a display is not launched when running in headless mode. We do not currently have a mechanism to render only what’s needed for point clouds.
