In the Isaac Sim environment, objects and the lidar interact normally.
Could you confirm why this phenomenon occurs, and whether it is reasonable to use lidar depth values as observation values in OIGE?
from omni.isaac.range_sensor import _range_sensor
...
self.lidarPath = "/World/envs/env_0/robot/flange/Front_lidar"
self._li = _range_sensor.acquire_lidar_sensor_interface()
...
depth = self._li.get_depth_data(self.lidarPath)  # this data will be converted into tensors
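A minimal sketch of the conversion step mentioned in the comment, assuming the lidar interface returns a NumPy-compatible array and that the observations are PyTorch tensors (the helper name and the `device` parameter are illustrative, not part of the OIGE API):

```python
import numpy as np
import torch

def depth_to_tensor(depth, device="cpu"):
    # the lidar interface returns an array-like depth buffer;
    # cast to float32 so it can be used directly as an RL observation
    arr = np.asarray(depth, dtype=np.float32)
    return torch.from_numpy(arr).to(device)
```

In OIGE the resulting tensor would typically be written into the task's observation buffer (e.g. `self.obs_buf`) on the same device as the rest of the simulation tensors.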
Not sure if this is relevant (I am not an Isaac Sim user), but there was another thread a few days ago regarding lidar and collisions in OIGE. Just wanted to pass it along.
Alternatively, you can also choose to pass the env object into the Task class, and make a call to self._env.render() from the task’s pre_physics_step method.
So I created a self._env = env variable, then added self._env.render() to the pre_physics_step method in the environment code.
Here is an example that comes from my code.
class MovingTargetTask(RLTask):
    def __init__(self, name, sim_config, env, offset=None) -> None:
        self._sim_config = sim_config
        self._cfg = sim_config.config
        self._task_cfg = sim_config.task_config
        self.step_num = 0
        self.dt = 1 / 120.0
        self._env = env  # store the env so it can be used in the task's methods
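The pattern above can be sketched in isolation as follows. Note that `FakeEnv` and the simplified `MovingTargetTask` here are stand-in stubs for illustration, not the real OIGE classes:

```python
class FakeEnv:
    """Stand-in for the OIGE vectorized env; counts render calls."""
    def __init__(self):
        self.render_calls = 0

    def render(self):
        self.render_calls += 1

class MovingTargetTask:
    def __init__(self, env):
        self._env = env  # keep a handle so the task can trigger rendering

    def pre_physics_step(self, actions):
        # force a render each physics step so sensor buffers
        # (e.g. lidar point clouds) are updated even in headless mode
        self._env.render()

env = FakeEnv()
task = MovingTargetTask(env)
for _ in range(3):
    task.pre_physics_step(None)
```

The key design point is simply that the task holds a reference to the env, so it can request a render on every physics step rather than only when the viewer is active.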
Before adding self._env.render(), I could not get any point cloud data in headless mode, though I could get it in non-headless mode. Now I can get the point cloud and also verify it via visualization.
Now I have a follow-up question for @kellyg. When I use self._env.render(), does the simulator render the entire environment and just not display the result on screen, or does it render only what is needed to get the point cloud?
self._env.render() will render the entire simulation, but the results will not be visible if a display is not launched when running in headless mode. We do not currently have a mechanism to render only what’s needed for point clouds.