Hi.
I am trying to train my agent using DRL with sensor observations, such as RGB-D camera, lidar, etc.
However, I could not find any example of tensorized sensor data in OmniIsaacGymEnvs other than force sensor data.
Meanwhile, I found an answer from NVIDIA's Isaac Sim team saying they were developing that feature at the time (2 Nov 2022). Post link
Is it still being developed?
If it is, is there any way to accelerate DRL using sensor data such as RGB-D, lidar, or point cloud data in OmniIsaacGymEnvs?
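For context, here is a minimal sketch of the kind of batched ("tensorized") observation pipeline I have in mind. This is plain NumPy standing in for GPU tensors, and the env count, resolution, and helper names are all hypothetical, not OmniIsaacGymEnvs API:

```python
import numpy as np

# Hypothetical sizes: 4 parallel envs, 64x64 RGB-D frames.
NUM_ENVS, H, W = 4, 64, 64

def get_rgbd_frame(env_id):
    """Stand-in for a per-env sensor read (e.g. an RGB-D camera).
    Returns an (H, W, 4) array: 3 RGB channels + 1 depth channel."""
    rng = np.random.default_rng(env_id)
    rgb = rng.integers(0, 256, size=(H, W, 3)).astype(np.float32) / 255.0
    depth = rng.random((H, W, 1), dtype=np.float32)
    return np.concatenate([rgb, depth], axis=-1)

def get_observations():
    """Stack per-env frames into one batched observation buffer,
    shaped (num_envs, C, H, W) as most DRL frameworks expect."""
    frames = np.stack([get_rgbd_frame(i) for i in range(NUM_ENVS)])  # (N, H, W, 4)
    return np.transpose(frames, (0, 3, 1, 2))  # (N, 4, H, W)

obs = get_observations()
print(obs.shape)  # (4, 4, 64, 64)
```

Ideally this stacking would happen on the GPU without copying each frame back to the host, which is what I mean by accelerating DRL with sensor data.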
Thank you in advance!