How to get point clouds from simulated cameras in a gym environment

Hello,
I am sorry for posting such a basic question, but I can't figure out how to do this; any help is very much appreciated.

I have simulated cameras attached to each robot in its gym environment (one camera per robot; I can add a second one if a stereo pair is needed). I want to retrieve point clouds from each camera, process them a little, and use them as input to the training algorithm. How can I get a point cloud (or perhaps a depth map) from each of the cameras? The cameras' prim paths follow this pattern: /World/envs/env_0/Robot/base/Camera/Camera_01. I would like to do this using the Python API if possible.
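In case it matters, the processing I have in mind starts with back-projecting a depth map into a point cloud. A minimal numpy sketch, assuming a standard pinhole model where fx, fy, cx, cy stand in for whatever intrinsics the simulated camera reports:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in meters into an (N, 3) point cloud
    using a standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    # Drop invalid returns (zero, negative, or infinite depth)
    valid = np.isfinite(points[:, 2]) & (points[:, 2] > 0)
    return points[valid]
```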

Thank you very much in advance.

Hi there,

We do not currently have a tensor API to retrieve camera or sensor data from all cameras in the scene. We will be looking to provide this in future releases. For now, try referencing this tutorial, 9. Using Sensors: LIDAR — Omniverse Robotics documentation, to see if it's possible to loop through your robots to retrieve the data; a rough sketch of such a loop is below.
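Untested, but the loop could look something like this (the per-env LIDAR prim path and environment count are assumptions based on your naming scheme; the interface calls come from the tutorial above):

```python
from omni.isaac.range_sensor import _range_sensor

lidar_interface = _range_sensor.acquire_lidar_sensor_interface()

num_envs = 16  # placeholder: however many environments you spawn
for i in range(num_envs):
    # Assumed path: one LIDAR prim per robot, following your env naming scheme
    lidar_path = f"/World/envs/env_{i}/Robot/base/Lidar"
    depth = lidar_interface.get_linear_depth_data(lidar_path)  # depths in meters
    azimuth = lidar_interface.get_azimuth_data(lidar_path)     # horizontal angles
    zenith = lidar_interface.get_zenith_data(lidar_path)       # vertical angles
    # depth + azimuth + zenith can then be combined into an (N, 3)
    # point cloud for each robot
```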

Thank you very much for your reply @kellyg , I will take a look at the LIDAR in Isaac Sim. For now I am using a raycast solution (roughly sketched below), which does the job, but it would be really interesting to know when the camera API might get integrated. Is there a rough estimate of when this will be?
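For reference, my raycast workaround looks roughly like this. It is a simplified sketch: the ray directions are assumed to be precomputed from the camera pose, and I am going off my reading of the omni.physx scene query interface, so treat the details as approximate:

```python
import numpy as np
import carb
from omni.physx import get_physx_scene_query_interface

def raycast_depth_points(origin, directions, max_dist=100.0):
    """Cast one ray per 'pixel' from the camera origin and collect the hit
    positions, approximating a depth camera. `directions` is an (N, 3) array
    of unit ray directions in world coordinates."""
    query = get_physx_scene_query_interface()
    points = []
    for d in directions:
        hit = query.raycast_closest(carb.Float3(*origin), carb.Float3(*d), max_dist)
        if hit["hit"]:
            pos = hit["position"]
            points.append([pos.x, pos.y, pos.z])
    return np.asarray(points)
```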

We are hoping to have this available later this year or early next year.
