Read out camera information from each robot in an RL environment

Hi all,

I have a problem similar to the one described in that post.

I have a certain number of robots in an Isaac Gym environment. Each robot has a camera attached to the manipulator. I want to read and process the RGB image and depth image for each camera during the training process.

In that post, a camera API for this was announced by @kellyg. Does the latest version of OmniIsaacGymEnvs already support this? I found the flag `enable_cameras: False` in the task's yaml file, but it is not clear to me how the camera data can actually be read out.

Can anyone tell me how to read out the sensor data?

Many thanks.

Hi there, we are still working on the APIs and an example for retrieving tensorized sensor data. We hope to have an update in a future release this year.


Hi,

Do you mean it is currently impossible to use vision information in an Omni Isaac Sim RL environment?

Hi all,

I'll try it with the functions from the Isaac Sim tutorial Using Sensors: LIDAR. However, I imagine this would slow down training considerably, since you have to iterate over all the cameras at every time step.
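The per-camera iteration described above could be sketched roughly like this. Note this is a generic illustration, not the actual Isaac Sim sensor API: `DummyCamera`, `get_rgb`, and `get_depth` are hypothetical stand-ins for whatever per-camera read calls the sensor extension provides.

```python
import numpy as np

class DummyCamera:
    """Hypothetical stand-in for a per-robot camera sensor (not the Isaac Sim API)."""
    def __init__(self, h=64, w=64):
        self.h, self.w = h, w

    def get_rgb(self):
        # In the real setup this would return the rendered RGB frame.
        return np.zeros((self.h, self.w, 3), dtype=np.uint8)

    def get_depth(self):
        # In the real setup this would return the depth buffer.
        return np.zeros((self.h, self.w), dtype=np.float32)

def collect_observations(cameras):
    """Iterate over every camera each step and batch the frames for the policy."""
    rgb = np.stack([cam.get_rgb() for cam in cameras])      # (N, H, W, 3)
    depth = np.stack([cam.get_depth() for cam in cameras])  # (N, H, W)
    return rgb, depth

cameras = [DummyCamera() for _ in range(4)]
rgb, depth = collect_observations(cameras)
print(rgb.shape, depth.shape)  # (4, 64, 64, 3) (4, 64, 64)
```

The Python loop itself is usually not the bottleneck; the per-camera render and host-side copy are, which is why a batched API is preferable.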

Maybe there are better / more efficient alternatives.


It is currently possible to retrieve vision data by setting up individual cameras or sensors in your environments, and iterating through them each step to collect the data. The new tensorized APIs that are being worked on will improve the performance by allowing batched collection from sensors.
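Until batched APIs are available, one way to keep the per-step loop cheap is to preallocate the observation buffers once and copy each camera's frame into them in place, instead of allocating new arrays every step. This is a generic sketch under that assumption; `read_frame` is a hypothetical placeholder for the actual per-camera read call.

```python
import numpy as np

NUM_ENVS, H, W = 8, 64, 64

# Preallocated once at startup; reused on every simulation step.
rgb_buf = np.empty((NUM_ENVS, H, W, 3), dtype=np.uint8)
depth_buf = np.empty((NUM_ENVS, H, W), dtype=np.float32)

def read_frame(i):
    """Hypothetical per-camera read; returns (rgb, depth) for camera i."""
    return (np.full((H, W, 3), i, dtype=np.uint8),
            np.full((H, W), float(i), dtype=np.float32))

def collect_step():
    # Still per-camera iteration, but writing in place into shared buffers,
    # so no new observation arrays are allocated each step.
    for i in range(NUM_ENVS):
        rgb, depth = read_frame(i)
        rgb_buf[i] = rgb
        depth_buf[i] = depth
    return rgb_buf, depth_buf

rgb, depth = collect_step()
print(rgb.shape, depth[3, 0, 0])  # (8, 64, 64, 3) 3.0
```

A truly tensorized API would replace the loop body with a single batched copy, which is the performance improvement the upcoming release is aiming at.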


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.