How to use Isaac Gym for Visual Observation Deep Reinforcement Learning?

Hi,
I’m developing an RL environment that takes an image as its observation, but I can’t find anything in the documentation or examples about how to use image data as the observation buffer (obs_buf). After calling get_camera_image_gpu_tensor and gymtorch.wrap_tensor, I don’t know how to correctly convert the resulting tensor into the obs_buf data type used in the other examples.
I also see in the train_cfg the comment policy: # only works for MlpPolicy right now. Does that mean that if I want to use image input, I have to override scripts like vec_task.py and rl_pytorch/ppo/module.py, or flatten the image data into a 1-D tensor and feed it to an MlpPolicy?
Can anyone give me some suggestions on how to solve this problem?

Thanks


It seems that neither of the currently compatible RL libraries (rl_pytorch and rl_games) supports images as input, so you would have to bring in a third-party RL library.

Hi @hosh0425
thanks for your reply. I used a simple method that seems to solve this problem. First, flatten the image data into one dimension; you can use view() or einops.rearrange() for this.
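As a minimal sketch of the flattening step, assuming a camera tensor of shape (num_envs, H, W, C) — in Isaac Gym this would come from get_camera_image_gpu_tensor wrapped via gymtorch.wrap_tensor, stood in for here by a random tensor:

```python
import torch

# Stand-in for a wrapped camera tensor: a batch of RGBA images.
# In Isaac Gym this would come from gym.get_camera_image_gpu_tensor
# followed by gymtorch.wrap_tensor.
num_envs, H, W, C = 4, 64, 64, 4
camera_images = torch.rand(num_envs, H, W, C)

# Flatten each image into a 1-D vector so it fits the (num_envs, num_obs)
# shape that obs_buf expects.
obs_buf = camera_images.view(num_envs, -1)

# Equivalent with einops:
#   from einops import rearrange
#   obs_buf = rearrange(camera_images, 'b h w c -> b (h w c)')
print(obs_buf.shape)  # torch.Size([4, 16384])
```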
Second, use a local copy of the rl_pytorch package rather than the one already installed in site-packages. The simplest way is to copy the rl_pytorch folder into the tasks/utils folder, then change

from rl_pytorch.ppo import PPO, ActorCritic

to

from utils.rl_pytorch.ppo import PPO, ActorCritic

in process_ppo.py.

Finally, implement your custom CNN in rl_pytorch/ppo/module.py and assign it to self.actor and self.critic.

Remember that, inside the network, you first need to recover the image layout from the flattened data, for example with einops.Rearrange('b (h w c) -> b h w c').
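A minimal sketch of such a CNN actor/critic, assuming 64x64 RGBA images flattened as above (the layer sizes and 7-dim action space are illustrative, not from the original post). The un-flattening module here is plain PyTorch, equivalent to the einops Rearrange mentioned above but permuted to the channels-first layout that Conv2d expects:

```python
import torch
import torch.nn as nn

H, W, C = 64, 64, 4  # assumed camera resolution; adjust to your sensor


class Unflatten(nn.Module):
    """Recover (b, c, h, w) images from flattened (b, h*w*c) observations.

    Equivalent to einops.layers.torch.Rearrange('b (h w c) -> b c h w'),
    i.e. undo the flattening, then move channels first for Conv2d.
    """
    def forward(self, x):
        return x.view(-1, H, W, C).permute(0, 3, 1, 2)


def make_cnn(out_dim):
    # Small conv trunk: 64x64 -> 15x15 -> 6x6 feature maps.
    return nn.Sequential(
        Unflatten(),
        nn.Conv2d(C, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 6 * 6, 128), nn.ReLU(),
        nn.Linear(128, out_dim),
    )


actor = make_cnn(7)   # e.g. a 7-dim action space; replaces self.actor
critic = make_cnn(1)  # state-value head; replaces self.critic

obs = torch.rand(4, H * W * C)  # a flattened batch, as stored in obs_buf
print(actor(obs).shape)  # torch.Size([4, 7])
```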

I put my implementation on my GitHub:
https://github.com/cypypccpy/Isaac-drlgrasp
