Bug in get_camera_image in the Gym API?

In order to use image data for reinforcement learning, I am trying to obtain sensor data from cameras placed in each environment.
(I’m using Isaac Gym Preview 3)

However, when I tried get_camera_image(sim, env, camera_handle, gymapi.IMAGE_COLOR), the shape of the returned numpy array was [width, height * channels (RGBA)] instead of [width, height, channels].
When I used get_camera_image(sim, env, camera_handle, gymapi.IMAGE_DEPTH), the shape of the numpy array was correctly [width, height].
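For now I am working around this by reshaping the flat color array myself. Here is a rough sketch of my workaround (it assumes the trailing axis holds the 4 RGBA channels; sim, env and camera_handle come from my own setup code):

from isaacgym import gymapi

gym = gymapi.acquire_gym()
# ... sim, env and camera_handle are created as usual ...

gym.render_all_camera_sensors(sim)

# color comes back as a 2-D array with the 4 RGBA channels flattened into the second axis
color_flat = gym.get_camera_image(sim, env, camera_handle, gymapi.IMAGE_COLOR)
color_image = color_flat.reshape(color_flat.shape[0], -1, 4)  # split the channels back out

# depth already comes back as a plain 2-D array
depth_image = gym.get_camera_image(sim, env, camera_handle, gymapi.IMAGE_DEPTH)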

Of course, I also tried get_camera_image_gpu_tensor. However, if I set camera_props.enable_tensors = True, I got the following errors when calling create_camera_sensor:

[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on rgbImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on depthImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on segmentationImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on optical flow buffer with error 101

I am still investigating this error. (I tried this solution, but it did not solve the problem in my case.)
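For reference, this is roughly the code path that produces the errors above (a sketch of my setup; sim and env creation are omitted):

from isaacgym import gymapi, gymtorch

gym = gymapi.acquire_gym()
# ... sim and env are created as usual ...

camera_props = gymapi.CameraProperties()
camera_props.enable_tensors = True  # turning this on is what triggers the errors at create_camera_sensor
camera_handle = gym.create_camera_sensor(env, camera_props)

# GPU tensor that stays on the device
cam_tensor = gym.get_camera_image_gpu_tensor(sim, env, camera_handle, gymapi.IMAGE_COLOR)
torch_cam_tensor = gymtorch.wrap_tensor(cam_tensor)

# per simulation step
gym.render_all_camera_sensors(sim)
gym.start_access_image_tensors(sim)
# ... read torch_cam_tensor here ...
gym.end_access_image_tensors(sim)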

Please let me know if I’m wrong about anything.

Also, a question about the documentation: IMAGE_COLOR in class isaacgym.gymapi.ImageType is described as "Image RGB", but should that be RGBA?

Hi,

Yes, it appears that get_camera_image(sim, env, camera_handle, gymapi.IMAGE_COLOR) returns dimensions [width, height * channels (4)]. This isn’t an issue for IMAGE_DEPTH since depth has only one channel. I believe the dimensions should be correct for the tensor API. Regarding the tensor API error, are you running on a machine with multiple GPUs? It can sometimes help to limit visibility to a single GPU by setting CUDA_VISIBLE_DEVICES, or you could check whether it works in a Docker container.
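For example, something along these lines, making sure the variable is set before Isaac Gym initializes CUDA (this assumes GPU 0 is the device you want to keep; you can equally set the variable on the command line when launching your script):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must happen before isaacgym/torch touch CUDA

from isaacgym import gymapi

gym = gymapi.acquire_gym()
# ... create sim, envs and camera sensors as before ...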

Thank you for your reply.
I’ll try the method you suggested. I’ll post here again if I find out anything else.