To use image observations for reinforcement learning, I am trying to obtain sensor data from a camera attached to each environment.
(I’m using Isaac Gym Preview 3)
However, when I called get_camera_image(sim, env, camera_handle, gymapi.IMAGE_COLOR), the shape of the returned numpy array was [width, height * channels(RGBA)] instead of [width, height, channels].
When I used get_camera_image(sim, env, camera_handle, gymapi.IMAGE_DEPTH), the shape of the numpy array was [width, height] as expected.
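For now I am working around the flattened color image by reshaping it back into an image tensor on the CPU. A minimal numpy sketch, assuming the flattened axis really is the interleaved per-pixel RGBA values (the dimensions below are hypothetical, just for illustration):

```python
import numpy as np

# Hypothetical camera dimensions for illustration (not from the Isaac Gym docs).
width, height, channels = 128, 64, 4  # RGBA

# Simulate what get_camera_image(..., gymapi.IMAGE_COLOR) returned in my case:
# a flattened 2-D array of shape [width, height * channels].
flat = np.random.randint(0, 256, size=(width, height * channels), dtype=np.uint8)

# Reshape so each pixel's RGBA values are grouped along the last axis.
image = flat.reshape(width, height, channels)

# Drop the alpha channel if only RGB is needed for the RL observation.
rgb = image[..., :3]
```

This assumes the memory layout is row-major with channels interleaved per pixel; if the actual layout differs, the reshape would scramble the image, so it is worth verifying against a rendered frame.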
Of course, I also tried get_camera_image_gpu_tensor. However, when I set camera_props.enable_tensors = True, create_camera_sensor fails with the following errors:
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on rgbImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on depthImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on segmentationImage buffer with error 101
[Error] [carb.gym.plugin] cudaExternalMemoryGetMappedBuffer failed on optical flow buffer with error 101
I am still investigating this error. (I tried this solution, but it did not solve the problem in my case.)
Please let me know if I’m wrong about anything.
Also, regarding the documentation: IMAGE_COLOR in class isaacgym.gymapi.ImageType is described as "Image RGB", but is this a mistake for RGBA?