gym.get_camera_image_gpu_tensor only returns the image tensor for one camera. If I have multiple environments, I would need a for loop to collect these tensor references, and during simulation I would need a
stack (or similar) operation to combine them into one tensor, which seems to add extra time cost. Is there a way to get all the image tensors together instead of getting them one by one?
The documentation describes how you can use the Gym API to directly access the tensor containing the camera image on the GPU, without copying back and forth to the CPU, which would cause a lot of overhead. I have not spent a lot of time working with the cameras, but if you haven't already, I suggest you read the following:
and check out the interop_torch.py example, which shows how this can be done. There they first create a handle for each camera, obtain the Gym API tensor, wrap it in a PyTorch tensor, and append it to a list. During simulation they loop over all environments and obtain each camera tensor from the list.
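To sketch that pattern: the expensive wrapping happens once before the simulation loop, so the per-step cost of building one batched tensor is just a single torch.stack over the list. Below is a minimal sketch using plain PyTorch tensors as stand-ins for the wrapped GPU camera tensors (the Isaac Gym calls that would produce them are only indicated in comments; shapes and names are illustrative assumptions, not the library's API):

```python
import torch

# Stand-ins for the per-environment camera image tensors. In Isaac Gym these
# would come from gym.get_camera_image_gpu_tensor(...) wrapped once via
# gymtorch.wrap_tensor(...) before the simulation loop, so each list entry
# aliases GPU memory that the simulator refreshes in place every step.
num_envs, height, width = 4, 64, 64
cam_tensors = [torch.zeros(height, width, 4) for _ in range(num_envs)]

def batch_images(cam_tensors):
    # Per-step work to combine the per-camera views: one torch.stack call
    # producing a (num_envs, H, W, C) tensor. This is a single copy kernel,
    # not a per-environment Python loop over image data.
    return torch.stack(cam_tensors, dim=0)

batch = batch_images(cam_tensors)
print(tuple(batch.shape))  # (4, 64, 64, 4)
```

Note that torch.stack still performs one device-side copy into the batched tensor; what the GPU-tensor API saves is the much larger cost of transferring each image to the CPU and back.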