Why does the gym environment occupy some amount of GPU memory on each visible GPU?

Hi,

If I run Isaac Gym on one GPU, it seems to spawn something on all the other visible GPUs and consume some memory on each of them, so the available GPU memory on those other GPUs is reduced. Is there a way to circumvent this issue?

For example:

I am running the job on GPU 0, but some amount of GPU memory is also consumed on the other GPUs.

Hi @taocc,

When a task is launched in Isaac Gym, we create a CUDA context on each visible GPU device to ensure a valid CUDA context exists on all of them. That is likely why you are seeing GPU memory usage on all of your GPUs. If you'd like to limit the usage to a single GPU, you can use export CUDA_VISIBLE_DEVICES to specify which GPU to use before launching.
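
A minimal sketch of what that looks like in practice (the script name train.py is a placeholder for whatever launches your Isaac Gym task):

```shell
# Make only physical GPU 0 visible to the process; Isaac Gym will then
# create its CUDA context (and allocate memory) on that device alone.
export CUDA_VISIBLE_DEVICES=0

# Confirm the restriction is in effect for child processes.
echo "Visible devices: $CUDA_VISIBLE_DEVICES"

# Launch your task as usual (placeholder script name):
# python train.py
```

Note that with CUDA_VISIBLE_DEVICES=0, the selected GPU appears to the process as device 0 regardless of its physical index, so device indices inside your code are remapped accordingly.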

Thanks,
Kelly