I am currently trying to train a legged robot to walk in Isaac Gym. I wrote a reinforcement learning task following the provided tutorials and started training. The overall performance is really good compared to simulators that run on the CPU.
Using an RTX 4080 and spawning 4096 environments, the task can process 100,000 steps in approximately 4m 15s, which is awesome compared to the last CPU-based simulator I used, which took 1h 34m 15s to process the same number of steps.
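For context, here is a quick back-of-the-envelope check of those timings (just arithmetic on the numbers quoted above):

```python
# Timings reported above, converted to seconds
gpu_seconds = 4 * 60 + 15            # 4m 15s  -> 255 s (Isaac Gym, RTX 4080)
cpu_seconds = 1 * 3600 + 34 * 60 + 15  # 1h 34m 15s -> 5655 s (CPU simulator)
steps = 100_000

print(f"GPU throughput: {steps / gpu_seconds:.0f} steps/s")   # ~392 steps/s
print(f"CPU throughput: {steps / cpu_seconds:.0f} steps/s")   # ~18 steps/s
print(f"Speedup: {cpu_seconds / gpu_seconds:.1f}x")           # ~22.2x
```

So the GPU pipeline is roughly a 22x wall-clock speedup over the CPU simulator for the same step count.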
However, I have noticed that during training the GPU uses only 6.5 GB of memory and reaches only about 20% utilization (monitoring with nvtop). Is there a way to make the task use more of the GPU's resources? And would doing so reduce the time required to process a step?