Here is an observation.
When the simulation and all of its environments are destroyed, not all of the VRAM they used is freed while the Python script keeps running.
This can be observed even when:
- the environments are empty
- there is no viewer (with a viewer, even more VRAM stays occupied)
- either simulator is used (both Flex and PhysX, though with Flex more VRAM stays occupied)
A minimal example script is attached:
test_vram.py (1.8 KB)
In a loop, I repeatedly create and destroy a simulation with 1000 environments containing only a ground plane. VRAM usage grows until the GPU memory is exhausted and the script aborts.
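For reference, the loop in my script looks roughly like this (a sketch of the reproduction, written against the standard `isaacgym.gymapi` calls; it needs Isaac Gym and an NVIDIA GPU to actually run, so the import is guarded):

```python
# Sketch of the create/destroy loop from the reproduction script.
# Assumes the standard isaacgym gymapi API; parameters are illustrative.
import importlib.util

def create_and_destroy_once(num_envs=1000):
    from isaacgym import gymapi  # deferred import: only needed at run time
    gym = gymapi.acquire_gym()
    sim_params = gymapi.SimParams()
    # compute device 0, graphics device 0, PhysX backend
    sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, sim_params)
    gym.add_ground(sim, gymapi.PlaneParams())  # ground plane only
    lower = gymapi.Vec3(-1.0, -1.0, 0.0)
    upper = gymapi.Vec3(1.0, 1.0, 1.0)
    for _ in range(num_envs):
        gym.create_env(sim, lower, upper, 32)  # empty environments
    gym.destroy_sim(sim)  # VRAM is not fully released after this

if importlib.util.find_spec("isaacgym") is not None:
    for _ in range(100):  # VRAM usage grows each iteration
        create_and_destroy_once()
```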
Tested with nvidia-driver-460 on a GTX 1650.
Is this expected behavior, or am I destroying the environments incorrectly?
P.S. Why was I interested in destroying and creating simulations within one script at all? Since there seems to be no way to remove an asset and introduce a new one in an environment after the simulation has started (or is there?), my plan was to reinitialize the whole simulation with a new set of assets from time to time.
In another topic it was advised to simulate thousands of environments and/or move the assets outside of the scene when they are not needed. My concern is that bodies placed outside the robot's view would still consume resources for their simulation. This may matter less for rigid bodies in PhysX, but it seems more critical for soft bodies in Flex.
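If moving assets out of the scene does turn out to be viable, one way to do it might be something like the helper below. This is a hypothetical sketch against the standard `gymapi` rigid-body state calls; `park_actor` and the parking position are my own names, and it only covers rigid bodies, not Flex soft bodies:

```python
def park_actor(gym, env, actor_handle, far=(1000.0, 1000.0, 0.0)):
    """Teleport an unused actor's rigid bodies far outside the robot's
    workspace instead of destroying the whole simulation (sketch only)."""
    from isaacgym import gymapi  # deferred import: only needed at run time
    # Fetch the actor's rigid-body states as a structured numpy array
    states = gym.get_actor_rigid_body_states(env, actor_handle,
                                             gymapi.STATE_POS)
    # Overwrite every body position with the far-away parking point
    states["pose"]["p"]["x"] = far[0]
    states["pose"]["p"]["y"] = far[1]
    states["pose"]["p"]["z"] = far[2]
    gym.set_actor_rigid_body_states(env, actor_handle, states,
                                    gymapi.STATE_POS)
```

Even so, I would expect the parked bodies to still take part in the physics step, which is exactly my worry above.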
Could this issue be overcome with some other approach?