I’m currently trying to simulate 8 or so cameras at once and have them all publish over ROS. However, doing so causes my GPU (an RTX 2080 with 8 GB) to run out of memory. I’ve found that I can get up to around 4 cameras working at the same time, using ~7.2 GB. Are there any methods for lowering the memory usage in many-camera use cases?
EDIT: I should mention that I’m effectively only using the cameras for ROS. If there’s a way to disable their viewport rendering while still updating them in ROS, that would also help.
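For reference, the per-camera setup looks roughly like the sketch below (a minimal example assuming the `omni.isaac.sensor.Camera` wrapper; the prim path, resolution, and frequency are placeholders):

```python
from omni.isaac.sensor import Camera

# Minimal per-camera setup (path and numbers are placeholders).
# Each Camera still creates its own render product, which seems to be
# where most of the VRAM goes.
camera = Camera(
    prim_path="/World/robot/front_camera",  # hypothetical prim path
    resolution=(320, 240),                  # smaller buffers -> less VRAM per camera
    frequency=20,                           # Hz
)
camera.initialize()
```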
We are aware of the high memory usage when using multiple cameras, and of the limitation that each camera is tied to a viewport. We are working on this for the next release.
Thanks! But how do I set up multiple cameras with multiple GPUs? I run my robot in an Isaac Sim container on a remote headless machine with 8x RTX 3080 Ti. It runs normally, but the OmniGraph publisher rate is very low, less than 10 Hz.
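This is how I’ve been trying to steer the renderer across the GPUs (a sketch using carb settings; the keys mirror the documented `--/renderer/activeGpu` and `--/renderer/multiGpu/enabled` launch flags, and I’m not certain setting them from Python after startup has any effect, since they are normally passed at launch):

```python
import carb.settings

settings = carb.settings.get_settings()

# These mirror the --/renderer/... launch flags; setting them after startup
# may be too late for the RTX renderer to pick them up.
settings.set("/renderer/multiGpu/enabled", True)  # let the renderer span GPUs
settings.set("/renderer/activeGpu", 0)            # or pin rendering to one GPU index
```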
We have 2 GeForce RTX 3080 GPUs. With only 6 robots, each with a camera capturing a depth image, half of the memory on each GPU is consumed. The camera resolution is only 171 x 224. Is it normal that this takes so much GPU RAM, or are we doing something wrong?
Without the cameras, we can easily have 100 robots.
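For a rough sanity check on our numbers, the raw depth buffer itself is tiny:

```python
# Back-of-the-envelope: one 171 x 224 depth frame at 32-bit float.
width, height = 171, 224
bytes_per_pixel = 4  # float32 depth
frame_mb = width * height * bytes_per_pixel / 1024**2
print(f"{frame_mb:.3f} MB per frame")  # ~0.146 MB -> under 1 MB for all 6 cameras
```

So whatever is eating the VRAM appears to be per-viewport renderer state, not the image buffers themselves.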
Hi!
I’m having the same issue, still with the latest version.
Cameras are tied to their viewports, and I still haven’t found a way to deactivate them at runtime, or even after stopping a simulation that has already run once.
Is there any known workaround?
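For anyone searching, the kind of toggle I’m after looks like the sketch below (assuming the `pause()`/`resume()` methods on `omni.isaac.sensor.Camera`, which I believe newer releases expose; I haven’t confirmed that pausing actually releases VRAM that is already allocated):

```python
from omni.isaac.sensor import Camera

# Hypothetical prim path; resolution matches our depth cameras.
camera = Camera(prim_path="/World/robot/cam_0", resolution=(171, 224))
camera.initialize()

# ... run the simulation ...

camera.pause()             # stop the sensor's render product from updating
print(camera.is_paused())  # True while paused
camera.resume()            # re-enable later if needed
```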
Hi @rthaker
My scene is not that big, yet the cameras still consume a lot of VRAM. My problem is not how much they consume, but how to get the VRAM back when pausing the simulation.
Currently, once the simulation has run and the cameras start rendering, it does not matter whether you pause or stop the simulation: the viewport keeps rendering and your VRAM is never freed.
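The closest thing to a workaround I can think of is destroying the extra viewport windows outright after stopping, something like the sketch below (the window title is a placeholder; `get_window()` and `destroy()` come from `omni.ui`, but I can’t confirm this actually returns the VRAM):

```python
import omni.ui as ui

# Hypothetical cleanup: find the per-camera viewport window by title and
# destroy it. "Viewport 2" is a made-up name; whether this releases the
# render buffers is exactly what I haven't been able to verify.
window = ui.Workspace.get_window("Viewport 2")
if window is not None:
    window.destroy()
```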