Reducing GPU memory usage for multi-camera use cases

I’m currently trying to simulate 8 or so cameras at once while having them all publish with ROS. However, it seems like doing so causes my GPU to run out of memory (RTX 2080, 8 GB). I’ve found that I can get up to around 4 cameras working at the same time, using up ~7.2 GB. Are there any methods for lowering the memory usage for many-camera use cases?

EDIT: I should mention that I’m effectively only using the cameras for ROS. If there’s a way to disable their viewport rendering while still updating in ROS, that might also help.
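For reference, something like this is roughly what I’m hoping for: create each camera as a plain render product at low resolution, with no viewport window attached. This is a hypothetical sketch based on my reading of the Replicator docs; the prim paths and resolution are just placeholders for my setup:

```python
import omni.replicator.core as rep

# Hypothetical: one render product per camera, no viewport window.
# Keeping the resolution low should also keep per-camera VRAM down.
render_products = []
for i in range(8):
    rp = rep.create.render_product(
        f"/World/Robot_{i}/Camera",  # placeholder camera prim path
        resolution=(640, 480),
    )
    render_products.append(rp)
```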

Hi wchen,

We are aware of the memory usage with multiple cameras, and also of the limitation that each camera is tied to a viewport. We are working on this for the next release.

Kindly,
Liila

Thanks! Do you guys have an estimate for when the next release is?

The next release will be in October.

I’m hitting the same issue: multiple cameras lead to a core dump because there is not enough GPU memory.

Here is the Isaac Sim app log:
kit_20220614_110632.log (3.2 MB)

For multiple cameras at high resolution, you do require multiple GPUs.

Thanks! But how do I set up multiple cameras with multiple GPUs? I run my robot in an Isaac Sim container on a remote headless machine with 8x 3080 Ti GPUs. It runs normally, but the OmniGraph publisher rate is very low, less than 10 Hz.

Hopefully this will help you: Rendering Basics — Omniverse Materials and Rendering documentation
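In case it helps, the multi-GPU behavior is controlled by Kit settings under `/renderer`. Here is a minimal sketch, assuming your build exposes these settings; the setting path is from the rendering docs linked above, so verify it against your version, and note that it may only take effect when passed at app startup (e.g. as `--/renderer/multiGpu/enabled=true` on the launch command) rather than at runtime:

```python
import carb.settings

settings = carb.settings.get_settings()

# Enable multi-GPU rendering so the renderer can spread work across the GPUs.
# This may need to be set at app startup rather than from a running script.
settings.set("/renderer/multiGpu/enabled", True)
```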

We have 2 GeForce RTX 3080 GPUs. Having only 6 robots, each with a camera (with which we capture a depth image), takes half of the memory of each GPU. The camera resolution is only 171 x 224… Is it normal that it takes so much GPU RAM? Or are we doing something wrong?

Without the cameras, we can easily have 100 robots.

Here are some memory snapshots (from nvidia-smi):

[nvidia-smi snapshots for 6, 11, and 21 robots]

26 robots: Out of memory

Hi - Sorry for the delay in the response. Let us know if you are still having this issue with the latest Isaac Sim 2022.2.1 release.

Hi!
I’m having the same issue, still with the latest version.
Cameras are tied to their viewports, and I still can’t find a way of deactivating them at runtime, or even after stopping the simulation once it has already run.
Is there any known workaround?

Hi @christianbarcelo - Can you provide more information about how big the scene is?

The GPU memory usage has more to do with the geometry and textures in the scene than with the resolution of the camera.

Hi @rthaker
My scene is not that big, though the cameras still consume a lot. My problem is not a matter of how much they consume, but how to get the VRAM back when pausing the simulation.
Currently, once the simulation has run and the cameras start rendering, it does not matter whether you pause or stop the simulation: the viewports keep rendering and the VRAM never gets freed.
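Is manually destroying the extra viewport windows supposed to release that memory? Something along these lines is what I’ve been attempting (a sketch assuming the `omni.kit.viewport.utility` API; the title check for sparing the default viewport is a guess on my part):

```python
import omni.kit.viewport.utility as vp_util

# Attempt: tear down every per-camera viewport window after stopping the sim,
# keeping only the default "Viewport" window.
for window in vp_util.get_viewport_window_instances():
    if window.title != "Viewport":  # guess: spare the default viewport
        window.destroy()
```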

Hi @christianbarcelo - Would it be possible to share the USD file and the log files for all the scenarios you described earlier?

Any news on this?

The OOM bug when adding cameras to the scene is still here, even with the latest version, 2023.1.1 :-(

Hi @mike.niemaz - Can you share more details and repro steps/log file?

Here you go:

Hi @mike.niemaz - As I mentioned in that thread, the devs are looking into it. I would request that you not post the same question in older threads :). You are welcome to look through them, but please don’t post in them if you have already posted your question.

Thank you for your support on this.
