As I could not find a definitive answer: does Isaac Sim support multiple GPUs when using the Python API in headless mode?
If so, how to enable it?
In NVIDIA’s simulator landscape, I could confirm that Isaac Lab and Isaac Gym have multi-GPU support, and there are some examples. However, I could not find the same for Isaac Sim.
I am running Isaac Sim 4.1.0 in a Docker container and can successfully use different GPUs inside the container, but not more than one at a time.
Yes, Isaac Sim natively supports multiple GPUs. Running inside a container, however, is a different matter entirely; it may be a limitation of the container itself. Try running it outside of a container to find out.
Hi guys @matters @Richard3D, is it possible to run multiple Isaac Sim instances using the Python API, with each instance assigned one GPU on a multi-GPU server?
I have tried the following solutions, but none of them works:
@Richard3D do you have some resources or tutorials for this kind of application? @hanqingwang, I couldn’t test it yet. I hope so, that is what I am trying to do.
You can pass the GPU number (e.g. 0) to active_gpu, physics_gpu, etc. in the config.
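As a minimal sketch of what that config might look like (assuming an Isaac Sim 4.x install where `SimulationApp` accepts `active_gpu` and `physics_gpu` keys in its config dict; check the key names and the import path against your version's documentation):

```python
# Sketch: build a SimulationApp config dict that pins both rendering
# and PhysX to a single GPU index, for a headless server run.
# The "active_gpu"/"physics_gpu" keys are the ones mentioned above;
# treat them as an assumption to verify against your Isaac Sim version.
def make_gpu_config(gpu_id: int) -> dict:
    return {
        "headless": True,       # no GUI, suitable for a server
        "active_gpu": gpu_id,   # GPU index used for rendering
        "physics_gpu": gpu_id,  # GPU index used for PhysX simulation
    }

if __name__ == "__main__":
    config = make_gpu_config(0)
    print(config)
    # Actually launching requires an Isaac Sim install, e.g.:
    # from isaacsim import SimulationApp
    # simulation_app = SimulationApp(config)
```

Running two such scripts, each with a different `gpu_id`, is the pattern being asked about; whether the instances stay cleanly separated is discussed further down the thread.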
Explore multi-GPU rendering and assigning dedicated GPUs to rendering and simulation to further boost performance. For example, when launching the kit app (or Isaac Sim), you can assign the GPU used for PhysX or for rendering.
I would like to have two instances of Isaac Sim running on different GPUs. When I initially tried this, I got an error that Isaac Sim could not be moved to another GPU because it was already running under a different one.
Right, so you set your first Isaac Sim instance to GPU 0 with --/renderer/multiGpu/activeGpus="1,0"
and your second instance to GPU 1 with --/renderer/multiGpu/activeGpus="0,1"
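Put together, a two-instance launch might look like the following sketch. The launcher name `isaac-sim.headless.native.sh` is a stand-in for whatever script you actually use; only the flag values come from the posts above:

```shell
# Instance 1: renderer active on GPU 0 (launcher name is hypothetical)
./isaac-sim.headless.native.sh --/renderer/multiGpu/activeGpus="1,0" &

# Instance 2: renderer active on GPU 1
./isaac-sim.headless.native.sh --/renderer/multiGpu/activeGpus="0,1" &

wait
```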
But let me ask you this… why are you trying to run two Isaac Sims at the same time? Surely you are better off running ONE Isaac Sim at double speed with two cards than two separate Isaac Sims with one card each. You still have to deal with normal CPU and memory usage, so you will never get “clean” separation. Focus on one task, then focus on the next.
Overhead? No. The exact opposite. Multiple GPUs running in one instance scale very linearly, essentially perfectly; this is the way Omniverse and Isaac Sim were designed. If your GPU compute takes 20 minutes with one GPU, it will take 10 minutes with two. The same goes for rendering. So if you want to finish one specific task, throw as much GPU power at that problem as possible. That way the whole machine, with all its CPU power, GPU power, system memory, and disk, is focused on the problem.
As soon as you try to run two concurrent GPU-intensive apps, even with specific GPUs assigned to each, you cannot expect total separation. Yes, both GPUs are being used and are technically isolated, but the rest of the system is not. And again, the point of this comment is that it offers no greater advantage. Not even a little.
But this is based on solving one problem at a time. If you feel you MUST split a machine to solve two problems at once, you could try this approach. But in my opinion, you take one problem at a time and solve it, and if you need to scale, get a second machine, not a second instance on the same machine.