We have a GPU server currently equipped with an NVIDIA A100. We want to run Isaac Sim on this server, but I understand that A100 GPUs are not officially supported for Isaac Sim due to the lack of RT cores and NVENC, which are required for rendering and streaming.
We’re planning to add an RTX GPU (e.g., RTX 4080/4090/A6000) to the same server. The intended setup:
RTX GPU: Isaac Sim
A100: other AI/ML workloads
I would like to confirm:
Can Isaac Sim run on the RTX GPU in the same server alongside an A100 without conflicts? In other words, will Isaac Sim properly detect and use the RTX GPU while leaving the A100 free for other workloads?
Is it possible to use the RTX GPU for Isaac Sim rendering while running AI workloads (e.g., RL training using Isaac Lab) on the A100 simultaneously?
Are there any recommended configurations or known issues when using this dual-GPU setup (drivers, CUDA, Docker, environment variables)?
If the answers to the second and third questions are uncertain or unknown, it would still be very helpful to know the answer to the first question. Thank you very much for your guidance!
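To make the third question more concrete, this is roughly the kind of GPU pinning we had in mind. It is only a sketch: it assumes the RTX card enumerates as CUDA device 0 and the A100 as device 1 (which we would verify with nvidia-smi -L), and train.py / the isaac-sim.sh launcher path stand in for our actual entry points. We are also not sure whether CUDA_VISIBLE_DEVICES alone is enough to steer Isaac Sim’s renderer, which is part of what we are asking.

```python
# Rough sketch of how we imagined keeping each workload on its own card.
# Assumptions: RTX GPU = CUDA device 0, A100 = CUDA device 1 (check with
# `nvidia-smi -L`); "train.py" and "./isaac-sim.sh" are placeholders.
import os
import subprocess


def launch_training():
    # The training job should only ever see the A100.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
    return subprocess.Popen(["python", "train.py"], env=env)


def launch_isaac_sim():
    # Isaac Sim should only ever see the RTX GPU. (In Docker, the rough
    # equivalent would be `--gpus device=0` via the NVIDIA Container Toolkit.)
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
    return subprocess.Popen(["./isaac-sim.sh"], env=env)


if __name__ == "__main__":
    sim = launch_isaac_sim()
    training = launch_training()
    training.wait()
    sim.wait()
```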
I cannot officially say whether you can mix an A100 with an RTX GPU card. Personally, I would STRONGLY recommend against it. They are very, very different cards and require totally different drivers. I would have one server for Compute and one for Graphics RTX. Plus, if you make it the SAME machine, then you cannot run Compute and Graphics RTX workloads at the same time. Both are extremely demanding. You would not be able to successfully split the server into a dual-purpose system in real time. You are saving on a new server but totally killing your parallel workflow capability.
It is like asking whether there is any issue with buying a printer/fax/scanner combo machine. Technically, no, but if your office does a LOT of printing, faxing, and scanning, putting it all in one box may not be the best idea for a multi-person, multi-use workflow.
Thank you very much for the detailed explanation — it was very helpful!
Just to confirm my understanding: you are saying that mixing an A100 and an RTX GPU in the same server is technically possible, but not recommended mainly because we cannot reliably run heavy compute workloads (on A100) and graphics/rendering workloads (on RTX) at the same time.
If that’s correct, would it still be feasible to run Isaac Sim on the RTX GPU as long as we are not running AI training or other compute-intensive workloads on the A100 at the same time? Our main reason for considering this approach is cost: buying a completely separate server just for Isaac Sim would be quite expensive. Our plan would be to use the server in a “one-function-at-a-time” way:
When running Isaac Sim, we would not schedule AI training jobs on the A100.
When running AI training, we would not use Isaac Sim.
In other words, our use case would be like the “printer/fax/scanner combo machine”, but we would only use one function at a time.
Would such a sequential workflow (rather than parallel) be acceptable from a driver and software support perspective, or would you still strongly recommend against combining A100 and RTX GPUs in the same machine?
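For what it’s worth, we could even enforce the sequential rule ourselves with a small guard before launching Isaac Sim, something like the sketch below. The GPU UUID is just a placeholder we would copy from nvidia-smi -L.

```python
# Hypothetical guard (sketch only) to enforce "one function at a time":
# refuse to start Isaac Sim if any compute process is already running on
# the A100. Uses only standard nvidia-smi queries.
import subprocess
import sys

A100_UUID = "GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder from `nvidia-smi -L`

apps = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,gpu_uuid", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

busy = [line for line in apps.splitlines() if A100_UUID in line]
if busy:
    sys.exit(f"A100 has {len(busy)} compute process(es) running; not launching Isaac Sim.")
print("A100 is idle; launching Isaac Sim on the RTX GPU.")
```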
I understand the cost aspect, but honestly I would prefer you have two separate machines. They do not have to be big, expensive servers; a normal workstation is fine. It’s all about the GPU.
You can always try it and see, but I do not think the drivers would work very well between an A100 and an RTX. Worst case, you buy another workstation if needed.