GPU selection for 6DOF dynamic simulator

Hi all, I want to buy a GPU workstation for an unmanned aircraft 6DOF dynamic simulator. How do I identify a suitable GPU for this purpose? If I select some GPU, say the A16, how can I justify the choice? Please provide some insight into this. Vijeesh

Do you need a lot of double-precision computation compared to single-precision (float)?
Do you need ECC memory?
Will it run 24 hours a day?
How critical is GPU cost vs. reliability?

What resources does the simulator need? Memory? Compute power? Does it use the Tensor Cores?
Which GPU generations does it support?
What PCIe version does your system have?

Thanks for your reply. Below are my responses:
As it is a dynamic physics simulation, I think I'll need double precision.
ECC memory is optional.
Not 24 hours.
Reliability is the priority.
Currently I do not know how to estimate memory. My CPU code now takes around 15 to 20 MB.
Not sure if Tensor Cores will be required.
I'm planning to buy the workstation, so there is no particular PCIe version currently.

Am I assuming correctly that this GPU would be used for both graphical workload and compute workload? I imagine flight simulators need very high-resolution 3D graphics.

I am curious why you mentioned the A16. From what I can tell, this is a passively cooled device designed to go into a server. It comprises four GPUs that can then be made available to virtual workstations. When I think of a workstation GPU, I mostly think of an actively cooled device like the RTX 5000 Ada Generation or RTX 6000 Ada Generation.

Workstation GPUs of the most recent GPU generations all have low double-precision performance; high-throughput double-precision capability is reserved for A100- or H100-based server GPUs, which you would buy as part of a fully configured system from an NVIDIA-approved system integrator. So the first thing you might want to figure out is how much double-precision computation you will actually require (a rough estimate in GFLOPS or TFLOPS).
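To make that concrete, here is a back-of-the-envelope sketch; none of these numbers come from the thread, they are placeholders you would replace with figures from your own simulator (step cost, run length, Monte Carlo sample size, acceptable wall-clock time):

```cpp
#include <cstdio>

int main()
{
    // All four inputs are assumptions -- substitute your own numbers.
    const double flops_per_step = 5.0e4;   // FP64 operations per 6DOF integration step (incl. NGC code)
    const double steps_per_run  = 6.0e5;   // e.g. 600 s of flight simulated at 1 kHz
    const double num_runs       = 1.0e4;   // Monte Carlo sample size
    const double budget_seconds = 3600.0;  // acceptable wall-clock time for the whole campaign

    const double total_flops = flops_per_step * steps_per_run * num_runs;
    const double required    = total_flops / budget_seconds;   // sustained FLOP/s needed
    printf("required sustained FP64 throughput: %.1f GFLOP/s\n", required * 1.0e-9);
    return 0;
}
```

Comparing an estimate like this against published FP64 specifications quickly tells you whether a workstation card (FP64 typically at 1/64 of its FP32 rate) is sufficient, or whether you really need a GA100/GH100-class server part.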

I have never worked with an unmanned aircraft 6DOF dynamic simulator. If there is a user community for the particular software you have in mind (or a substantially similar product), you might want to ask for platform recommendations there.


[Later:] After Googling around a bit, it seems at least some vendors are building such simulators based on what is essentially a high-end consumer-grade system platform, such as an Intel Core i9-14900K{S} CPU + RTX 4090 GPU + NVMe SSD.

I couldn’t speak to the reliability of such a platform, since over the past 20+ years I have personally only used workstations based on Intel Xeon processors with ECC DRAM + {Quadro | RTX workstation} class GPUs. Given how Intel is now lagging in the CPU space, I would probably switch to an AMD EPYC-based platform if I had to specify a workstation right now.

That said, it appears that the Intel Core i9-14900K{S} CPU supports ECC memory, though most motherboards may not support it.

For double precision (if really used by your application, please check), the A30 could be an option. It also uses a GA100 chip like the A100 and offers half the FP64 speed of the A100, but is more affordable and still useful for double precision.
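If you are not sure whether your results actually need FP64, one quick (if crude) check is to run a representative piece of your integration in both precisions and compare the drift. A toy sketch with placeholder dynamics (constant acceleration, explicit Euler), just to illustrate the idea:

```cpp
#include <cstdio>

template <typename T>
T integrate_altitude(long steps, T dt)
{
    T pos = T(0);
    T vel = T(0);
    const T acc = T(9.80665);            // constant acceleration as stand-in dynamics
    for (long i = 0; i < steps; ++i) {   // explicit Euler steps
        vel += acc * dt;
        pos += vel * dt;
    }
    return pos;
}

int main()
{
    const long   steps = 3600000;        // one hour of simulated flight at 1 kHz
    const double dt    = 1.0e-3;

    const double pos_d = integrate_altitude<double>(steps, dt);
    const float  pos_f = integrate_altitude<float>(steps, (float)dt);

    printf("double: %.6e\nfloat : %.6e\nrelative difference: %.3e\n",
           pos_d, (double)pos_f, (pos_d - (double)pos_f) / pos_d);
    return 0;
}
```

Swap in your own dynamics and step count before drawing conclusions; the accumulated error depends heavily on the model and the integrator.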

Also important could be the L2 cache size and the global memory bandwidth.
The L2 cache size is large for datacenter cards like the A30 and A100, and for all cards from the Hopper and Ada generations onwards.
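Once you have a candidate card in hand (or access to one), the CUDA runtime reports both figures directly; a minimal query sketch (compile with nvcc):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // memoryClockRate is reported in kHz, memoryBusWidth in bits;
        // factor 2 for the double data rate of GDDR/HBM. This is the
        // theoretical peak; measured bandwidth will be lower.
        const double peak_bw_GBs = 2.0 * prop.memoryClockRate * 1.0e3
                                 * (prop.memoryBusWidth / 8.0) / 1.0e9;
        printf("device %d (%s): L2 = %.1f MiB, peak memory bandwidth = %.0f GB/s\n",
               dev, prop.name, prop.l2CacheSize / (1024.0 * 1024.0), peak_bw_GBs);
    }
    return 0;
}
```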

If other people successfully use a consumer card with that simulation program, that can definitely be an option. Or, for more reliability, the professional sister cards like the RTX 5000/6000 Ada that njuffa mentioned.

For a really high-powered desktop system with lots of memory, Nvidia has announced Digits with the GB10 Blackwell chip. Not sure about its double-precision performance; the marketing concentrates on FP4 for AI.

An easy way to find the GPUs with high DP throughput is to use the TechPowerUp database. Starting with the Ampere architecture GPUs, look here for the GA100 based SKUs:

Then the next newer generation is Hopper, GH100:

And the most recent one is Blackwell GB100:

These are all passively cooled GPUs intended for server platforms, to be sold through NVIDIA-approved system integrators. Trying to do DIY builds with these GPUs usually results in all kinds of problems and no support from NVIDIA; my advice would be not to do that.

It all depends on the context the system is deployed in. What kind of UAVs is this being used for? How mission-critical is the simulator in light of its usage? If it is for something like a Global Hawk, nickel-and-diming the simulator is likely not appropriate. If it is a swarm of small drones used for entertainment purposes (for example, as a fireworks substitute), a consumer grade platform may well be sufficient.

Thanks all for your valuable responses. I want to use this workstation only for numerical computation; graphics will not be part of it. Basically, Monte Carlo simulation to test some on-board software.

Based on my Google search and budget limitations, I'm planning to procure the A30 because it has double-precision floating-point support.

This simulator is meant to support a Monte Carlo simulation framework for Navigation, Guidance and Control (NGC) algorithm testing of UAVs.
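Just to illustrate how such a campaign typically maps onto a GPU (a generic sketch, not your software: the state layout, dynamics, and launch parameters are all placeholder assumptions), each Monte Carlo run is usually assigned to one thread, which integrates its own dispersed trajectory in FP64:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

struct State { double pos[3], vel[3], att[3], rate[3]; };   // minimal placeholder 6DOF state

__device__ void step(State &s, double dt)
{
    // Placeholder dynamics: constant velocity and body rates.
    // A real simulator would evaluate forces/moments and call the NGC algorithm here.
    for (int i = 0; i < 3; ++i) {
        s.pos[i] += s.vel[i] * dt;
        s.att[i] += s.rate[i] * dt;
    }
}

__global__ void run_monte_carlo(State *states, int n_runs, int n_steps, double dt)
{
    const int run = blockIdx.x * blockDim.x + threadIdx.x;
    if (run >= n_runs) return;
    State s = states[run];             // dispersed initial condition for this run
    for (int k = 0; k < n_steps; ++k)
        step(s, dt);
    states[run] = s;                   // final state, to be scored on the host
}

int main()
{
    const int    n_runs  = 10000;      // Monte Carlo sample size (assumption)
    const int    n_steps = 600000;     // 600 s at 1 kHz (assumption)
    const double dt      = 1.0e-3;

    State *d_states = nullptr;
    cudaMalloc(&d_states, n_runs * sizeof(State));
    cudaMemset(d_states, 0, n_runs * sizeof(State));   // real code would upload dispersed ICs
    run_monte_carlo<<<(n_runs + 255) / 256, 256>>>(d_states, n_runs, n_steps, dt);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_states);
    return 0;
}
```

Sizing the state array this way (here 10,000 runs at 96 bytes each, under 1 MB) also gives you a first feel for the device-memory footprint such a campaign needs, before counting any logged trajectory data.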

The A30 is a datacenter GPU. You may have trouble getting it to work in an “arbitrary” system. These forums are littered with examples of people reporting problems when trying to use datacenter GPUs in systems that were not designed to support them. A GPU like the A30 requires server-managed flow-through cooling (it does not keep itself cool), and it also places particular demands on the PCIe PnP system that an ordinary system BIOS may not be set up for. YMMV.

Here they list officially qualified SuperMicro mainboards for the A30.