Does using two A10s instead of one A30 have a disadvantage in terms of GPU memory for AI/ML workloads when the model is larger than 24 GB? As far as I can tell, two A10s should outperform a single A30 in raw compute, and according to Release Notes - NVIDIA Docs, unified memory on the A10 provides a comprehensive memory address range. To me that sounds as if the two A10s appear as one combined, stronger GPU from the vServer's perspective, and the pooled GPU memory can be assigned to a vServer as needed. Is that understanding correct?
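For reference, this is roughly how I would check what the vServer actually exposes and how a model larger than one card's memory would have to be placed if the two cards turn out to be separate devices. This is only a minimal PyTorch sketch under my assumptions: two visible CUDA devices, and a hypothetical two-stage layer split; it is not meant as the definitive way to run such a model.

```python
import torch
import torch.nn as nn

# Each physical GPU is normally exposed as its own CUDA device with its own
# memory pool; two A10s would show up as cuda:0 and cuda:1 with ~24 GB each,
# rather than as a single 48 GB device.
assert torch.cuda.device_count() >= 2, "example assumes two visible GPUs"
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")

# If the memory is not pooled, a model >24 GB has to be split explicitly
# across devices (model/pipeline parallelism). Hypothetical two-stage split:
class TwoStageModel(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the layers on GPU 0, second half on GPU 1.
        self.stage0 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
        self.stage1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        # Activations are copied from GPU 0 to GPU 1 here.
        x = self.stage1(x.to("cuda:1"))
        return x

model = TwoStageModel()
out = model(torch.randn(8, 4096))
print(out.shape, out.device)
```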