2x A10 vs. 1x A30?

Does using two A10s instead of one A30 have a disadvantage for AI/ML workloads in terms of GPU memory when the model is larger than 24 GB? As far as I can see, two A10s should be more powerful than one A30 in raw compute, and according to the Release Notes - NVIDIA Docs, unified memory on the A10 provides a comprehensive memory address range.

That sounds to me as if two A10s would appear as a single, more powerful combined GPU from a vServer perspective, and the pooled GPU memory could be assigned to a vServer as needed. Am I understanding this correctly?
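For a sense of scale, here is a rough back-of-the-envelope sketch I put together (the model sizes and the fp16 weights-only estimate are my own illustrative assumptions, not measured values). It shows which models would still fit on a single 24 GB card and which would only fit if the framework explicitly shards them across two cards, since two A10s do not automatically merge into one 48 GB GPU:

```python
def model_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate VRAM for the weights alone (fp16 = 2 bytes/param).
    Ignores activations, KV cache, optimizer state, and framework overhead."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

A10_VRAM_GB = 24  # per-card memory of one A10
A30_VRAM_GB = 24  # an A30 also has 24 GB

for params in (7, 13, 30):
    need = model_vram_gb(params)
    fits_one_card = need <= A10_VRAM_GB
    # Two A10s only help if the framework shards the model across both cards
    # (tensor or pipeline parallelism); the memory is not pooled transparently.
    fits_two_sharded = need <= 2 * A10_VRAM_GB
    print(f"{params}B fp16: ~{need:.1f} GB | "
          f"one 24 GB card: {fits_one_card} | sharded over 2x A10: {fits_two_sharded}")
```

By this estimate a ~13B-parameter fp16 model already exceeds 24 GB, so it would need to be split across the two A10s by the ML framework rather than by the hypervisor.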