We are planning to get an on-premise server with an AMD EPYC processor that can hold 4 GPUs, which will be used for training with TAO. We plan to give our customers access to train their models, and we have made provision for queuing training jobs.
We are unsure whether to go for RTX 3080 cards or the A4000.
The A100 would be ideal, but it is too costly for us at the moment.
Could you also point out which parameters of these GPU series we should look at to decide which is best for training (such as CUDA cores, Tensor Cores, TFLOPS), so we can evaluate performance vs. price?
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
We suggest you search for information on "3080 vs A4000".
RTX A4000 Graphics Card | NVIDIA → https://www.nvidia.com/content/dam/en-zz/Solutions/gtcs21/rtx-a4000/nvidia-rtx-a4000-datasheet.pdf
NVIDIA GeForce RTX 3080 Family
Besides CUDA cores, Tensor Cores, and TFLOPS, you should also check GPU memory: the model and batch size you can train are limited by VRAM, so a card with more memory can matter more than raw throughput.
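As a rough way to weigh these parameters against price, a minimal sketch like the following can help. The spec figures come from the public datasheets; the prices and the VRAM floor are hypothetical placeholders you should replace with your own quotes and model requirements:

```python
# Illustrative GPU comparison for training workloads.
# Spec numbers are approximate published figures; prices are
# HYPOTHETICAL placeholders -- substitute your actual quotes.

gpus = {
    "RTX 3080": {
        "cuda_cores": 8704,
        "tensor_cores": 272,
        "fp32_tflops": 29.8,
        "vram_gb": 10,
        "price_usd": 700,   # hypothetical price
    },
    "RTX A4000": {
        "cuda_cores": 6144,
        "tensor_cores": 192,
        "fp32_tflops": 19.2,
        "vram_gb": 16,
        "price_usd": 1000,  # hypothetical price
    },
}

def score(spec, min_vram_gb=12):
    """Simple TFLOPS-per-dollar score; a GPU below the VRAM
    floor needed by your largest model scores 0."""
    if spec["vram_gb"] < min_vram_gb:
        return 0.0
    return spec["fp32_tflops"] / spec["price_usd"]

for name, spec in gpus.items():
    print(f"{name}: {score(spec):.4f} TFLOPS per dollar")
```

With a 12 GB VRAM floor the 3080 is ruled out despite its higher raw throughput, which illustrates why memory capacity belongs in the comparison alongside compute specs.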