I am training deep learning models on a computer with two NVIDIA RTX 2080 Ti GPUs, and I am facing the following problem.
When I start only one process on either GPU and the other GPU remains idle, the process runs at full speed.
However, when I start a process on GPU:0 and then a second, independent process on GPU:1, the power draw on GPU:1 drops and its process slows down to a crawl.
Has anyone experienced something similar? Could it be a driver configuration problem, or a hardware problem (motherboard, PCIe bandwidth, power delivery)? I am running Ubuntu 20.04 with NVIDIA driver version 450.66 and CUDA 10.1.
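In case it helps with diagnosis, below is a small sketch of how one could log per-GPU power draw, SM clocks, temperature, and performance state while both processes run, using `nvidia-smi`'s CSV query interface (the field names assume a reasonably recent driver such as 450.66; the helper names are mine, not from any library):

```python
# Sketch: sample per-GPU stats via nvidia-smi's CSV query interface.
# Running this while both training processes are active should show
# whether GPU:1 is dropping clocks / power when GPU:0 is loaded.
import subprocess

QUERY_FIELDS = "index,power.draw,clocks.sm,temperature.gpu,pstate"


def parse_gpu_stats(csv_line):
    """Turn one CSV line from nvidia-smi into a dict of field -> value."""
    values = [v.strip() for v in csv_line.split(",")]
    return dict(zip(QUERY_FIELDS.split(","), values))


def sample_gpus():
    """Run nvidia-smi once and return a list of per-GPU stat dicts."""
    out = subprocess.check_output(
        ["nvidia-smi",
         f"--query-gpu={QUERY_FIELDS}",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [parse_gpu_stats(line) for line in out.strip().splitlines()]


if __name__ == "__main__":
    for gpu in sample_gpus():
        print(gpu)
```

Running `nvidia-smi -q -d PERFORMANCE` alongside this should also list the "Clocks Throttle Reasons" (e.g. power cap, thermal slowdown), which would distinguish a power-delivery issue from a driver one.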
The computer specs are:
- INTEL CORE i9 10900X
- MOTHERBOARD AORUS X299 UD4 PRO
- 4 x 16 GB DDR4 3600 MHz HyperX FURY RGB BLACK
- LIQUID COOLING SYSTEM H100i PRO CORSAIR
- 2 x GIGABYTE NVIDIA RTX 2080 Ti XTREME 11 GB
- POWER SUPPLY 1200W
Thank you for your help.