GPU utilization problem with two GPUs on CUDA 10.2

Dear Support,

I have two problems.

  1. I have installed two RTX 2080 Ti cards (connected via PCIe, not bridged).
    OS: CentOS 7
    TensorFlow: 1.14 and 1.15 (tried both, same output)
    Earlier, when I tried to load both GPUs, both showed full memory usage and volatile GPU utilization of up to 75%, and both appeared under the same PID. However, my observation is that they were doing the same work one after the other, not in parallel, because the task takes the same time on a single GPU as on both for the same computation.

  2. To resolve this, I reinstalled CUDA and the NVIDIA driver. The current situation is shown below:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce RTX 208...  Off | 00000000:06:00.0 Off |                  N/A |
    | 31%   35C    P8    24W / 250W |    164MiB / 11016MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GeForce RTX 208...  Off | 00000000:41:00.0 Off |                  N/A |
    | 33%   39C    P8    17W / 250W |    164MiB / 11019MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      4864      C   python3                                    153MiB   |
    |    1      4864      C   python3                                    153MiB   |
    +-----------------------------------------------------------------------------+

Could you give me a hint as to where I am going wrong with both of these problems?
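For what it is worth, my fallback plan is to run one process per GPU, pinning each process to a single card via CUDA_VISIBLE_DEVICES before TensorFlow is imported, so that each GPU gets its own PID in nvidia-smi. Below is a stripped-down sketch of that launcher; the worker here is a dummy stand-in (it only reports its PID and assigned device), and in the real run it would import TensorFlow and build the model:

```python
import multiprocessing as mp
import os


def _worker(gpu_id, queue):
    # Pin this process to one GPU *before* any CUDA library is imported.
    # The framework then sees only this card as device 0.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # Real training code (import tensorflow, build graph, run session)
    # would go here; the dummy just reports what it was assigned.
    queue.put((os.getpid(), os.environ["CUDA_VISIBLE_DEVICES"]))


def launch_one_process_per_gpu(num_gpus):
    """Start one worker process per GPU and collect (pid, device) pairs."""
    queue = mp.Queue()
    procs = [mp.Process(target=_worker, args=(i, queue))
             for i in range(num_gpus)]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid a full-pipe deadlock.
    results = [queue.get() for _ in range(num_gpus)]
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    # Each entry should have a distinct PID and a distinct GPU id,
    # matching what nvidia-smi would then show per card.
    print(launch_one_process_per_gpu(2))
```

With this pattern, the Processes table in nvidia-smi should list a different PID on each GPU, which would confirm whether the single shared PID above is the source of the serialized behavior.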