Request for correct installation steps for multiple CUDA versions on the same machine.
I am using TensorFlow (TF) 1.14 and TensorFlow 2.3, but these versions require different CUDA versions, per https://www.tensorflow.org/install/source#linux.
After installing CUDA 10.0 and CUDA 11, my Ubuntu 18.04 system has four CUDA folders in /usr/local, namely: cuda, cuda-10.0, cuda-11, and cuda-11.2. I then configured CUDA in the .profile file using the lines below.
# set PATH for cuda 10.0 and cuda 11 installation
export PATH=$PATH:/usr/local/cuda-10.0/bin:/usr/local/cuda-11.2/bin
export CUDADIR=/usr/local/cuda-10.0:/usr/local/cuda-11.2
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-10.0/lib64:/usr/local/cuda-11.2/lib64
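An alternative I am considering: instead of putting both toolkits on the global PATH, each virtualenv could prepend only its own CUDA version (a sketch, assuming the install paths above and the standard virtualenv bin/activate file):

```shell
# Appended to the TF 1.14 virtualenv's bin/activate (sketch; assumes the
# /usr/local/cuda-10.0 install above)
export PATH=/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH:-}

# The TF 2.3 virtualenv's bin/activate would point at cuda-11.2 instead:
# export PATH=/usr/local/cuda-11.2/bin:$PATH
# export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:${LD_LIBRARY_PATH:-}
```

That way nvcc and the runtime libraries would resolve to one toolkit per environment, rather than whichever happens to come first globally.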
virtualenv is used to create an environment for each TF version. Both TF versions are installed, and I am able to verify the versions.
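For reference, a sketch of how the two environments can be set up (the directory names are illustrative; tensorflow-gpu is the GPU package name for 1.x, while 2.x ships GPU support in the main tensorflow package):

```shell
# One virtualenv per TF version (paths illustrative)
virtualenv -p python3 ~/venvs/tf114
~/venvs/tf114/bin/pip install "tensorflow-gpu==1.14"

virtualenv -p python3 ~/venvs/tf23
~/venvs/tf23/bin/pip install "tensorflow==2.3"
```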
While testing TF's GPU access using the command below (note: tf.__version__, not tf.version, holds the version string):

python -c 'from tensorflow.python.client import device_lib; print(device_lib.list_local_devices()); import tensorflow as tf; print(tf.__version__)'
TF 1.14 detects the GPU correctly, as shown below:
physical_device_desc: "device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1"
But TF 2.3 does not give the intended result; its output is shown below:
physical_device_desc: "device: XLA_GPU device"
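In case it helps diagnose, a small shell sketch to list which CUDA runtime libraries each LD_LIBRARY_PATH entry actually exposes (list_cudart is my own helper name, not a standard command):

```shell
# list_cudart DIRS - print any libcudart.so.* found in each colon-separated dir
list_cudart() {
  echo "$1" | tr ':' '\n' | while read -r d; do
    # List CUDA runtime libraries in this directory, if it exists
    { [ -d "$d" ] && ls "$d"/libcudart.so.* 2>/dev/null; } || true
  done
}

# Inspect the current environment
list_cudart "${LD_LIBRARY_PATH:-}"
```

My understanding is that TF 2.3 was built against CUDA 10.1, so if no matching libcudart version appears here, that could explain why only an XLA_GPU device is listed.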
I checked nvcc -V; it shows CUDA 10.0.
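My understanding is that nvcc -V simply reports whichever toolkit comes first on PATH, and the unversioned /usr/local/cuda folder is normally a symlink; a quick check (the ln line shows how the symlink could be repointed, left commented out since it changes the system-wide default):

```shell
# Where does the unversioned symlink point?
readlink -f /usr/local/cuda

# Which nvcc wins on PATH?
which nvcc && nvcc -V || echo "nvcc not on PATH"

# To make cuda-11.2 the default (assumed path; affects the whole system):
# sudo ln -sfn /usr/local/cuda-11.2 /usr/local/cuda
```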
Please provide the correct steps for installing multiple CUDA versions side by side for TF 1.14 and TF 2.3.