I used the runfile to install the NVIDIA driver and the CUDA toolkit. After the fresh installation everything worked well. However, once I hit a CUDA out-of-memory error in PyTorch, the driver crashed and even nvidia-smi could no longer be executed, failing with "Failed to initialize NVML: Driver/library version mismatch".
I am not sure whether this is a bug in CUDA or in PyTorch. Has anyone run into the same problem? A sketch of the kind of code that triggers the OOM is below.
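Roughly, a minimal sketch of what reproduces the OOM for me (the tensor size here is just an illustrative placeholder, not my actual workload):

```python
import torch

device = torch.device("cuda:0")

try:
    # Deliberately over-allocate GPU memory (placeholder size, ~32 GB,
    # far beyond the 11 GB of the 2080 Ti) to trigger the CUDA OOM error.
    x = torch.empty(1024, 1024, 1024, 8, device=device)
except RuntimeError as e:
    # PyTorch raises "CUDA out of memory" here; after this point,
    # nvidia-smi fails with the NVML driver/library mismatch error.
    print(e)
```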
My environment (output of `python -m torch.utils.collect_env`):
PyTorch version: 1.6.0
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-10ubuntu2) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.7 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.33.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.1
[pip3] torch==1.6.0
[pip3] torchvision==0.7.0
[conda] numpy 1.19.1 pypi_0 pypi
[conda] torch 1.6.0 pypi_0 pypi
[conda] torchvision 0.7.0 pypi_0 pypi
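For completeness, a quick check from Python of what this torch build reports before the crash (it matches the collect_env output above):

```python
import torch

# What the installed PyTorch build reports (before the driver crashes)
print(torch.__version__)              # 1.6.0
print(torch.version.cuda)             # 10.2
print(torch.cuda.is_available())      # True
print(torch.cuda.get_device_name(0))  # GeForce RTX 2080 Ti
```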