Hi there, I recently updated my NVIDIA driver and CUDA Toolkit to the latest versions; see the nvidia-smi output below.
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.06              Driver Version: 545.23.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Quadro RTX 5000                On  | 00000000:01:00.0 Off |                  N/A |
| N/A   68C    P8              20W / 110W |   2249MiB / 16384MiB |      4%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A     29480      G   /usr/lib/xorg/Xorg                         1012MiB |
|    0   N/A  N/A     29947      G   /usr/bin/gnome-shell                        340MiB |
|    0   N/A  N/A     30573      G   /usr/bin/gjs-console                         21MiB |
|    0   N/A  N/A     30818      G   ...,WinRetrieveSuggestionsOnlyOnDemand       36MiB |
|    0   N/A  N/A     30862      G   ...vice,SpareRendererForSitePerProcess      263MiB |
|    0   N/A  N/A     31169      G   ...ures=SpareRendererForSitePerProcess      296MiB |
|    0   N/A  N/A    318550    C+G   ...74756456,5044128994603659582,262144      170MiB |
+---------------------------------------------------------------------------------------+
I then moved on and configured Docker so that I can use it with the updated driver and toolkit (roughly the standard steps, sketched after the apt output below). Everything went smoothly; however, after installing nvidia-container-toolkit, apt prompted me with the following:
The following packages were automatically installed and are no longer required:
cuda-cccl-12-2 cuda-command-line-tools-12-2 cuda-compiler-12-2 cuda-crt-12-2 cuda-cudart-12-2 cuda-cudart-12-3 cuda-cudart-dev-12-2 cuda-cuobjdump-12-2 cuda-cupti-12-2 cuda-cupti-dev-12-2 cuda-cuxxfilt-12-2 cuda-documentation-12-2
cuda-driver-dev-12-2 cuda-gdb-12-2 cuda-libraries-12-2 cuda-libraries-12-3 cuda-libraries-dev-12-2 cuda-nsight-12-2 cuda-nsight-compute-12-2 cuda-nsight-systems-12-2 cuda-nvcc-12-2 cuda-nvdisasm-12-2 cuda-nvml-dev-12-2 cuda-nvprof-12-2
cuda-nvprune-12-2 cuda-nvrtc-12-2 cuda-nvrtc-12-3 cuda-nvrtc-dev-12-2 cuda-nvtx-12-2 cuda-nvvm-12-2 cuda-nvvp-12-2 cuda-opencl-12-2 cuda-opencl-12-3 cuda-opencl-dev-12-2 cuda-profiler-api-12-2 cuda-sanitizer-12-2 cuda-toolkit-12-2
cuda-toolkit-12-2-config-common cuda-toolkit-12-3-config-common cuda-toolkit-12-config-common cuda-toolkit-config-common cuda-tools-12-2 cuda-visual-tools-12-2 gds-tools-12-2 gds-tools-12-3 libcublas-12-2 libcublas-12-3
libcublas-dev-12-2 libcufft-12-2 libcufft-12-3 libcufft-dev-12-2 libcufile-12-2 libcufile-12-3 libcufile-dev-12-2 libcufile-dev-12-3 libcurand-12-2 libcurand-12-3 libcurand-dev-12-2 libcusolver-12-2 libcusolver-12-3 libcusolver-dev-12-2
libcusparse-12-2 libcusparse-12-3 libcusparse-dev-12-2 libnpp-12-2 libnpp-12-3 libnpp-dev-12-2 libnvjitlink-12-2 libnvjitlink-12-3 libnvjitlink-dev-12-2 libnvjpeg-12-2 libnvjpeg-12-3 libnvjpeg-dev-12-2 libtinfo5 nsight-compute-2023.2.2
nsight-systems-2023.2.3 nvidia-firmware-535-535.113.01 nvidia-modprobe
along with the usual hint that they can be removed with apt autoremove.
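For context, the Docker configuration itself was roughly the standard sequence from NVIDIA's container-toolkit docs (repository setup omitted; the CUDA image tag below is just an example, adjust it to your setup):

  # installing the toolkit is the step that produced the message above
  sudo apt-get install -y nvidia-container-toolkit
  # register the NVIDIA runtime with Docker and restart the daemon
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker
  # sanity check: run nvidia-smi inside a CUDA 12.3 base image
  sudo docker run --rm --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi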
Now, it seems that CUDA 12.2 is for some reason still around on my system. I refrained from running autoremove because there are also CUDA 12.3 packages in that list, and I don't want to mess up or break anything by removing them.
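For what it's worth, my tentative plan (not run yet) was to only inspect and dry-run things first, along these lines:

  # simulate autoremove without changing anything (-s = --simulate)
  sudo apt-get -s autoremove
  # list exactly which 12.2 packages are still installed
  dpkg -l | grep -- '-12-2'
  # check whether anything installed still depends on a given 12.2 package
  apt-cache rdepends --installed cuda-toolkit-12-2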
If those CUDA 12.3 packages are currently in use, is there a reason the system is flagging them as no longer required? And if the CUDA 12.2 packages really are no longer required, is there a safe way to purge them without having to reinstall everything from scratch? Let me know, thanks in advance!