Display message when running apt install or remove command

Hi there,

I recently installed the latest version of the NVIDIA drivers and CUDA Toolkit. Everything seems to be working just fine; however, every time I run sudo apt install or sudo apt remove, I also get the following message:

The following packages were automatically installed and are no longer required:
  cpu-checker cuda-cccl-12-1 cuda-command-line-tools-12-1 cuda-compiler-12-1 cuda-cudart-12-1 cuda-cudart-dev-12-1 cuda-cuobjdump-12-1 cuda-cupti-12-1 cuda-cupti-dev-12-1 cuda-cuxxfilt-12-1 cuda-documentation-12-1 cuda-driver-dev-12-1
  cuda-gdb-12-1 cuda-libraries-12-1 cuda-libraries-dev-12-1 cuda-nsight-12-1 cuda-nsight-compute-12-1 cuda-nsight-systems-12-1 cuda-nvcc-12-1 cuda-nvdisasm-12-1 cuda-nvml-dev-12-1 cuda-nvprof-12-1 cuda-nvprune-12-1 cuda-nvrtc-12-1
  cuda-nvrtc-dev-12-1 cuda-nvtx-12-1 cuda-nvvp-12-1 cuda-opencl-12-1 cuda-opencl-dev-12-1 cuda-profiler-api-12-1 cuda-sanitizer-12-1 cuda-toolkit-12-1 cuda-tools-12-1 cuda-visual-tools-12-1 ipxe-qemu ipxe-qemu-256k-compat-efi-roms libaio1
  libatomic1:i386 libbsd0:i386 libcacard0 libcublas-12-1 libcublas-dev-12-1 libcufft-12-1 libcufft-dev-12-1 libcurand-12-1 libcurand-dev-12-1 libcusolver-12-1 libcusolver-dev-12-1 libcusparse-12-1 libcusparse-dev-12-1 libdaxctl1
  libdecor-0-0 libdecor-0-plugin-1-cairo libdrm-amdgpu1:i386 libdrm-nouveau2:i386 libdrm-radeon1:i386 libdrm2:i386 libedit2:i386 libegl-mesa0:i386 libegl1:i386 libelf1:i386 libexpat1:i386 libfdt1 libffi8:i386 libgbm1:i386 libgfapi0
  libgfrpc0 libgfxdr0 libgl1:i386 libgl1-mesa-dri:i386 libglapi-mesa:i386 libgles2:i386 libglusterfs0 libglvnd0:i386 libglx-mesa0:i386 libglx0:i386 libicu70:i386 libiscsi7 libllvm15:i386 libmd0:i386 libndctl6 libnpp-12-1 libnpp-dev-12-1
  libnvidia-cfg1-530 libnvidia-common-530 libnvidia-compute-530:i386 libnvidia-decode-530 libnvidia-decode-530:i386 libnvidia-encode-530 libnvidia-encode-530:i386 libnvidia-extra-530 libnvidia-fbc1-530 libnvidia-fbc1-530:i386
  libnvidia-gl-530 libnvidia-gl-530:i386 libnvjitlink-12-1 libnvjitlink-dev-12-1 libnvjpeg-12-1 libnvjpeg-dev-12-1 libnvvm-samples-12-1 libopengl0:i386 libpmem1 libpmemobj1 libqrencode4 librados2 librbd1 libsdl2-2.0-0 libsensors5:i386
  libspice-server1 libstdc++6:i386 libtinfo5 liburing2 libusbredirparser1 libvirglrenderer1 libwayland-client0:i386 libwayland-server0:i386 libx11-6:i386 libx11-xcb1:i386 libxau6:i386 libxcb-dri2-0:i386 libxcb-dri3-0:i386 libxcb-glx0:i386
  libxcb-present0:i386 libxcb-shm0:i386 libxcb-sync1:i386 libxcb-xfixes0:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxfixes3:i386 libxml2:i386 libxshmfence1:i386 libxxf86vm1:i386 msr-tools nsight-compute-2023.1.0
  nsight-systems-2023.1.2 nvidia-compute-utils-530 nvidia-driver-530 nvidia-modprobe nvidia-prime nvidia-utils-530 ovmf pass qemu-block-extra qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 qemu-utils qrencode seabios
  tree uidmap xclip xserver-xorg-video-nvidia-530
Use 'sudo apt autoremove' to remove them.

Now, I’m a bit hesitant to run an autoremove when I know for a fact that I’m using my NVIDIA GPU and its associated drivers on this version of Ubuntu. Why does this message appear if I’m actively using my Quadro RTX 5000, and is it safe to simply run sudo apt autoremove without risking subsequent issues, e.g. falling back to the default graphics card or possibly breaking the system?
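For reference, a cautious option before touching autoremove (just a sketch; the package names below are taken from the autoremove list above) is to mark the key driver and toolkit packages as manually installed, so apt stops treating them as auto-removable, and then preview what autoremove would actually do:

```shell
# Mark the key driver/toolkit packages as manually installed so
# 'apt autoremove' no longer considers them auto-removable.
sudo apt-mark manual nvidia-driver-530 cuda-toolkit-12-1

# Preview what autoremove would do without changing anything:
sudo apt autoremove --dry-run
```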

Thanks in advance!

It looks like you installed CUDA using the “cuda” metapackage and then removed it, so now autoremove wants to remove all of the pulled-in dependencies, including the driver.
How did you initially install the driver and cuda toolkit?

Hi @generix thanks a lot for answering!

Well, initially I had a Quadro RTX 4000 and installed the drivers and CUDA from here, since the current installation method was not available back then. I’m talking early 2022, when CUDA was at version 11.x.

I recently upgraded my GPU to a Quadro RTX 5000, so before installing the new hardware I removed all of the drivers and the CUDA toolkit as indicated here.
Once I had upgraded the GPU, I reinstalled them following the steps highlighted here. This procedure is much easier and more straightforward than the one I used before; however, it is different and, as you said, pulls everything from a repo, I guess.

Still, this is weird, as I purged everything before physically installing the new GPU and then followed this new procedure for installing the drivers and CUDA toolkit. Because this message appeared soon after I installed the GPU, I thought I had done something wrong, so I went ahead, removed everything again, and started over with a clean install.

These are the exact steps I have followed so far. One thing to mention: I still have this repo among the others on my Ubuntu 22.04.2 “Jammy Jellyfish” system.

Let me know what you advise. Is it safe to run autoremove, or will it cause the system to fall back to the default graphics (or worse, crash)? Thanks!

I’m not really sure what to do or where this originates. Did you check whether the cuda repo is added twice? What happens when you just re-run
sudo apt install cuda

How can I check whether the cuda repo is added twice? This is the output when I run sudo apt install cuda:

Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
cuda-drivers-530 : Depends: nvidia-settings (>= 530.30.02) but 510.47.03-0ubuntu1 is to be installed
E: Unable to correct problems, you have held broken packages.
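For reference, one way to check whether the cuda repo is listed twice (a sketch, assuming the standard one-line apt source locations on Ubuntu 22.04) is to print every configured `deb` line and look for exact duplicates:

```shell
# Print every configured apt source line, then show any line that
# appears more than once (a duplicated repo entry would show up here):
grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ | sort | uniq -d
```

An empty result means no source line is repeated verbatim; note that the same repo added under two differently worded lines would not be caught this way.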

So it looks like CUDA was only partially installed. Please try removing the offending nvidia-settings package.
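A reasonably careful way to do this (a sketch; `-s` asks apt to simulate without changing anything) is to inspect the installed and candidate versions first, dry-run the removal, and only then commit:

```shell
# See which versions of nvidia-settings are installed / available:
apt policy nvidia-settings

# Simulate the removal first (-s / --simulate makes no changes):
sudo apt remove -s nvidia-settings

# If the simulation looks sane, remove it for real and retry cuda:
sudo apt remove nvidia-settings
sudo apt install cuda
```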


Very interesting. Is there a safe way to do so? I believe the offending nvidia-settings package in this case is version 510.47.03-0ubuntu1.

To be clear, I had also set my NVIDIA X Server Settings to the default Ubuntu version, which happens to be exactly that one (510.47.03-0ubuntu1), simply because I liked the option to set performance mode. See this post for more comprehensive context.

I went ahead and unheld the NVIDIA X Server Settings package, ran sudo apt upgrade so that this tool also matches the drivers and toolkit distribution, and finally re-ran sudo apt-get install cuda. This time I was prompted with the following:

The following packages were automatically installed and are no longer required:
  cpu-checker ipxe-qemu ipxe-qemu-256k-compat-efi-roms libaio1 libcacard0 libdaxctl1 libdecor-0-0 libdecor-0-plugin-1-cairo libfdt1 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 libndctl6 libpmem1 libpmemobj1 libqrencode4 librados2
  librbd1 libsdl2-2.0-0 libspice-server1 liburing2 libusbredirparser1 libvirglrenderer1 msr-tools ovmf pass qemu-block-extra qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 qemu-utils qrencode seabios tree uidmap xclip
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  cuda-12-1 cuda-demo-suite-12-1 cuda-drivers cuda-drivers-530 cuda-runtime-12-1
The following NEW packages will be installed:
  cuda cuda-12-1 cuda-demo-suite-12-1 cuda-drivers cuda-drivers-530 cuda-runtime-12-1
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 3,991 kB of archives.
After this operation, 12.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y

As you can see, I proceeded to install the missing dependencies, but I still get some leftover packages/libraries that I’m not entirely sure are safe to remove. Let me know, thanks again!
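Since the earlier error mentioned “held broken packages”, the unhold-then-upgrade steps described above can be sketched as follows (assuming nvidia-settings was the held package):

```shell
# List packages currently on hold:
apt-mark showhold

# Release the hold so the package can follow the repo version again:
sudo apt-mark unhold nvidia-settings

# Bring everything up to date, then retry the cuda metapackage:
sudo apt upgrade
sudo apt-get install cuda
```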

All of the packages in the autoremove list are related to KVM/QEMU. Do you have any KVM-based virtual machines running?

Hi @generix thanks for the insight,

Unfortunately, I didn’t know what a KVM virtual machine is, so I looked it up online. As far as I’m aware, I’ve never used anything similar; the only thing I’ve done is set up a Container Device Interface (CDI) for Docker, as the tool I’m using relies on an older version of CUDA and doesn’t support v12.1.
But I tend to believe this was the issue.

Anyway, yesterday I decided to give it another try, so I went ahead and switched to my default graphics card, purged all driver and toolkit related packages, and then did a clean install. The problem seems to have disappeared; however, it’s worth mentioning that at first I thought something was wrong with what I did, since after this new installation the system was not picking up my NVIDIA graphics unit.
Then I thought I might need to specifically tell my system, through the NVIDIA X Server Settings, that I want my GPU in “Performance Mode” (aka “always active”, I believe). So, I downgraded the NVIDIA X Server Settings to the default Ubuntu version, where I can set that preference, and afterwards I re-installed the version shipped with the 12.1 drivers and toolkit. Finally, I made sure to install the remaining CUDA dependencies as before.

That said, where do I find the option to toggle “Performance Mode” on/off in the new NVIDIA X Server Settings? As you might imagine, the above is a very lengthy and painful procedure to go through every time… Maybe I’m missing something, but I couldn’t see this option in the latest releases of the GUI. Let me know, thanks.

The switch in nvidia-settings is unique to Ubuntu’s patched version. The equivalent is running
sudo prime-select nvidia|intel|on-demand
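A typical sequence would look like this (a sketch):

```shell
# Show which GPU profile is currently active:
prime-select query

# Keep the NVIDIA GPU active full-time (the "performance mode" equivalent):
sudo prime-select nvidia

# Or let applications choose the GPU per-process:
sudo prime-select on-demand
```

A reboot (or at least logging out and back in) is usually needed for the change to take effect.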

Thank you so much @generix, I think this solves the issue.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.