I ran into a critical installation problem with CUDA 10.0, caused by the recommended repo containing an old, non-working NVIDIA driver: cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb. The CUDA Toolkit 10.0 installation instructions say to install this repo (https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=deblocal).
However, the NVIDIA 410 driver (and earlier versions) does not currently work on Ubuntu 16.04 (or at least on a subset of such systems), and it cannot be installed via its “.run” file either (https://www.nvidia.com/en-us/drivers/unix/linux-amd64-display-archive).
Note that CUDA 10.1 ships with a working NVIDIA driver (418); however, TensorFlow does not currently support CUDA 10.1. For reference, the only other NVIDIA driver I have found to work is version 435.
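Before changing anything, it can help to check which driver (if any) is currently active. Assuming a driver is loaded at all, either of the following should report its version:
cat /proc/driver/nvidia/version
nvidia-smi --query-gpu=driver_version --format=csv,noheader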
I solved this issue by installing the CUDA 10.0 repo that does not bundle an NVIDIA driver (cuda-repo-ubuntu1604_10.0.130-1_amd64.deb), together with a working NVIDIA driver:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
(Optional: the machine-learning repo below provides the cuDNN and TensorRT packages needed by TensorFlow)
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
sudo apt install ./nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb
sudo apt-get update
sudo apt-get install --no-install-recommends cuda-10-0
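The commands above only set up the toolkit; the working driver (418 or 435) still has to be installed separately. One possible route is the graphics-drivers PPA. This is a sketch, and the exact package name is an assumption (it may be nvidia-418 or nvidia-driver-418 depending on what the PPA currently offers):
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-418    # or nvidia-driver-418, depending on the PPA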
→ restart computer
Verify that the NVIDIA driver is operational:
nvidia-smi
nvidia-settings
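To also confirm that the CUDA 10.0 toolkit itself is installed, nvcc reports its version. It may not be on the PATH by default, since the toolkit normally lands in /usr/local/cuda-10.0:
export PATH=/usr/local/cuda-10.0/bin:$PATH
nvcc --version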