After the installation, the NVIDIA Persistence Daemon (nvidia-persistenced) service fails to start with the following errors.
Output of sudo tail /var/log/syslog -n 100:
Jul 10 17:06:56 cu-mlearn kernel: [13657.924298] nvidia-nvlink: Nvlink Core is being initialized, major device number 236
Jul 10 17:06:56 cu-mlearn kernel: [13657.925960] NVRM: The NVIDIA probe routine was not called for 4 device(s).
Jul 10 17:06:56 cu-mlearn kernel: [13657.925961] NVRM: This can occur when a driver such as:
Jul 10 17:06:56 cu-mlearn kernel: [13657.925961] NVRM: nouveau, rivafb, nvidiafb or rivatv
Jul 10 17:06:56 cu-mlearn kernel: [13657.925961] NVRM: was loaded and obtained ownership of the NVIDIA device(s).
Jul 10 17:06:56 cu-mlearn kernel: [13657.925962] NVRM: Try unloading the conflicting kernel module (and/or
Jul 10 17:06:56 cu-mlearn kernel: [13657.925962] NVRM: reconfigure your kernel without the conflicting
Jul 10 17:06:56 cu-mlearn kernel: [13657.925962] NVRM: driver(s)), then try loading the NVIDIA kernel module
Jul 10 17:06:56 cu-mlearn kernel: [13657.925962] NVRM: again.
Jul 10 17:06:56 cu-mlearn kernel: [13657.925962] NVRM: No NVIDIA devices probed.
Jul 10 17:06:56 cu-mlearn kernel: [13657.926124] nvidia-nvlink: Unregistered Nvlink Core, major device number 236
Jul 10 17:06:56 cu-mlearn nvidia-persistenced: Failed to query NVIDIA devices. Please ensure that the NVIDIA device files (/dev/nvidia*) exist, and that user 0 has read and write permissions for those files.
Jul 10 17:06:56 cu-mlearn nvidia-persistenced[159810]: nvidia-persistenced failed to initialize. Check syslog for more details.
Jul 10 17:06:56 cu-mlearn nvidia-persistenced: PID file unlocked.
Jul 10 17:06:56 cu-mlearn systemd[1]: nvidia-persistenced.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 17:06:56 cu-mlearn nvidia-persistenced: PID file closed.
Jul 10 17:06:56 cu-mlearn nvidia-persistenced: Shutdown (159811)
Jul 10 17:06:56 cu-mlearn systemd[1]: nvidia-persistenced.service: Failed with result 'exit-code'.
Jul 10 17:06:56 cu-mlearn systemd[1]: Failed to start NVIDIA Persistence Daemon.
Jul 10 17:06:56 cu-mlearn kernel: [13658.068076] nvidia-nvlink: Nvlink Core is being initialized, major device number 236
Jul 10 17:06:56 cu-mlearn kernel: [13658.069830] NVRM: The NVIDIA probe routine was not called for 4 device(s).
Jul 10 17:06:56 cu-mlearn kernel: [13658.069831] NVRM: This can occur when a driver such as:
Jul 10 17:06:56 cu-mlearn kernel: [13658.069831] NVRM: nouveau, rivafb, nvidiafb or rivatv
Jul 10 17:06:56 cu-mlearn kernel: [13658.069831] NVRM: was loaded and obtained ownership of the NVIDIA device(s).
Jul 10 17:06:56 cu-mlearn kernel: [13658.069832] NVRM: Try unloading the conflicting kernel module (and/or
Jul 10 17:06:56 cu-mlearn kernel: [13658.069832] NVRM: reconfigure your kernel without the conflicting
Jul 10 17:06:56 cu-mlearn kernel: [13658.069832] NVRM: driver(s)), then try loading the NVIDIA kernel module
Jul 10 17:06:56 cu-mlearn kernel: [13658.069832] NVRM: again.
Jul 10 17:06:56 cu-mlearn kernel: [13658.069832] NVRM: No NVIDIA devices probed.
Jul 10 17:06:56 cu-mlearn kernel: [13658.069968] nvidia-nvlink: Unregistered Nvlink Core, major device number 236
Jul 10 17:06:56 cu-mlearn systemd-udevd[613]: nvidia: Process '/sbin/modprobe nvidia-modeset' failed with exit code 1.
Jul 10 17:06:56 cu-mlearn systemd[1]: nvidia-persistenced.service: Scheduled restart job, restart counter is at 3.
Jul 10 17:06:56 cu-mlearn systemd[1]: Stopped NVIDIA Persistence Daemon.
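The NVRM lines above indicate that another kernel driver (typically nouveau) has claimed the GPUs, so the nvidia module never binds and the /dev/nvidia* device files are never created, which is exactly what nvidia-persistenced then complains about. A quick way to confirm this (a read-only diagnostic sketch, assuming standard lsmod/ls/dmesg tooling):

# Is one of the conflicting drivers named in the log loaded?
lsmod | grep -E 'nouveau|nvidiafb|rivafb|rivatv'

# Is the nvidia module loaded, and do the device nodes exist?
lsmod | grep -E '^nvidia'
ls -l /dev/nvidia*

# What did the kernel log about the probe?
sudo dmesg | grep -iE 'nvrm|nouveau' | tail -n 50

If the first command reports nouveau while the nvidia module is absent, that matches the "probe routine was not called" message above.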
Output of sudo journalctl -u nvidia-persistenced -n 100:
Jul 10 17:09:22 cu-mlearn nvidia-persistenced[161476]: PID file unlocked.
Jul 10 17:09:22 cu-mlearn nvidia-persistenced[161474]: nvidia-persistenced failed to initialize. Check syslog for more details.
Jul 10 17:09:22 cu-mlearn nvidia-persistenced[161476]: PID file closed.
Jul 10 17:09:22 cu-mlearn systemd[1]: nvidia-persistenced.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 17:09:22 cu-mlearn nvidia-persistenced[161476]: Shutdown (161476)
Jul 10 17:09:22 cu-mlearn systemd[1]: nvidia-persistenced.service: Failed with result 'exit-code'.
Jul 10 17:09:22 cu-mlearn systemd[1]: Failed to start NVIDIA Persistence Daemon.
Jul 10 17:09:23 cu-mlearn systemd[1]: nvidia-persistenced.service: Scheduled restart job, restart counter is at 3.
Jul 10 17:09:23 cu-mlearn systemd[1]: Stopped NVIDIA Persistence Daemon.
Jul 10 17:09:23 cu-mlearn systemd[1]: Starting NVIDIA Persistence Daemon...
Jul 10 17:09:23 cu-mlearn nvidia-persistenced[161488]: Verbose syslog connection opened
Jul 10 17:09:23 cu-mlearn nvidia-persistenced[161488]: Started (161488)
Jul 10 17:09:24 cu-mlearn nvidia-persistenced[161488]: Failed to query NVIDIA devices. Please ensure that the NVIDIA device files (/dev/nvidia*) exist, and that user 0 has read and write permissi
Jul 10 17:09:24 cu-mlearn nvidia-persistenced[161488]: PID file unlocked.
Jul 10 17:09:24 cu-mlearn systemd[1]: nvidia-persistenced.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 17:09:24 cu-mlearn nvidia-persistenced[161487]: nvidia-persistenced failed to initialize. Check syslog for more details.
Jul 10 17:09:24 cu-mlearn nvidia-persistenced[161488]: PID file closed.
Jul 10 17:09:24 cu-mlearn nvidia-persistenced[161488]: Shutdown (161488)
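If nouveau (or another framebuffer driver) does turn out to own the devices, the remediation the kernel log itself points to is unloading/blacklisting the conflicting module and loading the NVIDIA module again. A minimal sketch, assuming an Ubuntu/Debian-style system with update-initramfs; the blacklist file name is only an example:

# Blacklist nouveau so it is not loaded at boot (example file name)
sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF

# Rebuild the initramfs so the blacklist takes effect on the next boot
sudo update-initramfs -u

# Reboot (rmmod nouveau usually fails if a display is still using it), then:
sudo modprobe nvidia
sudo systemctl restart nvidia-persistenced
systemctl status nvidia-persistenced --no-pager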