Another thing I tried:
I found this [1] talking about interrupts, and also found this [2], which mentions the file I have, /etc/modprobe.d/nvidia-installer-disable-nouveau.conf. So I edited that file and rebooted, trying both settings: options nvidia NVreg_EnableMSI=0 and options nvidia NVreg_EnableMSI=1.
And I also tried putting the options line in /etc/modprobe.d/nvidia.conf.
But neither setting appears to have any influence on the irq/110-nvidia CPU usage in top :-( And how do I even know that file is being read and used?
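If I understand modprobe correctly, one way to verify would be that modprobe merges every *.conf file under /etc/modprobe.d, and modprobe -c prints that merged configuration, so the options line should appear in its output if the file is being read. A minimal sketch (the temp dir and file name here are just examples; on the real system the check is the commented modprobe -c line):

```shell
# Sketch: on the live system the check would be
#   modprobe -c | grep NVreg_EnableMSI
# Here we demonstrate the same grep against a sample conf in a temp dir:
tmpdir=$(mktemp -d)
printf 'options nvidia NVreg_EnableMSI=1\n' > "$tmpdir/nvidia.conf"
found=$(grep -h 'NVreg_EnableMSI' "$tmpdir"/*.conf)
echo "$found"
rm -rf "$tmpdir"
```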
I tried this command [3], which shows the parameters of the loaded nvidia* kernel modules, but there is no sign of NVreg_EnableMSI, so how do I know that the parameter in any .conf file was actually picked up?
$ sudo grep -H '' /sys/module/nvidia*/parameters/*
/sys/module/nvidia_drm/parameters/modeset:N
/sys/module/nvidia_modeset/parameters/config_file:(null)
/sys/module/nvidia_modeset/parameters/disable_vrr_memclk_switch:N
/sys/module/nvidia_modeset/parameters/fail_malloc:-1
/sys/module/nvidia_modeset/parameters/malloc_verbose:N
/sys/module/nvidia_modeset/parameters/output_rounding_fix:Y
/sys/module/nvidia_uvm/parameters/uvm_ats_mode:1
/sys/module/nvidia_uvm/parameters/uvm_channel_gpfifo_loc:auto
/sys/module/nvidia_uvm/parameters/uvm_channel_gpput_loc:auto
/sys/module/nvidia_uvm/parameters/uvm_channel_num_gpfifo_entries:1024
/sys/module/nvidia_uvm/parameters/uvm_channel_pushbuffer_loc:auto
/sys/module/nvidia_uvm/parameters/uvm_cpu_chunk_allocation_sizes:2166784
/sys/module/nvidia_uvm/parameters/uvm_debug_enable_push_acquire_info:0
/sys/module/nvidia_uvm/parameters/uvm_debug_enable_push_desc:0
/sys/module/nvidia_uvm/parameters/uvm_debug_prints:0
/sys/module/nvidia_uvm/parameters/uvm_disable_hmm:N
/sys/module/nvidia_uvm/parameters/uvm_downgrade_force_membar_sys:1
/sys/module/nvidia_uvm/parameters/uvm_enable_builtin_tests:0
/sys/module/nvidia_uvm/parameters/uvm_enable_debug_procfs:0
/sys/module/nvidia_uvm/parameters/uvm_enable_va_space_mm:1
/sys/module/nvidia_uvm/parameters/uvm_exp_gpu_cache_peermem:0
/sys/module/nvidia_uvm/parameters/uvm_exp_gpu_cache_sysmem:0
/sys/module/nvidia_uvm/parameters/uvm_fault_force_sysmem:0
/sys/module/nvidia_uvm/parameters/uvm_force_prefetch_fault_support:0
/sys/module/nvidia_uvm/parameters/uvm_global_oversubscription:1
/sys/module/nvidia_uvm/parameters/uvm_leak_checker:0
/sys/module/nvidia_uvm/parameters/uvm_page_table_location:(null)
/sys/module/nvidia_uvm/parameters/uvm_peer_copy:phys
/sys/module/nvidia_uvm/parameters/uvm_perf_access_counter_batch_count:256
/sys/module/nvidia_uvm/parameters/uvm_perf_access_counter_mimc_migration_enable:-1
/sys/module/nvidia_uvm/parameters/uvm_perf_access_counter_momc_migration_enable:-1
/sys/module/nvidia_uvm/parameters/uvm_perf_access_counter_threshold:256
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_batch_count:256
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_coalesce:1
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_max_batches_per_service:20
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_max_throttle_per_service:5
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_replay_policy:2
/sys/module/nvidia_uvm/parameters/uvm_perf_fault_replay_update_put_ratio:50
/sys/module/nvidia_uvm/parameters/uvm_perf_map_remote_on_eviction:1
/sys/module/nvidia_uvm/parameters/uvm_perf_map_remote_on_native_atomics_fault:0
/sys/module/nvidia_uvm/parameters/uvm_perf_migrate_cpu_preunmap_block_order:2
/sys/module/nvidia_uvm/parameters/uvm_perf_migrate_cpu_preunmap_enable:1
/sys/module/nvidia_uvm/parameters/uvm_perf_pma_batch_nonpinned_order:6
/sys/module/nvidia_uvm/parameters/uvm_perf_prefetch_enable:1
/sys/module/nvidia_uvm/parameters/uvm_perf_prefetch_min_faults:1
/sys/module/nvidia_uvm/parameters/uvm_perf_prefetch_threshold:51
/sys/module/nvidia_uvm/parameters/uvm_perf_reenable_prefetch_faults_lapse_msec:1000
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_enable:1
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_epoch:2000
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_lapse_usec:500
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_max_resets:4
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_nap:1
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_pin:300
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_pin_threshold:10
/sys/module/nvidia_uvm/parameters/uvm_perf_thrashing_threshold:3
/sys/module/nvidia_uvm/parameters/uvm_release_asserts:1
/sys/module/nvidia_uvm/parameters/uvm_release_asserts_dump_stack:0
/sys/module/nvidia_uvm/parameters/uvm_release_asserts_set_global_error:0
According to this [4] I can figure out which .conf files end up inside the initramfs:
$ sudo lsinitramfs /boot/initrd.img | grep etc/modprobe.d
etc/modprobe.d
etc/modprobe.d/alsa-base.conf
etc/modprobe.d/amd64-microcode-blacklist.conf
etc/modprobe.d/blacklist-ath_pci.conf
etc/modprobe.d/blacklist-firewire.conf
etc/modprobe.d/blacklist-framebuffer.conf
etc/modprobe.d/blacklist-modem.conf
etc/modprobe.d/blacklist-nouveau.conf
etc/modprobe.d/blacklist-oss.conf
etc/modprobe.d/blacklist-rare-network.conf
etc/modprobe.d/blacklist.conf
etc/modprobe.d/dkms.conf
etc/modprobe.d/intel-microcode-blacklist.conf
etc/modprobe.d/iwlwifi.conf
etc/modprobe.d/nvidia-installer-disable-nouveau.conf
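Since nvidia-installer-disable-nouveau.conf shows up inside the initramfs, I suspect any edit under /etc/modprobe.d also has to be packed into the initramfs before it can take effect at early boot. On a Debian/Ubuntu-style system with initramfs-tools (an assumption about my setup), that would be:

```shell
# Assumption: initramfs-tools (Debian/Ubuntu). Regenerate the initramfs so
# its embedded copy of /etc/modprobe.d picks up the edited options line:
sudo update-initramfs -u
# After rebooting, re-check which conf files the packed image contains:
sudo lsinitramfs /boot/initrd.img | grep etc/modprobe.d
```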
And [4] also shows how to read the driver's parameters, which report EnableMSI: 0, so maybe the default is zero already? Either way, the setting does not seem to affect the interrupts and CPU usage :-(
$ cat /proc/driver/nvidia/params
ResmanDebugLevel: 4294967295
RmLogonRC: 1
ModifyDeviceFiles: 1
DeviceFileUID: 0
DeviceFileGID: 0
DeviceFileMode: 438
InitializeSystemMemoryAllocations: 1
UsePageAttributeTable: 4294967295
EnableMSI: 0
EnablePCIeGen3: 0
MemoryPoolSize: 0
KMallocHeapMaxSize: 0
VMallocHeapMaxSize: 0
IgnoreMMIOCheck: 0
TCEBypassMode: 0
EnableStreamMemOPs: 0
EnableUserNUMAManagement: 1
NvLinkDisable: 0
RmProfilingAdminOnly: 1
PreserveVideoMemoryAllocations: 0
EnableS0ixPowerManagement: 0
S0ixPowerManagementVideoMemoryThreshold: 256
DynamicPowerManagement: 3
DynamicPowerManagementVideoMemoryThreshold: 200
RegisterPCIDriver: 1
EnablePCIERelaxedOrderingMode: 0
EnableResizableBar: 0
EnableGpuFirmware: 18
EnableGpuFirmwareLogs: 2
EnableDbgBreakpoint: 0
OpenRmEnableUnsupportedGpus: 0
DmaRemapPeerMmio: 1
RegistryDwords: ""
RegistryDwordsPerDevice: ""
RmMsg: ""
GpuBlacklist: ""
TemporaryFilePath: ""
ExcludedGpus: ""
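The whole-file dump above can be reduced to the one value I care about, and (if I read /proc/interrupts right) the interrupt type itself should also reveal whether MSI is actually in effect. A sketch, parsing a sample line since the real files only exist while the driver is loaded:

```shell
# Parse EnableMSI out of params-style output (sample text standing in for
# the real /proc/driver/nvidia/params, which needs the driver loaded):
sample='EnableMSI: 0'
msi=$(printf '%s\n' "$sample" | awk -F': ' '/^EnableMSI/ {print $2}')
echo "EnableMSI=$msi"
# On the live system, /proc/interrupts should show whether the nvidia IRQ
# line is MSI-based (look for "PCI-MSI") or legacy ("IO-APIC"):
#   grep nvidia /proc/interrupts
```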
[1] NVIDIA/nvidia-drivers - Gentoo wiki
[2] Ubuntu 14.04 hangs after installing Cuda - #3 by Abhijit-Amagi
[3] kernel - How do I list loaded Linux module parameter values? - Server Fault
[4] https://developer.nvidia.com/nvidia-development-tools-solutions-err_nvgpuctrperm-permission-issue-performance-counters