nvprof: Warning: The user does not have permission to profile on the target device.

I am using nvprof on my 64-bit Ubuntu machine with a GeForce GT 730 GPU. I get the following error when I use nvprof:

==7508== NVPROF is profiling process 7508, command: /homes/pgharat/cuda-workspace/Matrix Mul/Debug/Matrix Mul
==7508== Warning: The user does not have permission to profile on the target device.

The Visual Profiler fails to work, giving the same error. As far as I know, you do not need sudo privileges to run nvprof. Could someone please tell me why this is happening? How can I resolve this?

Hi pritam01gharat,

For security reasons, recent NVIDIA driver installations have disabled access to GPU performance counters for non-admin users. For instructions on enabling permissions, please refer to:


I tried the solutions provided in this link.

None of them worked. My machine has:
GTX 960m,
Ubuntu 18.04,
Nvidia driver 418.56,
CUDA 10.1

I am currently downgrading everything, hopefully that will work.

Thanks Nvidia for keeping your tools (especially drivers) a terrible and time wasting experience to install, use or even remove on any Linux distribution. Keep up!



There was a correction made to the web site.

The command you should use to allow profiling tools access to the GPU performance counters is:

modprobe nvidia NVreg_RestrictProfilingToAdminUsers=0

This should resolve your issue.
For persistence across reboots, we’d recommend adding this to a /etc/modprobe.d config file as mentioned on

Sorry for the misinformation.

Not sure whether this problem has been solved, but I've run into the same issue, and what is described on the website mentioned above does not work.

Hi y.juntao,

Sorry for the inconvenience. Can you please try the steps below, assuming you are on Linux?

  1. Create .conf file (e.g. profile.conf) in folder /etc/modprobe.d
  2. Open file /etc/modprobe.d/profile.conf in any editor
  3. Add below line in profile.conf
    options nvidia "NVreg_RestrictProfilingToAdminUsers=0"
  4. Close file /etc/modprobe.d/profile.conf
  5. Restart your machine
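The steps above can be sketched as a single shell snippet (run as root; profile.conf is the example filename from step 1):

```shell
# Steps 1-4: write the option into a new config file under /etc/modprobe.d.
echo 'options nvidia "NVreg_RestrictProfilingToAdminUsers=0"' > /etc/modprobe.d/profile.conf

# Step 5: restart the machine so the module is reloaded with the new option.
reboot
```

After the reboot, /proc/driver/nvidia/params should report RmProfilingAdminOnly as 0.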

Also note that:

[1] On some systems, it may be necessary to rebuild the initrd after writing a configuration file to /etc/modprobe.d.
[2] On Ubuntu systems, when installing via the distro-native packages, the kernel module gets renamed from nvidia to nvidia-xxx, and nvidia is aliased to nvidia-xxx (where xxx is the major number of the driver; so a 418.67 driver would use nvidia-418).

None of the suggestions above work. I have not been able to use "nvprof". I am on Ubuntu 16.04.6 LTS with the nvidia-418 driver. Has anyone solved the problem?

Please check if the kernel module “nvidia” exists:
$ modinfo nvidia

If you get a module not found error, try
$ modinfo nvidia-418
(assuming 418 is the major number of the driver you are using)

In this case you will need to use the following for step 3:
options nvidia-418 "NVreg_RestrictProfilingToAdminUsers=0"
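A hedged sketch that picks the module name automatically (assuming 418 is the driver's major number, as in the example above):

```shell
# Use plain "nvidia" if modinfo finds it; otherwise fall back to the
# distro-renamed module (nvidia-418 here; adjust to your driver's major number).
if modinfo nvidia >/dev/null 2>&1; then
  mod=nvidia
else
  mod=nvidia-418
fi
echo "options $mod \"NVreg_RestrictProfilingToAdminUsers=0\"" | sudo tee /etc/modprobe.d/profile.conf
```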

Also confirm that:

  • You performed steps 1 to 5 as root
  • You rebooted the machine as suggested in step 5
  • The value is correctly set to 0:
    $ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly

You should see:
RmProfilingAdminOnly: 0

Hope this works.

Hi, thanks for your suggestions. However, I found a temporary solution:

sudo /usr/local/cuda/bin/nvprof ./jacobi_test

What was the reason for changing this? Seems unnecessary.

This change was made due to "Security Notice: NVIDIA Response to 'Rendered Insecure: GPU Side Channel Attacks are Practical' - November 2018". Refer to https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters and https://nvidia.custhelp.com/app/answers/detail/a_id/4738

Hello, I’m using nvprof on Linux.

Since I updated the driver to 430.50, 'sudo' is needed for nvprof.

According to https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters, besides adding the parameter to the kernel module, running as a user with the CAP_SYS_ADMIN capability set should also be a solution.

Using libcap, I can set the 'CAP_SYS_ADMIN' capability upon login:

$ capsh --print
Current: = cap_sys_admin+i

I also ran setcap on nvprof:

$ getcap /usr/local/cuda/bin/nvprof 
/usr/local/cuda/bin/nvprof = cap_sys_admin+eip

But nvprof still cannot get the proper permission. Is there any suggestion? Thanks

Hello, I’m trying to profile a Python application on a Jetson TX2 (using JetPack 4.3), and I am not able to run nvprof with sudo, as the Python modules my program uses are inside a virtual environment, which root has no visibility of.

When trying to run the program, I obtain the following result:

$ nvprof python tester.py
Successfully opened dynamic library libcudart.so.10.0
[Some initialization messages from my program itself...]
2020-02-03 13:52:08.536904: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
==7580== NVPROF is profiling process 7580, command: python tester.py
==7580== Warning: Insufficient privileges to start the profiling session. Use root privileges
==7580== Profiling application: python tester.py
==7580== Profiling result:
No kernels were profiled.
No API activities were profiled.

I’ve tried several of the solutions provided above, and tried the module “nvgpu” instead of “nvidia”, as the latter does not exist under that name here. However, I always obtain the same result (the one posted above), even after rebooting before trying again. According to the instructions provided in this thread (https://developer.nvidia.com/nvidia-development-tools-solutions-err-nvgpuctrperm-nvprof), it would be useful to set the CAP_SYS_ADMIN capability on my user and on the nvprof file. I’ve done so with:

$ capsh --print
Current: = cap_sys_admin+i
$ getcap /usr/local/cuda/bin/nvprof

Neither of these solutions worked. Is there any other way to try to get it working?

Thanks a lot in advance,

Hi! I also went through all the solutions mentioned, but none of them helped.

  1. I tried to change RmProfilingAdminOnly to 0 with the methods mentioned in https://developer.nvidia.com/nvidia-development-tools-solutions-err-nvgpuctrperm-nvprof. However, I got stuck at step 1: unloading the old modules. The nvidia_modeset module is always in use, even after a reboot. I checked the dependencies with

lsmod | grep nvidia

And it shows

nvidia_modeset 1093632 4

which suggests that the module is used by some processes, but I cannot find them. Because of this, I cannot unload the old nvidia module. I tried to skip this step and followed the remaining steps, but the problem is not solved.

  2. I tried to change RmProfilingAdminOnly to 0 by adding options nvidia "NVreg_RestrictProfilingToAdminUsers=0" to /etc/modprobe.d/profile.conf and rebooted my server, but RmProfilingAdminOnly is still 1 after the reboot.

  3. I also added myself to the CAP_SYS_ADMIN set, but I still cannot use nvprof.

I’m using a DGX Station, the driver version is 418.116.00 with CUDA 10.1. Is there any other way to solve this problem?


I have the exact same problem. My system has:
Ubuntu: 16.04
Driver Version: 418.87.00
CUDA Version: 10.1

I followed all the steps but none of them worked. Is there a solution for this?



I also followed the same steps above, but none of them worked on the Jetson TX2 board. Is there any other way to remove the user restriction?


The NVIDIA Visual Profiler and nvprof don’t support profiling on Tegra devices (like the Jetson TX2) for non-root users. The only workaround is to start the profiling session as the root user.

Solutions mentioned in the link https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters are applicable for Desktop platforms.

Hello @mjain. Is there any solution for a Docker environment?


For container images provided by NVIDIA, I think nvprof can be used directly without requiring any Docker-specific settings, as the CUDA toolkit is included in the image.

For cases where the user maps the CUDA toolkit into Docker, we need to set PATH and LD_LIBRARY_PATH:

export PATH=/path/to/cuda/bin:$PATH
export LD_LIBRARY_PATH=/path/to/cuda/lib:$LD_LIBRARY_PATH

Starting with CUDA 10.2, users might additionally need to add <toolkit_root_dir>/extras/CUPTI/lib64 to LD_LIBRARY_PATH, as nvprof now uses the shared CUPTI library. More details are in the docs at https://docs.nvidia.com/cuda/profiler-users-guide/index.html.
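Putting those together, the environment setup for CUDA 10.2+ inside a container might look like this (/usr/local/cuda is an assumed toolkit root; substitute the path where the toolkit is actually mapped):

```shell
# Assumed toolkit root; adjust to where the toolkit is mapped in the container.
export CUDA_HOME=/usr/local/cuda

# Put nvprof on PATH, and the CUDA runtime plus shared CUPTI on the loader path.
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
```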