NCU CLI fails to profile a kernel - Error reported by the driver

I’m using the latest version of WSL2.

The command line is as follows:

ncu -k matmul_wmma_kernel python3 examples/matmul.py

nvidia-smi reports the following:

Mon Nov  6 01:17:02 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.01              Driver Version: 546.01       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3080        On  | 00000000:01:00.0  On |                  N/A |
| 50%   31C    P8              42W / 320W |    781MiB / 10240MiB |      5%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        29      G   /Xwayland                                 N/A      |
|    0   N/A  N/A      3680      G   /ncu-ui.bin                               N/A      |
+---------------------------------------------------------------------------------------+

ncu --version reports the following:

NVIDIA (R) Nsight Compute Command Line Profiler
Copyright (c) 2018-2023 NVIDIA Corporation
Version 2023.2.0.0 (build 32895467) (public-release)

Here’s the error:

==ERROR== An error was reported by the driver

==ERROR== Profiling failed because a driver resource was unavailable or the user does not have permission to access NVIDIA GPU Performance Counters. Ensure that no other tool (like DCGM) is concurrently collecting profiling data. For instructions on enabling permissions, see https://developer.nvidia.com/ERR_NVGPUCTRPERM. See https://docs.nvidia.com/nsight-compute/ProfilingGuide/index.html#faq for more details.
==ERROR== Failed to profile "matmul_wmma_kernel" in process 4192
==PROF== Trying to shutdown target application
==ERROR== The application returned an error code (9).
==ERROR== An error occurred while trying to profile.
==WARNING== No kernels were profiled.
==WARNING== Profiling kernels launched by child processes requires the --target-processes all option.
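(Note: the last warning above says that profiling kernels launched by child processes requires the --target-processes all option. Since the kernel here is launched from a Python script, the invocation would presumably become the following, though I haven't confirmed whether that alone avoids the driver error:)

ncu --target-processes all -k matmul_wmma_kernel python3 examples/matmul.py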

I’ve tried following the FAQs, e.g. turning off the root requirement for using the counters by adding a .conf file in /etc/modprobe.d (shown below), but this doesn’t work. Has anyone experienced a similar issue and found a workaround?
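For reference, the .conf file I added under /etc/modprobe.d contains roughly the option described on the ERR_NVGPUCTRPERM page linked above:

options nvidia NVreg_RestrictProfilingToAdminUsers=0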

For WSL2 performance counters, you need to go into the Windows NVIDIA Control Panel, choose Desktop → Enable Developer Settings, and then, within the developer settings, enable non-admin access to the GPU performance counters. This is actually more or less specified in the FAQ, but it’s easy to miss (I missed it a couple of times).


Oh wow, your method just works! I spent the whole afternoon on this issue. Thanks!