Availability issue for GPU Metrics sampling hardware unit on WSL

Hi,

I’m trying to profile a CUDA-SYCL-based application in WSL2 (Ubuntu 20.04.6) on Windows 10 (Insider Program build) using Nsight Systems 2022.4.2.1, and I’m having issues with GPU Metrics (on an RTX 3060). CPU profiling works fine and doesn’t throw any errors, but the reports contain this Daemon Error:

GPU Metrics [0]: GPU metrics sampling hardware unit is already in use by another instance of Nsight Systems or other tool. The conflict can occur within the OS as well as containers, VMs and hypervisor.
- API function: NVPW_Device_PeriodicSampler_GetCounterAvailability(&params)
- Error code: 20
- Source function: static std::vector QuadDDaemon::EventSource::GpuMetricsBackend::Impl::CounterConfig::GetCounterAvailabilityImage(uint32_t)
- Source location: /build/agent/work/323cb361ab84164c/QuadD/Target/quadd_d/quadd_d/jni/EventSource/GpuMetricsBackend.cpp:587

I should also mention that I am using CUDA 11.8 with CUDA driver version 12.1, which the report notes as a warning:

Installed CUDA driver version (12.1) is not supported by this build of Nsight Systems. CUDA trace will be collected using libraries for driver version 11.8

Due to the nature of the frameworks I’m using, I’d prefer not to update the CUDA version at the moment, if possible. No other instances of Nsight Systems or Nsight Compute are running while I generate the reports.

Do you know where this issue could possibly come from and how I could fix it? Thank you in advance.

nvidia-smi output:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.51.01              Driver Version: 532.03       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                  Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060         On | 00000000:05:00.0  On |                  N/A |
| 40%   31C    P8               10W / 170W|   1478MiB / 12288MiB |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A        37      G   /Xwayland                                 N/A      |
+---------------------------------------------------------------------------------------+

@pkovalenko & Jason - is this a WSL2 issue?

I’ve encountered the same problem. Have you solved it yet?

No, sadly I haven’t been able to solve it so far. I didn’t look into it much further, since I was able to profile directly on our GPU cluster, which was my goal in the first place. Profiling locally on WSL would have been nice for cross-checks and additional data, but it doesn’t matter much.

So, no, sorry, I can’t help you. I hope you find a solution.

Thank you bro, wish you a pleasant day.

I recently responded to another user who hit a problem specifically with the Nsight Systems version shipped with CUDA 11.8, so I am going to recommend installing the newest Nsys version (which worked for him).

I have already downloaded Nsight Systems 2023.2.1 (Windows host), but it still doesn’t work.

I will ping @jasoncohen directly.