nv-nsight-cu-cli hangs on any binary

I am running the CLI (Version 2019.5.0, Build 27346997) on Ubuntu 18.04 to profile a simple CUDA application. I recently updated my driver to 440.64 and am running CUDA 10.2.

No matter what application I try to profile, nv-nsight-cu-cli hangs indefinitely. It appears to be busy-waiting (100% CPU) with no significant memory usage, and it never makes progress; I have let it run for 12+ hours.

Any ideas on possible causes?

Actually, I just changed the driver to 440.33.01, and lo and behold, nv-nsight-cu-cli works correctly now. Does the Nsight Compute tool not work with the latest device drivers? Or is 440.33.01 in an entirely different branch of drivers than 440.64?

Nsight Compute is expected to work with the latest device driver. It is not clear why it is failing with 440.64; we will check.

Any updates on this? I did some more digging and the failure is completely reproducible: I cannot use nv-nsight-cu-cli on any binary with the latest device drivers. The tool is effectively unusable on the newest official drivers, at least for me.

We could not reproduce the hang using:
OS: Ubuntu 18.04
Nsight Compute: 2019.5
Driver: 440.64

Can you try collecting a single metric:
$ nv-nsight-cu-cli --metrics sm__cycles_elapsed.sum APPLICATION
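
If it helps, here is a minimal, self-contained CUDA program (file name and kernel are illustrative, not from the original report) that can serve as the APPLICATION placeholder above:

// minimal_test.cu -- trivial kernel used only as a profiling target
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add_one(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;   // one launch, enough for the profiler to attach to
}

int main() {
    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    add_one<<<(n + 255) / 256, 256>>>(d, n);
    cudaDeviceSynchronize();      // wait for the kernel so the run ends cleanly
    cudaFree(d);
    printf("done\n");
    return 0;
}

Build and profile it with, for example:
$ nvcc -o minimal_test minimal_test.cu
$ nv-nsight-cu-cli --metrics sm__cycles_elapsed.sum ./minimal_test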

Also, which GPU are you using?

I’m having the same problem. I am using an RTX 2060 on driver version 450.80.02 with Nsight Compute 2020.1.0, and every CUDA application I’ve tried has made my Ubuntu desktop hang.

Do you see a hang when running a CUDA application (even without using Nsight Compute)?
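
A quick way to separate the two cases (commands are illustrative; minimal_test is the hypothetical example from earlier in the thread):
$ nvidia-smi                        # confirm the driver version and that the GPU is visible
$ ./minimal_test                    # run a CUDA app on its own; it should exit almost immediately
$ nv-nsight-cu-cli ./minimal_test   # only then run the same app under the profiler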