Thank you for using Nsight Graphics and providing your feedback. We are sorry for any inconvenience you have encountered.
It’s hard for me to say anything right now. Could you please provide a simple example that would allow us to reproduce the issue? This will help us in investigating and resolving the problem more efficiently.
I gave it a try on my local test machine, but unfortunately I can’t repro it. After pressing F11 on vkcube.exe launched via Nsight Graphics 2025.1.1’s GPU Trace activity, the memory usage increases only slightly and then stays flat.
It’s hard to say why, but could you try a few things?
Does this happen on other machines?
Have you enabled or disabled any Vulkan layers? Please check using Vulkan Configurator, aka vkconfig.exe (found in your Vulkan SDK installation folder).
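Besides vkconfig, layers can also be forced on through the Vulkan loader’s environment variables, which is easy to overlook. A minimal sketch to check for those overrides (the variable names are standard loader variables; nothing here is specific to Nsight):

```python
import os

# Vulkan loader environment variables that can force layers on
# independently of vkconfig; if any are set, they may explain
# unexpected layer behavior.
LOADER_VARS = ["VK_INSTANCE_LAYERS", "VK_LAYER_PATH", "VK_LOADER_DEBUG"]

def check_layer_env(environ=os.environ):
    """Return the subset of layer-related variables that are set."""
    return {name: environ[name] for name in LOADER_VARS if name in environ}

if __name__ == "__main__":
    active = check_layer_env()
    if active:
        for name, value in active.items():
            print(f"{name}={value}")
    else:
        print("No Vulkan layer environment overrides set.")
```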
Why does Nsight show 2 windows for capture? There should be only a single one.
This feature allows you to profile a specific window, which implies your app is using multiple windows.
Why is there a warning "Multiple GPU usage detected - collected data may not be complete."? In the engine I’m not using mGPU, only a single physical device.
Could you just disable your iGPU in the BIOS and give it another try? If your BIOS doesn’t have such an option, you can try forcing your app to use the dGPU by changing the settings in the NVIDIA Control Panel. It’s hard to say whether the huge memory usage is related to the warning, but it’s worth a try at least.
I’m not really aware of any good free one. If you can suggest some, I can give it a go.
This feature allows you to profile a specific window, which implies your app is using multiple windows.
Possibly I was unclear on this part. Nsight shows as if there are 2 windows to capture while tracing a simple single-window application. While this could be an unrelated, minor issue, it could also be a side effect of a bigger one - for example, what if it collects samples in a loop while waiting for 2 present events?
changing the settings in NVIDIA Control Panel
I gave that a go today. No change - it still shows multi-GPU for some reason. I could only do it via the Control Panel; I can’t do it via the BIOS, unfortunately.
Basically, 36 seconds after the capture started, WarpViz.Injection.dll had allocated about 13.9 GB of memory - that’s all I have, unfortunately.
Let me know if there are any settings to toggle that could help localize the issue.
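A quick back-of-envelope on the figures above (assuming the allocation was roughly steady over the capture, which is an assumption):

```python
# Back-of-envelope rate check for the reported numbers:
# ~13.9 GB allocated by WarpViz.Injection.dll within 36 seconds of capture.
allocated_gb = 13.9
elapsed_s = 36

rate_gb_per_s = allocated_gb / elapsed_s
print(f"~{rate_gb_per_s:.2f} GB/s sustained allocation")  # ~0.39 GB/s

# At that rate, a hypothetical 32 GB machine would be exhausted in roughly:
print(f"~{32 / rate_gb_per_s:.0f} s to fill 32 GB")
```

That steady ~0.4 GB/s rate is consistent with something accumulating per sample or per event rather than a one-time allocation.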
Based on your screenshot, it appears that WarpViz.Injection.dll is consuming more memory than expected. Could you please provide a simple example that would allow us to reproduce the issue? This will help us in investigating and resolving the problem more efficiently.
I tested on another machine today: Nsight 2024.2 works fine; 2025.2 - driver soft-crash. By soft-crash I mean a black screen for a few seconds until Windows recovers.
The system was: Win10; RTX 3070; desktop. Not my machine, so I couldn’t do much more.
Meanwhile, locally:
The driver is cleanly installed
Both 2024.2.1 and 2025.2 fail to capture anything, crashing with the memory leak
Tweaking settings, lowering intervals, and the like seems to have no effect
A new log line I noticed in today’s testing: NVIDIA Nsight Graphics, Sample duration = 2000ns; Warp State Sampling Interval = 16; GPU PMA Buffer Size (MB) = 4000; SOC PMA Buffer Size (MB) = 0
4000 MB, essentially 4 GB, seems a bit much for a single frame. Also, I can’t find Sample duration in the settings, but 2000ns seems fine to me.
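For a rough sanity check on those logged values: interpreting "Sample duration" as the sampling period, and picking an arbitrary per-sample record size, are both assumptions for illustration only - neither is documented in the log line above.

```python
# Rough sanity check on the logged values. Treating "Sample duration"
# as the sampling period, and the 64-byte record size, are assumptions
# for illustration only.
sample_period_ns = 2000
buffer_mb = 4000
record_bytes = 64  # hypothetical size of one sample record

samples_per_s = 1e9 / sample_period_ns           # 500,000 samples/s
bytes_per_s = samples_per_s * record_bytes       # 32 MB/s
seconds_to_fill = buffer_mb * 1e6 / bytes_per_s  # 125 s
print(f"{samples_per_s:.0f} samples/s, buffer full after ~{seconds_to_fill:.0f} s")
```

Under those assumptions the 4000 MB buffer would take minutes to fill, so a 13.9 GB allocation within 36 seconds points at something other than ordinary sample storage.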
Not much of a clue here. I tried on a laptop (Win11/Ada GPU) running Nsight Graphics 2025.2 (downloaded from the public web), launched the bundled sample (Help → Samples → vk_raytrace), ran GPU Trace, and the memory usage stays at ~500MB.
I also tried another desktop (Win10/Ampere GPU); the memory doesn’t leak when using vkcube or the bundled vk_raytrace.
It shows the same GPU PMA Buffer Size (MB) = 4000 here. The rest of the message also looks fine, except Warp State Sampling Interval = 6; it seems you changed Warp State Samples Per PM Interval to a higher value.
Yeah, I understand that it’s very difficult to debug such hard-to-reproduce problems. I hope the crash logs that have been generated over time can be of use to your developers.
Let me know if there are any logs I can share from my machine, or if there are new releases with extended logging.
At the time you’re generating your crash log, can you generate a minidump from a debugger and provide that to us? That might help us understand a bit better what is going on in the process.
Can you add GPUTRACE_DISABLE_ETW=1 to the environment settings when launching through the GPU Trace activity?
GPU Trace tries to collect ETW (Event Tracing for Windows) events during collection, and it looks like the failure happens during the parsing of those events. I don’t understand why that would be the case, but that environment variable might at least unblock you.
(To give more color: ETW reports 3 billion contexts to GPU Trace in a CommandQueueStart event - which is nonsensical.)