Hello,
I have found the ability to monitor GPU and on-board memory utilization very useful for system fine-tuning and maintenance. The nvidia-smi utility provides this functionality on Linux, but it appears to be broken in CUDA 4.0 with driver 270.41.19 (see the log below; note the N/A in most of the fields). Note that nvidia-smi with the CUDA 3.2 driver worked on exactly the same hardware.
This issue has been reported in many posts across the Internet, but I could not find a solution or an official response from NVIDIA. Is NVIDIA willing to fix it, and if so, when should we expect the fix?
Thanks!
>nvidia-smi -q --id=0
==============NVSMI LOG==============

Timestamp                   : Tue Jul 12 12:50:48 2011
Driver Version              : 270.41.19

Attached GPUs               : 4

GPU 0:4:0
    Product Name            : GeForce GTX 570
    Display Mode            : N/A
    Persistence Mode        : Disabled
    Driver Model
        Current             : N/A
        Pending             : N/A
    Serial Number           : N/A
    GPU UUID                : N/A
    Inforom Version
        OEM Object          : N/A
        ECC Object          : N/A
        Power Management Object : N/A
    PCI
        Bus                 : 4
        Device              : 0
        Domain              : 0
        Device Id           : 108110DE
        Bus Id              : 0:4:0
    Fan Speed               : 77 %
    Memory Usage
        Total               : 1279 Mb
        Used                : 759 Mb
        Free                : 519 Mb
    Compute Mode            : Default
    Utilization
        Gpu                 : N/A
        Memory              : N/A
    Ecc Mode
        Current             : N/A
        Pending             : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache    : N/A
                L2 Cache    : N/A
                Total       : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache    : N/A
                L2 Cache    : N/A
                Total       : N/A
        Aggregate
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache    : N/A
                L2 Cache    : N/A
                Total       : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache    : N/A
                L2 Cache    : N/A
                Total       : N/A
    Temperature
        Gpu                 : 90 C
    Power Readings
        Power State         : N/A
        Power Management    : N/A
        Power Draw          : N/A
        Power Limit         : N/A
    Clocks
        Graphics            : N/A
        SM                  : N/A
        Memory              : N/A
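As a side note for anyone scripting around this: until the fields are fixed, the plain-text output of nvidia-smi -q can at least be parsed into a structure for the values that do report. Below is a minimal Python sketch, not an official API, assuming the indentation-nested "Key : Value" layout that nvidia-smi -q prints:

```python
def parse_nvidia_smi(text):
    """Parse indented `nvidia-smi -q` output into nested dicts.

    Lines containing " : " become key/value entries; lines without
    it open a nested section. Indentation depth defines nesting.
    """
    root = {}
    stack = [(-1, root)]  # (indent, dict); root sits at depth -1
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("="):
            continue  # skip blank lines and the NVSMI banner
        indent = len(line) - len(line.lstrip())
        # Pop back to the parent of this indentation level.
        while indent <= stack[-1][0]:
            stack.pop()
        parent = stack[-1][1]
        if " : " in line:
            key, _, value = line.partition(" : ")
            parent[key.strip()] = value.strip()
        else:
            child = {}
            parent[line.strip()] = child
            stack.append((indent, child))
    return root

# Hypothetical fragment in the same shape as the log above:
sample = """\
GPU 0:4:0
    Fan Speed : 77 %
    Memory Usage
        Total : 1279 Mb
        Used : 759 Mb
"""
info = parse_nvidia_smi(sample)
print(info["GPU 0:4:0"]["Memory Usage"]["Used"])  # -> 759 Mb
```

Splitting on " : " (with surrounding spaces) matters: it keeps values such as the timestamp or "Bus Id : 0:4:0", which themselves contain colons, intact.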