GPU utilization reporting broken in CUDA 4.0. Is a patch available?

Hello,

I have found the ability to monitor GPU and on-board memory utilization very useful for system fine-tuning and maintenance. The nvidia-smi utility provided this functionality on Linux. The feature appears to be broken in CUDA 4.0 with driver 270.41.19 (see the output below; note the N/A in most of the fields). Note that nvidia-smi with the CUDA 3.2 driver worked on the exact same hardware.

This issue has been reported in many posts across the Internet, but I have not been able to find a solution or a response from NVIDIA. Is NVIDIA willing to fix it, and if so, when should we expect the fix?

Thanks!

>nvidia-smi -q --id=0

==============NVSMI LOG==============

Timestamp                       : Tue Jul 12 12:50:48 2011
Driver Version                  : 270.41.19
Attached GPUs                   : 4

GPU 0:4:0
    Product Name                : GeForce GTX 570
    Display Mode                : N/A
    Persistence Mode            : Disabled
    Driver Model
        Current                 : N/A
        Pending                 : N/A
    Serial Number               : N/A
    GPU UUID                    : N/A
    Inforom Version
        OEM Object              : N/A
        ECC Object              : N/A
        Power Management Object : N/A
    PCI
        Bus                     : 4
        Device                  : 0
        Domain                  : 0
        Device Id               : 108110DE
        Bus Id                  : 0:4:0
    Fan Speed                   : 77 %
    Memory Usage
        Total                   : 1279 Mb
        Used                    : 759 Mb
        Free                    : 519 Mb
    Compute Mode                : Default
    Utilization
        Gpu                     : N/A
        Memory                  : N/A
    Ecc Mode
        Current                 : N/A
        Pending                 : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : N/A
            Double Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : N/A
        Aggregate
            Single Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : N/A
            Double Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : N/A
    Temperature
        Gpu                     : 90 C
    Power Readings
        Power State             : N/A
        Power Management        : N/A
        Power Draw              : N/A
        Power Limit             : N/A
    Clocks
        Graphics                : N/A
        SM                      : N/A
        Memory                  : N/A
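
For reference, nvidia-smi in this driver series is built on top of the NVML library as far as I know, so the same fields can be queried directly from code. Below is only a minimal sketch of mine, assuming nvml.h is available and the program is linked with -lnvidia-ml; on a GeForce board such as the GTX 570 the utilization query may simply return NVML_ERROR_NOT_SUPPORTED, which is what nvidia-smi renders as N/A.

/* Sketch: query the Utilization and Memory Usage fields through NVML,
 * the library that nvidia-smi wraps.  Build (assumed): gcc nvml_check.c
 * -o nvml_check -lnvidia-ml, with nvml.h on the include path. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    nvmlDevice_t dev;
    rc = nvmlDeviceGetHandleByIndex(0, &dev);           /* GPU 0, as in the log above */
    if (rc == NVML_SUCCESS) {
        nvmlUtilization_t util;
        rc = nvmlDeviceGetUtilizationRates(dev, &util); /* the "Utilization" section */
        if (rc == NVML_SUCCESS)
            printf("GPU %u %%  Memory %u %%\n", util.gpu, util.memory);
        else
            printf("Utilization: %s (shown by nvidia-smi as N/A)\n", nvmlErrorString(rc));

        nvmlMemory_t mem;
        if (nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS)  /* "Memory Usage" section */
            printf("Memory: %llu MiB used / %llu MiB total\n",
                   mem.used >> 20, mem.total >> 20);
    }

    nvmlShutdown();
    return 0;
}

If the utilization call fails with NOT_SUPPORTED here as well, the limitation is in the driver/board combination rather than in the nvidia-smi front end.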

I’m having the same problem on 270.41.06. Here is what the nvidia-smi help text says about supported products:

NVIDIA System Management Interface -- v1.280.13

NVSMI provides diagnostic information for Tesla and select Quadro devices.
The data is presented in either plain text or XML format, via stdout or a file.
NVSMI also provides several management operations for changing device state.

Supported products:
    Tesla:  S1070, S2050, C1060, C2050/70/75, M2050/70/75/90, X2070/90
    Quadro: 4000, 5000, 6000, 7000 and M2070-Q
    Other:  All other products are unsupported

nvidia-smi [OPTION1 [ARG1]] [OPTION2 [ARG2]] ...

Looks like they dropped support for reasons that are beyond me… I wish we had an alternative.
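
For the memory side there is at least a partial workaround that does not rely on nvidia-smi: query free and total device memory from inside your own process with the CUDA runtime call cudaMemGetInfo. It does not recover the GPU/Memory utilization percentages, and the snippet below is just a sketch of mine (the file name and build line are assumptions; link against libcudart or build with nvcc).

/* Sketch: device-wide free/total memory via the CUDA runtime, which
 * works on GeForce parts.  Build (assumed): nvcc mem_check.cu -o mem_check */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_b = 0, total_b = 0;

    cudaSetDevice(0);                                   /* device 0, as in the log */
    cudaError_t err = cudaMemGetInfo(&free_b, &total_b);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    printf("Free: %zu MiB  Total: %zu MiB  Used: %zu MiB\n",
           free_b >> 20, total_b >> 20, (total_b - free_b) >> 20);
    return 0;
}

This only gives numbers comparable to the Memory Usage section of the log, not the Utilization section, but it keeps working when nvidia-smi reports N/A.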