Memory interface

Is there a way to get information about the memory interface of the current CUDA device?

I have to compute the theoretical bandwidth, but I only have the clock rate:

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
... prop.clockRate ...

Thanks for your help.
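For reference, the usual formula is: theoretical bandwidth = memory clock × (bus width / 8) bytes × 2, the factor of 2 being the double data rate of DDR memory. Below is a minimal sketch; note that prop.clockRate only reports the core clock in kHz, so the memory clock and bus width are hard-coded assumptions here (they match the nvclock output quoted in the next post):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    /* prop.clockRate is the core clock in kHz -- NOT the memory clock,
       so the two values below must come from elsewhere (spec sheet, nvclock). */
    double memClockMHz  = 799.2;  /* assumed: 9500 GT memory clock */
    double busWidthBits = 128.0;  /* assumed: 9500 GT memory interface width */

    /* DDR memory transfers data twice per clock, hence the factor of 2. */
    double bwGBs = memClockMHz * 1e6 * (busWidthBits / 8.0) * 2.0 / 1e9;

    printf("core clock:            %d kHz\n", prop.clockRate);
    printf("theoretical bandwidth: %.1f GB/s\n", bwGBs);
    return 0;
}

For the 9500 GT values this works out to 799.2e6 × 16 × 2 / 1e9 ≈ 25.6 GB/s.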

I don’t think you can get that through the CUDA API. On Linux, the nvclock utility can get that information:

avid@quadro:~/plots/carbon-part$ nvclock -i
-- General info --
Card:           nVidia Geforce 9500GT
Architecture:   G96 A1
PCI id:         0x640
GPU clock:      594.000 MHz
Bustype:        PCI-Express
-- Shader info --
Clock: 1512.000 MHz
Stream units: 32 (011b)
ROP units: 8 (b)
-- Memory info --
Amount:         512 MB
Type:           128 bit DDR3
Clock:          799.200 MHz
-- PCI-Express info --
Current Rate:   16X
Maximum rate:   16X
-- Sensor info --
Sensor: GPU Internal Sensor
GPU temperature: 56C
Fanspeed: 50.0%
-- VideoBios information --
Version: 62.94.35.00.00
Signon message: G96 P727 SKU 0001 VGA BIOS
Performance level 0: gpu 550MHz/shader 1400MHz/memory 800MHz/1.00V/100%
VID mask: 1
Voltage level 0: 0.95V, VID: 1
Voltage level 1: 1.00V, VID: 0

So there is definitely an API that can fetch this information at runtime, just not via CUDA.

Hm, if I did that, my code would no longer be operating-system independent.

I think I will drop this feature.

Thanks for your answer.

NVAPI can also give you this kind of information:
http://developer.nvidia.com/object/nvapi.html
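For illustration, a minimal sketch of talking to NVAPI (Windows only). This only initializes the library and enumerates the physical GPUs; the memory-related queries live in further NvAPI_GPU_* entry points that I won't guess at here:

#include <stdio.h>
#include "nvapi.h"   /* header from the NVAPI SDK; Windows only */

int main(void)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return 1;

    NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
    NvU32 count = 0;
    NvAPI_EnumPhysicalGPUs(gpus, &count);

    for (NvU32 i = 0; i < count; ++i) {
        NvAPI_ShortString name;
        NvAPI_GPU_GetFullName(gpus[i], name);
        printf("GPU %u: %s\n", i, name);
    }
    return 0;
}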

Looks really nice, I will give it a try. But is it only for Windows platforms?

Yes, NVAPI is Windows-only at the moment. If you have an important application that could use NVAPI under Linux, please let us know.

I have :)

On a general note, what I would also like to do is write a Linux script that monitors the GPUs in the system.

Such a script would have to enumerate the GPUs and get their busy/idle state, temperatures, and, most importantly, their “real-time” occupancy.

Something like a “top” utility for Linux.

I know some of this functionality is already available (temperatures, fan speed, etc.), but I would like a tool that can tell me for how much of any 24-hour period my GPU cluster has been running at full speed.

thanks

eyal
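For what it's worth, a minimal sketch of such a monitor against NVIDIA's NVML management library (assuming a driver recent enough to ship it; link with -lnvidia-ml). Sampling this periodically, e.g. from cron, and accumulating the busy percentage would give the 24-hour utilization figure asked for above:

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(i, &dev);

        nvmlUtilization_t util;   /* GPU and memory utilization, in percent */
        nvmlDeviceGetUtilizationRates(dev, &util);

        unsigned int temp = 0;    /* degrees Celsius */
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);

        printf("GPU %u: %u%% busy, %u C\n", i, util.gpu, temp);
    }

    nvmlShutdown();
    return 0;
}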

I will second that. For us, the ability to extend our existing cluster monitoring and accounting solution to include the GPUs would be a big help.