Odd amount of global memory

I just installed a Tesla S1070 and ran deviceQuery (from the CUDA SDK) on a connected host to verify that the installation succeeded:


There are 2 devices supporting CUDA

Device 0: “Tesla T10 Processor”
Major revision number: 1
Minor revision number: 3
Total amount of global memory: -262144 bytes
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1440000 kilohertz

Device 1: “Tesla T10 Processor”
Major revision number: 1
Minor revision number: 3
Total amount of global memory: -262144 bytes
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1440000 kilohertz

Test PASSED

Press ENTER to exit…

All the output is sensible except the negative value of “Total amount of global memory”.

Does anyone know the reason?

It is pretty obviously integer overflow. That number comes from totalGlobalMem, which is a size_t, and the version of the deviceQuery source I have prints it with a %u format. Neither of those things should result in signed integer output…

You are probably using an old SDK that did not use a size_t for the amount of global memory.

Why does deviceQuery on your system identify only two devices, when a Tesla S1070 contains four T10 GPUs? I hope you connected the S1070 to two hosts…
I also have an S1070, and with the CUDA 2.1 SDK I correctly see the specs of all the GPUs.