CUDA with Tesla C1060

Hello! My question is how to install a Tesla C1060 in a system that already has a CUDA device (an NVIDIA Quadro FX 1700).
All the CUDA 2.0 software for the Tesla installed without problems and the SDK tests passed, but checking the device info with cudaGetDeviceProperties (or running deviceQuery from the SDK) gives:
Device 0: “Tesla C1060”

Number of multiprocessors: 30 [ok]
Number of cores: 240 [ok]
Warp size: 32 [!!! it’s FX1700 warp size!]

Device 1: “Quadro FX 1700”

Number of multiprocessors: 4 [ok]
Number of cores: 32 [ok]
Warp size: 32 [ok]

and so on… Most of the Tesla’s reported information (warp size, block/grid dimensions, etc.) is the same as the FX 1700’s.
Does anybody know the reason?

The reason is that this information (warp size, maximum block and grid dimensions, amount of shared memory, and number of registers) is specific to a generation of devices, not to a particular model. All GT200-based devices have a warp size of 32, 16384 registers per multiprocessor, 16 KB of shared memory per multiprocessor, etc. (in fact, every CUDA device of this era has a warp size of 32). The models differ in the number of multiprocessors (30 vs. 4 in your case), the amount of memory, and their respective clock frequencies.
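As a quick check, here is a minimal sketch (assuming the CUDA runtime API; it needs to be built with nvcc on a machine with at least one CUDA device) that enumerates all devices and separates the model-specific properties from the generation-specific ones:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: \"%s\"\n", dev, prop.name);

        // Model-specific: these differ between the C1060 and the FX 1700
        printf("  Multiprocessors:      %d\n", prop.multiProcessorCount);
        printf("  Global memory:        %zu MB\n",
               prop.totalGlobalMem >> 20);
        printf("  Clock rate:           %d kHz\n", prop.clockRate);

        // Generation-specific: identical for every device of the
        // same architecture, so both cards report the same values
        printf("  Warp size:            %d\n", prop.warpSize);
        printf("  Registers per block:  %d\n", prop.regsPerBlock);
        printf("  Shared mem per block: %zu bytes\n",
               prop.sharedMemPerBlock);
    }
    return 0;
}
```

Compile with, e.g., `nvcc deviceprops.cu -o deviceprops`. On the system described above it would print two devices whose warp size, register count, and shared memory match, while the multiprocessor count and memory size differ.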

So it’s absolutely OK to see a warp size of 32 for both the Tesla and the Quadro.

Ok, thanks!