HPL and Tesla C1060: "Not enough GPUs" error when running HPL

Hi, my name is Andres Aguilar. I'm from Costa Rica, and I'm working on running HPL to benchmark a desktop computer with four Tesla C1060 cards.
I compiled the code available from the developer site (hpl-2.0_FERMI_v13.tgz),
but when I run "# mpirun -np 4 xhpl" I get the message:

!!! ERROR: Not enough GPUs on node localhost.localdomain, 0 GPUs found, 4 GPUs required !!!

I installed the NVIDIA driver (devdriver_4.0_linux_64_270.41.19.run)
and the toolkit (cudatoolkit_4.0.17_linux_64_rhel5.5.run); I'm using Red Hat 5.5.

When I run the command "# nvidia-smi -L",
I get the following output:

GPU 0: Tesla C1060 (UUID: GPU-91142153e2dba91b-42cfaae2-c4e64e28-1381f927-8e9745dcaec73a3a8a5d9691)
GPU 1: Tesla C1060 (UUID: GPU-63cc384d7579277e-c07960ea-b7f7cf4a-0918a217-95284722c82dad0aa908323f)
GPU 2: Tesla C1060 (UUID: GPU-840559236be1c2ee-e8f8aa28-2d3f468b-18720f78-f063b706da68e04e962a99af)
GPU 3: Tesla C1060 (UUID: GPU-94b52546e6c68c17-e3697725-3fb5d52b-3e83cee5-2b0cc3553a25a36b829d1419)

So the driver does seem to recognize my GPU cards.

Do you have any idea what I'm doing wrong?

Thanks a lot for your help! :)

The code is for Fermi cards only. The Tesla C1060 is a pre-Fermi (GT200) GPU, so the binary doesn't count it as a usable device.

So I guess it would work with an NVIDIA Tesla M2070 card. Am I right?

Yes, any of the Tesla 2050/2070/2090 cards.

mfatica, Thanks a lot for your help!

One last question: do you know where I can get an HPL version for non-Fermi cards (like the C1060)?

You could modify the current version to use CUBLAS instead of the custom DGEMM code, which only works on Fermi.

Ok, I'll give it a try. Thanks a lot!!