I have 16 IBM dx360 M4 nodes, each with 256 GB of memory and two NVIDIA Tesla M2070 GPUs.
When I run Linpack on the GPUs, it only works with N = 120000, NB = 1024, P = 1, and Q = 2.
But when I use the following parameters, the run fails.
What should I do? The Linpack output is below.
Can someone help me? Thanks a lot.
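Before looking at the output, two quick sanity checks worth running against any HPL configuration (a sketch, assuming one MPI rank per GPU; the node/GPU/memory counts below are the ones from the post, everything else is illustrative): does the N×N double-precision matrix fit in aggregate memory, and does P×Q equal the number of MPI ranks? HPL will not start if P×Q differs from the launched rank count.

```python
# Sanity checks for an HPL (Linpack) configuration.
# Assumed setup (from the post): 16 nodes, 2 GPUs per node,
# 256 GB RAM per node, one MPI rank per GPU.

nodes = 16
gpus_per_node = 2
mem_per_node_gb = 256
ranks = nodes * gpus_per_node  # 32 MPI ranks expected


def matrix_mem_gb(n):
    """Memory for the N x N double-precision matrix A, in GB."""
    return n * n * 8 / 1e9


def grid_matches_ranks(p, q, nranks):
    """HPL requires P * Q == number of MPI ranks launched."""
    return p * q == nranks


n = 480000
per_node = matrix_mem_gb(n) / nodes
print(f"N={n}: A needs {per_node:.0f} GB per node "
      f"(of {mem_per_node_gb} GB available)")
print("P=2 x Q=122 matches 32 ranks:", grid_matches_ranks(2, 122, ranks))
print("P=4 x Q=8   matches 32 ranks:", grid_matches_ranks(4, 8, ranks))
```

With 32 ranks (16 nodes × 2 GPUs), a P=2, Q=122 grid implies 244 processes, which does not match, whereas grids such as 4×8 do.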
HPLinpack 2.0 – High-Performance Linpack benchmark – September 10, 2008
Written by A. Petitet and R. Clint Whaley, Innovative Computing Laboratory, UTK
Modified by Piotr Luszczek, Innovative Computing Laboratory, UTK
Modified by Julien Langou, University of Colorado Denver
An explanation of the input/output parameters follows:
T/V : Wall time / encoded variant.
N : The order of the coefficient matrix A.
NB : The partitioning blocking factor.
P : The number of process rows.
Q : The number of process columns.
Time : Time in seconds to solve the linear system.
Gflops : Rate of execution for solving the linear system.
The following parameter values will be used:
N : 480000
NB : 1024
PMAP : Row-major process mapping
P : 2
Q : 122
PFACT : Left
NBMIN : 2
NDIV : 2
RFACT : Left
BCAST : 1ring
DEPTH : 1
SWAP : Spread-roll (long)
L1 : no-transposed form
U : no-transposed form
EQUIL : yes
ALIGN : 8 double precision words
- The matrix A is randomly generated for each test.
- The following scaled residual check will be computed:
||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
- The relative machine precision (eps) is taken to be 1.110223e-16
- Computational tests pass if scaled residuals are less than 16.0
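For reference, the scaled residual that HPL reports above can be reproduced on a small system like this (a NumPy sketch of the same formula, using the eps and 16.0 threshold printed in the output; not HPL's actual code):

```python
import numpy as np


def scaled_residual(A, x, b, eps=1.110223e-16):
    """||Ax-b||_oo / (eps * (||x||_oo * ||A||_oo + ||b||_oo) * N),
    the check HPL applies; the test passes if the result is < 16.0."""
    n = A.shape[0]
    r = np.linalg.norm(A @ x - b, np.inf)          # residual infinity-norm
    scale = (np.linalg.norm(x, np.inf) * np.linalg.norm(A, np.inf)
             + np.linalg.norm(b, np.inf))
    return r / (eps * scale * n)


# Small demonstration: solve a random system and verify the check passes.
rng = np.random.default_rng(0)
A = rng.random((200, 200))
b = rng.random(200)
x = np.linalg.solve(A, b)
print("scaled residual:", scaled_residual(A, x, b))
```

A correctly solved system should give a residual of order 1, well under the 16.0 threshold; a residual above it indicates a numerical failure in the run.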