Not too much to say…I’m combing through the eigenvalue SDK simulation, trying to figure out how to change the matrix size.
When I go into the “void getMatrixSize” function in “main.cu” and change “int temp” to something other than -1, the build crashes. That doesn’t seem
right, since the code has an algorithm to handle my NxN specification…or does it? Clearly, I’m missing something.
So if anyone has ideas, I’d appreciate 'em. Thanks.
The code accepts the matrix size as a command line argument (there’s a sketch of how it gets parsed after the runs below). I can run it like this:
avidday@cuda:~/NVIDIA_GPU_Computing_SDK_3.0/C/src/eigenvalues$ ~/NVIDIA_GPU_Computing_SDK_3.0/C/bin/linux/release/eigenvalues
[ CUDA eigenvalues ]
Matrix size: 2048 x 2048
Precision: 0.000010
Iterations to be timed: 100
Result filename: 'eigenvalues.dat'
Gerschgorin interval: -2.894310 / 2.923303
Average time step 1: 15.230413 ms
Average time step 2, one intervals: 4.842406 ms
Average time step 2, mult intervals: 0.004310 ms
Average time TOTAL: 20.228331 ms
PASSED.
Press ENTER to exit...
avidday@cuda:~/NVIDIA_GPU_Computing_SDK_3.0/C/src/eigenvalues$ ~/NVIDIA_GPU_Computing_SDK_3.0/C/bin/linux/release/eigenvalues -matrix-size=1024
[ CUDA eigenvalues ]
Matrix size: 1024 x 1024
Precision: 0.000010
Iterations to be timed: 100
Result filename: 'eigenvalues.dat'
Gerschgorin interval: -2.713848 / 2.784777
Average time step 1: 7.058350 ms
Average time step 2, one intervals: 3.610529 ms
Average time step 2, mult intervals: 0.005470 ms
Average time TOTAL: 10.807071 ms
Press ENTER to exit...
avidday@cuda:~/NVIDIA_GPU_Computing_SDK_3.0/C/src/eigenvalues$ ~/NVIDIA_GPU_Computing_SDK_3.0/C/bin/linux/release/eigenvalues -matrix-size=8192
[ CUDA eigenvalues ]
Matrix size: 8192 x 8192
Precision: 0.000010
Iterations to be timed: 100
Result filename: 'eigenvalues.dat'
Gerschgorin interval: -2.932942 / 2.934441
Average time step 1: 99.568253 ms
Average time step 2, one intervals: 19.195148 ms
Average time step 2, mult intervals: 0.004890 ms
Average time TOTAL: 119.115784 ms
Press ENTER to exit...
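For what it’s worth, the sample picks the size up in main.cu via the cutil command line helpers. A rough sketch of the idea (the function and variable names here are my guesses, not the sample’s exact code):

#include <cutil.h>

unsigned int getMatrixSize(int argc, char** argv)
{
    int temp = -1;  // stays -1 when the flag is absent (my guess at the sentinel you found)
    cutGetCmdLineArgumenti(argc, (const char**)argv, "matrix-size", &temp);
    // fall back to the default of 2048 when no size is given
    return (temp > 0) ? (unsigned int)temp : 2048;
}

If the real code is arranged like that, the -1 you found is just a “nothing was passed on the command line” sentinel, not the place to hard-code a size.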
I tried this:
C:\CUDA\SDK\C\src\eigenvalues> nvcc -ccbin "C:\Program Files\Visual Studio 9.0\VC\bin" main.cu -matrix-size=1024
But NT doesn’t like that syntax.
Plus, when I ran the code initially, it was from within Visual Studio Express 2008. When I try that command line (even without the
“-matrix-size=1024”), it fails to find the necessary headers and libraries. I THINK I can fix those include/link issues, but I’d rather just run
the thing in Visual Studio, at least for now.
Do you know how I could pass that “-matrix-size=1024” specification in Visual Studio? Is there even such a command-line-esque
feature in Visual Studio?
Sorry about all the basic questions…I’ve been at this stuff for nearly a year and I still don’t know much. Thanks again.
One other thing, too. I’m trying to manually change the input values in the “main.cu” file.
I know that the “diagonal.dat” file has input values, but it only has 2,048 of them for a 2048x2048 matrix. How does it fill the 4+ million entries with
those 2,048 values?
Firstly, it only reads those values if the default size of 2048 is used; otherwise it generates a new set of random values. If you read the PDF included with the example, the abstract explains that the sample computes the eigenvalues of a symmetric, tridiagonal matrix.
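Here is a minimal sketch of that read-or-generate behaviour; readColumn() and the file handling are my own stand-ins for what initInputData() actually does in main.cu:

#include <cstdio>
#include <cstdlib>

// Read up to n whitespace-separated floats from a text file.
static void readColumn(const char* path, float* dst, unsigned int n)
{
    FILE* f = fopen(path, "r");
    if (!f) { perror(path); exit(1); }
    for (unsigned int i = 0; i < n; ++i)
        if (fscanf(f, "%f", &dst[i]) != 1) break;
    fclose(f);
}

// a = main diagonal, b = superdiagonal (mirroring input.a / input.b)
void initInput(float* a, float* b, unsigned int mat_size)
{
    if (mat_size == 2048) {
        // default size: use the data files shipped with the sample
        readColumn("diagonal.dat", a, mat_size);
        readColumn("superdiagonal.dat", b, mat_size);
    } else {
        // any other size: generate random input instead
        for (unsigned int i = 0; i < mat_size; ++i) {
            a[i] = (float)rand() / (float)RAND_MAX;
            b[i] = (float)rand() / (float)RAND_MAX;
        }
    }
}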
So even without reading the code, the answer is that the diagonal file does exactly what its name suggests and contains only the main diagonal, while the superdiagonal file contains the first superdiagonal (which, because the matrix is symmetric, is also the first subdiagonal). Only 4096 values are therefore required to describe the full 2048 x 2048 matrix the problem considers with its default arguments. You can see this reflected in the initInputData() subroutine: input.a is initialized to hold the diagonal and input.b to hold the superdiagonal.
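To make the storage scheme concrete, here is a hypothetical accessor for an N x N symmetric tridiagonal matrix held as two arrays (matElem() is purely illustrative, not from the sample, and the b[] indexing is one possible layout):

#include <cstdlib>

// a[i] holds M(i,i); b[k] holds M(k-1,k) for k >= 1, with b[0] unused.
// Because the matrix is symmetric, the superdiagonal doubles as the
// subdiagonal, so two length-N arrays describe the whole matrix.
float matElem(const float* a, const float* b, int i, int j)
{
    if (i == j)
        return a[i];                    // main diagonal
    if (std::abs(i - j) == 1)
        return b[(i > j) ? i : j];      // first super-/subdiagonal
    return 0.0f;                        // everything else is zero
}

Every other entry of the 4+ million is implicitly zero, which is how a few thousand stored values describe the whole matrix.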