Using CUDA from LabVIEW: some questions about matrix computation

Hello there.

I'm a newbie to CUDA, and I have hardly any knowledge of C or C++.
I have been using LabVIEW and MATLAB ever since I started with computer languages.
Fortunately, there is a toolkit in LabVIEW for CUDA.
I started with the simplest case, one where I can compare the results between the CUDA code and the LabVIEW code.

I have two questions about matrix computation.
For computations on a huge matrix, such as a 2D matrix of size 20,000 by 20,000, it is impossible to generate the matrix in LabVIEW due to memory overflow.
I guess there is a way to insert elements into the GPU iteratively, which might work without overflowing host memory.
However, I have no idea how to insert elements into the GPU through LabVIEW.
There are some VIs in the toolkit, but which VI can help me resolve this problem?
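To make the idea concrete, here is a rough sketch in plain CUDA C of the chunked upload I have in mind: allocate the full matrix once on the GPU, then copy it over one block of rows at a time, so the host never needs the whole 20,000 x 20,000 array in memory. CHUNK_ROWS and fill_rows() are placeholders I made up for illustration; they are not part of the LabVIEW toolkit.

// Sketch only: upload a 20,000 x 20,000 float matrix to the GPU in row chunks
// so the host never holds the full matrix at once.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000
#define CHUNK_ROWS 500   /* 500 x 20,000 floats = ~40 MB per host buffer */

/* Placeholder: generate 'rows' rows starting at 'first_row'.
   In practice this is whatever data LabVIEW would have produced. */
static void fill_rows(float *buf, int first_row, int rows)
{
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < N; ++c)
            buf[(size_t)r * N + c] = (float)(first_row + r);
}

int main(void)
{
    float *d_matrix = NULL;
    /* One allocation for the whole matrix: 20,000 * 20,000 * 4 bytes = ~1.6 GB.
       Note: that is more memory than an NVS 160M actually has, so in practice
       the computation itself would also have to be tiled. */
    if (cudaMalloc((void **)&d_matrix, (size_t)N * N * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    float *h_chunk = (float *)malloc((size_t)CHUNK_ROWS * N * sizeof(float));

    for (int row = 0; row < N; row += CHUNK_ROWS) {
        int rows = (row + CHUNK_ROWS <= N) ? CHUNK_ROWS : (N - row);
        fill_rows(h_chunk, row, rows);
        /* Copy this block of rows into its place inside the device matrix. */
        cudaMemcpy(d_matrix + (size_t)row * N, h_chunk,
                   (size_t)rows * N * sizeof(float), cudaMemcpyHostToDevice);
    }

    free(h_chunk);
    cudaFree(d_matrix);
    return 0;
}

Is there a VI in the toolkit that wraps this kind of per-chunk copy into an already-allocated device buffer?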

The GPU I have been using is as follows:

NVS 160M

CUDA Capability Major/Minor version number: 1.1

8 CUDA cores (1 multiprocessor x 8 CUDA cores/MP)

Max texture dimension size (x, y, z):
1D: 8192
2D: 65536 x 32768
3D: 2048 x 2048 x 2048

Max layered texture size (dim) x layers:
1D: 8192 x 512
2D: 8192 x 8192 x 512

Max sizes of each dim. of a block: 512 x 512 x 64

Max sizes of each dim. of a grid: 65535 x 65535 x 1
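If I read these limits correctly, a 20,000 x 20,000 matrix can still be covered by a single 2D kernel launch: with 16 x 16 = 256 threads per block (under the 512-thread limit), the grid would be 1250 x 1250 blocks, well under 65535 x 65535. The block shape and the trivial kernel below are only assumptions to check the arithmetic; the code uses a smaller n so the buffer actually fits in this GPU's memory.

/* Sketch: a launch configuration that stays inside the limits listed above
   (max 512 threads per block, max 65535 x 65535 grid). */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *a, int n, float k)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < n && col < n)
        a[(size_t)row * n + col] *= k;   /* scale one element in place */
}

int main(void)
{
    const int n = 4096;                  /* smaller than 20,000 so it fits in device memory */
    float *d_a = NULL;
    cudaMalloc((void **)&d_a, (size_t)n * n * sizeof(float));
    cudaMemset(d_a, 0, (size_t)n * n * sizeof(float));

    dim3 block(16, 16);                              /* 256 threads <= 512 limit */
    dim3 grid((n + block.x - 1) / block.x,           /* 256 blocks in x (1250 for n = 20000) */
              (n + block.y - 1) / block.y);          /* 256 blocks in y (1250 for n = 20000) */
    scale<<<grid, block>>>(d_a, n, 2.0f);
    printf("launch: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_a);
    return 0;
}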


Furthermore, there seem to be VIs for inserting elements into the GPU, as shown in the image below.

Thank you in advance!

Albert

You can also read this question in more detail at
https://decibel.ni.com/content/thread/17820?tstart=0