Keeping objects/pointers in graphics card memory

I have a question: what are the possibilities of keeping a pointer (address) or an object, say a 512×512 matrix, in graphics card memory?

I would like to use a Matlab-CUDA combination to get one matrix with fixed values into graphics memory (already done that) and leave it there, end the CUDA call, and return to Matlab. Then I would send another matrix with variable content to the graphics card, do a matrix multiplication on the GPU, return one value to Matlab, keep the first matrix in graphics card memory, replace the second matrix with a new matrix from Matlab, and so on.

So is it possible to keep one matrix in graphics card memory and keep its address, and how (just don't call cudaFree?), so I can use it in multiple calls of one function in Matlab?

When you call cudaFree(), the memory is not actually erased (it is just ‘released’ and available for re-use). If you’re not calling anything else (in CUDA) after the matrix is computed, I don’t see why it wouldn’t work if you just re-initialized another matrix at the same address (and of the same size, of course).

From what you wrote though, it may be a better solution to write a CUDA-enabled MEX file, and get the best of both worlds.

Just guessing, but try to make sure that the context is not destroyed. That probably means some playing with the driver API.
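For what it's worth, a minimal sketch of that idea (an assumption on my part, not something the driver API forces you to do in a MEX file) would be to keep the context in a static variable and re-attach it on every call, so it is never destroyed:

#include <cuda.h>

static CUcontext persistent_ctx = NULL;  /* survives as long as the MEX file stays loaded */

/* Create the context once, then re-attach it on later calls instead of destroying it. */
static void attach_persistent_context(void)
{
    if (persistent_ctx == NULL) {
        CUdevice dev;
        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&persistent_ctx, 0, dev);  /* context is current after creation */
    } else {
        cuCtxPushCurrent(persistent_ctx);      /* make the saved context current again */
    }
}

/* Call this at the end of mexFunction: detach the context but do NOT destroy it. */
static void detach_persistent_context(void)
{
    cuCtxPopCurrent(&persistent_ctx);
}

Any device allocations made while this context is current stay valid for later calls, because the context itself is never destroyed.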

What about using some kind of daemon process that has exclusive access to the card, and having Matlab talk to it through something like a pipe or a TCP/SOA-style interface?

Yes, I am working with a CUDA-enabled MEX file; this way I get a matrix from Matlab into graphics card memory. I would just like to pass the address where matrix A resides in graphics card memory back to Matlab and use it in another CUDA-enabled MEX function. That is, if matrix A is still in memory after the first function ends.

That sounds dangerous. While it should remain unaltered in memory, the exact consequence of doing this is still somewhat undefined. It may work a million times in a row, but there’s always that chance that something will somehow alter one of the memory locations.

So if I understand correctly, there is no easy way to keep objects and addresses in graphics card memory unaltered after one callable function (*.cu or *.mexw32) finishes executing inside a Matlab function (*.m)?

This is easily possible. You can find how to do it on the forum.

Something along these lines:

#include "mex.h"
#include <cuda_runtime.h>

static float *d_persistent_matrix = NULL;

/* runs when you do "clear mexfilename" in Matlab */
static void cleanup(void)
{
    cudaFree(d_persistent_matrix);
    d_persistent_matrix = NULL;
}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (d_persistent_matrix == NULL) {
        /* first call: allocate on the device and copy the input (passed as single(A)) */
        size_t bytes = mxGetNumberOfElements(prhs[0]) * sizeof(float);
        cudaMalloc((void **)&d_persistent_matrix, bytes);
        cudaMemcpy(d_persistent_matrix, mxGetData(prhs[0]), bytes,
                   cudaMemcpyHostToDevice);
        mexAtExit(cleanup);
    }
    /* later calls can use d_persistent_matrix without re-uploading it */
}

When you want to change the matrix that stays on the GPU, you have to call clear mexfilename in Matlab. Then the exit function registered with mexAtExit is called, and the memory is freed.
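As a sketch of an alternative (assuming the replacement matrix has the same dimensions and is passed from Matlab as single precision), you could also overwrite the persistent buffer in place instead of clearing the MEX file:

#include "mex.h"
#include <cuda_runtime.h>

/* Sketch: copy a new matrix over the buffer that was allocated on the first call.
   Assumes newA has the same number of elements as the original allocation. */
static void update_persistent_matrix(float *d_persistent_matrix, const mxArray *newA)
{
    cudaMemcpy(d_persistent_matrix, mxGetData(newA),
               mxGetNumberOfElements(newA) * sizeof(float),
               cudaMemcpyHostToDevice);
}

That avoids re-allocating, but clear mexfilename remains the clean way to actually release the memory.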

You can do this really easily using AccelerEyes' Jacket for Matlab (www.accelereyes.com). You would write a MEX file with CUDA code as usual, but the memory is left on the card and is easily accessible. See http://www.accelereyes.com/examples/mex_example.zip to see how this is done.

Then, when the MEX file completes, you end up with a gsingle type that acts exactly like Matlab's native single matrix but works with data out on the GPU. You can also visualize your data using Jacket's visualization engine (Graphics Toolbox: http://www.accelereyes.com/graphics-toolbox.php).

The idea is that aside from the CUDA within the MEX file, you can prototype the rest of your code as if there were no GPU in the loop.

The forums there are pretty active, so you can ask questions there and we tend to answer pretty quickly.

Hope this helps or makes life easier!

  • Gallagher

E.D. Riedijk: You probably referred to this thread: http://forums.nvidia.com/index.php?showtopic=70192

So as long as you declare the static pointer outside mexFunction, and until you call cudaFree on that pointer, the object will stay in CUDA memory?

Yep. The CUDA context stays, so the memory does too.

Another small question. If you initialise a pointer as a static pointer before mexFunction, how do you handle it inside mexFunction so that you can return its value as a left-hand-side output of the MEX function?

So you would get the location of the matrix on the GPU from function 1 like this: pointerGPU = mexfunc(matrix1), and then use it in the second function, mexfunct2(matrix2, pointerGPU), where I would do a matrix multiplication between the first and the second matrix.

If you mean that you want to use that same variable in two MEX functions, I am not sure; you might want to check the Matlab documentation to see if you can have static/persistent variables that are shared between two MEX functions.
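For what it's worth, one way to do this without sharing variables between the MEX files (just a sketch; mexfunc and mexfunct2 are the hypothetical names from the post above, and the input is assumed to be passed as single precision) is to return the raw device address to Matlab as a uint64 scalar and cast it back inside the second MEX file. It carries the risk mentioned earlier: the value is only meaningful while the CUDA context is still alive.

/* mexfunc: upload matrix1 to the GPU and return its device address as a uint64 */
#include "mex.h"
#include <cuda_runtime.h>
#include <stdint.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    size_t bytes = mxGetNumberOfElements(prhs[0]) * sizeof(float);
    float *d_A = NULL;

    cudaMalloc((void **)&d_A, bytes);
    cudaMemcpy(d_A, mxGetData(prhs[0]), bytes, cudaMemcpyHostToDevice);

    /* pointerGPU = mexfunc(single(matrix1)) */
    plhs[0] = mxCreateNumericMatrix(1, 1, mxUINT64_CLASS, mxREAL);
    *(uint64_T *)mxGetData(plhs[0]) = (uint64_T)(uintptr_t)d_A;
}

/* Inside mexfunct2(matrix2, pointerGPU), recover the pointer the same way:
   float *d_A = (float *)(uintptr_t)(*(uint64_T *)mxGetData(prhs[1]));
   and then launch the multiplication kernel with d_A and the uploaded matrix2. */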