Good morning everyone. I have been trying to optimize memory transfers of Matlab data to the GPU, as this is now where most of the time in my function is spent. But unfortunately, every time I use pinned memory, Matlab simply crashes… :(
Has anyone had any luck using cudaMallocHost in a mex-file?
I understand that Matlab is fairly memory sensitive, as indicated in NVIDIA's whitepaper Accelerating Matlab with CUDA: “Currently it is not possible to use pinned memory on the host, because the data on the host side needs to be allocated with the special mxAlloc function.”
The host-side memory does not actually have to be allocated that way, though, since we can pass a pointer to the data directly to the GPU simply by fetching it:
double *matrix_gpu, *matrixHost;
matrixHost = (double*)mxGetData(prhs[0]);
cudaMalloc((void**)&matrix_gpu, sizeof(double)*mMatrix*nMatrix*zMatrix);
cudaMemcpy(matrix_gpu, matrixHost, sizeof(double)*mMatrix*nMatrix*zMatrix, cudaMemcpyHostToDevice);
// So no mxAlloc needed :)
But according to Boxed Cylon's thread (very good for getting started with Matlab and CUDA) [url=“http://forums.nvidia.com/index.php?showtopic=70731&pid=407662&mode=threaded&start=#entry407662”]http://forums.nvidia.com/index.php?showtop...rt=#entry407662[/url]
He notes:
→ On the other mail list, I inquired about using cudaMallocHost in a mex file to speed up the host-device communications. This is to use “pinned memory”. Matlab is fairly memory sensitive, so it is not generally possible to allocate memory this way. I suspect, however, that one could actually sometimes get away with using cudaMallocHost() in a mex file - it is a matter of avoiding any calls to matlab. So one could cudaMallocHost(), compute away, and then clear out the CUDA variables before Matlab is aware of what is going on and complains/crashes (it's the “don't ask, don't tell” memory policy).
However, I have tried calling cudaMallocHost() in the mex file without invoking any Matlab functions, but to no avail.
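For reference, here is the pattern I would expect to follow the “don't ask, don't tell” advice from that thread: keep the pinned buffer completely separate from Matlab's memory, memcpy the mxGetData contents into it, transfer, and cudaFreeHost before returning, never handing the pinned pointer to any mx* function. This is only a sketch of what I understand the approach to be, not something I have gotten working (variable names are just illustrative):

#include "mex.h"
#include <cuda_runtime.h>
#include <string.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    size_t n = mxGetNumberOfElements(prhs[0]);
    size_t bytes = n * sizeof(double);
    double *matlabData = (double*)mxGetData(prhs[0]);

    /* Pinned staging buffer, allocated by CUDA, never by Matlab */
    double *pinnedHost = NULL;
    if (cudaMallocHost((void**)&pinnedHost, bytes) != cudaSuccess)
        mexErrMsgTxt("cudaMallocHost failed");

    /* Plain CPU copy from Matlab's buffer into the pinned one */
    memcpy(pinnedHost, matlabData, bytes);

    double *devPtr = NULL;
    cudaMalloc((void**)&devPtr, bytes);
    cudaMemcpy(devPtr, pinnedHost, bytes, cudaMemcpyHostToDevice);

    /* ... kernel launches, device-to-host copy of results ... */

    /* Release all CUDA allocations before returning control to Matlab */
    cudaFree(devPtr);
    cudaFreeHost(pinnedHost);
}

Note that the extra memcpy into the staging buffer costs time itself, so this would presumably only pay off for large or repeated transfers, or when the pinned buffer is combined with cudaMemcpyAsync to overlap transfers with computation.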
If anyone has been able to perform any memory transfer optimizations in Matlab, I would be extremely grateful.
Thanks beforehand, kind regards,
David Lisin.