is it possible to tell CUDA to allocate GPU accessible memory on a defined address?
May I know the purpose?
Do you want to share a memory buffer from the CPU side?
Yes, I have some memory allocated on the CPU side that is used for receiving data, which is then processed by the GPU. I was wondering whether this could be made more efficient by having the GPU use the exact same memory instead of copying the data over.
Thank you for your reply.
You can find more information about memory management in this document:
For your use case, maybe you can give cudaHostRegister a try.
The function registers an existing host memory range for use by CUDA.
Please note that the memory is shared rather than reallocated: cudaHostRegister pins the range you already own; it does not copy it.
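To illustrate, here is a minimal sketch of that approach. It assumes a hypothetical receive buffer (`hostBuf`) standing in for your real one, and a placeholder kernel (`doubleElements`); on systems with unified virtual addressing the device pointer returned by cudaHostGetDevicePointer aliases the same physical memory, so no cudaMemcpy is needed:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Placeholder kernel standing in for your real processing step.
__global__ void doubleElements(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;

    // Existing host allocation, e.g. the buffer your receive path fills.
    float *hostBuf = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) hostBuf[i] = 1.0f;

    // Pin the existing range so the GPU can access it directly.
    // No new allocation, no copy: the same memory is shared with CUDA.
    cudaHostRegister(hostBuf, n * sizeof(float), cudaHostRegisterMapped);

    // Obtain a device pointer that aliases the registered host memory.
    float *devPtr = nullptr;
    cudaHostGetDevicePointer((void **)&devPtr, hostBuf, 0);

    // Process in place; the GPU reads and writes the host buffer directly.
    doubleElements<<<(n + 255) / 256, 256>>>(devPtr, n);
    cudaDeviceSynchronize();

    printf("hostBuf[0] = %f\n", hostBuf[0]);

    cudaHostUnregister(hostBuf);
    free(hostBuf);
    return 0;
}
```

Keep in mind that zero-copy access like this goes over PCIe on each touch, so it tends to win only when the data is read once or the buffer is small; for data reused many times on the GPU, registering the buffer and doing an async copy from pinned memory is often faster.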