Host Memory mapping to GPU

Hi,

I am new to CUDA and was learning about mapping host memory to the GPU using cudaHostAlloc().
Please correct me if I'm wrong, but my understanding of cudaHostAlloc() is that it allocates memory on the host and then maps it into the GPU's address space,
so no actual memory is allocated on the GPU.

So I was wondering: what if I instead allocate the memory on the host with plain C (e.g. malloc()) and then follow the same procedure, calling cudaHostGetDevicePointer()
to get a device pointer mapped to that host memory? Will that work? I tried it, but I received an 'invalid argument' error.
Am I doing something wrong, and is there any other way to map host memory to the GPU except cudaHostAlloc()?

Hoping for a reply soon.
Thanks.
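
In case a sketch helps, this is roughly the cudaHostAlloc() path as I understand it (error checking omitted; requires a device with canMapHostMemory). Note that plain malloc()'d memory cannot be passed to cudaHostGetDevicePointer() directly, which likely explains the 'invalid argument' error; on CUDA 4.0+ you can instead pin and map an existing allocation with cudaHostRegister(ptr, size, cudaHostRegisterMapped).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void inc(int *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 1;
}

int main() {
    const int n = 256;

    // Must be set before any other CUDA call to allow mapped pinned memory.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    int *h = nullptr;
    // Allocates page-locked host memory that is mapped into the device
    // address space; no separate allocation is made in GPU memory.
    cudaHostAlloc(&h, n * sizeof(int), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h[i] = i;

    int *d = nullptr;
    // Retrieve the device-side pointer for the same mapped host buffer.
    cudaHostGetDevicePointer(&d, h, 0);

    inc<<<(n + 127) / 128, 128>>>(d, n);
    cudaDeviceSynchronize();

    // The kernel wrote straight into host memory through the mapping.
    printf("h[10] = %d\n", h[10]);

    cudaFreeHost(h);
    return 0;
}
```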

Consider reading section 3.2.4 of the CUDA C Programming Guide, p. 29.

That might help.

Regards,

MK

MK,
Thanks… I read that part, but now I have a new doubt: I didn't get the logic of 'Write-Combining Memory'. Can anyone please help me in this context?

Google produced something like this (old but bold): Intel Write-Combining Memory Implementation Guidelines

This may help too: Andy Glew’s comp-arch.net wiki
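
For context, a minimal sketch of how the flag is used (my own example, not from the guide): write-combined pinned memory bypasses the CPU caches, so host-to-device transfers over PCIe can be faster, but CPU reads from such a buffer are very slow. It only makes sense for data the host writes and the GPU reads.

```cuda
#include <cstring>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;

    float *src = nullptr;
    // Write-combined pinned memory: CPU writes go through write-combining
    // buffers instead of the cache hierarchy, which can speed up
    // host->device copies, but CPU *reads* of this buffer are uncached
    // and therefore very slow.
    cudaHostAlloc(&src, bytes, cudaHostAllocWriteCombined);

    memset(src, 0, bytes);  // writing from the CPU is the intended use

    float *d_dst = nullptr;
    cudaMalloc(&d_dst, bytes);
    cudaMemcpy(d_dst, src, bytes, cudaMemcpyHostToDevice);

    // Avoid reading src back on the CPU in a hot loop; that is exactly
    // where write-combined memory performs badly.

    cudaFree(d_dst);
    cudaFreeHost(src);
    return 0;
}
```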

Bless You,

MK