Hi,
One simple doubt is…
void hostFun()
{
float* d_A;
CUDA_SAFE_CALL(cudaMalloc((void**) &d_A, mem_size_A));
}
My hostFun() is a host function, and d_A is a float pointer declared inside it.
I'm trying to allocate device memory for d_A using CUDA_SAFE_CALL(cudaMalloc((void**) &d_A, mem_size_A)).
Is the memory allocated for d_A device memory or host memory?
If it is not device memory, then how can I allocate device memory for a pointer (declared in a host function) from within that host function?
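For reference, the usual host-side pattern looks like this (a minimal sketch; the buffer size, h_A, and N here are made up for illustration):

```cpp
#include <cuda_runtime.h>

void hostFun()
{
    const int    N          = 1024;               // assumed element count
    const size_t mem_size_A = N * sizeof(float);

    float* d_A = NULL;  // host variable that will hold a DEVICE address
    cudaMalloc((void**)&d_A, mem_size_A);         // allocates GPU global memory

    // d_A itself lives on the host, but the address it stores is only
    // valid on the device; dereferencing d_A in host code is an error.

    float h_A[1024];                              // ordinary host buffer
    cudaMemcpy(d_A, h_A, mem_size_A, cudaMemcpyHostToDevice);

    cudaFree(d_A);                                // release the device memory
}
```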
Your code should allocate on a CUDA-enabled device, if you have one, and I think it will allocate on the default device (device 0). You can select a different device with cudaSetDevice(deviceID).
What system do you have?
Have you run the deviceQuery sample in the SDK to find out what devices are in your system?
Yes, I'm using NVIDIA Quadro NVS 290 & G8600 GPU cards,
and I'm able to run all the CUDA SDK sample examples.
BTW, I'm not declaring the pointer float* d_A as a device pointer.
I mean: __device__ float* d_A
So how can it allocate device memory?
darot
March 26, 2009, 4:59am
If you did not compile it in emulation mode,
then cudaMalloc will allocate memory in GPU global memory.
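A quick way to confirm the allocation succeeded on the device is to check cudaMalloc's return code, which is roughly what the SDK's CUDA_SAFE_CALL macro does (a sketch; mem_size_A is assumed from the original post):

```cpp
#include <cuda_runtime.h>
#include <stdio.h>

float* d_A = NULL;
cudaError_t err = cudaMalloc((void**)&d_A, mem_size_A);
if (err != cudaSuccess)
    fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
```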
cudaMalloc allocates device memory, and puts the address into a host variable (in your case d_A).
If you try to dereference the variable from host code (e.g. d_A[100]), you will get an error. The memory is allocated on the device.
You may pass the pointer when invoking a kernel and your kernel may use the device memory.
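For example (a sketch, with a made-up kernel name and sizes), the device pointer is passed to the kernel by value and dereferenced only in device code:

```cpp
// Kernel: each thread scales one element of the device array.
__global__ void scale(float* a, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] *= s;   // dereferencing d_A is valid here, in device code
}

// Host side: pass the device pointer when launching the kernel, e.g.
//   scale<<<(n + 255) / 256, 256>>>(d_A, 2.0f, n);
```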