I have one question about CUDA. Before I use the “cudaMemcpy” function I have to allocate memory on the “Host”. Normally I would use “malloc” or “new” to allocate memory on the “Host”, but it seems I can’t use these keywords. Instead I apparently have to use memory on the “stack”, and only then can I use CUDA’s copy function (cudaMemcpy). So my question is: why can’t I use “new” or “malloc” to dynamically allocate memory, e.g. for arrays?
1. create/allocate data on the HOST (e.g. set up an array of floats)
2. cudaMemcpy from HOST to DEVICE
3. do GPU computations
4. cudaMemcpy from DEVICE to HOST
If you think CUDA forces you to allocate host memory with its own special commands, that is not true. You can obtain the data however you like (hard-coded, user input, text files, etc.) and allocate the host buffer with plain “malloc” or “new” as usual; only the device buffer needs the CUDA API.
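The steps above can be sketched like this. This is a minimal example (error checking omitted, and the kernel is just an illustration): the host array comes from ordinary malloc, and only the device buffer goes through cudaMalloc/cudaMemcpy.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel: double each element in place.
__global__ void doubleElements(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // 1. Host memory allocated with plain malloc
    //    (new float[n] would work just as well).
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    // Device memory is the only part that needs the CUDA API.
    float *d;
    cudaMalloc(&d, bytes);

    // 2. HOST -> DEVICE
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // 3. GPU computation
    doubleElements<<<(n + 255) / 256, 256>>>(d, n);

    // 4. DEVICE -> HOST
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

    printf("h[10] = %f\n", h[10]);  // 10.0 doubled -> 20.0

    cudaFree(d);
    free(h);
    return 0;
}
```

For best transfer performance you can later switch the host allocation to pinned memory via cudaMallocHost, but ordinary malloc/new works fine with cudaMemcpy.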