Dynamic Global Memory Allocation - faulty documentation? malloc within a kernel

Hello,

In the “CUDA C Programming Guide”, chapter B.15 (Dynamic Global Memory Allocation) states that it is possible to dynamically allocate device memory.

However, when I try to use malloc from within a kernel I get the following error:

error : calling a host function from a device/global function is not allowed

Is it possible to use malloc from within a kernel function, or is chapter B.15 simply wrong?

EDIT:

I have figured out what the problem is. On devices with compute capability < 2.0, compilation fails if you call malloc in a __global__ or __device__ function. On compute capability >= 2.0, however, it works as stated in the documentation.

As time goes by I realize that purchasing a C1060 a month ago was a complete waste of money :(

many thanks

Mirko

On Fermi GPUs with the 3.2 toolkit you can. Just include stdlib.h and compile for the sm_20 architecture and you should be good to go.
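For anyone landing here later, a minimal sketch of what that looks like (assuming a Fermi-class card and the CUDA 3.2 toolkit; the kernel name and sizes are just examples):

```cuda
#include <cstdio>
#include <cstdlib>

// Each thread allocates a small buffer from the device heap, uses it,
// and frees it. Device-side malloc/free require compute capability >= 2.0.
__global__ void mallocTest()
{
    int *data = (int *)malloc(4 * sizeof(int));
    if (data == NULL)
        return;  // allocation can fail if the device heap is exhausted

    for (int i = 0; i < 4; ++i)
        data[i] = threadIdx.x + i;

    printf("thread %d: data[0] = %d\n", threadIdx.x, data[0]);
    free(data);
}

int main()
{
    // Optionally enlarge the device heap before any kernel launch
    // (the default is 8 MB). In the 3.x toolkits this call was named
    // cudaThreadSetLimit; it was later renamed cudaDeviceSetLimit.
    cudaThreadSetLimit(cudaLimitMallocHeapSize, 16 * 1024 * 1024);

    mallocTest<<<1, 4>>>();
    cudaThreadSynchronize();
    return 0;
}
```

Compile for the Fermi architecture, e.g. `nvcc -arch=sm_20 malloc_test.cu`. Compiling the same code with `-arch=sm_13` (as on a C1060) reproduces the "calling a host function from a device/global function" error above.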

thanks avidday,

I just realized that.

I have a C1060 and a GTX 470; I tried it on the Fermi card and it works fine.

The documentation should be changed to clearly state which architectures support this feature.

thanks

Mirko

Thanks for pointing that out, I will bring it to the attention of the documentation folks.

thanks njuffa