cudaMalloc in __device__ code

Is there no way to call cudaMalloc inside a __device__ function?
From what I understand, cudaMalloc (being a host function) can naturally only be called from host code.

I want to allocate memory “on the fly”.
The data structure for which I want to cudaMalloc is designed to be device only.


There are no device-side memory management functions.
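So all allocation has to happen on the host, and the resulting device pointer is passed into the kernel. A minimal sketch of that pattern (the Node struct and sizes here are only illustrative):

```cuda
#include <cuda_runtime.h>

struct Node {
    int   value;
    Node *next;
};

__global__ void useNodes(Node *nodes, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        nodes[i].value = i;   // device code uses the memory, but never allocates it
}

int main()
{
    const int n = 256;
    Node *d_nodes = 0;
    cudaMalloc((void **)&d_nodes, n * sizeof(Node));  // host-side allocation only
    useNodes<<<1, n>>>(d_nodes, n);
    cudaFree(d_nodes);
    return 0;
}
```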

I was afraid of that.

That means one can’t even create linked lists on the GPU (!)
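Not node-by-node with a true device-side malloc, but a linked list can still be built on the GPU by sub-allocating from a pool that was cudaMalloc’d up front on the host: a device function grabs slots from the pool with atomicAdd and the kernel wires up the next pointers itself. A sketch under those assumptions (Node, poolAlloc, and the list shape are all made up for illustration):

```cuda
#include <cuda_runtime.h>

struct Node {
    int   value;
    Node *next;
};

// "Allocate" a node by atomically bumping a cursor into a
// host-allocated pool; returns NULL when the pool is exhausted.
__device__ Node *poolAlloc(Node *pool, int *cursor, int capacity)
{
    int slot = atomicAdd(cursor, 1);
    return (slot < capacity) ? &pool[slot] : NULL;
}

__global__ void buildLists(Node *pool, int *cursor, int capacity, Node **heads)
{
    // Each thread builds its own private list, so no locking is
    // needed when wiring the next pointers.
    Node *head = NULL;
    for (int k = 0; k < 4; ++k) {
        Node *node = poolAlloc(pool, cursor, capacity);
        if (!node) break;          // out of pool space
        node->value = k;
        node->next  = head;        // push onto this thread's list
        head = node;
    }
    heads[blockIdx.x * blockDim.x + threadIdx.x] = head;
}

int main()
{
    const int threads = 128, capacity = threads * 4;
    Node *d_pool;  int *d_cursor;  Node **d_heads;
    cudaMalloc((void **)&d_pool,   capacity * sizeof(Node));
    cudaMalloc((void **)&d_cursor, sizeof(int));
    cudaMalloc((void **)&d_heads,  threads * sizeof(Node *));
    cudaMemset(d_cursor, 0, sizeof(int));
    buildLists<<<1, threads>>>(d_pool, d_cursor, capacity, d_heads);
    cudaFree(d_pool);  cudaFree(d_cursor);  cudaFree(d_heads);
    return 0;
}
```

The cost of this workaround is that the pool capacity has to be fixed before the kernel launches, and freed nodes can’t be returned individually without writing your own free-list on top.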