device memory reallocation

Hi all,

I would like to know if there is a CUDA function equivalent to the C realloc() function.

In my code I make several sequential calls to a kernel, and in each call I send it a vector. The size of the vector varies between calls. I would like to be able to increase the size of the vector while keeping its existing data, just as realloc() does. I cannot simply allocate a vector of the largest possible size, because it does not fit in device memory.

Thanks in advance,

No, there is none :(

You’ll have to use the maximum memory you can allocate and break the data into chunks if it doesn’t fit into device memory (see the sketch below this reply).

eyal
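
For reference: while there is no cudaRealloc() in the runtime API, the same effect can be emulated by hand with cudaMalloc() + cudaMemcpy() + cudaFree(), provided the old and the new buffer both fit in device memory at the same time during the copy. A minimal sketch (deviceRealloc is just an illustrative helper name, not part of the CUDA API):

```cpp
#include <cuda_runtime.h>
#include <stddef.h>

// Illustrative helper: grow a device buffer from oldBytes to newBytes,
// preserving the first oldBytes of data, much like realloc() on the host.
// Note: both buffers must be resident simultaneously during the copy.
cudaError_t deviceRealloc(void **devPtr, size_t oldBytes, size_t newBytes)
{
    void *newPtr = NULL;
    cudaError_t err = cudaMalloc(&newPtr, newBytes);
    if (err != cudaSuccess)
        return err;                       // not enough device memory

    // Copy the existing contents into the new, larger buffer.
    err = cudaMemcpy(newPtr, *devPtr, oldBytes, cudaMemcpyDeviceToDevice);
    if (err != cudaSuccess) {
        cudaFree(newPtr);
        return err;
    }

    cudaFree(*devPtr);                    // release the old buffer
    *devPtr = newPtr;
    return cudaSuccess;
}
```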

No luck. Thanks for your reply.

In case it is of help to you, you can copy part of a large host array into a smaller device array:

http://forums.nvidia.com/index.php?showtop…rt=#entry998208
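
For the archive, a minimal sketch of that chunking pattern: keep a fixed-size device buffer, copy one slice of the big host array at a time, and run the kernel per slice. The kernel processChunk and the sizes here are placeholders, not taken from the linked post:

```cpp
#include <cuda_runtime.h>
#include <stdlib.h>

// Placeholder kernel: scales each element of the current chunk in place.
__global__ void processChunk(float *d_data, size_t n)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= 2.0f;
}

int main(void)
{
    const size_t totalElems = 10 * 1000 * 1000;   // large host array
    const size_t chunkElems = 1 * 1000 * 1000;    // what fits on the device

    float *h_data = (float *)malloc(totalElems * sizeof(float));
    for (size_t i = 0; i < totalElems; ++i)
        h_data[i] = (float)i;

    float *d_chunk = NULL;
    cudaMalloc(&d_chunk, chunkElems * sizeof(float));

    for (size_t offset = 0; offset < totalElems; offset += chunkElems) {
        size_t n = totalElems - offset < chunkElems ? totalElems - offset
                                                    : chunkElems;

        // Copy only the current slice of the big host array.
        cudaMemcpy(d_chunk, h_data + offset, n * sizeof(float),
                   cudaMemcpyHostToDevice);

        processChunk<<<(unsigned int)((n + 255) / 256), 256>>>(d_chunk, n);

        // Copy the processed slice back into the same place on the host.
        cudaMemcpy(h_data + offset, d_chunk, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
    }

    cudaFree(d_chunk);
    free(h_data);
    return 0;
}
```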