As far as I know, it is not possible to resize an existing device allocation in CUDA (there is no `cudaRealloc`). What is the preferred way of performing dynamic memory allocation?
If I am running quite close to the capacity of my device, I'd ideally like to avoid the round trip of copying the array's data from device to host, freeing the device memory, allocating a new, larger device buffer, and copying the data back. The alternative would be to perform the copy entirely on the device, straight into the larger allocation, but I can't see any way to do that if the "new" array won't fit alongside the "old" array in device memory.
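For reference, the host round-trip fallback I describe above looks roughly like this (a minimal sketch; `grow_via_host` is just an illustrative name and the error handling is bare-bones):

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Abort on any CUDA runtime error, printing its description.
static void check(cudaError_t e) {
    if (e != cudaSuccess) {
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(e));
        exit(EXIT_FAILURE);
    }
}

// Grow *d_buf from old_n to new_n floats via a host round trip:
// stage the data on the host, free the old device buffer (so the
// larger one has room), allocate the new buffer, and copy back.
void grow_via_host(float **d_buf, size_t old_n, size_t new_n) {
    float *h_tmp = (float *)malloc(old_n * sizeof(float));
    check(cudaMemcpy(h_tmp, *d_buf, old_n * sizeof(float),
                     cudaMemcpyDeviceToHost));
    check(cudaFree(*d_buf));  // release before allocating the larger buffer
    check(cudaMalloc((void **)d_buf, new_n * sizeof(float)));
    check(cudaMemcpy(*d_buf, h_tmp, old_n * sizeof(float),
                     cudaMemcpyHostToDevice));
    free(h_tmp);
}
```

This is exactly the extra traffic I'd like to avoid, since the data crosses the PCIe bus twice.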
Any suggestions for ways to cope with this?