I need some advice regarding constant memory management in the CUDA architecture.
Throughout the CUDA documentation, the programming guide, and the “CUDA by Example” book, all I seem to find regarding constant memory is how to assign/copy into a __constant__ declared array using the cudaMemcpyToSymbol() function. But there’s never any mention of how to modify or “free” these allocations. (Unlike texture memory, which can be unbound.)
I’m working on a problem where I have to update the values of my constant memory array after each kernel invocation. While searching for answers, I read that it wasn’t possible to modify constant memory once it had been assigned, but I recently found a post on these forums which shows it’s actually possible to do what I need:
My guess is that, by calling cudaMemcpyToSymbol(), I can modify these values before each call to my kernel. Is this correct?
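To make the question concrete, here is a minimal sketch of what I have in mind (the table size and all names are my own placeholders, not anything from the CUDA headers):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical 256-entry constant table, refreshed between launches.
__constant__ int d_table[256];

__global__ void useTable(int *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 256)
        out[i] = d_table[i] * 2;  // all threads in a warp reading the same
                                  // entry would get the constant-cache broadcast
}

int main()
{
    int h_table[256], *d_out;
    cudaMalloc(&d_out, 256 * sizeof(int));

    for (int pass = 0; pass < 3; ++pass) {
        // Overwrite the __constant__ array from the host before each launch.
        for (int i = 0; i < 256; ++i) h_table[i] = i + pass;
        cudaMemcpyToSymbol(d_table, h_table, sizeof(h_table));

        useTable<<<1, 256>>>(d_out);
        cudaDeviceSynchronize();
    }

    int h_out[256];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("h_out[0] = %d\n", h_out[0]);  // last pass wrote 0 + 2, doubled to 4
    cudaFree(d_out);
    return 0;
}
```

Is this the correct usage pattern, or does the copy race with an in-flight kernel unless I synchronize first?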
What if I need a certain amount of constant memory, say 64 KB for a table of integers at one point, and later on I need another 64 KB table of floats and no longer need the first table of integers. Is there a way to “free” the first table in order to allocate the second?
As far as I understand, constant memory allocation is done at compile time, which means that I can’t allocate different amounts or sets throughout my program.
Is there a way around this?
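One workaround I’ve been considering, just a sketch under my own assumptions (the buffer name and sizes are placeholders): declare a single untyped __constant__ region once, then overwrite it between phases and reinterpret it as whichever table is currently live, instead of truly freeing anything:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One reusable __constant__ region; each phase copies its own table into it.
// 32 KB here so it comfortably fits the 64 KB constant-memory limit.
__constant__ unsigned char d_buf[32 * 1024];

__global__ void sumInts(int *result, int n)
{
    const int *table = reinterpret_cast<const int *>(d_buf);
    if (threadIdx.x == 0) {
        int s = 0;
        for (int i = 0; i < n; ++i) s += table[i];
        *result = s;
    }
}

__global__ void sumFloats(float *result, int n)
{
    const float *table = reinterpret_cast<const float *>(d_buf);
    if (threadIdx.x == 0) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += table[i];
        *result = s;
    }
}

int main()
{
    const int n = 4;
    int   hi[n] = {1, 2, 3, 4};
    float hf[n] = {0.5f, 0.5f, 1.0f, 2.0f};

    int *di; float *df;
    cudaMalloc(&di, sizeof(int));
    cudaMalloc(&df, sizeof(float));

    // Phase 1: the region holds the integer table.
    cudaMemcpyToSymbol(d_buf, hi, sizeof(hi));
    sumInts<<<1, 1>>>(di, n);

    // Phase 2: "free" the integers by overwriting the same region with floats.
    cudaMemcpyToSymbol(d_buf, hf, sizeof(hf));
    sumFloats<<<1, 1>>>(df, n);

    int ri; float rf;
    cudaMemcpy(&ri, di, sizeof(ri), cudaMemcpyDeviceToHost);
    cudaMemcpy(&rf, df, sizeof(rf), cudaMemcpyDeviceToHost);
    printf("int sum = %d, float sum = %g\n", ri, rf);  // 10 and 4
    cudaFree(di); cudaFree(df);
    return 0;
}
```

Is this kind of manual reuse of one statically declared region the intended way to handle it, or is there a cleaner mechanism I’m missing?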
I was thinking of using texture memory to allow dynamic allocation of my tables. But I was really after the broadcasting benefits of constant memory, not the spatial-locality caching benefits of texture memory.
Thanks in advance,