Consistent Memory Allocation

Is there a way to allocate the same block of memory on the GPU on every run of a CUDA application?

Thanks.

I am not aware of any memory allocation facilities, CPU or GPU, that guarantee identical allocations on multiple runs of an application. In light of this, what are you trying to accomplish? The following comments apply to both CPUs and GPUs.

In various contexts in which I have worked, consistent allocation across runs was achieved by allocating all memory needed by the application in one big block at startup, then using a custom sub-allocator to parcel out storage from that big block. Note that there is still some residual risk that the initial block might not land at a consistent location on each run. Furthermore, typical memory allocators such as malloc() or cudaMalloc() operate on virtual memory, and there is usually no guarantee that each run uses identical virtual-to-physical mappings.
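To make the idea concrete, here is a minimal sketch of such a sub-allocator: one large block obtained from cudaMalloc() at startup, with a simple bump allocator handing out aligned pieces of it. The names (pool_init, pool_alloc, the pool size, the alignment) are illustrative choices for this sketch, not part of any CUDA API.

```
#include <cstdio>
#include <cstddef>
#include <cuda_runtime.h>

static char  *pool_base = nullptr;   // start of the single big device block
static size_t pool_size = 0;         // total bytes in the pool
static size_t pool_next = 0;         // offset of the next free byte

// Allocate the single backing block once at application startup.
cudaError_t pool_init(size_t bytes)
{
    cudaError_t err = cudaMalloc((void**)&pool_base, bytes);
    if (err == cudaSuccess) {
        pool_size = bytes;
        pool_next = 0;
    }
    return err;
}

// Hand out sub-allocations at deterministic offsets within the block.
// As long as the sequence of requests is identical from run to run, the
// offsets relative to the base are identical too, even if the base
// address itself differs between runs.
void* pool_alloc(size_t bytes, size_t align = 256)
{
    size_t off = (pool_next + align - 1) & ~(align - 1);  // round up
    if (off + bytes > pool_size) return nullptr;          // pool exhausted
    pool_next = off + bytes;
    return pool_base + off;
}

int main(void)
{
    if (pool_init(64 * 1024 * 1024) != cudaSuccess) return 1;  // 64 MiB pool
    float *a = (float*)pool_alloc(1024 * sizeof(float));
    float *b = (float*)pool_alloc(2048 * sizeof(float));
    printf("a offset: %zu\n", (size_t)((char*)a - pool_base));
    printf("b offset: %zu\n", (size_t)((char*)b - pool_base));
    cudaFree(pool_base);
    return 0;
}
```

With this scheme, each allocation sits at a deterministic offset from the base of the pool; only the base address itself remains subject to the residual risk mentioned above.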