What does a CUDA context's GPU memory contain? Is it necessary and possible to minimize the CUDA context's GPU memory size?

What does a CUDA context's GPU memory contain? (It doesn't only hold kernel code; it also holds other static-scope device symbols, textures, per-thread scratch space for local memory, the printf FIFO and device heap, constant memory, as well as GPU memory required by the driver and the CUDA runtime itself.)

Is it necessary and possible to minimize the CUDA context's GPU memory size, e.g. by using cudaDeviceSetLimit/cudaDeviceGetLimit?
If it is, how should the limits be set, and what are the disadvantages?


It’s not specified anywhere that I know of.

A conceptual description is given in the programming guide, but it's quite likely that there is no complete description anywhere in the public domain.

There is no direct control given via the CUDA runtime API (or similar) to arbitrarily limit the size of a context.


Could you please give a keyword to search for, or a link to that description? Thanks!

Found this: Device memory for CUDA kernel code: Is it explicitly manageable? - Stack Overflow

So when should cudaDeviceSetLimit be used?

I did some tests, and they show that the CUDA context consumed 108 MB (with no CUDA code at all in the project).
After adding more and more CUDA code, the CUDA context consumed more and more memory, even when the stats were taken right at the start of main(). It seems that the GPU memory consumed by the CUDA context depends on the amount of CUDA code that gets compiled in.
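A measurement like this can be sketched by forcing lazy context creation with cudaFree(0) and then reading cudaMemGetInfo (a minimal sketch; the 108 MB figure above will vary with GPU model, driver, CUDA version, and the amount of compiled device code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaFree(0);  // harmless call that forces context creation / module loading
    size_t freeMem = 0, totalMem = 0;
    cudaMemGetInfo(&freeMem, &totalMem);
    // Caveat: totalMem - freeMem also counts memory held by other processes
    // and the display driver, not just this process's context, so compare
    // against a baseline (e.g. nvidia-smi) for a cleaner number.
    printf("GPU memory in use after context creation: ~%zu MiB of %zu MiB\n",
           (totalMem - freeMem) >> 20, totalMem >> 20);
    return 0;
}
```

Compile with nvcc and run it once with an empty project and once after linking in more kernels to see the context grow.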


When you want to modify one of the seven enumerated limits that the API has control over.

Of course, some of these may have indirect effects on the size of the context. Many things you do in CUDA may have an indirect effect on the size of the context, such as the CUDA version, the amount of memory or the number of GPUs in your system, the amount of kernel code you have, etc.
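As an illustration of that API (a sketch; real code should check every returned cudaError_t), the enumerated limits can be queried with cudaDeviceGetLimit and shrunk with cudaDeviceSetLimit, e.g. reducing the printf FIFO and the device malloc heap:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t v = 0;
    cudaDeviceGetLimit(&v, cudaLimitStackSize);
    printf("per-thread stack:   %zu bytes\n", v);
    cudaDeviceGetLimit(&v, cudaLimitPrintfFifoSize);
    printf("printf FIFO:        %zu bytes\n", v);
    cudaDeviceGetLimit(&v, cudaLimitMallocHeapSize);
    printf("device malloc heap: %zu bytes\n", v);

    // Shrinking these trims the context's reservations, at the cost of
    // device-side printf output being dropped or in-kernel malloc failing
    // if the smaller pools are exhausted. Heap/FIFO sizes must be set
    // before the first kernel launch that uses them.
    cudaDeviceSetLimit(cudaLimitPrintfFifoSize, 64 * 1024);
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 1 * 1024 * 1024);
    return 0;
}
```

Note the trade-off the question asks about: these limits only cover their specific reservations; they give no control over the memory taken by loaded module code or driver bookkeeping.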
