I have two questions:
First, is it normal for RAM consumption to grow after running CUDA programs? My RAM usage always grows after launching kernels, even if I free the memory used by the threads (the problem also occurs when running under cuda-memcheck). When I start the OS, about 400 MB of RAM is in use; after writing and running code and then closing all applications, my current RAM usage is 4.8 GB. Is there any solution?
Second, is the malloc heap size shared by all kernels (including child kernels), or only by the kernel currently executing?
In-kernel malloc is a dynamic allocation. It only allocates memory when the actual code is executed.
I don’t understand the first question. Don’t know if you mean system RAM or GPU RAM. It’s also not clear under what circumstances you are measuring it, or how you are measuring it.
So the heap size covers all dynamic allocation operations, regardless of how many kernels are launched?
About the first question: my system RAM always grows (3-5 MB per program launch) after running CUDA kernels. I don’t know what is happening, but the OS should release this memory.
Yes, the heap size is a device-wide parameter. It applies to all allocations associated with a particular CUDA context, i.e. a process.
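To illustrate: the device-wide heap can be sized with cudaDeviceSetLimit before any kernel that calls malloc is launched, and every kernel in the context then draws from that one heap. A minimal sketch (the 128 MB figure is an arbitrary example, not a recommendation):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void kernelA()
{
    // Each thread allocates from the same device-wide heap.
    int *p = (int *)malloc(4 * sizeof(int));
    if (p != NULL) {   // malloc returns NULL when the heap is exhausted
        p[0] = threadIdx.x;
        free(p);       // return the allocation to the shared heap
    }
}

int main()
{
    // One heap serves every kernel (and child kernel) in this context.
    // Must be set before the first kernel that uses in-kernel malloc.
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128 * 1024 * 1024);

    size_t heap = 0;
    cudaDeviceGetLimit(&heap, cudaLimitMallocHeapSize);
    printf("malloc heap size: %zu bytes\n", heap);

    kernelA<<<1, 32>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

If the combined live allocations from all resident kernels exceed this limit, further in-kernel malloc calls simply return NULL, so checking the returned pointer (as above) is important.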
System RAM should return to normal levels after termination of all user processes (applications) that had CUDA activity. Perhaps you are not terminating the processes correctly. If a code terminates normally, the OS should release all allocations associated with that process. The CUDA driver likewise observes process termination and releases all device allocations associated with the process.
I’ve not witnessed the situation you describe, where each application launch uses additional memory that is not released.
Yes, maybe that’s it; perhaps I am not terminating some process correctly. But if one process is not terminated correctly, would that prevent the OS from releasing the memory of the other processes?