I'm trying to make my CUDA code work while another GPU application (a game, for example) is running. The problem is that cudaMemGetInfo returns a free-memory value of about 7086014464 bytes on a 1070 Ti with 8 GB total. It returns the same value whether no game is running and GPU memory usage in Task Manager is 1 GB, or I run WoT and GPU memory usage spikes to 3-4 GB; it always says about 7 GB is free.
So the question is: how do I get the actual amount of free GPU memory, including memory used by other, non-CUDA applications?
Does your code look like this?
size_t mem_free, mem_total;
cudaMemGetInfo(&mem_free, &mem_total);
printf("Available video memory = %llu bytes\n", (unsigned long long)mem_free);
What happens when you try to allocate a lot of memory when the game is running?
char *huge_array = nullptr;
size_t alloc_size = 7000000000ULL;
if (cudaMallocManaged(&huge_array, alloc_size) == cudaSuccess) {
    if (cudaMemset(huge_array, 0, alloc_size) == cudaSuccess)
        printf("We allocated and initialized a huge amount of memory...\n");
    else
        printf("We allocated the memory but failed to initialize it. Debug further...\n");
    cudaFree(huge_array);
} else {
    printf("Unable to allocate memory. Run cudaMemGetInfo in different parts of the program...\n");
}
If cudaMemGetInfo reports more free memory than you suspect there should be, try to actually allocate and initialize that much and see whether it succeeds.
Also try placing cudaMemGetInfo calls in different parts of the program and inspect the free-memory value each time.
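If the goal is specifically to see device-wide usage including non-CUDA applications, one option worth trying is NVML (the library nvidia-smi is built on), which ships with the driver and exposes nvmlDeviceGetMemoryInfo. A minimal sketch, assuming device index 0 and that you link against the NVML library (nvml.lib on Windows, -lnvidia-ml on Linux):

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    // Initialize NVML before any other NVML call
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    nvmlDevice_t dev;
    rc = nvmlDeviceGetHandleByIndex(0, &dev);  // device 0; adjust for multi-GPU systems
    if (rc == NVML_SUCCESS) {
        nvmlMemory_t mem;
        rc = nvmlDeviceGetMemoryInfo(dev, &mem);
        if (rc == NVML_SUCCESS)
            printf("total = %llu, free = %llu, used = %llu bytes\n",
                   (unsigned long long)mem.total,
                   (unsigned long long)mem.free,
                   (unsigned long long)mem.used);
    }

    nvmlShutdown();
    return 0;
}

The used/free figures here come from the driver's view of the whole device rather than from your CUDA context, so they should move when another application (such as a game) grabs memory. Compare its output against cudaMemGetInfo while the game is running to see where the discrepancy comes from.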