Need a little tool to adjust the VRAM size

Hello!

I am testing a game right now, and need to report how playable it is under different hardware settings.

For this, I also need a tool that lets me change or fill the VRAM (reducing the effective VRAM size from 2GB to 1GB, 512MB, etc.).

I read somewhere that this can be programmed easily with CUDA, but I am not a programmer. :(
Is there already a tool that I can use? Or can someone help me with this?

Regards

You can try the program below; it uses cudaMalloc to allocate some memory, which should then no longer be available to your game.

The program below accepts a single argument: the number of megabytes you would like to allocate on the GPU. If you specify no amount, 256MB is used as a default. Save it in a text file called gpumem.cu.

#include <stdio.h>
#include <cuda_runtime.h>

int main(int argc, char *argv[])
{
     unsigned long long mem_size = 0;
     void *gpu_mem = NULL;
     cudaError_t err;

     // get amount of memory to allocate in MB, default to 256
     if(argc < 2 || sscanf(argv[1], " %llu", &mem_size) != 1) {
        mem_size = 256;
     }
     mem_size *= 1024*1024; // convert MB to bytes

     // allocate GPU memory
     err = cudaMalloc(&gpu_mem, mem_size);
     if(err != cudaSuccess) {
        printf("Error, could not allocate %llu bytes: %s\n", mem_size, cudaGetErrorString(err));
        return 1;
     }

     // wait for a key press
     printf("Press return to exit...\n");
     getchar();

     // free GPU memory and exit
     cudaFree(gpu_mem);
     return 0;
}
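
If you also want the program itself to report how much device memory remains, you could add a few lines right after the cudaMalloc succeeds, using cudaMemGetInfo (a standard CUDA runtime call). A minimal sketch:

     size_t free_b = 0, total_b = 0;
     // ask the CUDA runtime how much device memory is still free
     if (cudaMemGetInfo(&free_b, &total_b) == cudaSuccess) {
         printf("GPU memory: %zu MB free of %zu MB total\n",
                free_b / (1024 * 1024), total_b / (1024 * 1024));
     }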

You will have to compile it yourself using nvcc (available for Windows, Mac, and Linux as part of the CUDA Toolkit from the Nvidia website). Use the following command line to compile the program:

nvcc gpumem.cu -o gpumem
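
If nvcc complains that it cannot generate code for your GPU (this can happen on newer cards when the toolkit's default architecture is too old), you can target the installed GPU explicitly; recent CUDA toolkits support -arch=native for this (assuming your toolkit version has it):

nvcc -arch=native gpumem.cu -o gpumem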

Execute the program from the command line (this example allocates 1000MB):

./gpumem 1000

Use the nvidia-smi tool (it comes with the Nvidia driver) to verify how much GPU memory is available.
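
For example, the following query (standard nvidia-smi options) prints the total, used, and free memory in one line:

nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv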

Thank you so much!

This was very helpful. I have compiled the file and checked the VRAM size with GPU-Z. It seems to work.

You are my Hero! :)

One last question… Is it possible to compile the file as an x86 (32-bit) .exe?
This works fine on Win7 (64-bit), but I am getting an error when I start “gpumem” on 32-bit Windows (Win Vista, Win XP, etc.).

Yes, pass the -m32 flag to nvcc to build a 32-bit executable:

nvcc -m32 gpumem.cu -o gpumem

Thank you for this example; however, it doesn’t work.
Whenever a game needs more VRAM than is available, the memory used by gpumem gets cleared, and no error is returned. I’ve checked cudaGetLastError before freeing the memory and afterwards, but it reports no errors, as if nothing happened.

Can the clearing of memory be prevented? Does it get cached somewhere else or what’s going on here?

Windows WDDM can evict CUDA memory pages out of the (WDDM) GPU and replace them with memory pages desired/required by DirectX. Since there is no CUDA kernel that bangs on this memory allocation, such demand paging can occur relatively unimpeded. I’m not suggesting I can sort this out for you, merely responding to “what’s going on here?”

Thanks. With that information I’ve improved the program so that the memory pages do not get evicted. If you are not using Windows, you’ll have to replace the windows.h include and the Sleep function with something else (see the sketch after the code).

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>
#include <conio.h>
#include <Windows.h>

cudaError_t runGPUMem(unsigned long long mem_size);

__global__ void emptyKernel()
{
	//Do nothing
}

int main(int argc, char *argv[])
{
	unsigned long long mem_size = 0;
	cudaError_t cudaStatus;

	// get amount of memory to allocate in MB, default to 256
	if(argc < 2 || sscanf(argv[1], " %llu", &mem_size) != 1) {
		mem_size = 256;
	}
	mem_size *= 1024*1024; // convert MB to bytes

	//Allocate memory and keep an active kernel
	cudaStatus = runGPUMem(mem_size);
	if (cudaStatus != cudaSuccess) {
		printf("Press return to exit...\n");
		getchar();
		return 1;
	}

	return 0;
}

cudaError_t runGPUMem(unsigned long long mem_size) {
	// allocate GPU memory
	void *gpu_mem = NULL;
	cudaError_t cudaStatus = cudaMalloc(&gpu_mem, mem_size);
	if(cudaStatus != cudaSuccess) {
		printf("Error, could not allocate %llu bytes.\n", mem_size);
		goto Error;
	} else {
		printf("Allocated %llu bytes, (%llu MB).\n", mem_size, mem_size / (1024 * 1024));
	}

	//Keep an active kernel until a key is pressed
	printf("Press any key to exit...\n");
	while(!_kbhit()) {
		// Launch a kernel on the GPU with one thread.
		emptyKernel<<<1, 1>>>();

		// Check for any errors launching the kernel
		cudaStatus = cudaGetLastError();
		if (cudaStatus != cudaSuccess) {
			fprintf(stderr, "Kernel launch failed: %s\n", cudaGetErrorString(cudaStatus));
			goto Error;
		}

		// cudaDeviceSynchronize waits for the kernel to finish
		cudaStatus = cudaDeviceSynchronize();
		if (cudaStatus != cudaSuccess) {
			fprintf(stderr, "cudaDeviceSynchronize returned error code %d after launching kernel!\n", cudaStatus);
			goto Error;
		}

		Sleep(1);
	}

Error:
	// free GPU memory and exit
	cudaFree(gpu_mem);
	return cudaStatus;
}
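
For reference, here is a minimal sketch of the non-Windows substitutions mentioned above, assuming a POSIX system. stdin_ready is a hypothetical helper standing in for kbhit (it waits for a full line on stdin rather than a single keypress):

#include <unistd.h>      // usleep, STDIN_FILENO
#include <sys/select.h>  // select

// Stand-in for conio.h's kbhit(): returns nonzero once input is
// waiting on stdin (press return instead of "any key").
static int stdin_ready(void)
{
	fd_set fds;
	struct timeval tv = {0, 0}; // zero timeout: poll without blocking
	FD_ZERO(&fds);
	FD_SET(STDIN_FILENO, &fds);
	return select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0;
}

In the loop, replace kbhit() with stdin_ready() and Sleep(1) with usleep(1000); both pause for roughly one millisecond.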

Hey you guys :)

Please don’t judge me on the following, but I accidentally found another way to set a certain value of VRAM your video card is supposed to use.
I was trying to get more fps in my games, and in order to get some sort of 1GB buffer I set my VRAM usage to 3GB instead of 4 (GeForce GTX 780M).
Don’t ask how I thought that would do any good! - I don’t know -.-

The bad thing is that I totally forgot about this idiotic idea of mine, and also where I set this value… I just recently saw on my MSI Afterburner monitor that my maximum VRAM was supposedly 3072MB, and I was going to slap myself when I remembered that I had changed something and didn’t put it back.

Do you guys have any idea how to unlock my total vram again? (BIOS, overclocking tool, system settings)

PS: I am totally aware of the fact that you should NEVER play around with programs and system settings you are not familiar with. For now I am in this situation, and I am very thankful for any helpful ideas! :)

Greetings Jonas

I was wondering if this works for the 40-series cards; it says it’s allocating xxxx bytes, but GPU-Z still says 12GB.

I just want to do a bit of testing.