Memory on DRAM


I recently ran some code on the GPU in which I allocated and operated on a huge array (approx. 2 GB), but my GPU DRAM is only 1 GB in size. If I do a

cudaMalloc((void**)&d_arr, size);

doesn’t this mean the memory is allocated on the GPU? How exactly does this happen?

Thanks in Advance
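For reference, here is a minimal sketch (the 2 GB request size and the `d_arr` name are just placeholders, not from the original post) that checks whether an oversized cudaMalloc actually succeeds. On a card with only 1 GB of device memory, the call should fail with an error such as cudaErrorMemoryAllocation rather than silently succeed:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Request roughly 2 GB -- more than a 1 GB card can provide.
    size_t bytes = 2ULL * 1024 * 1024 * 1024;
    float *d_arr = NULL;

    cudaError_t err = cudaMalloc((void**)&d_arr, bytes);
    if (err != cudaSuccess) {
        // Expected on a 1 GB device: the allocation fails up front.
        printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    printf("Allocation of %zu bytes succeeded\n", bytes);
    cudaFree(d_arr);
    return 0;
}
```

If this prints a failure message but your real program reports cudaSuccess, the real program is most likely allocating less than you think.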

Make sure you error check after everything you do. Something like this:

kernel_call<<<2, 64>>>();

cudaError_t err = cudaGetLastError();
if (err != cudaSuccess) { printf("launch error: %s\n", cudaGetErrorString(err)); }

err = cudaThreadSynchronize();
if (err != cudaSuccess) { printf("kernel error: %s\n", cudaGetErrorString(err)); }

So, you mean to say I cannot allocate that much? I had checked for errors using cudaGetLastError(); it returns cudaSuccess. I even got the right answers out of the kernel.

Come on dude! I’m tired! It’s late here (PDT)! What’s the problem if you’re getting the right answers??? Maybe you do have 2 GB of RAM, right?!

Obviously, I have no idea what your hardware is, and obviously, if you’re doing everything right and getting the right answer, your original assumption about the 1 GB limit is wrong!

I’m using a GTX 470 with 1 GB of DRAM.

But you say your test is coming out exactly as expected!

I was trying to tell you that you’ve provided absolutely no information in your original post (or since then, for that matter): information such as the simplest possible source code that demonstrates your problem/concern.

If you really do have a proper test program, and it returns exactly the correct values, and you still suspect that the hardware or software is buggy, then that is so strange that nobody can help you without knowing the exact details. I don’t know; maybe you’re running a CUDA emulator on your CPU? I don’t know where to start helping you.

If you installed the SDK, run deviceQuery to find out all the properties of your card. In any case, just use this code to find out the GPU memory available to your program:

size_t free, total;

cudaMemGetInfo(&free, &total);

printf("%zu KB free of total %zu KB at the beginning\n", free/1024, total/1024);

Put these lines before and after the allocation of your array; that way you can see how much you actually allocated.
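Putting that together, here is a minimal sketch (the 512 MB allocation size is just an example value, not from the thread) that prints the free device memory before and after an allocation, so you can see exactly how much cudaMalloc took:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Print free/total device memory with a label, using cudaMemGetInfo.
static void printFreeMem(const char *when) {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    printf("%zu KB free of %zu KB total (%s)\n",
           free_b / 1024, total_b / 1024, when);
}

int main() {
    printFreeMem("before allocation");

    float *d_arr = NULL;
    size_t bytes = 512ULL * 1024 * 1024;  // example: 512 MB
    if (cudaMalloc((void**)&d_arr, bytes) != cudaSuccess) {
        printf("allocation failed\n");
        return 1;
    }

    printFreeMem("after allocation");  // free should drop by roughly 512 MB
    cudaFree(d_arr);
    return 0;
}
```

If the drop in free memory is much smaller than the array you think you allocated, that is a strong hint the allocation was not what you expected.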