Memory allocation for GPU

Hi, I am new to CUDA programming. I want to know what type of memory I need to allocate if the memory is only required on the GPU. Using cudaMalloc forces me to pass the pointer to this data again and again in every kernel call.
What I want is the ability to write some memory on the GPU and then access it like a local variable in kernel code, without having to send it as a parameter from the CPU code.
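For example, something roughly like this is what I have in mind (just a sketch to illustrate; the array name and size are made up):

// A GPU-resident array that every kernel can see without it being
// passed as a parameter (name and size are placeholders).
__device__ float pixelState[640 * 480];

__global__ void updateState()
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 640 * 480)
        pixelState[i] += 1.0f;   // read/write it like an ordinary variable
}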

Also, if you can recall any ready-made project in the NVIDIA SDK folder that makes use of global memory, please mention the name of that project.

Depends on your needs.

If you need:

read-only data on the GPU → use texture or constant memory

read/write data → use global memory

Texture and constant memory reads are cached, so they benefit from locality in data access. Constant memory is encouraged when you want to make small data available to kernels (say, parameters that you don't want to pass as kernel arguments). Textures, on the other hand, should be used for larger data, like images/volumes.
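For small data, a minimal sketch of the constant memory route might look like this (the names here are made up for illustration):

// Small, read-only parameters in constant memory: reads are cached,
// and the data is visible to kernels without being passed as arguments.
__constant__ float filterWeights[9];

__global__ void applyFilter(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * filterWeights[4];   // toy use of the constant data
}

void setupWeights()
{
    float h_weights[9] = { 0.f, 0.125f, 0.f, 0.125f, 0.5f, 0.125f, 0.f, 0.125f, 0.f };
    // Copy from host memory into the __constant__ symbol, once, before launching kernels.
    cudaMemcpyToSymbol(filterWeights, h_weights, sizeof(h_weights));
}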

The programming guide explains how to use these memory types.

Hope this helps,

Oj

Thanks for the prompt reply.

My use case requires reading and writing data.

When you talk of global memory, is this global memory part of the DRAM? That should be a very costly read. Also, is there any limit to the amount of memory that can be accessed in this fashion?

I am working on background modelling for video surveillance, which requires me to work pixel by pixel, storing some information for every pixel on the GPU; this information needs to be updated for every incoming frame.

Yes, global memory is the device's DRAM, so I/O on it is costly; the limit is simply the amount of memory on the card. You might want to read about coalescing in the programming guide; there are also some techniques for hiding the latency.
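For your per-pixel case, the usual pattern is one thread per pixel, so consecutive threads touch consecutive addresses and the accesses coalesce. A rough sketch (the names and the update formula are made up, just to show the layout):

// Per-pixel state kept resident in global memory: allocated once with
// cudaMalloc and updated in place for every incoming frame. Thread i
// handles pixel i, so neighbouring threads read/write neighbouring
// addresses and the accesses coalesce into a few wide transactions.
__global__ void updateModel(const unsigned char* frame,  // current frame, one byte per pixel
                            float* model,                // persistent per-pixel state
                            int numPixels, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels)
        model[i] = (1.0f - alpha) * model[i] + alpha * frame[i];  // e.g. a running average
}

That way only the new frame has to cross the bus each iteration; the model stays on the card between kernel calls.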

-Oj