How do I initialize an array?

I’m a beginner at CUDA and have a fairly basic question.
I want to define an array in a separate function which is later used in another function. The function defining the array will only be called once; the other function will be called over and over in a loop.
Preferably I would also like a third function which frees the array’s memory when I close my application.
The CUDA functions are called from my application, which is written in C++.
How do I do this? Help!

I think I need some more information. Do you want to create an array inside a kernel? That is possible if the array is in shared memory. If you want an array in global memory, space for it has to be allocated from the host.
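As a sketch, the two options look roughly like this (the kernel name and sizes are made up for illustration):

```cpp
#include <cuda_runtime.h>

// Option 1: a fixed-size array in shared memory, created inside the kernel.
// It exists only for the lifetime of the thread block.
__global__ void kernelWithSharedArray(float *out)
{
    __shared__ float tile[256];
    int i = threadIdx.x;
    tile[i] = (float)i;
    __syncthreads();
    out[blockIdx.x * blockDim.x + i] = tile[i];
}

// Option 2: an array in global memory, allocated from the host.
int main()
{
    float *d_data = nullptr;
    cudaMalloc(&d_data, 256 * sizeof(float)); // host allocates global memory
    kernelWithSharedArray<<<1, 256>>>(d_data);
    cudaFree(d_data);
    return 0;
}
```

The global-memory array outlives any single kernel launch, which is what matters for the original question.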

I think I have a similar problem.
I already have CUDA host functions that receive pointers to arrays as parameters, but I have to copy the data from the parameter to device memory and, at the end, copy it back from device to host…

Isn’t it possible to initialize something in GPU memory in one CUDA host function called from a C app, do operations on it, return to the C application, call another CUDA host function that operates on the saved data, and only then copy the data from device to host and return it?

That way only one read/write would be necessary, whereas as I have it now I have to do one read/write per function call, which is very slow, since the copies take too much time…

The final objective is to do something like asali wants: one Put function that stores a host array in device memory, several operation functions, and one Get function that returns the array from device to host and frees it from GPU memory.

Of course this is possible; you just need to keep track of the data pointer and pass it around to the various functions that call GPU kernels.

int main()
{
    float *d_data;  // device data pointer

    d_data = allocateData();  // does cudaMalloc on d_data

    GPU_processdata1(d_data);
    GPU_processdata2(d_data);
    GPU_processdata3(d_data);

    copyDataToHostAndDoSomethingWithIt(d_data);
    freeData(d_data);  // does cudaFree on d_data

    return 0;
}

Of course, things can be made a little cleaner using C++ classes, where d_data would be a member variable, but good OO design is outside the scope of the question.

Edit: I should add that you can do nearly everything in the host .cc code, including calling cudaMalloc, etc… The only things that need to be compiled as .cu are texture bindings, constant memory copies, and kernel calls.
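A minimal sketch of that split (file names, the kernel, and the launcher function are all made up for illustration):

```cpp
// kernels.cu — only code that needs nvcc lives here: the kernel and its launcher.
__global__ void scaleKernel(float *d_data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_data[i] *= factor;
}

extern "C" void launchScale(float *d_data, int n, float factor)
{
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scaleKernel<<<blocks, threads>>>(d_data, n, factor);
}
```

```cpp
// app.cc — plain C++ compiled with g++; no kernel syntax needed here.
#include <cuda_runtime.h>

extern "C" void launchScale(float *d_data, int n, float factor);

int main()
{
    const int n = 1024;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float)); // legal in host-only code
    launchScale(d_data, n, 2.0f);
    cudaFree(d_data);
    return 0;
}
```

The .cc file only needs the CUDA runtime header and a declaration of the launcher; the kernel launch syntax stays inside the .cu file.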

yes! that’s exactly what i needed to know :) thanks

edit: but to call cudaMalloc from .cc code I’d have to include the CUDA headers in my C headers… I’m having a problem with that: The Official NVIDIA Forums | NVIDIA
Any input is welcome, please.

Make sure you specify CUDA’s include directory in your compiler’s include path. On Linux this means adding “-I /opt/cuda/include” to the g++ command line (assuming /opt is where you installed CUDA…).

To get cudaMalloc and friends, you need to include cuda_runtime.h.

I would like to define the array in function nr. 1, preferably using:
cudaArray* inputArray;
cudaMallocArray(&inputArray, &charTex, n, m);

In function nr. 2 I will copy data into this array from host to device, do some operations, and then do a cudaMemcpy from device to host.

In function nr. 3 I want to call cudaFreeArray(inputArray);

I don’t know how to access the array when I split the code into separate functions. Is it possible to make the array global? Or how do I do it?

Read mister anderson’s post with the C code.
You can have a pointer in your C app to a CUDA device array :) I didn’t know that was possible either…

Then you just have to pass that pointer as an argument to all the CUDA host functions.
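Putting the whole thread together, the Put / operate / Get scheme might be sketched like this (all function names are hypothetical; the operation body is left as a stub):

```cpp
#include <cuda_runtime.h>

// Put: copy a host array into freshly allocated device memory;
// returns the device pointer, which the caller keeps and passes around.
float *putArray(const float *h_data, int n)
{
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);
    return d_data;
}

// Operation: called many times in a loop; the data stays on the
// device between calls, so no host<->device copies happen here.
void operateOnArray(float *d_data, int n)
{
    // launch a kernel on d_data here
}

// Get: copy the result back to the host and free the device memory.
void getArray(float *d_data, float *h_out, int n)
{
    cudaMemcpy(h_out, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}
```

Only putArray and getArray touch host memory; everything in between works on the device pointer, which is the single read/write the earlier post was asking for.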

Excellent! That will solve the problem. Thanks for the help! :)