I’ve been trying to optimize an old CUDA program I made a few months ago. The program has the following structure:
CUDA set-up: all the necessary stuff to configure CUDA in this application.
OpenGL set-up: all the necessary stuff to configure OpenGL in this application.
OpenGL functions: when the user presses a key, the treatment applied to the image changes.
Obviously, an OpenGL-based program works cyclically: the same procedure runs over and over until the user interacts. The procedure consists of three steps:
1. Map the data as an OpenGL resource.
2. Call a device function:
   2.1. Bind the data (at this point it is a texture) to an array.
   2.2. Execute some kernels on this array (which kernels run depends on the key pressed).
   2.3. Unbind the data.
3. Unmap the resource so that OpenGL can render this data.
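To make the cycle concrete, here is a rough sketch of one iteration using the standard CUDA–OpenGL interop calls. All names (`g_resource`, `processKernel`, `runFrame`) are placeholders, not from the actual program, and I'm using the modern surface-object API for the bind/unbind step; an older program may use the deprecated texture-reference API (`cudaBindTextureToArray` / `cudaUnbindTexture`) instead.

```cuda
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Registered once at set-up with cudaGraphicsGLRegisterImage (placeholder).
extern cudaGraphicsResource_t g_resource;

// Stand-in for the key-dependent kernels (placeholder).
__global__ void processKernel(cudaSurfaceObject_t surf, int w, int h);

void runFrame(int width, int height)
{
    // 1. Map the OpenGL texture so CUDA can access it.
    cudaGraphicsMapResources(1, &g_resource, 0);

    // 2.1. "Bind": fetch the cudaArray backing the mapped texture
    //      and wrap it in a surface object the kernels can write.
    cudaArray_t array;
    cudaGraphicsSubResourceGetMappedArray(&array, g_resource, 0, 0);

    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = array;
    cudaSurfaceObject_t surf;
    cudaCreateSurfaceObject(&surf, &desc);

    // 2.2. Run the kernel(s) selected by the key press.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    processKernel<<<grid, block>>>(surf, width, height);

    // 2.3. "Unbind": destroy the surface object.
    cudaDestroySurfaceObject(surf);

    // 3. Unmap so OpenGL can render the texture again.
    cudaGraphicsUnmapResources(1, &g_resource, 0);
}
```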
The data is an 8-bit gray-scale image. So far, the program works as well as I need. But I noticed that the kernel part wastes time on dynamic allocation of the data to be processed. This dynamic allocation is always the same; that is, every time the cycle begins, the program creates and allocates the same data arrays. How can I do this dynamic allocation only once, say, at the beginning?
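What I have in mind is something like the following sketch: hoist the `cudaMalloc` out of the per-frame code into set-up, reuse the same device buffer every cycle, and free it once at shutdown. The names (`g_scratch`, `initBuffers`, `freeBuffers`, `runKernels`) are illustrative, not from my actual program.

```cuda
#include <cuda_runtime.h>
#include <cstddef>

// Device work buffer, allocated once and reused every cycle (placeholder names).
static unsigned char *g_scratch = nullptr;
static size_t g_scratchBytes = 0;

// Call once, right after CUDA set-up.
void initBuffers(int width, int height)
{
    g_scratchBytes = (size_t)width * height;  // 8-bit gray-scale, 1 byte/pixel
    cudaMalloc(&g_scratch, g_scratchBytes);
}

// Call once at shutdown.
void freeBuffers()
{
    cudaFree(g_scratch);
    g_scratch = nullptr;
    g_scratchBytes = 0;
}

// Each cycle now reuses g_scratch instead of allocating and freeing it.
void runKernels(/* ... */)
{
    // Optional: clear the buffer if the kernels expect zeroed input.
    cudaMemset(g_scratch, 0, g_scratchBytes);
    // ... launch the key-dependent kernels that read/write g_scratch ...
}
```

Is this the right approach, or is there a standard pattern for it?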
If anybody knows how to do this, or knows of an example, please let me know.