Hi, I’m looking to convert a program to CUDA. I have a function that does a number of calculations based on several quite large data arrays and variables (some of which are modified during the calculation). All but one of these are available globally in my normal program, so the function takes only one parameter.
What I want to do is turn this function into a kernel. What I’d like to know is the best way to pass all of the information it needs to the device. In particular, I’m wondering how parameters passed directly to the kernel are stored on the device, and what limit there is on their total size.
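For context, here is a minimal sketch of what I’m imagining (the kernel name, array sizes, and the placeholder arithmetic are all made up): copy the large arrays into device memory with cudaMalloc/cudaMemcpy, then pass the device pointers plus the small scalars as kernel arguments:

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

// Hypothetical kernel: reads one input array, writes one output array.
// The real calculation would replace the placeholder multiply.
__global__ void calculate(const float *in, float *out, int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * scale;  // placeholder for the real work
}

int main(void)
{
    const int n = 1 << 20;               // e.g. ~1M elements
    size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    // Allocate device copies of the large arrays and copy the input over.
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Only pointers and small scalars appear in the argument list;
    // the arrays themselves stay in device global memory.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    calculate<<<blocks, threads>>>(d_in, d_out, n, 2.0f);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_in);
    cudaFree(d_out);
    free(h_in);
    free(h_out);
    return 0;
}
```

Is this roughly the right pattern, or is there a better way to get that many arrays and variables across?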
Thanks for any help.