I have a small CUDA program like this:
GLMmodel *v = (GLMmodel *)a; // a is a void* function parameter; I've tried using it directly in cudaMemcpy, same result.
size_t size = sizeof(GLMmodel);
cudaError_t mem_alloc, mem_transfer;
mem_alloc = cudaMalloc((void **)&_glm, size); // _glm is a GLMmodel* (device pointer)
std::cout << "mem_alloc: " << mem_alloc << std::endl;
mem_transfer = cudaMemcpy((void *)_glm, (void *)v, size, cudaMemcpyHostToDevice);
std::cout << "mem_transfer: " << mem_transfer << std::endl;
std::cout << "number of triangles: " << _glm->numtriangles << std::endl; // returns 12 (correct) in device emulation, but a strange large value on the real GPU
Everything returns success, but when I access that GLuint field in *_glm it returns 1378016 or so, when it should return 12.
I've checked the sizes, the source and destination values, the casts, memsets, cudaGetLastError, and other debug info (all success), and I don't understand this difference.
I even tried the CUDA Visual Profiler, and it shows a completed memory copy with the correct size…
The worst part is that if I use device emulation, the GLuint field in *_glm is correct and returns 12 (the program even behaves correctly a few lines later, in a kernel call). This is very strange, and since this function is a very important one, its misbehaving is a major problem in my development.
I've read two other threads about this, but no one seems to have found a solution :(.
I'm working on a Mac mini with a GeForce 9400M, Mac OS X 10.5.8, and the latest versions of the toolkit, driver, and SDK, and I can run several SDK examples successfully.
I'm sorry if I'm making a stupid error.
Thank you very much.