Hello everybody! I'm starting with CUDA and I'm trying to accelerate a neural gas algorithm. My problem is with my class Neurona, which has a member "neighboors", a std::vector of Neuronas. I need to do some operations with every element of the std::vector inside a CUDA kernel, but when I try to compile I get some errors. I've searched and haven't found any information about this. Does anyone know what's going on?
cudaMemcpy(devicePointer, &vectorData[0], 100*sizeof(whatever), cudaMemcpyHostToDevice); // copy the vector's contents, not the vector object itself
Thanks for the reply! But if I have a std::vector *devicePointer and I copy my std::vector into device memory, inside a kernel I can't access the members of the struct. For example:
std::vector<VECINAS> *devicePointer = NULL;
cutilSafeCallNoSync( cudaMalloc((void**) &devicePointer, h_idata.vecinas.size()*sizeof(VECINAS)) );
cutilSafeCallNoSync( cudaMemcpy(devicePointer, &h_idata.vecinas, h_idata.vecinas.size()*sizeof(VECINAS), cudaMemcpyHostToDevice) );
When I call the CUDA kernel:
devicePointer.vecina = 2;
Error 1 error: class “std::vector<VECINAS, std::allocator>” has no member “vecina” c:\Users\Sergio\Desktop\PFC32-stable\PFC32\CUGNG32\sample.cu 252 CUGNG32
On the other hand, my initial problem was the following:
I have an array of Neuronas, and every Neurona has a std::vector.
I copy the array of Neuronas to device memory and work with it in a kernel, but when I try to access the member vecinas I can't; moreover, I can't access the methods of Neurona inside a CUDA kernel. Is that normal?
How can I access vecinas inside a CUDA kernel?
Neurona *h_idata; // this is a function parameter that arrives holding a lot of Neuronas and their members
Neurona* d_idata = NULL;
cutilSafeCallNoSync( cudaMalloc((void**) &d_idata, bytes) );
cutilSafeCallNoSync( cudaMemcpy(d_idata, h_idata, bytes, cudaMemcpyHostToDevice) );
Now, as I said previously, inside a CUDA kernel I can't access the member vecinas. Shouldn't vecinas be copied like the other members?
Thanks for the replies!
You can’t use STL objects inside kernels.
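The usual workaround for per-object lists like your vecinas is to flatten them into plain arrays before copying to the device: one big array with all neighbour indices back to back, plus an offsets array saying where each Neurona's slice begins. Here is a minimal sketch of that idea; the kernel and function names (`visitNeighbours`, `uploadNeighbours`) and the use of `int` neighbour indices are my assumptions, not your actual types:

```cuda
#include <vector>
#include <cuda_runtime.h>

// Hypothetical kernel: thread i walks neuron i's neighbour slice.
__global__ void visitNeighbours(const int *neighbours, const int *offsets,
                                int numNeuronas)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numNeuronas) {
        for (int j = offsets[i]; j < offsets[i + 1]; ++j) {
            int vecina = neighbours[j]; // index of one neighbour of neuron i
            (void)vecina;               // ... operate on vecina here ...
        }
    }
}

// Flatten per-neuron std::vector lists into two plain arrays and upload them.
void uploadNeighbours(const std::vector< std::vector<int> > &lists)
{
    std::vector<int> flat;    // all neighbour indices, back to back
    std::vector<int> offsets; // offsets[i]..offsets[i+1] bound neuron i's slice
    offsets.push_back(0);
    for (size_t i = 0; i < lists.size(); ++i) {
        flat.insert(flat.end(), lists[i].begin(), lists[i].end());
        offsets.push_back((int)flat.size());
    }

    int *d_flat = NULL, *d_offsets = NULL;
    cudaMalloc((void**)&d_flat,    flat.size()    * sizeof(int));
    cudaMalloc((void**)&d_offsets, offsets.size() * sizeof(int));
    cudaMemcpy(d_flat,    &flat[0],    flat.size()    * sizeof(int),
               cudaMemcpyHostToDevice);
    cudaMemcpy(d_offsets, &offsets[0], offsets.size() * sizeof(int),
               cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (int)((lists.size() + threads - 1) / threads);
    visitNeighbours<<<blocks, threads>>>(d_flat, d_offsets, (int)lists.size());

    cudaFree(d_flat);
    cudaFree(d_offsets);
}
```

The kernel then never sees a std::vector at all, only raw pointers, which is the only thing device code can work with.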
For something closely related, check out the Thrust v1.2 library. It provides template types similar to STL containers that are CUDA-accelerated.
But even these types can't be used inside CUDA kernels; rather, they provide CUDA-accelerated implementations of algorithms and operators.
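To illustrate (a minimal sketch, not your neural gas code): you copy into a `thrust::device_vector` on the host and call an algorithm like `thrust::transform`, which launches the kernel for you. The container itself never appears inside device code.

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/functional.h>

int main()
{
    // Example data: 100 elements, all 2.0f.
    thrust::host_vector<float> h_vec(100, 2.0f);

    // Assignment performs the host-to-device copy for you.
    thrust::device_vector<float> d_vec = h_vec;

    // Square every element on the GPU; thrust::transform launches
    // the kernel internally and works on the raw device data.
    thrust::transform(d_vec.begin(), d_vec.end(), d_vec.begin(),
                      thrust::square<float>());

    // Copy the results back to the host.
    h_vec = d_vec;
    return 0;
}
```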
As a side note to what cbuchner1 said, it is possible to use Thrust (or your own class, like I do) to present a raw data pointer to the kernel. A clever way of doing this is similar to what Thrust does: write a host function that accepts the STL/Thrust/your-class data type, then have that host function pass the raw pointer to the data on to the kernel.
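A sketch of that wrapper pattern, assuming a simple `std::vector<float>` and a made-up kernel (`scaleKernel`) just for illustration:

```cuda
#include <vector>
#include <cuda_runtime.h>

// Hypothetical kernel: operates only on a raw pointer, never on std::vector.
__global__ void scaleKernel(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Host wrapper: accepts the STL container, presents the raw pointer.
void scaleOnDevice(std::vector<float> &vec, float factor)
{
    float *d_data = NULL;
    size_t bytes  = vec.size() * sizeof(float);

    cudaMalloc((void**)&d_data, bytes);
    // &vec[0] is the raw host pointer to the vector's contiguous storage.
    cudaMemcpy(d_data, &vec[0], bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (int)((vec.size() + threads - 1) / threads);
    scaleKernel<<<blocks, threads>>>(d_data, (int)vec.size(), factor);

    cudaMemcpy(&vec[0], d_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}
```

The caller keeps working with std::vector on the host; only the wrapper ever touches CUDA, which keeps the kernel signature to plain pointers and sizes.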