Hello,
When I try
memcpy(vs, v, sizeof(float) * 100);
the compiler doesn't complain, but it fails at runtime. Is there a way to make this work, or do I have to write a loop?
Kind regards
Is this inside a kernel? From what I recall you can't use host functions inside kernels, but I have been away from CUDA for a bit. If you want to use something like std::vector<> inside a kernel, you would have to go through a pointer and allocate the std::vector<> on the device. This may or may not work; it is just something I thought of while replying.
std::vector<float>* vec;
cudaMalloc((void**) &vec, sizeof(std::vector<float>) * size);
I seem to remember we actually do support memcpy in kernels (although not the STL), but I wouldn't recommend using it in general.
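For what it's worth, here is a minimal sketch of both options, assuming vs and v are device pointers to arrays of at least 100 floats (the names are taken from the question; the kernel names are made up):

```cuda
// Device-side memcpy: a single thread performs the whole copy.
// memcpy is callable in device code, but it serializes the copy.
__global__ void copyWithMemcpy(float* vs, const float* v)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
        memcpy(vs, v, sizeof(float) * 100);
}

// The more idiomatic alternative: one element per thread.
__global__ void copyWithLoop(float* vs, const float* v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        vs[i] = v[i];
}
```

Something like copyWithLoop<<<(n + 255) / 256, 256>>>(vs, v, n) would launch the second kernel; the per-thread version is usually what you want, since the memcpy version does the copy on one thread while the others sit idle.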
Take a look at the Thrust library: Google Code Archive - Long-term storage for Google Code Project Hosting.
I don't know if it has exactly what you want, but it makes memory management very easy.
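For example, a short sketch of what the host-side memory management looks like with Thrust (assuming 100 floats, as in the question; no explicit cudaMalloc or cudaMemcpy needed):

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>

int main()
{
    thrust::host_vector<float> v(100, 1.0f);  // source data on the host
    thrust::device_vector<float> vs(100);     // storage allocated on the device

    // Copies host -> device; vs = v; would do the same via assignment.
    thrust::copy(v.begin(), v.end(), vs.begin());
    return 0;
}
```

Allocation and transfer are handled by the vector types themselves, and the device memory is freed automatically when vs goes out of scope.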