cudaMemcpy Host to Device of std::list<Object *> * population


I’ve got this structure of objects in my class, “std::list<Object *> * population”, and I want to parallelize the function applied to each element in a kernel. The problem is that I don’t know exactly how the memcpy should be done (I read something about creating a list, but it’s not the same).

I’d like to use an iterator over this structure, “std::list<Object *>::iterator j”, in the kernel. Should I create it in the kernel, or create it on the host and copy it to the device with a memcpy? In that case, what should the kernel look like?

Thank you very much.

You won’t be able to use std::list in kernel code.
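One common workaround is to flatten the list of pointers into one contiguous host buffer and transfer that with a single cudaMemcpy, since the kernel can only index plain arrays. A minimal sketch, assuming a hypothetical trivially copyable `Object` struct (your real `Object` may differ; it must contain no virtual functions or host pointers for a raw copy to be valid):

```cpp
#include <cstddef>
#include <list>
#include <vector>

// Hypothetical payload type -- stands in for whatever Object holds.
// It must be trivially copyable for a raw cudaMemcpy to make sense.
struct Object {
    float x;
    int   id;
};

// Flatten the list of pointers into one contiguous buffer. The kernel
// then indexes it as a plain array; std::list itself never goes to
// the device.
std::vector<Object> flatten(const std::list<Object*>& population) {
    std::vector<Object> flat;
    flat.reserve(population.size());
    for (const Object* p : population)
        flat.push_back(*p);   // copy by value into contiguous storage
    return flat;
}

// On the CUDA side the transfer would then be a single copy, e.g.:
//   Object* d_pop;
//   cudaMalloc(&d_pop, flat.size() * sizeof(Object));
//   cudaMemcpy(d_pop, flat.data(), flat.size() * sizeof(Object),
//              cudaMemcpyHostToDevice);
//   kernel<<<blocks, threads>>>(d_pop, flat.size());
```

After the kernel runs, the results can be copied back the same way and written through the original pointers if the list layout must be preserved.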

Thanks. First of all, sorry for answering so late :( I was away all weekend.

Yes, I read something about that, and I’m considering other options. I’m reading about Thrust and the use of thrust::device_vector, but I’m not very familiar with it.

Another approach could be changing the structure in my code to something more suitable for my purpose, but that would require a lot of work and modifications to the original code.

Any suggestions? Thank you very much for your feedback.

It sounds like you’re on the right path with Thrust. From the API standpoint, Thrust’s vectors look very much like the STL’s vector and list.
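To make the analogy concrete, here is a hedged sketch of what the Thrust version could look like. `Object` and the per-element functor `Step` are illustrative placeholders, not code from the original project; the point is that a single assignment between host and device vectors replaces the manual cudaMemcpy, and thrust::transform replaces the iterator loop:

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/transform.h>

// Hypothetical payload type -- trivially copyable so it can live in
// device memory.
struct Object {
    float energy;
};

// Functor applied to every element in parallel; this replaces the
// per-element loop you would write with a std::list iterator.
struct Step {
    __host__ __device__
    Object operator()(const Object& o) const {
        Object out = o;
        out.energy *= 0.5f;   // placeholder for the real per-object work
        return out;
    }
};

int main() {
    // Host side: fill a host_vector (the analogue of your std::list,
    // but contiguous).
    thrust::host_vector<Object> h_pop(1024, Object{1.0f});

    // One assignment performs the host-to-device copy
    // (cudaMemcpy under the hood).
    thrust::device_vector<Object> d_pop = h_pop;

    // Run the per-object function across the whole population.
    thrust::transform(d_pop.begin(), d_pop.end(), d_pop.begin(), Step());

    // Copying back is again a single assignment.
    h_pop = d_pop;
    return 0;
}
```

The main porting cost is converting `std::list<Object *>` to a contiguous vector of values; once that is done, Thrust handles the transfers and the launch configuration for you.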

Thanks arakageeta.

I found some documentation about Thrust, but it seems rather general. Could you suggest some documentation on this specific topic?

It’s hard to give advice without knowing what you’re hoping to do. Thrust’s homepage has some introductory resources:

  • Quick start guide
  • Introduction

Thanks Arakageeta. My intention is to parallelize the set of instructions for each object. The fact is that the original structure is quite inadequate, as I’ve been reading; but before changing it (and modifying all the code) I want to study the different options, and working with Thrust is one of them :)