So I’m working with a code base that uses both new and malloc. Now we’re adding GPU support, and I want to use unified memory.
Is it safe to mix cudaMallocManaged with the other memory allocation functions? The matching free function would be used for each allocation, of course.
As long as you don’t accidentally pass a non-managed pointer to the GPU, you’ll be fine mixing the various memory allocation methods.
One thing that is a bit challenging is combining STL containers with managed memory. This requires a custom allocator that calls cudaMallocManaged. It can be very useful to have, e.g., the contents of a std::vector reside in managed memory: the host-side code can then use the vector’s member functions to populate the data buffer, and the underlying buffer is directly accessible from the device.
I haven’t found a good way to use STL container member functions on the device, though. I’d love to see an implementation of thread-safe, CUDA-compatible standard containers that allow data modification on the device as well.