I am porting older CUDA code to use unified memory instead of explicit cudaMemcpy transfers (HostToDevice/DeviceToHost). I have a few public data members of type std::vector in a class that is used by a device kernel.
Since the vector sizes change dynamically, what is the correct way to use unified memory with std::vector, if that is possible at all? Or will I have to switch to Thrust's host and device vector containers instead? I would like to know the correct way to work with vector containers in CUDA under unified memory. I am using the CUDA 10.1 toolkit on Linux.