Does CUDA support memory remapping?

Hello,

I’m a graduate student working in system software for security.

A recent paper of mine, HushVac [1], mitigates use-after-free with low memory overhead by using a memory remapping technique. I think remapping is a good way to minimize the memory consumed while reuse of freed regions is delayed, and it guarantees safety. The remapping unmaps an existing mapping and re-maps the same virtual address range. This is possible on Linux because mmap supports the MAP_FIXED flag, which creates a memory mapping at a specified address.
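To make the Linux side concrete, here is a minimal sketch of that idea (not the paper's actual implementation; the function names `remap_in_place` and `demo` are illustrative). MAP_FIXED atomically replaces whatever mapping exists at the given address, so the old physical pages are released while the virtual range stays occupied:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Sketch of page-level remapping: drop the physical backing of a region
 * while keeping its virtual address range reserved, so stale pointers
 * cannot alias a later allocation at a different virtual address. */
static void *remap_in_place(void *addr, size_t len)
{
    /* MAP_FIXED replaces any existing mapping at addr atomically; fresh
     * zero-filled anonymous pages appear at the very same addresses. */
    return mmap(addr, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}

static int demo(void)
{
    size_t len = 4096;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    p[0] = 'x';                        /* dirty the page */
    char *q = remap_in_place(p, len);  /* same address, fresh zero page */
    if (q != p || q[0] != 0)
        return -1;
    return munmap(q, len);
}
```

The key property is that the virtual range never becomes available for a new, unrelated allocation, while the physical pages behind it can be reclaimed.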

When I looked at CUDA, I understood that it does not support this. I think the remapping method would be helpful to system-security researchers in various ways, and it could also benefit caching allocators or buddy allocators for the GPU by releasing unused physical memory while keeping virtual addresses stable.

Is there a way to unmap or remap CUDA memory, for both performance and programmability reasons?

Thank you

[1] Efficient Use-After-Free Prevention with Opportunistic Page-Level Sweeping - NDSS Symposium

CUDA has a virtual memory management API.
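For reference, the driver-level virtual memory management API (cuMemAddressReserve, cuMemCreate, cuMemMap, cuMemUnmap, available since CUDA 10.2) decouples virtual address reservation from physical backing, which is close to the MAP_FIXED-style remapping asked about. A minimal sketch under that assumption, swapping the physical backing of a range while keeping the same device virtual address (error handling reduced to a macro; requires a CUDA-capable GPU):

```cuda
#include <cuda.h>
#include <stdio.h>

#define CHECK(x) do { CUresult r = (x); if (r != CUDA_SUCCESS) { \
    fprintf(stderr, "%s failed: %d\n", #x, (int)r); return 1; } } while (0)

int main(void)
{
    CHECK(cuInit(0));
    CUdevice dev;
    CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CHECK(cuCtxCreate(&ctx, 0, dev));

    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;

    size_t gran;
    CHECK(cuMemGetAllocationGranularity(&gran, &prop,
          CU_MEM_ALLOC_GRANULARITY_MINIMUM));

    /* Reserve a virtual address range with no physical backing. */
    CUdeviceptr va;
    CHECK(cuMemAddressReserve(&va, gran, 0, 0, 0));

    /* Create physical memory and map it into the reserved range. */
    CUmemGenericAllocationHandle h;
    CHECK(cuMemCreate(&h, gran, &prop, 0));
    CHECK(cuMemMap(va, gran, 0, h, 0));

    CUmemAccessDesc acc = {};
    acc.location = prop.location;
    acc.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    CHECK(cuMemSetAccess(va, gran, &acc, 1));

    /* "Remap": unmap and release the backing, then map fresh physical
     * memory at the SAME virtual address -- the analogue of MAP_FIXED. */
    CHECK(cuMemUnmap(va, gran));
    CHECK(cuMemRelease(h));
    CHECK(cuMemCreate(&h, gran, &prop, 0));
    CHECK(cuMemMap(va, gran, 0, h, 0));
    CHECK(cuMemSetAccess(va, gran, &acc, 1));

    /* Cleanup. */
    CHECK(cuMemUnmap(va, gran));
    CHECK(cuMemRelease(h));
    CHECK(cuMemAddressFree(va, gran));
    CHECK(cuCtxDestroy(ctx));
    return 0;
}
```

Note that, unlike anonymous mmap on Linux, the contents of newly mapped device memory are not guaranteed to be zeroed, and all sizes must be multiples of the allocation granularity.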
