CUDA Managed Memory invalid after kernel error

I have a .cpp program that calls cuMemAllocManaged() to allocate a block of managed memory and then launches an unrelated kernel with cuLaunchKernel() (the managed block is not touched by the kernel). The kernel, however, contains an invalid memory access that produces a CUDA_ERROR_ILLEGAL_ADDRESS error. After that failure the pointer returned by cuMemAllocManaged() appears to point to dead memory: any attempt to dereference it from the host raises a SIGSEGV or SIGBUS.
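A minimal sketch of the flow (my real program uses the driver API as described above; this runtime-API version just illustrates the same pattern, and the invalid address in the kernel is a placeholder):

```cpp
// Minimal sketch (runtime API for brevity; the real code uses
// cuMemAllocManaged()/cuLaunchKernel(), but the pattern is the same).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void faultyKernel()
{
    // Deliberate illegal access: store through an address that is not
    // mapped in the device address space (placeholder value).
    volatile int *bad = reinterpret_cast<volatile int *>(0xffffffff00000000ULL);
    *bad = 42;
}

int main()
{
    int *managed = nullptr;
    cudaMallocManaged(&managed, sizeof(int));   // managed block, never used by the kernel
    managed[0] = 1;                             // host access works fine at this point

    faultyKernel<<<1, 1>>>();                   // unrelated kernel with an illegal access
    cudaError_t err = cudaDeviceSynchronize();  // reports the illegal-address error
    std::printf("sync: %s\n", cudaGetErrorString(err));

    // This is the host dereference that now dies with SIGSEGV/SIGBUS.
    std::printf("managed[0] = %d\n", managed[0]);
    return 0;
}
```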

Is this expected behaviour? Is all managed memory invalidated on any CUDA kernel error, or is something else happening here?

Assuming all of the activity described occurs in the same CUDA context, then yes, this is expected behavior. An invalid memory access in a CUDA kernel invalidates the entire CUDA context [1], including any allocations made in it.

From the Programming Guide :: CUDA Toolkit Documentation:

“A valid CUDA context is necessary for the correct operation of managed variables.”

"When CUDA-specific operations fail, typically an error is returned that indicates the source of the failure. Using managed variables introduces a new failure mode whereby a non-CUDA operation (for example, CPU access to what should be a valid host memory address) can fail if the Unified Memory system is not operating correctly. "

[1]: CUDA Runtime API :: CUDA Toolkit Documentation:

“cudaErrorIllegalAddress = 77 The device encountered a load or store instruction on an invalid memory address. The context cannot be used, so it must be destroyed (and a new one should be created). All existing device memory allocations from this context are invalid and must be reconstructed if the program is to continue using CUDA.”
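In other words, once the illegal access is reported, the only way forward is to tear down the context and rebuild your allocations. A rough driver-API sketch of that recovery path (error checking omitted; whether re-creating the context is worthwhile versus simply terminating depends on your application):

```cpp
#include <cuda.h>

// Sketch only: error checking omitted. Any data the old managed block held
// must be regenerated or restored from a host-side copy.
void recoverAfterIllegalAddress(CUcontext &ctx, CUdevice dev,
                                CUdeviceptr &managed, size_t bytes)
{
    // The old pointer died with the context; never touch it again.
    cuCtxDestroy(ctx);

    // Fresh context; cuCtxCreate() also makes it current on the calling thread.
    cuCtxCreate(&ctx, 0, dev);

    // Reconstruct the managed allocation inside the new context.
    cuMemAllocManaged(&managed, bytes, CU_MEM_ATTACH_GLOBAL);
}
```

If you are using the runtime API with the primary context, the analogous step is cudaDeviceReset(), which destroys and re-initializes the primary context for the current device.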