Does a device memory allocated using cuMemAlloc() have anything to do with a CUDA module?

As far as I understand, the allocated memory is tied to the context that is current at the time of allocation, and kernels from any module loaded against that context can use the device memory allocated with cuMemAlloc().

You are correct. A device memory allocation is not tied to any particular module; it belongs to the context, so it is usable by kernels from any CUDA module loaded in that same context.
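To make this concrete, here is a minimal Driver API sketch: one allocation made with cuMemAlloc() in a context, passed unchanged to kernels loaded from two different modules. The module file names ("kernels_a.cubin", "kernels_b.cubin") and kernel names ("init_buffer", "scale_buffer") are hypothetical placeholders, and error checking is omitted for brevity; the example is hardware-dependent and cannot run without a CUDA device.

```c
#include <cuda.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);      /* the allocation below will belong to ctx */

    CUdeviceptr buf;
    cuMemAlloc(&buf, 1024 * sizeof(float));  /* tied to ctx, not to any module */

    /* Hypothetical module files and kernel names, for illustration only. */
    CUmodule modA, modB;
    CUfunction kernA, kernB;
    cuModuleLoad(&modA, "kernels_a.cubin");
    cuModuleLoad(&modB, "kernels_b.cubin");
    cuModuleGetFunction(&kernA, modA, "init_buffer");
    cuModuleGetFunction(&kernB, modB, "scale_buffer");

    /* Both kernels, from different modules, receive the same pointer. */
    void *args[] = { &buf };
    cuLaunchKernel(kernA, 4, 1, 1, 256, 1, 1, 0, 0, args, 0);
    cuLaunchKernel(kernB, 4, 1, 1, 256, 1, 1, 0, 0, args, 0);
    cuCtxSynchronize();

    cuMemFree(buf);
    cuModuleUnload(modA);
    cuModuleUnload(modB);
    cuCtxDestroy(ctx);
    return 0;
}
```

Note that the allocation is made before either module is loaded, which only works because the pointer's lifetime is governed by the context, not by any module.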

Right. Thanks, Seibert.