Current device/context requirements for runtime/driver API calls

I would like to know which CUDA Driver and Runtime API functions require the current CUDA context to match the one implied by explicit arguments such as streams. I am mostly concerned with the CUDA Driver API.
The CUDA C++ Programming Guide states that you can't launch kernels on any device other than the current one. I suspect the requirement is actually stricter: the current *context* must match, not just the current device. The failing example in the guide uses the CUDA Runtime API. Is the requirement the same for the CUDA Driver API?
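For concreteness, here is my paraphrase of the Runtime API failure mode the guide describes (an untested sketch; it assumes a machine with at least two devices):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void kern() {}

int main() {
    cudaSetDevice(0);
    cudaStream_t s0;
    cudaStreamCreate(&s0);      // s0 is associated with device 0

    cudaSetDevice(1);           // device 1 is now current
    kern<<<1, 1, 0, s0>>>();    // per the guide, this launch fails:
                                // the stream belongs to a non-current device
    std::printf("launch: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```

My question is whether the equivalent Driver API sequence fails on a current-*context* mismatch even on a single device.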
What about, for example, cuMemFreeAsync? Does it require the current device/context to match the one associated with the stream argument?
Some functions obviously depend on the current context, like cuMemAlloc.
I have not been able to find adequate information on this in the CUDA Driver API documentation. My concern extends to pretty much every function in the CUDA Driver API: does the absence of a documented restriction mean that a mismatched current context is allowed?