I don’t know how basic a question this is, so please bear with me…
If I’m not on a system that has Unified Addressing enabled, and I allocate a block of “pinned” host memory with the Driver API (cuMemHostAlloc with the CU_MEMHOSTALLOC_DEVICEMAP flag), then get a “device pointer” to that memory (cuMemHostGetDevicePointer), can I pass that device-pointer value to a kernel (by whatever mechanism) and expect the kernel to be able to read the value(s) at that address?
In short, will a running kernel be able to correctly interpret a “CUdeviceptr” value as a valid memory address?
The docs strongly imply that it can, where they state:
“…memory that is page-locked and accessible to the device”, and
“Since the memory can be accessed directly by the device,…”
CUDA Toolkit Reference Manual, cuMemHostAlloc()
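For concreteness, the flow I’m describing is roughly the following sketch. The function names are the real Driver API entry points; the error handling is abbreviated, and whether the final step is actually legal on a non-UVA system is exactly my question:

```c
/* Sketch only: allocate mapped pinned host memory and obtain the
   device-side pointer for it via the Driver API. */
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "cuInit failed (no CUDA driver/device?)\n");
        return EXIT_FAILURE;
    }

    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    /* CU_CTX_MAP_HOST is required for the context to be able to map
       page-locked host allocations into the device address space. */
    cuCtxCreate(&ctx, CU_CTX_MAP_HOST, dev);

    /* Page-locked host allocation, mapped into the device's address space. */
    void  *hostPtr = NULL;
    size_t bytes   = 1024;
    cuMemHostAlloc(&hostPtr, bytes, CU_MEMHOSTALLOC_DEVICEMAP);

    /* Without unified addressing, devPtr may differ from hostPtr. */
    CUdeviceptr devPtr;
    cuMemHostGetDevicePointer(&devPtr, hostPtr, 0);

    /* devPtr is the value I would pass as a kernel parameter,
       e.g. via the kernelParams argument of cuLaunchKernel. */

    cuMemFreeHost(hostPtr);
    cuCtxDestroy(ctx);
    return EXIT_SUCCESS;
}
```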
If it can, is it assumed by the kernel to be a “generic” address that points to memory in the “global” state space?
If it can’t, well…
The docs say:
“Unified addressing is automatically enabled in 64-bit processes on devices with compute capability greater than or equal to 2.0.”
“Unified addressing is not yet supported on Windows Vista or Windows 7 for devices that do not use the TCC driver model.”
So if there really is no way to pass a pointer to “mapped, pinned host memory” that a kernel can correctly interpret as a valid pointer, is there a way to enable the TCC driver model on Windows 7 even though my GPU is not a Tesla?
Thanks in advance…