I have tried to run a CUDA kernel on a Sony Vaio laptop with a GeForce 8400M GS. It turned out that the deviceQuery sample reports only 64 MB of available RAM on the GPU. So, my questions are:
- Is CUDA capable of using memory that is shared with the host?
- If it is, how can I request an extra amount of RAM from the host OS?
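For context, the mechanism I have been reading about is "zero-copy" mapped pinned memory (`cudaHostAlloc` with `cudaHostAllocMapped`), which lets a kernel access host RAM directly through a device-side alias pointer. A minimal sketch of what I have in mind is below; I am not sure whether the 8400M GS actually supports this (the `canMapHostMemory` device property would have to be set):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Simple kernel that operates on memory reachable via a device pointer.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main(void) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (!prop.canMapHostMemory) {          // not all GPUs can map host RAM
        printf("Device cannot map host memory\n");
        return 1;
    }
    // Must be set before any CUDA context is created on this thread.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const int n = 1 << 20;
    float *h_data, *d_alias;
    // Pinned host allocation that is also mapped into the device address space.
    cudaHostAlloc((void **)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

    // Obtain the device-side pointer aliasing the same host memory.
    cudaHostGetDevicePointer((void **)&d_alias, h_data, 0);
    scale<<<(n + 255) / 256, 256>>>(d_alias, n);
    cudaDeviceSynchronize();

    printf("h_data[0] = %f\n", h_data[0]);  // kernel wrote straight to host RAM
    cudaFreeHost(h_data);
    return 0;
}
```

Note this maps host memory over the bus rather than enlarging the GPU's own 64 MB, so I am unsure whether it answers the second question at all.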