Using shared host RAM on laptops & integrated GPUs: how to request extra RAM from the host?

I tried to run a CUDA kernel on a Sony Vaio laptop with a GeForce 8400M GS inside. It turned out that the deviceQuery sample shows only 64 MB of available RAM on the GPU. So, my questions are:

  1. Is CUDA capable of using memory that is shared with the host?
  2. If so, how can I request an extra amount of RAM from the host OS?
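For reference, the amount of memory the runtime actually sees (and whether the device is integrated and can map host memory at all) can be queried directly, roughly the way deviceQuery does it. A minimal sketch, assuming a CUDA 2.2+ toolkit where the `integrated` and `canMapHostMemory` fields of `cudaDeviceProp` are available:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
        // Total device-addressable memory as reported to the runtime
        printf("  Total global mem:    %lu MB\n",
               (unsigned long)(prop.totalGlobalMem >> 20));
        // integrated == 1 means the GPU shares physical RAM with the CPU
        printf("  Integrated:          %d\n", prop.integrated);
        // canMapHostMemory == 1 means zero-copy host mappings are possible
        printf("  Can map host memory: %d\n", prop.canMapHostMemory);
    }
    return 0;
}
```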

I’d also like to know whether it’s possible to run CUDA on integrated graphics like the GeForce 8200 or 8300. They are on the list here: http://www.nvidia.com/object/cuda_learn_products.html. But I still have some doubts, mostly because motherboards usually carry no dedicated video RAM…

If anybody has experience running CUDA on integrated graphics, could you please share your results here? I’m most interested in device memory allocation and throughput…

It should work, although bandwidth will be very low compared to discrete parts, and memory size can be an issue. The last time I used integrated graphics (back in the Pentium III days), they had separate pools for video and CPU memory that were set in the BIOS. Maybe check there to see whether there are any options for the video memory amount.
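As for getting at host RAM from a kernel: with CUDA 2.2 and later, mapped pinned ("zero-copy") allocations let the GPU address host memory directly, which is the usual way to go beyond the carved-out video pool, and is especially natural on integrated parts since the memory is physically the same. A hedged sketch, assuming the device reports `canMapHostMemory`:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that writes through the mapped pointer
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (!prop.canMapHostMemory) {
        printf("Device cannot map host memory\n");
        return 1;
    }
    // Must be called before the CUDA context is created
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const int n = 1 << 20;
    float *h_ptr = 0, *d_ptr = 0;
    // Pinned host allocation that the GPU can address directly
    cudaHostAlloc((void **)&h_ptr, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_ptr[i] = 1.0f;

    // Device-side alias of the same host memory (no cudaMemcpy needed)
    cudaHostGetDevicePointer((void **)&d_ptr, h_ptr, 0);
    scale<<<(n + 255) / 256, 256>>>(d_ptr, n);
    cudaThreadSynchronize();  // cudaDeviceSynchronize on newer toolkits

    printf("h_ptr[0] = %f\n", h_ptr[0]);  // kernel wrote straight into host RAM
    cudaFreeHost(h_ptr);
    return 0;
}
```

On a discrete card every access through such a pointer crosses the PCIe bus, so it is slow; on an integrated GPU it is just a different path to the same DRAM, which is why zero-copy is the more interesting option there.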