A couple of questions about CUDA architecture. Need expert advice on these...

Hello all,

I am doing my M.S. thesis on CUDA GPGPU, but I have some questions that need to be answered before I can go on. If there is documentation on any of these matters, I will be grateful if you point it out; otherwise, some answers would be enough!

  1. What are the security levels in CUDA hardware? I want to use the GPU for key generation and management, so I need to know how much of the information in CUDA memories is exposed to the OS, etc.
  2. What happens to GPU DRAM in the case of a system failure? Is there an automatic memory-dump mechanism?
  3. Are there any documents on CUDA hardware scheduling?
  4. Is there detailed documentation on the CUDA hardware architecture available?

Thanks in advance!


The nVidia GPU driver has access to the whole of GPU DRAM, so, technically speaking, the kernel has full access to GPU memory (it should also be reachable directly over the PCI-Express bus, without the driver).

There is no automatic memory-dump mechanism; it could be implemented by modifying the kernel.
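To make that concrete: any host process that still holds a valid CUDA context can copy device memory back and write it out itself. Below is a minimal sketch of such a manual dump; the buffer size and output filename are hypothetical, not from this thread, and nothing like this happens automatically on a crash:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t kBytes = 1 << 20;              // hypothetical 1 MiB buffer
    unsigned char *d_buf = nullptr;
    cudaMalloc(&d_buf, kBytes);                 // device allocation to be dumped

    // ... device code would fill d_buf here ...

    // Manual "dump": copy the device buffer to the host, then to a file.
    unsigned char *h_buf = (unsigned char *)malloc(kBytes);
    cudaMemcpy(h_buf, d_buf, kBytes, cudaMemcpyDeviceToHost);

    FILE *f = fopen("gpu_dump.bin", "wb");      // hypothetical output path
    fwrite(h_buf, 1, kBytes, f);
    fclose(f);

    free(h_buf);
    cudaFree(d_buf);
    return 0;
}
```

Note this only works while the owning process and context are alive; after a system failure the DRAM contents are simply lost rather than written anywhere.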

There are documents on CUDA hardware scheduling, and they clearly state that scheduling may change, without giving any hint of the actual scheduling policy.

There is no really detailed documentation on the CUDA hardware architecture, and you have to keep in mind that many different architectures run CUDA, starting with the original G80 from 2006 (the GeForce 8800 GTS 320MB). By "not really detailed," I mean not as detailed as the documentation you may find for a particular CPU architecture.

GPUs are not built to be "secure" but to be fast. If you intend to launch a GPU process that manipulates sensitive data, be aware that any other kernel-space driver (as opposed to user-space) and the kernel itself have access to GPU memory at any time. That may include rootkits, too.
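One partial mitigation, if you still want key material on the GPU, is to scrub buffers before releasing them, since freeing device memory is not documented to clear it. A hedged sketch (the key size is a hypothetical example):

```cuda
#include <cuda_runtime.h>

// Allocate, use, then overwrite a device buffer before freeing it.
// This narrows the window during which key material sits in DRAM,
// but it does NOT protect against a kernel-space observer reading
// the memory while the buffer is live.
int main() {
    const size_t kKeyBytes = 32;            // hypothetical 256-bit key
    void *d_key = nullptr;
    cudaMalloc(&d_key, kKeyBytes);

    // ... generate and use the key on the device ...

    cudaMemset(d_key, 0, kKeyBytes);        // zero before releasing
    cudaDeviceSynchronize();                // ensure the memset completed
    cudaFree(d_key);
    return 0;
}
```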

Scheduling is another problem: since kernel scheduling is a matter of both driver software and GPU architecture hardware support, you cannot take for granted that what you observe with one specific card and one specific driver version will hold when running your code on another driver/GPU pair. There is some information on warp scheduling and block scheduling, and the rules seem relatively simple, but nVidia is clear that they might change at any point.
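The "don't rely on the policy" point can be made concrete: block execution order is observable but unspecified, so a kernel like the following illustrative one (not from any nVidia document) may report a different order on each run or each driver/GPU pair:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__device__ unsigned int g_ticket = 0;

// One thread per block grabs a "ticket" when the block starts; the
// mapping from blockIdx.x to ticket number reveals whatever start
// order the hardware happened to choose -- which CUDA does not
// guarantee to be stable or sequential.
__global__ void observeOrder(unsigned int *order) {
    if (threadIdx.x == 0) {
        order[blockIdx.x] = atomicAdd(&g_ticket, 1u);
    }
}

int main() {
    const int kBlocks = 8;
    unsigned int *d_order = nullptr, h_order[kBlocks];
    cudaMalloc(&d_order, kBlocks * sizeof(unsigned int));

    observeOrder<<<kBlocks, 32>>>(d_order);
    cudaMemcpy(h_order, d_order, sizeof(h_order), cudaMemcpyDeviceToHost);

    for (int b = 0; b < kBlocks; ++b)
        printf("block %d started %u-th\n", b, h_order[b]);

    cudaFree(d_order);
    return 0;
}
```

Any code whose correctness depends on a particular ordering here is relying on exactly the kind of undocumented behavior nVidia reserves the right to change.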