CUDA from kernel space

Hello. Is there a way to execute CUDA code from kernel space? I’m not very familiar with CUDA, sorry. If it is possible, please describe it in as much detail as possible.

TIA

I have never seen anyone make CUDA calls from kernel space.

From user space, the communication passes through the character devices /dev/nvidiactl and /dev/nvidiaX (where X = 0, 1, … up to however many GPUs you have installed). However, the details of how the character devices are used are not published. All the communication is handled by library code linked into your program. Certainly no in-kernel API has been documented to achieve the same effect.

While CUDA is for the most part quite stable, I’d be nervous about trying to use it from kernel space. Are you trying to accelerate some calculation taking place in the Linux kernel?

Thank you for your answer, seibert. At the moment I’m just trying to research whether this is possible at all, and then maybe propose a project draft to my mentors. What I have in mind right now is encryption/decryption work: IPsec, wireless drivers with WEP/WPA, encrypted disk volumes…

There is an indirect way to do this: redirect the call to a user-space daemon first, and then have the daemon call the CUDA API. There is a paper that actually did that for cryptographic acceleration: “https://www.cs.tcd.ie/~harrisoo/publications/GPU_OCF.pdf”. Nonetheless, new memory management has to be done in order to achieve good performance. I’m looking forward to the day NVIDIA releases a kernel API. I’m also investigating a way of doing this, but for the new OpenCL API.

Hope this helps!
Rerngvit Yanggratoke