Webserver on GPU

I was wondering would it be possible to implement a webserver that would do most of the calculations on GPU? I’m quite new in the field and I’m not even sure if code running on the GPU would be able to communicate with the networking stack. Fermi architecture provides system calls, but I couldn’t find the definite list of supported ones. Anyone has any ideas about it?

Thanks a lot, Vytis

You can always have a CPU thread that passes the data to the GPU for processing…

The GPU is not fully autonomous at this point, so you would need the sockets to be managed on the CPU. The host code would then move data to the GPU and start CUDA kernels as needed, as Peter_Purdue suggested.

One thing to keep in mind is that you generally want a CUDA device to be associated with one persistent host thread from which all CUDA function calls originate. Multiple CUDA contexts competing for a device have some switching overhead, and migrating a single context between host threads also has non-trivial overhead.
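For what it's worth, here is a minimal sketch of that split, assuming POSIX sockets on the host side, a raw byte buffer as the "request", and a placeholder kernel standing in for the real processing (the port number and the process_request kernel are made up for illustration):

```cpp
// Sketch only: a single persistent host thread owns the CUDA context,
// accepts TCP connections, and offloads a placeholder computation to the GPU.
#include <cstdio>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cuda_runtime.h>

// Placeholder kernel: pretend "processing" means uppercasing the request bytes.
__global__ void process_request(char *buf, int len) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < len && buf[i] >= 'a' && buf[i] <= 'z')
        buf[i] -= 32;
}

int main() {
    // The socket lives entirely on the CPU side; the GPU never touches it.
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);
    bind(server, (sockaddr *)&addr, sizeof(addr));
    listen(server, 16);

    char *d_buf;
    cudaMalloc(&d_buf, 4096);           // device buffer reused for every request

    for (;;) {
        int client = accept(server, nullptr, nullptr);
        char h_buf[4096];
        int n = read(client, h_buf, sizeof(h_buf));
        if (n > 0) {
            // The host thread moves data to the GPU, launches the kernel, copies back.
            cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);
            process_request<<<(n + 255) / 256, 256>>>(d_buf, n);
            cudaMemcpy(h_buf, d_buf, n, cudaMemcpyDeviceToHost);
            write(client, h_buf, n);
        }
        close(client);
    }
}
```

Since all CUDA calls come from the one thread running main(), you also get the "one persistent host thread per device" behaviour for free.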

A webserver on the GPU: I have thought about it before, and from my point of view it can be done but would be useless and won't do you any good. The thing is, webservers usually handle requests and query databases or spit out text/pics/etc. as a response, and you can't query a database from the GPU. You would just end up going back and forth between the CPU and GPU without any performance gain, and that is not what the GPU is made for. On the other hand, you can use a GPU on your server if you are doing certain types of operations that need it, like password cracking, data mining on large amounts of data, and so on. But using it as a substitute for the basic webserver is not the way to go... at least for now.

What if you use the webservice just as a GPU repository? When you execute your multi-GPU code, it asks the webservice which GPUs are available, creates n threads, and sends each one to a different GPU.

I didn't put too much thought into this GPU + webservice idea, but I think it could work.
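The "n threads, one per GPU" part at least is a standard pattern. A rough sketch under that assumption, with do_work as a made-up stand-in for whatever the service would actually run:

```cpp
// Sketch of the one-host-thread-per-GPU pattern: each std::thread binds to its
// own device with cudaSetDevice and issues all work for that device.
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

__global__ void do_work(int device_id) {
    // Stand-in kernel so the launch has something to do (device printf needs Fermi+).
    if (threadIdx.x == 0)
        printf("kernel running on device %d\n", device_id);
}

void worker(int device_id) {
    cudaSetDevice(device_id);            // this thread now talks only to this GPU
    do_work<<<1, 32>>>(device_id);
    cudaDeviceSynchronize();
}

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);              // discover the GPUs available locally
    std::vector<std::thread> threads;
    for (int i = 0; i < n; ++i)
        threads.emplace_back(worker, i); // one persistent host thread per device
    for (auto &t : threads)
        t.join();
}
```

The part where a remote webservice hands out GPUs would have to live on top of this; the code above only covers the local multi-GPU threading.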