CUDA as a co-processor: a new libc that uses CUDA and the GPU?

Maybe I haven't read enough about the topic, but I'm wondering whether it would be possible to write a new version of libc that uses CUDA and the GPU to optimize some common functions.

Could functions whose algorithms can be recast in parallel form take advantage of the GPU's parallel processing?

This would allow seamless optimization of the many programs that use libc.
Or will CUDA be limited to graphics processing only?

Is this possible?

While one could conceivably do this, CUDA is presently ANSI C with some extensions, not full C++. Beyond that, you'd probably want to be working with a higher-level algorithm library rather than at the level of libc, because you'd need to avoid memory transfer overhead in order to make the GPU shine. There are also many things that GPUs can't do yet, and may not want to do for some time, since they would have negative performance implications or consume already-scarce hardware resources (e.g. recursion). For now it's probably best to use CUDA not for random bits of everyday code, but as a way of accelerating key performance-critical routines that are good candidates for data-parallel algorithms. Trying to run random libc stuff on the GPU probably isn't going to pay off much in the short term.

John