While one could conceivably do this, CUDA is presently ANSI C with some extensions, not full C++. Beyond that, you'd probably want to be working with a higher-level algorithm library rather than at the level of libC, since you need to avoid memory transfer overhead in order to make the GPU shine. There are also many things GPUs can't do yet, and may not want to do for some time, since they would have negative performance implications or use already-scarce hardware resources (e.g. recursion). For now it's probably best to use CUDA not for random bits of everyday code, but rather as a way of accelerating key performance-critical routines that need to run really fast and are good candidates for data-parallel algorithms. Trying to run random libC stuff on the GPU probably isn't going to pay off much in the short term.
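To make the memory-transfer point concrete, here's a minimal sketch of the kind of data-parallel routine CUDA is good at: a SAXPY kernel (y = a*x + y). The names and sizes are my own illustration, not anything from the discussion above; the point is that the host-to-device and device-to-host copies bracket the kernel launch, and for work this trivial they can easily cost more than the compute itself.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// A simple data-parallel kernel: each thread handles one element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);

    // These copies are the overhead in question: PCIe transfers that a
    // routine must amortize with enough on-device work to be worthwhile.
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // 3*1 + 2 = 5

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Note that the kernel itself is plain C-style code with the `__global__` qualifier and launch syntax added, which is exactly the "ANSI C with some extensions" model described above.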