NVIDIA: I have come to realize that it could be very advantageous if, years from now, I'm able to do distributed CUDA processing. Let me explain:
If I have a machine at home capable of 1 TFLOP/s, it may be advantageous to be able to access it remotely. Going further, if some service on the Internet were to identify "public" machines that have been volunteered for distributed CUDA processing, then the sky is the limit. I can see letting a remote CUDA kernel run on my machine when I'm not using it. This would be similar to the model currently used for distributed protein folding or SETI@home, etc.
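To make the idea concrete, here is a minimal sketch of the volunteer-computing loop I have in mind, in the spirit of SETI@home. All names here (`WorkUnit`, `fetch_work`, `run_kernel`, `submit_result`) are hypothetical, and a CPU sum-of-squares stands in for the CUDA kernel a volunteered GPU would actually run; a real system would also need networking, scheduling, and result verification.

```python
# Hypothetical sketch of a volunteer-computing work loop.
# All names are invented for illustration; run_kernel() is a CPU
# stand-in for the CUDA kernel a volunteered machine would execute.
from dataclasses import dataclass

@dataclass
class WorkUnit:
    unit_id: int
    data: list  # one chunk of a larger problem

def fetch_work(queue):
    """Stand-in for requesting a work unit from a coordinating server."""
    return queue.pop(0) if queue else None

def run_kernel(unit):
    """CPU stand-in for the volunteered CUDA kernel: sum of squares."""
    return sum(x * x for x in unit.data)

def submit_result(results, unit, value):
    """Stand-in for reporting the result back to the server."""
    results[unit.unit_id] = value

# Simulated server-side queue: one big job split into chunks.
queue = [WorkUnit(i, list(range(i * 4, i * 4 + 4))) for i in range(3)]
results = {}

# The volunteer client: pull work while any remains, compute, report.
while (unit := fetch_work(queue)) is not None:
    submit_result(results, unit, run_kernel(unit))

print(results)  # each unit's partial result, keyed by unit_id
```

The interesting design questions are all in the parts stubbed out here: how work units are described so an arbitrary CUDA kernel can be shipped to an untrusted machine, and how results are validated when they come back.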
In any case, in my opinion, it would be very good for the future of CUDA if there were a Product Manager or Architect at NVIDIA thinking about the API and infrastructure needed to make distributed, Internet-based execution of CUDA kernels possible. :magic: