I recently completed Lesson 1 of NVIDIA’s Udacity course, ‘Intro to Parallel Programming’, which has led me to question my intended use case for CUDA. I would like to confirm here whether my use case is appropriate.
I want to take arbitrarily many different mathematical functions and evaluate each of them in parallel on the same set of input values (the domain) to produce the functions’ outputs (the range).
As I understand it, this turns the standard usage of CUDA and parallel processing on its head: instead of a vector of values, I have a vector of functions to evaluate.
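To make the intent concrete, here is a minimal sketch of how I imagine this could map onto CUDA, assuming the functions are known at compile time. The function bodies, the switch-based dispatch, and the grid layout are all illustrative assumptions on my part, not a worked-out design:

```cuda
#include <cmath>

// Illustrative "vector of functions": each case is one function
// in the set; fn selects which one a thread evaluates.
__device__ double eval(int fn, double x) {
    switch (fn) {
        case 0: return x * x;
        case 1: return sin(x);
        case 2: return exp(-x);
        default: return 0.0;
    }
}

// One thread per (function, domain point) pair.
// xs: the shared domain of nx values; out: nf * nx results,
// laid out function-major.
__global__ void evalAll(const double *xs, double *out, int nx, int nf) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // flat index
    if (i < nx * nf) {
        int fn = i / nx;   // which function
        int j  = i % nx;   // which domain point
        out[fn * nx + j] = eval(fn, xs[j]);
    }
}
```

In other words, the parallelism would run over both the function index and the domain values at once, rather than over a single data vector.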
Thank you for any comments, including, where appropriate, suggested alternative approaches.