Hello CUDA developers,
I am a student trying to get a better understanding of how CUDA is used in real-world systems -- an attempt to get out of theory land. My interest is in resource-allocation issues, and I am hoping the community here can give me some insight into the workloads you typically deal with. Here are my questions for you:
- How is your application's execution time split between the host CPU(s) and the GPU(s)? Would you say it's 20/80? 50/50? 80/20?
- Does your application run periodically (e.g., in time-steps), or does it run in more of a one-shot manner?
- Assuming we're all on multicore systems now, do you have any experiences or comments concerning scalability and/or bottlenecks (many CPU cores sharing a few GPUs)? Have you ever run into a situation where more than one program wanted to use the GPU at the same time?
- Finally, what is your field/application?
Thank you very much-- your help would be greatly appreciated.