I’m having issues with very poor Maya viewport performance while it evaluates a custom deformer written in CUDA.
Without getting into detail about the deformer itself: basically, all the calculations are done inside a kernel and the results are copied back to the host. This has to be done for each frame of a given animation.
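For context, the per-frame pattern looks roughly like this (a simplified sketch; `deformFrame`, `deformKernel`, and the buffer names are placeholders, not the actual plug-in code):

```
// Hypothetical sketch of the per-frame work described above.
void deformFrame(float3* d_points, float3* h_points, int n)
{
    // Launch the deformer kernel on this frame's point buffer.
    deformKernel<<<(n + 255) / 256, 256>>>(d_points, n);

    // Synchronous copy back to the host; this blocks the calling
    // thread until both the kernel and the transfer have finished.
    cudaMemcpy(h_points, d_points, n * sizeof(float3),
               cudaMemcpyDeviceToHost);
}
```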
My question is this: could these calculations be interfering with Maya’s own use of the GPU, or is there something else going on that I’m not aware of? The CUDA code itself works fine; it was tested thoroughly using a standalone program that simulates the needed calculations.
Any information is appreciated, thanks in advance!
Yes, the hardware resources on the GPU used to process CUDA tasks are the same hardware resources used for “ordinary” graphics tasks (such as running various shaders). I’m not sure I would use the word “interfere”, but it’s quite possible that multiple tasks are competing for resources on the GPU. It should be easy to get some idea of this: create a dummy kernel that takes approximately no time to execute per frame, substitute it for your deformer kernel, and evaluate viewport performance in that case.
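A minimal version of that experiment might look like the following sketch, assuming the plug-in can swap which kernel it launches each frame (`dummyKernel` and its signature here are placeholders matching whatever your real deformer kernel takes):

```
// An empty kernel with the same launch signature as the real
// deformer; its per-frame cost is essentially just launch overhead.
__global__ void dummyKernel(float3* points, int n)
{
    // Intentionally does nothing.
}
```

If the viewport is still slow with the dummy kernel in place, the slowdown is more likely coming from the per-frame host/device transfers or plug-in overhead than from the kernel itself competing for GPU compute resources.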
Maximus (a Tesla card plus a Quadro card, in specific combinations) was conceived to help address issues like these. I don’t know whether it’s applicable to your particular case, however. This demo may give you an idea:
Hi txbob, thanks for the quick reply.
I was actually running tests like the one you mentioned and was getting mixed results, all of which led me to believe that, as you pointed out, multiple tasks were probably competing for GPU resources.
Anyway, Maximus sounds like it could help. Do you think the same effect (to a certain degree, of course) could be achieved with just two GTX cards, one to do the calculations and the other to handle the viewport?
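If that would work, I’m assuming the deformer could pin its work to the second card with `cudaSetDevice`. Something like this sketch, where the choice of device index 1 as the non-display GPU is just a guess I’d confirm with `nvidia-smi` first:

```
// Select a compute device for the deformer; assumes device 0
// drives the display and device 1 (if present) is free for compute.
int chooseComputeDevice(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    int dev = (count > 1) ? 1 : 0;  // fall back to device 0
    cudaSetDevice(dev);             // all later CUDA calls on this
                                    // thread target this device
    return dev;
}
```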