My program can be executed on either the CPU or the GPU (host or device).
If I run it on the CPU, each frame takes X milliseconds to render, so the interval between images is constant.
But if I execute it on the GPU, the rendering time per frame varies a lot (you can see it in the attached video).
I could not detect any variation when measuring the timings, so I would like to know if anyone has an idea what could be causing this.
I know this is not much information about the code, but I don't know which part is responsible for it, so I'd just like to know if there is something I might have missed.
Thanks a lot!