Hey there,

I have to do some shading calculations for a scene with about 25,000 polygons. All I need to know is the percentage of each polygon's area that is not shaded; I do not need any renderings, just the calculation results.

There are two basic methods to do this: ray tracing or polygon clipping.

Ray tracing is easy to parallelize. I cast a few thousand rays at each polygon and count how many reach it and how many do not (because they hit something else first); the ratio of unblocked rays gives the unshaded percentage.
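For readers who want to see the counting step concretely, here is a minimal CPU-side sketch in Python (not the poster's GPU code, just the idea): sample points on a target triangle, cast shadow rays toward a point light, and test each ray against occluder triangles with the standard Möller-Trumbore intersection test. All names and parameters here are illustrative assumptions.

```python
import random

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_triangle(orig, direction, tri, eps=1e-9):
    """Moller-Trumbore test; returns the ray parameter t, or None on a miss."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:              # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None

def unshaded_fraction(target, occluders, light, n_rays=2000, seed=0):
    """Fraction of sample points on `target` with a clear line to `light`."""
    rng = random.Random(seed)
    a, b, c = target
    clear = 0
    for _ in range(n_rays):
        # uniform sample on the target triangle
        r1, r2 = rng.random(), rng.random()
        if r1 + r2 > 1.0:
            r1, r2 = 1.0 - r1, 1.0 - r2
        p = tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i])
                  for i in range(3))
        d = sub(light, p)           # unnormalized: t == 1 lands on the light
        blocked = False
        for occ in occluders:
            t = ray_hits_triangle(p, d, occ)
            if t is not None and t < 1.0:   # occluder between point and light
                blocked = True
                break
        if not blocked:
            clear += 1
    return clear / n_rays
```

Every ray is independent of every other, which is why this formulation maps onto the GPU so naturally.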

On the other hand, I can get the same result by clipping the polygons against each other (after projecting them onto a certain 2D plane). Since every polygon has to be clipped against all the others, there is plenty of room to parallelize as well.
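For intuition, the clipping route can be sketched in a few lines of Python, under strong simplifying assumptions: convex polygons only, and projected shadow polygons that do not overlap each other. It uses a plain Sutherland-Hodgman clip plus the shoelace area formula; a real pipeline with concave or mutually overlapping shadows needs a general boolean difference, which is exactly what a library like Clipper provides. All function names are my own.

```python
def shoelace_area(poly):
    """Unsigned area of a simple 2D polygon given as a list of (x, y)."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] -
            poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def _cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def clip_convex(subject, clip):
    """Sutherland-Hodgman: clip `subject` against a convex, CCW `clip` polygon."""
    output = list(subject)
    n = len(clip)
    for i in range(n):
        a, b = clip[i], clip[(i + 1) % n]
        inp, output = output, []
        if not inp:
            break
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            p_in = _cross(a, b, p) >= 0.0
            q_in = _cross(a, b, q) >= 0.0
            if p_in:
                output.append(p)
            if p_in != q_in:
                # intersection of segment p-q with the edge line through a-b
                d1, d2 = _cross(a, b, p), _cross(a, b, q)
                t = d1 / (d1 - d2)
                output.append((p[0] + t * (q[0] - p[0]),
                               p[1] + t * (q[1] - p[1])))
    return output

def unshaded_percentage(target, shadows):
    """Percent of `target` area not covered by any shadow polygon
    (valid only when the shadow polygons do not overlap each other)."""
    total = shoelace_area(target)
    covered = sum(shoelace_area(clip_convex(target, s)) for s in shadows)
    return 100.0 * (total - covered) / total
```

For example, a unit square with its right half covered by one shadow polygon comes out at 50 percent unshaded.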

The ray tracing program runs about 50 times faster than the program that uses a polygon clipping library (I used “Clipper” by Angus Johnson).

The ray tracing program already runs on the GPU, while the other one runs on the CPU only.

Do you think it is worth it to port the clipping approach to CUDA?

I guess it would take me a few weeks to get a polygon clipping library working on the GPU.

Thanks for any advice!

Anyone?

the only way you can expect a certain answer, as opposed to a probabilistic one, is if you can find a source that has already implemented said algorithm on the GPU

otherwise, it would inevitably be an uncertain phenomenon, where you can only hope to define/manage the degree/level of uncertainty as much as possible

you could of course also see the algorithm running on the cpu as an indirect reference, and make inferences based on it

you could do some cost/benefit analysis

and you could attack the uncertainty in the underlying measures, to be able to make as informed a decision as possible

for instance, you anticipate the cost as ‘a few weeks’

i am not sure how you deem the likely benefit

clearly, there is likely to be some degree of uncertainty in both your cost and benefit estimates - for example, how sure are you it would take you ‘a few weeks’?

i suppose you could expect certain positive externalities from simply coding the algorithm

even if it turns out slower, you are bound to end up wiser - it is rather unavoidable