I’m wondering how much overhead is created during rendering by using degenerate triangles.
I’m thinking about a case where you have, say, 10,000 triangles, 5,000 of which turn out to be degenerate after the vertex processing stage (i.e. after the vertex shader has run).
I assume it would be more efficient to render only the 5,000 non-degenerate triangles, since that would save the vertex processing overhead for the degenerate ones (assuming the degenerate triangles could somehow be identified and filtered out beforehand, e.g. in a pre-pass). The question is how large this effect usually is in practice. Are there any whitepapers or benchmarks that give concrete numbers on this?
Thanks a lot in advance!