I apologise in advance for the general nature of my query. If anyone is kind enough to answer, I am not looking for specific answers, just direction for further investigation.
I am not an experienced computer programmer, but I have been researching/exploring the use of CUDA and OptiX in various areas, and I cannot find information on some specific topics.
Most of the OptiX information I have found relates to fast, near-real-time, photorealistic ray-traced rendering of geometry.
I have not come across implementations, or code samples, for baking of models - an area in which I have read OptiX is quite useful (a la Bungie’s Vertex Light Baking / HBAO, or Least Squares Vertex Baking).
My interpretation of baking is that a complete ‘static’ scene is analysed via ray tracing (hence OptiX is preferable) and (probably) texture maps are updated/written with new colour values.
The scene can then be recalled with the baked texture maps applied, allowing further ‘real-time but simpler’ shading analysis for more dynamic elements (pre-baked gaming environments etc.).
There seems to be little in the way of actual baking implementations available online (save for https://github.com/nvpro-samples/optix_prime_baking).
My research involves analysing a scene with millions of rays to produce a single, complete solution, in which the ray hits for each polygon are counted and stored.
It is a view-independent solution, with no need for per-pixel calculation - just per-polygon calculation and per-polygon data storage.
The single solution for the entire scene is to be stored for viewing/recall at any time (by simply passing the model to OpenGL), and the resulting scene would colour each polygon according to its number of hits.
I see similarities to the Collision SDK sample, in that the geometry is analysed ‘off screen’ and passed back, but my scenario is not quite the same (i.e. no real-time updating; there is ONLY one answer after the analysis).
The two issues I cannot find info for are:
(i) the update/storage mechanism for counting the ray hits per polygon (not per pixel), and
(ii) the creation/storage of texture data that translates hits to a colour, again per polygon (not per pixel), to be wrapped onto the model.
In my scenario, the any-hit program is the basis (with no standard recursive ray types - it’s almost just a ray-casting exercise).
Obviously OBJ is the geometry format used (I have already automated the scripting of OBJ files for testing), and I assume Vertex Buffer Objects and vertex attributes are relevant.
But the ideas of
(i) OptiX storing/counting hits, and
(ii) OptiX ‘writing’ a re-usable/re-loadable texture
are either uncommon (which I doubt) or so simple/obvious that they have passed me by.
Any direction would be very much appreciated.