I’m brand new to OptiX, so apologies if this is a dumb or easy question.
I’m working on a problem involving a physics detector, where I need to determine which points are visible to two particular objects. I’ll have an .obj file of the scene geometry, containing a cylinder A and a detector B, and for every “point” (a square of some defined resolution, e.g. 1 cm x 1 cm) I want to color it red if it’s visible to both, green if it’s visible only to the cylinder, and otherwise leave it grey. Then I want to export this as an OBJ file with images for textures. Effectively it seems to be just a lighting problem.
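To make the coloring rule concrete, here is a minimal sketch of the per-point classification in plain Python. The occlusion test against a list of spheres is purely a stand-in for illustration (in OptiX this would become a hardware-accelerated ray query against the real .obj geometry), and the helper names are my own, not from any library:

```python
import numpy as np

def segment_hits_sphere(p, q, center, radius):
    # Does the segment p -> q pass through the sphere?  Naive stand-in for
    # a real occlusion/shadow ray against the scene geometry.
    d = q - p
    f = p - center
    a = d @ d
    b = 2.0 * (f @ d)
    c = f @ f - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    s = np.sqrt(disc)
    # Intersection parameters along the segment; only hits strictly
    # between the endpoints count as occlusion.
    return any(0.0 < t < 1.0 for t in ((-b - s) / (2 * a), (-b + s) / (2 * a)))

def classify(point, cylinder_pt, detector_pt, occluders):
    """Color rule from the post: red if the point sees both targets,
    green if it sees only the cylinder, grey otherwise."""
    def visible(target):
        return not any(segment_hits_sphere(point, target, c, r)
                       for c, r in occluders)
    if visible(cylinder_pt):
        return "red" if visible(detector_pt) else "green"
    return "grey"
```

In a GPU version, each point becomes one thread, and the two `visible` checks become two shadow rays.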
I implemented this in Python, but as expected it’s very slow, and OptiX having acceleration structures built in seems very nice, as does the parallelization. I wanted to see if anyone had any helpful advice or resources for implementing this kind of code, mainly any relevant examples or documentation, or specific applicable functions.
Hey @luc.barrett57, welcome!
It might help to know that in film & games this type of setup is often called “baking” or “texture baking”. For example, people sometimes create textures for an effect called “ambient occlusion”, which is really just a visibility query similar to what you describe: the idea is to darken corners, tight spaces, and contact points between objects or walls, because it looks good. They pre-compute these textures ahead of time by calculating the value needed for every ‘texel’ (pixel of the texture map), casting rays from wherever that texel sits in space and computing the color based on whatever the rays hit. Then they can use these ‘baked’ textures to show nice lighting at run time without having to do the visibility calculations in real time.
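The baking loop described above is simple to sketch. This is a hypothetical Python outline, with `texel_to_world` and `shade` as placeholder callbacks you would supply (in OptiX, each texel would instead be one thread of the launch, and `shade` would cast the visibility rays):

```python
def bake_texture(width, height, texel_to_world, shade):
    """For every texel, find where it sits in space and compute its color
    there.  `texel_to_world(u, v)` maps normalized texture coordinates to a
    world-space position; `shade(pos)` runs the visibility query and
    returns a color."""
    image = [[None] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            # Sample at the texel center.
            u = (i + 0.5) / width
            v = (j + 0.5) / height
            image[j][i] = shade(texel_to_world(u, v))
    return image
```

For your problem, `shade` would return red/green/grey per the visibility rule, and the resulting image becomes the texture referenced by the exported OBJ/MTL.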
OptiX lets you define what your emitter & detector shapes look like, as well as what information your rays carry and what kind of information you export, so it’s pretty good for doing simulation and visibility work that is not specific to rendering or lighting. You’ll find there are a few other people in this forum channel that have done similar things, you might also search for terms like ‘emitter’, ‘source’, ‘detector’, and ‘receiver’, or similar kinds of words that physicists & non-graphics engineers might use, since this topic is not entirely uncommon here.
I’m sorry I don’t know of any template examples off the top of my head, but I suspect it will be fairly easy to adapt one of our SDK samples once you have your head around OptiX, and knowing a few terms to Google should at least give you more relevant examples to start from. My guess is that if you focus on converting a raygen program from a perspective camera into your surface detector, you’ll be more than halfway there, and most of that work is the vector math for calculating where to cast each ray from, not anything OptiX-specific.
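For the vector math that replaces the camera model, here is a rough sketch in Python of what a raygen-style mapping might look like: each 2D launch index maps to a cell on a planar detector patch, and a ray is cast from that cell toward a target point. The patch parameterization (`corner`, `edge_u`, `edge_v`) is my own assumed setup, not anything from the SDK:

```python
import numpy as np

def detector_ray(ix, iy, nx, ny, corner, edge_u, edge_v, target):
    """Map launch index (ix, iy) in an (nx, ny) grid to a ray.
    The detector patch is the parallelogram corner + u*edge_u + v*edge_v.
    Returns (origin, unit direction, distance); the distance would serve
    as tmax for a shadow ray toward `target`."""
    u = (ix + 0.5) / nx
    v = (iy + 0.5) / ny
    origin = corner + u * edge_u + v * edge_v   # cell center on the patch
    direction = target - origin
    dist = np.linalg.norm(direction)
    return origin, direction / dist, dist
```

In an actual OptiX raygen program, `(ix, iy)` would come from `optixGetLaunchIndex()` and the resulting origin/direction/tmax would feed `optixTrace()`; everything else is just this kind of arithmetic.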