I’m using OptiX to bake some texture data for a mesh. To do that, I trace a ray through each texel and intersect it against the mesh’s 2D texture-space (UV) geometry, then pass the interpolated 3D position/normal to the closest hit program. In the closest hit program, I need to cast shadow rays. However, these shadow rays need to be cast against the 3D geometry (not the previous 2D texture geometry). Is there a way to specify that a different intersection program be used in a call to rtTrace?
If that is not possible, I guess a workaround would be to create two copies of the optix::Geometry with different intersection programs (and also two rtObject nodes), then pass the rtObject holding the 3D mesh to rtTrace for the shadow rays.
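Roughly what I have in mind for the closest hit program; this is only a sketch, and top_3d_geometry, shadow_ray_type, and the world_position/world_normal attributes are placeholder names:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

rtDeclareVariable(rtObject,     top_3d_geometry, , );  // group holding the 3D mesh
rtDeclareVariable(float,        scene_epsilon, , );
rtDeclareVariable(unsigned int, shadow_ray_type, , );
rtDeclareVariable(float3,       light_pos, , );

// Attributes the 2D texture-space intersection program would interpolate
rtDeclareVariable(float3, world_position, attribute world_position, );
rtDeclareVariable(float3, world_normal,   attribute world_normal, );

struct ShadowPRD { float attenuation; };

RT_PROGRAM void closest_hit()
{
    const float3 p = world_position;
    const float3 L = optix::normalize(light_pos - p);

    ShadowPRD prd;
    prd.attenuation = 1.0f;
    optix::Ray shadow_ray = optix::make_Ray(p, L, shadow_ray_type, scene_epsilon,
                                            optix::length(light_pos - p));
    // The point of the workaround: trace against the top node over the
    // 3D mesh, not the 2D texture-space geometry that produced this hit.
    rtTrace(top_3d_geometry, shadow_ray, prd);
    // ... use prd.attenuation when writing the baked texel ...
}
```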
Wait, why exactly do you need to “intersect against a 2D texture geometry” inside the scene at all?
When baking information per texel, the standard approach is to explicitly calculate the origin and direction of the primary rays from the baked texture’s mapping onto the 3D geometry surface. Since that generates the primary rays, it belongs inside the ray generation program, which gathers the incoming information from these rays and writes the result per texel; the texture resolution gives your launch dimensions. Do that progressively and you can gather as much detail as required.
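A minimal sketch of such a ray generation program, assuming the per-texel world-space positions and normals have been precomputed into buffers (all buffer and variable names here are illustrative):

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

rtDeclareVariable(rtObject, top_object, , );
rtDeclareVariable(uint2,    launch_index, rtLaunchIndex, );
rtDeclareVariable(float,    scene_epsilon, , );

rtBuffer<float3, 2> texel_positions;  // world-space position per texel
rtBuffer<float3, 2> texel_normals;    // world-space normal per texel
rtBuffer<float4, 2> output_buffer;    // launch dimensions == texture dimensions

struct PerRayData { float3 result; };

RT_PROGRAM void bake_raygen()
{
    const float3 origin    = texel_positions[launch_index];
    const float3 direction = texel_normals[launch_index];  // or sample a hemisphere around it

    PerRayData prd;
    prd.result = optix::make_float3(0.0f);

    optix::Ray ray = optix::make_Ray(origin, direction, 0, scene_epsilon, RT_DEFAULT_MAX);
    rtTrace(top_object, ray, prd);

    // Accumulate over multiple launches for progressive refinement.
    output_buffer[launch_index] = optix::make_float4(prd.result, 1.0f);
}
```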
No. The intersection and bounding box programs are per Geometry. You normally have one intersection and one bounding box program per geometric primitive type (e.g. triangle, sphere, etc.).
If you want to have different behaviours for the hit primitives, you can handle that via different materials holding different any hit and closest hit programs per ray type.
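For example, on the host side with the C++ wrapper, assuming an existing context and geometry and two ray types (0 = radiance, 1 = shadow); file and program names are placeholders:

```cpp
optix::Material material = context->createMaterial();
material->setClosestHitProgram(0, context->createProgramFromPTXFile("hit.ptx", "closest_hit_radiance"));
material->setAnyHitProgram(1, context->createProgramFromPTXFile("hit.ptx", "any_hit_shadow"));

// The same Geometry (same intersection/bounds programs) can behave
// differently per ray type purely through the material's programs.
optix::GeometryInstance gi =
    context->createGeometryInstance(geometry, &material, &material + 1);
```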
To make things invisible to some rays but not others, you can use the any hit program, for example.
The OptiX Introduction examples explain that for cutout opacity: https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/
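The core of the cutout opacity technique is just an any hit program that calls rtIgnoreIntersection(); a minimal sketch, with a placeholder opacity texture:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

rtDeclareVariable(float2, texcoord, attribute texcoord, );
rtTextureSampler<float4, 2> opacity_map;  // placeholder opacity texture

RT_PROGRAM void any_hit_cutout()
{
    // Treat texels below the threshold as holes: the ray continues
    // as if this surface were not there.
    if (tex2D(opacity_map, texcoord.x, texcoord.y).w < 0.5f)
        rtIgnoreIntersection();
}
```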
Well, my idea was to cast a ray through each texel, find the barycentric coordinates of the intersected 2D texture triangle, and use those to interpolate the 3D position/normal and continue processing in 3D from there.
Otherwise, when processing each texel, how would you determine which triangle it belongs to?
Well, you could use OptiX to precompute the mapping from UV space to 3D space if you like; I don’t see why that wouldn’t work. It’s probably overkill for a pre-process, but as you said, you could place the UV coordinates of the triangles into an acceleration structure in OptiX, then shoot rays through the texel centers to find the triangle id and barycentric coordinates for each texel. The “per texel, per triangle” algorithm described in the link above is the simpler linear version of this.
I would offset the rays and start them behind the UV plane: e.g. if you use (X=u, Y=v, Z=0) for the 2D vertices, then I would start the rays at a Z coordinate like -1, not 0. Otherwise you would be doing a point query on a 2D triangle, and that might stretch the robustness of the acceleration structure.
This would be a completely different acceleration structure, bounding box program, and intersection program from the ones you use later when computing the final color for each texel.
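A rough sketch of what that pre-process ray generation program could look like, with illustrative buffer and payload names; the corresponding intersection and closest hit programs would report the primitive index and barycentrics through attributes into the payload:

```cuda
#include <optix.h>
#include <optixu/optixu_math_namespace.h>

rtDeclareVariable(rtObject, top_uv_geometry, , );  // accel over UV triangles at (u, v, 0)
rtDeclareVariable(uint2,    launch_index, rtLaunchIndex, );
rtDeclareVariable(uint2,    launch_dim,   rtLaunchDim, );

rtBuffer<int,    2> triangle_ids;   // primitive index per texel (-1 = texel not covered)
rtBuffer<float2, 2> barycentrics;   // barycentric coordinates per texel

struct MapPRD { int tri_id; float2 bary; };

RT_PROGRAM void uv_map_raygen()
{
    // Texel center in [0,1]^2
    const float u = (launch_index.x + 0.5f) / launch_dim.x;
    const float v = (launch_index.y + 0.5f) / launch_dim.y;

    MapPRD prd;
    prd.tri_id = -1;
    prd.bary   = optix::make_float2(0.0f);

    // Start behind the UV plane and shoot along +Z instead of doing
    // a point query at Z = 0.
    optix::Ray ray = optix::make_Ray(optix::make_float3(u, v, -1.0f),
                                     optix::make_float3(0.0f, 0.0f, 1.0f),
                                     0, 0.0f, RT_DEFAULT_MAX);
    rtTrace(top_uv_geometry, ray, prd);

    triangle_ids[launch_index] = prd.tri_id;
    barycentrics[launch_index] = prd.bary;
}
```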