Rather than casting a ray from a camera, I’d like to set the values in rtCurrentRay directly and call the intersection program for a geometry instance with them. Is that possible?
What I want to do is basically get the colour of an object at a set of points on its surface.
One way might be to cast rays from just off the surface onto it at each point. I was just wondering if there’s an alternative, as there’s always a slim chance that another object in the scene might be hit instead of the one I want.
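To make that off-surface approach less fragile, a common trick is to start the ray a small epsilon above the point along its normal and aim straight back at it, with tmax clamped to just past the point - then only geometry inside that tiny interval can be hit. A minimal sketch in plain C++ (the types and the epsilon choice are hypothetical, not OptiX API):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

struct ProbeRay {
    Vec3  origin;      // just off the surface, along the normal
    Vec3  dir;         // pointing back down at the sample point
    float tmin, tmax;
};

// Build a short probe ray toward surface point p with unit normal n.
// The tight tmax means unrelated scene objects are very unlikely to be
// hit before the intended surface.
ProbeRay makeProbeRay(const Vec3& p, const Vec3& n, float eps)
{
    ProbeRay r;
    r.origin = { p.x + n.x * eps, p.y + n.y * eps, p.z + n.z * eps };
    r.dir    = { -n.x, -n.y, -n.z };   // unit length if n is unit length
    r.tmin   = 0.0f;
    r.tmax   = 2.0f * eps;             // ends just beyond the sample point
    return r;
}
```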
You can get access to vertex attributes and sample them yourself. I do exactly that for mesh lights.
That can be done via bindless buffer IDs, which you would need to store in some OptiX buffer.
In my case it’s a vertex attribute buffer and an index buffer for the triangles per geometry inside my light definition structure. Then you can simply call a function which samples points on the mesh surface.
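The core of that sampling function, sketched in plain C++ with the bindless buffer fetches replaced by ordinary triangle vertices: pick a uniformly distributed point on a triangle from two uniform random numbers via the square-root warp, and interpolate any vertex attribute with the same barycentric weights.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Uniform point on triangle (a, b, c) from uniform random numbers u1, u2.
// The sqrt warp makes the barycentric distribution uniform in area.
Vec3 sampleTriangleUniform(const Vec3& a, const Vec3& b, const Vec3& c,
                           float u1, float u2)
{
    float su = std::sqrt(u1);
    float w0 = 1.0f - su;          // barycentric weight of a
    float w1 = (1.0f - u2) * su;   // weight of b
    float w2 = u2 * su;            // weight of c
    // Any vertex attribute (position, normal, texcoord) interpolates
    // with these same weights.
    return { w0 * a.x + w1 * b.x + w2 * c.x,
             w0 * a.y + w1 * b.y + w2 * c.y,
             w0 * a.z + w1 * b.z + w2 * c.z };
}
```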
Maybe explain what problem you need to solve instead of asking closed questions.
It would be for generating the input data for a point-based rendering system. The input data is the colour of a set of surface elements in a scene under direct lighting (point lights, spot lights etc). There are lots of direct lighting OptiX examples, but for this application there are some complications - like the fact that I want to work straight from the intersection points rather than calculate them via rays from a camera.
So your problem is how to sample an arbitrary mesh geometry uniformly, or do you have specific sampling requirements, like equidistant points across adjacent primitives, which would complicate the explicit sampling?
If you can sample the surface points explicitly - meaning position, orthonormal basis, texture coordinates etc. - and know their BRDF (material appearance), calculating the incoming light is exactly the same as if you hit that point with some ray.
It doesn’t matter whether that happens within a global illumination integrator or only with direct lighting calculations; the integration of the incoming light is the same from that point on.
That means if you have a working renderer, only the ray generation program would need to be changed. It would need to sample the surface and shoot rays into the scene for light integration as your light transport algorithm requires, that’s all.
If your results are view-independent (only Lambert materials on these surfaces) there wouldn’t even be a need to recalculate these colors for viewpoint changes.
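What such a modified ray generation program would compute per surface element, as a plain C++ sketch with hypothetical types (the shadow ray toward each light is noted but omitted): Lambert direct lighting from point lights, summing intensity * max(0, N.L) / r^2 and scaling by albedo/pi.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct PointLight { Vec3 pos; float intensity; };

// Direct Lambert lighting at point p with unit normal n.
float directLambert(const Vec3& p, const Vec3& n, float albedo,
                    const PointLight* lights, int count)
{
    const float kPi = 3.14159265358979f;
    float out = 0.0f;
    for (int i = 0; i < count; ++i) {
        Vec3 d = { lights[i].pos.x - p.x,
                   lights[i].pos.y - p.y,
                   lights[i].pos.z - p.z };
        float r2  = d.x * d.x + d.y * d.y + d.z * d.z;
        float len = std::sqrt(r2);
        float ndl = (n.x * d.x + n.y * d.y + n.z * d.z) / len;
        if (ndl <= 0.0f) continue;  // light below the horizon
        // A real kernel would trace a shadow ray toward the light here
        // and skip the contribution if occluded.
        out += lights[i].intensity * ndl / r2;  // inverse-square falloff
    }
    return out * albedo / kPi;  // Lambert BRDF = albedo / pi
}
```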
I already have the point sampling part - i.e. a list of surface element positions, normals, and areas. The requirement there is that the surface elements can be clustered into sets of 3-4, then clustered again, and again to make a tree. That’s straightforward for something like a cube, but a bit trickier for a mesh - and I only have a basic implementation of that at the moment.
Yes, that’s the case: I have a working OptiX renderer set up which I’ve been using for a while, and I want to reuse it to get the colour of each surface element. That’s why I was interested in knowing whether I can call the intersection program directly from the ray generation program. Generating rays just off the surface of each surface element is the alternative.
This is where things break down a bit. I would probably have to force my material programs to limit themselves to the view-independent part of whatever colour they generate from the lights - so lose the specular reflection in the global illumination calculation input. Maybe that omission would be noticeable in the final output render, maybe not.
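The split being described can be made concrete with a small sketch (all names hypothetical): a material that evaluates its diffuse and specular terms separately. The diffuse term depends only on N and L, so it can be baked into a surfel colour; a Phong-style specular term also depends on the view direction V, which is exactly what has to be dropped.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot3(const Vec3& a, const Vec3& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// View-independent part: safe to bake per surfel.
float diffuseTerm(const Vec3& n, const Vec3& l, float kd)
{ return kd * std::max(0.0f, dot3(n, l)); }

// View-dependent part: changes with v, so it cannot be baked into a
// view-independent surfel colour.
float specularTerm(const Vec3& n, const Vec3& l, const Vec3& v,
                   float ks, float shininess)
{
    float ndl = dot3(n, l);
    Vec3 r = { 2.0f * ndl * n.x - l.x,    // reflect L about N
               2.0f * ndl * n.y - l.y,
               2.0f * ndl * n.z - l.z };
    return ks * std::pow(std::max(0.0f, dot3(r, v)), shininess);
}
```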