Hey there,
I am using OptiX to try to write my direct lighting into a UV map. Here is an outline of my algorithm:
- I start from the area light source, dividing each light source into cells (stratified sampling) and drawing samples accordingly. For each light source I have an origin in world coordinates, as well as a du vector (parallel to the width of the source) and a dv vector; the width and height of the light source and its normal are also stored. The light source data is passed to the device programs via the launch parameters, so I can calculate the world position of each sample point from the light's origin plus an offset along du and dv (see the sketch below).
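To make this concrete, the sample-position computation looks roughly like the following (a minimal sketch; light, cellX, cellY, and rnd() are simplified stand-ins for my actual names, with rnd() returning a uniform float in [0,1)):

// Sketch: stratified sample position on the area light, in world space.
// cellX/cellY index the stratum; jitter within the cell, then offset along du/dv.
const float sx = (cellX + rnd()) / float(STRATIFIED_X_SIZE);
const float sy = (cellY + rnd()) / float(STRATIFIED_Y_SIZE);
const glm::vec3 samplePos = light.origin
    + sx * light.width  * light.du   // du: unit vector along the light's width
    + sy * light.height * light.dv;  // dv: unit vector along the light's height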
- The ray direction is randomized within the upper hemisphere that lies on the same side as the area light source's normal, roughly as in the sketch below.
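(Sketch of uniform hemisphere sampling around light.normal, with rnd() as above; the basis construction is a standard trick, not my exact code:)

// Sketch: uniform direction in the hemisphere on the normal's side.
const float z   = rnd();                          // cos(theta), uniform in [0,1)
const float rxy = sqrtf(fmaxf(0.f, 1.f - z * z)); // sin(theta)
const float phi = 2.f * 3.14159265f * rnd();
// Orthonormal basis (t, b, n) around the normalized light normal:
const glm::vec3 n = light.normal;
const glm::vec3 t = glm::normalize(
    glm::cross(fabsf(n.x) > 0.9f ? glm::vec3(0, 1, 0) : glm::vec3(1, 0, 0), n));
const glm::vec3 b = glm::cross(n, t);
const glm::vec3 rayDir = rxy * cosf(phi) * t + rxy * sinf(phi) * b + z * n;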
- The ray is sent into the scene. I use a 2D vector as PRD, with the intention of storing the interpolated texture (UV) coordinate of the geometry that is hit first. In my closest-hit program I therefore calculate the hit point's UV coordinate as follows:
const int primID = optixGetPrimitiveIndex();
const glm::ivec3 index = sbtData.index[primID];
const float u = optixGetTriangleBarycentrics().x;
const float v = optixGetTriangleBarycentrics().y;

// Interpolate the vertex texture coordinates using the barycentrics
const glm::vec2 tc
    = (1.f - u - v) * sbtData.texcoord[index.x]
    +             u * sbtData.texcoord[index.y]
    +             v * sbtData.texcoord[index.z];
I then simply write it to my PRD.
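The PRD is handled the way Ingo Wald's course does it (a pointer packed into payload registers 0 and 1); assuming those course helpers, the write is just:

// Sketch: fetch the per-ray glm::vec2 through the payload pointer and store the UV.
glm::vec2 &rayTexCoordPRD = *getPRD<glm::vec2>(); // unpacks optixGetPayload_0/1
rayTexCoordPRD = tc;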
- Then, at the end, in my raygen program, I take the UV coordinate from the PRD and use it to calculate my pixel index into the color buffer (which is a uint32_t*):

const uint32_t x = uint32_t(rayTexCoordPRD.x * optixLaunchParams.directLightingTexture.size);
const uint32_t y = uint32_t(rayTexCoordPRD.y * optixLaunchParams.directLightingTexture.size);
const uint32_t uvIndex = y * optixLaunchParams.directLightingTexture.size + x; // row-major

I let the light contribute to that pixel by adding a weighted gray value (each channel gets the same contribution).
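The packing looks roughly like this (sketch; contribution is a hypothetical name for the weighted gray value in [0,1], and in my real code I accumulate into the texel rather than overwrite it):

const uint32_t gray   = uint32_t(fminf(contribution, 1.f) * 255.f);
const uint32_t packed = 0xff000000                         // opaque alpha
                      | (gray << 16) | (gray << 8) | gray; // same value in R, G, B
optixLaunchParams.directLightingTexture.colorBuffer[uvIndex] = packed;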
In my host program, I download the color buffer from the GPU and try to write it to an image using stb_image_write, but the output remains a fully black image with some fairly random-looking colored pixels in the first few rows. I tried writing hardcoded values to hardcoded indices to see how that would change the result, but the result stays exactly the same.
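For completeness, the host side does roughly this (sketch; directLightingBuffer is a hypothetical name for the CUDABuffer, from Ingo Wald's course helpers, that backs the color buffer):

std::vector<uint32_t> pixels(textureSize * textureSize);
directLightingBuffer.download(pixels.data(), pixels.size()); // device -> host copy
stbi_write_png("direct_lighting.png", textureSize, textureSize,
               /*comp=*/4, pixels.data(),
               /*stride_in_bytes=*/int(textureSize * sizeof(uint32_t)));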
I think I am overlooking something in the buffer management or in optixLaunch's parameters. Here is my launch call:
// Launch direct lighting pipeline
OPTIX_CHECK(optixLaunch(
    directLightPipeline->pipeline, stream,
    directLightPipeline->launchParamsBuffer.d_pointer(),
    directLightPipeline->launchParamsBuffer.sizeInBytes,
    &directLightPipeline->sbt,
    scene.amountLights(), // dimension X: the light we are currently sampling
    STRATIFIED_X_SIZE,    // dimension Y: number of cells of the stratified sample grid in the X direction (on the light)
    STRATIFIED_Y_SIZE     // dimension Z: number of cells of the stratified sample grid in the Y direction (on the light)
    // dimension X * dimension Y * dimension Z CUDA threads will be spawned
));
A device pointer to the color buffer itself is passed to the device programs via the launch parameters. I first allocate memory for the buffer, namely textureSize * textureSize * sizeof(uint32_t) bytes. I pretty much followed the same steps for this buffer as for the color buffer in Ingo Wald's OptiX course, which traces rays from a camera POV.
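The setup looks roughly like this (sketch; CUDABuffer is the helper class from that course, and directLightingBuffer and the launch-params member names are slightly simplified from my actual code):

CUDABuffer directLightingBuffer;
directLightingBuffer.alloc(textureSize * textureSize * sizeof(uint32_t));
launchParams.directLightingTexture.colorBuffer =
    (uint32_t *)directLightingBuffer.d_pointer();
launchParams.directLightingTexture.size = textureSize;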
The only thing that seems different to me here is that my launch size is not necessarily equal to the size of my light-bake texture (I launch amountLights * STRATIFIED_X_SIZE * STRATIFIED_Y_SIZE threads, not one thread per texel). Is there anything else I might be overlooking?