Hi, I'm still a beginner with OptiX.
I am working on an OptiX 7.2 application in VS2019, based on a copy of the optixPathTracer project from the OptiX SDK samples.
I want to generate the depth map and normal map of the first path segment.
Currently a normal map is generated, but I don't know if it is correct.
When I try to generate the depth map, the final result is completely white.
The following is my code to generate the depth map and normal map.
// Excerpts from my PathTracing.cu:
extern "C" __global__ void __raygen__rg()
{
    // ...
}

extern "C" __global__ void __closesthit__radiance()
{
    // ...
    const float3 N_0 = normalize(cross(v1 - v0, v2 - v0));
    const float3 N   = faceforward(N_0, -ray_dir, N_0);
    const float3 P   = optixGetWorldRayOrigin() + optixGetRayTmax() * ray_dir;

    RadiancePRD* prd = getPRD();
    prd->normal      = N_0;
    prd->optixdepth  = optixGetRayTmax();
    // ...
}
I have also looked at the optixRaycasting sample, but I did not understand its implementation.
Would you help me solve this problem?
I uploaded my .cu file:
PathTracing.cu (11.5 KB)
There are multiple issues in your code:
1.) The primary ray is the one with depth == 0.
You need to store the normal and optixdepth when depth == 0, not at depth == 1; the latter is after the first bounce.
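Under the assumption that the current bounce depth is available inside the closest-hit program (the SDK sample tracks it in the raygen loop, so you may have to add a `depth` field to your `RadiancePRD`), the check could be sketched like this:

```cuda
// Sketch only -- 'depth' in the PRD is an assumption, not part of the SDK sample.
extern "C" __global__ void __closesthit__radiance()
{
    RadiancePRD* prd = getPRD();

    // ... compute N_0 and the hit point as before ...

    if (prd->depth == 0)  // primary ray: the first hit seen from the camera
    {
        prd->normal     = N_0;
        prd->optixdepth = optixGetRayTmax();
    }
    // ... rest of the radiance shading ...
}
```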
2.) The prd.normal is not initialized.
If the ray doesn't hit anything, i.e. if the primary ray reaches the miss shader, the prd.normal contents are undefined. You should initialize all prd fields to working defaults right after declaring the PRD.
If you initialize the normal to the value you want in the miss case, you don't need to set it inside the miss program at all.
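A minimal sketch of that initialization in the raygen program, with the field names taken from your snippet (`normal`, `optixdepth`) and the other fields as assumptions about your PRD layout:

```cuda
// Sketch: give every PRD field a working default before tracing the primary ray.
RadiancePRD prd;
prd.radiance   = make_float3(0.0f, 0.0f, 0.0f);
prd.normal     = make_float3(0.0f, 0.0f, 0.0f);  // value you want to see on a miss
prd.optixdepth = 1.0f;                           // "far" default for the depth map
prd.done       = false;                          // hypothetical termination flag
```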
Mind that you write the normal with full-range component values in [-1.0, 1.0], so the negative values will not show up when visualizing this as an image for debugging.
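For debugging you can remap the full-range normal into [0, 1] before writing it into the image. A self-contained helper sketch (the name `normal_to_color` is mine):

```cuda
#include <cuda_runtime.h>  // float3, make_float3

// Remap a normal with components in [-1, 1] to a displayable color in [0, 1].
__host__ __device__ inline float3 normal_to_color(const float3 n)
{
    return make_float3(n.x * 0.5f + 0.5f,
                       n.y * 0.5f + 0.5f,
                       n.z * 0.5f + 0.5f);
}
```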
3.) Note that you’re calculating the object space normal.
In the case of the optixPathTracer example the object space is the same as the world space because there are no transformations above the geometry.
With more complex scenes using instance transforms, and if you need world-space normals or camera-space normals for the denoiser in the future, the object-space normal would need to be transformed into world space with the inverse transpose of the object-to-world transform matrix (and then potentially into camera space).
Code which does that can be found here:
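In the meantime, a one-line sketch inside the closest-hit program, using the OptiX device function provided for exactly this purpose:

```cuda
// Transforms an object-space normal into world space, applying the
// inverse transpose of the current object-to-world transformation.
const float3 N_world = normalize(optixTransformNormalFromObjectToWorldSpace(N_0));
```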
4.) Again, your optixdepth is not initialized to a default value, meaning if the primary ray misses, your optixdepth is undefined.
5.) Depending on the scene size, optixGetRayTmax() can be anything between 0.0f and a huge value (usually far bigger than 1.0f).
If you simply write that into the red channel of your result, it will usually be out of gamut and not display as a meaningful color.
You need to map that depth value into a reasonable range, normally [0.0, 1.0], where miss events would be 1.0.
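As a sketch, one simple mapping is a linear remap between chosen near and far distances, clamped so that misses (or anything beyond the far distance) end up at 1.0. The function and parameter names here are my own:

```cuda
#include <cuda_runtime.h>
#include <math.h>  // fminf, fmaxf

// Linearly map a hit distance t in [near_d, far_d] to [0, 1];
// values outside the range (including misses at a huge t) are clamped.
__host__ __device__ inline float depth_to_unit(const float t, const float near_d, const float far_d)
{
    const float d = (t - near_d) / (far_d - near_d);
    return fminf(fmaxf(d, 0.0f), 1.0f);
}
```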
I would recommend writing the radiance, normal and depth values into individual output buffers. Then you can read them back to the host and inspect their contents in a memory window inside the debugger, or print them.
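A sketch of what the writes at the end of the raygen program could look like, assuming you add `normal_buffer` (a `float3*`) and `depth_buffer` (a `float*`) to your launch parameters (they are not in the SDK sample):

```cuda
// Sketch: write each output into its own buffer, one entry per pixel.
const unsigned int image_index = launch_index.y * params.width + launch_index.x;
params.frame_buffer [image_index] = make_color(result);  // radiance as before
params.normal_buffer[image_index] = prd.normal;          // full-range normal
params.depth_buffer [image_index] = prd.optixdepth;      // raw hit distance
```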
6.) Mind that the distance along a ray shot from a single origin is a radial distance. This is not the same as, for example, in OpenGL, where the depth is a planar distance from the camera plane.
That means if you intend to depth-composite this with some rasterizer, you need to calculate the planar distance matching the respective rasterizer projection instead.
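A sketch of that conversion: project the radial distance onto the normalized camera forward axis `W` (the name follows the U, V, W camera basis used in the SDK samples):

```cuda
#include <cuda_runtime.h>

// Convert the radial distance t along a normalized primary-ray direction into
// the planar distance from the camera plane with normalized forward axis W.
__host__ __device__ inline float planar_depth(const float t, const float3 ray_dir, const float3 W)
{
    // Cosine between the ray and the view axis scales radial to planar distance.
    const float cos_theta = ray_dir.x * W.x + ray_dir.y * W.y + ray_dir.z * W.z;
    return t * cos_theta;
}
```

A ray straight down the view axis keeps its distance unchanged, while rays toward the image corners are foreshortened by the cosine.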