OptixPathTracer captured image brighter than the rendered one?

I am trying to capture the rendered image in a custom/test renderer based on the optixPathTracer OptiX sample.
It seems I have followed the basic steps (as shown in the optixPathTracer sample) to capture the rendered image; however, the captured image (on the right) is much brighter than the rendered one (left cube).


I did the same test with the optixPathTracer sample and could hardly notice any difference between the rendered and captured images.
Is this extra brightness due to the PNG conversion? Is there a way to fix this?

It’s definitely fixable, but the fix will depend on what is different in your test renderer vs the OptiX path tracer sample, and what parts of the path tracer sample you’d like to keep. Try to map out the color handling completely on both sides and locate the difference.

For the SDK sample, take a look at the make_color() function called in raygen. It is defined in helpers.h, and it calls another helper function named toSRGB(). This function converts from linear colors to the sRGB color space, which involves a gamma value of 2.4. It will change the apparent brightness if you compare the values before & after; however, the linear colors aren’t intended to be viewed directly. PNG files are commonly stored with pixel values assumed to be sRGB as well.


Oh yeah, and also see the sutil::saveImage() declaration:

// Floating point image buffers (see BufferImageFormat above) are assumed to be
// linear and will be converted to sRGB when writing to a file format with 8
// bits per channel.  This can be skipped if disable_srgb is set to true.
// Image buffers with format UNSIGNED_BYTE4 are assumed to be in sRGB already
// and will be written like that.
SUTILAPI void        saveImage( const char* filename, const ImageBuffer& buffer, bool disable_srgb );

I tried disabling the sRGB when calling the sutil::saveImage function but nothing changed.

What puzzles me a bit is the different behavior in rendering/capturing. I am now double-checking whether the OptiX pipeline of my renderer is identical to the optixPathTracer sample.

Just a quick question,
since I haven’t modified anything in raygen, aren’t functions make_color() and toSRGB() called in the same manner when rendering or capturing?

I tried disabling the sRGB when calling the sutil::saveImage function but nothing changed.

Yeah, note especially: “Image buffers with format UNSIGNED_BYTE4 are assumed to be in sRGB already”. If you want to play with disabling sRGB, you’ll have to modify the code inside saveImage().

since I haven’t modified anything in raygen, aren’t functions make_color() and toSRGB() called in the same manner when rendering or capturing?

Yes, that’s true for optixPathTracer, and if you copied raygen and didn’t modify it, then your test renderer will do the same, it should always convert linear color to sRGB. Something might be different on the display end, or you can check whether the sRGB conversion somehow happened twice on the file output side.


Checking out the make_color() function helped me remember a potential source of the differences between capturing and rendering.

In my test renderer the pixel format of the host buffer has to be float4. So, in order to reuse the OptiX device buffer, I used the float4* accum_buffer instead of the uchar4* frame_buffer:

// allocate memory for host buffer
float4* host_buffer_data = (float4*)malloc( buffer_size * sizeof( float4 ) );

CUDA_CHECK( cudaMemcpy( host_buffer_data, state.params.accum_buffer,
                        buffer_size * sizeof( float4 ), cudaMemcpyDeviceToHost ) );

(To be honest I am not 100% confident about the actual difference between accum_buffer and frame_buffer.)

In the capture image part of the path tracer I have:

sutil::CUDAOutputBufferType capture_output_buffer_type = sutil::CUDAOutputBufferType::CUDA_DEVICE;
sutil::CUDAOutputBuffer<uchar4> capture_output_buffer( capture_output_buffer_type,
                                                       state.params.width,
                                                       state.params.height );
sutil::ImageBuffer buffer;
buffer.data         = capture_output_buffer.getHostPointer();
buffer.width        = capture_output_buffer.width();
buffer.height       = capture_output_buffer.height();
buffer.pixel_format = sutil::BufferImageFormat::UNSIGNED_BYTE4;
sutil::saveImage( outfile.c_str(), buffer, false );

I think I am getting closer to the solution of the brightness problem.
If I understand correctly, my renderer’s buffer is in float4 pixel format, whereas the path tracer sample captures the image in uchar4.
I would like to ask two questions:

  1. Is it correct to use the accum_buffer and not the frame_buffer to “feed” my host buffer when I need it in float4 format?

  2. Is it possible to make the capture part to work with state.params.accum_buffer?

sutil::ImageBuffer buffer;
buffer.data         = state.params.accum_buffer;
buffer.width        = state.params.width;
buffer.height       = state.params.height;
buffer.pixel_format = sutil::BufferImageFormat::FLOAT4;
sutil::saveImage( outfile.c_str(), buffer, true );

Let’s first clarify what the accum_buffer and frame_buffer are exactly.

The optixPathTracer sample is doing progressive rendering, meaning each frame is rendered with a low number of samples, in order to maintain a high frame rate for interactivity. The resulting image from a single frame is being accumulated and averaged into the final displayed result so that you get the benefit of taking many samples per pixel over many frames.

accum_buffer is for doing the averaging math. In order for the math to be correct and for the colors to not degrade, this buffer is stored in “linear color space” and is 32 bits per channel.

frame_buffer is the final gamma-corrected display buffer, and it’s used for both the OpenGL display buffer and the PNG file output. This buffer is 8 bits per channel, for display & file output space efficiency, and the values are encoded in sRGB color space. Because it’s low bits per channel, and because it’s sRGB encoded, the progressive averaging cannot be done on the frame buffer.

I’m not certain what you mean by ‘capture’ exactly - whether it’s the result of a single launch, or the accumulated final image. In optixPathTracer, there is no explicit buffer that represents the result of any single launch. The raygen program does the accumulation & averaging math directly into the accum_buffer. The conversion from float4 to uchar4 is done in raygen by calling make_color(), which in turn is basically make_uchar4( toSRGB( color ) ). The SDK sample effectively outputs both of these buffers - the accum_buffer is fed back into the next launch, and the frame_buffer is displayed or saved as the accumulated result.

So, in your host program, if you want to use the float4 linear colors for something, then use accum_buffer, and if you want the sRGB 8-bit colors, then use frame_buffer… assuming the result of the rendering and setup is the same as the optixPathTracer sample. If you want an explicit buffer for the results of any single frame’s launch, then you could make a third buffer. You would want it to also be float4 and linear color space, the same as accum_buffer.

