OptixDenoiser support for unsigned char in a future release

Hello,

I followed a tutorial on using the OptixDenoiser to get soft shadows, of which I had already implemented two variants without denoising. The tutorial required changing the image buffer from uint32_t (simply 8 bits for each RGBA channel) to float4. However, this cut my frame rate from about 100 frames per second to merely 10, which is essentially caused by buffer operations (resizing, drawing to, etc.) on the float4 data type. I measured about 50 ms just to resize the 1920x1080 framebuffer whenever that was needed.

I tried to use the OptixDenoiser with the previous uint32_t format. For this, I specified OPTIX_PIXEL_FORMAT_UCHAR4 as the format in the input and output layers. As it turned out, unsigned char is unsupported, which I also found confirmed in a post dating back to OptiX 7.0.
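For reference, the layer setup I tried looked roughly like the following (buffer and size variable names are placeholders, not the exact code from my project):

```
// Sketch of the attempt; d_colorBuffer, width and height are placeholder names.
OptixImage2D inputLayer = {};
inputLayer.data               = d_colorBuffer;             // CUdeviceptr to the RGBA8 pixels
inputLayer.width              = width;
inputLayer.height             = height;
inputLayer.rowStrideInBytes   = width * sizeof(uchar4);    // 4 bytes per pixel
inputLayer.pixelStrideInBytes = sizeof(uchar4);
inputLayer.format             = OPTIX_PIXEL_FORMAT_UCHAR4; // this is what the denoiser rejects
```

The output layer was set up the same way, just pointing at a second buffer.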

So now I’m basically left with the choice of using float4 as the framebuffer format at 10 frames per second, or sticking with uint32_t and increasing the sample count, which ends up at about 20 frames per second without any notable noise.

Therefore, I wanted to ask whether there are any plans to add support for an unsigned char format to the OptixDenoiser and, if so, whether there is a timeline for when to expect that feature. I also considered half, but that would still waste a lot of space.

Kind regards and thank you
Markus

Hi Markus, I believe we are considering adding a 1-byte-per-channel format in the future. I cannot discuss timing, but a couple of things I would recommend in the meantime: try the half-float format, and avoid rendering directly to float4 or resizing in float4. Instead, perhaps use a CUDA kernel to promote to the half-float format immediately before denoising, but not sooner. I would imagine this to be quite a bit faster than drawing and resizing in float4; it will certainly be much, much faster than 50 ms.
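A minimal sketch of what I mean, assuming a device buffer d_rgba8 holding your 8-bit RGBA pixels and a destination buffer d_half4 with four halfs per pixel that you declare as OPTIX_PIXEL_FORMAT_HALF4 to the denoiser (the names are placeholders, not from any existing sample):

```
#include <cuda_fp16.h>

// Convert an 8-bit RGBA framebuffer to 4 halfs per pixel right before denoising.
__global__ void uchar4ToHalf4(const uchar4* src, __half* dst, unsigned int numPixels)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels)
        return;

    const uchar4 p = src[i];
    const float  s = 1.0f / 255.0f;   // map 0..255 to 0..1
    dst[4 * i + 0] = __float2half(p.x * s);
    dst[4 * i + 1] = __float2half(p.y * s);
    dst[4 * i + 2] = __float2half(p.z * s);
    dst[4 * i + 3] = __float2half(p.w * s);
}

// Example launch for a 1920x1080 buffer:
//   const unsigned int numPixels = 1920 * 1080;
//   uchar4ToHalf4<<<(numPixels + 255) / 256, 256>>>(d_rgba8, d_half4, numPixels);
```

For the denoised result you would do the reverse conversion (half back to uchar4) in a second small kernel, so everything outside the denoise call keeps working on the cheap 8-bit buffer.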


David.

Hello David,

thank you for answering! I’ll give it a shot.
