Is there an easy way to normalize float values in a buffer? Currently I’m implementing a CUDA program that scans my output buffer for the min/max values and then scales each element accordingly. I feel like this is something that might have a simpler solution.
OptiX doesn’t directly support reduction and scan from within a ray generation program without using atomics (and note that for float, only atomicAdd is supported).
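If running the normalization as a separate CUDA post-process after the launch is acceptable, a library reduction such as Thrust can stand in for the hand-written min/max scan and scaling kernels. Below is a minimal sketch, assuming the output buffer is accessible as a raw float device pointer; the `d_out` pointer, element count `n`, and the `normalizeBuffer` helper name are placeholders, not part of any OptiX API:

```cpp
#include <thrust/device_ptr.h>
#include <thrust/extrema.h>
#include <thrust/functional.h>
#include <thrust/transform.h>

using namespace thrust::placeholders;

// Normalize a device buffer of n floats to [0, 1] in place.
// d_out is a raw device pointer to the output buffer (placeholder name).
void normalizeBuffer(float* d_out, size_t n)
{
    thrust::device_ptr<float> p(d_out);

    // One pass over the buffer to find both extrema.
    auto mm = thrust::minmax_element(p, p + n);
    float lo = *mm.first;   // dereferencing copies the value to the host
    float hi = *mm.second;
    if (hi == lo) return;   // constant buffer; nothing to scale

    // Second pass: rescale every element in place using a placeholder expression.
    thrust::transform(p, p + n, p, (_1 - lo) / (hi - lo));
}
```

`minmax_element` fetches both extrema in a single reduction and the `transform` rescales in place, so the whole thing is two passes over the buffer. Whether that is actually simpler than your current kernels depends on how much of your post-processing is already plain CUDA.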