However, I don’t know how to generate the input EXR files.
Please read the OptiX Programming Guide about the denoiser inputs to see what data it expects in the different denoiser models.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#ai_denoiser#nvidia-ai-denoiser
It’s recommended to use the HDR or AOV models.
Note that the optixSphere example wouldn’t need a denoiser because it does not implement a progressive Monte Carlo renderer which would produce any noise; neither does the optixWhitted renderer, which shows some more custom primitives.
How to render multiple spheres, and how to put triangles and spheres in one scene.
OptiX supports triangles and curves (linear, quadratic, cubic B-splines) as built-in geometric primitives.
The triangles have a built-in intersection program which doesn’t need to be set inside the shader binding table hit record for these primitives.
For the curves primitives you would need to query one of the built-in curve intersection programs and set it inside the hit record.
All other primitive types, like spheres, are custom geometric primitives in OptiX for which you need to provide an axis aligned bounding box element per primitive and your own intersection program calculating the ray-primitive intersection. The OptiX SDK contains examples for sphere and parallelogram intersection programs in different apps.
Built-in and custom primitives cannot be inside the same geometry acceleration structure (GAS). That means you would need to build at least two different GAS, one for the triangle geometric primitives and one for the custom sphere geometric primitives.
To get these into a scene, you would place them under an instance acceleration structure (IAS), with the two GAS holding the triangles and the spheres as two instances in that IAS.
(This specific render graph structure is indicated as special case in the OptixPipelineCompileOptions traversableGraphFlags as OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING.)
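On the host side, that IAS-over-two-GAS setup could look roughly like the following sketch (error handling, device uploads, and the optixAccelBuild calls themselves are omitted; the sbtOffset values assume the triangle GAS uses exactly one hit record):

```cpp
// Sketch only: assumes the two GAS have already been built and their
// traversable handles are in gasTriangles and gasSpheres.
OptixPipelineCompileOptions pipelineCompileOptions = {};
// One IAS directly over GAS -> single-level instancing is sufficient.
pipelineCompileOptions.traversableGraphFlags =
    OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING;

const float identity[12] = { 1,0,0,0,  0,1,0,0,  0,0,1,0 };

OptixInstance instances[2] = {};
memcpy(instances[0].transform, identity, sizeof(identity));
instances[0].instanceId        = 0;
instances[0].sbtOffset         = 0;    // hit record(s) of the triangle GAS
instances[0].visibilityMask    = 255;
instances[0].flags             = OPTIX_INSTANCE_FLAG_NONE;
instances[0].traversableHandle = gasTriangles;

memcpy(instances[1].transform, identity, sizeof(identity));
instances[1].instanceId        = 1;
instances[1].sbtOffset         = 1;    // first hit record after the triangle ones
instances[1].visibilityMask    = 255;
instances[1].flags             = OPTIX_INSTANCE_FLAG_NONE;
instances[1].traversableHandle = gasSpheres;

// This instances array (uploaded to device memory) goes into the
// OptixBuildInputInstanceArray of the optixAccelBuild call for the IAS.
```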
This is explained inside the OptiX programming guide chapter on acceleration structures:
https://raytracing-docs.nvidia.com/optix7/guide/index.html#acceleration_structures#acceleration-structures
If you want to render many sphere primitives, the optixSphere example is not a good foundation: it contains only a single hardcoded sphere primitive whose parameters (center, radius) are encoded directly in the additional data of its shader binding table hit record entry. You cannot simply add more hit records with other such sphere parameters and expect that to work, because those additional primitives are not contained inside the geometry acceleration structure (GAS) which is used as the top-level traversable handle for the optixTrace calls.
That means you’re not seeing the sphere you added in another hit record because it doesn’t exist inside the GAS, nor would your second hit record ever be called, because you didn’t adjust the number of shader binding table (SBT) entries used by that single GAS, which in turn requires an SBT instance offset to select the correct SBT hit record entry for your different primitives.
https://raytracing-docs.nvidia.com/optix7/guide/index.html#shader_binding_table#shader-binding-table
This is not a good approach if you want to render many custom primitives. For that you would simply place the sphere parameters (float3 center, float radius) into an array of float4 data, then calculate the axis aligned bounding boxes (AABB) around these spheres which are required for the custom primitive build input in the optixAccelBuild call.
You could then use a single hit record for all these spheres inside that single GAS. The intersection, closest hit, and optional any hit programs in that hit record would use the optixGetPrimitiveIndex function to determine which sphere primitive has been hit. This works the same way for any geometric primitive type. With that index you can retrieve the per-primitive data, in this case the float4 holding the center and radius of the sphere. You could store a pointer to that data array inside the global launch parameters if there is only one such array, or as additional per-hit-record entry data inside the SBT, similar to how it’s done with the hardcoded parameters inside the sphere example you cited above, just as a CUdeviceptr.
That per SBT data is retrieved inside the device programs with the OptiX device function optixGetSbtDataPointer.
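Put together, a closest hit program could fetch the per-primitive sphere data like this sketch (the HitRecordData struct and the program name are illustrative assumptions, not part of the SDK; optixGetPrimitiveIndex and optixGetSbtDataPointer are the real OptiX device functions):

```cuda
#include <optix.h>

// Hypothetical per-hit-record SBT data: a device pointer to the sphere array.
struct HitRecordData
{
    const float4* spheres;   // one float4 per sphere: center in .xyz, radius in .w
};

extern "C" __global__ void __closesthit__sphere()
{
    // Index of the primitive that was hit inside this GAS.
    const unsigned int primIdx = optixGetPrimitiveIndex();

    // Per-hit-record data stored behind the SBT record header.
    const HitRecordData* data =
        reinterpret_cast<const HitRecordData*>(optixGetSbtDataPointer());

    const float4 sphere = data->spheres[primIdx];
    const float3 center = make_float3(sphere.x, sphere.y, sphere.z);
    // ... compute the shading normal from the hit point and center, etc.
}
```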
https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#device-side-functions
How to generate flow/normal/BSDF pictures by optix.
The optixDenoiser example inside the SDK shows which OptiX API entry point functions to use when you have your input images with float or half components, by simply loading some already rendered EXR images which normally use the half format. It works the same way when you have rendered these images yourself and have that data in the respective CUDA memory buffers.
If you’re looking for examples which actually render an image with a path tracer and use the (HDR) denoiser on that, please have a look at the sticky posts of this sub-forum for links to more examples:
https://forums.developer.nvidia.com/t/optix-7-3-release/175373
The last link to the OptiX 7.2 release in there contains links to the OptiX 7 SIGGRAPH Course examples and more advanced examples. Both repositories contain example programs using the denoiser on interactively rendered images.
As you can see from the links to the OptiX Programming Guide, you should work through that and the OptiX SDK examples to learn how things work together.
This developer forum also contains a lot of additional information on top of the OptiX Programming Guide, including questions about sphere intersections and how to manage them in acceleration structures.