OptiX, OptiX Prime, Compatibility with CPU and RTX

Yes, you can simply use the OptiX high-level API for that.
There is a small example inside the SDK called optixRaycasting demonstrating that.
There will be future API changes which will allow implementing this more efficiently.

The current benefit of that approach is access to everything OptiX offers over OptiX Prime: more complex scene graphs, custom geometric primitives, any-hit ray continuation, programmable queries and hit records, motion blur, etc.
Note that the aforementioned considerations in this thread about memory bandwidth will still apply.

Thank you Detlef for pointing me toward that example!

Very good discussion.

We too are attempting to take advantage of GPU acceleration for the ray-tracing portion of our software’s workflow. Currently we use Intel Embree, and we will require a CPU fallback going forward. Glad to hear that OptiX Prime isn’t deprecated. Eventually, we would like to take advantage of some of the features of the full OptiX API. Given that the API difference between OptiX and OptiX Prime is likely smaller than the difference between OptiX and Embree, we might consider replacing Embree with OptiX Prime as our CPU fallback.

Is OptiX Prime’s CPU fallback based on Embree, like AMD’s RadeonRays/FireRays CPU fallback, or is it something NVIDIA created in house? If the latter, has anyone benchmarked the difference between OptiX Prime in CPU mode and Embree?
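For reference, switching OptiX Prime between the GPU path and its CPU fallback is just a context-type flag at context creation. Below is a minimal closest-hit query sketch using the OptiX Prime C API as shipped in the OptiX 5.x SDK (it requires `optix_prime.h` from the SDK to build; the triangle data, ray, and buffer sizes here are made up for illustration, and error checking is omitted):

```cpp
// Minimal OptiX Prime closest-hit query sketch.
// Requires the OptiX SDK's optix_prime/optix_prime.h; error handling omitted.
#include <optix_prime/optix_prime.h>
#include <vector>

int main()
{
    // RTP_CONTEXT_TYPE_CPU selects the CPU fallback path;
    // RTP_CONTEXT_TYPE_CUDA would select the GPU path instead.
    RTPcontext context;
    rtpContextCreate(RTP_CONTEXT_TYPE_CPU, &context);

    // One triangle in the z = 0 plane (illustrative data).
    std::vector<float> vertices = { 0,0,0,  1,0,0,  0,1,0 };
    std::vector<int>   indices  = { 0, 1, 2 };

    RTPbufferdesc vertsDesc, indsDesc;
    rtpBufferDescCreate(context, RTP_BUFFER_FORMAT_VERTEX_FLOAT3,
                        RTP_BUFFER_TYPE_HOST, vertices.data(), &vertsDesc);
    rtpBufferDescSetRange(vertsDesc, 0, 3);
    rtpBufferDescCreate(context, RTP_BUFFER_FORMAT_INDICES_INT3,
                        RTP_BUFFER_TYPE_HOST, indices.data(), &indsDesc);
    rtpBufferDescSetRange(indsDesc, 0, 1);

    // Build the acceleration structure over the triangles.
    RTPmodel model;
    rtpModelCreate(context, &model);
    rtpModelSetTriangles(model, indsDesc, vertsDesc);
    rtpModelUpdate(model, RTP_MODEL_HINT_NONE);

    // One ray: origin, tmin, direction, tmax.
    std::vector<float> rays = { 0.25f, 0.25f, -1.0f, 0.0f,
                                0.0f,  0.0f,   1.0f, 1e30f };
    std::vector<float> hits(4); // t, triId, u, v per ray

    RTPbufferdesc raysDesc, hitsDesc;
    rtpBufferDescCreate(context, RTP_BUFFER_FORMAT_RAY_ORIGIN_TMIN_DIRECTION_TMAX,
                        RTP_BUFFER_TYPE_HOST, rays.data(), &raysDesc);
    rtpBufferDescSetRange(raysDesc, 0, 1);
    rtpBufferDescCreate(context, RTP_BUFFER_FORMAT_HIT_T_TRIID_U_V,
                        RTP_BUFFER_TYPE_HOST, hits.data(), &hitsDesc);
    rtpBufferDescSetRange(hitsDesc, 0, 1);

    // Run the closest-hit query; results land in the hits buffer.
    RTPquery query;
    rtpQueryCreate(model, RTP_QUERY_TYPE_CLOSEST, &query);
    rtpQuerySetRays(query, raysDesc);
    rtpQuerySetHits(query, hitsDesc);
    rtpQueryExecute(query, 0);

    rtpContextDestroy(context);
    return 0;
}
```

The point being: because the Prime API is identical for both context types, a CPU fallback costs one branch at startup, which is part of why it is an attractive Embree replacement even before any benchmarking.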