Hello developers,
First of all I would like to thank all the developers here who supported me in my work.
I have developed an OptiX application for solving radiation equations. Now I want to run the same application entirely on the CPU to compare computation times between CPU and GPU.
Is that possible? I am using OptiX 3.8.
Not really. For a meaningful comparison of the ray-tracing performance itself, you would need a fully optimized CPU implementation of the same algorithms used inside OptiX, and that doesn't exist.
I would also strongly recommend updating to OptiX 3.9.0 and remeasuring your current data.
Just look at the list of performance improvements in the OptiX 3.9.0 release notes:
[i]Enhancements in OptiX 3.9
- CUDA 7.5 Toolkit support.
- Trbvh builds are now twice as fast in OptiX.
- Faster ray tracing of massive models in both OptiX and OptiX Prime.
- Access to new texture types available in CUDA 7.5, including MIP-mapped textures, cube textures, and layered textures via gather and fetch.
- Support for anisotropic texture filtering.
- Support for half-float textures.
- Reduced CPU overhead for very large node graphs.
- Prime now supports a watertight ray-triangle intersection mode for improved robustness.
- Added a new dynamicGeometry sample that illustrates performance alternatives for rigid-body motion with large node graphs.
- OpenGL interop now supports GL_SRGB8 and GL_SRGB8_ALPHA8 modes.
- Fixed a bug where accessing textures on Fermi always returned black.
- Various bug fixes to VCA support.
- Up to 7x faster compile times for very large user code when using R358 or later drivers on Maxwell GPUs.
- Various bug fixes and improvements to sample code.
- Various bug fixes and performance improvements to acceleration structure builders and traversers.[/i]
Thank you for your response; I will update my OptiX.
I'm wondering what to do next. I want to compare CPU and GPU computation times to highlight the importance of GPUs in this field. Can you suggest anything?