After upgrading my dev environment from CUDA 5.0 / OptiX 3.0.0 to CUDA 6.0 / OptiX 3.6.2, I really expected something new, or at least better or faster. The new feature that looked interesting is the TRBVH builder, but with every model I tried, TRBVH isn't any faster than SBVH.
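For clarity, switching between the two builders is essentially a one-string change on the host side. A minimal OptiX 3.x sketch of what I mean (the group and buffer-variable names here are placeholders, not my actual scene setup):

```cpp
// Illustrative OptiX 3.x host-side fragment; "context", "geometry_group"
// and the buffer names are placeholders for the real scene objects.

// TRBVH (new since OptiX 3.5): built on the GPU, much faster to build.
optix::Acceleration accel = context->createAcceleration("Trbvh", "Bvh");

// SBVH: slower CPU-side build, traditionally the best trace performance.
// optix::Acceleration accel = context->createAcceleration("Sbvh", "Bvh");
// accel->setProperty("vertex_buffer_name", "vertex_buffer");
// accel->setProperty("index_buffer_name", "index_buffer");

geometry_group->setAcceleration(accel);
```

With that single swap, I see TRBVH build much faster, but trace performance on my models is no better than SBVH.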
Also, my OptiX kernels were using local variables to buffer global memory accesses and speed them up. These kernels still build but simply don't run anymore*, as if the available local memory per thread had decreased. Of course the hardware and the nvcc parameters are the same as before, but the resulting performance is (disappointingly) not.
The most annoying difference comes from CUDA 6.0: Visual Studio 10, 11, or 12 is now mandatory, while the (completely FREE) MS Windows SDK 7.1 worked perfectly with CUDA 5.0. Also, the 64-bit compilers are NOT part of the VS Express editions, so unless you use less than 2 GB of memory (a case as lucky as it is rare in CG) you'll need to BUY MS products to use free NVIDIA products.
!?!?!? DID I MISS SOMETHING !?!?!? I'd rather pay NVIDIA directly.
Long story short, can someone point out one REAL advantage of switching to the new OptiX release?
I just found it annoying to fix all the paths in the project, install VS 2010, and fix my nvcc build scripts, only to find that... there is no advantage (well, there is one, but not for me; rather for MS's business). So, did I just waste my time?
Thanks for reading!
* The working 3.0.0 code simply looks like this (the same code no longer works with 3.6.2):
// copy the buffer element into a per-thread local variable
PackedOutputStruct out = outputBuffer[launch_index];
// ... all processing reads and writes the local copy 'out' ...
// write the result back to global memory once, at the end
outputBuffer[launch_index] = out;