I’m new to OptiX in particular and to GPU programming in general, so the question below might appear trivial to some ;)
As part of one of my courses at university, I’m currently working on a direct volume renderer similar to what Wald et al. describe in their “RTX Beyond Ray Tracing […]” paper (see (1)). Due to limited access to RTX-capable hardware, I’m developing on a non-RTX graphics card (GeForce GTX 960, driver version 436.30, CUDA 10.1, OptiX 7.0). Evaluation will later take place on an RTX-capable device (the exact card is still to be determined).
For clarification: Are there any explicit steps I need to take to ensure my code makes use of the RTX hardware capabilities during evaluation? To my understanding of both the docs and some discussions on this board (e.g. (2)), all I have to do is stick to the OptiX built-in triangles (i.e. not use geometry types other than “OPTIX_BUILD_INPUT_TYPE_TRIANGLES”). OptiX then “transparently” makes use of RTX capabilities where possible. Is there anything else to do?
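For reference, this is roughly how I set up my geometry acceleration structure right now (a minimal sketch; buffer names like d_vertices/d_indices, the counts, and the missing error checking are placeholders from my own code, not a complete build):

#include <optix.h>
#include <optix_stubs.h>
#include <cuda_runtime.h>

// Build a GAS from the built-in triangle primitive type, which (as far as I
// understand) is what allows OptiX to use RTX hardware traversal/intersection
// where available. Error checking is omitted for brevity.
// d_vertices / d_indices are assumed to be CUdeviceptr buffers already filled
// with float3 vertices and uint3 index triplets.
OptixTraversableHandle buildTriangleGAS( OptixDeviceContext context,
                                         CUdeviceptr        d_vertices,
                                         unsigned int       numVertices,
                                         CUdeviceptr        d_indices,
                                         unsigned int       numTriangles )
{
    OptixBuildInput buildInput = {};
    buildInput.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;

    buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
    buildInput.triangleArray.vertexStrideInBytes = sizeof( float ) * 3;
    buildInput.triangleArray.numVertices         = numVertices;
    buildInput.triangleArray.vertexBuffers       = &d_vertices;

    buildInput.triangleArray.indexFormat         = OPTIX_INDICES_FORMAT_UNSIGNED_INT3;
    buildInput.triangleArray.numIndexTriplets    = numTriangles;
    buildInput.triangleArray.indexBuffer         = d_indices;

    const unsigned int triangleFlags[1] = { OPTIX_GEOMETRY_FLAG_NONE };
    buildInput.triangleArray.flags         = triangleFlags;
    buildInput.triangleArray.numSbtRecords = 1;

    OptixAccelBuildOptions accelOptions = {};
    accelOptions.buildFlags = OPTIX_BUILD_FLAG_NONE;
    accelOptions.operation  = OPTIX_BUILD_OPERATION_BUILD;

    // Query the required temp/output buffer sizes, then build.
    OptixAccelBufferSizes bufferSizes = {};
    optixAccelComputeMemoryUsage( context, &accelOptions, &buildInput, 1, &bufferSizes );

    CUdeviceptr d_temp = 0, d_output = 0;
    cudaMalloc( reinterpret_cast<void**>( &d_temp ),   bufferSizes.tempSizeInBytes );
    cudaMalloc( reinterpret_cast<void**>( &d_output ), bufferSizes.outputSizeInBytes );

    OptixTraversableHandle gasHandle = 0;
    optixAccelBuild( context, /*stream*/ 0, &accelOptions, &buildInput, 1,
                     d_temp, bufferSizes.tempSizeInBytes,
                     d_output, bufferSizes.outputSizeInBytes,
                     &gasHandle, /*emittedProperties*/ nullptr, 0 );

    // Wait for the async build to finish before releasing the temp buffer.
    cudaDeviceSynchronize();
    cudaFree( reinterpret_cast<void*>( d_temp ) );

    return gasHandle;
}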
Also, is there a way to verify at runtime that the RTX hardware acceleration is actually being used? I am aware that I can query “OPTIX_DEVICE_PROPERTY_RTCORE_VERSION” using “optixDeviceContextGetProperty()”. But that only tells me about the device’s capabilities, not whether they are actually being used.
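For completeness, this is the query I mean (reusing the same context as in the sketch above; to my understanding the returned value is 0 on devices without RT cores, such as my GTX 960, so it only reports the capability):

unsigned int rtcoreVersion = 0;
optixDeviceContextGetProperty( context,
                               OPTIX_DEVICE_PROPERTY_RTCORE_VERSION,
                               &rtcoreVersion,
                               sizeof( rtcoreVersion ) );
printf( "RT core version: %u\n", rtcoreVersion );  // reports capability only, not actual use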
Thanks for your help!