The clock64() function in OptiX is not working as expected.
I am using OptiX 6.5 on an RTX 2080 Ti. I obtained the clock rate via cudaDevAttrClockRate, which reports 154500 (kHz, i.e. 1545 MHz).
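For reference, this is how I query the rate (a minimal host-side sketch; device 0 is assumed):

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int clockRateKHz = 0;
    // cudaDevAttrClockRate reports the device clock rate in kHz
    cudaDeviceGetAttribute(&clockRateKHz, cudaDevAttrClockRate, 0 /* device 0 assumed */);
    printf("Clock rate: %d kHz\n", clockRateKHz);  // prints 154500 for my 2080 Ti
    return 0;
}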
For testing I used optixHello from the SDK samples and reduced my launch dimensions to 1x1.
I measured elapsed time on both the CPU and the GPU, inserting a busy-wait delay into my GPU code.
The elapsed time I compute on the GPU (clock difference / clock rate) does not match the CPU measurement.
Code snippet for reference:
long long int startTotalTime = clock64();                  // per-SM cycle counter at start
long long int delayTime = 10000000000LL;                   // busy-wait for 10^10 cycles
while (clock64() < (startTotalTime + delayTime));
result_buffer[launch_index] = make_float4(draw_color, 0.f);
long long int endTotalTime = clock64();                    // cycle counter after the delay
// clock rate is in kHz, so cycles / 154500 should give milliseconds
double totalTime = ((double)(endTotalTime - startTotalTime)) / 154500.0;
printf("Total time GPU in ms %lf\n", totalTime);
The results I get are:
Total time GPU in ms 64724.929042
Total time CPU in ms 5299.000000
What I infer is that the clock rate I am using is probably incorrect: 10^10 cycles divided by 154500 cycles-per-ms is exactly ~64725 ms, so clock64() did count the full delay, while the ~5.3 s of wall time implies an effective counter rate of roughly 1.89 GHz rather than 1.545 GHz.
What clock rate should I be using?