clock() function in OptiX

The clock function in OptiX is not working as expected.
I am using OptiX 6.5 with an RTX 2080 Ti. I obtained the clock frequency via cudaDevAttrClockRate (154500).
For testing I used OptixHello from the samples and reduced the launch dimensions to 1x1.
I measured elapsed time on both the CPU and the GPU, inserting a busy-wait delay into my GPU code.
The GPU time I compute as the clock64() difference divided by the clock rate is incorrect.
Code snippet for reference:

  // Busy-wait for delayTime clock cycles, then report the elapsed time.
  long long int startTotalTime = clock64();
  long long int delayTime = 10000000000LL;  // ~10^10 cycles
  while (clock64() < (startTotalTime + delayTime))
    ;
  result_buffer[launch_index] = make_float4(draw_color, 0.f);
  long long int endTotalTime = clock64();
  // cycles / kHz = milliseconds
  double totalTime = ((double)(endTotalTime - startTotalTime)) / 154500.0;
  printf("Total time GPU in ms %lf\n", totalTime);

The results that I get are:
Total time GPU in ms 64724.929042
Total time CPU in ms 5299.000000

What I infer is that the clock rate I am using is probably incorrect.
What clock rate should I be using?

Hi there, clock() is a CUDA function, and there's a CUDA SDK sample named 0_Simple/clock_nvrtc demonstrating how to use it properly. The documentation is here: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#time-function
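
The core idea in that sample, condensed into a sketch (not the sample's exact code; timedKernel is a hypothetical name and the timer buffer is assumed to hold 2 * gridDim.x entries): each block records a start and a stop timestamp around the work being timed, and the host converts the cycle difference to time using the clock rate.

  __global__ void timedKernel(long long* timer)
  {
      // Thread 0 of each block records the start timestamp.
      if (threadIdx.x == 0) timer[blockIdx.x] = clock64();
      __syncthreads();

      // ... work to be timed goes here ...

      // Wait for all threads in the block, then record the stop timestamp.
      __syncthreads();
      if (threadIdx.x == 0) timer[blockIdx.x + gridDim.x] = clock64();
  }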

The docs mention that the cudaDevAttrClockRate attribute returns the peak clock rate in kilohertz, which might not be your current clock rate. You can lock your clock rate to a specific value for profiling and timings using nvidia-smi.
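
For example (exact flags may vary by driver version; locking requires administrator privileges and, as far as I know, a Volta or newer GPU):

  nvidia-smi -q -d CLOCK     # inspect supported and current clocks
  nvidia-smi -lgc 1545,1545  # lock GPU clocks to 1545 MHz (example value)
  nvidia-smi -rgc            # reset GPU clocks to default behavior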

David.

Hi David,

Thanks for the direction; I did use the CUDA clock sample as a reference.
I managed to lock my clock rate as you suggested, and it now gives me timings that make sense.

Sukumar
