What does "Median Inference Time" measure specifically?

I see this number reported at the top of the TensorRT profile report page. It does not seem to correspond to the timeline shown below it. Does it correspond to any number reported by trtexec?

We run multiple inference passes and report the median value. We are not using trtexec to measure the inference time, so there may be discrepancies.
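The "run multiple passes and report the median" approach described above can be sketched in plain Python. This is a minimal illustration, not TensorRT's actual implementation: `run_inference` is a hypothetical callable standing in for one full inference pass (e.g. enqueue plus stream synchronize), and the warm-up/pass counts are made-up defaults.

```python
import statistics
import time

def measure_median_latency(run_inference, warmup=10, passes=100):
    """Time repeated inference passes and return the median latency in ms.

    `run_inference` is a hypothetical callable that executes one complete
    inference pass. Because we time it with a host-side wall clock, the
    measurement includes any enqueue/synchronize overhead, not just GPU
    kernel time.
    """
    # Warm-up passes are run but discarded, so one-time costs
    # (lazy allocation, JIT, cache warming) don't skew the samples.
    for _ in range(warmup):
        run_inference()

    samples = []
    for _ in range(passes):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1e3)  # seconds -> ms

    # Median is robust to occasional outlier passes (OS jitter, etc.).
    return statistics.median(samples)
```

Reporting the median rather than the mean keeps a few outlier passes (scheduler hiccups, clock ramp-up) from distorting the headline number.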

So it’s measured without profiling enabled? And is the reported duration GPU time only, or does it include enqueue/synchronize overhead?