Meaning of Mixed Precision when used with TensorRT in Nvidia's blogs

Hey everyone, looking for some clarification on the content in the charts here: https://developer.nvidia.com/deep-learning-performance-training-inference

In the inference section, the Precision column says "Mixed" even though the footnotes mention that TensorRT was used. What does that mean in this context? My understanding was that AMP lowers precision only for the ops that can support it, whereas TensorRT's precision settings lower precision across all ops when enabled, even at some cost to accuracy. Any clarification would be helpful.
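For reference, here is a minimal sketch of what I mean by the two approaches (PyTorch AMP vs. a TensorRT FP16 build). The model, shapes, and comments just reflect my current understanding and the common TensorRT 8.x Python idiom, so they may not match what the charts actually did:

```python
import torch
import tensorrt as trt

# 1) AMP at inference time in PyTorch: autocast runs FP16 kernels only for
#    ops on its allow-list (e.g. matmuls/convs); other ops stay in FP32.
model = torch.nn.Linear(1024, 1024).cuda().eval()
x = torch.randn(8, 1024, device="cuda")
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

# 2) TensorRT: my understanding is that setting the FP16 builder flag
#    permits reduced precision when the engine is built.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 for the engine build
# ... the network would then be populated (e.g. via the ONNX parser)
#     before building the serialized engine
```

Is the "mixed-precision" label in the charts referring to something like the AMP behavior above, or to the TensorRT FP16 build path, or both?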