Does the TITAN Xp support FP16?

Description

I was using TensorRT to convert a YOLOv8s model and got the warning below, and the detection results were very poor.

[W] [TRT] Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
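For context, this is roughly how the engine is built with FP16 enabled (TensorRT 8.0 Python API; the ONNX path, workspace size, and output file name are placeholders, not my exact script):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Placeholder path for the exported YOLOv8s ONNX model
with open("yolov8s.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30            # 1 GiB, placeholder
config.set_flag(trt.BuilderFlag.FP16)          # enabling FP16 is what triggers the warning above

engine = builder.build_engine(network, config)
with open("yolov8s_fp16.engine", "wb") as f:
    f.write(engine.serialize())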

I also found this.

My questions are:

My GPU's compute capability is 6.1 and it supports FP16, so why do I still get this warning?
And is this warning actually the reason for the poor results?
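For reference, TensorRT exposes on the builder whether it considers the platform to have fast FP16; a small check like the following (using the same TensorRT 8.0 Python bindings listed in the environment below) should show what the engine builder sees:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)

# On a compute capability 6.1 GPU such as the TITAN Xp this is expected to
# print False: the GPU can execute FP16 instructions, but not at full ("fast")
# throughput, which is what the warning means by "native FP16 support".
print("platform_has_fast_fp16:", builder.platform_has_fast_fp16)
print("platform_has_fast_int8:", builder.platform_has_fast_int8)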

Environment

TensorRT Version: 8.0.1.6-1+cuda11.3
GPU Type: NVIDIA TITAN Xp
Nvidia Driver Version: 510.85.02
CUDA Version: 11.3
CUDNN Version: 8.2.1.32-1+cuda11.3
Operating System + Version: Ubuntu 20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Please refer to the Support Matrix :: NVIDIA Deep Learning TensorRT Documentation, which may help you.
Also, we recommend that you use the latest TensorRT version.

Thank you.