Using FP16 precision mode on Tesla P4

Hi, does FP16 precision mode work on the Tesla P4? I ran a Python test with the TensorFlow-TensorRT integration, and it printed the following warning when using FP16 precision mode:

DefaultLogger Half2 support requested on hardware without native FP16 support, performance will be negatively affected.

The same test ran on a Tesla V100-SXM2 without any problems.

Environment: via container image nvcr.io/nvidia/tensorflow:18.07-py3
Test reference: https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/
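
For context, the test requests FP16 following the pattern from that blog post. Here is a minimal, self-contained sketch of what I'm running, assuming the TF 1.x contrib API that ships in the 18.07 container; the toy graph, node names, and sizes are just illustrative:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Build and freeze a trivial graph so the sketch is self-contained.
with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, [None, 4], name="input")
    w = tf.constant([[1.0], [2.0], [3.0], [4.0]])
    y = tf.matmul(x, w, name="output")
    with tf.Session(graph=g) as sess:
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["output"])

# Request an FP16 TensorRT engine; on the P4 this is the call that
# triggers the "hardware without native FP16 support" warning.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen,
    outputs=["output"],
    max_batch_size=8,                  # largest batch the engine will see
    max_workspace_size_bytes=1 << 30,  # 1 GiB scratch space for TensorRT
    precision_mode="FP16")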

Hello,

The following GPUs currently have native FP16 support: Quadro RTX 8000, Tesla V100, Tesla P100, and NVIDIA Jetson Xavier. The Tesla P4 does not, so Half2 mode will still run, but with degraded performance, which is what the warning indicates.
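
If it helps, you can confirm this by querying the GPU's compute capability: full-rate FP16 arithmetic is available on compute capability 5.3, 6.0, 6.2, and 7.0+, while the Tesla P4 is 6.1, which only has reduced-rate FP16. A quick sketch using pycuda (an extra dependency, not part of your test):

# Query the local GPU's compute capability with pycuda (assumed installed).
# Tesla P4 reports 6.1, which lacks full-rate FP16, hence the warning.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
major, minor = dev.compute_capability()
print("GPU:", dev.name(), "compute capability: %d.%d" % (major, minor))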

Related to: https://devtalk.nvidia.com/default/topic/1039249/tensorrt/is-fp16-running-only-on-the-volta