Hi, does FP16 precision mode work on a Tesla P4? I ran a Python test with the TensorFlow-TensorRT integration, and it printed this message when using the FP16 precision mode:
DefaultLogger Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
The same test runs on a Tesla V100-SXM2 without problems.
Environment: container image nvcr.io/nvidia/tensorflow:18.07-py3
Test reference: https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/
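For context on the warning: to my understanding, "native FP16 support" here refers to fast half-precision arithmetic, which depends on the GPU's compute capability. The Tesla P4 is compute capability 6.1, which (like other sm_61 Pascal parts) executes FP16 math at a small fraction of FP32 rate, while the V100 (7.0) has full-rate FP16 plus Tensor Cores. A rough illustrative sketch of that mapping (the capability sets below are my assumption from NVIDIA's hardware documentation, not an official API):

```python
# Rough sketch: which compute capabilities have fast native FP16 math.
# sm_53, sm_60, and sm_62 Pascal parts run FP16 at 2x FP32 rate;
# sm_70 and later (Volta+) add Tensor Cores. sm_61 (Tesla P4, P40,
# GTX 10xx) has only slow-rate FP16, hence TensorRT's warning.
FAST_FP16_CAPS = {(5, 3), (6, 0), (6, 2)}

def has_fast_fp16(compute_capability):
    """Return True if the GPU likely has fast native FP16 arithmetic."""
    return (compute_capability in FAST_FP16_CAPS
            or compute_capability >= (7, 0))

print(has_fast_fp16((6, 1)))  # Tesla P4  -> False (warning expected)
print(has_fast_fp16((7, 0)))  # Tesla V100 -> True
```

So on the P4 the FP16 engine still builds and runs, but TensorRT warns because the math falls back to a slow path; INT8 is usually the recommended reduced-precision mode on that card.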