Using FP16 precision mode on Tesla P4

Hi, does FP16 precision mode work on the Tesla P4? I ran a Python test with the TensorFlow-TensorRT integration, and it printed the following message when using FP16 precision mode:

DefaultLogger Half2 support requested on hardware without native FP16 support, performance will be negatively affected.

I ran the same test on a Tesla V100-SXM2 without any problems.
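For context, a rough sketch of why the warning appears on the P4 but not the V100, based on CUDA compute capabilities (these mappings are my assumption, not from the post): the Tesla P4 is SM 6.1, where FP16 arithmetic exists but runs at a small fraction of FP32 throughput, while GP100 (P100, SM 6.0) and Volta (V100, SM 7.0, with Tensor Cores) have fast native FP16 paths. The helper name below is hypothetical:

```python
# Hypothetical helper mapping CUDA compute capability to fast native FP16
# support. Assumptions (not stated in the post): SM 7.0+ (Volta and newer)
# and SM 5.3 / 6.0 / 6.2 have fast FP16; SM 6.1 parts like the Tesla P4
# support FP16 storage but not fast FP16 math, which matches the warning.

def has_fast_native_fp16(compute_capability):
    major, minor = compute_capability
    if major >= 7:  # Volta and newer: Tensor Cores / fast FP16
        return True
    # Tegra X1 (5.3), GP100 (6.0), Tegra X2 (6.2)
    if (major, minor) in {(5, 3), (6, 0), (6, 2)}:
        return True
    return False  # e.g. SM 6.1 (Tesla P4): FP16 present but slow

print(has_fast_native_fp16((6, 1)))  # Tesla P4   -> False
print(has_fast_native_fp16((7, 0)))  # Tesla V100 -> True
```

Under these assumptions, TensorRT still honors the FP16 request on SM 6.1, but warns that performance will suffer because there is no fast native FP16 path.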

Environment: via container image
Test reference:


The following GPUs currently support FP16: Quadro RTX 8000, Tesla V100, Tesla P100, and NVIDIA Jetson Xavier.

related to