Is FP16 supported only on Volta?

Hi,

From the TensorRT documentation, there is no hardware condition on using FP16; only INT8 has a stated GPU compute-capability requirement. I tried FP16 on a TITAN Xp and a GTX 1080 Ti, and it failed on both. Searching around, I found the statement "You need Volta for FP16". Is that correct, and does it mean I cannot use FP16 in my environment?

$ ./sample_int8 mnist

FP32 run:400 batches of size 100 starting at 100
........................................
Top1: 0.9904, Top5: 1
Processing 40000 images averaged 0.00181013 ms/image and 0.181013 ms/batch.

FP16 run:400 batches of size 100 starting at 100
Engine could not be created at this precision

INT8 run:400 batches of size 100 starting at 100
........................................
Top1: 0.9908, Top5: 1
Processing 40000 images averaged 0.00140439 ms/image and 0.140439 ms/batch.

Thanks!
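
For reference, one way to check this programmatically is to ask the builder whether the platform has native FP16 before requesting a half-precision engine. A minimal sketch, assuming TensorRT 4's C++ API (IBuilder::platformHasFastFp16() also exists in TensorRT 3); compile and link against -lnvinfer:

// Minimal sketch: query native FP16 support before building an engine.
#include <NvInfer.h>
#include <iostream>

// Simple logger required by createInferBuilder().
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);

    // Returns true only on GPUs with native (fast) FP16 arithmetic.
    if (builder->platformHasFastFp16())
        std::cout << "Native FP16 supported; an FP16 engine can be built." << std::endl;
    else
        std::cout << "No native FP16 on this GPU; FP16 engines may fail or run slowly." << std::endl;

    builder->destroy();
    return 0;
}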

Looking into this now. What version of TRT are you using?

I’ve tried TRT 4.0.1.6 on the TITAN Xp and TRT 3.0.4 on the 1080 Ti, and I got the same results.

Hi, I am getting similar issues running the Python example. Does TensorRT 4 with FP16 work on the Tesla P4? I ran the Python test from https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference and got the error below:

For FP16: DefaultLogger Half2 support requested on hardware without native FP16 support, performance will be negatively affected.

I am using the docker image nvcr.io/nvidia/tensorflow:18.07-py3 (TensorRT 4)
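
That warning comes down to compute capability. As far as I understand (this is my reading, not a statement from this thread), native "fast" FP16 arithmetic is present on compute capability 5.3, 6.0, 6.2, and 7.0+ parts, while the Tesla P4, TITAN Xp, and GTX 1080 Ti are all 6.1, which only has very slow FP16. A minimal sketch to check your device with the CUDA runtime (compile with nvcc):

// Minimal sketch: print the GPU's compute capability and whether it has
// fast FP16. The capability list below is my understanding, not an NVIDIA
// statement: 5.3, 6.0, 6.2, and 7.0+ have native FP16; 6.1 does not.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);

    bool fastFp16 = (prop.major == 5 && prop.minor == 3)
                 || (prop.major == 6 && prop.minor != 1)
                 || (prop.major >= 7);
    printf("Native FP16: %s\n", fastFp16 ? "yes" : "no");
    return 0;
}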

Hello,

The following GPUs currently support FP16: Quadro RTX 8000, Tesla V100, Tesla P100, and NVIDIA Jetson Xavier.

regards,
NVIDIA Enterprise Support

Excuse me, does the GTX 1080 Ti GPU support FP16 now?

@lpkhappy,

please refer to: Support Matrix :: NVIDIA Deep Learning TensorRT Documentation

Thanks very much :)