I am using an NVIDIA GTX 1080 GPU to run inference with a U-Net model. I converted the model to a .uff file and was able to build and run a TensorRT engine in full-precision (FP32) mode, but I get the error below when I request half-precision (FP16) mode.
[TensorRT] ERROR: Specified FP16 but not supported on platform
Traceback (most recent call last):
  File "infer_from_uff.py", line 42, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1<<20, trt.infer.DataType.HALF)
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 177, in uff_to_trt_engine
    raise AttributeError("Specified FP16 but not supported on platform")
AttributeError: Specified FP16 but not supported on platform
Is FP16 mode not supported on the GTX 1080 with TensorRT 3.0.4?
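In the meantime I am working around it by catching the error and rebuilding in FP32. This is only a sketch: the `build_engine` helper and its parameters are mine, and I pass the TensorRT utility module and datatype enum in as arguments so the fallback logic itself can be exercised without a GPU; the actual call mirrors the `trt.utils.uff_to_trt_engine(...)` line from the traceback above.

```python
def build_engine(logger, uff_model, parser, max_batch, workspace,
                 trt_utils, dtypes):
    """Try to build an FP16 engine; fall back to FP32 if unsupported.

    Hypothetical helper wrapping the uff_to_trt_engine call from the
    traceback. trt_utils is expected to behave like trt.utils and
    dtypes like trt.infer.DataType in the TensorRT 3.x legacy API.
    Returns (engine, precision_used).
    """
    try:
        engine = trt_utils.uff_to_trt_engine(
            logger, uff_model, parser, max_batch, workspace, dtypes.HALF)
        return engine, "HALF"
    except AttributeError:
        # TensorRT 3.x raises AttributeError("Specified FP16 but not
        # supported on platform") when the GPU lacks native FP16 support,
        # so rebuild the engine in full precision instead.
        engine = trt_utils.uff_to_trt_engine(
            logger, uff_model, parser, max_batch, workspace, dtypes.FLOAT)
        return engine, "FLOAT"
```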