Unable to run with half precision on Nvidia GTX 1080

I am using an NVIDIA GTX 1080 GPU to run inference of a U-Net model. I converted it to a .uff file and was able to build and run a TensorRT engine in full-precision (FP32) mode, but I get the error below in half-precision (FP16) mode.

[TensorRT] ERROR: Specified FP16 but not supported on platform
Traceback (most recent call last):
  File "infer_from_uff.py", line 42, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1<<20, trt.infer.DataType.HALF)
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 177, in uff_to_trt_engine
    raise AttributeError("Specified FP16 but not supported on platform")
AttributeError: Specified FP16 but not supported on platform

Is FP16 mode not supported on GTX 1080 with TensorRT 3.0.4?

I've used builder->platformHasFastFp16() (https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_builder.html#a1d18b948852cb088d22a87cff70a2b2f) to check; my GTX 1050 Ti does NOT support FP16 with TensorRT either.
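Since the builder exposes this capability at runtime, one option is to probe it first and fall back to FP32 instead of letting the engine build raise. A minimal sketch of that decision logic, with a plain `has_fast_fp16` boolean standing in for the real `platformHasFastFp16()` call (which needs a GPU and the TensorRT library to run):

```python
def pick_engine_dtype(has_fast_fp16, requested="HALF"):
    """Choose the datatype to pass to the TensorRT engine builder.

    `has_fast_fp16` stands in for builder->platformHasFastFp16();
    requesting HALF on a GPU without fast-FP16 support is exactly what
    triggers 'Specified FP16 but not supported on platform'.
    """
    if requested == "HALF" and not has_fast_fp16:
        return "FLOAT"  # fall back to full precision instead of failing
    return requested
```

On a GTX 1080 this would return "FLOAT", and the engine build succeeds at full precision.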

You need Volta for FP16.

import numpy as np

def compute_average_precision(recall, precision):
    """Compute average precision (area under the precision-recall curve).

    `recall` and `precision` are 1-D arrays of per-detection values,
    sorted by descending detection score.
    """
    # Pad the curve so it spans the full recall range [0, 1].
    recall = np.concatenate([[0], recall, [1]])
    precision = np.concatenate([[0], precision, [0]])

    # Take the running maximum from right to left so precision becomes a
    # non-increasing envelope of the raw curve.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = np.maximum(precision[i], precision[i + 1])

    # Sum the rectangle areas at the points where recall changes.
    indices = np.where(recall[1:] != recall[:-1])[0] + 1
    average_precision = np.sum(
        (recall[indices] - recall[indices - 1]) * precision[indices])
    return average_precision
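To see what these steps do, here is a tiny hand-checkable case (the recall/precision values are chosen purely for illustration):

```python
import numpy as np

recall = np.array([0.5, 1.0])
precision = np.array([1.0, 0.5])

# Pad the curve to span recall 0..1.
r = np.concatenate([[0], recall, [1]])      # [0.0, 0.5, 1.0, 1.0]
p = np.concatenate([[0], precision, [0]])   # [0.0, 1.0, 0.5, 0.0]

# Right-to-left running maximum: the non-increasing precision envelope.
for i in range(len(p) - 2, -1, -1):
    p[i] = max(p[i], p[i + 1])              # [1.0, 1.0, 0.5, 0.0]

# Recall changes at indices 1 and 2; sum the rectangle areas there.
idx = np.where(r[1:] != r[:-1])[0] + 1      # [1, 2]
ap = np.sum((r[idx] - r[idx - 1]) * p[idx]) # 0.5*1.0 + 0.5*0.5 = 0.75
```

Each rectangle is (recall step) × (envelope precision at the new recall level), so the final value here is 0.75.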