Different inference results with TensorRT FP16 and FP32

Inference in FP16 produces NaN values. When I set builder->setStrictTypeConstraints(true), the problem is solved. However, without builder->setStrictTypeConstraints(true), I tried setting the precision for each layer with layer->setPrecision(nvinfer1::DataType::kHALF), and found that this doesn't work. Why is this?

Hello,

When strict type constraints are in use, TensorRT will always choose a layer implementation that conforms to the type constraints specified, if one exists. If this flag is not set, a higher-precision implementation may be chosen if it results in higher performance. In other words, without setStrictTypeConstraints(true), your per-layer setPrecision() calls are treated as preferences rather than requirements, so TensorRT is free to fall back to FP32 kernels wherever they are faster; setting the flag is what makes those per-layer constraints binding.
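As a rough sketch of how the two calls interact (assuming the TensorRT 5/6-era IBuilder API that your snippet uses, and an already-populated INetworkDefinition named network; error handling omitted):

```cpp
#include "NvInfer.h"

// Sketch only: `builder` and `network` are assumed to come from the usual
// createInferBuilder / createNetwork flow.
void configureFp16(nvinfer1::IBuilder* builder,
                   nvinfer1::INetworkDefinition* network)
{
    builder->setFp16Mode(true);  // allow FP16 kernels globally

    // Without this flag, the per-layer setPrecision() requests below are
    // only preferences: TensorRT may still pick an FP32 implementation
    // where it is faster, which is why they appeared to "not work".
    builder->setStrictTypeConstraints(true);

    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        // With strict type constraints set, TensorRT honors this request
        // whenever an FP16 implementation of the layer exists.
        network->getLayer(i)->setPrecision(nvinfer1::DataType::kHALF);
    }
}
```

For reference, in newer TensorRT releases the equivalent control moved to IBuilderConfig (BuilderFlag::kSTRICT_TYPES), but the principle is the same: per-layer precision requests are only enforced when strict types are enabled.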