I have a TensorFlow model that I convert to UFF format before importing it into TensorRT via the C++ API on the TX2.
I am following the example samples/sampleUffMNIST, which uses the following code to select either float32 or float16:
#if 0  // FP32 path: parse the weights as 32-bit floats
    if (!parser->parse(uffFile, *network, nvinfer1::DataType::kFLOAT))
        RETURN_AND_LOG(nullptr, ERROR, "Fail to parse");
#else  // FP16 path: parse the weights as 16-bit floats and enable FP16 kernels
    if (!parser->parse(uffFile, *network, nvinfer1::DataType::kHALF))
        RETURN_AND_LOG(nullptr, ERROR, "Fail to parse");
    builder->setFp16Mode(true);
#endif
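For context, the rest of my build path looks roughly like the sketch below (gLogger and RETURN_AND_LOG come from the sample's common code; the platformHasFastFp16() check and the workspace size are my own additions, not from the sample):

// Sketch of the surrounding engine-build code (TensorRT 4 C++ API).
// Input/output registration is done as in sampleUffMNIST and omitted here.
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetwork();
nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

// Parse the weights as FP16 and ask the builder for FP16 kernels.
if (!parser->parse(uffFile, *network, nvinfer1::DataType::kHALF))
    RETURN_AND_LOG(nullptr, ERROR, "Fail to parse");

// The TX2 reports fast FP16 support; guarding the mode on it anyway.
if (builder->platformHasFastFp16())
    builder->setFp16Mode(true);

builder->setMaxBatchSize(1);
builder->setMaxWorkspaceSize(1 << 30); // 1 GiB; adjust as needed

nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);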
With float32, I verified that a test input produces the same result as running inference directly from Python with TensorFlow. But when I turn on float16 mode as shown above, my output (an array of two float32 numbers) is two NaNs. What could explain this behavior? With the sampleUffMNIST example itself, I verified that float32 and float16 do produce very similar outputs.
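One diagnostic I am considering, since FP16 overflows above ~65504 and a VGG16 fully-connected layer could plausibly exceed that: marking each intermediate layer's output as a network output before building, so I can locate the first layer whose activations go NaN. A minimal sketch, assuming network is the INetworkDefinition populated by the parser:

// Debugging sketch (my own idea, not from the sample): mark every
// intermediate layer's first output as a network output, then inspect
// the extra output buffers at inference time to find the first layer
// that produces NaNs under FP16. Must run before buildCudaEngine().
for (int i = 0; i < network->getNbLayers() - 1; ++i) // skip the real output layer
{
    nvinfer1::ILayer* layer = network->getLayer(i);
    if (layer->getNbOutputs() > 0)
        network->markOutput(*layer->getOutput(0));
}

Each marked tensor becomes an extra engine binding, so the inference code would also have to allocate and read back those buffers.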
I am using TensorRT 4.0, and my network is a fairly standard VGG16.
Thank you!