FP16 integration in custom API implementation


I would like to use the FP16/Half mode for my convolution, batchnorm and pooling layers.

I use the fp16.h file provided in the latest TensorRT 4.0 samples to convert the convolution and batchnorm parameters to the half data type.

The inputs and outputs of my network are FP32.

I set the output tensor type of all my layers using:


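In essence (simplified; the exact calls in my code may differ slightly), I do something like this with the nvinfer1 C++ API:

```cpp
// Mark a layer's output tensor as FP16 (sketch, not my exact code)
nvinfer1::ILayer* conv = network->addConvolution(/* ... */);
conv->getOutput(0)->setType(nvinfer1::DataType::kHALF);

// and enable FP16 kernels on the builder before building the engine
if (builder->platformHasFastFp16())
    builder->setFp16Mode(true);
```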
However, during the initialization phase I received this message for every layer:

“Tensor DataType is determined at build time for tensors not marked as input or output.”

With the FP16 DataType the network outputs are wrong (the results are correct with the FP32 DataType). There is not much information in the TensorRT documentation about using FP16 directly through the API (without the Caffe parser or the TensorFlow abstraction layers). Do I also have to convert my inputs and outputs to FP16?