TensorRT FP16 model creation

I am manually building a model in TensorRT using the C++ API. I already have the weights and biases in FP16, so I will load those with the type set to kHALF. However, what do I do for the input layer, since the input data is float? Does the engine handle the conversion for me? Also, if the weights and biases weren't already FP16, is there a way to tell the API that I want to use FP16?
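For reference, here is a minimal sketch of how this is commonly wired up, assuming TensorRT 8+ with the explicit-batch API (the tensor names, shapes, and the single convolution layer are hypothetical, just to illustrate). The input is declared as kFLOAT; TensorRT inserts any reformatting it needs between the FP32 network I/O and FP16 kernels. For the second question, `BuilderFlag::kFP16` is the switch that asks the builder to use FP16 even when the supplied weights are FP32:

```cpp
#include <cstdint>
#include <cstdio>
#include "NvInfer.h"

using namespace nvinfer1;

// Minimal logger required by the builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

int main()
{
    Logger logger;
    IBuilder* builder = createInferBuilder(logger);
    uint32_t flags = 1U << static_cast<uint32_t>(
        NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(flags);

    // The input tensor is declared FP32 even for an FP16 engine; TensorRT
    // converts between the network I/O precision and the layer precision.
    ITensor* input = network->addInput("input", DataType::kFLOAT,
                                       Dims4{1, 3, 224, 224});

    // Hypothetical FP16 weight buffers (converted offline). Raw uint16_t
    // storage is used here so the sketch does not depend on cuda_fp16.h.
    static uint16_t kernelData[64 * 3 * 7 * 7] = {};
    static uint16_t biasData[64] = {};
    Weights kernel{DataType::kHALF, kernelData, 64 * 3 * 7 * 7};
    Weights bias{DataType::kHALF, biasData, 64};

    auto* conv = network->addConvolutionNd(*input, 64, DimsHW{7, 7},
                                           kernel, bias);
    network->markOutput(*conv->getOutput(0));

    // If the weights were still FP32, this flag alone asks the builder to
    // run layers in FP16 where profitable; TensorRT converts the weights.
    IBuilderConfig* config = builder->createBuilderConfig();
    config->setFlag(BuilderFlag::kFP16);

    IHostMemory* plan = builder->buildSerializedNetwork(*network, *config);
    (void)plan; // deserialize with an IRuntime to get the engine
    return 0;
}
```

Note that `kFP16` is a permission, not a guarantee: the builder is free to keep individual layers in FP32 if that is faster on the target GPU.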

Hey, wdrollinson. Do you have any update on this? I'm running into the same issue.