FP32 vs FP16 vs INT8

Hi, so INT8 is obviously quantization. Are FP16/FP32 similar to what INT8 does?
If I just use the default FP32, does TensorRT change the weights in any way? And the same question for FP16.
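
For reference, this is roughly how I understand precision is selected at engine build time (a minimal sketch with the TensorRT Python API, assuming an ONNX model at `model.onnx`; on older TensorRT versions the equivalent was `builder.fp16_mode = True`):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the FP32 ONNX model (path is just an example).
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
# A plain build keeps FP32. Setting this flag allows TensorRT
# to use FP16 kernels where it decides they are beneficial.
config.set_flag(trt.BuilderFlag.FP16)

engine = builder.build_engine(network, config)
```

So my question is really about what happens to the weights in each of these two cases.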

I am using a TX2, so INT8 is not supported, but I would like to understand more about FP32 and FP16. If this question feels dumb, I apologize.