TRT inference with int8 calibration and specific inputIOFormats

I want to convert my ONNX model to a TensorRT engine using int8/"best" precision. My input format is fp16. So far I have been able to use the trtexec command with --inputIOFormats=fp16:chw and --fp16 to get correct predictions. I want to speed up inference using "best" mode, but then I get wrong predictions. I read that int8 requires post-training calibration, and I found Python examples for that, but how can I also specify that the input is fp16? This is required for my application. Is there a tensorrt.BuilderFlag I can use for this?

Hi there, I want to convert YOLOv8's pretrained weights to int8. Is that possible? If it is, can you guide me through the process?
PS: I hope your issue is resolved soon.
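In case it helps anyone searching later, here is the rough route I have seen suggested, assuming the Ultralytics CLI and trtexec are installed (yolov8n and imgsz=640 are just example values):

```shell
# Export the pretrained PyTorch weights to ONNX first (Ultralytics CLI)
yolo export model=yolov8n.pt format=onnx imgsz=640

# Then build a TensorRT engine; --best enables fp16+int8 tactic selection.
# Note: for accurate int8 results a calibration dataset is still needed.
trtexec --onnx=yolov8n.onnx --best --saveEngine=yolov8n.engine
```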