TensorRT INT8 inference results are totally wrong!

I followed sampleINT8 to run INT8 inference on my SE-ResNeXt ONNX model, which was converted from PyTorch.
I used 1000 images for calibration, but the calibrated model's output is totally wrong. What should I do to find where I made a mistake?
I appreciate your help~

The attachment is the source code.

senet_int8_src.tar.gz (9.46 KB)
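For anyone hitting the same symptom: a very common cause of "totally wrong" INT8 results is that the calibration batches are not preprocessed the same way as the training/inference inputs. Below is a minimal numpy sketch of torchvision-style ImageNet preprocessing; the mean/std values and input size are standard ImageNet assumptions, not taken from the attached source, so adjust them to whatever your PyTorch pipeline actually used.

```python
import numpy as np

# Assumed ImageNet normalization constants (torchvision defaults);
# verify these against your own PyTorch training pipeline.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_hwc_uint8: np.ndarray) -> np.ndarray:
    """uint8 HWC image in [0, 255] -> float32 NCHW tensor, normalized."""
    x = img_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD         # per-channel normalize
    x = np.transpose(x, (2, 0, 1))                 # HWC -> CHW
    return x[np.newaxis, ...]                      # add batch dimension

# The exact same function must feed both the calibrator batches and the
# FP32/INT8 inference inputs; any mismatch corrupts the dynamic ranges.
batch = preprocess(np.zeros((224, 224, 3), dtype=np.uint8))
```

If the calibrator sees raw [0, 255] pixels while inference sees normalized tensors (or vice versa), the computed dynamic ranges will be wildly off and the INT8 engine will produce garbage.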

Any suggestions?

I tested a MobileNetV2 caffemodel; its INT8 inference accuracy is only 75% of the FP32 accuracy. Could you explain the reason?
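To localize where a drop like this comes from, it helps to quantify how much the INT8 outputs actually diverge from FP32 on the same inputs. A minimal numpy sketch (the two arrays below are random stand-ins for real engine outputs; substitute the actual FP32 and INT8 logits):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened output tensors."""
    a, b = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def top1_match_rate(logits_a: np.ndarray, logits_b: np.ndarray) -> float:
    """Fraction of samples where both engines predict the same class."""
    return float(np.mean(np.argmax(logits_a, axis=1) == np.argmax(logits_b, axis=1)))

# Stand-in data: FP32 logits plus small "quantization noise" as INT8 logits.
rng = np.random.default_rng(0)
fp32_out = rng.standard_normal((8, 1000))
int8_out = fp32_out + 0.01 * rng.standard_normal((8, 1000))

sim = cosine_similarity(fp32_out, int8_out)
match = top1_match_rate(fp32_out, int8_out)
```

A healthy INT8 engine typically keeps cosine similarity very close to 1.0; if it is low, comparing intermediate layer outputs (by marking them as network outputs in both engines) narrows down which layer the quantization breaks.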

Any suggestions?

I also want to know the details of calibration. I tried it on an SSD model but it doesn't work yet.
I hope someone can help us.
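On the calibration details: TensorRT drives calibration by repeatedly asking the calibrator for batches until it returns nothing. Below is a sketch of just that batch-streaming logic, kept free of TensorRT/CUDA so it runs standalone; in real use the class would additionally subclass `trt.IInt8EntropyCalibrator2`, copy each batch into a device buffer, and implement the cache read/write methods (those parts are omitted here, as in this assumed simplification).

```python
import numpy as np

class CalibrationBatcher:
    """Streams fixed-size calibration batches; None signals exhaustion,
    which is how TensorRT knows calibration data is finished."""

    def __init__(self, images: np.ndarray, batch_size: int):
        self.images = images.astype(np.float32)  # already preprocessed NCHW data
        self.batch_size = batch_size
        self.index = 0

    def get_batch(self):
        # TensorRT calls this repeatedly during calibration.
        if self.index + self.batch_size > len(self.images):
            return None  # no full batch left -> calibration ends
        batch = self.images[self.index : self.index + self.batch_size]
        self.index += self.batch_size
        # Contiguous memory is required before copying to the GPU buffer.
        return np.ascontiguousarray(batch)
```

With 1000 preprocessed images and a batch size of, say, 10, this yields 100 batches and then `None`; the key points are that the images must go through the exact inference preprocessing and that the iterator must terminate cleanly.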

Is there any official response?

I'll keep asking once a day!

Could you please let us know if you are still facing this issue?

Thanks