I want to run INT8 inference with TensorRT from an ONNX model. When I use a pre-trained ONNX model from the GitHub onnx/models repository (a collection of pre-trained, state-of-the-art models in ONNX format), it works. But when I use my own ONNX model produced by torch.onnx.export, the build fails with the following output:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
[08/27/2019-20:04:01] [I] [TRT] 123:Conv → (1, 64, 112, 112)
[08/27/2019-20:04:01] [I] [TRT] 124:BatchNormalization → (1, 64, 112, 112)
[08/27/2019-20:04:01] [I] [TRT] 125:Relu → (1, 64, 112, 112)
[08/27/2019-20:04:01] [I] [TRT] 126:MaxPool → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 127:Conv → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 128:BatchNormalization → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 129:Relu → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 130:Conv → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 131:BatchNormalization → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 132:Add → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 133:Relu → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 134:Conv → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 135:BatchNormalization → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 136:Relu → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 137:Conv → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 138:BatchNormalization → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 139:Add → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 140:Relu → (1, 64, 56, 56)
[08/27/2019-20:04:01] [I] [TRT] 141:Conv → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 142:BatchNormalization → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 143:Relu → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 144:Conv → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 145:BatchNormalization → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 146:Conv → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 147:BatchNormalization → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 148:Add → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 149:Relu → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 150:Conv → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 151:BatchNormalization → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 152:Relu → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 153:Conv → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 154:BatchNormalization → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 155:Add → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 156:Relu → (1, 128, 28, 28)
[08/27/2019-20:04:01] [I] [TRT] 157:Conv → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 158:BatchNormalization → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 159:Relu → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 160:Conv → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 161:BatchNormalization → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 162:Conv → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 163:BatchNormalization → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 164:Add → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 165:Relu → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 166:Conv → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 167:BatchNormalization → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 168:Relu → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 169:Conv → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 170:BatchNormalization → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 171:Add → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 172:Relu → (1, 256, 14, 14)
[08/27/2019-20:04:01] [I] [TRT] 173:Conv → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 174:BatchNormalization → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 175:Relu → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 176:Conv → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 177:BatchNormalization → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 178:Conv → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 179:BatchNormalization → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 180:Add → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 181:Relu → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 182:Conv → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 183:BatchNormalization → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 184:Relu → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 185:Conv → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 186:BatchNormalization → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 187:Add → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 188:Relu → (1, 512, 7, 7)
[08/27/2019-20:04:01] [I] [TRT] 189:GlobalAveragePool → (1, 512, 1, 1)
[08/27/2019-20:04:01] [I] [TRT] 190:Constant →
[08/27/2019-20:04:01] [I] [TRT] 191:Shape → (4)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
[08/27/2019-20:04:01] [I] [TRT] 192:Gather →
[08/27/2019-20:04:01] [I] [TRT] 193:Constant →
[08/27/2019-20:04:01] [I] [TRT] 194:Unsqueeze → (1)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
[08/27/2019-20:04:01] [I] [TRT] 195:Unsqueeze → (1)
[08/27/2019-20:04:01] [I] [TRT] 196:Concat → (2)
[08/27/2019-20:04:01] [I] [TRT] 197:Reshape → (-1, -1)
[08/27/2019-20:04:01] [I] [TRT] output:Gemm → (-1, 1)
come in cal batch stream
[08/27/2019-20:04:01] [E] [TRT] (Unnamed Layer* 71) [Shuffle]: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [1,4]
[08/27/2019-20:04:01] [E] [TRT] Builder failed while configuring INT8 mode.
I don't know how to solve this error. Thanks for any help!
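From the layer dump, the failing Shuffle seems to come from the flatten just before the final Gemm: the Shape → Gather → Unsqueeze → Concat → Reshape chain (nodes 191–197) is what `x.view(x.size(0), -1)` traces to. I am guessing the workaround is to flatten without querying the tensor's shape at runtime; a minimal sketch of the two variants (my guess, not a confirmed fix):

```python
import torch
import torch.nn as nn

class DynamicFlattenHead(nn.Module):
    """What I believe my model does now: x.size(0) is traced as a Shape op,
    producing the Shape/Gather/Unsqueeze/Concat/Reshape chain in the log."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 1)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.fc(x)

class StaticFlattenHead(nn.Module):
    """Candidate workaround: torch.flatten(x, 1) exports as a single ONNX
    Flatten node with no runtime shape computation."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 1)

    def forward(self, x):
        x = torch.flatten(x, 1)
        return self.fc(x)
```

Both heads compute the same result for a (1, 512, 1, 1) input; only the exported graph differs. Is this the right direction, or is there a TensorRT-side fix?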