Hi @mchi, thanks for your YOLOv7 QAT work.
I followed your tutorial and QAT training completed successfully. Here is the training log:
```
Loading and preparing results...
pycocotools unable to run: Results do not correspond to current coco set
QAT Finetuning 10 / 10, Loss: 0.67706, LR: 1e-06: 100%|██████████| 590/590 [04:10<00:00, 2.36it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|██████████| 597/597 [01:20<00:00, 7.39it/s]
all 5963 6888 0.953 0.915 0.942 0.809
Evaluating pycocotools mAP... saving qat_models/trained_qat/pgie/1/_predictions.json...
loading annotations into memory...
Done (t=0.44s)
creating index...
index created!
Loading and preparing results...
pycocotools unable to run: Results do not correspond to current coco set
```
After that, I converted the QAT-trained model to ONNX and then tried to convert the ONNX model to a TensorRT engine, but the ONNX-to-TensorRT conversion failed.
Please help me check it. The command I ran:
**/usr/src/tensorrt/bin/trtexec --onnx=qat_models/trained_qat/pgie/1/qat.onnx --int8 --fp16 --workspace=1024000 --minShapes=images:1x3x416x416 --optShapes=images:16x3x416x416 --maxShapes=images:32x3x416x416**
```
[12/04/2023-09:06:58] [I] [TRT] ----------------------------------------------------------------
[12/04/2023-09:06:58] [E] [TRT] ModelImporter.cpp:740: While parsing node number 467 [QuantizeLinear -> "onnx::DequantizeLinear_924"]:
[12/04/2023-09:06:58] [E] [TRT] ModelImporter.cpp:741: --- Begin node ---
[12/04/2023-09:06:58] [E] [TRT] ModelImporter.cpp:742: input: "model.51.cv1.conv.weight"
input: "onnx::QuantizeLinear_921"
input: "onnx::QuantizeLinear_1885"
output: "onnx::DequantizeLinear_924"
name: "QuantizeLinear_467"
op_type: "QuantizeLinear"
attribute {
  name: "axis"
  i: 0
  type: INT
}
[12/04/2023-09:07:36] [E] [TRT] ModelImporter.cpp:729: --- End node ---
[12/04/2023-09:07:36] [E] [TRT] ModelImporter.cpp:732: ERROR: builtin_op_importers.cpp:1216 In function QuantDequantLinearHelper:
[6] Assertion failed: scaleAllPositive && "Scale coefficients must all be positive"
[12/04/2023-09:07:36] [E] Failed to parse onnx file
[12/04/2023-09:07:36] [I] Finish parsing network model
[12/04/2023-09:07:36] [E] Parsing model failed
[12/04/2023-09:07:36] [E] Failed to create engine from model or file.
[12/04/2023-09:07:36] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=qat_models/trained_qat/pgie/1/qat.onnx --int8 --fp16 --workspace=1024000 --minShapes=images:4x3x416x416 --optShapes=images:4x3x416x416 --maxShapes=images:4x3x416x416
```
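For what it's worth, the assertion `Scale coefficients must all be positive` means the scale input of node `QuantizeLinear_467` (tensor `onnx::QuantizeLinear_921`, the per-channel scales for `model.51.cv1.conv.weight`) contains a zero or negative entry. As a diagnostic sketch (not part of the original tutorial), the check is simple once the scale arrays are pulled out of the ONNX initializers; `find_bad_scales` is a hypothetical helper operating on a name-to-array mapping:

```python
import numpy as np

def find_bad_scales(scale_tensors):
    """Return the names of quantization scale tensors containing
    values <= 0, which TensorRT's QDQ importer rejects."""
    bad = []
    for name, arr in scale_tensors.items():
        arr = np.asarray(arr, dtype=np.float64)
        if np.any(arr <= 0):
            bad.append(name)
    return bad

# With the real model you would build the mapping from the ONNX
# initializers (assuming the `onnx` package is installed), e.g.:
#   model = onnx.load("qat_models/trained_qat/pgie/1/qat.onnx")
#   init = {i.name: onnx.numpy_helper.to_array(i)
#           for i in model.graph.initializer}
#   scales = {n.input[1]: init[n.input[1]]
#             for n in model.graph.node
#             if n.op_type in ("QuantizeLinear", "DequantizeLinear")
#             and n.input[1] in init}

# Tiny synthetic example: one healthy per-channel scale, one broken one.
scales = {
    "good_scale": np.array([0.01, 0.02, 0.03]),
    "bad_scale":  np.array([0.01, 0.0, -1e-6]),  # zero/negative entries
}
print(find_bad_scales(scales))  # -> ['bad_scale']
```

Running this against `qat.onnx` should tell you whether only that one weight quantizer is affected or several are.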
The full log is posted on GitHub.
Please help me check it.
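In my understanding, a per-channel scale of exactly zero usually comes from a weight channel whose values are all zero (amax = 0), which the quantizer then exports as scale = 0. If that turns out to be the cause here, one workaround I tried to reason through (my own assumption, not a fix from the tutorial) is to clamp the offending scale initializers to a tiny positive epsilon before re-saving the ONNX model; an all-zero channel quantizes to zero regardless of scale, so the clamp should not change the math. `clamp_scales` below is a hypothetical helper:

```python
import numpy as np

def clamp_scales(scale, eps=1e-7):
    """Replace non-positive entries of a quantization scale array with
    a small positive epsilon so TensorRT's QDQ importer accepts it."""
    out = np.asarray(scale, dtype=np.float32).copy()
    out[out <= 0] = eps
    return out

# Hypothetical patching loop over the exported model (requires the
# `onnx` package); the initializer name comes from the error log above:
#   for init in model.graph.initializer:
#       if init.name == "onnx::QuantizeLinear_921":
#           fixed = clamp_scales(onnx.numpy_helper.to_array(init))
#           init.CopyFrom(onnx.numpy_helper.from_array(fixed, init.name))

print(clamp_scales(np.array([0.02, 0.0, -1e-8])))
```

That said, if the zero scale instead comes from a quantizer that was never calibrated, clamping only masks the problem, and re-running calibration before export would be the cleaner fix.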