I am using a Xavier NX and testing two JetPack versions: 5.1.1 and 5.1.2.
I have a QAT ONNX model, which I converted to a QAT engine on both JP5.1.1 and JP5.1.2.
When I run inference and evaluate the two QAT engines with the Python API, their mAP is the same.
But when I run inference and evaluate the engines with DeepStream (DS6.2, TRT8.5.2 on both JetPack versions, same source code), there is a large mAP gap (~10%) between the two engines: mAP on JP5.1.1 is good, but mAP on JP5.1.2 is bad.
From my investigation, the issue is in the DeepStream inference path (DS6.2, TRT8.5.2).
I also checked the JP5.1.2 release notes and did not find anything suspicious.
Sorry, I cannot share my models.
@fanzh
Thanks for the quick response.
What does GA mean in "5.1 GA"?
I don't see a DS version listed for JP5.1.1 in the above link.
In this link, DS6.2 is compatible with JP5.1.2 (JetPack SDK 5.1.2 | NVIDIA Developer), but in the above link DS6.3 is compatible with JP5.1.2. That is confusing.
Yes. DS6.2+JP5.1.1 gives good mAP. DS6.2+JP5.1.2 and DS6.3+JP5.1.2 give bad mAP on the COCO dataset; the gap is about 10%. For other datasets the gap is not as large.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
Can you try network-mode=0? Maybe the INT8 calibration file is bound to a specific low-level implementation.
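For reference, a minimal sketch of that change in a typical nvinfer config file (the file name config_infer_primary.txt is an assumption; only the network-mode line matters):

```ini
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
# Forcing FP32 bypasses the INT8 calibration cache, so if the mAP gap
# disappears, the calibration file is the likely culprit.
network-mode=0
```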
You can use this method to dump the inference input; then you will know whether the inference inputs are the same.
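Once you have raw input dumps from both JetPack versions, a small script can check whether DeepStream is feeding the same preprocessed tensor in each case. This is only a sketch under the assumption that the dumps are raw little-endian float32 buffers; the file paths and helper names are hypothetical:

```python
import struct

def load_f32(path):
    """Read a raw little-endian float32 dump into a list of floats."""
    with open(path, "rb") as f:
        data = f.read()
    n = len(data) // 4
    return list(struct.unpack("<%df" % n, data[: n * 4]))

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two equal-length dumps."""
    assert len(a) == len(b), "dumps differ in size - preprocessing already diverges"
    return max(abs(x - y) for x, y in zip(a, b))

# Hypothetical usage: compare the same frame's input dumped on each JetPack.
# diff = max_abs_diff(load_f32("input_jp511.bin"), load_f32("input_jp512.bin"))
```

If the difference is zero (or near zero), the inputs match and the gap comes from the engine/inference side; a large difference points at preprocessing instead.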