Result of DLA (INT8) is bad

Software Version: DRIVE OS Linux 5.2.0 and DriveWorks 3.5
Target Operating System: Linux
Hardware Platform: NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
SDK Manager Version: 1.6.0.8170
Host Machine Version: native Ubuntu 18.04

I used the same calibration table to quantize the model on both the GPU and the DLA; the GPU result is good, but the DLA result is bad.
So, can a model quantized for the DLA not use the calibration table created on the GPU?
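
(For context, building an INT8 engine for the DLA while reusing a calibration cache looks roughly like the sketch below in the TensorRT 7.x C++ API. The helper name buildDlaInt8Engine, the workspace size, and the single-DLA-core choice are illustrative assumptions, not the poster's actual code.)

```cpp
#include "NvInfer.h"

// Sketch: build an INT8 engine targeting DLA, reusing a calibrator that
// replays an existing calibration cache. `calibrator` is assumed to be
// an application-side IInt8EntropyCalibrator2 instance.
nvinfer1::ICudaEngine* buildDlaInt8Engine(nvinfer1::IBuilder* builder,
                                          nvinfer1::INetworkDefinition* network,
                                          nvinfer1::IInt8Calibrator* calibrator)
{
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

    // INT8 mode; scales come from the calibrator (or its cached table).
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    config->setInt8Calibrator(calibrator);

    // Run on DLA core 0 rather than the GPU.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);

    config->setMaxWorkspaceSize(1u << 30); // 1 GiB scratch space

    nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    config->destroy();
    return engine;
}
```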

Hi,

May I know how you generate the calibration cache?
Do you generate it on the AGX or a desktop server?

Thanks

Hi @AastaLLL,
I wrote an executable, modeled on the TensorRT samples, to convert the models.
The calibration table is generated on the Xavier.

Hi,

Could you share the way you generate the calibration cache?
For DLA, it’s recommended to use IInt8EntropyCalibrator2 (entropy calibration v2).

https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#optimizing_int8_c

Thanks.
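
(A minimal skeleton of such a calibrator, assuming TensorRT 7.x method signatures, which TensorRT 8 changes by adding noexcept, and a hypothetical on-disk cache path. getBatch() is stubbed here; a real calibrator would copy preprocessed batches into device memory and fill `bindings`.)

```cpp
#include "NvInfer.h"
#include <cstddef>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Skeleton of an entropy-v2 calibrator that persists its cache to disk,
// so the same table can be replayed across builds.
class EntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    explicit EntropyCalibrator(const char* cacheFile) : mCacheFile(cacheFile) {}

    int getBatchSize() const override { return 1; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        // Stub: return false once all calibration batches are consumed.
        return false;
    }

    const void* readCalibrationCache(std::size_t& length) override
    {
        // Reuse an existing cache if present; returning nullptr forces
        // TensorRT to recalibrate from getBatch() data.
        mCache.clear();
        std::ifstream in(mCacheFile, std::ios::binary);
        if (in)
        {
            mCache.assign(std::istreambuf_iterator<char>(in),
                          std::istreambuf_iterator<char>());
        }
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, std::size_t length) override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    std::string mCacheFile;
    std::vector<char> mCache;
};
```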

Hi @AastaLLL,
Yes, I use IInt8EntropyCalibrator2.
The log "DLA Node compilation failed" appears during the build process.
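
(One way to localize that error, a sketch assuming the TensorRT 7.x API rather than the code from this thread: log at verbose severity and enable GPU fallback, so the build reports which node the DLA rejects instead of aborting.)

```cpp
#include "NvInfer.h"
#include <iostream>

// Print every builder message, including per-layer DLA placement and
// compilation details, to identify the failing node.
class VerboseLogger : public nvinfer1::ILogger
{
    void log(Severity, const char* msg) override
    {
        std::cout << msg << std::endl;
    }
};

int main()
{
    VerboseLogger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

    // Layers the DLA cannot compile run on the GPU instead of failing
    // the whole build.
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);

    // ... define the network, set DLA/INT8 options, and build as before ...

    config->destroy();
    builder->destroy();
    return 0;
}
```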

Hi,

Could you share the model, the calibration app, and the calibration cache with us, so we can check them internally?

Thanks.