Please provide the following info (check/uncheck the boxes after creating this topic):

Software Version
[ ] DRIVE OS Linux 5.2.6
[ ] DRIVE OS Linux 5.2.6 and DriveWorks 4.0
[*] DRIVE OS Linux 5.2.0
[ ] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
[ ] NVIDIA DRIVE™ Software 10.0 (Linux)
[ ] NVIDIA DRIVE™ Software 9.0 (Linux)
[ ] other DRIVE OS version
[ ] other

SDK Manager Version
[ ] 1.7.1.8928
[*] other: 1.7.3.9053

Host Machine Version
[ ] native Ubuntu 18.04
[*] other: Ubuntu 20
My segmentation model deployed on Xavier has bad accuracy when quantized to INT8.
When I deploy it with TensorRT 6.0 INT8, the accuracy is good.
I have figured out that it seems to be something about the Deconv layers.
Is this a known issue?
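To clarify what I mean by "something about the Deconv layers": the kind of isolation test I have in mind is to keep the deconvolution layers in FP32 while the rest of the network is built in INT8, roughly like the sketch below. This is illustrative only, not code from my actual repro; the helper name and the strict-types flag are just how I would write it (the exact flag name differs between TensorRT versions).

```cpp
#include <NvInfer.h>

// Hypothetical helper: pin every deconvolution layer to FP32 while the rest of
// the network is quantized to INT8, to isolate the source of the accuracy drop.
// Call this on the parsed network before building the engine.
void forceDeconvFP32(nvinfer1::INetworkDefinition& network,
                     nvinfer1::IBuilderConfig& config)
{
    using namespace nvinfer1;
    for (int i = 0; i < network.getNbLayers(); ++i)
    {
        ILayer* layer = network.getLayer(i);
        if (layer->getType() == LayerType::kDECONVOLUTION)
        {
            layer->setPrecision(DataType::kFLOAT);      // run this layer in FP32
            layer->setOutputType(0, DataType::kFLOAT);  // keep its output in FP32
        }
    }
    // Ask the builder to honor the per-layer precision constraints
    // (kSTRICT_TYPES is the TensorRT 6/7 flag name).
    config.setFlag(BuilderFlag::kSTRICT_TYPES);
}
```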
I can upload my model and data if needed.
Thanks!
This is the backbone of my model, together with a single calibration data file.
Anyone who builds an INT8 engine from this ONNX file and runs calibration with the single data file (I did this through the C++ APIs) will get bad results at inference compared with the original FP32 model. 0.bin (16.9 MB) model.onnx (28.5 MB)
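Roughly, the build and calibration flow is something like the sketch below. This is a simplified illustration, not my exact code: the class and variable names and the input byte size are placeholders, and the exact virtual-function signatures differ slightly across TensorRT versions (newer releases add noexcept); this is written against the TensorRT 6.x style headers.

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cuda_runtime.h>
#include <fstream>
#include <iostream>
#include <vector>

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

// Calibrator that feeds the single preprocessed sample (0.bin) exactly once.
// Assumes the network has one input binding; the byte size must match it.
class SingleFileCalibrator : public IInt8EntropyCalibrator2
{
public:
    SingleFileCalibrator(const std::string& file, size_t byteSize) : mByteSize(byteSize)
    {
        std::ifstream in(file, std::ios::binary);
        mHostData.resize(byteSize);
        in.read(mHostData.data(), byteSize);
        cudaMalloc(&mDeviceInput, byteSize);
    }
    ~SingleFileCalibrator() { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return 1; }

    bool getBatch(void* bindings[], const char* /*names*/[], int /*nbBindings*/) override
    {
        if (mDone) return false;  // only one calibration batch
        cudaMemcpy(mDeviceInput, mHostData.data(), mByteSize, cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;
        mDone = true;
        return true;
    }

    const void* readCalibrationCache(size_t& length) override { length = 0; return nullptr; }
    void writeCalibrationCache(const void* /*cache*/, size_t /*length*/) override {}

private:
    std::vector<char> mHostData;
    size_t mByteSize{0};
    void* mDeviceInput{nullptr};
    bool mDone{false};
};

int main()
{
    Logger logger;
    IBuilder* builder = createInferBuilder(logger);
    const auto flags = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(flags);

    auto parser = nvonnxparser::createParser(*network, logger);
    parser->parseFromFile("model.onnx", static_cast<int>(ILogger::Severity::kWARNING));

    // 1 x 3 x 512 x 512 float is a placeholder; use the model's real input shape.
    SingleFileCalibrator calibrator("0.bin", 1 * 3 * 512 * 512 * sizeof(float));

    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);
    config->setFlag(BuilderFlag::kINT8);
    config->setInt8Calibrator(&calibrator);

    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    // ... serialize the engine, run inference, and compare against the FP32 build ...
    return engine != nullptr ? 0 : 1;
}
```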
If you need more information, I am happy to provide it.
Thanks!
This forum is for developers in the NVIDIA DRIVE™ AGX SDK Developer Program. We will need you to use an account with a corporate or university email address.
Alternatively, you can change your current account to use a corporate or university email address by following these steps: My Profile | NVIDIA Developer → “Edit Profile” → “Change email” → “CHANGE”
Sorry for any inconvenience.
Which Xavier platform are you using? Jetson or Drive?
Sorry for the account issue. Due to miscommunication within our company, I cannot get the right email address soon. However, the problem is urgent and we really need your help.
It is the Drive platform.
Thanks!