INT8 calibration is not accurate; see the image diff with and without it.

Without INT8:

With INT8:

This is the output from trt-yolo-app.
Are there additional steps to maintain accuracy during calibration?

Thanks.

Can you guys reply with a yes or no, and a link on how to improve accuracy?

Hello,

Can you provide more details about your calibration process? What is the dataset, and what percentage of the data did you use for calibration?

My first suggestion is to increase the number of calibration images.
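For anyone wondering where that number comes from: the calibration image count is simply however many batches your INT8 calibrator feeds TensorRT through getBatch(); more images give the entropy calibrator better activation histograms. A minimal sketch of such a calibrator for a pre-TensorRT-8 API (this is not the exact trt-yolo-app code; the class name and loadNextBatchToHost are placeholders for your own preprocessing):

[code]
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Minimal INT8 entropy calibrator sketch (TensorRT 5.x-era API, no noexcept).
class EntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    EntropyCalibrator(int batchSize, size_t inputVolume, std::string cachePath)
        : mBatchSize(batchSize), mInputVolume(inputVolume), mCachePath(std::move(cachePath))
    {
        cudaMalloc(&mDeviceInput, mBatchSize * mInputVolume * sizeof(float));
    }
    ~EntropyCalibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    // Called repeatedly during calibration; assumes a single input binding.
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        std::vector<float> host(mBatchSize * mInputVolume);
        if (!loadNextBatchToHost(host.data()))   // returns false when images run out
            return false;                        // calibration stops here
        cudaMemcpy(mDeviceInput, host.data(), host.size() * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;
        return true;
    }

    // If a cache file exists, TensorRT reuses it and skips calibration;
    // delete the file to force a fresh calibration run.
    const void* readCalibrationCache(size_t& length) override
    {
        mCache.clear();
        std::ifstream in(mCachePath, std::ios::binary);
        if (!in) { length = 0; return nullptr; }
        mCache.assign(std::istreambuf_iterator<char>(in),
                      std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.data();
    }

    void writeCalibrationCache(const void* ptr, size_t length) override
    {
        std::ofstream out(mCachePath, std::ios::binary);
        out.write(static_cast<const char*>(ptr), length);
    }

private:
    bool loadNextBatchToHost(float* dst); // placeholder: preprocess the next N images
    int mBatchSize;
    size_t mInputVolume;
    std::string mCachePath;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
[/code]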

Thanks.

I added the same images used for training, i.e. 10000.

File does not exist : data/face_last-kINT8-kGPU-batch1.engine
Unable to find cached TensorRT engine for network : yolov3-tiny precision : kINT8 and batch size :1
Total number of layers: 50
Total number of layers on DLA: 0
Building the TensorRT Engine…
New calibration table will be created to build the engine
Building complete!
Serializing the TensorRT Engine…
Serialized plan file cached at location : data/face_last-kINT8-kGPU-batch1.engine
Loading TRT Engine…
Loading Complete!
Total number of images used for inference : 10000
[======================================================================] 100 %
Network Type : yolov3-tiny Precision : kINT8 Batch Size : 1 Inference time per image : 7.09045 ms

Getting the following output:

The way I see it, the calibration is not good where I am detecting small objects. But there has to be a logical explanation for this. Please share your findings.

Hello,

I’ve done some research but did not find anything directly related to your problem. However, we have an open-source YOLO inference app on GitHub: [url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo[/url].
Maybe you can use it as a reference to fix the problem.

Thank you.

Do you guys even read? I just said I am using the same app, trt-yolo-app, and calibration was done using that same app.

Hello,

Sorry for missing that line.
We have tested the performance of trt-yolo-app and the results are very consistent.
What version of TensorRT are you using?
Also, try using the legacy calibrator instead of the entropy calibrator; that might help with accuracy.
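In the C++ API that means deriving from nvinfer1::IInt8LegacyCalibrator instead of IInt8EntropyCalibrator2. A rough sketch, assuming the same batch and calibration-cache plumbing as an entropy calibrator; the quantile and regression-cutoff values below are just common starting points, not tuned recommendations:

[code]
#include <NvInfer.h>
#include <cstddef>

// Legacy (percentile-based) calibrator sketch. The batch and calibration-cache
// methods are declared but not shown; they are identical to the entropy version.
class LegacyCalibrator : public nvinfer1::IInt8LegacyCalibrator
{
public:
    double getQuantile() const override { return 0.9999; }      // assumed starting point; tune per network
    double getRegressionCutoff() const override { return 1.0; } // assumed starting point

    const void* readHistogramCache(std::size_t& length) override
    {
        length = 0;
        return nullptr;   // no cached histograms; recompute each run
    }
    void writeHistogramCache(const void* /*ptr*/, std::size_t /*length*/) override {}

    // Same plumbing as an entropy calibrator:
    int getBatchSize() const override;
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override;
    const void* readCalibrationCache(std::size_t& length) override;
    void writeCalibrationCache(const void* ptr, std::size_t length) override;
};
[/code]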

Thank you.

TensorRT 5.1.2.2

This is the most popular topic for people right now, i.e. increasing speed without losing accuracy, so I would request you to kindly come straight to the point and help me close this issue once and for all.
I can share the trained model and cfg file so you can see the accuracy yourself and make suggestions.

This case is with engineering. We will keep you updated.

Any update regarding this?

Can anyone from NVIDIA bother to share some insights? It’s been two months.

I have the same problem. My YOLOv3 (built in INT8 by TF-TRT) has much lower recall, about 47%, while the original YOLOv3’s recall is 80%. Why does this happen?

Hi, have you solved this problem? I am facing the same situation. Could you give some suggestions? Thanks!

Hello,
Are you able to read the calibration table?
I mean, if you have something like this:

yolo_107: 3cb957a3

How can you read these hex numbers to understand which threshold the calibration process chose for that layer?

Best, Fares
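Not an official answer, but as far as I know the hex value in the calibration table is just the raw IEEE 754 bit pattern of the per-tensor float32 scale, and the dynamic range (threshold) is scale * 127. A small sketch that decodes one entry:

[code]
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

// Decode one calibration-table entry, assuming the hex string is the
// raw IEEE 754 bit pattern of the per-tensor float32 scale.
int main()
{
    std::string hex = "3cb957a3";                  // e.g. from "yolo_107: 3cb957a3"
    uint32_t bits = std::stoul(hex, nullptr, 16);
    float scale;
    std::memcpy(&scale, &bits, sizeof(scale));     // reinterpret the bits as a float
    std::printf("scale = %g, dynamic range = %g\n", scale, scale * 127.0f);
    // Prints roughly: scale = 0.0226249, dynamic range = 2.87337
    return 0;
}
[/code]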

Any updates here? I am facing the same problem.

NVIDIA, if you want us to sell your hardware, you have to resolve issues. You cannot just stay mum about it. If you cannot solve it, close this issue and move on. End of story.

Hi,
I am facing the same problem. I have my own model based on the YOLO architecture. I am able to successfully convert it in FP32 and FP16 mode, and it gives pretty good inference time with the same accuracy as the TensorFlow model; however, I am not able to get good results in INT8 precision mode.

How do I validate the generated calibrate_cache.bin file?
(The generated INT8 engine does not detect any objects in the image.)

Any help would be appreciated.

Thank you.
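One general way to sanity-check a calibration cache (a common approach, not an official validator): parse it line by line and decode each scale the same way as in the earlier snippet; all-zero or absurdly large scales usually indicate a broken calibration run. A sketch, assuming the usual text format of a TensorRT calibration cache (a version header line followed by "tensorName: hexScale" lines):

[code]
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <string>

// Sanity-check a TensorRT calibration cache: print every per-tensor scale and
// flag entries that decode to zero (a common sign of a broken calibration run).
int main(int argc, char** argv)
{
    std::ifstream in(argc > 1 ? argv[1] : "calibrate_cache.bin");
    std::string line;
    while (std::getline(in, line))
    {
        auto sep = line.rfind(": ");
        if (sep == std::string::npos) continue;   // skip the version header line
        std::string name = line.substr(0, sep);
        uint32_t bits = std::stoul(line.substr(sep + 2), nullptr, 16);
        float scale;
        std::memcpy(&scale, &bits, sizeof(scale));
        std::printf("%-30s scale=%-12g range=%-12g %s\n", name.c_str(),
                    scale, scale * 127.0f, scale == 0.0f ? "<-- suspicious" : "");
    }
    return 0;
}
[/code]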

Any update on this? I converted a YOLOv4 model to INT8, but the mAP drops significantly compared to FP16.
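For what it’s worth, a workaround commonly used for YOLO INT8 accuracy drops (not something confirmed by NVIDIA in this thread) is to keep the precision-sensitive layers, typically the detection heads, out of INT8 via per-layer precision constraints. A sketch against a TensorRT 7-era C++ API (kSTRICT_TYPES was later deprecated in favor of kOBEY_PRECISION_CONSTRAINTS); the "yolo" name match is only a placeholder for your own layer selection:

[code]
#include <NvInfer.h>
#include <string>

// Mixed-precision fallback sketch: enable INT8 with calibration, but pin
// selected layers to FP16. config, network, and calibrator come from the
// usual builder setup.
void enableInt8WithFp16Fallback(nvinfer1::IBuilderConfig* config,
                                nvinfer1::INetworkDefinition* network,
                                nvinfer1::IInt8Calibrator* calibrator)
{
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
    config->setFlag(nvinfer1::BuilderFlag::kSTRICT_TYPES); // honor per-layer precisions
    config->setInt8Calibrator(calibrator);

    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        // Placeholder heuristic: keep detection-head layers in FP16.
        if (std::string(layer->getName()).find("yolo") != std::string::npos)
        {
            layer->setPrecision(nvinfer1::DataType::kHALF);
            for (int j = 0; j < layer->getNbOutputs(); ++j)
                layer->setOutputType(j, nvinfer1::DataType::kHALF);
        }
    }
}
[/code]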