INT8 YOLO calibration cache from tensorrt_demos fails to build an engine in DeepStream

Here are the files.

When I used the calibration caches generated by tensorrt_demos within that repo, they all worked fine. But when I moved a cache into DeepStream, engine building failed with the following error:

ERROR: [TRT]: Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
ERROR: [TRT]: Builder failed while configuring INT8 mode.
Building engine failed!
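For reference, this is roughly how I point DeepStream's nvinfer element at the cache; the file names below are placeholders for my setup, not the actual paths:

```ini
[property]
# network-mode=1 selects INT8 precision in nvinfer
network-mode=1
# placeholder paths; the cache file is the one copied from tensorrt_demos
int8-calib-file=calib_yolo.bin
custom-network-config=yolov3.cfg
model-file=yolov3.weights
```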

I also tried the yolov3-tiny cache as you suggested, and the same thing happened: the cache only works within the tensorrt_demos repo and cannot be transferred to DeepStream. The error is identical to the one above.
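To check whether the cache itself is the problem, I inspected it with a small script. As I understand it (this is an assumption based on the caches I have, not official documentation), the cache is a text file: one header line identifying the TensorRT version and calibrator, then one `tensor_name: hex_scale` line per tensor. If the tensor names recorded in the cache do not match the tensor names in the network that DeepStream builds, the scales cannot be applied, which would explain the "no scaling factors detected" error.

```python
def parse_calib_cache(data: bytes):
    """Return (header, {tensor_name: raw_hex_scale}) from calibration
    cache bytes. Assumes the plain-text cache layout described above."""
    lines = data.decode("ascii", errors="replace").splitlines()
    header, scales = lines[0], {}
    for line in lines[1:]:
        # Each remaining line should look like "tensor_name: hex_scale"
        if ":" in line:
            name, value = line.rsplit(":", 1)
            scales[name.strip()] = value.strip()
    return header, scales

if __name__ == "__main__":
    # Synthetic example bytes, shaped like the caches I generated
    sample = b"TRT-7000-EntropyCalibration2\n000_convolutional: 3c010a14\n001_bn: 3d8d6a43\n"
    header, scales = parse_calib_cache(sample)
    print(header)          # calibrator/version header line
    print(sorted(scales))  # tensor names with recorded scales
```

Comparing the header and tensor names between a cache that works and the network DeepStream constructs might show whether this is a TensorRT version mismatch or a layer-naming mismatch.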