When are IInt8Calibrator::readCalibrationCache/writeCalibrationCache called?

There is a “sampleINT8” sample in TensorRT.
1. How do we control whether the IBuilder calls IInt8Calibrator::readCalibrationCache or IInt8Calibrator::writeCalibrationCache?
The first time, we need to generate the CalibrationTable file; it seems the “search” option on the sampleINT8 command line is used to generate one. Can anyone confirm this?
I read and debugged the code; it seems IBuilder::buildCudaEngine calls IInt8Calibrator::writeCalibrationCache when IInt8Calibrator::readCalibrationCache returns nullptr. A minimal sketch of these two hooks is below.
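For reference, here is what I mean by the two hooks (hypothetical MinimalCalibrator class; I believe this matches the IInt8EntropyCalibrator interface used by sampleINT8, with data loading and error handling omitted):

//MinimalCalibrator sketch (illustration only, not part of sampleINT8)
#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"

class MinimalCalibrator : public nvinfer1::IInt8EntropyCalibrator
{
public:
    int getBatchSize() const override { return 100; }

    // Supplies the next calibration batch in device memory; returns false when done.
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        // ... copy the next batch to the GPU and fill bindings[] ...
        return false; // placeholder: no batches left
    }

    // Returning nullptr here is what makes the builder run calibration and
    // then call writeCalibrationCache() with the freshly computed table.
    const void* readCalibrationCache(size_t& length) override
    {
        std::ifstream input("CalibrationTable", std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(input),
                      std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream output("CalibrationTable", std::ios::binary);
        output.write(static_cast<const char*>(cache), length);
    }

private:
    std::vector<char> mCache;
};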
2. Does ICudaEngine serialize the calibration table into gieModelStream, so that no IInt8Calibrator is needed for deserializeCudaEngine? A sketch of what I mean is below.
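For illustration (assuming the TensorRT 3.x C++ API, where serialize() returns an IHostMemory; gLogger and engine are as in the samples):

//serialize/deserialize sketch (illustration only, not part of sampleINT8)
// If the calibration scales are baked into the serialized engine,
// deserialization should need no calibrator at all:
nvinfer1::IHostMemory* gieModelStream = engine->serialize();

nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
nvinfer1::ICudaEngine* restored = runtime->deserializeCudaEngine(
    gieModelStream->data(), gieModelStream->size(),
    nullptr); // nullptr = no plugin factory; note there is no IInt8Calibrator argument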
3. Why is search turned off when using the CalibrationAlgoType::kENTROPY_CALIBRATION algorithm?
Here is the code:

//sampleINT8.cpp
int main(int argc, char** argv)
{
...
    if (calibrationAlgo == CalibrationAlgoType::kENTROPY_CALIBRATION)
    {
        search = false;
    }
...
}

I just commented these lines out, and it doesn’t crash. Does anyone know the reason?

//sampleINT8.cpp
int main(int argc, char** argv)
{
...
    int batchSize = 100, firstScoreBatch = 100, nbScoreBatches = 400;	// by default we score over 40K images starting at image 10000, so we don't score those used for calibration search
...
}

4. What is the recommended input count for INT8 calibration (40K images?), and can we use a different batch size for calibration and for inference?
By “different” I mean: I use batchSize = 100 to generate the CalibrationTable, but then call IExecutionContext::execute(batchSize = 4, …). See the sketch below.
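To make this concrete, a sketch of the two batch sizes (builder, network, calibrator and buffers as in sampleINT8; my understanding is that the runtime batch size only has to be <= the engine’s maxBatchSize):

//batch-size sketch (illustration only, not part of sampleINT8)
builder->setMaxBatchSize(100);           // upper bound for execute() at runtime
builder->setInt8Mode(true);
builder->setInt8Calibrator(&calibrator); // the calibrator itself feeds batches of 100
nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

nvinfer1::IExecutionContext* context = engine->createExecutionContext();
context->execute(4, buffers);            // inference with batchSize = 4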

Hi,

Please check this tutorial first:

Thanks.

I read the blog; here are my answers:

  1. writeCalibrationCache is called in the “Creating a Calibration Cache Using the Python API” section; readCalibrationCache is called in the “Optimizing the INT8 Model on DRIVE PX” section.
  2. Yes.
  3. You can use the EntropyCalibrator to create a calibration cache with TensorRT 3.x (via the Python interface).
  4. Quoting the blog: “As you will see, the calibration algorithm can achieve good accuracy with just 100 random images! I recommend running the entire validation dataset…”
    There is also a discussion of usage scenarios in the blog.

Thanks.