My environment is a Xeon E5-2620 + NVIDIA T4.
I ran the sample_int8 mnist sample successfully inside the tensorrt:19.10-py3 container;
the code is the same as https://github.com/NVIDIA/TensorRT/blob/release/6.0/samples/opensource/sampleINT8/sampleINT8.cpp
After execution it produced CalibrationTablemnist, whose contents had many ": value" entries.
According to the README: “value” corresponds to the floating point activation scales determined during calibration for each tensor in the network. The actual ": value" entries are listed below:
1. Each value is huge, much greater than 127 (the max value of int8). How can these values be used for int8 calibration? Can you give me an explanation or an example of how to convert these values to the range [-127, 127]?
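For reference, my current understanding of how a per-tensor scale would be applied is sketched below. This is not code from the sample; the activation values and the scale here are made-up examples, not values from my calibration table:

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor quantization: q = clamp(round(x / scale), -127, 127)."""
    q = np.round(x / scale)
    return np.clip(q, -127, 127).astype(np.int8)

# hypothetical fp32 activations and a hypothetical scale
acts = np.array([-1.5, 0.0, 0.25, 3.2], dtype=np.float32)
scale = 0.025
print(quantize_int8(acts, scale))  # values outside +/-127*scale saturate to +/-127
```

Is this the right mental model, and if so, how do the huge table values map onto such a scale?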
2. My input dataset was downloaded from http://yann.lecun.com/exdb/mnist/, but sample_int8 parses 12800 images, not the 40000 the README states. Why?
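To sanity-check the dataset on my side, I read the image count straight from the idx3-ubyte header (big-endian magic, count, rows, cols). This is my own snippet, not part of the sample; the demo writes a synthetic header instead of opening a real train-images-idx3-ubyte file:

```python
import struct, tempfile, os

def mnist_image_count(path):
    """Return the image count from an MNIST idx3-ubyte file header."""
    with open(path, "rb") as f:
        magic, count, rows, cols = struct.unpack(">IIII", f.read(16))
        if magic != 2051:  # idx3-ubyte magic number for image files
            raise ValueError("not an idx3-ubyte image file")
        return count

# demo with a synthetic header (a real training file reports 60000)
demo = tempfile.NamedTemporaryFile(delete=False)
demo.write(struct.pack(">IIII", 2051, 60000, 28, 28))
demo.close()
print(mnist_image_count(demo.name))
os.unlink(demo.name)
```

The real files report 60000 (train) and 10000 (test) images, so I don't see where either 12800 or 40000 comes from.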