When running INT8 calibration, we run into the following error:
[TensorRT] ERROR: Tensor conv_layer6 is uniformly zero; network calibration failed.
python: ../builder/cudnnBuilder2.cpp:1227: nvinfer1::cudnn::Engine*
nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, const nvinfer1::cudnn::HardwareContext&,
const nvinfer1::Network&): Assertion `it != tensorScales.end()' failed.
Aborted
Our calibration is based on the tutorial at https://devblogs.nvidia.com/int8-inference-autonomous-vehicles-tensorrt/, adapted for the COCO dataset and for a model similar to, but larger than, VGG16.
Based on the error message, we assume that no reasonable scale factor can be computed for this specific tensor, since it is uniformly zero. Having no valid scaling factor for this one layer then fails the entire calibration.
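To illustrate why an all-zero tensor breaks calibration, here is a toy numpy sketch of a max-abs INT8 scale computation. This is a deliberate simplification, not TensorRT's actual entropy calibration, but it shows the same failure mode: a uniformly zero activation tensor admits no meaningful scale.

```python
import numpy as np

def int8_scale(activations: np.ndarray) -> float:
    # Toy max-abs scheme: map the observed float range onto [-127, 127].
    # (TensorRT's real entropy calibration is more sophisticated, but it
    # likewise has no sensible answer for an all-zero tensor.)
    amax = float(np.abs(activations).max())
    if amax == 0.0:
        raise ValueError("tensor is uniformly zero; no valid scale factor")
    return amax / 127.0

# A normal activation tensor yields a positive scale.
print(int8_scale(np.array([0.5, -1.0, 0.25])))  # 1.0 / 127

# An all-zero tensor reproduces the calibration failure.
try:
    int8_scale(np.zeros(8, dtype=np.float32))
except ValueError as e:
    print(e)
```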
This error does not occur when
- we turn off the normalization (multiplication by 1/255) in preprocessing, i.e. the input values are larger, or
- we use an older version of our network that has fewer average-pooling and deconvolution layers.
Any ideas on how to prevent the tensors from becoming uniformly zero?