ERROR: Tensor is uniformly zero; network calibration failed.

When running INT8 calibration, we run into this error:

[TensorRT] ERROR: Tensor conv_layer6 is uniformly zero; network calibration failed.
python: ../builder/cudnnBuilder2.cpp:1227: nvinfer1::cudnn::Engine*
nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, const nvinfer1::cudnn::HardwareContext&,
const nvinfer1::Network&): Assertion `it != tensorScales.end()' failed.

Our calibration is based on the tutorial, adapted for the COCO dataset and for a model similar to (but larger than) VGG16.

Based on the error message, we assume that no reasonable scale factor can be computed for this specific tensor because it is all zeros. Without a valid scaling factor for this layer, the entire calibration fails.
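Roughly, per-tensor calibration maps the tensor's dynamic range onto the INT8 range. A minimal sketch of why an all-zero tensor leaves nothing to compute (this uses simplified max-abs scaling for illustration; TensorRT's entropy calibration works on an activation histogram, but fails the same way):

```python
def int8_scale(activations):
    # Simplified symmetric INT8 quantization: pick the scale that maps
    # the largest |value| onto 127. TensorRT's actual entropy calibration
    # is more elaborate (KL divergence over a histogram), but the failure
    # mode is identical: an all-zero tensor has no dynamic range to fit.
    max_abs = max(abs(v) for v in activations)
    if max_abs == 0.0:
        raise ValueError("tensor is uniformly zero; no scale can be computed")
    return max_abs / 127.0

print(int8_scale([-0.5, 0.25, 1.0]))  # 1.0 / 127
try:
    int8_scale([0.0] * 8)
except ValueError as e:
    print("calibration would fail:", e)
```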

This error does not occur when

  • I turn off the normalization (... times 1/255) in preprocessing (i.e. the input values are larger), or
  • I use an older version of our network that has fewer average pooling and deconvolution layers.
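Given the first point above, one quick sanity check (a hypothetical helper, not part of the TensorRT API) is to log the dynamic range of each preprocessed calibration batch before it reaches the calibrator:

```python
def batch_stats(batch):
    # Min, max, and max |value| of a flat list of preprocessed input values.
    # A tiny max_abs after the 1/255 scaling makes it more plausible that
    # some downstream activation collapses to exactly zero over the whole
    # calibration set.
    vals = list(batch)
    lo, hi = min(vals), max(vals)
    return lo, hi, max(abs(lo), abs(hi))

print(batch_stats([x / 255.0 for x in (0, 64, 128, 255)]))  # (0.0, 1.0, 1.0)
```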

Any ideas on how to prevent the tensors from becoming uniformly zero?

I’m also seeing the same error when attempting INT8 calibration with TensorRT 3.0.4. I’m not performing any normalization (i.e. the calibration set is basically consistent with the eval set), and I’ve verified that without INT8, these datapoints successfully eval. With the exact same code, I was also able to successfully calibrate a different network with one fewer pooling operation.

The error message is pretty unhelpful for debugging. Is there any way to get more insight into what’s going on? FWIW, since calibration fails, no calibration table is produced to aid in debugging.
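Since TensorRT produces no calibration table on failure, one way to get insight is to reproduce the zero check outside TensorRT: run the calibration images through the FP32 model in the training framework, dump each layer's output, and scan for all-zero tensors. A minimal sketch (the dump-to-dict step is framework-specific and omitted here):

```python
def find_zero_tensors(activations):
    # `activations` maps tensor name -> flat list of values, e.g. dumped
    # from an FP32 run of the model over the calibration set. Any name
    # returned here is a tensor that would trip the "uniformly zero"
    # check during INT8 calibration.
    return [name for name, vals in activations.items()
            if all(v == 0.0 for v in vals)]

acts = {"conv_layer5": [0.1, 0.0, 0.7], "conv_layer6": [0.0, 0.0, 0.0]}
print(find_zero_tensors(acts))  # ['conv_layer6']
```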

Hi, have any of you found a solution to this error?

I am also running into this issue.
My model works correctly if I keep it in FP32 mode.
Furthermore, if I slice the network so that the layer in question is the final output layer, or one of the layers shortly after it, I no longer have this issue.
This is with TensorRT, which I need to use for my Drive PX2.
At this point, this feels like a bug in TensorRT.