onnx->TRT int8 warnings: WARNING: Tensor (Unnamed Layer* 103) [Constant]_output is uniformly zero

Description

I have a detector model that I successfully converted from ONNX to an int8 TensorRT engine using post-training quantization with a representative calibration dataset of 1000+ images.
During the ONNX->TRT conversion process, I get the following warnings:

2020-12-24 15:04:51 - main - INFO - Building trt engine for precision: int8with dla id: 0
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
[TensorRT] WARNING: Tensor (Unnamed Layer* 103) [Constant]_output is uniformly zero.
[TensorRT] WARNING: Tensor (Unnamed Layer* 104) [Shuffle]_output is uniformly zero.
[TensorRT] WARNING: Tensor (Unnamed Layer* 283) [Constant]_output is uniformly zero.
[TensorRT] WARNING: Tensor (Unnamed Layer* 284) [Shuffle]_output is uniformly zero.

I applied the required input preprocessing before calibration: the same preprocessing we apply to an input image before running inference.

What might cause these warnings, and what should be done to resolve them properly during the conversion process?

Environment

TensorRT Version : 7.1.2
CUDA Version : 11.0
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) : 3.6
TensorFlow Version (if applicable) : The model was trained on TensorFlow 2.3, converted to ONNX, and then converted to a TensorRT engine.

I can’t share the relevant model for this.

Any help will be appreciated!

Hi, could you please share the ONNX model and the script so that we can assist you better?

In the meantime, you can try validating your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
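For reference, a minimal trtexec invocation for an INT8 build targeting DLA might look like the sketch below. The file names (model.onnx, calib.cache, model_int8.engine) are placeholders, and the exact flag set should be adjusted for your model and TensorRT version:

```shell
# Sketch: build an INT8 TensorRT engine from an ONNX model on DLA core 0.
# Paths are placeholders; --calib points to a previously generated calibration cache.
trtexec --onnx=model.onnx \
        --int8 \
        --calib=calib.cache \
        --useDLACore=0 \
        --allowGPUFallback \
        --saveEngine=model_int8.engine \
        --verbose
```

Running with --verbose will print per-layer build information, which can help identify which original ONNX nodes correspond to the unnamed Constant/Shuffle layers in the warnings.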

Thanks!