int8 conversion attempts to calibrate Const nodes, producing bad output

Software version: TensorRT 5.1.2rc

Problem summary: int8 conversion attempts to calibrate Const nodes, producing bad output.

Suppose you use a TensorFlow op such as tf.gather, which takes constant arguments, for example:
row = tf.gather(net, 0, axis=0)
Then the frozen TensorFlow model has a node of type GatherV2 with three inputs:
input: "bn2/Reshape"
input: "bn2/GatherV2/indices"
input: "bn2/GatherV2/axis"

The last two inputs are nodes of type "Const" (corresponding to the indices and axis, which are both constants).
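
For reference, a minimal sketch (hypothetical tensor names and shapes, TensorFlow 1.x assumed) that reproduces this graph structure and prints the Const inputs of the GatherV2 node:

    import tensorflow as tf
    from tensorflow.python.framework import graph_util

    # Hypothetical minimal graph: a single gather with constant indices/axis.
    net = tf.placeholder(tf.float32, shape=[4, 8], name="net")
    row = tf.gather(net, 0, axis=0)  # indices=0 and axis=0 become Const nodes

    with tf.Session() as sess:
        frozen = graph_util.convert_variables_to_constants(
            sess, sess.graph_def, [row.op.name])
        for node in frozen.node:
            if node.op == "GatherV2":
                # Expect inputs like [..., ".../indices", ".../axis"]
                print(node.name, list(node.input))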

Upon conversion to a TensorRT engine (in the call to buildCudaEngine) with setInt8Mode(true),
the list of layers being created includes:
bn2/GatherV2/indices
(Unnamed Layer* 374) [Constant]
bn2/GatherV2
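
The builder configuration is essentially the following (a rough sketch using the TensorRT 5.x Python bindings rather than the C++ calls named above; MyCalibrator, the tensor names, and the UFF file name are placeholders, not the actual setup):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)

    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        # Placeholder model/tensor names; the real network is the frozen TF graph.
        parser.register_input("net", (4, 8))
        parser.register_output("bn2/GatherV2")
        parser.parse("model.uff", network)

        builder.max_workspace_size = 1 << 30
        builder.int8_mode = True                  # Python equivalent of setInt8Mode(true)
        builder.int8_calibrator = MyCalibrator()  # placeholder calibrator implementation
        engine = builder.build_cuda_engine(network)  # equivalent of buildCudaEngine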

There are two (related) problems:

  1. The builder then emits the following warning:
    Tensor (Unnamed Layer* 374) [Constant]_output is uniformly zero; network calibration failed.
    but of course that layer is zero, since it is a constant specified in user code with a value of 0.
    No calibration of a Constant node seems necessary.

  2. More importantly, the generated TensorRT int8 engine is of extremely low quality, producing output that is only 0.15 to 0.50 correlated with the correct output (correlation measured as sketched after this list).
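
For clarity, the correlation figure above refers to a simple Pearson correlation between the engine output and a reference output; a minimal sketch of that measurement (array names are placeholders, not the actual test harness):

    import numpy as np

    def output_correlation(ref_output, int8_output):
        # Pearson correlation between the flattened reference and int8 outputs.
        return np.corrcoef(ref_output.ravel(), int8_output.ravel())[0, 1]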

Regarding (2), the poor int8 network fidelity: further testing suggests that it was not caused by (1), and that the poor quality may be intrinsic to this particular problem rather than indicative of any defect in TensorRT.

However, I believe (1) is still an issue (though possibly only a misleading/unnecessary warning message).