buildCudaEngine(*network) runs forever

Description

I’m running calibration for a model optimized in INT8 mode.

I noticed that this function call runs for hours and never finishes.

I am using a very simple model with just 1000 calibration samples.
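
For context, the build path looks roughly like the sketch below. This is a simplified outline with placeholder names (logger, network, and calibrator construction omitted), not my exact code, but the batch size, workspace size, and the buildCudaEngine call are the ones in question:

    #include <NvInfer.h>

    // Simplified outline of the INT8 build (placeholder names, not the exact code):
    // a calibrator feeds the ~1000 samples, and buildCudaEngine() never returns.
    nvinfer1::ICudaEngine* buildInt8Engine(nvinfer1::IBuilder* builder,
                                           nvinfer1::INetworkDefinition* network,
                                           nvinfer1::IInt8Calibrator* calibrator)
    {
        builder->setMaxBatchSize(1000);            // Max-Batch-Size:1000
        builder->setMaxWorkspaceSize(1ULL << 30);  // Max-Workspace-Size:1073741824
        builder->setInt8Mode(true);                // Precision:INT8
        builder->setInt8Calibrator(calibrator);    // Calibration:1
        return builder->buildCudaEngine(*network); // hangs here for hours
    }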

Note that the network name is:
TF:2.4.1, TRT:7.2.2-Precision:INT8, Calibration:1, Max-Batch-Size:1000, Max-Workspace-Size:1073741824

For comparison, building this network finished very quickly:
TF:2.4.1, TRT:7.2.2-Precision:FP16, Calibration:0, Max-Batch-Size:1000, Max-Workspace-Size:1073741824

Can someone give me some insight into what’s going on?

Thanks!

Environment

TensorRT Version: 7.2.2
GPU Type: GeForce 2070
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version: 11.0
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): Python 2.7.12 and Python 3.7.10
TensorFlow Version (if applicable): 2.4.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files


Steps To Reproduce


Hi, please refer to the link below to perform inference in INT8:
https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleINT8/README.md
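
In short, the calibration flow in that sample looks roughly like the sketch below; the class and variable names here are simplified placeholders, so please see the sample code for the full implementation. The calibrator is attached to the builder before the engine is built, and TensorRT repeatedly calls getBatch() during the build to compute the INT8 scales:

    #include <NvInfer.h>
    #include <cuda_runtime_api.h>

    // Minimal calibrator skeleton (simplified from the sample): TensorRT calls
    // getBatch() repeatedly during the engine build to collect INT8 statistics.
    class SimpleEntropyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
    {
    public:
        SimpleEntropyCalibrator(int batchSize, int nbBatches, size_t bytesPerBatch)
            : mBatchSize(batchSize), mNbBatches(nbBatches)
        {
            cudaMalloc(&mDeviceInput, bytesPerBatch);
        }
        ~SimpleEntropyCalibrator() override { cudaFree(mDeviceInput); }

        int getBatchSize() const override { return mBatchSize; }

        bool getBatch(void* bindings[], const char* names[], int nbBindings) override
        {
            if (mCurrentBatch >= mNbBatches)
                return false;                 // no more calibration data
            // copy the next host batch into mDeviceInput here (omitted)
            bindings[0] = mDeviceInput;
            ++mCurrentBatch;
            return true;
        }

        // Returning nullptr forces calibration to run; the sample caches the result
        // so that later builds can skip calibration entirely.
        const void* readCalibrationCache(size_t& length) override { length = 0; return nullptr; }
        void writeCalibrationCache(const void* cache, size_t length) override {}

    private:
        int mBatchSize;
        int mNbBatches;
        int mCurrentBatch{0};
        void* mDeviceInput{nullptr};
    };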

Thanks!