TensorRT 6.0.1 - trtexec: Users must provide dynamic range for all tensors that are not Int32

Hi all, how can I provide a dynamic range for tensors that are not Int32 when optimizing a ResNet-50 model as follows?

./trtexec --uff=resnet50_v1.uff --output=ArgMax --uffInput=input_tensor,3,224,224 --iterations=40 --int8 --batch=8 --device=0 --avgRuns=100

See the trace log below:

&&&& RUNNING TensorRT.trtexec # ./trtexec --uff=resnet50_v1.uff --output=ArgMax --uffInput=input_tensor,3,224,224 --iterations=40 --int8 --batch=8 --device=0 --avgRuns=100
[08/30/2019-20:35:17] [I] === Model Options ===
[08/30/2019-20:35:17] [I] Format: UFF
[08/30/2019-20:35:17] [I] Model: resnet50_v1.uff
[08/30/2019-20:35:17] [I] Uff Inputs Layout: NCHW
[08/30/2019-20:35:17] [I] Input: input_tensor,3,224,224
[08/30/2019-20:35:17] [I] Output: ArgMax
[08/30/2019-20:35:17] [I] === Build Options ===
[08/30/2019-20:35:17] [I] Max batch: 8
[08/30/2019-20:35:17] [I] Workspace: 16 MB
[08/30/2019-20:35:17] [I] minTiming: 1
[08/30/2019-20:35:17] [I] avgTiming: 8
[08/30/2019-20:35:17] [I] Precision: INT8
[08/30/2019-20:35:17] [I] Calibration: Dynamic
[08/30/2019-20:35:17] [I] Safe mode: Disabled
[08/30/2019-20:35:17] [I] Save engine:
[08/30/2019-20:35:17] [I] Load engine:
[08/30/2019-20:35:17] [I] Inputs format: fp32:CHW
[08/30/2019-20:35:17] [I] Outputs format: fp32:CHW
[08/30/2019-20:35:17] [I] Input build shapes: model
[08/30/2019-20:35:17] [I] === System Options ===
[08/30/2019-20:35:17] [I] Device: 0
[08/30/2019-20:35:17] [I] DLACore:
[08/30/2019-20:35:17] [I] Plugins:
[08/30/2019-20:35:17] [I] === Inference Options ===
[08/30/2019-20:35:17] [I] Batch: 8
[08/30/2019-20:35:17] [I] Iterations: 40 (200 ms warm up)
[08/30/2019-20:35:17] [I] Duration: 10s
[08/30/2019-20:35:17] [I] Sleep time: 0ms
[08/30/2019-20:35:17] [I] Streams: 1
[08/30/2019-20:35:17] [I] Spin-wait: Disabled
[08/30/2019-20:35:17] [I] Multithreading: Enabled
[08/30/2019-20:35:17] [I] CUDA Graph: Disabled
[08/30/2019-20:35:17] [I] Skip inference: Disabled
[08/30/2019-20:35:17] [I] Input inference shapes: model
[08/30/2019-20:35:17] [I] === Reporting Options ===
[08/30/2019-20:35:17] [I] Verbose: Disabled
[08/30/2019-20:35:17] [I] Averages: 100 inferences
[08/30/2019-20:35:17] [I] Percentile: 99
[08/30/2019-20:35:17] [I] Dump output: Disabled
[08/30/2019-20:35:17] [I] Profile: Disabled
[08/30/2019-20:35:17] [I] Export timing to JSON file:
[08/30/2019-20:35:17] [I] Export profile to JSON file:
[08/30/2019-20:35:17] [I]
[08/30/2019-20:35:19] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[08/30/2019-20:35:19] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[08/30/2019-20:35:19] [W] [TRT] Calibrator is not being used. Users must provide dynamic range for all tensors that are not Int32.
trtexec: ../builder/cudnnBuilder2.cpp:1743: virtual std::vector<nvinfer1::query::RequirementsCombination> nvinfer1::builder::EngineTacticSupply::getSupportedFormats(const nvinfer1::builder::Node&): Assertion `!formats.empty()' failed.

“Calibrator is not being used. Users must provide dynamic range for all tensors that are not Int32.” is a warning telling you that trtexec is building in INT8 precision without a calibrator. In that case trtexec picks the INT8 dynamic ranges itself (they are chosen randomly), and supplying your own dynamic ranges through the trtexec command line is currently not supported.
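
If you build the engine through the TensorRT API instead of trtexec, you can set per-tensor dynamic ranges yourself (or attach an IInt8Calibrator). Below is a minimal Python sketch, not a drop-in solution: it assumes the same UFF file, input and output names as in your command, and uses a fixed [-1, 1] range as a placeholder where real values should come from calibration statistics.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
DYNAMIC_RANGE = 1.0  # placeholder; real ranges should come from calibration data

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("input_tensor", (3, 224, 224))
    parser.register_output("ArgMax")
    parser.parse("resnet50_v1.uff", network)

    builder.max_batch_size = 8
    builder.int8_mode = True  # build in INT8 precision

    # Without a calibrator, every tensor needs an explicit dynamic range.
    for i in range(network.num_inputs):
        network.get_input(i).set_dynamic_range(-DYNAMIC_RANGE, DYNAMIC_RANGE)
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        for j in range(layer.num_outputs):
            layer.get_output(j).set_dynamic_range(-DYNAMIC_RANGE, DYNAMIC_RANGE)

    engine = builder.build_cuda_engine(network)

Alternatively, if you already have an INT8 calibration cache and your trtexec build supports it, you can pass the cache with the --calib=<file> option so that real calibration data is used instead of random ranges.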

In case of further queries, could you provide the following information so we can help you better?

Please provide details on the platform you are using:

  • Linux distro and version
  • GPU type
  • Nvidia driver version
  • CUDA version
  • CUDNN version
  • Python version [if using python]
  • Tensorflow version
  • TensorRT version
  • If Jetson, OS and hardware versions