Peoplenet Resnet18 Pruned Model Warnings

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) dGPU
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 440.118
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I get the following warnings when I run the PeopleNet ResNet18 pruned model downloaded from NGC. My nvinfer configuration is:

[property]
gpu-id=0

# preprocessing parameters:

net-scale-factor=0.0039215697906911373
batch-size=1
tlt-model-key=tlt_encode
tlt-encoded-model=resnet18_peoplenet_pruned.etlt
labelfile-path=labels.txt
int8-calib-file=resnet18_peoplenet_int8.txt
num-detected-classes=3
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
force-implicit-batch-dim=1
network-mode=1
process-mode=1
interval=0
gie-unique-id=1
filter-out-class-ids=0;1

[class-attrs-all]
pre-cluster-threshold=0.5

# Set eps and minBoxes for cluster-mode=1 (DBSCAN)

eps=0.7
minBoxes=1
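For reference, the net-scale-factor in the config above is just 1/255 stored at float32 precision; nvinfer computes net-scale-factor * (pixel - offset), so 0..255 pixel values are mapped into [0, 1]. A quick check:

```python
# net-scale-factor from the config above: float32(1/255).
# nvinfer applies net-scale-factor * (pixel - offset), so with no
# offsets a 0..255 pixel range is scaled into [0, 1].
NET_SCALE_FACTOR = 0.0039215697906911373

print(1.0 / 255.0)                           # exact double value of 1/255
print(abs(NET_SCALE_FACTOR - 1.0 / 255.0))   # tiny float32 rounding gap
print(255 * NET_SCALE_FACTOR)                # ~1.0: a white pixel becomes ~1.0
```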

[NvDCF] Initialized
0:00:01.667239236 130 0x564f143b0c30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_bbox/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_cov/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_cov/Sigmoid, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.

Moving this topic into TLT forum.

That means the tensors output_bbox/BiasAdd and output_cov/Sigmoid do not have a dynamic range in the calibration file. You can ignore the warnings.
You can also refer to the DLA calibration file, available via: wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v2.0/files/resnet34_peoplenet_int8_dla.txt

I tried using the same calib file (both the DLA and non-DLA versions) and get the same warnings with both. Can you check and provide the correct calib file?

Actually, you can ignore the warnings. Alternatively, you can add dummy ranges to the calibration file.
For example, adding the lines below at the end of your resnet18_peoplenet_int8.txt will make the warnings disappear.

output_bbox/convolution: 3db89d98
output_bbox/BiasAdd: 3dc89aa5
output_cov/convolution: 3f47359c
output_cov/BiasAdd: 3f48cc3b
output_cov/Sigmoid: 3c010444
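For anyone curious what those hex values mean: each appears to be the big-endian IEEE-754 float32 encoding of the tensor's INT8 scale. A quick decode sketch:

```python
# Sketch: decode the hex entries above. Each value appears to be the
# big-endian IEEE-754 float32 encoding of the tensor's INT8 scale
# under TensorRT's symmetric quantization scheme.
import struct

def hex_to_scale(h):
    return struct.unpack("!f", bytes.fromhex(h))[0]

for name, h in [("output_bbox/BiasAdd", "3dc89aa5"),
                ("output_cov/Sigmoid", "3c010444")]:
    print(f"{name}: scale = {hex_to_scale(h):.6g}")
```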
