Unable to run PeopleNet - Engine file generation failure

DeepStream Version - 7.0
Docker Image - nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
GPU - NVIDIA A100-SXM4-40GB
NVIDIA GPU Driver - 535.183.01

I downloaded the PeopleNet model files from PeopleNet | NVIDIA NGC and tried to run deepstream_python_apps/apps/deepstream-test3/deepstream_test_3.py. This is the error I'm getting:

gstnvtracker: Loading low-level lib at libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Loading TRT Engine for tracker ReID...
[NvMultiObjectTracker] Loading Complete!
ERROR: [TRT]: 3: [runtime.cpp::~Runtime::346] Error Code 3: API Usage Error (Parameter check failed at: runtime/rt/runtime.cpp::~Runtime::346, condition: mEngineCounter.use_count() == 1. Destroying a runtime before destroying deserialized engines created by the runtime leads to undefined behavior.
)
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1494 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_tao_apps/models/ansh/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine open error
0:00:08.263997533 1126204 0x55f8a9b70b10 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_tao_apps/models/ansh/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine failed
0:00:08.764024544 1126204 0x55f8a9b70b10 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_tao_apps/models/ansh/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine failed, try rebuild
0:00:08.770368322 1126204 0x55f8a9b70b10 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: 4: [standardEngineBuilder.cpp::initCalibrationParams::1714] Error Code 4: Internal Error (Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1129 Build engine failed from config file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:821 failed to build trt engine.
0:00:22.651741760 1126204 0x55f8a9b70b10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
0:00:23.177399481 1126204 0x55f8a9b70b10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2215> [UID = 1]: build backend context failed
0:00:23.180674866 1126204 0x55f8a9b70b10 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1352> [UID = 1]: generate backend failed, check config file settings
0:00:23.180705666 1126204 0x55f8a9b70b10 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:23.180722788 1126204 0x55f8a9b70b10 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary-inference> error: Config file path: nvic_ansh.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized

**PERF:  {'stream0': 0.0}


**PERF:  {'stream0': 0.0}

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(912): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: nvic_ansh.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

sys:1: Warning: g_object_get_is_valid_property: object class 'GstUDPSrc' has no property named 'pt'

My configuration file contents are:

[property]

gpu-id=0
net-scale-factor=0.0039215697906911373
offsets=0.0;0.0;0.0

onnx-file=resnet34_peoplenet.onnx
model-engine-file=resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=resnet34_peoplenet_int8.txt

infer-dims=3;544;960
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1 # 0=FP32, 1=INT8, 2=FP16
cluster-mode=3 # 0=GroupRectangles, 1=DBSCAN, 2=NMS, 3=Hybrid, 4=None
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd:0;output_cov/Sigmoid:0
num-detected-classes=3
filter-out-class-ids=1;2

[class-attrs-all]

pre-cluster-threshold=0.3
post-cluster-threshold=0.6
eps=0.6
group-threshold=1
minBoxes=1
dbscan-min-score=0.6
nms-iou-threshold=0.3
topk=12

This is because the PeopleNet INT8 calibration file you are using does not match the TensorRT version shipped with your DeepStream release. For DS-7.0, please refer to /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README.md

Or use the following commands to download v2.3.3:

wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet/pruned_quantized_decrypted_v2.3.3/files?redirect=true&path=resnet34_peoplenet_int8.onnx' -O resnet34_peoplenet_int8.onnx
wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet/pruned_quantized_decrypted_v2.3.3/files?redirect=true&path=resnet34_peoplenet_int8.txt' -O resnet34_peoplenet_int8.txt
wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet/pruned_quantized_decrypted_v2.3.3/files?redirect=true&path=labels.txt' -O labels.txt
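Note that the v2.3.3 ONNX filename (resnet34_peoplenet_int8.onnx) differs from the one in the config above (resnet34_peoplenet.onnx), so the [property] section also needs to point at the new files. A sketch of the affected keys, assuming the downloaded files sit next to the config file:

```
onnx-file=resnet34_peoplenet_int8.onnx
int8-calib-file=resnet34_peoplenet_int8.txt
labelfile-path=labels.txt
model-engine-file=resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
```

If an engine file with the old name was already generated, delete it so nvinfer rebuilds from the new ONNX and calibration cache.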

Thank you very much, these files are working!
For future reference, how can I check the TensorRT version in my DeepStream installation and match it with the models downloaded from the NGC Catalog?

deepstream-app --version-all

The calibration cache itself indicates which TensorRT version it was generated for; its first line contains a tag like:

TRT-100300-EntropyCalibration2
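For example, a quick way to read that tag (a sketch; a stand-in cache file is created here so the snippet is self-contained — in practice point head at your downloaded resnet34_peoplenet_int8.txt):

```shell
# Create a stand-in calibration cache for illustration.
cat > /tmp/sample_calib.txt <<'EOF'
TRT-100300-EntropyCalibration2
EOF

# The tag on the first line encodes the TensorRT version the scales were
# generated with (here 100300, which corresponds to TensorRT 10.3).
head -n 1 /tmp/sample_calib.txt
```

Compare that tag against the TensorRT version reported by deepstream-app --version-all; if they disagree, download the model version published for your DeepStream release.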