Unable to use custom-trained peoplenet model

Please provide the following information when requesting support.

• Hardware: Nano
• Network Type: Detectnet_v2
• Deepstream version: Deepstream 5.1

Hello

We are trying to use our newly custom-trained PeopleNet model with the deepstream-occupancy-analytics app. When we point the configuration file at the resnet34_detector.trt.int8 engine and start the app, we get the following error:

(deepstream-test5-analytics:20807): GLib-CRITICAL **: 17:03:47.922: g_strchug: assertion ‘string != NULL’ failed

(deepstream-test5-analytics:20807): GLib-CRITICAL **: 17:03:47.922: g_strchomp: assertion ‘string != NULL’ failed
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Unknown or legacy key specified ‘gie-unique-output-blob-names’ for group [property]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: [TRT]: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt.int8
0:00:20.206519313 20807 0x558cd596f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt.int8 failed
0:00:20.206614472 20807 0x558cd596f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt.int8 failed, try rebuild
0:00:20.206652389 20807 0x558cd596f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
ERROR: No output layers specified. Need atleast one output layer
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:20.207064951 20807 0x558cd596f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Below are the test5 and model configuration files:
config_infer_primary_peoplenet.txt (2.5 KB)
test5_config_file_src_infer.txt (4.0 KB)

Your help would be much appreciated.

Please comment out the line below and retry.

model-engine-file=Age_and_Gender/experiment_dir_final/resnet34_detector.trt.int8
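In the app config that would look like the following (a sketch; the group name is the standard test5 `[primary-gie]` one, and with the line commented out nvinfer falls back to rebuilding the engine from the model files on the next run):

```ini
[primary-gie]
# Comment out the stale serialized engine so nvinfer rebuilds it
# from the model files instead of failing to deserialize it:
#model-engine-file=Age_and_Gender/experiment_dir_final/resnet34_detector.trt.int8
```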

Update:

We changed the DeepStream version to DeepStream 6.0, along with the following Nano environment:
Package: nvidia-jetpack
Version: 4.6-b197
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.6-b197), nvidia-opencv (= 4.6-b197), nvidia-cudnn8 (= 4.6-b197), nvidia-tensorrt (= 4.6-b197), nvidia-visionworks (= 4.6-b197), nvidia-container (= 4.6-b197), nvidia-vpi (= 4.6-b197), nvidia-l4t-jetson-multimedia-api (>> 32.6-0), nvidia-l4t-jetson-multimedia-api (<< 32.7-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.6-b197_arm64.deb
Size: 29356
SHA256: 104cd0c1efefe5865753ec9b0b148a534ffdcc9bae525637c7532b309ed44aa0
SHA1: 8cca8b9ebb21feafbbd20c2984bd9b329a202624
MD5sum: 463d4303429f163b97207827965e8fe0
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Below are the test5_config_file_src_infer_tlt.txt file (with the model-engine-file line commented out) and the config_infer_primary_peoplenet.txt:
config_infer_primary_peoplenet.txt (2.7 KB)
test5_config_file_src_infer.txt (4.4 KB)

We get the following error (with the model-engine-file line both commented out, as recommended, and uncommented):
(deepstream-test5-analytics:27608): GLib-CRITICAL **: 17:15:20.248: g_strchug: assertion ‘string != NULL’ failed

(deepstream-test5-analytics:27608): GLib-CRITICAL **: 17:15:20.248: g_strchomp: assertion ‘string != NULL’ failed
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Unknown or legacy key specified ‘gie-unique-output-blob-names’ for group [property]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
ERROR: [TRT]: 6: The engine plan file is generated on an incompatible device, expecting compute 5.3 got compute 8.0, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::76] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt
0:00:01.951993209 27608 0x55a7e8a8c0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt failed
0:00:01.952137431 27608 0x55a7e8a8c0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt failed, try rebuild
0:00:01.952172483 27608 0x55a7e8a8c0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: No output layers specified. Need atleast one output layer
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.953026407 27608 0x55a7e8a8c0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
terminate called after throwing an instance of ‘nvinfer1::InternalError’
what(): Assertion mRefCount > 0 failed.
Aborted (core dumped)

It’s not quite clear to me what exactly the issue is. Could you please help?

Thanks @Morganh

Your setting for the output blob names is not correct.

Refer to the file below inside /opt/nvidia/deepstream/deepstream-6.0/:

$ cat config_infer_primary_peoplenet.txt

output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
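For reference, a minimal `[property]` sketch with those blob names in context (hedged: the paths are from this thread, the key is a placeholder, and the other values are the usual DetectNet_v2 settings):

```ini
[property]
tlt-encoded-model=Age_and_Gender/experiment_dir_final/resnet34_detector.etlt
tlt-model-key=<key used at training time>
infer-dims=3;384;1248
# DetectNet_v2 has two output heads: bbox regression and coverage
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
```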

Hey @Morganh

We continue to work on the app; the updates are as follows:
-When we switched to DeepStream 6 we hit a TRT device memory allocation problem while generating the on-device TRT engine, which was the cause of our previous issue (we were using the engine generated in the DetectNet_v2 TAO notebook).
-We then generated the TRT engine on the Jetson Nano itself, with JetPack 4.5.1 and DeepStream 5.1 instead, using this command:

./tao-converter /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.etlt \
    -k *** \
    -c /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/calibration.bin \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,384,1248 \
    -i nchw \
    -m 1 \
    -t int8 \
    -e /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt \
    -b 4
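Note that `-m 1` caps the engine's maximum batch size at 1, so the DeepStream configs should not request a larger batch, or nvinfer will discard the engine and try to rebuild (this is what the "Backend has maxBatchSize 1 whereas 2 has been requested" warning later in the log indicates). A sketch of the relevant key, assuming the standard nvinfer config layout:

```ini
[property]
# Must not exceed the engine's max batch size (-m passed to tao-converter)
batch-size=1
```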

-We now have a new TRT engine that we load into deepstream-occupancy-analytics. When we run it we get this issue:

Command:
./deepstream-test5-analytics -c config/test5_config_file_src_infer_tlt.txt

Message
(deepstream-test5-analytics:21785): GLib-CRITICAL **: 18:10:20.503: g_strchug: assertion 'string != NULL' failed

(deepstream-test5-analytics:21785): GLib-CRITICAL **: 18:10:20.503: g_strchomp: assertion 'string != NULL' failed
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:14.925587603 21785   0x558ab2a2f0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x384x1248
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x24x78
2   OUTPUT kFLOAT output_cov/Sigmoid 3x24x78

0:00:14.925771200 21785   0x558ab2a2f0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1643> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:14.925803805 21785   0x558ab2a2f0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1814> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-occupancy-analytics-AG/config/Age_and_Gender/experiment_dir_final/resnet34_detector.trt failed to match config params, trying rebuild
0:00:14.935051306 21785   0x558ab2a2f0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:15.601873211 21785   0x558ab2a2f0 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

Is it clear to you why the parsing failed? We would appreciate the help.

Thanks

Issue solved. Inside the configuration file config_infer_primary_peoplenet.txt, in the [property] group, the parameter tlt-model-key must be equal to the key (-k) used while training the model.
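For example (a sketch; the actual value is whatever was passed as `-k` during training/export, which is redacted in this thread):

```ini
[property]
# Must match the -k key used to train/export the .etlt model; with a
# wrong key the encoded model cannot be decoded, and parsing fails with
# errors like "UffParser: Unsupported number of graph 0" above.
tlt-model-key=<your training key>
```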

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.