Build engine file failed: free(): double free detected in tcache 2 in deepstream sample

Please provide complete information as applicable to your setup.

• DeepStream Version=6.0
• JetPack Version (valid for Jetson only)=4.6
• TensorRT Version=8.0.1

Hi,
I followed this example: this
When I run this command: ./deepstream-lpr-app 1 2 0 us_car_test2.mp4 us_car_test2.mp4 output.264
I get this error:

Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Now playing: 1
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:15.562619630 16993   0x556bac4360 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 3]: deserialized trt engine from :/home/nano/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 16x3x48x96      Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:15.562921505 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:15.562976453 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1868> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:15.563004682 16993   0x556bac4360 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine2> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 3]: Use deserialized engine model: /home/nano/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
0:00:15.615600828 16993   0x556bac4360 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine2> [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
ERROR: Deserialize engine failed because file path: /home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPD/usa_pruned.etlt_b16_gpu0_int8.engine open error
0:00:15.616821140 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 2]: deserialize engine from file :/home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPD/usa_pruned.etlt_b16_gpu0_int8.engine failed
0:00:15.616870255 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 2]: deserialize backend context from engine from file :/home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/LP/LPD/usa_pruned.etlt_b16_gpu0_int8.engine failed, try rebuild
0:00:15.616902807 16993   0x556bac4360 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 2]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: INT8 not supported by platform. Trying FP16 mode.
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
0:04:17.888673444 16993   0x556bac4360 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary-infer-engine1> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 2]: serialize cuda engine to file: /home/nano/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40         

0:04:18.087185475 16993   0x556bac4360 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary-infer-engine1> [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:04:18.463979693 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
ERROR: Deserialize engine failed because file path: /home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine open error
0:04:18.466567506 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed
0:04:18.466623808 16993   0x556bac4360 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine failed, try rebuild
0:04:18.466660579 16993   0x556bac4360 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /home/nano/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:04:18.468793756 16993   0x556bac4360 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
Aborted (core dumped)

How do I solve this problem? Thank you.
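For what it's worth, the failure starts at "NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file ... resnet18_trafficcamnet_pruned.etlt", so it looks like the TrafficCamNet .etlt file may simply be missing on disk. A quick check I would run (paths resolved from the error messages above; the download script name is an assumption based on the deepstream_lpr_app README, adjust if it differs in your copy):

# confirm the TLT-encoded models referenced in the configs actually exist
ls -lh /home/nano/deepstream_lpr_app/models/tao_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt
ls -lh /home/nano/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt

# if either file is missing, re-run the model download step from the repo root
# (script name assumed from the repo README)
cd /home/nano/deepstream_lpr_app && ./download_US.sh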

Can you find this model?

