Why do my .etlt and engine files fail engine deserialization?

I trained a model with the TAO Toolkit and saved the .etlt, engine, and calibration cache files for inference, but when I use them in the Python module (taken from deepstream-python-apps) it fails.

• Hardware Platform: dGPU (RTX 3060)
• DeepStream 6.1
• TensorRT 8.4
• NVIDIA GPU Driver Version: 520.61.05

Please help me out. I added the tlt-encoded-model parameter, but I got an error, which you can see below the config file.

My dstest1_pgie_config.txt:

[property]
gpu-id=0

#net-scale-factor=0.0039215697906911373
#model-file=../Primary_Detector/resnet10.caffemodel
#proto-file=../Primary_Detector/resnet10.prototxt
#model-engine-file=..//Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../Primary_Detector/labels.txt
#int8-calib-file=../Primary_Detector/cal_trt.bin


tlt-model-key=NGpmbHN0ZTNrZHFkOGRxNnFsbW9rbXNxbnU6Yzc5NWM5MjQtZDE1YS00NTYxLTg3YzgtNTU2MWVhNDg1M2M3
tlt-encoded-model=../Primary_Detector/export_retrain/yolov4_resnet18_epoch_080.etlt
model-engine-file=../Primary_Detector/export_retrain/trt.engine
labelfile-path=../Primary_Detector/export_retrain/labels.txt
int8-calib-file=../Primary_Detector/export_retrain/cal.bin

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;384;1248
force-implicit-batch-dim=1
batch-size=1
network-mode=0
model-color-format=1
num-detected-classes=6
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
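For reference, the output-blob-names above (conv2d_bbox;conv2d_cov/Sigmoid) come from the resnet10 Caffe sample, while an engine built from a TAO YOLOv4 .etlt exposes BatchedNMS bindings instead, so a TAO YOLOv4 config usually points at the TAO custom box parser. The following is only a sketch: the function name and library path are assumptions based on the deepstream_tao_apps repository, not values verified against this setup.

```
# Sketch only: output names for a TAO YOLOv4 engine with a BatchedNMS head,
# plus the TAO bounding-box parser from deepstream_tao_apps. The library path
# is an assumption; point it at wherever libnvds_infercustomparser_tao.so
# was built on your machine.
output-blob-names=BatchedNMS;BatchedNMS_1;BatchedNMS_2;BatchedNMS_3
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so
```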

Using the tlt-encoded-model parameter instead of model-file fixed the deserialization, but then I got a different error:

/Desktop/farid/O/Object-Detection-Deepstream$ sudo python3 deepstream_test_1.py  /opt/nvidia/deepstream/deepstream-6.1/samples/streams/camera-1_video_26.mp4 
Creating Pipeline 
 
Creating Source 
 
creating caps_filter 

Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/camera-1_video_26.mp4 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

0:00:00.159538914 504171      0x37e3120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:00.159641791 504171      0x37e3120 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
(the previous warning is repeated 20 more times)
WARNING: [TRT]: builtin_op_importers.cpp:4716: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:02:05.141006352 504171      0x37e3120 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /home/experts-vision/Desktop/farid/O/Primary_Detector/export_retrain/yolov4_resnet18_epoch_080.etlt_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x384x1248      
1   OUTPUT kINT32 BatchedNMS      1               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_bbox
0:02:05.145256102 504171      0x37e3120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'conv2d_bbox' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: conv2d_cov/Sigmoid
0:02:05.145273353 504171      0x37e3120 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'conv2d_cov/Sigmoid' in engine
0:02:05.206911633 504171      0x37e3120 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-stream-error-quark: Failed to parse stream (7): gstbaseparse.c(2998): gst_base_parse_check_sync (): /GstPipeline:pipeline0/GstH264Parse:h264-parser

I also removed the capsfilter plugin, but I still got the same error.
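A likely cause of the gst-stream-error is that deepstream_test_1.py feeds the file straight into h264parse, which expects a raw Annex-B H.264 elementary stream, while camera-1_video_26.mp4 is an MP4 container (a container needs a demuxer such as qtdemux, or a uridecodebin-based pipeline as in deepstream_test_3.py). A quick header check can tell the two apart; this is only an illustrative sketch, not part of the DeepStream sample:

```python
def looks_like_mp4(header: bytes) -> bool:
    """MP4/QuickTime containers start with a box whose 4-byte type at
    offset 4 is 'ftyp'; h264parse cannot consume this directly."""
    return len(header) >= 8 and header[4:8] == b"ftyp"


def looks_like_annexb_h264(header: bytes) -> bool:
    """A raw H.264 elementary stream begins with an Annex-B start code
    (00 00 00 01 or 00 00 01), which is what h264parse expects."""
    return header.startswith(b"\x00\x00\x00\x01") or header.startswith(b"\x00\x00\x01")


# Usage idea: read the first bytes of the input file before building the
# pipeline, and pick a demuxing pipeline when looks_like_mp4() is True.
# with open("camera-1_video_26.mp4", "rb") as f:
#     header = f.read(16)
```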

Here is my config generated with --gen_ds_config:

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;384;1248
tlt-model-key=NGpmbHN0ZTNrZHFkOGRxNnFsbW9rbXNxbnU6Yzc5NWM5MjQtZDE1YS00NTYxLTg3YzgtNTU2MWVhNDg1M2M3
network-type=0
num-detected-classes=6
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0

Let’s close this one and keep only "How to use a TAO Toolkit trained model in deepstream-python-app?" open for discussion.
As the team is on CNY holiday, we can only check on it next week. Thank you.
