ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

I have generated a resnet18_detectnet.etlt model, trained to detect a single object, using the TLT 2.0 container. I have performed inference on that model inside the TLT container, and I am able to load the model and see the detections.

I would like to know: when I provide an .etlt file and tlt-model-key=nvidia_tlt, do I need to configure anything else?
What does the error "Magic tag does not match" mean? I have tried both tlt-model-key=nvidia_tlt and tlt-model-key=tlt_encode, but got the same error both times.

When I try to perform inference in DeepStream, I am getting serialization errors.
source config:

[primary-gie]
enable=1
gpu-id=0
#Modify as necessary
#gpu engine file
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=0;1;0;1
bbox-border-color1=1;0;0;1
#bbox-border-color2=0;0;1;1 # Blue
#bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_barnet_gpu.txt

infer_config:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#tlt-model-key=tlt_encode
tlt-model-key=nvidia_tlt
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt
labelfile-path=labels_masknet.txt
#gpu engine file
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt
#dla engine file
#model-engine-file=/home/nvidia/detectnet_v2_models/detectnet_4K-fddb-12/resnet18_RGB960_detector_fddb_12_int8.etlt_b1_dla$
input-dims=3;1072;1920;0
uff-input-blob-name=input_1
batch-size=2
model-color-format=0
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
#int8-calib-file=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/calibration.bin
num-detected-classes=2
cluster-mode=1
interval=0
gie-unique-id=1
is-classifier=0
classifier-threshold=0.3
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.3
group-threshold=1
eps=0.5
#minBoxes=1
detected-min-w=0
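For reference, the net-scale-factor=0.0039215697906911373 in the config above is approximately 1/255: nvinfer scales each input pixel as y = net-scale-factor * (x - offset) before feeding the network, which maps 8-bit pixel values into [0, 1]. A minimal sketch in plain Python (function and variable names are illustrative, not part of the DeepStream API):

```python
# Sketch of nvinfer's per-pixel preprocessing:
#   y = net_scale_factor * (x - offset)
# With net-scale-factor ~ 1/255 and no offset, 8-bit pixels map to [0, 1].
NET_SCALE_FACTOR = 0.0039215697906911373  # value from the config above, ~1/255

def preprocess_pixel(x, offset=0.0, scale=NET_SCALE_FACTOR):
    """Scale one 8-bit pixel value the way nvinfer does."""
    return scale * (x - offset)

print(preprocess_pixel(0))    # -> 0.0
print(preprocess_pixel(255))  # -> ~1.0
```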

The error log:

root@4e833ed765ea:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app-trtis# deepstream-app -c source1_video_barnet_gpu.txt
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: …/rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt
0:00:00.412217865 166 0x55e4c0a95b30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt failed
0:00:00.412243254 166 0x55e4c0a95b30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt failed, try rebuild
0:00:00.412251167 166 0x55e4c0a95b30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.547224818 166 0x55e4c0a95b30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

• Hardware Platform (GPU) GTX1060
• DeepStream Version 5.0.1
• TensorRT Version 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only) 440.10
• How to reproduce the issue? Detectnet_v2 model, with the configuration files and command line shown above.

I would like to know: when I provide an .etlt file and tlt-model-key=nvidia_tlt, do I need to configure anything else?

Yes, other options are also needed. Please refer to the TLT config files under /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models.

Magic tag does not match

This indicates that the .etlt file or the tlt-model-key is not correct. tlt-model-key must be the same key that was used to generate the .etlt file during export.


@sk.ahmed401
Please refer to the demo config files mentioned by mchi.
For example, an .etlt file is not supposed to be set as the “model-engine-file”; that property expects a serialized TensorRT engine, which is why deserialization fails with the "Magic tag does not match" error.
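A minimal sketch of the corrected [property] entries, keeping the paths from the original post (the commented-out engine filename is an assumption: DeepStream typically writes the built engine next to the model on first run):

```
[property]
# The encoded TLT model goes here, together with the key used at export time:
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt
tlt-model-key=nvidia_tlt
# model-engine-file must point to a serialized TensorRT engine, NOT to the
# .etlt file. It can be left unset; nvinfer then builds the engine from the
# .etlt on first run. (Engine filename below is an assumption.)
#model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt_b2_gpu0_fp32.engine
```

The same applies to the [primary-gie] section of the source config: its model-engine-file should reference the built .engine file, not the .etlt.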