[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

I used TensorRT 6.0 on CentOS 7.
Detailed error information is as follows:
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
cluster_path: features_clustered_010.npy
data_type: 32
dataroot: ./datasets/cityscapes/
display_winsize: 512
engine: ./test.onnx
export_onnx: None
feat_num: 3
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 50
input_nc: 3
instance_feat: False
isTrain: False
label_feat: False
label_nc: 0
loadSize: 1024
load_features: False
local_rank: 0
max_dataset_size: inf
model: pix2pixHD
nThreads: 2
n_blocks_global: 9
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 4
n_local_enhancers: 1
name: label2city_1024p
nef: 16
netG: global
ngf: 64
niter_fix_global: 0
no_flip: False
no_instance: True
norm: instance
ntest: inf
onnx: None
output_nc: 3
phase: test
resize_or_crop: scale_width
results_dir: ./results/
serial_batches: False
tf_log: False
use_dropout: False
use_encoded_image: False
verbose: False
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [AlignedDataset] was created
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "test.py", line 55, in <module>
    generated = run_trt_engine(opt.engine, minibatch, [data['label'], data['inst']])
  File "/data/libo/pix2pixHD/run_engine.py", line 142, in run_trt_engine
    time_inference(engine, bs, it)
  File "/data/libo/pix2pixHD/run_engine.py", line 112, in time_inference
    for io in get_input_output_names(engine):
  File "/data/libo/pix2pixHD/run_engine.py", line 67, in get_input_output_names
    nbindings = trt_engine.get_nb_bindings();
AttributeError: 'NoneType' object has no attribute 'get_nb_bindings'

Hi,

Was this engine (opt.engine) also created using TensorRT 6.0? If the engine was created and run on different TensorRT versions, this can happen. TensorRT engines are not compatible across different TensorRT versions.
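As a quick sanity check, you can print the TensorRT version in both the environment that built the engine and the one that loads it, and fail fast when deserialization returns None. A minimal sketch (the engine path is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
print("TensorRT version:", trt.__version__)  # compare in both environments

# deserialize_cuda_engine returns None when the file does not match this
# TensorRT build, which later surfaces as the 'NoneType' AttributeError
# seen in the traceback above.
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
if engine is None:
    raise RuntimeError("Engine deserialization failed; rebuild the engine "
                       "with this TensorRT version")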

Thanks,
NVIDIA Enterprise Support

I have the same problem. I compiled an .onnx model into a .trt engine and ran it with the same version of TensorRT (6.0), but the problem still appeared.

I created the TRT engines with TensorRT 6.0 and am also using TensorRT 6.0 for inference, yet I am getting the same error, i.e.:

[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.

What could be a possible reason for this?

Same here; I'm facing the same issue after updating my software recently. Did you update as well? My TensorRT is currently on version 7.1.3.0.

My problem was due to two issues:

  1. Wrong input sizes being fed to the model.
  2. Loading multiple runtimes into memory at once. You should load the runtime only once and deserialize all of your models in that single runtime, instead of creating a different runtime for every model (see the sketch below).

I was able to resolve the error by fixing both.
Maybe you can try these!
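For the second point, a minimal sketch, assuming pre-built engine files (the file names here are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Create the runtime once and reuse it for every model, instead of
# constructing a new trt.Runtime per engine.
runtime = trt.Runtime(TRT_LOGGER)

def load_engine(path):
    # Deserialize a pre-built engine file with the shared runtime.
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

engine_a = load_engine("model_a.trt")  # placeholder engine files
engine_b = load_engine("model_b.trt")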

I'm having the same error message, but in a slightly different context:
I'm trying to launch a DeepStream pipeline on AWS G4 instances with TensorRT engines that were serialized in exactly the same AMI and exactly the same Docker container (which I assume means I have the same versions of DeepStream, TensorRT, CUDA, etc.). I have checked that the input sizes (including the batch size) match the ONNX files from which I built the engines, and that there is only one Runtime in memory at a time.
If anybody knows why this may be happening, I'd really appreciate any help! Thanks!

I am also having the same problem. Does anyone know if we can upgrade an engine file to the current version of whatever TensorRT the system is running?

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
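For reference, a typical invocation looks something like this (the model file name is a placeholder):

trtexec --onnx=your_model.onnx --verbose

Here --onnx points trtexec at the ONNX file to parse and build, and --verbose enables the detailed log mentioned above.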
Thanks!

I solved this problem by deleting these lines:

if len(sys.argv) > 1:
    engine_file_path = sys.argv[1]
if len(sys.argv) > 2:
    PLUGIN_LIBRARY = sys.argv[2]

because they let stray command-line arguments override the engine and plugin paths, silently pointing the script at files from a different environment.
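A minimal alternative sketch (flag names and default paths are hypothetical): keep the override available but explicit, so a stray positional argument cannot silently repoint the script:

import argparse

# Hypothetical sketch: expose the paths as explicit flags with known
# defaults instead of positional sys.argv overrides, so nothing points
# the script at an engine or plugin from a different environment by accident.
parser = argparse.ArgumentParser()
parser.add_argument("--engine", default="./model.trt",
                    help="path to the serialized TensorRT engine")
parser.add_argument("--plugin", default="./libplugins.so",
                    help="path to the plugin shared library")
args = parser.parse_args()

engine_file_path = args.engine
PLUGIN_LIBRARY = args.plugin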