[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)

I used TensorRT 6.0 on CentOS 7.
Detailed error information is as follows:
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
cluster_path: features_clustered_010.npy
data_type: 32
dataroot: ./datasets/cityscapes/
display_winsize: 512
engine: ./test.onnx
export_onnx: None
feat_num: 3
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 50
input_nc: 3
instance_feat: False
isTrain: False
label_feat: False
label_nc: 0
loadSize: 1024
load_features: False
local_rank: 0
max_dataset_size: inf
model: pix2pixHD
nThreads: 2
n_blocks_global: 9
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 4
n_local_enhancers: 1
name: label2city_1024p
nef: 16
netG: global
ngf: 64
niter_fix_global: 0
no_flip: False
no_instance: True
norm: instance
ntest: inf
onnx: None
output_nc: 3
phase: test
resize_or_crop: scale_width
results_dir: ./results/
serial_batches: False
tf_log: False
use_dropout: False
use_encoded_image: False
verbose: False
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [AlignedDataset] was created
[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "test.py", line 55, in <module>
generated = run_trt_engine(opt.engine, minibatch, [data['label'], data['inst']])
File "/data/libo/pix2pixHD/run_engine.py", line 142, in run_trt_engine
time_inference(engine, bs, it)
File "/data/libo/pix2pixHD/run_engine.py", line 112, in time_inference
for io in get_input_output_names(engine):
File "/data/libo/pix2pixHD/run_engine.py", line 67, in get_input_output_names
nbindings = trt_engine.get_nb_bindings();
AttributeError: 'NoneType' object has no attribute 'get_nb_bindings'
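
Note that the final AttributeError is only a downstream symptom: deserialize_cuda_engine returns None when the header check fails (for example, when the file is not a serialized engine at all, or was built with a different TensorRT version), and the script then calls a method on that None. A minimal guard that surfaces the real failure might look like this (the engine path is a placeholder; in the script above it comes from opt.engine):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder path; replace with the actual serialized engine file.
with open("./test.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    raise RuntimeError("Deserialization failed: the file is not a serialized "
                       "engine built with the installed TensorRT version.")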


Hi,

Was this engine (opt.engine) also created using TensorRT 6.0? If the engine was created and run on different versions, this can happen: TensorRT engines are not compatible across TensorRT versions.
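
If the versions differ, the engine has to be rebuilt with the TensorRT that will run it. Below is a minimal sketch of rebuilding from the original ONNX model using the TensorRT 6/7 Python API; the paths and workspace size are placeholder assumptions, not values from your setup:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_from_onnx(onnx_path):
    # Parse the ONNX file and build a fresh engine with the
    # TensorRT version that is installed right now.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GiB; adjust to your GPU
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

engine = build_engine_from_onnx("./test.onnx")
if engine is not None:
    with open("./test.trt", "wb") as f:
        f.write(engine.serialize())

You can confirm which version is running with: python -c "import tensorrt; print(tensorrt.__version__)"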

Thanks,
NVIDIA Enterprise Support

I have the same problem. I compiled an .onnx model into a .trt engine and ran it with the same version of TensorRT (6.0), but the problem still appeared.


I created the TRT engines with TensorRT 6.0 and I am also running inference with TensorRT 6.0, yet I am getting the same error:

[TensorRT] ERROR: ../rtSafe/coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.

What could be a possible reason for this?


I'm facing the same issue after updating my software recently. Did you update as well? My TensorRT is currently at version 7.1.3.0.


My problem was due to two issues:

  1. Wrong inputs to the model in terms of input size.
  2. Loading multiple runtimes in memory at once. You should load the runtime only once and deserialize all your models with that single runtime, instead of creating a separate runtime for every model (see the sketch below).

Fixing both resolved the issue for me. Maybe you can try these!
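
Here is a rough sketch of the single-runtime pattern from point 2, with a binding-shape check for point 1. All paths and names are hypothetical, and it assumes the TensorRT 6/7 Python API:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engines(engine_paths):
    # Create ONE runtime and reuse it for every engine.
    runtime = trt.Runtime(TRT_LOGGER)
    engines = {}
    for path in engine_paths:
        with open(path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        if engine is None:
            raise RuntimeError("Failed to deserialize " + path)
        engines[path] = engine
    # Return the runtime too: the engines must not outlive it.
    return runtime, engines

runtime, engines = load_engines(["./model_a.trt", "./model_b.trt"])
# Point 1: verify your input size against what the engine expects,
# e.g. engines["./model_a.trt"].get_binding_shape(0)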