Upgrade TensorRT engine file

I have already generated a .trt engine file for which I don't have the ONNX file. When I try to load it on my Jetson AGX, I get this error:

[TensorRT] ERROR: coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):

This is possibly the same issue as in this post.

According to the answer in that post, I realized my trt file was generated with TensorRT 5, while my Jetson AGX is running TensorRT 7.

Is there any way to upgrade my trt file from version 5 to version 7? If so, how?


Unfortunately, TensorRT engines are not portable across versions.
So you will need to rebuild the engine with TensorRT v7 directly from the ONNX file.
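If you can re-export the ONNX file from the original model, the simplest way to rebuild the engine is the trtexec tool that ships with TensorRT. A minimal sketch, assuming a model named model.onnx (the file names here are placeholders):

```
# Rebuild a TensorRT 7 engine from an ONNX model on the Jetson AGX
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt

# Optionally enable FP16 on Jetson for better performance
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

The resulting model.trt will match the TensorRT version installed on the device it was built on.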

You can find some details in our document:

Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

Q: Are engines and calibration tables portable across TensorRT versions?

A: No. Internal implementations and formats are continually optimized and may change between versions. For this reason, engines and calibration tables are not guaranteed to be binary compatible with different versions of TensorRT. Applications should build new engines and INT8 calibration tables when using a new version of TensorRT.
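Because of this, the usual workflow is to ship the ONNX model and build the engine on the target device with its installed TensorRT version. A minimal sketch of that rebuild using the TensorRT 7 Python API (file names and workspace size are assumptions, not from this thread):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path, workspace=1 << 30):
    """Parse an ONNX model and serialize a TensorRT engine for this device."""
    builder = trt.Builder(TRT_LOGGER)
    # TensorRT 7 requires an explicit-batch network for ONNX models
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = workspace  # scratch memory for tactics
    engine = builder.build_engine(network, config)

    # Serialize the engine; it is only valid for this TensorRT version/device
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())
    return engine

# build_engine("model.onnx", "model.trt")
```

An engine built this way deserializes cleanly with the same TensorRT release, which avoids the "Magic tag does not match" error above.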

