[TensorRT] ERROR: coreReadArchive.cpp (41) - Serialization Error in verifyHeader

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

[TensorRT] ERROR: coreReadArchive.cpp (41) - Serialization Error in verifyHeader: 0 (Version tag does not match. Note: Current Version: 96, Serialized Engine Version: 89)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "/home/vaaan/tlt-experiments/test.py", line 249, in <module>
    inputs, outputs, bindings, stream = allocate_buffers(trt_engine)
  File "/home/vaaan/tlt-experiments/test.py", line 63, in allocate_buffers
    for binding in engine:
TypeError: 'NoneType' object is not iterable
• TensorRT Version: 7.2.1.6
• GPU: Quadro RTX 5000 (dual GPU)
• Driver Version: 455.23.05
• CUDA Version: 11.1
• OS: Ubuntu 18.04
• Python: 3.6
• Network Type: detectnet_v2
• Container: nvcr.io/nvidia/tlt-streamanalytics:v3.0-py
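For what it's worth, the `'NoneType' object is not iterable` error follows directly from the failed deserialization: TensorRT's `deserialize_cuda_engine()` returns `None` instead of raising when the engine file was serialized by a different TensorRT version, and the script then iterates over that `None` in `allocate_buffers`. A minimal guard (a sketch; `require_engine` is a hypothetical helper name) makes the real failure obvious:

```python
def require_engine(engine):
    """Guard for the result of trt.Runtime(...).deserialize_cuda_engine(buf).

    TensorRT returns None (rather than raising) when deserialization fails,
    e.g. when the engine was serialized by a different TensorRT version.
    Failing fast here gives a clearer message than the later
    "'NoneType' object is not iterable" inside allocate_buffers().
    """
    if engine is None:
        raise RuntimeError(
            "Failed to deserialize the TensorRT engine. "
            "Rebuild the engine with the TensorRT version installed "
            "on this device."
        )
    return engine
```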

Please generate the TensorRT engine on the device where you want to run inference.

So if I want to deploy a model built using the Transfer Learning Toolkit, should I train it on the same device (e.g. Jetson Xavier NX), or can I align the dependencies on the two devices and train on my workstation but deploy on my Jetson Xavier NX?

No, you are not able to train on the Xavier or NX, and it is not needed to train there.

You just need to copy the .etlt model onto the Xavier or NX and run inference in one of two ways.

  1. Run inference by configuring the .etlt model in DeepStream.
  2. Or download the tao-converter, generate a TensorRT engine, and run inference in DeepStream or in a standalone way.
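For the 2nd method, the conversion step on the Jetson looks roughly like this (a sketch: the key, file names, and `-d` input dimensions are placeholders for your own model; the `-o` output tensor names are the usual detectnet_v2 ones, so confirm them against your export log):

```shell
# Run on the Jetson itself so the engine matches the TensorRT version there.
# Placeholders: KEY (the key used when exporting the .etlt), the .etlt file
# name, and the -d input dimensions (C,H,W).
KEY="yourExportKey"
./tao-converter detectnet_v2.etlt \
    -k "$KEY" \
    -d 3,384,1248 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -e detectnet_v2.engine
```

The resulting `detectnet_v2.engine` is the serialized TensorRT engine you then point your DeepStream config or standalone script at.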

Thank you. So which dependencies should be the same on the workstation and the Jetson, e.g. TensorRT?

If you select the 2nd method, you just need to download the correct version of tao-converter for your Jetson device.
https://docs.nvidia.com/tao/tao-toolkit/text/tensorrt.html#id5

thank you

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.