Error Code 1: Serialization + device GPU, failed to create CUDA engine

Man, I have heaps of problems today.

Background:
I cloned my working disk IMG to a higher-capacity drive so I have more room to install things. Everything seemed to be working fine, up until I tried running my previously working script that runs an object detection model I trained.

Error:
[TRT] 1: [stdArchiveReader.cpp::StdArchiveReader::54] Error Code 1: Serialization (Serialization assertion sizeRead == static_cast<uint64_t>(mEnd - mCurrent) failed.Size specified in header does not match archive size)
[TRT] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
[TRT] device GPU, failed to create CUDA engine
[TRT] failed to create TensorRT engine for /home/jb/catkin_ws/src/jetson-inference/python/training/detection/ssd/models/eraser/ssd-mobilenet.onnx, device GPU
[TRT] detectNet – failed to initialize.
jetson.inference – detectNet failed to load network

Thank you again to everyone who helps; posting my problems benefits me, since I can record and learn from them efficiently, while also helping others in the future.

Which JetPack version are you using?

4.4

Hi,

The error indicates that the engine file was serialized with a different TensorRT version.

TensorRT engines are not portable: both the hardware (GPU architecture) and the software (library version) need to be the same at the serialization and deserialization stages.
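As a quick check, you can print the TensorRT version installed on the device and compare it against the one that originally built the cached engine. A minimal sketch (the version in the comment is just an example for JetPack 4.4):

import tensorrt as trt

# A serialized engine can only be deserialized by the same TensorRT version.
print(trt.__version__)   # e.g. 7.1.3 on JetPack 4.4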

Thanks.

Is there a method I can use to make the serializing and deserializing stages match again? Can you elaborate on what that means, specifically? Thanks

Hi,

You can re-serialize the engine from the ONNX model (or another model format) on the target device.
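For example, jetson-inference normally caches the serialized engine next to the model file (a name like ssd-mobilenet.onnx.*.engine in your models/eraser folder); if that is your setup, deleting the cached engine should make detectNet rebuild it from the ONNX model on the next run. If you prefer to rebuild the engine yourself, below is a minimal sketch using the TensorRT 7.x Python API that ships with JetPack 4.4. The model path is taken from your log; the output file name and builder settings are example assumptions.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

ONNX_PATH = "/home/jb/catkin_ws/src/jetson-inference/python/training/detection/ssd/models/eraser/ssd-mobilenet.onnx"
ENGINE_PATH = ONNX_PATH + ".engine"   # example output name

# Parse the ONNX model into a TensorRT network definition.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse the ONNX model")

# Build and serialize the engine on the same device and TensorRT version
# that will deserialize it later.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 28          # 256 MB of build scratch space
config.set_flag(trt.BuilderFlag.FP16)        # FP16 suits Jetson GPUs
engine = builder.build_engine(network, config)   # TensorRT 7.x API

with open(ENGINE_PATH, "wb") as f:
    f.write(engine.serialize())

Because the engine must match both the GPU and the library version, run this on the Jetson itself, not on another machine.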
Thanks.
