Deserializing TensorRT engine file

I am serializing and then deserializing a ResNet-50 TensorRT engine.

Here is my code:
engine, context = build_engine(ONNX_FILE_PATH)
with open("resnet50.engine", "wb") as f:
    f.write(engine.serialize())
with open("resnet50.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

When I do this, it works. I get an output later. However, if I just run:
with open("resnet50.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
I get:
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.

How can I work around this? I don't want to have to rebuild the TensorRT engine from ONNX every time. Also, why does it work when I serialize and then deserialize in the same run, but not when I only deserialize?

Hi,

[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.

This error indicates that the file could not be opened or read into memory correctly, so the buffer passed to deserialize_cuda_engine is empty.
Do you have more logs from opening and reading the file?

Below is a deserialization example for your reference:
https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_PLAN_model
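To make the failure mode visible, here is a minimal sketch that reads the serialized engine with an explicit sanity check, so a missing or empty file fails loudly instead of reaching TensorRT as an empty buffer. The helper name `load_engine_bytes` is an assumption for illustration, and `TRT_LOGGER` is assumed to be defined as in your script:

```python
import os

def load_engine_bytes(path):
    """Read a serialized engine file and verify it is non-empty.

    (Hypothetical helper for illustration; raises before TensorRT
    ever sees an empty buffer.)
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Engine file not found: {path}")
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        raise ValueError(f"Engine file is empty: {path}")
    return data

# With TensorRT installed, the verified bytes are then deserialized as usual:
#   import tensorrt as trt
#   with trt.Runtime(TRT_LOGGER) as runtime:
#       engine = runtime.deserialize_cuda_engine(load_engine_bytes("resnet50.engine"))
```

If the check above raises, the problem is in how the engine file was written or where it is being read from (for example, a different working directory), not in TensorRT itself.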

Thanks.
