What do we gain from serializing an ONNX model?

TensorRT provides the function serializedEngine() to serialize an ONNX model into a serialized engine. Why should we do this?

It is said that serialization is for transmission, but we can also transmit the ONNX file without serializing it.

I found that the serialized model is usually larger than its ONNX model, so if our purpose is just to store the model, the ONNX format is enough.

Any help is appreciated.
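
For reference, the serialization step looks roughly like this (a minimal sketch assuming the TensorRT 8.x Python API, where this is exposed as build_serialized_network; the file names are placeholders):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model into a TensorRT network definition
with open("model2.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Build the engine and write the serialized plan to disk
config = builder.create_builder_config()
plan = builder.build_serialized_network(network, config)
with open("example3.engine", "wb") as f:
    f.write(plan)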

Hi,
Could you please share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Usage: python check_model.py yourmodel.onnx
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command; an example invocation is shown below.
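
For example (the file names are placeholders; --verbose produces the log mentioned below):

trtexec --onnx=model2.onnx --saveEngine=model2.engine --verbose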

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

The serialized model is larger than the ONNX model.
Here is my ONNX model:
model2.onnx (1.8 KB)

and the serialized model:
example3.engine (6.6 KB)

Hi,

Serializing the model saves time when you plan to reuse it with TensorRT on the same platform: you do not need to rebuild the engine every time. For transmission, you should indeed use the ONNX model. A TensorRT-built engine is specific to the platform it was built on (GPU, OS, TensorRT version, etc.) and is not portable across different platforms.
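
A minimal sketch of that reuse pattern, assuming the TensorRT 8.x Python API and the engine file attached above: deserialize the saved plan instead of rebuilding from ONNX on every run.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Reload the previously built engine; no ONNX parsing or engine rebuild
# is needed, but this only works on the same GPU/OS/TensorRT version
# the engine was built with.
runtime = trt.Runtime(logger)
with open("example3.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())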

Thank you.

The explanation is reasonable.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.