error while using TensorRT

infer: engine.cpp:1104: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
Aborted (core dumped)

Specs:

dpkg -l | grep TensorRT

ii graphsurgeon-tf 5.1.5-1+cuda10.1 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda10.1 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda10.1 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda10.1 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.1.5-1+cuda10.1 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.1.5-1+cuda10.1 amd64 Python development package for TensorRT
ii python3-libnvinfer 5.1.5-1+cuda10.1 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.1.5-1+cuda10.1 amd64 Python 3 development package for TensorRT
ii tensorrt 5.1.5.0-1+cuda10.1 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda10.1 amd64 UFF converter for TensorRT package

Hi,

This assertion fires when the buffer size passed to deserializeCudaEngine is smaller than the size recorded in the serialized engine, which typically means the engine file was read incompletely (e.g., not opened in binary mode) or was built with a different TensorRT version. Please refer to the sample implementation below:
https://github.com/NVIDIA/TensorRT/blob/572d54f91791448c015e74a4f1d6923b77b79795/samples/common/sampleEngines.cpp#L488
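
For reference, here is a minimal sketch of the loading pattern used in that sample, written against the TensorRT 5 C++ API (the deserializeCudaEngine overload that takes an IPluginFactory argument). The file name "model.engine" and the logger are placeholders, not taken from your setup:

```cpp
#include "NvInfer.h"
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger; TensorRT requires one to create a runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    // Placeholder path -- substitute your serialized engine file.
    // Opening in binary mode matters: text mode can corrupt the stream.
    std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
    if (!file)
    {
        std::cerr << "cannot open engine file" << std::endl;
        return 1;
    }

    // Read the entire file; the size passed to deserializeCudaEngine
    // must match the number of bytes actually read.
    const std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> engineData(size);
    if (!file.read(engineData.data(), size))
    {
        std::cerr << "cannot read engine file" << std::endl;
        return 1;
    }

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(engineData.data(), size, nullptr);
    if (!engine)
    {
        std::cerr << "engine deserialization failed" << std::endl;
        return 1;
    }

    // ... create an IExecutionContext and run inference here ...

    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Also worth double-checking: a serialized engine must be deserialized with the same TensorRT version and on the same GPU type it was built with; engines are not portable across versions or devices.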

If the issue persists, could you please share the script and model file so we can help you better?
Also, can you provide details on the platform you are using:
o Linux distro and version
o GPU type
o NVIDIA driver version
o CUDA version
o cuDNN version
o Python version [if using Python]
o Tensorflow and PyTorch version
o TensorRT version

Thanks