infer: engine.cpp:1104: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
Aborted (core dumped)
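For context, this assertion fires inside IRuntime::deserializeCudaEngine() when the size passed in does not match the length recorded in the serialized engine blob (commonly because the .engine file was read in text mode, read only partially, or was produced by a different TensorRT version). Below is a minimal sketch of the usual loading pattern that hits this assertion; the Logger class and "model.engine" path are placeholders, not the exact application code.

#include <fstream>
#include <vector>
#include <iostream>
#include "NvInfer.h"

// Minimal logger required by createInferRuntime (placeholder implementation).
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the serialized engine in binary mode; text mode or a partial read
    // makes the byte count disagree with the size stored in the blob.
    std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
    const std::size_t size = static_cast<std::size_t>(file.tellg());
    file.seekg(0, std::ios::beg);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    // The "Mismatch between allocated memory size and expected size" assertion
    // is raised inside this call when `size` is inconsistent with the blob.
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), size, nullptr);

    if (engine) engine->destroy();
    runtime->destroy();
    return 0;
}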
Specs:
dpkg -l | grep TensorRT
ii graphsurgeon-tf 5.1.5-1+cuda10.1 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda10.1 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda10.1 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda10.1 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.1.5-1+cuda10.1 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.1.5-1+cuda10.1 amd64 Python development package for TensorRT
ii python3-libnvinfer 5.1.5-1+cuda10.1 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 5.1.5-1+cuda10.1 amd64 Python 3 development package for TensorRT
ii tensorrt 5.1.5.0-1+cuda10.1 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda10.1 amd64 UFF converter for TensorRT package