[error] deserialize_cuda_engine(): incompatible function arguments in sample fc_plugin_caffe_mnist

Hello,
I modify the TensorRT python sample in samples/python/fc_plugin_caffe_mnist to support serializing and deserializing. But when I deserialize the saved engine file, it crashes and produces too much log.

load engine
Reading engine from file mnist.engine
<class 'build.fcplugin.FCpluginFactory'>
Traceback (most recent call last):
  File "sample2.py", line 156, in <module>
    des_engine()
  File "sample2.py", line 151, in des_engine
    engine = runtime.deserialize_cuda_engine(f.read(), fc_factory)
TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer, plugin_factory: tensorrt.tensorrt.IPluginFactory = None) -> tensorrt.tensorrt.ICudaEngine

Invoked with: <tensorrt.tensorrt.Runtime object at 0x7f2b50478298, '\xd8l\x1a\x00\x00\x00  ...............'>

This gist https://gist.github.com/crouchggj/63ebd84193ff4a695efe82c9b1d54f82 (also in the attached file) contains my modified test code, which you can run yourself. I use the main() function to save the engine file:

def main():
    # Locate the Caffe model files for the sample.
    data_path, [deploy_file, model_file, mean_proto] = common.find_sample_data(description="Runs an MNIST network using a Caffe model file", subfolder="mnist", find_files=["mnist.prototxt", "mnist.caffemodel", "mnist_mean.binaryproto"])

    with build_engine(deploy_file, model_file) as engine:
        # Serialize the engine and write it to disk.
        print("save engine")
        buf = engine.serialize()
        print(type(buf))
        with open("mnist.engine", 'wb') as f:
            f.write(buf)

and the des_engine() function to deserialize it:

def des_engine():
    print("load engine")
    engine_file_path = "mnist.engine"
    if os.path.exists(engine_file_path):
        print("Reading engine from file {}".format(engine_file_path))
        with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            print(type(fc_factory))
            engine = runtime.deserialize_cuda_engine(f.read(), fc_factory)
            print("success!")

I tested it with TensorRT 5 RC and TensorRT 5 GA; both produce the same error.
My environment:
CUDA version: CUDA 10
CUDNN version: 7.3.1
Python version: 2.7
TensorRT version: TensorRT 5 RC or TensorRT 5 GA

sample2.py.zip (3.46 KB)

I haven’t solved this issue yet. I suspect it is related to pybind11 usage (I use pybind11 v2.2.3). When I check the type of the fc_factory variable, it is FCpluginFactory; maybe it is not being recognized as a subclass of IPluginFactory. But I have no idea beyond that.
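For anyone curious why a genuine Python subclass can still be rejected: pybind11 matches an argument against a C++-typed parameter by walking the inheritance chain that was *declared to the binding layer*, not Python's own MRO. The sketch below is a hypothetical pure-Python analogue of that mechanism (not TensorRT's or pybind11's actual code) to illustrate how binding FCPluginFactory without declaring IPluginFactory as its base would produce exactly this "incompatible function arguments" rejection:

```python
# Hypothetical analogue of pybind11 type registration; class names are
# borrowed from this thread for illustration only.
DECLARED_BASES = {}  # cls -> bases declared to the binding layer


def bind(cls, declared_bases=()):
    """Analogue of py::class_<T, Bases...>: record only what was declared."""
    DECLARED_BASES[cls] = tuple(declared_bases)
    return cls


def accepts(value, expected):
    """Walk only the *declared* inheritance chain, as pybind11 does."""
    seen, todo = set(), [type(value)]
    while todo:
        cls = todo.pop()
        if cls is expected:
            return True
        if cls in seen or cls not in DECLARED_BASES:
            continue
        seen.add(cls)
        todo.extend(DECLARED_BASES[cls])
    return False


class IPluginFactory(object):
    pass


class FCPluginFactory(IPluginFactory):
    pass


bind(IPluginFactory)
bind(FCPluginFactory)  # bound WITHOUT declaring its base -- the suspected bug

print(accepts(FCPluginFactory(), IPluginFactory))  # False -> TypeError in pybind11

# Re-binding with the base declared makes the same call succeed:
bind(FCPluginFactory, declared_bases=(IPluginFactory,))
print(accepts(FCPluginFactory(), IPluginFactory))  # True
```

If this is the cause, the fix would live in the sample's pybind11 binding code rather than in the Python script.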

Hello,

Per engineering, we have a fix candidate, and it should be part of the next TensorRT release.

regards,
NVIDIA Enterprise Support

Hello, I’m facing the same issue, but I used the C++ repo https://github.com/lewes6369/TensorRT-Yolov3 to build the engine. While trying to deserialize it in Python, I’m getting a segfault (core dumped).

with open('yolov3_fp16.engine', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

  • Xavier with Jetpack 4.2
  • cuda 10.0.166-1
  • tensorrt 5.0.6.3-1+cuda10.0
  • pycuda==2019.1
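One common cause of a segfault when deserializing an engine built by a C++ project with custom plugins is that the plugin implementations are not loaded into the Python process before deserialization. This is an assumption about this particular crash, not a confirmed diagnosis; the usual pattern is to load the plugin shared library with ctypes first. The sketch below uses the already-loaded C runtime (ctypes.CDLL(None)) as a stand-in for a hypothetical libyoloplugin.so:

```python
import ctypes

# Stand-in: load a shared library into the process the same way a TensorRT
# plugin library would be. For a real engine with custom plugins this would
# be something like ctypes.CDLL("./libyoloplugin.so")  (hypothetical name).
libc = ctypes.CDLL(None)  # dlopen(NULL): handle to the running process
print(libc.abs(-42))      # proves the loaded symbols are callable: 42

# With the plugin symbols resident (and, if the plugins use the registry,
# after trt.init_libnvinfer_plugins(TRT_LOGGER, "")), deserialization
# would then proceed as before:
# with open('yolov3_fp16.engine', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
#     engine = runtime.deserialize_cuda_engine(f.read())
```

If the crash persists with the plugin library loaded, a version mismatch between the TensorRT that built the engine and the one deserializing it is the next thing to rule out, since engines are not portable across versions.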

I met the same problem. Are there any solutions?
My environment: Ubuntu 16.04, TensorRT 5.0.2.6

@NVES, any update about the issue?

Just upgrade TensorRT and see the example in <tensorrt directory>/samples/python/fc_plugin_caffe_mnist.
I use TensorRT 5.1.2.2 and this problem is solved.
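For anyone checking whether the upgraded wheel is actually the one their script imports: TensorRT exposes the installed version as trt.__version__, and dotted version strings should be compared as integer tuples rather than as strings (string comparison breaks once a component reaches two digits). A minimal, TensorRT-free sketch of that comparison, using version numbers from this thread:

```python
def parse_version(v):
    """Turn a dotted version string like '5.1.2.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


# Tuple comparison is numeric per component, so '5.10' correctly beats '5.9'.
installed = parse_version("5.1.2.2")  # e.g. from tensorrt: trt.__version__
fixed_in = parse_version("5.1.2.2")
broken = parse_version("5.0.6.3")

print(installed >= fixed_in)  # True
print(broken >= fixed_in)     # False
```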

I have tried TensorRT 5.1.2.2, but it didn’t work for me.