tf_to_trt Error: Mismatch between allocated memory size and expected size of serialized engin...

Hello!

I’m trying to run the tf_to_trt_image_classification app, but only the Inception_v1, vgg_16, and mobilenet_v1_0p5_160 networks run. I’m using a Jetson TX2 with JetPack 3.2.

I’ve followed the tf_to_trt_image_classification tutorial, but the test application crashes with the following message:

test_trt: cudnnEngine.cpp:640: bool nvinfer1::cudnn::Engine::deserialize(const void*, std::size_t, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.

I’ve tried running it with the TensorFlow 1.5.0 pip wheel from the tutorial and with the TensorFlow 1.7 wheel from this topic.

The second line of the snippet below, from src/test/test_trt.cu, is failing:

IRuntime *runtime = createInferRuntime(gLogger);
ICudaEngine *engine = runtime->deserializeCudaEngine((void*)plan.data(),
      plan.size(), nullptr);
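For what it’s worth, that assertion usually fires when `plan` is empty or truncated, e.g. because the plan file was never generated or could not be opened. Here is a minimal sketch (the helper name `readPlanFile` and the error handling are mine, not from the repo) of reading the plan file defensively before handing it to `deserializeCudaEngine`:

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: read a serialized TensorRT plan file into memory,
// failing loudly if the file is missing or empty. If the plan file was
// never generated, plan.size() ends up as 0 and deserializeCudaEngine()
// trips the "Mismatch between allocated memory size" assertion.
std::vector<char> readPlanFile(const std::string &path)
{
    // Open at the end so tellg() gives us the file size.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) {
        std::cerr << "Could not open plan file: " << path << std::endl;
        return {};
    }
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> plan(static_cast<size_t>(size));
    if (size == 0 || !file.read(plan.data(), size)) {
        std::cerr << "Plan file is empty or unreadable: " << path << std::endl;
        return {};
    }
    return plan;
}
```

Checking `plan.empty()` after this call, and bailing out with a clear error instead of calling `deserializeCudaEngine`, would turn the assertion into an actionable message about the missing plan file.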

Can anyone help?

Thanks!!

Hi,

Could you check the value of testConfig.planPath?
Please make sure the path to the serialized engine (plan) file is valid.

Thanks.

Hi leonardopsantos,

Any update? Has the issue been resolved?

Thanks

Oh, sorry, yes, it has been resolved. It was an interface problem: the interface between the screen and the keyboard (i.e., myself) wasn’t doing things right. The plan file was simply missing.

Thank you for your help!

Good to know it works now. : )