TensorRT Inference Server failed to load model

sys info:

- OS: Ubuntu 16.04
- TensorRT Inference Server: 1.15.0
- container: 19.06 (TensorFlow 1.13.1, TensorRT 5.1.5.0)
- CUDA version: 10.1.243
- driver version: 418.87.01

Error log:

failed to load 'demo' version 1: Not found: Op type not registered 'BatchMatMulV2' in binary running on fa60ca095bbf. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
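One quick way to confirm that the exported graph really uses this op is to dump the op types from the model. A minimal sketch, assuming the model is a TF 1.x SavedModel; the path `"/path/to/demo/1"` is a placeholder for the model repository directory:

```python
# Minimal sketch (TF 1.x): list the op types a SavedModel actually uses,
# so they can be compared against the TensorFlow build inside the container.
# "/path/to/demo/1" is a placeholder for the model repository path.
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "/path/to/demo/1")
    op_types = sorted({node.op for node in sess.graph.as_graph_def().node})
    print("\n".join(op_types))  # 'BatchMatMulV2' should appear in this list
```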

Could you help me solve this issue?

Please see https://github.com/NVIDIA/tensorrt-inference-server/issues/855
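For context: `BatchMatMulV2` only exists in TensorFlow 1.14 and later, so a graph exported with TF >= 1.14 cannot be loaded by the TF 1.13.1 build inside the 19.06 container. One way to verify which build registers the op — a minimal sketch using a TF 1.x internal module, run inside the container's Python environment:

```python
# Minimal sketch (TF 1.x): check whether the running TensorFlow build
# registers 'BatchMatMulV2'. The op was added in TF 1.14, so the
# TF 1.13.1 build in the 19.06 container should print False.
from tensorflow.python.framework import op_def_registry

registered = op_def_registry.get_registered_ops()
print("BatchMatMulV2" in registered)
```

If it prints False, the usual options are to re-export the model with TF 1.13 (where batched `tf.matmul` is lowered to the older `BatchMatMul` op) or to move to a newer container image whose TensorFlow build registers the op.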

Thanks, but that is the issue I opened!