TensorRT and C++ Message Op type not registered 'TRTEngineOp'

I’m trying to load a TensorRT model using the TensorFlow C API, but I am getting the following error:

Unable to create session: Code 5; Message: Op type not registered 'TRTEngineOp' in binary running on desktop. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

The TensorRT model is a SavedModel created in Python.

I am facing the same issue in Python on a system with an Nvidia GeForce GTX 1050M (4 GB). However, the graph loads fine on a GCP VM with an Nvidia Tesla P100, so it looks like the GPU itself needs to support this.
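For the Python side, the error message itself hints at a workaround: contrib ops are lazily registered, so the contrib TensorRT module has to be imported before the graph is loaded. A minimal sketch, assuming TF 1.x with `tf.contrib` available (`load_trt_saved_model` and the export-dir argument are illustrative names, not from the thread):

```python
# Hedged sketch, not a verified fix: per the error message, contrib ops
# are lazily registered, so importing tensorflow.contrib.tensorrt BEFORE
# loading the SavedModel should register TRTEngineOp (TF 1.x only).
def load_trt_saved_model(export_dir):
    # Imports are deferred into the function to document the required
    # order: contrib.tensorrt first, then the SavedModel load.
    import tensorflow as tf
    import tensorflow.contrib.tensorrt  # noqa: F401 -- registers TRTEngineOp
    sess = tf.Session(graph=tf.Graph())
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    return sess
```

For the C API this trick is not available: the TRTEngineOp kernel has to be compiled into the TensorFlow binary you link against, which stock libtensorflow builds may not include.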

Are you loading the graph on the same GPU you used to optimise the graph?

Apparently, when you create a TensorRT plan using a GPU with a particular compute capability, the optimised model is valid only for a GPU with the same compute capability.

Source: https://devtalk.nvidia.com/default/topic/1052148/tensorrt/gpu-requirements-to-run-create_inference_graph-using-tensorrt-trt-in-tensorflow-/post/5341105/#5341105
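For reference, the two GPUs mentioned in this thread do sit at different compute capabilities (values from Nvidia's CUDA GPU documentation), which is consistent with the plan-incompatibility explanation above:

```python
# Compute capabilities of the GPUs discussed in this thread (per
# Nvidia's CUDA GPU documentation). A TensorRT plan built for one
# compute capability is not guaranteed to run on another.
compute_capability = {
    "GeForce GTX 1050 (Notebook)": "6.1",  # Pascal GP107
    "Tesla P100": "6.0",                   # Pascal GP100
}

# Both are Pascal, but the capabilities differ, so a plan built on the
# P100 would not be valid on the GTX 1050M.
plans_compatible = (
    compute_capability["GeForce GTX 1050 (Notebook)"]
    == compute_capability["Tesla P100"]
)
print(plans_compatible)  # → False
```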

Are you optimising and running the C API on the same GPU?