No OpKernel registered to support 'TRTEngineOp' even after LoadLibrary

I'm trying to build a stand-alone C++ application that runs a saved TensorFlow-TRT graph from a .pb file.

So far, I’ve been successful converting the TF graph to TRT, saving, loading, and running inference with it in Python, as well as loading the unoptimized TensorFlow graph with the C++ application. Loading the saved TensorRT graph (really a TensorFlow graph with some TRTEngineOp nodes in it, as I understand it) with the C++ application, using the TensorFlow C API, has proven problematic.

If I simply change the working C++ application to load the optimized .pb file, I immediately see the error

Unable to import GraphDef from buffer Op type not registered 'TRTEngineOp' in binary...

This is expected, since I didn’t load any TRT library.

I then change the code to load the _trt_engine_op.so library corresponding to the version that was used to create the TensorRT graph. Importing the graph works correctly, as does setting up the necessary TensorFlow inputs and outputs. When it comes time to run the TF session, however, I see this error:

No OpKernel was registered to support Op 'TRTEngineOp'...

The full error output also indicates that there are no registered kernels for the op. It seems that even though the _trt_engine_op.so library contains this kernel, loading the library didn’t register it.

Any ideas on how to resolve this? This is not a standalone TensorRT engine, since there were parts of the TF graph that were not converted.

Using TF 1.13.1, TRT 5.1.5.

Does anyone know how to solve this? The same thing is happening in my case. Thanks.

Did you ever get this sorted out? I’m seeing the exact same problem.

I have no idea how to load a TensorRT SavedModel using the C API. Nothing I’ve tried works.

Never got it to work, so I stopped pursuing using TensorRT with C++ code.