The GitHub repository demonstrates how to convert the model with TF-TRT, not into a standalone TensorRT plan.
As a result, this model needs to be executed through the TensorFlow interface:
For a pure TensorRT workflow, here are some detection samples for your reference:
I want to know whether I have successfully converted the classification model (Inception V1) into a .plan file.
I do not understand the info provided:
The script will display which nodes were excluded
for the engine. If there are any nodes listed besides
the input placeholders, TensorRT engine, and output
identity nodes, your engine does not include the entire model.
I used TensorFlow-TRT to optimize a classification model, then applied some code from NVIDIA's documentation to convert that graph to a .plan file. I just do not know whether it succeeded.
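The check quoted above can be sketched in a few lines. This is a minimal, hypothetical helper (not the script from the documentation): it walks the converted graph's nodes and flags anything that is not an input `Placeholder`, a `TRTEngineOp`, or an output `Identity`. The node data here is a plain list of `(name, op)` pairs so the logic can be shown without loading TensorFlow; with a real frozen graph you would build the list as `[(n.name, n.op) for n in graph_def.node]`.

```python
# Ops that are expected in a fully converted TF-TRT graph.
ALLOWED_OPS = {"Placeholder", "TRTEngineOp", "Identity"}

def excluded_nodes(nodes):
    """Return (name, op) pairs that TF-TRT left outside the engine."""
    return [(name, op) for name, op in nodes if op not in ALLOWED_OPS]

# Example graph: fully converted except one Softmax left in TensorFlow
# (node names are illustrative, not from a real Inception V1 graph).
nodes = [
    ("input", "Placeholder"),
    ("TRTEngineOp_0", "TRTEngineOp"),
    ("InceptionV1/Logits/Softmax", "Softmax"),
    ("output", "Identity"),
]

for name, op in excluded_nodes(nodes):
    print(f"excluded: {name} ({op})")
```

If the printed list is non-empty, the engine does not cover the entire model, and the leftover nodes would still run in TensorFlow rather than inside the TensorRT engine.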