TensorRT 3.0 UFF parser failed to parse output node (Assertion Error)

I had a problem using TensorRT. I tried the “vgg keras example” and it worked very well.
However, when I tried my own model I got an AssertionError.

I froze the “Inception Resnet V1” network and converted the frozen graph like this:

uff_model = uff.from_tensorflow(tf_model, ["InceptionResnetV1/Classifier/logits/weights"])

and I got the success message “Converting to UFF graph No. nodes: 2”.
Then, as in the “vgg keras example”, I did the next steps: created a UFF parser, registered the input and output nodes, and built the engine:

import tensorrt as trt
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 160, 160), 0)
parser.register_output("InceptionResnetV1/Classifier/logits/weights")
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20, trt.infer.DataType.FLOAT)

and it failed with an AssertionError:

[TensorRT] INFO: UFFParser: parsing InceptionResnetV1/Classifier/logits/weights
[TensorRT] INFO: UFFParser: parsing MarkOutput_0
[TensorRT] ERROR: Network must have at least one input and one output
[TensorRT] ERROR: Failed to create engine

File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 214, in uff_to_trt_engine
assert(engine)
AssertionError: Assertio…ngine)’

Is there any problem with my input or output node names?
I checked the node names by reading the graph def (roughly as in the sketch below) and tried a different output node name, but that didn’t work either. I also tried switching the network from InceptionResnetV1 to MobileNetV1, but I got the same error message.
Honestly, I’m not even sure I’m calling the API correctly.
Has anyone faced the same problem? How did you solve it?
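
For reference, this is roughly how I listed the node names from the frozen graph (the file name here is just an example):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inception_resnet_v1.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every op name so the real input and output nodes can be identified
for node in graph_def.node:
    print(node.name)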

Environment: Python 2.7, TensorFlow 1.4.0, TensorRT 3.0, V100, Inception ResNet V1, MobileNet V1

I have the same problem. Were you able to solve this?

parser.register_output("InceptionResnetV1/Classifier/logits/weights")
I think you made a mistake here: the argument to this function should be the last node of the network (the actual output op, not a weights tensor),
as shown in the official tutorial from NVIDIA:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/topics/workflows/tf_to_tensorrt.html
Check this function in the link above.
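
A rough sketch of what I mean (the output name below is only a placeholder, use whatever the last op of your own graph is called; the rest follows the code from the question):

import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

# "InceptionResnetV1/Logits/Predictions" is a placeholder: register the graph's
# final op (e.g. the logits/softmax op), not a weights variable.
OUTPUT_NODE = "InceptionResnetV1/Logits/Predictions"

uff_model = uff.from_tensorflow(tf_model, [OUTPUT_NODE])  # tf_model: the frozen GraphDef

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 160, 160), 0)
parser.register_output(OUTPUT_NODE)
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20, trt.infer.DataType.FLOAT)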

Same problem here. Is there any solution to this?