TensorRT 3: Failed to create engine

I tried to use trt.utils.uff_to_trt_engine() to create a TensorRT engine and save the plan, but I got this error:

Using output node upscore32/up_filter
Converting to UFF graph
No. nodes: 2
[TensorRT] INFO: UFFParser: parsing upscore32/up_filter
[TensorRT] INFO: UFFParser: parsing MarkOutput_0
[TensorRT] ERROR: Network must have at least one input and one output
[TensorRT] ERROR: Failed to create engine
  File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 214, in uff_to_trt_engine
    assert(engine)
Traceback (most recent call last):
  File "trtinfmod.py", line 19, in
    1<<30)
  File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 222, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 214 in statement assert(engine)
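For context on why the Python exception is so uninformative: uff_to_trt_engine asserts that the builder returned an engine and re-raises only the text of the failed statement, so the real cause stays in the logger output on the console. A minimal pure-Python sketch of that pattern (build_engine here is a stand-in, not the actual TensorRT API):

```python
# Sketch of the assert-and-rewrap pattern in tensorrt/utils/_utils.py:
# the underlying failure is only logged, and Python sees just the assert text.
def build_engine():
    # Stand-in for the internal engine build, which yields None on failure.
    return None

def uff_to_trt_engine_sketch():
    engine = build_engine()
    try:
        assert engine
    except AssertionError:
        # The real cause was already printed by the TensorRT logger;
        # only the statement text survives into the new exception.
        raise AssertionError("UFF parsing failed in statement assert(engine)")

try:
    uff_to_trt_engine_sketch()
except AssertionError as e:
    print(e)  # -> UFF parsing failed in statement assert(engine)
```

This is why the replies below say to look at the console output rather than the Python traceback.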

Here is my source code:

from tensorrt.lite import Engine
from tensorrt.infer import LogSeverity
import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# Convert the frozen TensorFlow graph to UFF
uff_model = uff.from_tensorflow_frozen_model("test_frozen_model.pb", ["upscore32/up_filter"])

parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 600, 600), 0)
parser.register_output("upscore32/up_filter")

# Build the engine: max batch size 1, max workspace 1 << 20 bytes
engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                     uff_model,
                                     parser,
                                     1,
                                     1 << 20)

trt.utils.write_engine_to_file("test_tensorrt.engine", engine.serialize())
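One thing worth double-checking in the call above: the last two arguments to uff_to_trt_engine are the maximum batch size and the maximum workspace size in bytes. A quick sanity check on the shift values (the script passes 1 << 20, while the traceback shows 1 << 30):

```python
# Workspace sizes passed to uff_to_trt_engine are in bytes.
for shift in (20, 30):
    size = 1 << shift
    print("1 << %d = %d bytes (%d MiB)" % (shift, size, size // (1 << 20)))
```

A 1 MiB workspace can be too small for larger layers, though the "at least one input and one output" error here points at the parser rather than the workspace.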

Did you figure this issue out? The Python output is pretty much useless because it only tells you that building the engine failed. However, if you look at the terminal output, you can often get more information. FYI, I hit this same error, and my issue was that another Jupyter notebook running TensorFlow was using my GPU. The Python output was the error you showed, but the console output showed the real cause (CUDA error 2 is cudaErrorMemoryAllocation, i.e. the GPU ran out of memory):

[TensorRT] INFO: Fusing  conv1/add with activation pool1/transpose
[TensorRT] INFO: Fusing  conv2/add with activation pool2/transpose
[TensorRT] INFO: Fusing  fc1/BiasAdd with activation fc1/Relu
[TensorRT] INFO: Fusing  fc2/BiasAdd with activation fc2/Relu
[TensorRT] INFO: After conv-act fusion: 8 layers
[TensorRT] INFO: After tensor merging: 8 layers
[TensorRT] INFO: After concat removal: 8 layers
[TensorRT] ERROR: resources.cpp (199) - Cuda Error in gieCudaMalloc: 2
[TensorRT] ERROR: resources.cpp (199) - Cuda Error in gieCudaMalloc: 2
[TensorRT] ERROR: Failed to create engine

We created a new "Deep Learning Training and Inference" section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

Topic URLs will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth

The actual error is here:
[TensorRT] ERROR: Network must have at least one input and one output

Can you check that 'upscore32/up_filter' exists in the graph?

Can you try it without the '/' in the name?
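To check the first point, here is a hedged sketch (the helper and the sample node names are illustrative, not from the original post) of matching the registered output name against the graph's node list. In TensorFlow 1.x you would get the real list via [n.name for n in graph_def.node] after loading test_frozen_model.pb:

```python
def find_node(node_names, target):
    """Return an exact match, or candidates containing the last path segment."""
    if target in node_names:
        return [target]
    leaf = target.split("/")[-1]
    return [n for n in node_names if leaf in n]

# Made-up node names standing in for the frozen graph's contents:
names = ["input", "conv1/weights", "upscore32/up_filter/Conv2D"]
print(find_node(names, "upscore32/up_filter"))  # -> ['upscore32/up_filter/Conv2D']
```

A near-miss like this (registering "upscore32/up_filter" when the graph only has "upscore32/up_filter/Conv2D") would explain the parser marking no usable output.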

If both of those still fail, please file a bug here: https://developer.nvidia.com/nvidia-developer-program
Please include the steps used to reproduce the problem, the output of infer_device, and the frozen graph via the email that is provided.