[TensorRT 5.0] Parsing a UFF InceptionV3 model converted from a TF frozen model fails

OS: CentOS 7.0
TensorRT: 5.0.0.10
TensorFlow: 1.7.0
Model: Inception V3

Description:
First, I converted a frozen InceptionV3 model to the UFF format with TensorRT 5.0.0.10, and the conversion appeared to succeed. Then I tried to parse the UFF model, but it failed. Although the log shows "Segmentation fault (core dumped)", no core file is actually produced to debug with. I would like to try the inference performance improvements brought by the new TRT version, but I am not sure whether my procedure is correct.
Any ideas are welcome.

Conversion Log:

$ convert-to-uff frozen_graph.pb 
/home/karafuto/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Loading frozen_graph.pb
=== Automatically deduced input nodes ===
[name: "Placeholder"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: 299
      }
      dim {
        size: 299
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "InceptionV3/Logits/SpatialSqueeze"
op: "Squeeze"
input: "InceptionV3/Logits/Conv2d_1c_1x1/BiasAdd"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "squeeze_dims"
  value {
    list {
      i: 1
      i: 2
    }
  }
}
]
==========================================

Using output node InceptionV3/Logits/SpatialSqueeze
Converting to UFF graph
Warning: keepdims is ignored by the UFF Parser and defaults to True
No. nodes: 789
UFF Output written to frozen_graph.uff

Parsing Log:

$ python trtuff_test.py 
TensorRT Version: 5.0.0.10
TensorRT parse UFF model
Segmentation fault (core dumped)

Parser Script:

$ cat trtuff_test.py 
import tensorrt as trt

print('TensorRT Version: '+trt.__version__)
model_file = './frozen_graph.uff'
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

print('TensorRT parse UFF model')
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("Placeholder", (-1,299,299,3))
    parser.register_output("InceptionV3/Logits/SpatialSqueeze")
parser.parse(model_file, network)
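One thing worth checking in the script above: parser.parse(model_file, network) sits outside the with block, so by the time it runs, the builder, network, and parser created by the context managers have already been cleaned up. With plain Python objects this raises an exception; with bindings that wrap C++ objects it can crash instead. A minimal pure-Python analogue of the lifetime issue (using io.StringIO, not TensorRT):

```python
import io

# Objects created by a `with` statement are cleaned up when the
# block exits, so using them afterwards fails.
with io.StringIO() as buf:
    buf.write("hello")

# buf was closed on block exit; any further use raises ValueError here.
try:
    buf.write("world")
except ValueError as exc:
    print("use after close:", exc)
```

Moving parser.parse() inside the with block keeps all three objects alive while parsing.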

Hello,

Can you share the frozen graph (pb file) to help us debug?

Thanks,
NVIDIA Enterprise Support

Hi,
Did you resolve your problem? I have the same problem.

os: Ubuntu 18.04
tensorrt: 5.1.5
python: 3.6.8, GCC 8.0.1 20180414 (experimental) [trunk revision 259383
model: a custom, very simple Keras model with input dimensions (28, 28, 1) (MNIST)

I converted the pb file to UFF as follows:

import uff

uff_model = uff.from_tensorflow_frozen_model(
    frozen_file='model.pb',
    output_nodes=['output_tensor/Softmax'],
    output_filename='model_uff.uff'
)

This produces the messages below and the UFF file model_uff.uff:

WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py:185: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "input_tensor_input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: 28
      }
      dim {
        size: 28
      }
      dim {
        size: 1
      }
    }
  }
}
]
=========================================

Using output node output_tensor/Softmax
Converting to UFF graph
DEBUG: convert reshape to flatten node
No. nodes: 30
UFF Output written to model_uff.uff

Then I tried to parse the UFF file, following https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#python_topics and using code like yours:

import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
model_file = 'model_uff.uff'

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("Placeholder", (1, 28, 28))
    # I also tried different dimension settings, e.g.:
    # parser.register_input("Placeholder", (-1, 28, 28, 1))
    parser.register_output("output_tensor/Softmax")

Both register calls above return True.

Finally I run parser.parse as below:

parser.parse(model_file, network)

I get Segmentation fault (core dumped), and python3 exits.
Can somebody help me?

Next I tried to parse TensorRT's sample lenet5.uff file:

import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
model_file = 'lenet5.uff'

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("Placeholder", (1, 28, 28))
    parser.register_output("fc2/Relu")
    # I am not sure whether the NVIDIA documentation is right here;
    # the original document does not call parse() in this section.
    parser.parse(model_file, network)

Running the code gives the following output:

True
True
[TensorRT] ERROR: UffParser: Parser error: in: Invalid number of Dimensions 0
False

I think something may be wrong with my model, but I am not sure why following NVIDIA's documentation produces an error too.

I think I know what happened.

My model's input shape is (28, 28, 1) (NHWC), while TensorRT's default input order is (1, 28, 28) (NCHW).

So I registered the input with the correct shape and order:

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    parser.register_input("input_tensor_input", (28, 28, 1), trt.UffInputOrder.NHWC) 
    parser.register_output("output_tensor/Softmax")
    parser.parse(model_file, network)

Then I get three True results, and parsing finishes successfully.
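The fix above comes down to matching TensorFlow's channel-last layout to TensorRT's channel-first default. The reordering can be sketched as a tiny helper (nhwc_to_nchw is a hypothetical name for illustration, not a TensorRT API):

```python
def nhwc_to_nchw(shape):
    """Reorder a TensorFlow-style (H, W, C) shape into TensorRT's
    default (C, H, W) channel-first layout.

    Hypothetical helper for illustration, not part of TensorRT.
    """
    h, w, c = shape
    return (c, h, w)

# A (28, 28, 1) Keras/TensorFlow input corresponds to (1, 28, 28)
# when registered in TensorRT's default NCHW order.
print(nhwc_to_nchw((28, 28, 1)))   # → (1, 28, 28)
print(nhwc_to_nchw((299, 299, 3))) # → (3, 299, 299)
```

Alternatively, as in the working snippet above, one can keep the NHWC shape and tell the parser so via trt.UffInputOrder.NHWC.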