About converting a TF model to UFF

Hi,
I saved my own TF model as a model.pb file and converted it to a UFF file with the following code:

import tensorflow as tf
import uff

pb_file_path = "model.pb"
with tf.Graph().as_default():
    output_graph_def = tf.GraphDef()
    with open(pb_file_path, "rb") as f:
        output_graph_def.ParseFromString(f.read())

    uff_model = uff.from_tensorflow_frozen_model(output_graph_def,
                                                 ["output"],
                                                 output_filename="model.uff")

I saved this code in a .py file named pb_to_uff.py
and ran "python3 pb_to_uff.py" in an Ubuntu terminal.
It seemed to work, and the converted content was printed in the terminal as in the attached screenshot,
but no UFF file was generated in the current folder.
It seems the conversion only happened in the terminal and nothing was saved…
How can I get the UFF file?

Hi,

To save the UFF file, please use this function:

uff.from_tensorflow(graphdef=frozen_graph,
                    output_filename=UFF_OUTPUT_FILENAME,
                    output_nodes=OUTPUT_NAMES,
                    text=True)
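
For example, a minimal sketch that ties this together with your loading code (assuming the frozen graph is model.pb and the output node is really named "output" — adjust both to match your model):

import tensorflow as tf
import uff

pb_file_path = "model.pb"

# Load the frozen GraphDef from disk.
graph_def = tf.GraphDef()
with open(pb_file_path, "rb") as f:
    graph_def.ParseFromString(f.read())

# Convert the graph and write model.uff to the current folder.
uff_model = uff.from_tensorflow(graphdef=graph_def,
                                output_filename="model.uff",
                                output_nodes=["output"],
                                text=True)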

There is also a relevant sample for your reference:
/usr/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/lenet5.py.

Thanks.

Hi, thanks very much for your kind reply.
I used the uff.from_tensorflow() function, but it raised an error like this:

Using output node final/lanenet_loss/instance_seg
Using output node final/lanenet_loss/binary_seg
Converting to UFF graph
Warning: No conversion function registered for layer: Slice yet.
Converting as custom op Slice final/lanenet_loss/Slice
name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}

Traceback (most recent call last):
  File "tfpb_to_uff.py", line 16, in <module>
    uff_model = uff.from_tensorflow(graphdef=output_graph_def, output_filename=output_path, output_nodes=["final/lanenet_loss/instance_seg", "final/lanenet_loss/binary_seg"], text=True)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 146, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers or slices, not AttrValue

This means that TensorRT doesn't support the Slice layer yet,
but Slice is a basic TensorFlow operation. It is strange that such a basic TensorFlow operation is unsupported by TensorRT.
For the Slice operation, you can refer to tensorflow::ops::Slice:
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/slice

Is that really the case?
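
For reference, here is a minimal toy usage of tf.slice (TF 1.x style, not taken from my model), just to show how basic the op is:

import tensorflow as tf

# Toy example: take a 1x2x2 slice out of a 2x2x3 tensor, starting at [1, 0, 0].
x = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]]])
y = tf.slice(x, begin=[1, 0, 0], size=[1, 2, 2])

with tf.Session() as sess:
    print(sess.run(y))  # [[[3 3] [4 4]]]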

Hi,

We are discussing support for the Slice op but don't have a concrete schedule yet.
We will forward your comment to our internal TensorRT team.

Thanks and sorry for any inconvenience.