where is convert-to-uff

how to run this code:
convert-to-uff tensorflow -o name_of_output_uff_file --input_file name_of_input_pb_file -O name_of_output_tensor

should I execute it in the terminal?

Hi,

Please use the Python API to convert the TensorFlow model into UFF.
Currently, the Python API is only available on x86-based Linux machines.

A model conversion sample can be found at '/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/'.
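(For example, a minimal conversion sketch with the Python API; the model path and output node name below are placeholders:)

import uff

# Placeholders: point at your own frozen graph and its output node
uff_model = uff.from_tensorflow_frozen_model("frozen.pb", ["fc2/Relu"])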

Thanks.

Hi AastaLLL:

I want to use the convert-to-uff.py utility to convert the .pb frozen graph to .uff.
I don't know how to execute the sample command from the user guide:
convert-to-uff tensorflow -o name_of_output_uff_file --input_file name_of_input_pb_file -O name_of_output_tensor

If I run it in the terminal, it always tells me:
convert_to_uff: command not found

Hi,

Please check our tf_to_trt.py example.
The flow should be like this:

......
# Convert the TensorFlow graph to UFF; "fc2/Relu" is the output node
uff_model = uff.from_tensorflow(tf_model, ["fc2/Relu"])

# Convert TensorFlow model to TensorRT model:
# register the graph's input/output tensors with the UFF parser
parser = uffparser.create_uff_parser()
parser.register_input("Placeholder", (1, 28, 28), 0)
parser.register_output("fc2/Relu")

# Build the TensorRT engine from the parsed UFF model
engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                     uff_model,
                                     parser,
                                     MAX_BATCHSIZE,
                                     MAX_WORKSPACE)
......
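(For reference, the elided setup in the sample defines the logger and builder limits roughly as below; a minimal sketch using the legacy tensorrt.infer API, with placeholder values:)

import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

# Placeholder values; tune MAX_BATCHSIZE/MAX_WORKSPACE for your model
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
MAX_BATCHSIZE = 1        # largest batch size the engine must support
MAX_WORKSPACE = 1 << 20  # scratch memory (in bytes) TensorRT may use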

Thanks.

Hi, 373197201,
I think that if you want to run the Python code, the command may be something like this:
python convert-to-uff.py tensorflow -o name_of_output_uff_file --input_file

How do I save uff_model to a file? I want to run the exported model on another device (Jetson).

Thanks.

I resolved it. The source code for uff.from_tensorflow is in "conversion_helpers.py".

uff.from_tensorflow(tf_model, ["fc2/Relu"], output_filename = "model.uff")

So is there any sample code available to do this in C++? I want to do this on the TX2.

-siddarth

Hi,

Converting a TensorFlow model to UFF is only available on an x86-based machine.
The flow we recommend is:
1. Convert the TensorFlow model to a UFF model on an x86 machine with the Python API.
2. Launch the TensorRT engine with the UFF model on Jetson with the C++ API.

Thanks.

That is the workflow I would like to see. A simple example of the Python code to save to UFF, and a simple C++ example to read the UFF on the Jetson for inference, would be sweet.

I will share if I put it together.

I have two questions:

  1. How can you save a UFF model once we have:
uff_model = uff.from_tensorflow_frozen_model(config['frozen_model_file'], OUTPUT_LAYERS)
  2. It looks like my simple TensorFlow model has nodes that aren't supported. Is that true? Why do I get this traceback?
Traceback (most recent call last):
  File "tf_to_uff.py", line 75, in <module>
    create_and_save_inference_engine()
  File "tf_to_uff.py", line 36, in create_and_save_inference_engine
    uff_model = uff.from_tensorflow_frozen_model(config['frozen_model_file'], OUTPUT_LAYERS)
  File "/opt/uff/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/opt/uff/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/opt/uff/uff/converters/tensorflow/converter.py", line 46, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: u'^dropout/cond/switch_t'

I see that dropout is not registered. Is there a way to clean up a TensorFlow model's nodes so that it passes UFF conversion?

Also, do I have to return the logits, or is an argmax node supported to get the integer that corresponds to the class?

Hi,

1. Try this command to remove unnecessary nodes (see the sketch after this list):

tf.graph_util.remove_training_nodes(frozen_graph)

2. You can return whichever layer you prefer, if it's supported.
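(A sketch of where remove_training_nodes fits into the conversion; the file name "frozen.pb" and output node "fc2/Relu" are placeholders, and, as the follow-up below notes, this does not remove dropout nodes:)

import tensorflow as tf
import uff

# Load the frozen GraphDef from disk
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Strip training-only nodes, then convert the cleaned graph to UFF
frozen_graph = tf.graph_util.remove_training_nodes(graph_def)
uff_model = uff.from_tensorflow(frozen_graph, ["fc2/Relu"])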

Thanks.

Here are the answers to my questions:

  1. How to save a .uff file (as simple as):
uff_model = uff.from_tensorflow_frozen_model(config['frozen_model_file'], OUTPUT_LAYERS, output_filename = "model.uff")
  2. The error above signifies that the dropout node in the TensorFlow graph is not supported.

The advice of

tf.graph_util.remove_training_nodes(frozen_graph)

will not remove the dropout node. Also, the graph transform tool does not support removing dropout nodes when optimizing the model for inference: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms.

I found this guy's post, which does the trick, though it's somewhat involved (you have to remove the node manually):

https://dato.ml/drop-dropout-from-frozen-model/
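(A rough sketch of the kind of manual surgery the linked post describes: copy the GraphDef, skip the dropout scope, and rewire consumers of the dropout output back to its input. All node names below are hypothetical; inspect your own graph to find the real ones:)

import tensorflow as tf

# Load the frozen GraphDef from disk
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

DROPOUT_OUTPUT = "dropout/mul"  # hypothetical: node producing the dropout result
DROPOUT_INPUT = "fc1/Relu"      # hypothetical: tensor feeding the dropout block

pruned = tf.GraphDef()
for node in graph_def.node:
    if node.name.startswith("dropout/"):
        continue  # drop the entire dropout scope
    new_node = pruned.node.add()
    new_node.CopyFrom(node)
    del new_node.input[:]
    for inp in node.input:
        name = inp.lstrip("^").split(":")[0]
        if name == DROPOUT_OUTPUT:
            new_node.input.append(DROPOUT_INPUT)  # bypass dropout
        elif not name.startswith("dropout/"):
            new_node.input.append(inp)            # keep other edges
        # else: drop dangling edges into the removed scope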

Thanks for sharing this information with us.

@ljstrnadiii Saving the .uff file doesn't work for me, any ideas?
uff version: '0.2.0'

uff_model = uff.from_tensorflow_frozen_model('test.pb', ['out/out'], out_filename='model.uff')

Solved.

out_filename → output_filename

(And in the convert-to-uff command line: input_file → input-file.)
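With both fixes applied, the earlier call becomes:

uff_model = uff.from_tensorflow_frozen_model('test.pb', ['out/out'], output_filename='model.uff')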

Issue is still unresolved. Could you please clarify:

  • The TensorRT documentation makes reference to a utility by the name “convert-to-uff”:
  1. Convert the .pb file to .uff, using the convert-to-uff utility:
    convert-to-uff models/lenet5.pb
    
    The converter will display information about the input and output nodes, which you can use to register the
    inputs and outputs with the parser. In this case, we already know the details of the input and output nodes
    and have included them in the sample.

Is this utility still available in the Linux distribution?

YES. But please note that it is only available on x86 Linux.

/usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py
/usr/lib/python3.5/dist-packages/uff/bin/convert_to_uff.py
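(Putting the thread together, the script can be invoked by its full path; the flags follow the user guide excerpt quoted above, and the file names are placeholders:)

python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py tensorflow -o model.uff --input-file frozen.pb -O name_of_output_tensor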

Thanks.

Why is this information not available anywhere else? I have been looking for this utility for hours. Extremely disappointed by the quality of the documentation. It is vague, incomplete, and sometimes refers to tools and files that are nowhere to be found, resulting in a loss of valuable working hours.