UFF Conversion Tools documentation

Is there something written down which explains the functions uff.from_tensorflow and uff.from_tensorflow_frozen_model better than the official docs? https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/uff.html#tensorflow-frozen-protobuf-model-to-uff

I know that there are several examples online, but a clear overview of all parameters of these functions would be nice to have.

For example, I have the problem that during conversion the function prints a lot of information to the console, and I cannot turn this behaviour off.

Greets and thanks in advance

Moritz

Hello,

We will work to improve documentation.

Regarding reducing verbosity, you can set quiet=True:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_302/tensorrt-developer-guide/index.html#convert_model_tensorflow
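For instance, a minimal sketch (the paths and node names here are placeholders, and this assumes the uff package from the TensorRT Python bindings is installed):

```python
def convert_quietly(pb_path, output_nodes, uff_path):
    """Convert a frozen TensorFlow .pb to UFF with console logging suppressed."""
    import uff  # from the TensorRT Python bindings

    return uff.from_tensorflow_frozen_model(
        frozen_file=pb_path,
        output_nodes=output_nodes,  # list of output node names
        output_filename=uff_path,
        text=False,
        quiet=True,  # per the linked guide, suppresses conversion output
    )

# usage (placeholder paths/names):
# convert_quietly("model.pb", ["out/Softmax"], "model.uff")
```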

That’s what I tried, but it still prints to the console.

Greets

Hello,

quiet=True should be honored in TRT. To help us debug, can you share a small repro package containing the source and model (.pb) that demonstrates quiet=True not reducing verbosity?

@NVES

Hi,
I am wondering about the parameters of the UFF conversion tool provided by NVIDIA. Could you please tell me how to pass two output names to the function uff.from_tensorflow_frozen_model?

I have the .pb file which has two output names in the graph, they are a/Softmax and o/Softmax.

I tried the following:

frozen_graph_filename = 'pb_files/xxx.pb'
output_name = "a/Softmax,o/Softmax"
UFF_FILENAME = 'xxx.uff'

# generate uff from frozen graph
uff_model = uff.from_tensorflow_frozen_model(
    frozen_file=frozen_graph_filename,
    output_nodes=[output_name],
    output_filename=UFF_FILENAME,
    text=False,
)

But it gives me "OUTPUT_NAMES are not found in the graph".
Could you please help me with this.

Thank you
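Hi, not sure if this is the cause, but output_nodes expects one list entry per output node, so the comma-joined string above is looked up as a single (nonexistent) node name. A minimal sketch of the split form, using the same names from your post:

```python
output_name = "a/Softmax,o/Softmax"

# split the comma-joined string into one list entry per output node
output_nodes = output_name.split(",")
print(output_nodes)  # ['a/Softmax', 'o/Softmax']
```

and then pass output_nodes=output_nodes (or simply ["a/Softmax", "o/Softmax"]) to uff.from_tensorflow_frozen_model.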