Convert TensorFlow model to run inference on TX2

Hi, I have trained a TensorFlow model on a GPU workstation; the model consists of the following files:


How can I run inference on the TX2 with this trained model? Currently I am using Jetson-Inference, but that framework only accepts TensorFlow UFF models. How can I convert the original TensorFlow model to UFF?

Thanks. Is there a command line utility that can do this?

It only takes a few lines of Python, so I doubt there is one. Just use the uff API to do the conversion and try it!

I used the following code:

import uff

frozen_file = './frozen_model.pb'
output_node_names = ['lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/binary_final_logits/W']

uff.from_tensorflow_frozen_model(frozen_file, output_node_names, out_filename='./model.uff')

It ran; however, I did not see a "model.uff" file saved. Why?
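For what it's worth, a quick stdlib check placed right after the uff call can rule out the file simply landing in an unexpected working directory (the path below is copied from the snippet above):

```python
import os

uff_path = './model.uff'
# Print an absolute path so a file written from a different working
# directory is easy to spot.
if os.path.isfile(uff_path):
    print('UFF written:', os.path.abspath(uff_path), os.path.getsize(uff_path), 'bytes')
else:
    print('No UFF file at', os.path.abspath(uff_path))
```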


Could you share the log from when you executed the script?
If no UFF file was generated, there should be an error message.
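One silent-failure mode worth ruling out first (this is an assumption on my part, worth verifying against the uff documentation): from_tensorflow_frozen_model forwards extra keyword arguments through **kwargs, and if I recall correctly the documented keyword for the output path is output_filename, not out_filename. A misspelled keyword passed into **kwargs is swallowed without a TypeError, so no file gets written and no error is raised. A minimal pure-Python sketch of that pitfall (the function below is a hypothetical stand-in, not the real converter):

```python
# Hypothetical stand-in for uff.from_tensorflow_frozen_model: like the real
# converter, it takes its output path through **kwargs, so a misspelled
# keyword is silently dropped instead of raising a TypeError.
def from_tensorflow_frozen_model(frozen_file, output_nodes, **kwargs):
    output_filename = kwargs.get("output_filename")  # the recognized keyword
    # Returns the path it would write to, or None when no keyword matched.
    return output_filename

# Misspelled keyword: accepted without complaint, but nothing would be written.
print(from_tensorflow_frozen_model("frozen_model.pb", ["node"], out_filename="model.uff"))
# Correct keyword: the output path is picked up.
print(from_tensorflow_frozen_model("frozen_model.pb", ["node"], output_filename="model.uff"))
```

If the keyword turns out to be correct in your uff version and the file still does not appear, then the converter's log output is the next place to look, as suggested above.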