Convert TensorFlow model to run inference on TX2

Hi, I have trained a TensorFlow model on a GPU workstation; the trained checkpoint consists of the following files:

xxx.ckpt.index
xxx.ckpt.data
xxx.ckpt.meta

How can I run inference on the TX2 with this trained model? Currently I am using jetson-inference (GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson), but this framework only accepts models in UFF format. How can I convert the original TensorFlow model to UFF?

https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/uff.html

Thanks. Is there a command line utility that can do this?

It looks like it only takes a few lines of Python code, so I doubt it. Just use the API to do the conversion and try it!
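Roughly, the flow is: restore the checkpoint, freeze the graph into a single .pb, and then hand that .pb to the uff converter. A minimal sketch of the freezing step, assuming TensorFlow 1.x (the checkpoint prefix and output node name below are placeholders; substitute your own):

import tensorflow as tf

# Placeholders -- replace with your checkpoint prefix and real output node name.
ckpt_prefix = './xxx.ckpt'
output_nodes = ['output_node_name']

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the graph definition and the trained weights from the checkpoint.
    saver = tf.train.import_meta_graph(ckpt_prefix + '.meta')
    saver.restore(sess, ckpt_prefix)

    # Fold the variables into constants so graph and weights live in one GraphDef.
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)

# Write the frozen graph to disk; this is the .pb you pass to the uff converter.
with tf.gfile.GFile('./frozen_model.pb', 'wb') as f:
    f.write(frozen_graph.SerializeToString())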

I used the following code:

import uff

frozen_file = './frozen_model.pb'
output_node_names = ['lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/binary_final_logits/W']

uff.from_tensorflow_frozen_model(frozen_file, output_node_names, out_filename='./model.uff')

It ran, but I did not see a “model.uff” file saved. Why?

Hi,

Could you share the log from when you execute the script?
If no UFF file is generated, there should be an error message.

Thanks.
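One thing that may be worth double-checking in the meantime: the uff Python API docs linked above list output_filename (not out_filename) as the keyword for writing the .uff file, and since the converter accepts arbitrary keyword arguments, an unrecognized name could be silently ignored. A small sketch under that assumption, reusing the paths and node name from the post above:

import uff

frozen_file = './frozen_model.pb'
output_node_names = ['lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/binary_final_logits/W']

# output_filename is the documented keyword for writing the .uff file to disk.
uff.from_tensorflow_frozen_model(frozen_file, output_node_names,
                                 output_filename='./model.uff')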