How to export a frozen graph (.pb) to UFF on Jetson TX2?


I can do this on x86 using the Python APIs, but how do I do it on the TX2? Is there any sample C++ code I can refer to, since the Python APIs are not available on the TX2?



There are two approaches to launching a UFF model on Jetson TX2.

1. Pure C++-based workflow.
We have a native sample that imports a UFF model and creates a TensorRT engine:
/usr/src/tensorrt/samples$ vim sampleUffMNIST/sampleUffMNIST.cpp

2. Python-based code with a TensorRT C++ wrapper.
If you have preprocessing code written in Python that is not easy to convert to C++, you can try to launch TensorRT through a SWIG wrapper (wrapping TensorRT into a Python module).
Please note that there will be an extra GPU memory copy step when launching the TensorRT engine from the Python interface.


sampleUffMNIST.cpp takes the UFF model as input and runs it. I am asking about the step before that: converting the TensorFlow model to UFF on the TX2 (using C++ APIs, since Python is not available). Is there any sample code for that?



The Python API is required for converting a TensorFlow model into UFF.
Please convert the TensorFlow model to UFF on an x86 machine first, then run the UFF model on the Jetson with the approaches mentioned in comment #2.

You can find an example of converting a TensorFlow model to UFF here:


Hi, I am running into the same problem. My issue is that the example only produces a Python UFF buffer in memory; how do I save that buffer as a .uff model file?


Sorry for the late reply.

You can save a UFF model via the uff.from_tensorflow(…) function:


Check /usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/ for more information.



//2. Python-based code with TensorRT C++ wrapper//

Is there an example of how I can do this, i.e., run an existing .uff model on the Jetson using Python?


Please check this GitHub repository for more information: