The typical workflow here is to convert the frozen graph (.pb) into UFF format. There are two approaches to run a UFF model on Jetson TX2:
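For the .pb-to-UFF step, a minimal sketch using the `uff` converter that ships with TensorRT/JetPack might look like the following. The file names and the output node name ("logits") are placeholders; substitute the values for your own model.

```python
# Sketch: convert a TensorFlow frozen graph (.pb) to UFF format.
# Requires the `uff` Python package bundled with TensorRT (e.g. on JetPack).
def convert_to_uff(pb_path="frozen_model.pb",      # placeholder frozen graph
                   output_nodes=("logits",),       # placeholder output node name(s)
                   uff_path="model.uff"):          # resulting UFF file
    import uff  # imported lazily: only available where TensorRT is installed
    # Parses the frozen GraphDef and serializes it as a .uff file.
    return uff.from_tensorflow_frozen_model(
        pb_path,
        output_nodes=list(output_nodes),
        output_filename=uff_path,
    )
```

The resulting .uff file is what the samples below load to build a TensorRT engine.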
1. C++-based workflow
Use the native sample that imports a UFF model and creates a TensorRT engine:
/usr/src/tensorrt/samples/sampleUffMNIST/sampleUffMNIST.cpp
2. Python-based code with a TensorRT C++ wrapper
If you have preprocessing code written in Python that is not easy to port to C++, you can launch TensorRT through a SWIG wrapper. Please check this GitHub repository for more information: https://github.com/AastaNV/ChatBot
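The shape of that second approach can be sketched as below: keep the NumPy-friendly preprocessing in Python and hand the prepared buffer to a SWIG-generated module that wraps the C++ TensorRT runtime. The `trt_engine` module and its `infer` function are hypothetical stand-ins; see the ChatBot repository for a real wrapper.

```python
# Sketch of the Python-preprocessing + C++ TensorRT pattern.
# `trt_engine` is a hypothetical SWIG-generated module wrapping the
# C++ TensorRT runtime (a real example lives in the ChatBot repo).
import numpy as np
# import trt_engine  # hypothetical SWIG wrapper module

def preprocess(image):
    """Preprocessing that is easy in Python but tedious in C++:
    scale uint8 HWC pixels to float32 in [0, 1] and reorder to CHW."""
    x = image.astype(np.float32) / 255.0
    return np.ascontiguousarray(x.transpose(2, 0, 1))

# output = trt_engine.infer(preprocess(frame))  # call into C++ TensorRT
```

Keeping the preprocessing in NumPy avoids rewriting it in C++, while the heavy inference still runs through the native TensorRT engine.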