RStudio, TensorFlow and TensorRT

I have tried searching these forums and other places such as Stack Exchange but have not seen a clear answer.
Has anyone tried to:

  1. Train models in RStudio (Linux x86_64 - Ubuntu flavor)
  2. Save the model using TensorFlow's save_model (producing a .pb file)
  3. Generate an optimized model for deployment with TensorRT on Jetson or other embedded device platforms?

#1 and #2 are straightforward. I am not clear on #3 - what steps are involved, and how does one run inference?

Any directions would be appreciated!

Hello,

The typical workflow here is to convert the frozen graph (.pb) to UFF format; a minimal sketch of that conversion follows. There are then two approaches to launching the UFF model on Jetson TX2, listed after the sketch.
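
The conversion can be done with the uff Python package that ships with TensorRT. Note that it expects a frozen graph, i.e., one where the variables have been folded into constants. In this sketch the file name and output node name are placeholders for your model:

    import uff

    # Convert a frozen TensorFlow GraphDef to UFF; list your model's
    # actual output node names in output_nodes.
    uff.from_tensorflow_frozen_model(
        "frozen_model.pb",
        output_nodes=["output"],
        output_filename="model.uff",
    )

The same conversion is also available on the command line through the convert-to-uff utility installed alongside the package.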

  1. C++-based workflow.
    A native sample for importing a UFF model and creating a TensorRT engine (a Python sketch of this import-and-build step follows the list):
    /usr/src/tensorrt/samples/sampleUffMNIST/sampleUffMNIST.cpp

  2. Python-based code with the TensorRT C++ wrapper.
    If you have some preprocessing code written in Python that is not easy to convert to C++, you can launch TensorRT through a SWIG wrapper. Please check this GitHub repository for details: https://github.com/AastaNV/ChatBot
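
For reference, here is a rough Python sketch of the UFF import-and-build step from approach 1, assuming the TensorRT Python bindings are available on your platform. The tensor names, shape, and file names are placeholders; the C++ sample above remains the authoritative version.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Parse the UFF file into a TensorRT network and build an engine.
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network() as network, \
            trt.UffParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 28  # scratch memory for the builder
        parser.register_input("input", (1, 28, 28))  # placeholder name/shape
        parser.register_output("output")             # placeholder node name
        parser.parse("model.uff", network)
        engine = builder.build_cuda_engine(network)

    # Serialize the engine so it can be deserialized on the target at runtime
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())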

A third option for running TensorFlow models with TensorRT is TF-TRT, where the optimization happens inside TensorFlow itself, so no UFF conversion is needed.
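
A minimal TF-TRT sketch, assuming a TensorFlow 1.x build with the contrib TensorRT module; the graph file name and output node name are placeholders:

    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt

    # Load the frozen graph (.pb) produced during training/export.
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Replace TensorRT-compatible subgraphs with optimized TRT ops.
    trt_graph = trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=["output"],            # placeholder output node name
        max_batch_size=1,
        max_workspace_size_bytes=1 << 26,
        precision_mode="FP16",         # Jetson TX2 benefits from FP16
    )

The resulting trt_graph is still an ordinary GraphDef, so you can import it with tf.import_graph_def and run inference through a normal TensorFlow session; see the user guide linked below for the supported precision modes and other options.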

NVIDIA provides a TensorFlow pip package for the Jetson platform: https://docs.nvidia.com/deeplearning/dgx/index.html#installing-frameworks-for-jetson

TF-TRT user guide: https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html
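
Regarding the inference question: with the pure-TensorRT route, once you have a serialized engine you deserialize it and run it through an execution context. A rough sketch, assuming the TensorRT Python bindings and pycuda are installed (file names, shapes, and binding order are placeholders for your model):

    import numpy as np
    import pycuda.autoinit  # initializes a CUDA context
    import pycuda.driver as cuda
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Deserialize the engine built earlier ("model.engine" is a placeholder)
    with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()

    # Allocate host and device buffers; shapes are placeholders
    h_input = np.zeros((1, 28, 28), dtype=np.float32)
    h_output = np.zeros((10,), dtype=np.float32)
    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)

    # Copy input to the GPU, run the engine, copy the result back
    cuda.memcpy_htod(d_input, h_input)
    context.execute(batch_size=1, bindings=[int(d_input), int(d_output)])
    cuda.memcpy_dtoh(h_output, d_output)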