I have searched these forums and other places such as Stack Exchange but have not found a clear answer.
Has anyone tried to
- Train models in R Studio (Linux X86_64 - Ubuntu flavor)
- Save the model using TensorFlow's save_model (producing a .pb SavedModel)
- Generate an optimized model for deploying with TensorRT on Jetson or any other embedded device platforms?
#1 and #2 are straightforward. I am not clear on #3 - what steps are involved, and how do I run inference on the optimized model?
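For context, here is roughly what I imagine step #3 looking like using TF-TRT (TensorFlow's built-in TensorRT integration) on the Python side - the directory names are placeholders, and I have not verified this on a Jetson:

```python
# Hedged sketch: convert a TensorFlow SavedModel (the .pb directory exported
# from R) into a TensorRT-optimized SavedModel with TF-TRT.
# "saved_model_dir" and "trt_saved_model" are placeholder paths.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.TrtConversionParams(
    precision_mode=trt.TrtPrecisionMode.FP16  # FP16 is a common choice on Jetson GPUs
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model_dir",
    conversion_params=params,
)
converter.convert()                # replaces supported subgraphs with TensorRT ops
converter.save("trt_saved_model")  # copy this directory to the Jetson

# Inference on the device would then look something like:
model = tf.saved_model.load("trt_saved_model")
infer = model.signatures["serving_default"]
# output = infer(tf.constant(input_batch))  # input_batch shaped per the model
```

Is this the right general approach, or is exporting to ONNX and building an engine with trtexec the preferred route on Jetson?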
Any directions would be appreciated!