Hi,
I am new here. I recently trained a custom model based on MobileNetV2 FPN-Lite using the TensorFlow Object Detection API. I can currently run the model on my laptop, and I would like to ask how to deploy it on my Jetson Xavier.
You can deploy it with TensorFlow directly on the Xavier.
The installation procedure can be found in the document below:
However, we recommend converting the model into a TensorRT engine instead.
This gives you better performance on Jetson and reduces memory usage.
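As a rough sketch of that conversion path, one common approach is to export the detection model to ONNX with the tf2onnx tool and then build a TensorRT engine on the Xavier with trtexec (shipped with JetPack). The paths and model names below are placeholders, not from the original post, and the commands must be run on the Jetson with TensorRT installed:

```shell
# 1. On the training machine (or the Jetson), convert the exported
#    SavedModel to ONNX. "saved_model" is the directory produced by
#    the Object Detection API's exporter_main_v2.py (placeholder path).
python3 -m tf2onnx.convert \
    --saved-model saved_model \
    --output model.onnx \
    --opset 13

# 2. On the Jetson Xavier, build a serialized TensorRT engine from
#    the ONNX file. --fp16 enables half precision, which the Xavier's
#    GPU supports and which usually improves speed and memory usage.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16
```

The resulting `model.engine` file can then be loaded with the TensorRT Python or C++ runtime for inference. Note that engines are built for the specific GPU and TensorRT version, so step 2 should always be run on the target Xavier itself.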