Object detection model deployment


I’m trying to train DetectNet with a custom dataset, and I’d like to deploy the model on some hardware, i.e. both the Xavier and a Raspberry Pi.


  1. How do I integrate the trained model on the Xavier using TensorFlow?
  2. Is it possible to do the same with a Raspberry Pi and TensorFlow? If yes, please give me a few references.

Hi anilkunchalaece, you can install TensorFlow natively on Jetson Xavier using the official TensorFlow installer for Jetson: https://devtalk.nvidia.com/default/topic/1042125/jetson-agx-xavier/official-tensorflow-for-jetson-agx-xavier/

Then you can run your TensorFlow model as usual (you will want to place the graph on the GPU device).
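As a minimal sketch of that step: the snippet below loads a frozen TensorFlow graph and runs inference with the graph pinned to the GPU. It assumes TensorFlow 1.x (as shipped in the Jetson wheels of that era); the file name `frozen_model.pb` and the tensor names `input:0` / `output:0` are placeholders — substitute the actual names from your exported DetectNet graph.

```python
import numpy as np

def preprocess(image):
    """Scale pixel values to [0, 1] and add a batch dimension."""
    img = image.astype(np.float32) / 255.0
    return img[np.newaxis, ...]

def load_frozen_graph(pb_path):
    """Load a frozen .pb graph, pinned to the GPU."""
    # TensorFlow is imported here so the preprocessing helper above
    # stays usable on machines without TensorFlow installed.
    import tensorflow as tf
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        # Pin ops to the GPU so inference runs on the Xavier's iGPU.
        with tf.device("/gpu:0"):
            tf.import_graph_def(graph_def, name="")
    return graph

if __name__ == "__main__":
    import tensorflow as tf
    graph = load_frozen_graph("frozen_model.pb")  # placeholder path
    # allow_soft_placement lets TF fall back to CPU for any op
    # that has no GPU kernel.
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(graph=graph, config=config) as sess:
        frame = np.zeros((300, 300, 3), dtype=np.uint8)  # dummy frame
        detections = sess.run("output:0",
                              feed_dict={"input:0": preprocess(frame)})
        print(detections.shape)
```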

For improved performance, you could also run the TensorFlow model with NVIDIA TensorRT. There are two workflows for doing that:

  1. Using TF-TRT, an interoperability layer of TensorRT that runs within TensorFlow. See this GitHub repo for examples: https://github.com/NVIDIA-AI-IOT/tf_trt_models
  2. Freezing/exporting the TensorFlow graph to TensorRT UFF format, which can be imported into TensorRT without requiring TensorFlow at runtime. See this GitHub repo for examples: https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification