Custom Model from DIGITS to Nano

Hi

I have created a custom model in DIGITS using TensorFlow and AlexNet.

I have downloaded and extracted the model.

The problem is that I have no idea what to do with it now on the Nano. I have tried searching on Google, etc., but cannot find any information.

Can someone point me in the right direction on what to do next: where do I copy the model to on the Nano, and then how do I load it with imagenet to run inference?

Thanks

Mark

Hi Mark, see this GitHub repo for examples of converting trained classification models from TensorFlow to TensorRT for inference:

https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification
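A rough sketch of that workflow is below. The file names, node names, and input dimensions are placeholders for your own model, and the `convert_plan.py` arguments are my recollection of that repo's README, so double-check them against the current repo before running:

```shell
# Sketch only: copy the DIGITS-trained model to the Nano and convert it
# with tf_to_trt_image_classification. Paths and node names are
# placeholders for your own model.

# 1. Copy the extracted frozen graph from your PC to the Nano
#    (the destination directory is up to you).
scp frozen_model.pb user@nano:/home/user/models/

# 2. On the Nano, clone and build the conversion/inference repo.
git clone https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification
cd tf_to_trt_image_classification
mkdir build && cd build && cmake .. && make && cd ..

# 3. Convert the frozen TensorFlow graph to a TensorRT plan.
#    Assumed argument order (verify against the repo's README):
#    frozen graph, output plan, input node, height, width,
#    output node, batch size, workspace size, precision.
python scripts/convert_plan.py /home/user/models/frozen_model.pb \
    alexnet.plan input_node 227 227 output_node 1 0 float
```

Once you have the `.plan` file, the repo's example applications show how to load it and run inference on the Nano.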

Thank you very much. Really basic question… but I am unsure where to copy the model to on the Jetson Nano?

Do I run it in imagenet?

Thanks

Mark

Do you mean imagenet from jetson-inference? imagenet from jetson-inference doesn’t support TensorFlow UFF models (only detectnet from jetson-inference has been coded with UFF support); jetson-inference imagenet supports Caffe models.

Where to copy the model depends on where the inference script you are running on the Jetson (TensorFlow or otherwise) expects it to be. Depending on the script, you can sometimes pass in the full path, so the model can live pretty much anywhere on the Nano.
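To illustrate the "full path" point, here is a minimal sketch (not any particular script's actual interface) of how an inference script might resolve a model argument: an absolute path is accepted from anywhere on the filesystem, while a bare file name is searched for in a couple of conventional directories:

```python
# Sketch: model-path resolution as an inference script might do it.
# The search directories ("networks", ".") are illustrative assumptions.
import os

def resolve_model_path(path, search_dirs=("networks", ".")):
    """Return an absolute path to the model file.

    Absolute paths are used as-is, so the model can live anywhere;
    relative names are looked up in a few conventional directories.
    """
    if os.path.isabs(path) and os.path.isfile(path):
        return path
    for d in search_dirs:
        candidate = os.path.join(d, path)
        if os.path.isfile(candidate):
            return os.path.abspath(candidate)
    raise FileNotFoundError("model not found: %s" % path)
```

So if the script you end up using takes a path argument, something like `--model=/home/user/models/alexnet.plan` would work regardless of where you copied the file; if it only takes a name, you have to put the file where the script looks.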