I am trying to use the models supplied with the Google Coral USB Accelerator product (the iNat insects model, to be precise; no training required for my use case). A simple image-classification console program would be ideal.
I'm not sure which models are supplied with the Google Coral USB Accelerator product; you may want to start with:
The pretrained models I have been looking at are at
The one I am particularly interested in is the MobileNet V2 (iNat insects) one.
This has been pretrained on the insect dataset from the website
This is an invaluable source of classified images of thousands of animals. The insect section is of particular interest to me, especially honey bee images, and being able to use the coral pretrained models would save me a huge amount of work.
OK, so Google may be seen as a competitor of yours, but finding pretrained models for anything beyond ImageNet, COCO, or faces is not easy. So any pretrained model from any source, running on any hardware, is good as far as I am concerned.
Would you mind checking whether this sample meets your requirements?
Does COCO include iNaturalist insects and is it in tflite format?
So anyway, I flashed the latest Nano image and got Python to install TensorFlow and all its dependencies.
Then I tried to run convert_to_uff.py, but that failed, finishing with:
Other methods seem to require that I know the names of the input and output nodes.
I ran up TensorBoard pointing at a log_dir directory, using something like
import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())  # this step was missing: actually load the graph bytes
    g_in = tf.import_graph_def(graph_def)
    train_writer = tf.summary.FileWriter(LOGDIR, sess.graph)  # pass the graph so it gets written
to populate the log dir
I looked at the resulting graph and gave up!
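For what it's worth, the input and output node names can usually be read straight from the frozen graph without TensorBoard. A rough sketch (TF 1.x GraphDef API; 'model.pb' is a placeholder path):

```python
# Dump (name, op) pairs from a frozen graph; Placeholder ops are almost
# always the inputs, and the last nodes are candidates for outputs.

def list_graph_nodes(pb_path):
    """Return (name, op) pairs for every node in a frozen GraphDef."""
    import tensorflow as tf  # imported here so guess_inputs works without TF installed
    graph_def = tf.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return [(n.name, n.op) for n in graph_def.node]

def guess_inputs(nodes):
    """Nodes with op 'Placeholder' are usually the graph inputs."""
    return [name for name, op in nodes if op == 'Placeholder']

# Usage (path is a placeholder):
#   nodes = list_graph_nodes('model.pb')
#   print('inputs:', guess_inputs(nodes), 'last node:', nodes[-1][0])
```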
There really must be an easier way of doing all this. There are all these pretrained models out there, but unless I want COCO classes, I am out of luck.
By the way, I am trying to use the following models:
I guess I will never know
The simplest way is to use TF-TRT.
But the performance is not as good as pure TensorRT due to the overhead from TensorFlow.
Here is a tutorial for your reference:
We have verified several object detection models, including faster_rcnn_resnet50_coco.
Although the sample is not for TensorFlow Lite, you should still be able to use a similar approach to create a TF-TRT model.
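As a rough illustration of what the TF-TRT conversion call looks like (TF 1.x contrib API; in TF >= 1.14 it moved to tensorflow.python.compiler.tensorrt, and the output node names here are placeholders you would read off your own frozen graph):

```python
# Sketch: wrap a frozen GraphDef with TF-TRT so TensorRT-compatible
# subgraphs run on the GPU while the rest stays in TensorFlow.

def workspace_bytes(gib):
    """GiB -> bytes, for max_workspace_size_bytes."""
    return int(gib * (1 << 30))

def convert_with_tftrt(frozen_graph_def, output_names):
    # Needs a TensorFlow build with TensorRT support (e.g. the JetPack wheel).
    import tensorflow.contrib.tensorrt as trt
    return trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_names,                       # e.g. ['detection_boxes'] (placeholder)
        max_batch_size=1,
        max_workspace_size_bytes=workspace_bytes(1),
        precision_mode='FP16')                      # the Nano is fastest in FP16
```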
Thanks and sorry for any inconvenience.
It should work with Google’s instructions, and Google has sample code here.
“edgetpu” is probably what you are after.
Let me know if you run into problems. I have a Coral (god help me, they will cancel it now for sure) but haven’t tested it with my nano yet. I have ideas about using it as a secondary inference engine. They have canned models for exactly what I want to do.
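Something along these lines should do as a console classifier. This is only a sketch against the legacy edgetpu Python API, and the model/label/image paths are placeholders for the iNat-insects files from Coral's model page:

```python
# Minimal console classifier sketch for the Coral USB Accelerator using
# the (legacy) edgetpu Python API. All file paths are placeholders.
import sys

def load_labels(path):
    """Parse a Coral labels file: one 'index label text' entry per line."""
    labels = {}
    with open(path) as f:
        for line in f:
            idx, label = line.strip().split(maxsplit=1)
            labels[int(idx)] = label
    return labels

def classify(model_path, labels_path, image_path, top_k=3):
    # Heavy imports kept local so load_labels is usable on its own.
    from PIL import Image
    from edgetpu.classification.engine import ClassificationEngine
    engine = ClassificationEngine(model_path)
    labels = load_labels(labels_path)
    img = Image.open(image_path)
    for label_id, score in engine.classify_with_image(img, top_k=top_k):
        print('%s: %.3f' % (labels.get(label_id, '?'), score))

if __name__ == '__main__' and len(sys.argv) >= 4:
    classify(*sys.argv[1:4])  # model.tflite labels.txt image.jpg
```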
Here are tflite build instructions for arm64:
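Once tflite is built, a quick sanity check is to run the .tflite model with the plain interpreter (no Edge TPU delegate, so it runs anywhere tflite is available). This is a sketch; the model path and the import fallback are assumptions:

```python
# Sketch: run a .tflite classification model with the TFLite interpreter.
import numpy as np

def run_tflite(model_path, input_array):
    try:
        from tflite_runtime.interpreter import Interpreter
    except ImportError:
        from tensorflow.lite import Interpreter  # fall back to full TF
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp['index'], input_array.astype(inp['dtype']))
    interpreter.invoke()
    return interpreter.get_tensor(out['index'])

def top_k(scores, k=3):
    """Indices of the k highest scores, best first (pure NumPy)."""
    return [int(i) for i in np.argsort(scores)[::-1][:k]]
```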