I currently have TensorFlow r0.9 installed on a Jetson TX1. After some hacks, I was able to make it work.
Version r0.9 was the one suggested to work in other posts.
Now, I want to add an implementation of Facenet on TensorFlow for face detection and recognition.
However, the open-source implementation of Facenet requires TensorFlow version 0.11.
Updating to 0.11 could cause me problems on the TX1. In general, I find that TensorFlow is not very stable on the Jetson TX1.
So I was thinking of the following alternative.
Install TensorFlow on another machine (not the Jetson) and run the retraining there. Then take the produced graph, put it on the Jetson TX1, and “tell” it to run with TensorRT.
Do you think this is possible? If so, does anybody have a suggestion on how to connect TensorRT with a TensorFlow graph?
TensorRT targets fast inference, so it only supports the inference flow, not training.
There are some quick checks we can do first:
Check whether the network types used in Facenet are supported by TensorRT.
1. If all the layers are supported, then, just as in your alternative:
   - Train Facenet on another machine with TensorFlow r0.11.
   - Convert the Facenet model into a caffemodel. There is some public source code that can achieve this; just google “convert tensorflow to caffemodel”.
   - Use TensorRT for inference, which accepts a caffemodel as input.
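Much of a TensorFlow-to-Caffe conversion comes down to mapping layer definitions and reordering the weight tensors, since TensorFlow stores convolution kernels as [H, W, in_channels, out_channels] while Caffe expects [out_channels, in_channels, H, W]. A minimal numpy sketch of that reordering step (the function name and shapes here are hypothetical, for illustration only):

```python
import numpy as np

def tf_conv_weights_to_caffe(w_tf):
    """Reorder a TensorFlow conv kernel [H, W, in_c, out_c]
    into Caffe's layout [out_c, in_c, H, W]."""
    return np.transpose(w_tf, (3, 2, 0, 1))

# Hypothetical 3x3 kernel with 4 input and 8 output channels.
w_tf = np.random.rand(3, 3, 4, 8).astype(np.float32)
w_caffe = tf_conv_weights_to_caffe(w_tf)
print(w_caffe.shape)  # (8, 4, 3, 3)
```

A full converter would also rename layers and handle things like TensorFlow's SAME padding, which has no exact Caffe equivalent in all cases; this only shows the per-tensor layout change.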
2. If some layers (inference only) are not supported by TensorRT but are supported by TensorFlow r0.9:
   - Train Facenet on another machine with TensorFlow r0.11.
   - Use TensorFlow r0.9 for inference.
3. If some special layers used at inference time are only supported by TensorFlow r0.11:
   - Paste the network type here, and let's see if there is another alternative.
The layer types supported by TensorRT (GIE) can be found here:
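Before converting anything, it may help to sanity-check the model's layer types against TensorRT's supported list. A rough sketch of that check (the supported set below is only an illustrative subset for early GIE releases, not the authoritative list — consult the documentation linked above):

```python
# Illustrative subset of layer types early TensorRT (GIE) releases handle;
# check the official support matrix for the authoritative list.
GIE_SUPPORTED = {
    "Convolution", "Pooling", "InnerProduct", "ReLU",
    "Softmax", "LRN", "Concat", "Scale", "ElementWise",
}

def unsupported_layers(layer_types):
    """Return the layer types in a model that are not in the supported set."""
    return sorted(set(layer_types) - GIE_SUPPORTED)

# Hypothetical layer list pulled from a Facenet-like prototxt.
model_layers = ["Convolution", "PReLU", "Pooling", "InnerProduct", "L2Norm"]
print(unsupported_layers(model_layers))  # ['L2Norm', 'PReLU']
```

Any layer that shows up as unsupported points you to case 2 or 3 above.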
Sorry for the late reply.
I searched for the converting tool; you are right, it's hard to find one for TensorFlow to Caffe. Sorry for the wrong information.
I think conversion should still be possible, but it needs more knowledge of both frameworks.
As you said, another possible way is to implement the network directly in Caffe and then use it for TensorRT inference.
I recommend training your target model on a desktop with DIGITS: https://developer.nvidia.com/digits
Then launch TensorRT with your Caffe network for fast inference on the TX1.