I would like to do fast face recognition (with a limited number of faces). Can I use FaceNet with jetson-inference? How can I train the model? Could someone direct me to resources that would help me get started?
I have face detection working with dlib, but it is too slow (less than 6 fps on a 320p image).
In the jetson-inference docs there is some info on transfer learning for certain models, but not for FaceNet, it seems.
Ideally I would like to do some emotion detection as well, can facenet be used for that?
Hi @yandssiegel, the object detection models that jetson-inference supports are mostly SSD-based, for example SSD-Mobilenet, SSD-Inception, etc. There are some older models in there too, based on architectures that aren't used much anymore.
What you would want to check is whether FaceNet can be exported to ONNX, and whether TensorRT can import that ONNX model. You can try running it with the trtexec tool (found under /usr/src/tensorrt/bin), which lets you quickly load and benchmark a model with TensorRT.
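For example, a minimal trtexec invocation might look like this (the filename `facenet.onnx` is a placeholder for whatever model you actually export; the flags shown are standard trtexec options, but exact availability depends on your TensorRT version):

```shell
# Parse an ONNX model, build a TensorRT engine with FP16 enabled,
# and report the benchmark timings. Paths are from the JetPack install.
/usr/src/tensorrt/bin/trtexec \
    --onnx=facenet.onnx \
    --fp16 \
    --saveEngine=facenet.engine
```

If the ONNX parse succeeds, trtexec prints per-layer info and average inference latency, which gives you a quick feasibility check before writing any integration code.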
Hello, thanks for your answer.
I did not know about ONNX; I will check it out.
But why would FaceNet need to be converted if it is available in the jetson-inference model downloader?
I'm not sure what that model does, face detection or recognition. Is it based on face encodings?
I think we may be referring to two different “FaceNet” models. The one in jetson-inference model downloader is an older model that does face detection (of any faces), and not recognition.
Is this the FaceNet you are referring to? https://arxiv.org/abs/1503.03832
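To illustrate the difference: the FaceNet from that paper does recognition by mapping each face to a fixed-length embedding and comparing distances, rather than classifying directly. A minimal sketch of that matching step (the 128-d vectors here are random stand-ins for real network outputs, and the threshold is arbitrary):

```python
import numpy as np

def l2_distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return float(np.linalg.norm(a - b))

def identify(embedding, gallery, threshold=10.0):
    """Return the closest known identity, or None if nothing is close enough."""
    name, dist = min(
        ((n, l2_distance(embedding, e)) for n, e in gallery.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None

# Gallery of "enrolled" identities (random placeholders for real embeddings).
rng = np.random.default_rng(0)
known = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}

# A probe near alice's embedding should match "alice".
probe = known["alice"] + rng.normal(scale=0.05, size=128)
print(identify(probe, known))
```

With a limited number of faces, this nearest-embedding lookup is cheap; the expensive part is the embedding network itself, which is what TensorRT would accelerate.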
No, I was just referring to the model available for download in jetson-inference. I will have a look at this one. There are some implementations available on GitHub, like this one: https://github.com/nwesem/mtcnn_facenet_cpp_tensorRT
Regarding the dlib performance I'm seeing, is there a simple way to check on the Jetson that my installs are correct, or whether I need to update something / revert to a previous version of JetPack?
You might want to check these threads regarding dlib: