Face Recognition on TX2 or Jetson Nano using TensorRT

I have built GitHub - AastaNV/Face-Recognition (Demonstrate Plugin API for TensorRT 2.1), which appears to be a little dated these days.
In fact, it will not compile as-is. However, with a little surgery it can be built on a TX2 running the now-older JetPack 3.3 installation, which is based on Ubuntu 16.04 and itself due to be EOL soon.

So I would like to know if anyone has been successful in building it on the latest JetPack 4.2 release for the Xavier, TX2, or even the Nano. The latest JetPack looks much nicer to upgrade to, given the new releases of all the components it contains; moving to an Ubuntu 18.04 base is also more in keeping with other development platforms.

The above software does run well on a TX2 using the pre-built models and the trained data sets.
What I would ideally like to do is add to, or only use, the classifier so that it recognises faces captured on the TX2, similar to how a smartphone does face unlock.

I realise that the original training for the detector and classifier was done using very large data sets. I would like to use the current detector model, and even the classifier model, but with only a limited set of subjects for the actual face recognition.

Is there tooling that can be run on the TX2 that would allow the injection of a number of subjects into the classifier to do the recognition? I don’t really want to run this on a remote host unless I have to. I would also like to know if it would be possible to run such a setup on a Jetson Nano.

Regards,

R

Hi,

Maybe a single classifier is enough to implement a face unlock feature.
Just set the class labels to be:

  • face a
  • face b ...
  • not a face

To run a single classifier on Jetson, it’s recommended to start from jetson-inference:
https://github.com/dusty-nv/jetson-inference
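
Once such a classifier is trained, loading and running it with the jetson-inference Python bindings could look roughly like the sketch below. The model, prototxt, label file, and blob names are placeholder assumptions about how the model was exported, not values from the repository:

  import jetson.inference
  import jetson.utils

  # Load a hypothetical custom face classifier; all paths and blob
  # names below are illustrative assumptions, not fixed values.
  net = jetson.inference.imageNet(argv=[
      "--model=models/face_unlock/snapshot.caffemodel",
      "--prototxt=models/face_unlock/deploy.prototxt",
      "--labels=models/face_unlock/labels.txt",  # face a / face b / not a face
      "--input_blob=data",
      "--output_blob=softmax"])

  # Classify a single captured frame loaded from disk.
  img, width, height = jetson.utils.loadImageRGBA("capture.jpg")
  class_idx, confidence = net.Classify(img, width, height)
  print(net.GetClassDesc(class_idx), confidence)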

Thanks.

I have used GitHub - dusty-nv/jetson-inference (Hello AI World: a guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson).

Incidentally, it seems broken at the moment: it will not compile because it is looking for a file “mat33.h”. It seems to have been busted by the recent addition of the homographyNet modules.

detectNet seems fine; I am using it to detect faces.

Likewise, I am using a number of trained classifiers.

However, everything within jetson-inference seems to indicate that you should use DIGITS as the tool
to train models.

Will DIGITS actually run on platforms such as the TX2 or Nano?

The idea is to capture a face as presented to the camera on the TX2 or Nano and then use those captured images to train the classifier. It is anticipated that only a small, limited number of images would be needed for training. Likewise, the number of subjects for classification might be small. Then let this do the recognition. The total number of faces to recognise might be small, fewer than 100. Based on this recognition, attributes and functionality would be set according to the identified face.

So are you suggesting to use DIGITS on the TX2 as the tool to train the classifier?

This is not that different from what a number of cameras or smartphones currently do:
capture a subject's face, store and label the captured face, then recognise that face.

Regards,

R

Hi GrunPferd, do you have the file jetson-inference/utils/mat33.h in your source tree? If not, you should re-clone the repo. Make sure to run the “git submodule update --init” step.

DIGITS is only supported on PC (x86/x64 architecture), so it is not intended to be run onboard the Jetson.

The detectnet demo from jetson-inference is intended for face detection, so it will show the location of any face; it does not do facial recognition. You could use DIGITS on a PC to accomplish the recognition part. If you want to train onboard the Jetson as new faces are added to your database, I would recommend looking into PyTorch, which you can theoretically use to train onboard the Jetson (training won’t be as fast as on a PC).

Typical existing methods for facial recognition use traditional CV methods for the recognition part, as opposed to a full neural-network classifier, because of the online re-training required. You could use a pretrained DNN for the face detection, and then a per-pixel comparison algorithm (like Sum of Squared Differences) to compare against the face database. Other approaches use the encoder portion of a DNN to extract features to compare against (as opposed to per-pixel). Or, if you have the capability to do online retraining (either in the cloud or onboard the device), then you can use DNNs for all of it.
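
For illustration, the per-pixel comparison could look something like this sketch; the function names, the assumption that crops are already aligned to a common size, and the threshold are all illustrative rather than from any particular library:

  import numpy as np

  def ssd(a, b):
      """Sum of Squared Differences between two equally-sized face crops."""
      d = a.astype(np.float32) - b.astype(np.float32)
      return float(np.sum(d * d))

  def identify(face, database, threshold=1.0e6):
      """Return the best-matching label, or None if no stored face is
      close enough. `database` maps label -> stored face crop."""
      best_label, best_score = None, float("inf")
      for label, stored in database.items():
          score = ssd(face, stored)
          if score < best_score:
              best_label, best_score = label, score
      return best_label if best_score < threshold else None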

Thanks Dustin for your reply,

In the past I have used purely CV techniques to do the recognition, but the speed achieved with the DNN is impressive.

So I will investigate PyTorch to see what it can offer in terms of running directly on the TX2. The idea is to continue the training on the TX2 using the existing pretrained models.
Are there any examples of using PyTorch that would help with detectNet or imageNet?

Regards,

R

Here is an ImageNet training example from PyTorch that I have run onboard the Jetson before: https://github.com/pytorch/examples/tree/master/imagenet

I tried it with the AlexNet and ResNet-18 networks. It helps to train faster if you use the --pretrained option.
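
As a rough sketch of the same transfer-learning idea (assuming torchvision is available on the Jetson; the class count and hyperparameters are illustrative), starting from pretrained weights and training only a new classifier head looks like this:

  import torch.nn as nn
  import torch.optim as optim
  from torchvision import models

  num_faces = 10  # illustrative: number of enrolled subjects

  # Start from ImageNet-pretrained weights (the --pretrained idea above).
  model = models.resnet18(pretrained=True)

  # Freeze the backbone so only the new head is trained.
  for p in model.parameters():
      p.requires_grad = False

  # Replace the final layer with one sized for the enrolled faces.
  model.fc = nn.Linear(model.fc.in_features, num_faces)

  optimizer = optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
  criterion = nn.CrossEntropyLoss()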

Thanks Dustin,

I will check out the PyTorch examples and see if they help with the transfer learning from pretrained models that I am looking for. I am sure this is not the first time someone has wanted to leverage the pretrained models with new additional data.

R

Dustin, AastaLLL,

I have struggled to find out how to run PyTorch and torchvision on the actual TX2 itself.

I built the PyTorch and torchvision software, but the examples that you pointed to still seem intended to run on a remote host rather than the actual TX2.

The examples in examples/imagenet at main · pytorch/examples · GitHub refer to node numbers and GPU numbers in a distributed fashion.

What I am really looking for is to advance the training on the TX2 itself, using a pretrained model but extending its learning locally.

The idea is that you capture local images on the TX2 and then use the subject faces to further train the TX2 locally.
From a security perspective, I do not want images of subjects being sent to a central remote base.
The captured images and the extended training should stay on the TX2.

Do you have a direct example of running PyTorch on an actual TX2?
That is, loading a pretrained model and further training a classifier locally, thus extending its capabilities.
I think a limited number of images, <= 100, would be used for the further training.

Also, do you have any idea whether this would also run on a Jetson Nano with its lesser processing power?

Ideally, I would like to use the model used in:

but any one of the classifiers using:

would be good.

Regards,

R

Hi,

The TX2 is not recommended for training.

An alternative is to train a model for face feature generation.
This will simplify your problem to the following:

[Training]

  • Train a model to represent face features

[Inference]

  • Add faces, by their face features, into the database
  • Compare the similarity between a new face and the database entries (see the sketch below)
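
A minimal sketch of that inference flow, assuming some embedding model (not shown here) maps each face crop to a fixed-length feature vector; the names and threshold are illustrative:

  import numpy as np

  database = {}  # label -> unit-normalised feature vector

  def enroll(label, feature):
      """Store a subject's feature vector, normalised for cosine similarity."""
      database[label] = feature / np.linalg.norm(feature)

  def recognize(feature, threshold=0.8):
      """Return the best-matching enrolled label, or None below the threshold."""
      q = feature / np.linalg.norm(feature)
      best_label, best_sim = None, -1.0
      for label, stored in database.items():
          sim = float(np.dot(q, stored))  # cosine similarity
          if sim > best_sim:
              best_label, best_sim = label, sim
      return best_label if best_sim >= threshold else None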

Thanks.

GrunPferd

I was looking to do something similar. I want to exploit the pre-trained models and use new data to improve them by on-board training on the Jetson Xavier. Did you find any way to run PyTorch on the TX2? Any help would be appreciated.

Hi,

I am going through a similar use case.
Is there any update on this thread?

Thanks,
Sara

Hi Sara,

Please open a new topic for your own issue.

Thanks