Questions about Face-Recognition

Hi S4WRXTTCS,

I would say both work a lot better.

For the celebrity one, although it does not recognize the celebrities all the time, at some angles or distances the predicted results do consistently show a celebrity name.

Actually, I only tested a few celebrities, such as Steve Jobs, Dane Cook, Lady Gaga…

For the data I trained on, at some angles or distances the predicted results do consistently show the same name, but it sometimes shows the same name for different people within a frame.

I’m trying to add more data to our dataset and hope to steadily improve the accuracy.

Hi AastaLLL,

Is there a way to get the confidence of the classification in Face-Recognition?
The conf value of DrawBoxes does not look reasonable.

Thank you,

Hi,

Conf in DrawBoxes is the detection confidence.

For classification confidence, please check the res blob:
https://github.com/AastaNV/Face-Recognition/blob/master/face-recognition.cpp

We use the res information to output the class label in the recognition plugin implementation:
https://github.com/AastaNV/Face-Recognition/blob/master/pluginImplement.cpp
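The exact layout of the res blob depends on the plugin, but assuming it holds one score per class, the classification confidence can be derived roughly as below. This is a minimal illustrative sketch (the function name and the softmax step are my assumptions, not the sample's actual code, which is C++):

```python
import math

def classification_confidence(res):
    """Given a per-class score vector from the recognition plugin's
    output blob, return (best_class, confidence).  A softmax is applied
    in case the scores are logits; if the blob already holds
    probabilities, the argmax is unchanged."""
    m = max(res)
    exps = [math.exp(v - m) for v in res]   # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

# Hypothetical scores for a 3-class model:
label, conf = classification_confidence([0.1, 2.3, 0.4])
```

The returned confidence is a value in (0, 1) that can be thresholded separately from the detection conf reported by DrawBoxes.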

Thanks.

Hi @HuiW

Were you able to solve the segmentation fault error? I believe that I have merged my models correctly (according to the instructions provided by AastaLLL), but after I build and run I get a segmentation fault. Thank you for any help you can provide.

Hi lcain,

The output size of DataROI layer needs to match the image size of classification dataset.

As AastaLLL suggested in #23 of the link below.
https://devtalk.nvidia.com/default/topic/1007313/jetson-tx2/how-to-build-the-objection-detection-framework-ssd-with-tensorrt-on-tx2-/2

Also, it’s better to generate a random caffemodel using the classification dataset.

Hope it helps.

Dear AastaLLL,
First, thanks for taking the time to share this recipe… some parts are giving me problems, and I hope you can help me clarify what to do with them.

Just to be sure, this recipe is for creating my own dog recognition application, right? It’s not to create a face recognition application.

I couldn’t find any labels.txt inside the DetectNet-COCO-Dog folder. Is this a file that I have to create with my own dog classes? But if I’m working with DIGITS, the labels are created from the image folders, right?

When I try to see the network in DIGITS by clicking Visualize, I get an error saying that a layer does not have parameters of type dim, so I’m not sure whether the changes in lines 20 and 21 should be there or in the input block.

Should this new Image Classification Dataset be a collection of my own dog images? Do they all need to be 224x224? Should they be grouped in different folders depending on the breed of the dog?

I only get to this point, since I get errors on the new model and the new dataset, so I hope you can give me a more detailed explanation of what to do. My original interest is to use the Jetson TX2 for deploying my own Face Recognition application (recognizing people in my office, who are not at all Hollywood stars :))

Thanks and regards,

Boris

Hi,

Just to be sure, this recipe is for creating my own dog recognition application, right? It’s not to create a face recognition application.
Our default sample is for facial recognition.
This post is to help users switch to their own model, so we selected dog detection/classification as an example.

I couldn’t find any labels.txt inside the DetectNet-COCO-Dog folder. Is this a file that I have to create with my own dog classes? But if I’m working with DIGITS, the labels are created from the image folders, right?
The labels come from the classification model.
If you use DIGITS, the labels are automatically generated into the downloaded model archive.

When I try to see the network in DIGITS by clicking Visualize, I get an error saying that a layer does not have parameters of type dim, so I’m not sure whether the changes in lines 20 and 21 should be there or in the input block.
Not sure of the cause, but you can check this comment for information:
https://devtalk.nvidia.com/default/topic/1023699/jetson-tx2/questions-about-face-recongnition/post/5208940/#5208940

Should this new Image Classification Dataset be a collection of my own dog images? Do they all need to be 224x224? Should they be grouped in different folders depending on the breed of the dog?
You don’t need to resize all images to 224x224. But please remember to update the source for the custom size:
https://github.com/AastaNV/Face-Recognition/blob/master/pluginImplement.cpp#L253
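The reason the source images need not be 224x224 is that the detected ROI crop is resized to the classification network's fixed input size before inference. A rough sketch of that resize step (nearest-neighbour, purely illustrative; the actual plugin does this on the GPU in C++):

```python
def resize_roi(crop, out_h, out_w):
    """Nearest-neighbour resize of a 2D crop (list of rows) to
    out_h x out_w -- the classification input size, e.g. 224x224."""
    in_h, in_w = len(crop), len(crop[0])
    return [
        [crop[int(y * in_h / out_h)][int(x * in_w / out_w)] for x in range(out_w)]
        for y in range(out_h)
    ]

# An ROI crop of arbitrary size is mapped to the fixed network input:
roi = [[r * 10 + c for c in range(5)] for r in range(3)]  # a 3x5 crop
net_input = resize_roi(roi, 4, 4)                         # fixed 4x4 "input"
```

In the real pipeline the output dimensions come from the classification prototxt, which is why they must be kept consistent with the dataset the classifier was trained on.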

Thanks.

Thanks for your help AastaLLL,
I’ve created a small “dogs” dataset and fixed what I was doing wrong inside step4.prototxt, so I’m now able to visualize the model when I click Visualize. However, the moment I click Create I get the following error:

ERROR: Check failed: error == cudaSuccess (8 vs. 0) invalid device function

Do you know how I can solve this? As I mentioned, I’m using DIGITS (version 6), which was installed from the nvidia-docker image, so I assumed all dependencies were already working together without problems.

Hi,

This is an invalid device function error and should come from NVCaffe.
Did you build NVCaffe with the correct GPU architecture?

That is, have you added the corresponding GPU architecture here:
https://github.com/NVIDIA/caffe/blob/caffe-0.16/Makefile.config.example

GPU architecture can be found on this page:
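For example, for a GPU with compute capability 6.1 the CUDA_ARCH setting in Makefile.config would include lines like the following (illustrative values; pick the ones matching your card):

```makefile
# Makefile.config -- include the compute capability of your GPU,
# e.g. 6.1 for a GTX 1080 (illustrative; adjust to your hardware):
CUDA_ARCH := -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_61,code=compute_61
```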

Thanks.

Hi again, as I mentioned before, I’m using the nvidia/digits docker image, so I’m not building anything… I just downloaded the docker image and created a container. Is there a way to specify the GPU architecture when creating a container with nvidia-docker?

Nvidia driver: 384.90

:~$ sudo nvidia-docker version
NVIDIA Docker: 2.0.0
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:18 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:56 2017
 OS/Arch:      linux/amd64
 Experimental: false

And the command for the DIGITS image was:

docker pull nvidia/digits

Hi,

Sorry for missing it.

Just want to confirm: you are running nvidia-docker on an x86-based machine, right?
If yes, could you file your issue on GitHub?

Thanks.

Hi again,
yes, it’s an x86-based machine. I’ve opened a report as you suggested and I hope I can get some help there.
Do you know how to do the same process you’ve described for Caffe if I want to do it with TensorFlow? Since TensorFlow models work well in DIGITS, I’m wondering if the whole Face-Recognition application could be ported to TensorFlow instead of Caffe. What would I need to do?

Thanks again for your help,

Boris

Hi,

We don’t have such a sample directly.
I guess that if you have converted your TensorFlow model into UFF format, the only difference is to load the UFF model rather than the Caffe one.

We have some examples to demonstrate these features:

  1. Export UFF model from TensorFlow (Python): only available on an x86-based machine
    /usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/

  2. Launch TensorRT engine with UFF model (C++)
    /usr/src/tensorrt/samples/samplePlugin

Thanks.

Hi,
How can I retrain with my face and my office mates’ faces?
Do you have any tutorial for that?
I have looked through documentation in many places, e.g. jetson-inference and Face-Recognition,
but I still can’t figure out how to retrain the Face-Recognition model.

Hi,

We train it with a classical GoogLeNet.
Just follow the imageNet settings and create a classification job with DIGITS.

1. Create a folder for each colleague and save some images.

2. Follow the classification training step in Jetson_inference:
https://github.com/dusty-nv/jetson-inference#classifying-images-with-imagenet
Each colleague is represented as a class here.
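The expected dataset layout (one folder per class, since DIGITS derives the labels from the folder names) can be sketched like this, with hypothetical names:

```python
import os
import tempfile

# Build a tiny folder-per-class dataset layout, as DIGITS expects
# for a classification job (names are hypothetical):
root = tempfile.mkdtemp()
for person in ["alice", "bob"]:      # one folder per colleague = one class
    os.makedirs(os.path.join(root, person))
    # ...save that colleague's face images inside, e.g. alice/img_001.jpg

classes = sorted(os.listdir(root))   # DIGITS derives the labels from these
```

Pointing DIGITS at `root` as the training-image folder then produces one class per colleague automatically.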

Thanks.

Hi, AastaLLL.
I know how to train model with DIGITs, but I don’t know how to merge network.
Where can I find the tutorial?

Hi,

Check this comment for information:
https://devtalk.nvidia.com/default/topic/1023699/jetson-tx2/questions-about-face-recongnition/post/5209485/#5209485

Thanks

Hi, AastaLLL

For the following step:

  • New Image Classification Dataset with image size 224x224

  • New Image Classification Model with

epochs = 1 (will finish faster)
Custom Network → paste steps4.prototxt → Create
Download model → extract snapshot_iter_[N].caffemodel

I trained the dataset with size 224x224. However, when I tested the resulting model in DIGITS, the results were not accurate at all. I trained it using my face and my friend’s face. Is the result supposed to be like this after I finish the classification model? Attached is the screenshot I took from the classification model.


When I use the above model and merge it together, the moment it detects my face it gives a segmentation fault as shown:

0 bounding boxes detected
ROI: 0 0 0 0
0 bounding boxes detected
pass 0 to trt
211.344 180.281 315.5 350.125 
ROI: 211 180 105 170
ID=0, label=633
1 bounding boxes detected
bounding box 0   (360.693329, 230.759995)  (538.453369, 448.160004)  w=177.760040  h=217.400009
Segmentation fault (core dumped)

My classification data is set to 224x224, and for detection I used the jetson-inference default facenet-120 model. I also tried to change the network size in pluginImplement.cpp, but it still gives the same error.
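For what it’s worth, one plausible cause of a crash right after a line like `ID=0, label=633` is that the merged network still predicts over its original class count (e.g. 1000 ImageNet classes) while the label list only has a couple of entries, so the class ID indexes past the end of the list. A small illustrative guard (Python sketch; the sample itself is C++, and the function name here is hypothetical):

```python
def lookup_label(labels, class_id):
    """Guard a label lookup: an out-of-range class ID (e.g. 633 from a
    1000-class network paired with a 2-entry label list) would otherwise
    read past the end of the list and can crash a C++ implementation."""
    if 0 <= class_id < len(labels):
        return labels[class_id]
    return "unknown ({})".format(class_id)

labels = ["me", "friend"]          # hypothetical 2-class label list
name = lookup_label(labels, 633)   # → "unknown (633)" instead of crashing
```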

Please advise, thanks!

Screenshot from 2018-01-30 14-03-23.png

Hi,

  1. This is not a training step.
    It’s a simple workaround to generate random weights for a given prototxt.

  2. There is a lot of information in topic_1007313. Please check it for details:
    How to build the objection detection framework SSD with tensorRT on tx2? - Jetson TX2 - NVIDIA Developer Forums

Thanks.

Hi! I have a problem making a classification model for my faces dataset. I made a model in DIGITS with GoogLeNet, but testing showed that the model works incorrectly. Do I perhaps need aligned images, or should I use another network? Thanks!