For the celebrity one, although it does not recognize the celebrities all the time, at least at some angles or distances the prediction results do keep showing a celebrity name.
Actually, I only tested a few celebrities, such as Steve Jobs, Dane Cook, Lady Gaga…
For the data I trained myself, at some angles or distances the prediction results do keep showing the same name, but that same name also shows up on different people within a frame.
I’m trying to add more data to our dataset and hope that will steadily improve the accuracy.
Were you able to solve the segmentation fault error? I believe that I have merged my models correctly (according to the instructions provided by AastaLLL), but after I build and run I get a segmentation fault. Thank you for any help you can provide.
Dear AastaLLL,
first, thanks for taking the time to share this recipe… some parts are giving me problems and I hope you can help me clarify what to do with them.
Just to be sure, this recipe is for creating my own dog recognition application, right? It’s not to create a face recognition application.
I couldn’t find any labels.txt inside the DetectNet-COCO-Dog folder. Is this a file that I have to create with my own dog classes? But if I’m working with DIGITS, the labels are created from the image folders, right?
When I try to see the network in DIGITS by clicking on Visualize, I get an error saying that a layer does not have parameters of type dim, so I’m not sure whether the changes in lines 20 and 21 should go there or in the input block.
This new Image Classification Dataset should be a collection of dog images of my own? Do they all need to be 224x224? Should they be grouped in different folders depending on the breed of the dog?
I only get to this point, since I get errors on the new model and the new dataset, so I hope you can give me an extra detailed explanation of what to do. My original interest is to use the Jetson TX2 to deploy my own face recognition application (recognizing people in my office, who are not at all Hollywood stars :))
Just to be sure, this recipe is for creating my own dog recognition application, right? It’s not to create a face recognition application.
Our default sample is for facial recognition.
This post is to help users switch to their own model, so we selected dog detection/classification as an example.
I couldn’t find any labels.txt inside the DetectNet-COCO-Dog folder. Is this a file that I have to create with my own dog classes? But if I’m working with DIGITS, the labels are created from the image folders, right?
The labels come from the classification model.
If you use DIGITS, the labels file is generated automatically and included in the zipped model file you download.
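For reference, here is a minimal sketch of how that labels file is typically consumed once you extract it from the downloaded archive; the file name labels.txt and the one-class-per-line format are assumptions based on the usual DIGITS classification export, not something taken from the sample code:

```cpp
// Minimal sketch (not from the sample): read a DIGITS-style labels.txt,
// one class name per line, in the same order as the classification outputs.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> loadLabels(const std::string& path)
{
    std::vector<std::string> labels;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line))
        if (!line.empty())
            labels.push_back(line);   // index i corresponds to class id i
    return labels;
}

int main()
{
    // "labels.txt" is assumed to be the file extracted from the DIGITS model archive.
    std::vector<std::string> labels = loadLabels("labels.txt");
    std::cout << labels.size() << " classes loaded" << std::endl;
    return 0;
}
```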
This new Image Classification Dataset should be a collection of dog images of my own? Do they all need to be 224x224? Should they be grouped in different folders depending on the breed of the dog?
You don’t need to resize all images to 224x224, but please remember to update the source to your custom size: https://github.com/AastaNV/Face-Recognition/blob/master/pluginImplement.cpp#L253
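To make the “update the source to your custom size” point concrete, here is a hypothetical sketch; the names CLASSIFY_W/CLASSIFY_H and the nearest-neighbour resize are my own illustration, not the actual code at pluginImplement.cpp#L253. The only thing it shows is that the size handed to the classification network must match the size your model was trained with.

```cpp
// Hypothetical sketch, not the real code at pluginImplement.cpp#L253:
// the width/height the plugin resizes a face crop to must match the
// input size the classification model was trained with.
#include <cstdio>
#include <vector>

static const int CLASSIFY_W = 224;  // change if your dataset uses another width
static const int CLASSIFY_H = 224;  // change if your dataset uses another height

// Nearest-neighbour resize of a single-channel srcW x srcH crop to the network input size.
std::vector<float> resizeToNetInput(const std::vector<float>& src, int srcW, int srcH)
{
    std::vector<float> dst(CLASSIFY_W * CLASSIFY_H);
    for (int y = 0; y < CLASSIFY_H; ++y)
        for (int x = 0; x < CLASSIFY_W; ++x)
        {
            int sx = x * srcW / CLASSIFY_W;
            int sy = y * srcH / CLASSIFY_H;
            dst[y * CLASSIFY_W + x] = src[sy * srcW + sx];
        }
    return dst;
}

int main()
{
    std::vector<float> faceCrop(100 * 80, 0.5f);                   // dummy 100x80 crop
    std::vector<float> input = resizeToNetInput(faceCrop, 100, 80);
    std::printf("classification input: %dx%d (%zu values)\n",
                CLASSIFY_W, CLASSIFY_H, input.size());
    return 0;
}
```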
Thanks for your help AastaLLL,
I’ve created a small “dogs” dataset and fixed what I was doing wrong inside step4.prototxt, so I’m now able to visualize the model when I click on Visualize. However, the moment I click on Create I get the following error:
ERROR: Check failed: error == cudaSuccess (8 vs. 0) invalid device function
Do you know how I can solve this? As I mentioned, I’m using DIGITS (version 6), which was installed from the nvidia-docker image, so I assumed all dependencies were already working together without problems.
Hi again, as I mentioned before I’m using the nvidia/digits Docker image, so I’m not building anything… I just downloaded the image and created a container. Is there a way to specify the GPU architecture when creating a container with nvidia-docker?
Nvidia driver: 384.90
:~$ sudo nvidia-docker version
NVIDIA Docker: 2.0.0
Client:
 Version:       17.09.0-ce
 API version:   1.32
 Go version:    go1.8.3
 Git commit:    afdb6d4
 Built:         Tue Sep 26 22:42:18 2017
 OS/Arch:       linux/amd64
Server:
 Version:       17.09.0-ce
 API version:   1.32 (minimum version 1.12)
 Go version:    go1.8.3
 Git commit:    afdb6d4
 Built:         Tue Sep 26 22:40:56 2017
 OS/Arch:       linux/amd64
 Experimental:  false
Hi again,
yes, it’s an x86-based machine. I’ve opened a report as you suggested and I hope I can get some help there.
Do you know how to do the same process you’ve described for Caffe if I want to do it with TensorFlow? Since TensorFlow models work well in DIGITS, I’m wondering if the whole Face-Recognition application could be ported to TensorFlow instead of Caffe. What would I need to do?
We don’t have such a sample directly.
I guess that if you have converted your TensorFlow model into UFF format, the only difference is to load the UFF model rather than the Caffe one (a rough sketch follows the examples below).
We have some examples to demonstrate these features:
Export UFF model from TensorFlow (Python): only available on an x86-based machine
/usr/local/lib/python2.7/dist-packages/tensorrt/examples/tf_to_trt/
Launch TensorRT engine with UFF model (C++)
/usr/src/tensorrt/samples/samplePlugin
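Here is a rough sketch of what “load UFF rather than Caffe” could look like with the TensorRT C++ API of that generation (TensorRT 3.x); exact parser signatures differ between TensorRT versions, and the tensor names ("data", "softmax") and file names below are assumptions, not values taken from the Face-Recognition sample:

```cpp
// Rough sketch, assuming the TensorRT 3.x C++ API; signatures differ in later
// versions, and the tensor/file names are placeholders, not from the sample.
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include "NvUffParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    bool useUff = true;  // flip to false for the default Caffe path

    if (useUff)
    {
        // UFF path: model exported from TensorFlow with the tf_to_trt tools.
        nvuffparser::IUffParser* parser = nvuffparser::createUffParser();
        parser->registerInput("data", nvinfer1::DimsCHW(3, 224, 224));
        parser->registerOutput("softmax");
        parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT);
    }
    else
    {
        // Caffe path: what the sample does by default.
        nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
        const nvcaffeparser1::IBlobNameToTensor* blobs =
            parser->parse("deploy.prototxt", "model.caffemodel",
                          *network, nvinfer1::DataType::kFLOAT);
        network->markOutput(*blobs->find("softmax"));
    }

    // Building the engine, serializing it, and running inference stay the same
    // for both import paths, so only the parsing step above changes.
    return 0;
}
```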
Hi,
How can I retrain with my face and my office mates’ faces?
Do you have any tutorial for doing that?
I have looked through documents in many places, e.g. jetson-inference and Face-Recognition.
I still can’t figure out how to retrain the Face-Recognition model.
1. New Image Classification Dataset with image size 224x224 (a small folder-layout sanity check is sketched after these steps)
2. New Image Classification Model with:
   epochs = 1 (will finish faster)
   Custom Network → paste steps4.prototxt → Create
3. Download model → extract snapshot_iter_[N].caffemodel
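As a side note, a quick way to sanity-check the one-folder-per-person layout and image counts before creating the dataset in DIGITS is a small helper like the sketch below (C++17 <filesystem>); this is not part of the recipe, and faces_dataset is just a placeholder path:

```cpp
// Sanity-check sketch (not part of the recipe): count images per class folder,
// assuming the DIGITS classification layout root/<person>/<image files>.
#include <filesystem>
#include <iostream>
#include <map>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char** argv)
{
    // "faces_dataset" is only a placeholder; pass your real dataset root instead.
    const fs::path root = (argc > 1) ? fs::path(argv[1]) : fs::path("faces_dataset");
    std::map<std::string, int> counts;

    for (const auto& classDir : fs::directory_iterator(root))
    {
        if (!classDir.is_directory())
            continue;
        int n = 0;
        for (const auto& img : fs::directory_iterator(classDir.path()))
            if (img.is_regular_file())
                ++n;
        counts[classDir.path().filename().string()] = n;
    }

    for (const auto& entry : counts)
        std::cout << entry.first << ": " << entry.second << " images" << std::endl;
    return 0;
}
```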
I trained the dataset at 224x224; however, for the model portion, when I trained the model and tested it on DIGITS, the results were not accurate at all. I trained it using my face and my friend’s face. Is the result supposed to be like this after I finish the classification model? Attached is the screenshot that I took from the classification model.
When I use the above model and merge everything together, the moment it detects my face it gives a segmentation error as shown:
My classification data is set to 224x224, and for the detection model I used the jetson-inference default facenet-120. I also tried to change the network size in pluginImplement.cpp, but it still gives the same error.
Hi! I have a problem making a classification model for my faces dataset. I created the model in DIGITS with GoogLeNet, but testing showed that the model works incorrectly. Maybe I need aligned images, or should I use another network? Thanks!