I need some direction on where to start and what is needed. The only thing I would like to do is retrain the fpenet model with additional training images.
I don't have hardware with an NVIDIA GPU (besides the Jetson TX2 NX), so I thought I'd use an EC2 instance on AWS.
We are currently using the deployable_v2 model from Facial Landmarks Estimation | NVIDIA NGC with DeepStream 6.0.1. I believe this is limited to 80 facial landmarks. Q1: is 80 correct?
Going through multiple bits of documentation, I believe there are a few ways to run the TAO Toolkit for retraining:
- on bare metal (not available to us, because I don't have the hardware here)
- apparently, you can also install the TAO Toolkit using `pip3 install nvidia-tao`? (I've sketched that route below.)
Q2: Are there really 4 different ways? Q3: Which would be the easiest way to install the TAO Toolkit with the goal of creating a new model trained on more images?
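To make Q3 concrete, this is roughly what I expect the pip/launcher route to look like on a GPU instance. The verification commands are my assumption from the launcher documentation, so please correct me if they are wrong:

```bash
# Rough sketch of the pip/launcher route. My assumptions: an NVIDIA GPU is present,
# docker and the nvidia-container-toolkit are installed, and `docker login nvcr.io`
# has been done with an NGC API key.
python3 -m pip install nvidia-tao   # installs the TAO launcher CLI

# Sanity checks I plan to run before attempting a real training job:
tao info --verbose    # list the tasks/containers the launcher knows about
tao fpenet --help     # should pull the fpenet container and print its sub-commands
```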
Then, after installing the TAO Toolkit, I need to call the retrain command (my guess at what that command looks like is sketched after this list). From what I see there are 2 options:
- run the fpenet.ipynb notebook and follow the (very confusing) instructions regarding all the directories
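Whichever option it turns out to be, I assume the underlying call is a plain CLI train command. This is only my guess at its shape (the flag names follow what I have seen for other TAO tasks, and every path and the key are placeholders), so corrections are welcome:

```bash
# Hypothetical direct invocation of the fpenet training task. The flag names
# (-e spec file, -r results dir, -k key) follow the usual TAO pattern; the
# paths and $KEY below are placeholders, not values taken from the documentation.
tao fpenet train \
  -e /workspace/tao-experiments/fpenet/specs/experiment_spec.yaml \
  -r /workspace/tao-experiments/fpenet/results \
  -k $KEY
```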
@Morganh thank you for that. I've gone with the `pip3 install nvidia-tao` option.
Regarding Q6: "Input images of (80,80,1)": does that mean the images you supply to `tao fpenet train ....` need to be 80x80 pixels in grayscale? Would they need to already be cropped to just the face?
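If the answer to both is yes, this is the kind of pre-processing I would try first. The crop geometry is made up and the commands are plain ImageMagick, so please treat it as a sketch rather than anything from the TAO documentation:

```bash
# Sanity-check one image, then crop a (made-up) face box and convert it to
# 80x80 grayscale. face.png and the crop geometry are placeholder examples,
# and convert/identify are plain ImageMagick, nothing TAO-specific.
identify -format "%wx%h %[colorspace]\n" face.png
convert face.png -crop 200x200+120+80 -resize '80x80!' -colorspace Gray face_80x80.png
```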