Training EmotionNet with TAO Toolkit through Jupyter Notebook

Oh, my bad… I have exactly the same num_samples as you.

I’m surprised by the training result, because I expected this notebook to be a sort of tutorial that gives correct results out of the box with a popular dataset such as CK+… I will now try to integrate the .etlt file into my application through DeepStream with a USB camera, and after that I will come back to improve this model…

Thanks for your help and time!

I cannot find where to modify the number of samples for training and validation. Isn’t it in the yaml model file?

It is just a quick training with the CK+ dataset. Please continue training for more epochs.
Also, you can run “tao emotionnet inference” directly if you want to check the effect of the NGC pretrained EmotionNet model. Officially, we also recommend running it with DeepStream. See Emotion Classification — TAO Toolkit 3.22.05 documentation and deepstream_tao_apps/apps/tao_others/deepstream-emotion-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
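
As a rough sketch only: the paths below are placeholders and the option names are assumed from the common TAO launcher pattern rather than taken from this thread, so confirm both against the documentation linked above or the command's own help output.

  # Hedged sketch: check the NGC pretrained EmotionNet model with the TAO launcher.
  # Paths and $KEY are placeholders; -e / -m / -k follow the usual TAO pattern.
  # An input-data argument is also required -- check the docs for its exact name.
  tao emotionnet inference \
      -e /workspace/tao-experiments/emotionnet/specs/experiment_spec.yaml \
      -m /workspace/tao-experiments/emotionnet/pretrained_model/model.tlt \
      -k $KEY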

Regarding the modification of the tfrecords: from your log below, the tfrecords are under postData/ckplus/Ground_Truth_DataFactory/TfRecords, and the combined ones are under postData/ckplus/Ground_Truth_DataFactory/TfRecords_combined. Both locations are defined in the training yaml file, roughly as in the sketch below.
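
The field names in this sketch are illustrative assumptions, not copied from the shipped spec, so keep whatever names your own training yaml already uses; only the directory values need to match your generated tfrecords.

  # Illustrative sketch of the dataloader block in the training yaml.
  # Key names are assumptions -- reuse the names already present in your spec.
  dataloader:
    dataset_info:
      # per-fold records live under .../Ground_Truth_DataFactory/TfRecords,
      # the combined set under .../Ground_Truth_DataFactory/TfRecords_combined
      tfrecords_directory_path: /workspace/tao-experiments/emotionnet/postData/ckplus/Ground_Truth_DataFactory/TfRecords_combined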

I suggest you run dataset_convert to generate new tfrecords.
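
A hedged sketch of that step: the spec path is a placeholder and the -e option is assumed from the common TAO launcher pattern, so verify the exact arguments against the dataset_convert section of the documentation.

  # Hedged sketch: regenerate the tfrecords after changing the data split / sample counts.
  # Spec path is a placeholder; confirm the option names in the EmotionNet docs.
  tao emotionnet dataset_convert \
      -e /workspace/tao-experiments/emotionnet/specs/dataset_config.yaml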

2022-12-05 08:51:20,424 [INFO] __main__: Start to split data...
/workspace/tao-experiments/emotionnet/postData/ckplus/Ground_Truth_DataFactory/TfRecords_combined
2022-12-05 08:51:20,425 [INFO] __main__: Test: ['S051', 'S108', 'S158', 'S149', 'S137', 'S032', 'S066', 'S046', 'S097', 'S504', 'S091']
2022-12-05 08:51:20,425 [INFO] __main__: Validation ['S094', 'S122', 'S082', 'S147', 'S060', 'S042', 'S096', 'S014', 'S083', 'S089', 'S113']

See Emotion Classification — TAO Toolkit 3.22.05 documentation

Use these steps to evaluate on a new test set with ground-truth labels:

  1. Create tfrecords for this test set by following the steps listed in the Pre-processing the Dataset section.
  2. Update the dataloader configuration part of the training experiment spec file so that kpiset_info points to the newly generated tfrecords for the test set (see the sketch after this list). For more information on the dataset config, please refer to Creating an Experiment Specification File. The evaluate tool iterates through all the folds in the kpiset_info.
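
For step 2, the edit is a small change in the dataloader block. The sketch below is illustrative only: the field names and the new test-set path are hypothetical, so reuse the exact key names from your own training spec.

  # Hedged sketch of step 2: point kpiset_info at the tfrecords generated for the
  # new ground-truth-labeled test set. Key names and the test-set path are
  # hypothetical placeholders.
  dataloader:
    kpiset_info:
      # "tao emotionnet evaluate" iterates through every fold listed here
      tfrecords_directory_path: /workspace/tao-experiments/emotionnet/postData/my_new_testset/TfRecords_combined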

Thanks for this complete reply. Since yesterday I have been trying to use the .etlt file I already have with DeepStream on my Jetson AGX Xavier board, but I have not succeeded yet.

The original issue is gone. Please create a new topic if there is any further concern. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.