How to train a secondary detector (SGIE)

After a steep learning curve, with loads of help from the mods and posts from other members, I’m having fun training a model (a DetectNet_v2 detector) through TLT and running it in a custom-built DeepStream app. Now I want to train a secondary detector (SGIE) that further specifies the primary predictions.

While I have already labeled thousands of images, I came across a forum post stating that the secondary detector (SGIE) is actually an image classification model rather than an object detector.
I have three questions regarding the SGIE:

  1. Is the SGIE (not a back-to-back detector) indeed an image classifier, and do I need to use the TLT image classification Jupyter notebook?
  2. Can I use a trained object detector as the SGIE, without it being used as a back-to-back detector?
  3. If I need to train an image classifier, does it use the labeled bounding boxes (as the area of interest)?

Thanks in advance!

Gerard

  1. It depends on your target. You want to train an SGIE as an image classification model, right? If yes, you can use the TLT classification network to train.
  2. What do you mean by “trained object detector”? Is it the primary GIE?
  3. If you train a classification network, there are no bboxes. You just need to split your images into the different classes you want to train. You can refer to the classification Jupyter notebook.
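Since the detection data is already labeled, one common way to reuse it for classification is to cut the labeled bounding boxes out of each image and save the crops into one folder per class. The helper below is a minimal sketch of the first step, parsing a KITTI-format label file (the format DetectNet_v2 training data uses) and grouping the 2D boxes by class; the function name and return shape are my own, not part of TLT:

```python
from collections import defaultdict

def kitti_boxes_by_class(label_path):
    """Parse a KITTI-format label file and group 2D bounding boxes by class.

    Each box can then be cropped out of the source image (e.g. with Pillow
    or OpenCV) and saved into a per-class folder for classification training.
    """
    boxes = defaultdict(list)
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 8:
                continue  # skip malformed lines
            cls = fields[0]
            # KITTI columns 4-7 hold the 2D bbox: left, top, right, bottom
            x1, y1, x2, y2 = (float(v) for v in fields[4:8])
            boxes[cls].append((x1, y1, x2, y2))
    return dict(boxes)
```

For example, a label line like `car 0.0 0 0.0 100.0 120.0 200.0 220.0 …` would yield a `(100.0, 120.0, 200.0, 220.0)` box under the `car` key, ready to be cropped and dropped into a `car/` folder.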

Hi @Morganh,

Thanks for the post.

You suggested referring to the classification Jupyter notebook for training the classification network:

  • Can you provide a link to that notebook?
  • Is annotated data with coordinates required for the classification network (SGIE)?

Download the Jupyter notebooks from Requirements and Installation — Transfer Learning Toolkit 3.0 documentation:

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tlt_cv_samples/versions/v1.0.2/zip -O tlt_cv_samples_v1.0.2.zip
unzip -u tlt_cv_samples_v1.0.2.zip -d ./tlt_cv_samples_v1.0.2 && rm -rf tlt_cv_samples_v1.0.2.zip && cd ./tlt_cv_samples_v1.0.2

When you train a classification network, you just need to split your images into the different classes you want to train.
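As a rough illustration of what "split your images into classes" means on disk, the snippet below creates a per-class folder layout under train/val/test splits (the split names, the `data/` root, and the `male`/`female` class names are all examples, not a fixed TLT requirement; check the classification notebook for the exact layout it expects):

```python
import os

# Illustrative layout: one subfolder per class under each dataset split.
splits = ["train", "val", "test"]
classes = ["male", "female"]  # example classes for a gender SGIE

for split in splits:
    for cls in classes:
        os.makedirs(os.path.join("data", split, cls), exist_ok=True)
```

Images then go directly into the matching class folder, e.g. `data/train/male/img_0001.png`; the folder name itself is the label, so no coordinate annotations are needed.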

Hey @Morganh ,

You mentioned splitting the images into folders:

  • Does that require annotation in KITTI format?

  • What directory structure do I have to use while training for gender (male and female), for example?

Thanks
