For ReIdentificationNet, a pre-trained model is available in the NGC catalog under trainable_v1.1: resnet50_market1501_aicity156.tlt. However, I can't find much information about the pre-training process.
The blog states that the training data includes a combination of NVIDIA proprietary datasets along with Open Images V5. However, the NGC page only mentions Market-1501 + synthetic IDs. So I am wondering which dataset the model was actually pre-trained on: is it the combination of NVIDIA proprietary datasets and Open Images, or Market-1501 + synthetic data?
Also, the blog mentions fine-tuning with 4,470 real IDs. Does that mean the model tested there is different from deployable_v1.2 on NGC?
Therefore, we should have 41712/128 ≈ 326 batches per epoch, so I am wondering why TAO is showing 570.
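For reference, this is the quick sanity check behind my expectation (a sketch only; it assumes a plain loader over all 41712 images, no identity-based sampler, and the last partial batch being kept):

```python
import math

num_images = 41712  # total images in my training set
batch_size = 128    # batch_size from my experiment spec

# With drop_last=False the final partial batch still counts, so round up
expected_batches = math.ceil(num_images / batch_size)
print(expected_batches)  # -> 326, not the 570 iterations TAO reports
```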
And why is each epoch split into two in the log output? I thought this was caused by num_workers, but even setting it to 1 still shows the epochs split in two.