Horovod log: training_1/SGD/DistributedSGD_Allreduce/HorovodAllreduce_training_1_SGD_gradients_block_3f_bn_1_FusedBatchNorm_grad_FusedBatchNormGrad_1 [missing ranks: 0]. Both AP and mAP are 0.

The mAP values were normal for the first 20 epochs, but afterwards they were all 0.

Regarding the mAP issue, it seems that the training loss decreases during the first 20 epochs but increases afterwards.
Please fine-tune max_learning_rate and trigger more experiments; try a smaller value.

Moreover, please fine-tune the batch size as well.
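As a rough sketch of how such a sweep could be set up (the helper below is hypothetical and not part of any framework; the starting value 0.01 for max_learning_rate is only a placeholder, substitute your current config value), one could generate progressively smaller learning rates and a few batch sizes to try:

```python
# Hypothetical sweep helper: not from the training framework above.
# Generates smaller max_learning_rate candidates by repeated halving,
# since the loss diverging after epoch 20 suggests the rate is too high.
def lr_candidates(current_lr, num=4, factor=0.5):
    """Return `num` progressively smaller learning rates to try."""
    return [current_lr * factor ** i for i in range(1, num + 1)]

def sweep_grid(current_lr, batch_sizes):
    """Cross each candidate learning rate with each batch size."""
    return [(lr, bs) for lr in lr_candidates(current_lr) for bs in batch_sizes]

# Placeholder values: 0.01 and the batch sizes are assumptions, not from the log.
for lr, bs in sweep_grid(0.01, [16, 32, 64]):
    print(f"experiment: max_learning_rate={lr}, batch_size={bs}")
```

Each (learning rate, batch size) pair would then be launched as a separate experiment, keeping the rest of the config fixed so the runs are comparable.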