Object detection accuracy of pretrained model

I am using the pretrained SSD-Inception-v2 object detection model from dusty-nv/jetson-inference (the Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson).

Can I get the evaluation results from when the model was trained on the COCO dataset?

What was the accuracy on the training, validation, and test sets, in terms of mean average precision (mAP) at different Intersection over Union (IoU) thresholds?

Quick help would be much appreciated.


Hi @tareq.khan, the original ssd-inception-v2-coco model came from the TensorFlow Model Zoo, where it is listed with a COCO mAP of 24.
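For context on what that single number means: COCO's primary metric averages AP over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05, where IoU measures the overlap between a predicted box and a ground-truth box. A minimal sketch of the IoU computation (the function name and box format are my own, not from the model zoo):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle (empty if the boxes don't overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# COCO's headline mAP averages AP over these IoU thresholds
thresholds = [0.50 + 0.05 * i for i in range(10)]
```

So "mAP of 24" is the AP averaged over those thresholds (and over all 80 COCO classes), reported on a 0–100 scale.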

Thanks for the quick reply. Really appreciate it.

Three more questions.

  1. How many images were used for training, validation, and testing?

  2. Can I get the confusion matrix regarding predicting the object labels for the test set?

  3. Can an mAP of 24 be expressed as a normalized number from 0 to 1?
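Regarding question 3: COCO mAP is conventionally reported as a percentage, so a value of 24 corresponds to 0.24 on a 0-to-1 scale. A trivial sketch (the function name is mine, for illustration only):

```python
def normalize_map(map_percent):
    """Convert an mAP reported on the 0-100 scale (e.g. 24) to the 0-1 scale."""
    return map_percent / 100.0
```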


Hi @tareq.khan, sorry I don’t know the specifics of how that model was trained, as it came pre-trained from TensorFlow. I would recommend filing a GitHub issue against the TF model zoo repo if you require more details.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.