TAO evaluation vs COCO evaluation

Hi.
According to the COCO documentation, mAP at IoU = 0.5 should be equal to Pascal VOC mAP.

I tested both SAMPLE and INTEGRATE modes for average_precision_mode in eval_config and then calculated COCO eval metrics, but the results are different.

Which network did you test? And how did you set matching_iou_threshold?

Hi
I tested all the object detection models.
matching_iou_threshold is 0.5 for all models.

It is normal to have different results between ‘sample’ and ‘integrate’. In the case of ‘sample’, the average precision will be computed according to the Pascal VOC formula that was used up until VOC 2009, where the precision will be sampled for num_recall_points recall values. In the case of ‘integrate’, the average precision will be computed according to the Pascal VOC formula that was used from VOC 2010 onward, where the average precision will be computed by numerically integrating over the whole precision-recall curve instead of sampling individual points from it. ‘integrate’ mode is basically just the limit case of ‘sample’ mode as the number of sample points increases.
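The difference between the two modes can be sketched on a toy precision-recall curve. This is a minimal illustration of the two VOC formulas described above, not TAO's actual implementation; the function name and arguments are made up for this example:

```python
import numpy as np

def average_precision(precision, recall, mode="integrate", num_recall_points=11):
    """Toy AP computation contrasting 'sample' and 'integrate' modes.

    precision, recall: arrays sorted by increasing recall.
    """
    # Both VOC variants first take the monotonically decreasing
    # envelope of the precision curve.
    prec = np.maximum.accumulate(precision[::-1])[::-1]

    if mode == "sample":
        # Pre-2010 VOC: sample precision at fixed recall points
        # (classically 11 points: 0.0, 0.1, ..., 1.0).
        points = np.linspace(0.0, 1.0, num_recall_points)
        sampled = [prec[recall >= r].max() if np.any(recall >= r) else 0.0
                   for r in points]
        return float(np.mean(sampled))

    # Post-2010 VOC: numerically integrate the area under the envelope.
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([prec[0]], prec))
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))
```

On the same curve the two modes return slightly different numbers, which is why neither is expected to match the other exactly; as num_recall_points grows, ‘sample’ converges to ‘integrate’.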
To calculate COCO eval metrics, you need to set matching_iou_threshold to 0.5, 0.55, … , 0.95 and run the evaluation separately for each threshold.
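Combining the ten per-threshold runs into COCO's primary metric is just an average. A small sketch, where `coco_map_from_tao_runs` is a hypothetical helper and the per-IoU AP values would come from the ten separate TAO evaluation runs described above:

```python
import numpy as np

def coco_map_from_tao_runs(ap_by_iou):
    """Average per-IoU AP values into COCO-style AP@[.50:.95].

    ap_by_iou: dict mapping IoU threshold (0.50, 0.55, ..., 0.95)
    to the mAP obtained from a TAO evaluation run with that
    matching_iou_threshold.
    """
    expected = [round(0.50 + 0.05 * i, 2) for i in range(10)]
    missing = [t for t in expected if t not in ap_by_iou]
    if missing:
        raise ValueError(f"missing AP values for IoU thresholds: {missing}")
    return float(np.mean([ap_by_iou[t] for t in expected]))
```

The run with matching_iou_threshold = 0.5 alone is the value to compare against COCO's AP@0.50 row.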

Sorry, I did not explain well. I did not mean that the sample and integrate metrics differ from each other; I meant that neither of them is equal to the COCO mAP with IoU threshold = 0.5 (the second row in the picture above: https://aws1.discourse-cdn.com/nvidia/original/3X/b/4/b421390a80c85ee526ba53136fc22fb06c66f98f.png)

Where is the baseline? How did you get the COCO mAP?

I run inference on the dataset and convert the KITTI-format inference results to the COCO result format; I also have a COCO ground-truth JSON file. Then I run COCOeval:

import json

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval


def coco_eval(output_folder_address, gt_annotation_path):
    categories = ['car', 'bicycle', 'tree']

    inference_kitti_labels = output_folder_address + '/annotated_labels'

    cocoGt = COCO(gt_annotation_path)

    bbox_data = []

    for img in cocoGt.imgs:
        image_name = cocoGt.imgs[img]['file_name']
        inference_kitti_file = inference_kitti_labels + f'/{image_name.rsplit(".", 1)[0]}.txt'
        with open(inference_kitti_file, 'r') as f:
            kitti = f.readlines()
            if len(kitti) == 0:
                # No detections for this image: emit a zero-score placeholder
                # (category_id 0 matches no real category) so every image
                # still appears in the results file.
                bbox_data.append({'image_id': img,
                                  'category_id': 0,
                                  'bbox': [0, 0, 0, 0],
                                  'score': 0})

            else:
                for line in kitti:
                    line = line.split(' ')

                    category, _, _, _, x1, y1, x2, y2, _, _, _, _, _, _, _, confidence_score = line

                    category_index = categories.index(category) + 1

                    x1, y1, x2, y2 = map(float, [x1, y1, x2, y2])
                    bbox = [x1, y1, x2 - x1, y2 - y1]
                    bbox = [float("{:.2f}".format(float(b))) for b in bbox]

                    confidence_score = float("{:.2f}".format(float(confidence_score)))

                    bbox_data.append({'image_id': img,
                                      'category_id': category_index,
                                      'bbox': bbox,
                                      'score': confidence_score})

    json_object = json.dumps(bbox_data)
    with open(output_folder_address + "/bbox_detection.json", 'w') as outfile:
        outfile.write(json_object)

    # COCO evaluation on bounding boxes
    annType = 'bbox'

    # initialize COCO detections api
    resFile = output_folder_address + "/bbox_detection.json"
    cocoDt = cocoGt.loadRes(resFile)

    imgIds = sorted(cocoGt.getImgIds())
    imgIds = imgIds[0:len(cocoDt.getImgIds())]
    
    # running evaluation
    cocoEval = COCOeval(cocoGt, cocoDt, annType)
    cocoEval.params.imgIds = imgIds
    cocoEval.evaluate()
    cocoEval.accumulate()
    cocoEval.summarize()

    stats = cocoEval.stats
    return stats
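One thing to double-check in this comparison: `summarize()` fills `COCOeval.stats` in a fixed order for bbox evaluation, and the value comparable to TAO's mAP with matching_iou_threshold = 0.5 is `stats[1]` (AP@0.50), not `stats[0]` (AP@[.50:.95]). A small helper to label the entries, assuming pycocotools' standard 12-entry bbox ordering (`summarize_bbox_stats` is a name made up here):

```python
def summarize_bbox_stats(stats):
    """Label pycocotools COCOeval.stats entries for bbox evaluation."""
    names = [
        "AP@[.50:.95]",  # COCO's primary metric
        "AP@.50",        # comparable to Pascal / TAO mAP at IoU 0.5
        "AP@.75",
        "AP_small", "AP_medium", "AP_large",
        "AR@1", "AR@10", "AR@100",
        "AR_small", "AR_medium", "AR_large",
    ]
    return dict(zip(names, stats))
```

So after running the script above, the number to put next to TAO's mAP@0.5 is `summarize_bbox_stats(stats)["AP@.50"]`.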