
compute_coco_map ignores classes with Ground Truth but zero predictions #385

@iocalangiu

Description


Hi, while writing a test for compute_coco_map() I identified a bug.

# Setup: Classes 1 and 2 have predictions.
# Result: mAP is ~0.39 (averaged over 2 classes)

# Act: Add Class 3 (1 GT box, 0 Predictions)
metrics_factory.update(
    np.array([[100, 100, 110, 110]]), [3], # GT for Class 3
    np.empty((0, 4)), [], []               # 0 Preds
)

mAP_after = metrics_factory.compute_coco_map()

# Assert
# Expected: mAP should drop (averaged over 3 classes: [AP1 + AP2 + 0] / 3)
# Actual: mAP remains ~0.39 (still averaged over 2 classes)
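To make the expected drop concrete, here is the arithmetic with hypothetical per-class AP values chosen to match the ~0.39 average above (the real AP values are not in the report):

```python
# Hypothetical AP values whose mean is ~0.39, as in the report above.
ap = {1: 0.40, 2: 0.38}
map_before = sum(ap.values()) / len(ap)  # ~0.39, averaged over 2 classes

# Class 3 has ground truth but zero predictions, so its AP is 0.
ap[3] = 0.0
map_after = sum(ap.values()) / len(ap)   # ~0.26, averaged over 3 classes
```

Because the class is silently dropped from the average instead of contributing an AP of 0, the reported mAP stays at ~0.39 instead of falling to ~0.26.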

The issue is that the ground-truth counts are updated too late:

        # Update ground truth counts
        for g_label in gt_labels:
            self.gt_counts[int(g_label)] += 1

Specifically, they are only updated after this early return:

        # Handle case where there is ground truth but no predictions
        if len(pred_boxes) == 0:
            for g_label in gt_labels:
                self.results[g_label].append((None, -1))  # All are false negatives
            return

I moved the snippet that counts ground truths before all the early returns, and the mAP is now lower because the third class is included in the average.
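A minimal sketch of the corrected ordering, using a hypothetical stand-in class (CocoMapSketch, classes_in_average, and the prediction-matching stub are illustrative names, not the project's real API):

```python
from collections import defaultdict

import numpy as np


class CocoMapSketch:
    """Minimal stand-in for the metrics class; only the update ordering matters."""

    def __init__(self):
        self.gt_counts = defaultdict(int)  # class id -> number of GT boxes
        self.results = defaultdict(list)   # class id -> (score, match) records

    def update(self, gt_boxes, gt_labels, pred_boxes, pred_labels, pred_scores):
        # Fix: count ground truths BEFORE any early return, so a class
        # with GT but zero predictions still enters the mAP average.
        for g_label in gt_labels:
            self.gt_counts[int(g_label)] += 1

        # Early return: GT present but no predictions -> all false negatives.
        if len(pred_boxes) == 0:
            for g_label in gt_labels:
                self.results[int(g_label)].append((None, -1))
            return
        # ... matching of predictions to ground truth would go here ...

    def classes_in_average(self):
        # Every class with at least one GT box contributes an AP term,
        # even when its AP is 0 because it has no predictions.
        return sorted(self.gt_counts)


# Reproduce the report: one GT box for class 3, zero predictions.
m = CocoMapSketch()
m.update(np.array([[100, 100, 110, 110]]), [3],
         np.empty((0, 4)), [], [])
```

With the count hoisted above the early return, class 3 is registered in gt_counts and therefore contributes an AP of 0 to the average instead of being skipped.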
