Hi, while writing a test for `compute_coco_map()` I identified a bug.
```python
# Setup: Classes 1 and 2 have predictions.
# Result: mAP is ~0.39 (averaged over 2 classes)

# Act: add Class 3 (1 GT box, 0 predictions)
metrics_factory.update(
    np.array([[100, 100, 110, 110]]), [3],  # GT for Class 3
    np.empty((0, 4)), [], []                # 0 preds
)
mAP_after = metrics_factory.compute_coco_map()

# Assert
# Expected: mAP should drop (averaged over 3 classes: [AP1 + AP2 + 0] / 3)
# Actual: mAP remains ~0.39 (still averaged over 2 classes)
```

The issue is that the ground truth counts are updated too late:
```python
# Update ground truth counts
for g_label in gt_labels:
    self.gt_counts[int(g_label)] += 1
```

Specifically, this update runs after the early return for the no-predictions case:
```python
# Handle case where there is ground truth but no predictions
if len(pred_boxes) == 0:
    for g_label in gt_labels:
        self.results[g_label].append((None, -1))  # All are false negatives
    return
```

I moved the code snippet that counts ground truths ahead of all the early returns, and the mAP is now lower (the 3rd class is included).
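The ordering bug can be reproduced with a toy stand-in. `MiniMetrics` below is hypothetical (the real class's `update` signature is assumed from the snippets above); it only shows why counting ground truths after the early return silently drops classes with no predictions from the average:

```python
from collections import defaultdict

import numpy as np

class MiniMetrics:
    """Toy stand-in for the metrics factory, just to show the ordering bug."""

    def __init__(self, count_first: bool):
        self.count_first = count_first
        self.gt_counts = defaultdict(int)
        self.results = defaultdict(list)

    def update(self, gt_boxes, gt_labels, pred_boxes, pred_labels, pred_scores):
        if self.count_first:
            # Fixed order: record ground truths before any early return.
            for g_label in gt_labels:
                self.gt_counts[int(g_label)] += 1

        # Early return: ground truth present but no predictions.
        if len(pred_boxes) == 0:
            for g_label in gt_labels:
                self.results[g_label].append((None, -1))  # all false negatives
            return

        if not self.count_first:
            # Buggy order: never reached when pred_boxes is empty.
            for g_label in gt_labels:
                self.gt_counts[int(g_label)] += 1

    def classes_in_average(self):
        # Stand-in for compute_coco_map(): the average only covers
        # classes whose ground truth was ever counted.
        return sorted(self.gt_counts)

buggy = MiniMetrics(count_first=False)
fixed = MiniMetrics(count_first=True)
args = (np.array([[100, 100, 110, 110]]), [3], np.empty((0, 4)), [], [])
buggy.update(*args)
fixed.update(*args)
print(buggy.classes_in_average())  # [] -- class 3 silently dropped
print(fixed.classes_in_average())  # [3] -- class 3 contributes AP = 0
```

With the fixed ordering, a class that has ground truth but no predictions enters the average with AP = 0, which is why the mAP drops as expected.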