# Boundary IoU: Improving Object-Centric Image Segmentation Evaluation (CVPR 2021)

* Work done during an internship at Facebook AI Research.

### Abstract

We present Boundary IoU (Intersection-over-Union), a new segmentation evaluation measure focused on boundary quality. We perform an extensive analysis across different error types and object sizes and show that Boundary IoU is significantly more sensitive than the standard Mask IoU measure to boundary errors for large objects and does not over-penalize errors on smaller objects. The new quality measure displays several desirable characteristics like symmetry w.r.t. prediction/ground truth pairs and balanced responsiveness across scales, which makes it more suitable for segmentation evaluation than other boundary-focused measures like Trimap IoU and F-measure. Based on Boundary IoU, we update the standard evaluation protocols for instance and panoptic segmentation tasks by proposing the Boundary AP (Average Precision) and Boundary PQ (Panoptic Quality) metrics, respectively. Our experiments show that the new evaluation metrics track boundary quality improvements that are generally overlooked by current Mask IoU-based evaluation metrics. We hope that the adoption of the new boundary-sensitive evaluation metrics will lead to rapid progress in segmentation methods that improve boundary quality.

### Boundary IoU

Given a ground truth mask $$G$$ and a prediction mask $$P$$, Boundary IoU first computes, for each mask, the set of its pixels within distance $$d$$ of its contour, and then computes the intersection-over-union of these two sets: $$\text{Boundary IoU}(G, P) = {| (G_{d} \cap G) \cap (P_{d} \cap P) | \over | (G_{d} \cap G) \cup (P_{d} \cap P) |},$$ where the boundary regions $$G_{d}$$ and $$P_{d}$$ are the sets of all pixels within distance $$d$$ of the ground truth and prediction contours, respectively.
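The definition above can be sketched in a few lines of NumPy. This is not the official implementation (the paper sets $$d$$ to a fraction of the image diagonal, and the released code computes boundary regions differently); here the interior boundary region $$G_{d} \cap G$$ is approximated by $$d$$ steps of 4-connected binary erosion, and all function names are illustrative.

```python
import numpy as np

def boundary_region(mask, d):
    # Interior pixels of `mask` within ~d pixels of its contour, approximated
    # by d iterations of 4-connected erosion: region = mask & ~eroded.
    eroded = mask.astype(bool)
    for _ in range(d):
        e = eroded.copy()
        e[1:, :] &= eroded[:-1, :]   # neighbor above
        e[:-1, :] &= eroded[1:, :]   # neighbor below
        e[:, 1:] &= eroded[:, :-1]   # neighbor left
        e[:, :-1] &= eroded[:, 1:]   # neighbor right
        # Pixels on the image border touch the "outside", so they erode away.
        e[0, :] = e[-1, :] = False
        e[:, 0] = e[:, -1] = False
        eroded = e
    return mask.astype(bool) & ~eroded

def boundary_iou(gt, pred, d=2):
    # IoU of the two boundary regions, i.e. (G_d ∩ G) vs. (P_d ∩ P).
    g = boundary_region(gt, d)
    p = boundary_region(pred, d)
    union = np.logical_or(g, p).sum()
    return np.logical_and(g, p).sum() / union if union > 0 else 1.0
```

Note that intersecting the dilated contour band with the mask itself ($$G_{d} \cap G$$) keeps only the interior side of the boundary, which is exactly what erosion-based extraction produces.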

We compare Boundary IoU with Mask IoU, Trimap IoU, and F-measure using a new sensitivity analysis framework. Please check the paper for a detailed analysis of the new measure.

### Boundary AP and Boundary PQ

The most common evaluation metrics for the instance and panoptic segmentation tasks are Average Precision (AP or Mask AP) and Panoptic Quality (PQ or Mask PQ), respectively. Both metrics use Mask IoU and inherit its bias toward large objects; consequently, they are insensitive to boundary quality. We update the evaluation metrics for these tasks by replacing Mask IoU with min(Mask IoU, Boundary IoU), and name the new evaluation metrics Boundary AP and Boundary PQ. The change is simple to implement, and we demonstrate that the new metrics are more sensitive to boundary quality while remaining able to track other types of improvements in predictions.

Please check the paper for a detailed analysis of the new Boundary IoU-based metrics.

### Acknowledgments

The website template was borrowed from Michaël Gharbi.