Model Evaluation & Metrics

Cohen’s Kappa

A metric that measures agreement between two raters (or between a model and a reference labeler) on categorical judgments, while correcting for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the chance agreement implied by each rater's label frequencies. κ = 1 indicates perfect agreement, κ = 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance.

It is a standard way to report inter-rater reliability for annotation work and to evaluate a classifier against human labels, because raw percent agreement overstates consistency whenever some labels are much more common than others.
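
The sketch below is a minimal, from-scratch illustration of the formula above for two raters over the same items; the function name, the example label sequences, and the convention of returning 1.0 when chance agreement is exactly 1 are illustrative choices, not part of the original entry. For real projects, scikit-learn's sklearn.metrics.cohen_kappa_score computes the same quantity and can serve as a cross-check.

```python
from collections import Counter


def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(labels_a) == len(labels_b) and labels_a, "need paired, non-empty labels"
    n = len(labels_a)

    # Observed agreement p_o: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Chance agreement p_e: for each label, the product of the two raters'
    # marginal frequencies, summed over all labels either rater used.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(counts_a) | set(counts_b))

    if p_e == 1.0:
        # Both raters always chose the same single label; treat as perfect
        # agreement (a convention chosen here to avoid division by zero).
        return 1.0
    return (p_o - p_e) / (1 - p_e)


if __name__ == "__main__":
    rater_1 = ["yes", "yes", "no", "yes", "no", "no"]
    rater_2 = ["yes", "no",  "no", "yes", "no", "yes"]
    # p_o = 4/6 ≈ 0.667, p_e = 0.5, so kappa ≈ 0.333
    print(f"kappa = {cohen_kappa(rater_1, rater_2):.3f}")
```

In the example, both raters agree on 4 of 6 items (p_o ≈ 0.667), but because each rater uses "yes" and "no" half the time, half of that agreement is expected by chance (p_e = 0.5), leaving κ ≈ 0.33.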

  • Evaluation
  • Agreement
  • Inter-Rater Reliability

Tags

model-evaluation-metrics evaluation agreement inter-rater-reliability

Added: November 18, 2025