Model Evaluation & Metrics
Cohen’s Kappa
A metric measuring agreement between two raters (or between a model and a reference rater) on categorical labels, corrected for the agreement expected by chance. It is defined as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected if both raters labeled independently according to their marginal label frequencies. Kappa is 1 for perfect agreement, 0 for chance-level agreement, and can be negative when agreement is worse than chance.
In model evaluation, Cohen's kappa is commonly used to check inter-annotator agreement when building labeled datasets and to compare model predictions against human labels, because raw percent agreement can look high purely by chance, especially on imbalanced classes.
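A minimal sketch of how kappa can be computed from two lists of labels; the function name and example data below are illustrative, not part of the original entry. In practice, scikit-learn provides an equivalent via sklearn.metrics.cohen_kappa_score.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' categorical labels (illustrative sketch)."""
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement: sum over classes of the product of each
    # rater's marginal frequency for that class.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators labeling the same 10 items.
rater_1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "pos"]
rater_2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "pos"]
print(cohen_kappa(rater_1, rater_2))  # ~0.58: observed 0.80, chance 0.52
```

Note how the raw agreement of 0.80 shrinks to a kappa of about 0.58 once the 0.52 chance agreement implied by the raters' label frequencies is discounted.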
Related Concepts
- Evaluation
- Agreement
- Inter-Rater Reliability
Tags
model-evaluation-metrics evaluation agreement inter-rater-reliability