How to perform a simple intraclass correlation to determine interrater reliability?
I'm trying to figure out the best way to run a relatively simple intraclass correlation (ICC) analysis. In my study, three raters each scored 100 performance evaluations once, on a 1-10 scale, for various components of performance. I ran the Measurement Systems Analysis platform with Rater as "X", Evaluation (1-100) as "Part", and Score as "Y". I got a ...
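For reference, here is a minimal NumPy sketch (my own hand-rolled calculation, not JMP output) of the two-way ICC formulas I believe apply to this design, following the Shrout & Fleiss (1979) definitions: ICC(2,1) for absolute agreement and ICC(3,1) for consistency. The `icc` helper and the tiny example matrix are hypothetical, just to illustrate the layout (rows = evaluations, columns = raters):

```python
import numpy as np

def icc(ratings):
    """Two-way single-measure ICC estimates from an
    n_subjects x n_raters matrix of scores.

    Returns (ICC(2,1) absolute agreement, ICC(3,1) consistency),
    per Shrout & Fleiss (1979).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()

    # Two-way ANOVA sum-of-squares decomposition
    ssr = k * np.sum((x.mean(axis=1) - grand) ** 2)  # between subjects (rows)
    ssc = n * np.sum((x.mean(axis=0) - grand) ** 2)  # between raters (columns)
    sse = np.sum((x - grand) ** 2) - ssr - ssc       # residual

    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))

    # ICC(2,1): raters treated as random; rater variance counts against agreement
    icc2 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    # ICC(3,1): raters treated as fixed; constant rater offsets are ignored
    icc3 = (msr - mse) / (msr + (k - 1) * mse)
    return icc2, icc3

# Hypothetical example: 3 evaluations scored by 3 raters, where each rater
# rank-orders identically but applies a constant offset of +1.
ratings = [[1, 2, 3],
           [2, 3, 4],
           [3, 4, 5]]
icc2, icc3 = icc(ratings)
print(icc2, icc3)  # consistency is perfect (1.0); absolute agreement is lower (0.5)
```

The gap between the two estimates in the example shows why the model choice matters for my question: if rater leniency should count against reliability, ICC(2,1) is the relevant number; if only relative ordering matters, ICC(3,1).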