Inter-rater Reliability Overview
What is Inter-rater Reliability?
Inter-rater Reliability is the most common form of Reliability testing. It examines the degree to which two or more assessors agree on scores for a single work sample, expressed as the percentage of scoring criteria that received the same score from every assessor (percent agreement). A percent agreement above 80% is generally considered reasonable. Institutions often use Inter-rater Reliability test results to inform changes to rubric descriptions or assessor training practices.
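The percent-agreement calculation described above can be sketched in a few lines of Python. This is an illustrative sketch only; the criterion names and scores are hypothetical, not drawn from any particular rubric.

```python
# Percent agreement for one work sample scored by several assessors:
# the share of criteria on which every assessor gave the same score.

def percent_agreement(scores_by_assessor):
    """scores_by_assessor: list of dicts mapping criterion -> score."""
    criteria = scores_by_assessor[0].keys()
    agreed = sum(
        1 for c in criteria
        if len({scores[c] for scores in scores_by_assessor}) == 1
    )
    return 100 * agreed / len(criteria)

# Hypothetical scores from two assessors on a four-criterion rubric.
sample_scores = [
    {"Thesis": 3, "Evidence": 2, "Organization": 4, "Mechanics": 3},  # assessor A
    {"Thesis": 3, "Evidence": 2, "Organization": 3, "Mechanics": 3},  # assessor B
]
print(percent_agreement(sample_scores))  # 75.0 -> below the 80% benchmark
```

Here the assessors disagree on one of four criteria (Organization), so percent agreement is 75%, just under the 80% benchmark.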
What are the best practices for Inter-rater Reliability testing?
For the best results, re-allocate the same student work sample to at least three assessors. If possible, re-allocate more than one student work sample; this will give you a fuller picture of your inter-rater reliability.
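With three or more assessors and multiple work samples, one simple way to summarize the results is to average percent agreement over every pair of assessors on every sample. This is a hedged sketch of that approach, with invented scores; it is not the only way to aggregate, and your institution's reporting may differ.

```python
from itertools import combinations

def percent_agreement(a, b):
    """Percent of criteria on which two assessors gave the same score."""
    criteria = a.keys()
    return 100 * sum(a[c] == b[c] for c in criteria) / len(criteria)

def mean_pairwise_agreement(samples):
    """samples: list of work samples, each a list of per-assessor score dicts."""
    pair_scores = [
        percent_agreement(a, b)
        for assessors in samples          # each re-allocated work sample
        for a, b in combinations(assessors, 2)  # every pair of its assessors
    ]
    return sum(pair_scores) / len(pair_scores)

# Hypothetical data: two work samples, each scored by three assessors.
samples = [
    [{"Thesis": 3, "Evidence": 2}, {"Thesis": 3, "Evidence": 2}, {"Thesis": 3, "Evidence": 1}],
    [{"Thesis": 4, "Evidence": 4}, {"Thesis": 4, "Evidence": 3}, {"Thesis": 4, "Evidence": 4}],
]
print(round(mean_pairwise_agreement(samples), 1))  # 66.7
```

Averaging over more samples and assessor pairs smooths out one-off disagreements, which is why re-allocating several samples gives a fuller picture than a single test.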
What will the results of my Inter-rater Reliability test tell me?
If the percent agreement in your Inter-rater Reliability tests for a particular rubric is consistently >80%, this generally indicates that your rubric descriptions are clear and your assessors understand how to use this rubric to assess student work.
If the percent agreement in your Inter-rater Reliability tests for a particular rubric is <80%, this generally suggests that your rubric descriptions (performance level/score descriptions for each criterion) may be unclear, insufficiently detailed, or ambiguous. It may also suggest that your assessors need to be re-trained on assessment practices for this rubric. In this case, there should be a collaborative effort to reach consensus on the meaning of the scoring criteria on which the assessors disagreed and on how to interpret the work sample being assessed.
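To focus the consensus discussion described above, it helps to list exactly which criteria the assessors disagreed on and what scores each gave. A minimal sketch, again using invented criterion names and scores:

```python
def disagreement_report(scores_by_assessor):
    """Return the criteria on which assessors disagreed, with the scores given."""
    criteria = scores_by_assessor[0].keys()
    return {
        c: sorted(scores[c] for scores in scores_by_assessor)
        for c in criteria
        if len({scores[c] for scores in scores_by_assessor}) > 1
    }

# Hypothetical scores from three assessors on one work sample.
scores = [
    {"Thesis": 3, "Evidence": 2, "Organization": 4},
    {"Thesis": 3, "Evidence": 3, "Organization": 2},
    {"Thesis": 3, "Evidence": 2, "Organization": 3},
]
print(disagreement_report(scores))
# {'Evidence': [2, 2, 3], 'Organization': [2, 3, 4]}
```

A wide spread of scores on a criterion (such as Organization above) is a stronger signal of an ambiguous description than a single outlier.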
How do I allocate re-assessments for Inter-rater Reliability testing?
Before you begin, ensure that a sufficient number of assessments have been completed using the regular online scoring method. You must have one completed assessment for each reallocation you intend to send.
1. Click on the Main Menu icon.
2. Select Reporting.
3. Select Inter-rater Reliability.
On the Inter-rater Reliability screen:
4. Click the Allocate Re-Assessments button.
5. Using the Sources column, select the department that contains the Assessment Instrument you wish to use.
6. Click on the instrument that you would like to select. Use the Add Selected button or drag-and-drop it into the third column.
7. Select your Assessor Choice Method using the drop-down menu.
8. Using the Assessors Available column, select the assessor(s) to whom you wish to assign the reassessment and use the >> button to move the assessor(s) into the Assessors that will Reassess column.
9. If you wish to use a different assessment instrument for the reassessment, click the Choose Instrument button. If you wish to use the same assessment instrument, skip this step.
If you have clicked the Choose Instrument button:
10. Using the Sources column, select the department to which the assessment instrument is assigned.
11. In the Assessment Instruments column, select the assessment instrument you wish to use for the reassessment and add it to the Chosen Instruments column by clicking the Add Selected button or dragging and dropping.
12. Click Done.
13. Select the number of re-assessments you wish to generate.
If you would like one work sample to be re-allocated to 5 assessors, you should enter "5" here, and you will also need to select 5 different assessors in Step 8 above.
If you would like more than one work sample to be re-allocated for assessment, you will need to complete the steps on this help page again (one time per student work sample that you wish to re-allocate).
14. Select whether or not you would like to Reallocate Past Reassessments (this option allows the system to re-allocate work samples used in past reliability tests).
15. Select whether or not you would like to Anonymize Students.