While some faculty committees and corporations appeal to online ratings from RateMyProfessors.com to inform promotion decisions and nationwide university rankings, others derogate the site as an unreliable source of idiosyncratic student ratings and commentary. In this paper we describe a study designed to test the assumption that students' ratings are unreliable. The sample included 366 instructors with 10 or more student ratings. Contrary to that assumption, the variance in students' ratings of a given instructor was similar regardless of the number of raters, with 10 raters showing the same degree of consensus as 50 or more. Students showed the most consensus about instructors in the top third of the quality distribution, and this effect held even among instructors rated as the most difficult. Taken alongside other investigations of RateMyProfessors.com and the broad literature on student evaluations of teaching, our findings suggest that students who use RateMyProfessors.com are likely providing each other with useful information about quality of instruction.

Accessed 9,408 times on https://pareonline.net from November 07, 2011 to December 31, 2019.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.