The question of equivalence between two or more groups is frequently of interest to applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough to be considered practically unimportant. Few recommendations exist regarding the appropriate use of these tests under varying data conditions. A simulation study was conducted to examine the power and Type I error rates of the confidence interval approach to equivalence testing under conditions of equal and unequal sample sizes and variability when comparing two and three groups. Equivalence testing was found to perform best when sample sizes are equal. The overall power of the test is strongly influenced by the size of the sample, the amount of variability in the sample, and the size of the difference in the population. Guidelines are provided regarding the use of equivalence tests.
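The confidence interval approach named in the abstract can be sketched as follows: declare two groups equivalent when a (1 − 2α) confidence interval for their mean difference falls entirely within a pre-specified equivalence margin. The sketch below is a minimal illustration of that logic, not the authors' simulation code; the function name, the choice of a Welch-style standard error, and the use of a normal approximation to the t critical value (reasonable for large samples) are all assumptions for demonstration.

```python
import math
import random
from statistics import NormalDist

def equivalence_ci(x, y, margin, alpha=0.05):
    """Confidence-interval approach to two-group equivalence testing.

    Declares equivalence when the (1 - 2*alpha) CI for the mean
    difference lies entirely within (-margin, +margin).  Uses a
    Welch-style standard error and a normal approximation to the
    t critical value (an assumption; adequate for large samples).
    """
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    lo, hi = (mx - my) - z * se, (mx - my) + z * se
    return -margin < lo and hi < margin

random.seed(1)
# Two equal-size samples from the same population: with n = 200 per
# group and a margin of 0.5 SD, equivalence is usually declared.
a = [random.gauss(0.0, 1.0) for _ in range(200)]
b = [random.gauss(0.0, 1.0) for _ in range(200)]
print(equivalence_ci(a, b, margin=0.5))
```

This interval-inclusion rule is operationally identical to the two one-sided tests (TOST) procedure at level α, which is why power depends so directly on sample size and variability: both shrink the interval and make it easier to fit inside the margin.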
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Rusticus, Shayna A. and Lovato, Chris Y. "Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study," Practical Assessment, Research, and Evaluation: Vol. 19, Article 11.
Available at: https://scholarworks.umass.edu/pare/vol19/iss1/11