The argument-based validation of a large-scale high-stakes vocabulary test




The purpose of this study was to investigate the validity of the vocabulary subsection of a high-stakes university entrance examination for PhD programs using the argument-based approach. All three versions of the test administered over a five-year period, together with the responses of 12,500 test-takers, were examined. The study focused on four inferences — domain definition, evaluation, generalization, and explanation — drawing mainly on corpus linguistics, the Rasch measurement model, and factor analysis. The results indicated substantial threats to the validity of the test in terms of vocabulary choice, item difficulty, item discrimination, construct representation, and reliability.