
Document Type

Open Access Dissertation
Degree Name

Doctor of Philosophy (PhD)

First Advisor

Michael J. Constantino

Subject Categories

Psychological Phenomena and Processes


To further solidify their scientific footing, qualitative approaches would ideally demonstrate that they yield replicable information about a phenomenon under study. Although consensual qualitative research (CQR; Hill, 2012) proposes a rigorous, multistep method to enhance interjudge reliability and instill confidence in the results, it remains unclear whether multiple uniformly trained teams analyzing the same stimulus set would arrive at similar analytic output (i.e., replicability, a high form of trustworthiness). Moreover, it is unclear whether replicability (or the lack thereof) might be influenced by the process through which CQR judges arrive at their output (i.e., social reliability). Addressing these gaps, this exploratory study employed mixed methods to evaluate replicability and social reliability between 2 teams, each consisting of 4 randomly assigned judges. The judges were uniformly trained in CQR before the teams separately analyzed 12 transcripts of semi-structured interviews assessing mental health care consumers’ perspectives on using provider performance information to inform their treatment decisions. Replicability was examined quantitatively and qualitatively by comparing the output elements established by the CQR teams (i.e., domains, categories, core ideas, and core idea exemplars). Social reliability was examined quantitatively and qualitatively by comparing the teams on objective group process and self-reported group climate. Replicability results were nuanced. Whereas the teams tended to perceive similar content comprising the domains, categories, and core ideas, they differed notably in their level of abstraction. The teams also differed markedly in how representative they judged the information discussed among the interview participants to be. Moreover, the team that demonstrated more (vs. less) abstraction also generated more representative findings, spent more time analyzing transcripts, divided discussion time more equitably, evidenced fewer auditor disagreements, and reported a more positive group climate than the other team. The results preliminarily inform the practical utility of existing CQR findings, as well as future methods for optimizing the CQR process and the replicability of its output.