Publication:
Consensual Qualitative Research: Replicability of Results and Social Reliability of Process

dc.contributor.advisor: Michael J. Constantino
dc.contributor.author: Morrison, Nicholas
dc.contributor.department: University of Massachusetts Amherst
dc.date: 2024-03-28 16:44:24
dc.date.accessioned: 2024-04-26T15:35:37Z
dc.date.available: 2024-04-26T15:35:37Z
dc.date.submitted: September 2019
dc.description.abstract: To further solidify their scientific footing, qualitative approaches would ideally demonstrate that they yield replicable information about a phenomenon under study. Although consensual qualitative research (CQR; Hill, 2012) proposes a rigorous, multistep method to enhance interjudge reliability and instill confidence in the results, it remains unclear whether multiple uniformly trained teams analyzing the same stimulus set would arrive at similar analytic output (i.e., replicability—a high form of trustworthiness). Moreover, it is unclear whether replicability (or lack thereof) might be influenced by the process through which CQR judges arrive at their output (i.e., social reliability). Addressing these gaps, this exploratory study employed mixed methods to evaluate replicability and social reliability between 2 teams that each consisted of 4 randomly assigned judges. These judges were uniformly trained in CQR before the teams separately analyzed 12 transcripts of semi-structured interviews assessing mental health care consumers' perspectives on using provider performance information to inform their treatment decisions. Replicability was examined quantitatively and qualitatively by comparing the output elements established by the CQR teams (i.e., domains, categories, core ideas, and core idea exemplars). Social reliability was examined quantitatively and qualitatively by comparing the teams on objective group process and self-reported group climate. Replicability results were fairly nuanced. Whereas the teams tended to perceive similar content comprising domains, categories, and core ideas, they notably differed in their level of abstraction. The teams also differed markedly in how representative they judged the information discussed among the interview participants to be. Moreover, the team that demonstrated more vs. less abstraction also generated more representative findings, spent more time analyzing transcripts, divided time spent discussing their perspectives more equitably, evidenced fewer auditor disagreements, and reported a more positive group climate than the other team. Results preliminarily inform the practical utility of existing CQR findings, as well as future methods for optimizing the CQR process and the replicability of its output.
dc.description.degree: Doctor of Philosophy (PhD)
dc.description.department: Psychology
dc.identifier.doi: https://doi.org/10.7275/14792839
dc.identifier.orcid: https://orcid.org/0000-0002-7268-1170
dc.identifier.uri: https://hdl.handle.net/20.500.14394/18056
dc.relation.url: https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=2734&context=dissertations_2&unstamped=1
dc.source.status: published
dc.subject: consensual qualitative research
dc.subject: qualitative methods
dc.subject: replicability
dc.subject: social reliability
dc.subject: mental health care patients
dc.subject: provider performance
dc.subject: Psychological Phenomena and Processes
dc.title: Consensual Qualitative Research: Replicability of Results and Social Reliability of Process
dc.type: openaccess
dc.type: dissertation
digcom.contributor.author: isAuthorOfPublication|email:nicholas.r.morrison@gmail.com|institution:University of Massachusetts Amherst|Morrison, Nicholas
digcom.identifier: dissertations_2/1799
digcom.identifier.contextkey: 14792839
digcom.identifier.submissionpath: dissertations_2/1799
dspace.entity.type: Publication
Files
Original bundle:
Name: Morrison_Dissertation_Grad_School_Submission.pdf
Size: 961.04 KB
Format: Adobe Portable Document Format