Doctoral Dissertation: "Charting Chronicity: A Thematic Analysis of Clinical Pain Assessment Tools and Patient Documentation Design"
"Expert Audiences’ Ability to Differentiate Between Authentic and Synthetic Literacy Narratives"
Scholars in the Rhetoric of Health and Medicine (RHM) and Disability Studies have long focused on medical discourse and how it shapes perceptions of chronic disease. According to the Centers for Disease Control and Prevention (CDC), pain is the leading cause of disability and the leading reason for seeking medical care today. Widespread disparities exist in chronic pain assessment across race, gender, class, and sexuality, especially for female-identifying patients, whose discomfort is often downplayed or dismissed. My dissertation investigates how clinical patient documentation rhetorically frames chronicity and the body in pain. Ultimately, my study reveals, from a rhetorical perspective, how the design of assessment documentation contributes to these disparities.
While pain has been studied ideologically across clinical practice through multiple ontologies and stasis theory (Graham & Herndl, 2011, 2018), very little rhetorical work examines the pain assessment process itself. As a rhetorical scholar, I am interested in the written, technical decisions embedded in patient intake forms and pain scales that drive the assessment process. My research homes in on how routine documentation practices shape the assessment decisions that are made in the first place. I conduct a thematic analysis that maps the major narratives across these technical artifacts, paying close attention to how their content and design map onto the rhetorical concepts of chronos (time) and hexis (one's bodily state). This thematic analysis affords new humanistic insights into how pain, Western notions of health and wellness, and disability are baked into seemingly objective forms of medical documentation.
My major findings reveal that chronic pain documentation still relies upon acute, universally designed measurements of pain. While quick, standardized assessments are clinically valuable because they allow practitioners to work efficiently, these methods assume a simplicity that masks the usability needs of those living in chronicity. In brief, U.S. pain and spine documentation still largely values kairotic time over the “flux,” or physical uncertainty, that comes with living with chronic pain, disability, or other long-term medical conditions. My dissertation offers novel insights into how this genre of technical communication can benefit from a Patient Experience Design (PXD) approach, which seeks to enhance the usability of technical tools and documentation from a human-centered, patient perspective (Melonçon, 2017).
"Enhancing Rural Healthcare by Incorporating Generative AI and Machine Learning: Building Stronger Communication Networks"
My research in RHM naturally lends itself to other scholarly conversations about access and equity. A person's health literacy and access to care often dictate their quality of life. This is especially true in rural areas, where a shortage of healthcare facilities and medical professionals contributes to chronic health conditions and a lack of ongoing care. This project addresses the communication challenges faced by rural healthcare clinics in the north-central United States, particularly in Indiana, where geographic isolation, financial constraints, and limited resources lead to healthcare disparities.
My research team (Dr. Richard Johnson-Sheehan, Dr. Thomas Rickert, and Paul Hunter) and I have recruited via snowball sampling, surveyed, and interviewed a range of healthcare practitioners working with rural populations to identify their daily communication needs. These practitioners struggle with inefficient communication networks, generic patient education materials, and a lack of specialized resources. Our aim is to develop rhetorically driven strategies in which generative artificial intelligence can be used to increase the bandwidth of medical staff and improve communication between providers and patients. The project involves three phases: assessing current communication practices, developing AI methods to strengthen these practices, and educating healthcare professionals on implementing AI. We will present our initial findings at this year's Council for Professional, Technical, and Scientific Communication conference.
"Enhancing Rural Healthcare by Incorporating Generative AI and Machine Learning: Building Stronger Communication Networks"
Challenges persist in integrating large language models into writing education and distinguishing between AI-generated and student-produced texts.
My collaborators (Dr. Mason Pelligrini, Paul Thompson Hunter, and David Rowe) and I seek to answer the following research questions: (1) Can writing instructors at U.S. colleges accurately differentiate between synthetic (AI-generated) and authentic (human-written) proposals without contextual cues? (2) What strategies, as attested to by writing instructors, serve to accurately differentiate between synthetic and authentic texts? (3) What strategies, as attested to by writing instructors, prove ineffective for such differentiation? And (4) are there verifiable relationships between U.S. college writing instructors' teaching experience, education level, AI familiarity, or other demographic factors and their success in differentiating between authentic and synthetic texts? Proposals across Writing in the Disciplines (WID) present a promising avenue for empirical research because they require both persuasion and originality. This project will identify the major strategies participants use to distinguish between these texts and offer new insights into instructional expertise, effective assessment practices, and teaching authentic prose in professional contexts.
Our research is based on the recently articulated concept of Rhetorical Authenticity, an AI-informed theory that synthesizes Erich Fromm's true/pseudo self and Aristotelian ethos (Deptula et al., 2024). We employ a mixed-methods research design in which writing instructors will be given a series of proposals written either by undergraduates or by GPT-4. Instructors will complete a series of survey questions, using a Likert scale to rate their confidence in whether each text was created by a human or by AI, and will also provide brief open-ended responses explaining how they reached their conclusions.
Adrianna Deptula
(she/her/hers)
Rhetoric & Composition PhD Candidate
Bilsland Fellow
Purdue University
adeptula@purdue.edu