Gallagher, Chris W. "Review Essay: All Writing Assessment Is Local." CCC 65.3 (Feb. 2014).
Gallagher takes a critical look at what we mean when we refer to assessment as local. As he writes, "'local' is not the answer; it's a question: What kind of community, neighbor, home shall we be? Local means local responsibility—the obligation to learn how to live well together. It means—because this is what it will take to live well together—challenging entrenched privilege and systemic racism and classism. It means recognizing that our local is interconnected with other locals from which we have much to learn" (487). Gallagher identifies three ways the four books under review invoke the idea of the local:
- "Assessment decisions are always experienced locally—by the people in the places they teach and learn.
- “It also insists that the construct being assessed—writing—is itself a highly contextualized activity, learned and practiced by individuals and groups in specific rhetorical situations—and so assessments of it must be, too…
- Like scrappy Bostonians, compositionists don’t want outsiders—policymakers, psychometricians, the testing industry—imposing their agenda on us” (487-8).
However, each book—while affirming assessment's commitment to the local—also raises questions about how we should use the local as a guiding principle.
Writing Assessment in the 21st Century, Elliot and Perelman
Gallagher draws attention to the book's "plea for collaboration between writing assessment folks in rhetoric and composition and those in educational assessment," namely ETS. This plea takes the form of five chapters contributed by ETS administrators. In response to our call for locality in assessment, "The ETS authors are quick to point out that their ability to make their assessments responsive to our highly contextualized, locally controlled approach is limited by the scale of their respective tests" (489). Condon, in particular, takes up this perceived problem of scale by noting that "a new assessment method, having gained prominence because of its demonstrated superiority to older methods" (read: holistically scored writing samples or even eportfolios), "is scaled up to the point that 'the need for efficiency results in a continual reduction of the new model, until what is left is hardly different from—let alone better than—its predecessor'" (490). We saw this in the reliability checks of holistically scored exams in the move toward the second wave of writing assessment. Condon asks, "is appropriateness and adequacy to local needs sacrificed on the altar of efficiency?" (491). Put simply, should we advocate for preserving the affordances of the local exam, i.e., without scaling up?
Turning to Yancey's piece, the fourth wave of writing assessment may be marked by the tension between external (federal government standards) and local assessments: "we see a shift away from assessment practice driven by a 'single local exigence' toward interconnected 'locals' collaboratively addressing the questions and concerns of an emerging field, acting on what Yancey helpfully terms a 'self-created exigence'" (Yancey 477 qtd. in Gallagher 493). Yancey's concept prompts us to consider how a method or tool of assessment can be scaled across linked inquiries (horizontally) rather than scaled up (vertically), as in Condon's account.
Race and Writing Assessment, Inoue and Poe
As Gallagher writes, this book addresses the concerns raised by Yancey through the idea of "self-created exigence": "the need to elucidate how race functions in writing assessment in order to make our local assessments fairer" (493). He specifically notes Yancey's piece in the final section of the book, which looks at the Insight Resume as a means of offering a "model of 'linked' local institutions adapting the IR or something like it to their local needs and collaborating to 'counter the rhetoric of scientific testing that is race-blind'" (Yancey 184 qtd. in Gallagher 496).
Writing Assessment and the Revolution in Digital Texts, Michael Neal
Gallagher notes Neal's even hand in discussing digital texts and technology's role in the history of writing assessment: while Neal looks forward to how "the digital revolution" can participate in writing assessment in innovative ways, he is also careful not to walk away from the field's belief in the rhetorical nature of writing. For example, in discussing Automated Essay Scoring, he notes that AES technologies cannot read, especially in the rhetorical sense of reading and writing that our field embraces. But Gallagher notes that Neal extends his critique to any means of assessment that "attempts to devalue the human response, expertise, experience, and agency of the reader in an attempt to standardize the procedure for the sake of consistency" (Neal 64 qtd. in Gallagher 498).
Digital Writing Assessment and Evaluation, McKee and DeVoss
Though the collection claims to be a book on assessment, it makes little mention of the core concepts that have historically framed how we discuss assessment: "Nothing from Assessing Writing or Journal of Writing Assessment. Nary a mention of validity and reliability. It would be unfortunate if this omission sent the message to readers that none of the work in writing assessment, either in our field or in educational measurement, is relevant to digital and multimodal composing" (500). Gallagher points to the obvious exception of Mya Poe, who specifically aligns this brave new world of digital technology with concepts of assessment: "'it is not enough,' Poe suggests, 'to articulate criteria on rubrics; we also need to use the best practices articulated in the writing assessment literature in the measurement community'" (Gallagher 501). But also, as Yancey, McElroy, and Powers write, new frameworks for viewing/reading digital texts and the vocabulary we use to talk about and assess such writing "should, at least in part, emerge from the portfolios themselves" (502). However, Gallagher's largest critique concerns the cluster of readings on writing program administration. It is here that he outlines some of the questions we need to consider to think critically about the local in writing assessment:
- "is standardization the only way to achieve consistently high-quality teaching and learning experience?…if we do standardize, how can we ensure that teachers and students maintain agency?
- “What is the relationship between classroom and program assessment? …When classroom assessment is taken out of the hands of teachers, what happens to their professional expertise and judgment?
- “What happens when students write for program-level scorers and raters? How does this influence students’ understanding of the audience(s) and purposes of their writing? How does it affect teachers’ classroom aims and purposes?
- “Does rubric-based outcomes assessment serve the best interests of students, teachers, programs, and the discipline?…
- “To what extent are our local assessments being conscripted into a surveillant managerial agenda? In what ways are they abetting, perhaps exacerbating, problematic labor practices?”