Camp, Roberta. “Changing the Model for the Direct Assessment of Writing.” AW.
In light of the movement from indirect toward direct measures of writing, Camp takes a closer look at the validity of using multiple-choice tests and holistic scoring of the one-off writing sample. In particular, she notes the ways these formats define a particular construct of writing and explores whether that construct reflects what we want in judging student writing ability.
She begins by offering a brief history of the motivation behind shifting between indirect and direct measures of writing ability, pointing to the push and pull of reliability and validity as factors driving movement between these formats. She notes that although the two formats were initially assumed to measure an identical construct, they in fact measure different skills and knowledge. In either case, they may still be limited in what they can measure; in particular, the single sample of writing remains a limited basis for making judgments about a student’s academic career (106): “the timed, controlled conditions for writing that once seemed the means to ensure equal opportunity to all test takers now seem unnatural limits that preclude use of the processes, among them interactions with others, that we now understand to be part of most writing” (108).
In this way, Camp takes up the question of the constructs of writing that the tests evaluate: “Performance on the writing sample no longer appears to be an adequate representation of the accepted theoretical construct of writing nor does it seem an adequate representation of students’ likely experiences with writing, past or future, or with the skills and strategies called upon in those experiences” (108). She notes, in particular, that little attention is given to writing formats that represent “processes related to the communicative contexts of writing…We discover that we have excluded from the assessment many of the experiences and resources that motivate and shape writing, especially for novice writers and for writers who do not easily see themselves as participants in the academic discourse community” (112). In this way, these formats “deprive many student writers of the advantages that comes with writing for genuine communicative purposes and contexts” (113). (Though we might consider what “genuine communicative purposes and contexts” means.) Camp notes that without such considerations, students who are unfamiliar with mainstream culture and the discourses of academic settings—namely, ELL students and students from poor socio-economic environments—will be at a particular disadvantage.
With these considerations, Camp places validity, and the research surrounding it, at the center of her discussion: “all evidence for validity is to be interpreted in relation to the theoretical construct, the purpose for the assessment, and therefore the inferences derived from it, and the social consequences” (116). She continues, “in the case of writing, we should think about whether our assessment adequately represent writing as we understand it” (116). Psychometric approaches to writing, however, cannot be of much help here because they lack a theory of learning; thus, they do not consider the impact that various assessment formats have on teaching and learning.
Camp concludes by noting that portfolios appear to show some promise: “Portfolios can provide evidence of complex and varied performances of writing, of writing generated in rich instruction and social contexts, of the processes and strategies that students use, and of their awareness of those processes” (125).