McKee & DeVoss, Digital Writing Assessment and Evaluation

McKee, Heidi A. & Dànielle Nicole DeVoss. Digital Writing Assessment and Evaluation (Preface, Chapters 1, 3, 5, 6, 7, 8, & Afterword)

Preface, Heidi A. McKee and Dànielle Nicole DeVoss

In their preface, McKee and DeVoss write that their edited collection is designed around two emphases: First, they emphasize assessment and evaluation of digital writing; of note, they claim that nearly all writing today is digital “because it exists as pixels and bits on a computer at some point in the composing process” (2). Second, they emphasize how digital technologies have changed the delivery and assessment of writing (both digital and traditional) and of writing instruction.

Chapter 1: Making Digital Writing Assessment Fair for Diverse Writers, Mya Poe

Poe’s concern is with the ways digital writing assessment, as it becomes more commonplace, is increasingly “laden with a range of ideological values,” which prompts a renewed discussion about how to make such assessments fair for students of color, working-class students, and students with disabilities. As she writes, few validation studies have examined the impact that large-scale digital writing assessments have had on these populations of students; making such assessments fair involves attending to both the construct and the consequences of digital writing.

She begins by drawing upon critical theories of technology—particularly those forwarded by Neal—to frame her discussion. She first notes that assessment is itself a technology: a test is “something put together for a purpose, to satisfy a pressing and immediate need, to solve a problem” (Madaus qtd. in Poe 3). Understanding assessments as technologies also allows us to see that tests are culturally constructed realities whose designs “create what they supposedly measure”; thus, all assessments are laden with ideological values. Likewise, considering assessment with technology—such as computer-assisted scoring—demonstrates that such technologies are built on assumptions that are racialized.

From here, Poe offers some core definitions that lead to her central focus: fairness. Validity refers to “the degree to which evidence and theory support the interpretation of test scores entailed by the proposed uses of tests,” and reliability refers to “the consistency of [tasks and scoring procedures] when the testing procedure is repeated on a population of individuals or groups” (4-5). Finally, fairness—her central concern—refers to “assessment procedures that measure the same thing for all test takers regardless of their membership in an identified subgroup” (5). She highlights, in particular, the way that fairness directs attention to the intended and unintended consequences—or impact—of assessments. She writes, explicitly, “fairness, in other words, is not about making better assessments; it’s about making better decisions—decisions that have a meaningful, positive impact on student learning” (6).

Using the Standards for Educational and Psychological Testing, Poe highlights a few guidelines that can lead to fairness in assessment. Design can include how we “theorize the construct of digital writing and ensure that we have a meaningful assessment process in place” (7). Moreover, validation studies must “define groups and then to compare differences between groups” (7); likewise, we should collect evidence based on students’ digital identities: not just access to digital technologies, but also “frequency and conditions of access, type and place of access, attitudes towards digital technology, prior experiences and parental influences, and kind of devices that individuals use” (9). Once we gather such evidence regarding the differences between these groups and identities, decisions should be made not only about the assessment and testing but, more importantly, about curricular interventions: “the problem comes when our curriculum doesn’t do something about that disparity, and we perpetuate those inequalities through the combination of testing and curricular interventions” (11). She references construct irrelevance, where students are tested on content (like Facebook) that could privilege a particular experience (e.g., Chinese students are more familiar with Renren than with Facebook). She also considers how differences in devices, such as mobile devices, can shape assessment.

Chapter 3: Seeking Guidance for Assessing Digital Compositions/Composing, Charles Moran & Ann Herrington

“Where are we to find guidance in assessing this new, digital writing? We have examined four potential sources: tip sheets developed in and by industry; position papers and standards documents offered by professional societies and councils; recent resources offering general guidance on assessing digital writing; and full accounts by teachers and their situated practices…In the teachers’ accounts…we get as close as we can to the full context” (13).

Chapter 5: ‘Something Old, Something New’: Evaluative Criteria in Teacher Responses to Student Multimodal Texts, Emily Wierszewski

Based on eight teachers’ evaluative (and verbal) responses to their students’ multimodal projects, Wierszewski argues, “the criteria teachers use to assess multimodal texts are predominately aligned with print criteria.” Using Connors and Lunsford’s study of teacher comments on students’ print projects as a point of departure, Wierszewski uncovers the kinds of evaluative comments that emerge beyond those described by Connors and Lunsford. She asks, “what print values do teachers use when they assess multimodal work, and what kinds of criteria seem to be unique to new, multimodal pedagogies?” Notably, Wierszewski looks specifically at the inclusion of comments on multimodality as well as at the connection between form and content: “Specifically, teachers need to consider the relationship between rhetorical goals and the new kinds of textual features and choices that students created or risk a return to formalism.”

Her study draws on teachers who have encountered and assigned multimodal projects with varying motivations: “In Anna and Susan’s case, for instance, multimodality was integral to the professional and technical writing courses they taught decades before ‘multimodality’ became a buzzword in Rhetoric and Composition…For Leah and Marie, however, multimodality wasn’t always a part of their pedagogy. It was only through professional development after their teaching careers had already begun that they were encouraged to ask students to compose multimodal texts. Finally, for Joe, multimodality has primarily functioned as part of his teaching presentation, rather than as a type of student composition.” As Wierszewski writes, the teachers most likely to offer comments beyond those described by Connors and Lunsford were those with the most experience assigning and assessing multimodal texts; conversely, the responses of those who had incorporated multimodal texts only recently tended to contain fewer multimodal-specific comments.

She sums up her research succinctly here: “The results of this study suggest a great deal of congruence between the types of comments teachers made on their students’ multimodal texts and the kinds of comments teachers made on students’ print essays decades ago in Connors and Lunsford’s (1993) study. The top four most frequent evaluative comment types in this study—formal arrangement, overall, organization, and audience—all overlapped with categories found in Connors and Lunsford’s data set.”

Chapter 6: Stirred, Not Shaken: An Assessment Remixology, Susan H. Delagrange, Ben McCorkle, and Catherine C. Braun

The purpose of this chapter is to offer assignments that encourage students to remix; such assignments give students the opportunity “to engage with new tools, techniques, and technologies of multimodal composition” and “invite occasion to reflect upon and raise critical awareness of how student projects fit into this larger contextual framework of intellectual production and derivative use” (2). Such assignments, then, prompt the authors to rethink the act of assessment, namely, how assessment intertwines with learning. As they write, assessment becomes a heuristic for the transformation of habits of mind. Each author references the idea of distributed assessment forwarded by Manion and Selfe, where assessment weaves throughout the course and invites student participation in and reflection upon their assessments.

Delagrange offers the idea of an evolving, collaborative rubric whereby students and teacher alike build a vocabulary for visual design and rhetoric (through examples) and construct a rubric from these discussions of examples and terms. McCorkle focuses his attention on the Fair Use Guidelines as both a heuristic for assessment and as assessment itself. The Fair Use Guidelines act as a common language that allows students to draw attention to their purpose with the remixed text, their potential audience, and the coherence of the visual’s argument. The four factors used to determine fair use are:

  1. The purpose and the character of use, including whether such use is of commercial nature or is for nonprofit educational purposes;
  2. The nature of the copyrighted work;
  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
  4. The effect of the use upon the potential market for, or value of, the copyrighted work.

Finally, Braun confronts students’ preconceived notions of writing—often developed and socialized in print-centric contexts—and seeks to disrupt such understandings by building syllabi and assignments that focus on the knowledge-making process (much like Broad’s dynamic criteria mapping [DCM]).

Chapter 7: Developing Domains for Multimodal Writing Assessment: The Language of Evaluation, the Language of Instruction, Elyse Eidman-Aadahl, Blair, DeVoss, Hochman, Jimerson, Jurich, Murphy, Rupert, Whithaus, and Wood

The Multimodal Assessment Project (MAP), part of the NWP’s Digital Is… Initiative, offers five domains that can aid in developing a common language to discuss digital assessment: “Taken together as a full set, these domains name interrelated areas of interest and learning significant in students’ growth as digital writers” (2). Those five domains are:

  1. Artifact: “final consumable (readable/viewable) product that stands on its own, can travel across space and time, and offers readers a coherent message through an appropriate use of structure, medium, and technique” (4).
  2. Context: “domain that helps explain how the artifact fits into the world. Context encourages us to ask about the environments surrounding the creation of the artifact and how the artifact enters into the world” (6).
  3. Substance: “the content, overall quality, and significance of the ideas presented. The substance of a piece is related to an artifact’s message in relationship to the contextual elements of purpose and audiences. Considering the substance of a piece encourages us to think about four main areas: quality of ideas, credibility, accuracy, and significance” (11). In a way, this refers to the ways in which a composer creates something meaningful.
  4. Process Management and Technique: “the skills, capacities, and processes involved in planning, creating, and circulating multimodal artifacts” (2). The authors refer to both the technical elements to make a site functional (digital skills, knowledge of file formats, software) and awareness of rhetorical principles.
  5. Habits of Mind: “patterns of behavior or attitudes that reach beyond the artifact being created at the moment” (2). This refers to the ways in which a writer’s epistemologies have been transformed—this looks more toward the incremental, consistent learning of a student, scaffolded over time.

Chapter 8: Composing, Networks, and Electronic Portfolios: Notes toward a Theory of Assessing ePortfolios, Kathleen Blake Yancey, Stephen J. McElroy, and Elizabeth Powers

The movement toward eportfolios has created “a new exigence for assessment” that prompts us to consider “a new vocabulary, a new set of criteria, a new set of practices, and a new theory congruent with the affordances that eportfolios offer” (2). Using an extended reading of one student’s eportfolio project (not attached to any class), the authors trace the ways Kristina (the eportfolio composer) creates a space for meaning-making. In this way, “readers become participants, control outcomes, and shape the text itself” (3). Eportfolios, of course, have been used in assessment contexts, and reading (and thus making meaning from) an eportfolio in an assessment context often prompts a reader/viewer to “quit” once they have answered the question of evaluation. However, the authors propose reading an eportfolio with the purpose of understanding (or tracing, see: Rice) the “network of relationships an eportfolio stipulates and evidences through multimedia texts” (4).

Central to the authors’ new theory of eportfolio assessment is the idea that the eportfolio as a composition offers many possible physical constructions of the text, of which the reader creates one (Bernhardt). The authors inquire into how we can represent our readings, since two readers/viewers may come to different readings: they offer the concept of visual maps, or “pinning up,” which refers to a tactile means of hanging ‘pages’ in relation to one another on a spatial plane (i.e., a wall). In doing so, the authors theorize a new kind of reading: “viewing/reading, which bridges what in an eportfolio is not a dichotomy (between print and digital, between page and screen), but rather a set of continuous practices” (10). In other words, while an eportfolio is, of course, electronic, our theory of reading/viewing such electronic texts involves iterative reading processes that exceed the screen.

Afterword: Not Just a Better Pencil, Edward M. White

Ed White’s concern, in the closing chapter of this edited collection, is the “darker side of technology,” namely that of automated essay scoring: “it is clear to me that we need to distinguish the issues of writing in a digital environment from those of assessment in a digital environment, a difficult set of problems that has become exceedingly complex and tangled in a political and economic as well as pedagogical problem” (2). White sees potentially detrimental effects of assessing with automated technology: it has the potential to define the construct of writing for students, but also, as assessment trickles into the classroom, “readers are sometimes urged to read more like computers, so that high score correlations can be obtained” (2). In other words, White sees an ecological (though he doesn’t use that word) effect of automated essay scoring on the entire landscape of writing.
