In “The Literature of Direct Writing Assessment: Major Concerns and Prevailing Trends,” Huot surveys the large body of literature on writing assessment in an attempt to determine the field’s major areas of focus and attention (as of 1990). As far as scoring essays goes, I was interested in Huot’s description of holistic scoring, which is how I feel I score my students’ essays. According to Huot, holistic scoring involves a “rater’s general impression of the quality of a piece of writing” (238). To me, holistic scoring of essays may involve elements of primary trait scoring and analytic scoring, but an overall determination of the quality of the writing is the goal of a holistic approach. And, particularly during this semester, I have found that grading World Lit essays based on my general impressions of them in their entirety is the most efficient way for me to go.
I was also interested in Huot’s section on raters of papers. I was intrigued by the idea that raters’ expectations about student essays play a significant role in how those essays are scored in the assessment process. He notes studies that determined essays known to be written by honors students were scored higher by raters than essays by other kinds of students. I have found that when I’m grading essays, particularly later in a semester after I’ve had a chance to get to know my students and see what kind of work they do, I find myself grading my best students at a higher level than those students who demonstrate less ability with writing. Perhaps I should figure out a way for my students to turn their papers in to me blindly, so that I don’t know whose paper is whose, and grade them that way?
White, in “The Scoring of Writing Portfolios: Phase 2,” argues that holistic scoring is a wholly inappropriate method for scoring writing portfolios, because holistic scoring was designed for limited and specific circumstances (one or two essays with a set range of requirements), while portfolios cover a much wider range of work from a longer period of time. For White, the key to better scoring of student writing portfolios is the reflective letter students must write, the most important component of the portfolio. In other words, students must think about the work they have done and be able to demonstrate whether that work meets—or does not meet, as the case may be—the requirements of the course, program, major, or whatever overall activity/discipline they were associated with. This means that faculty and administrators of such programs must be able to provide students with clear statements about those requirements from the get-go, so students know what they have to do to meet them. By the end of the program, students’ reflective letters become rhetorical arguments, with the portfolio components serving as specific evidence, that the requirements have been met. I have never scored portfolios for assessment or otherwise, but I find White’s ideas intriguing. Perhaps they can be adapted on a smaller scale for the Comp and World Lit classes I teach, but I do worry about the investment of time (of which I have so little as a Graduate Assistant) required to administer such a grading method.
Royer and Gilles’s essay, “Directed Self-Placement: An Attitude of Orientation,” explores what is, to me at any rate, a unique way of placing students in regular Freshman Composition courses or in preparatory (remedial) Freshman Composition courses: letting students decide for themselves which course they feel they should be in, based on their knowledge of themselves and their reading/writing ability. I admit, while I was reading this article, I found the very idea of “directed self-placement” to be revolutionary, and I started wondering how it might work here at UNLV. In any case, the whole idea of letting students decide for themselves which composition course they should be in places the responsibility for their success or failure on them, rather than on faculty and administrators who, in the past, tried to make decisions about people they did not know based on test scores and GPAs. After using the “directed self-placement” method for their composition courses, Royer and Gilles report good success. They feel that most students who place themselves into an ENG 098 section are those whose test scores and GPAs would have “placed” them there from the get-go. What allowing students to decide for themselves creates is a situation in which students aren’t resentful and angry at a system and its faculty and administrators for putting them into a course they shouldn’t be in. There is just something so sensible and eminently pragmatic about Royer and Gilles’s ideas that seems hard to resist.
Reading the NCTE report, “The Impact of the SAT and ACT Timed Writing Tests,” was interesting and presented results I wasn’t too surprised to read. In March 2005, the SAT and ACT exams came to include a 25-minute timed essay as a required component—adding another hoop that graduating high school students had to jump through in order to get into college. The task force charged with looking at what the SAT/ACT folks were up to raised all sorts of concerns about the new timed essay portion of the exam: concerns about the test’s validity and reliability as an indicator of writing ability, about how implementation of the test would impact writing instruction and curriculum in high school, about the unintended consequences of how such tests might be used, and about equity and diversity. Not surprisingly, to me, the NCTE task force felt that the timed essay would not be a good indicator of student writing ability, that high school teachers would be forced to teach to the test (taking valuable teaching time away from other, more important learning objectives), that the tests could be used as de facto placement tests by colleges and universities, and that students from different socio-economic backgrounds would be unable (for a variety of reasons) to perform as well on such exams as their white, middle-class American counterparts. This report only confirms my mistrust of standardized exams like this.
Lastly, in “An Apologia for the Timed Impromptu Essay Test,” White encourages us not to overlook (and, perhaps more importantly, not to condemn outright) the advantages timed impromptu essays offer. He points out that such tests do involve student writing, rather than the answering of multiple-choice questions about grammar, mechanics, and the literary attributes of texts. Students actually have to write when it comes to essay tests, and that calls for a combination of skills and abilities such as “recalling information, selecting an appropriate vocabulary, constructing sentences and paragraphs, and, somehow, having something to say” (35). But even though White encourages the use of timed essay tests, he does caution us to be aware of their limitations, too. Still, they can be as effective as other methods of assessment (such as portfolios)—it just depends on the circumstances. And so it does. With assessment, there is no one-size-fits-all.
Good reflections, Tony. Thank you for reading that last week as closely as you read throughout the semester. As for “holistic” grading in World Lit, yes, that sounds appropriate. Ideally, you will provide students with a rubric that spells out the criteria for a high-scoring (and low-scoring) paper (e.g., a score-of-5 essay has a clear thesis, references to readings that support the thesis, strong organization, minimal mechanical mistakes, etc.). Portfolio grading sounds like it wouldn’t be necessary. Remember that the English department hasn’t completely resolved the role of writing in that course, as the classes are too large to really be “teaching writing,” and any writing in World Lit would be focused on literature, which is not bad but not the only kind of writing that exists, etc. etc.