Editor’s note: This commentary is by Peter Berger, an English teacher at Weathersfield School, who writes “Poor Elijah’s Almanack.” The column appears in several publications, including the Times Argus, the Rutland Herald and the Stowe Reporter.

A few weeks back Vermont newspapers headlined that students’ science scores had declined on our statewide test. For some of you, those who don’t live in Vermont, this constitutes a less than significant announcement. For those of you who do live in Vermont, this is also a less than significant announcement.

For the three grade levels at which the test is administered – fourth, eighth, and 11th – statewide scores dropped 3, 6 and 1 percent, respectively. As with most standardized tests, no matter where you live, scorers use subjective rubrics to decide, for example, whether a student’s answer reflects a “thorough,” “general,” “limited,” or “minimal” understanding. According to officials, scorers can precisely and scientifically make those determinations because a “general” answer, for instance, includes an entirely unspecified number of “errors and omissions” as opposed to a “limited” response, which includes “several errors and omissions.”

Can you count to “several”? I can’t.

On a four-point scale, an answer with “errors” earns three points while “several errors” earns two points. That’s a 25 percent variation on the scoring scale based on the difference between “I don’t exactly know how many” and “several.”

Further compromising the “data,” students work on parts of the test in randomly assigned groups. Do you think a student’s scientifically determined individual score might be somewhat influenced by how smart the other students in his group happen to be?

The insignificance of marginal variations aside, if a statewide average markedly declines, or improves, from one year to the next, there are two possible explanations. Either the statewide student body got remarkably dumber, or smarter, from one year to the next, which isn’t likely, or, despite test promoters’ claims of statistical consistency and precision, the test accidentally got harder or easier. Publishers and education officials have had to swallow plenty of that assessment crow over the past two decades of our national testing frenzy.

Welcome to the world of modern assessment.

Don’t misunderstand. When I’m grading an essay, I’m not always perfectly scientific either. The difference is that I don’t claim to be, and I don’t base a whole year’s grade on one test. Also, my grades cost a lot less.

Somehow education policymakers and officials aren’t overly troubled by these repeated, expensive assessment fiascoes. Instead, some have once again focused their attention and wrath on the letter grades parents are accustomed to finding on their children’s report cards. According to reformers, A-F report cards provide only confusing “hodgepodge” grades that are “impossible to interpret” and “rarely present a true picture of a student’s proficiency.” Critics complain that letter grades inappropriately mingle, and thereby blur, evaluations of academic competency, effort, and progress, and that teachers should instead award a separate grade in each category.

You don’t need a separate grade in “progress” to determine if a student has made any. You just need to track how well he does in succeeding quarters and years.

As for effort, we gave effort grades in my school for decades until our new computer grading system made awarding them a clerical nightmare. Our middle school teacher team also gave each student checkmarks in various areas that contribute to academic success, like homework completion and class participation, until our new computer program eliminated our ability to do that.

I’m not going to pretend that all the complexities of a student’s academic performance can be fully captured in a single alphabet character. Letter grades are a shorthand system, nothing more and nothing less. The question is, are they an effective shorthand? Do they communicate what parents want and need to know about their children’s school performance?

Any grading system short of a comprehensive narrative, whether it consists of 4’s and 1’s, or A’s and F’s, is shorthand. Teachers’ comments have long provided a brief narrative note about areas of particular concern or interest when a letter grade needs specific explanation. For example, I commonly add a note when incomplete work and missing assignments have lowered a student’s overall grade. I also add comments about everything from strong effort to exemplary class participation.

I’ve found that for most parents this is enough. Those who want more information can and do call, write, or arrange for a conference. Over the years I’ve talked with many parents, and a face-to-face conversation almost always answers any lingering questions they have.

Reformers’ latest recycled grading marvel, standards-based grading, rests on the assumption that parents aren’t satisfied anymore with a summary A through F grade in English, for example, but instead want multiple 4 through 1 grades in specific language arts categories. I’m skeptical as to the validity of those category grades, especially since so many assignments simultaneously involve and assess a combination of skills. I’m also not sure how many parents actually want separate grades assessing their child’s performance in “narrative” as opposed to “explanatory” writing, just two of the 10 new standards-based language arts grades each elementary student in my school will begin receiving this year.

Promoters insist these elaborate standards-based changes are a response to parental demands for better communication and reporting. Ironically, as part of the standards-based move to better communicate with parents, my students’ parents will now be receiving three report cards a year instead of four. Apparently, according to advocates, you can’t really compute meaningful standard-based grades over a nine-week marking period, prompting the shift to 12-week trimesters.

A grading system that can’t produce meaningful results in nine weeks isn’t much of a grading system. Changing how you report what students know doesn’t change how much they know. It’s also hard to reconcile proponents’ claims that standards-based grading is a response to parental demands when in districts where it’s been implemented, so many parents don’t like it.

Yeah, but what’s my kid’s grade?

Like the Common Core, standards-based grading isn’t the grassroots idea its boosters claim it is. It didn’t originate with parents. It was born in the fevered imaginations of experts and theorists who are strangers to the real world of kitchen tables and classrooms.

For how long will so few be permitted to visit so much folly and harm on so many?

Pieces contributed by readers and newsmakers. VTDigger strives to publish a variety of views from a broad range of Vermonters.
