
Adventures in Specifications Grading

In summer 2020, I attended a Grading Alternatives webinar put on by the Teaching Innovation Center (TIC). Josh Caulkins (former Assistant Director) and Sarah Prosory (Instructional Designer) shared a number of intriguing ideas that, for better or for worse, I decided to try to implement in my spring “boutique” course Parasites and Their Relatives.

The big one is specifications grading. This idea has been featured in this Inside Higher Ed article, this article by Robert Talbert, Ph.D., and more fully in the book Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time by Linda B. Nilson. (No, I did not read the book.) The essential philosophy is to make grades more like badges, indications of a completed contract, and less like value judgments. All assessments are considered either “complete” or “incomplete” according to a detailed rubric. The final letter grade is based on a predefined basket of completes, with the number and/or level of assessments required for an “A” reflecting a higher level of mastery than the basket required for a “B”.

My Course Design

Because I was adapting a course I had already run four times, I wanted to keep a similar assessment structure. My grading scheme was based on the number of successfully completed assignments in each of three essential, equally weighted categories (attendance & participation, reading quizzes, and research project). Students could complete all assessments for an A+, miss one for an A, miss two or three for a B, and so on. An “incomplete” could mean the assignment was never attempted, or that the student attempted it but did not meet the requirements laid out in the rubric. For any attempt deemed “incomplete”, I provided feedback on which aspects needed to be improved for the assignment to be considered “complete”. I allowed students up to three revisions per assignment in the class activity and reading quiz categories, while resubmissions of the research essay drafts were unlimited. The grading scheme also required a minimum level of completion in all three categories to earn a given grade. In this way, I attempted to ensure that grades would reflect mastery of each of the course’s learning objectives.
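The miss-count thresholds described above can be sketched in a few lines. This is only an illustration: the function name is my own, and since the post only spells the scheme out down to “miss 2 or 3 for a B”, the lower grade boundaries are left to a placeholder.

```python
def letter_grade(total_assessments: int, completes: int) -> str:
    """Map a count of completed assessments to a letter grade,
    following the miss-count thresholds described above."""
    misses = total_assessments - completes
    if misses == 0:
        return "A+"
    if misses == 1:
        return "A"
    if misses in (2, 3):
        return "B"
    # The pattern continued ("and so on"); the exact lower
    # cutoffs lived in the course syllabus.
    return "below B (see syllabus)"

print(letter_grade(20, 20))  # A+
print(letter_grade(20, 18))  # B
```

A real implementation would also enforce the per-category minimums noted above, not just the overall miss count.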

How did this function in practice?

The positives.

I’ll admit that one of the main benefits that attracted me to specifications grading in the first place was that I would no longer have to agonize over grading. I have always been uncomfortable with the judgment process of grading because, despite my detailed rubrics, I know I am human and readily biased by factors other than the quality of the student work. For example, the first essay in the pile might be treated differently than the last one, despite my best efforts. (Also, my blood sugar level, my caffeine status, the tone of my most recent interaction with the student … the list goes on.) Inevitably I would doubt myself when leaning toward a lower score, not wishing to hurt my students’ confidence or risk confrontation, and end up with nearly all As at the end of the term. By contrast, with specifications grading I felt more comfortable assessing a binary “complete” or “incomplete”, especially because students still had the chance to revise and resubmit.

More importantly, I see this grading style as encouraging a growth mindset. Rather than telling the students how good or bad their work was, I told them how to improve. I gave feedback on all assignments, but especially on the “incomplete” ones in which I could explain to the student specifically where the deficiencies lay and how to correct them. Philosophically, this felt right: I was no longer the sole (and potentially biased) arbiter of quality, I was a coach, encouraging my students and pointing the way for them to achieve their best work.

Three weeks in, an anonymous and ungraded survey told me that the students understood the grading scheme, despite its foreignness and complexity. 77% either agreed or strongly agreed with the statement “I know what is expected of me in the course”; 15% were neutral. 100% agreed or strongly agreed with the statement “I understand the grading scheme of the course”. I didn’t ask whether they thought I was more like a coach or an arbiter of quality; maybe I should have.

The downsides.

There were some downsides, mainly related to time and effort. Because of the remote, synchronous format, I devised in-class activities for each and every class to promote active learning and break up the class time. Students also had to read articles from the scientific literature and answer a few questions about each (11 total over the semester). All of these needed to be “graded” as complete/incomplete, with specific feedback from me. They took time, but it was the research project that consumed the most time. Covid exacerbated the workload: I was intentionally and transparently flexible about deadlines, so the assignments trickled in. Because I wanted to provide timely feedback for those who wished to revise and resubmit, there was always something for me to grade.

The other main downside was the awkwardness of grading this way on Canvas. Assignments can be set to complete/incomplete, but quizzes cannot; they need to have a score. And there is no easy way to set up a grading scheme whereby a certain number of completes equals a letter grade. I kept a separate spreadsheet where I noted each student’s completed assignments, with a simple COUNTIF function to keep track of the total. Between Canvas’ limitations and the deadline flexibility demanded by Covid, staying on top of everyone’s research projects was more challenging than usual.
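The side spreadsheet amounts to one COUNTIF per student. A minimal sketch of that same tally, with made-up student names and marks:

```python
# Hypothetical gradebook: each student's complete/incomplete marks,
# mirroring the side spreadsheet described above.
gradebook = {
    "student_a": ["complete", "complete", "incomplete", "complete"],
    "student_b": ["complete", "incomplete", "incomplete", "complete"],
}

# The spreadsheet's COUNTIF(range, "complete"), one row per student.
totals = {name: marks.count("complete") for name, marks in gradebook.items()}
print(totals)  # {'student_a': 3, 'student_b': 2}
```

These per-student totals are what the grading scheme then converts to letter grades.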

Would I do it again?

Yes, though if I had higher enrollment, I would either need to reduce the number of assessments requiring feedback or get help with grading (I had only 14 students compared to last year’s 23). I might also consider a hybrid scheme that would work better with Canvas, for example including a category of graded quizzes while the writing assignments stayed on a complete/incomplete basis. Maybe one day, Canvas will build features to facilitate specifications grading; there are certainly plenty of requests for such features on the Canvas forums. The trick, which I suppose is not unique to specifications grading, is to find the balance between providing high-quality, individualized feedback and saving time for other things. I’m planning a new course for spring 2022, so I have the opportunity to design the assessments with a specifications grading scheme in mind. Watch this space for an update.



Post-Author:

Dr. Gillian Gile
is an Associate Professor in the School of Life Sciences and an evolutionary microbiologist who studies the diversity and evolution of microbial eukaryotes, otherwise known as protists. The Gile lab also studies the evolution of plastids, which represent a microbial symbiosis so ancient that the protist and cyanobacterial partners have merged into a single organism.
