Grading isn’t just about putting a number in a box; it’s a chance to shape how your students see their learning journey. The scale you choose is more than a technical setting; it’s part of your teaching voice. It tells students, “This is what I value, and here’s how you can grow.”
Educational theorists like Dylan Wiliam (2011), a leading voice in formative assessment, remind us that feedback works best when it “moves learners forward.” That means your grading scale shouldn’t just signal where students are now, but also help them understand what’s next.
Imagine you’re grading an essay on environmental policy. Using an A–F scale might make sense if you need to report a final mark to parents or align with national standards. But if your goal is to encourage deeper analysis and critical thinking (hello, Bloom’s higher-order skills), you might pair that letter with a short explanation:
B (84%) – Clear structure and relevant examples, but analysis could go deeper into long-term policy impacts.
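If you keep grades in a spreadsheet or gradebook script, pairing the letter, the percentage, and a forward-looking comment can be automated. Here’s a minimal sketch; the cutoff bands and the `graded_feedback` helper are illustrative assumptions, not a universal standard:

```python
# Hypothetical sketch: pairing a letter grade with a narrative comment.
# The percentage cutoffs below are illustrative assumptions.
GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def letter_grade(percent: float) -> str:
    """Return the letter band for a percentage score."""
    for cutoff, letter in GRADE_BANDS:
        if percent >= cutoff:
            return letter
    return "F"

def graded_feedback(percent: float, comment: str) -> str:
    """Combine the letter, the percentage, and a forward-looking comment."""
    return f"{letter_grade(percent)} ({percent:.0f}%) – {comment}"

print(graded_feedback(84, "Clear structure and relevant examples, but "
                          "analysis could go deeper into long-term policy impacts."))
```

The point isn’t the automation itself; it’s that the comment travels with the number, so students never see a bare mark.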
The 5-point scale — Excellent, Good, Satisfactory, Needs Improvement, Poor — can be a lifesaver for project-based learning or quick peer reviews. For example, in a group presentation on renewable energy, you could give “Excellent” for creativity and “Satisfactory” for data accuracy, then follow up with tips for sourcing stronger evidence.
If you’re tracking progress over time, a descriptive feedback scale like Mastered, Developing, Needs Improvement works beautifully. In a language class, a student might move from “Developing” in verb conjugation to “Mastered” over the term, a visible progression that’s motivating in itself.
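Recording those descriptive levels over the term also lets you surface the progression automatically. A small sketch, assuming the three-level ordering from the scale above (the student record is hypothetical):

```python
# Hypothetical sketch: summarizing movement on a descriptive feedback scale.
# The level ordering is an assumption matching the scale in the text.
LEVELS = ["Needs Improvement", "Developing", "Mastered"]

def progress(history: list[str]) -> str:
    """Summarize movement between the first and latest recorded level."""
    start, latest = history[0], history[-1]
    delta = LEVELS.index(latest) - LEVELS.index(start)
    if delta > 0:
        return f"Moved up from '{start}' to '{latest}'"
    if delta < 0:
        return f"Slipped from '{start}' to '{latest}'"
    return f"Holding steady at '{latest}'"

# A language student's verb-conjugation record across the term:
print(progress(["Developing", "Developing", "Mastered"]))
# Moved up from 'Developing' to 'Mastered'
```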
Percentages, of course, give precision, but as Paul Black and Dylan Wiliam’s (1998) research warns, a percentage without context often has little learning value. That’s why pairing numbers with narrative comments turns them into tools for reflection.
Clarity is the antidote to confusion and “grade gaming.” If “Excellent” in your class means “original thought, supported by evidence, and flawless mechanics,” then spell it out, literally.
Here’s how a science teacher might frame it for a lab report:
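One way to make it explicit is to write the criteria down where both you and your students can see them. This sketch is purely illustrative; the wording of each level is a hypothetical example, not a published standard:

```python
# Hypothetical lab-report rubric: each level spells out exactly what it means.
# The criteria wording below is an illustrative assumption.
LAB_REPORT_RUBRIC = {
    "Excellent": "Original analysis, every claim supported by data, flawless mechanics.",
    "Good": "Sound analysis with minor gaps; most claims supported by data.",
    "Satisfactory": "Accurate summary of results, but little independent analysis.",
    "Needs Improvement": "Results reported without interpretation; several mechanical errors.",
}

def explain(level: str) -> str:
    """Return the explicit criteria behind a grade level."""
    return f"{level}: {LAB_REPORT_RUBRIC[level]}"

print(explain("Excellent"))
```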
This explicitness aligns with criterion-referenced assessment (Sadler, 2005), where students are measured against standards rather than each other. It also prevents the “but I tried hard” argument from turning into unearned grade inflation.
Grades should mean the same thing over time. If an A this year is easier (or harder) to earn than it was last year, students lose a sense of fairness. Preventing grade inflation or deflation starts with applying your standards consistently to every student, every time.
Borderline cases are where inconsistency can creep in. If a student’s work hovers between two grades, ask yourself: Does the evidence clearly support the higher grade? Or am I being influenced by effort, personality, or previous performance? Review these decisions periodically to make sure your scale still reflects your intended rigor. When you’re unsure whether a current essay deserves a 4 or 5, compare it to past examples. This is a method often used in moderation processes in IB and Cambridge programs to keep standards steady.
Let’s say a student scores 68% on a math test. That number alone might feel disappointing. But with a comment like:
“Your understanding of linear equations is solid; spend more time practicing factorization problems from Section 4.2,”
the same grade becomes a roadmap for action.
Carol Dweck’s growth mindset research (Dweck, 2006) shows that when students see ability as something that can improve through effort and strategy, they respond better to challenges. Instead of seeing a grade as a final verdict, students start to see it as feedback on a work-in-progress. A 68% becomes “Here’s where you are now, and here’s how to get to 80% next time.”
Effective feedback has three parts: it identifies what was done well, points out what can improve, and gives a clear next step. Vague statements like “needs more detail” don’t cut it. Compare these:
❌ Needs more detail.
✅ Include two more examples of renewable energy sources and explain their environmental impact.
This aligns with Hattie and Timperley’s feedback model (2007), which emphasizes “Where am I going? How am I going? Where to next?”
And don’t underestimate the tone. “You missed three key points” feels final and judgmental. “You’ve nailed the structure; now let’s work on expanding your evidence” keeps the door to improvement open. As Dylan Wiliam puts it, “The purpose of feedback is to reduce the gap between where the student is and where they are meant to be.” Your grading scale is one of the most powerful tools you have to make that happen.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74.
Dweck, C. S. (2006). Mindset: The new psychology of success. Random House.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175–194.
Wiliam, D. (2011). Embedded formative assessment. Solution Tree Press.