Fall Back to Feedback

My origin story: In my first year of doctoral research, I stumbled upon a 1986 study by Ruth Butler and Mordecai Nisan about feedback. It was more than 25 years old, and as a teacher for most of that time, I had never heard about the results. Once I did, it changed everything…everything. Fueling a clearer understanding of feedback and its value became my calling, and edTech resources were the method. In light of the AI revolution, fast, fair, and quality feedback has NEVER been more achievable.

Check the citation below and read the seven-page study (the link is at the end of this post). The methodology is clear, and the statistical data are provided. The exploration that follows presents percentage comparisons that are mind-blowing.


Population Sample

Researchers worked with 261 sixth-grade students (145 girls and 116 boys) at their own school, located in a middle-class area of the city.

Methodology

Classes were randomly assigned to three conditions: 1) students received brief comments on their performance; 2) students received grades, but no comments; and 3) students received no evaluation at all, neither grades nor comments.

Subjects participated in three sessions over several days. Their tasks involved questionnaires, word-related activities, and writing activities from the Torrance & Templeton (1963) divergent thinking “uses” test. The activities were identical across all three groups. Subjects were given five minutes to complete each of two activities before the materials were collected.

The activities were scored according to the Torrance & Templeton standards. These “scores” were NOT what was provided later to the subjects; the statistical results appear in the article's tables. Descriptive comparisons are provided below.

Subjects were told that the activities were constructed by the experimenters to see how different children would complete their answers. Two days after Session 1:

  • Subjects in Group 1 received their workbook with written comments (one sentence with a phrase identifying one area of strength and a phrase about one area which could be improved);
  • Subjects in Group 2 received their workbook with a grade, ranging from 30 to 100; and
  • Subjects in Group 3 received their workbook with no feedback.

After receiving their returned Session 1 work, subjects completed Session 2 using the same procedure. Session 3 was conducted on the same day as Session 2, but approximately two hours later. Subjects received their returned Session 2 work before completing Session 3. Session 3 results were not returned to subjects.

Results

Comparing scores from the first session (Session 1) with the last (Session 3), students who received comments on their work (Group 1) demonstrated:

  • 68% more short words
  • 32% more long words
  • 43% higher scores on Task A
  • 31% higher scores on Task B
  • 17% higher fluency
  • 22% higher flexibility in language
  • 70% more elaboration
  • 58% more originality

Some educators would consider the case closed on this finding alone. Any educator in any grade level in any subject area would welcome improvements in any or all of those areas over a semester, but this happened in two days…as a result of one sentence of feedback on their work. However, the story does not end there.

Students who received grades only (Group 2) demonstrated the following changes in performance between Session 1 and Session 3 (differences from Group 1 are shown in parentheses; see the note after this list):

  • 71% more short words (3% higher than Group 1)
  • 8% more long words (24% lower than Group 1)
  • 32% higher scores on Task A (11% lower than Group 1)
  • 32% lower scores on Task B (63% lower than Group 1)
  • 7% lower fluency (24% lower than Group 1)
  • 52% lower flexibility in language (74% lower than Group 1)
  • 58% less elaboration (128% lower than Group 1)
  • 59% less originality (117% lower than Group 1)
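
A quick note on the arithmetic: the parenthetical comparisons in this list and the next appear to be simple percentage-point gaps between each group's change and Group 1's change (for example, Group 1's 70% gain in elaboration versus Group 2's 58% loss is a 128-point gap). Here is a minimal sketch of that calculation, using made-up variable names rather than anything from the study:

```python
# Illustration only: the parenthetical comparisons appear to be
# percentage-point gaps relative to Group 1's change from Session 1 to 3.
group1_change = {"elaboration": +70, "originality": +58}   # % change, comments group
group2_change = {"elaboration": -58, "originality": -59}   # % change, grades-only group

for measure, g1 in group1_change.items():
    gap = g1 - group2_change[measure]
    print(f"{measure}: Group 2 trails Group 1 by {gap} percentage points")
# elaboration: Group 2 trails Group 1 by 128 percentage points
# originality: Group 2 trails Group 1 by 117 percentage points
```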

Therefore, receiving grades on a task diminishes the likelihood of success on subsequent practice of the same task. This runs counter to the educational mindset that students will be interested in improving their grade. Granted, the “grades” these students received did not count toward a course grade that would end in passing or failing the class; motivation would arguably be higher in that circumstance. Setting that end goal aside, however, grades resulted in poorer performance than feedback did. This speaks volumes to the idea that formative assessment should be based on feedback, not on scores. Administering grades for formative work only pleases parents as score-keepers. Meanwhile, it disrupts progress for learners. Still, the lessons learned from this study do not end there.

Students who received no feedback at all (Group 3) declined in all but one performance area:

  • 42% more short words (26% lower than Group 1)
  • 47% fewer long words (80% lower than Group 1)
  • 26% lower scores on Task A (70% lower than Group 1)
  • 38% lower scores on Task B (68% lower than Group 1)
  • 21% lower fluency (38% lower than Group 1)
  • 46% lower flexibility in language (68% lower than Group 1)
  • 53% less elaboration (123% lower than Group 1)
  • 69% less originality (127% lower than Group 1)

Here, the study provides conclusive evidence that providing even a limited amount of feedback to students improves their subsequent performance. The magnitude of that impact could almost support a malpractice case. How could any educator faced with this data assign work on which students get no feedback on their level of success? In the era of automated tools and generative AI, there is no need to leave students wondering whether they were successful. Indeed, to do so works against an educator's goal, because in this study performance declined by at least 21% and by as much as 69% on most measures when no feedback was given.

Analysis

The model for success is clear: educators must provide feedback. Waiting to get everything into the gradebook erodes the educational value of formative assessment. If learners' performance declined this much in the space of two days without feedback, most teachers would find it difficult to justify the week it may take to respond to an assignment, or the two weeks to a month that essays or projects may take to grade.

The sense of alarm is clear, but the hope of Group 1 sustains the real call to action. Don't wait for perfect. Don't ignore readily available automated responses because they aren't thorough. Don't dismiss the value of generative AI feedback on writing, content, or format because you doubt its accuracy. Most AI feedback tools let teachers review, affirm, or revise the generated comments before students see them, and many adjust to those corrections so the same error is not repeated.
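
To make that concrete, here is a minimal sketch of what automated comment generation could look like. It is an illustration built on assumptions, not a specific product: it uses the OpenAI Python SDK with a placeholder model name, and it deliberately mirrors the study's format of one sentence naming one strength and one area to improve.

```python
# Hypothetical sketch: draft one sentence of feedback (one strength, one
# area to improve), mirroring the comment format used in the study.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_feedback(student_work: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your school approves
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teacher's assistant. In ONE sentence, name one "
                    "specific strength of the student's work and one specific "
                    "area to improve. No grades, no scores."
                ),
            },
            {"role": "user", "content": student_work},
        ],
    )
    return response.choices[0].message.content

# Example workflow: the teacher reviews and edits the draft before the
# student ever sees it.
# draft = draft_feedback(open("student_response.txt").read())
# print(draft)
```

The point of the sketch is the workflow, not the tool: the generated comment is a draft the teacher affirms or edits, which keeps the feedback fast without removing the teacher's judgment.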

One study conducted over two days tells the whole story: students need feedback on their performance, AND grades are about the same as having no feedback at all. Feedback can come from many sources. Peer feedback can be extraordinarily helpful because the teacher is only one person in the classroom, while the students are many. They know the assignment. Most of the time, their feedback is accurate and not driven by spite, and in small-group collaboratives, inaccuracy and spitefulness are exposed quickly. Explore every opportunity to get students feedback on their progress toward the learning goal.

Empower students with feedback — every day, every assignment.


Teacher Takeaways

Students who get feedback on their performance demonstrate 17% to 70% improvement across measures, whereas students who get only grades or no feedback at all can decline in performance by similar margins, as much as 59% and 69% respectively in this study.


Butler, R., & Nisan, M. (1986). Effects of no feedback, task-related comments, and grades on intrinsic motivation and performance. Journal of Educational Psychology, 78(3), 210–216. Link here
