Fine-Tuning Feedback

Ertmer, P. A., Richardson, J. C., Belland, B., & Camin, D. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12, 412-433.

Ertmer et al. set out three research questions at the outset of their study: (1) the link between peer feedback and the quality of postings; (2) the perceived value of the peer feedback students received; and (3) the perceived value of the feedback they offered (p. 416).

The study analyzed a single semester-long, graduate-level education course with 15 enrolled students, an extremely small sample. During the introductory phase, the instructor modelled a feedback method for evaluating student discussion posts based on Bloom's taxonomy, scoring each response 0, 1, or 2, with 2 indicating superior application of the taxonomy strategy and 0 indicating failure to demonstrate it. Students were required to submit two posts per week during this phase. In the third week, the requirement changed to one post and one feedback response to another student. At that point the instructor shifted to processing student feedback, removing names and any potentially objectionable material before passing it along, which created delays of up to two weeks before students received a response. Other researchers also assessed the posts to determine whether their quality changed over time as a result of peer feedback. In addition, students were surveyed at two points in the semester about the value of peer feedback, both given and received, and a final interview with each participant probed their survey answers and the reasoning behind them.

Results were limited. The "quality of students' postings did not improve with peer feedback" (p. 421), although some results suggested that students believe feedback is slightly more important in an online course than in a traditional one, while the quantity of feedback mattered less. From the initial survey to the final one, the researchers observed a slight increase in the perceived importance of feedback overall and of its timeliness, though the data were incomplete: 20% of the students (three of the 15) did not take the initial survey, further undermining an already small sample.

In the discussion, Ertmer et al. assert that "results of the students highlight the importance of feedback in an online environment" (p. 425), yet their admission that scores showed no quantitative improvement, only an increase in the perceived importance of the task after students had been required to do it for eleven weeks, does not strictly support that claim. A key factor raised in the discussion is the timeliness of feedback: the study's structure imposed a delay of up to two weeks while peer feedback was processed before reaching students, whereas the instructor's feedback during the first two weeks of the modelling and instructional phase had been far more immediate because it came directly from the instructor without processing.

Overall, Ertmer et al.'s strategy of pre- and post-surveys is a sound method for analyzing the change in a participant's thinking from one stage of a process to a later one, as supported by Horton (2012). However, repeating the same questions a second time can itself shape responses, which compromises the validity of the slight changes observed. A further weakness is the leading nature of asking students whether a process has value when they have been required to perform it weekly for an entire semester, a requirement that already presupposes the instructor considers it valuable to learning. The small sample size and the very specific characteristics of the participant pool also cast doubt on how robust the data can be and on its applicability to any general claim about online discussions. The failure to demonstrate a significant change in the quality of posts could be the result of a wide variety of factors.

Horton, W. (2012). E-Learning by Design (2nd ed.). San Francisco, CA: Pfeiffer.


Teacher Takeaways

Even with a highly knowledgeable group of learners, feedback, and especially its immediacy, makes a striking difference in a course: a two-week delay in response can negate any positive impact of the peer-evaluation process.
