On Student Feedback
October 8, 2013
On September 27 I attended the quite interesting “Effective Course Evaluation Seminar,” organized by a commercial actor in this field. The product they offer is quite impressive, fully focused on managing the entire student feedback cycle in a centralized manner. I will not review the product here, but instead present some reflections on the discussions and cases presented at the seminar.
First, some background on this topic at Arcada. Previously we ran end-of-term course evaluations, which were criticized for contributing to course quality only with a long delay. Students also find end-of-term surveys frustrating, since their responses do not contribute to improving the course they are participating in. In 2011 we started testing so-called pulse-checking surveys, where the idea is to invite students to a feedback survey after the first third of a course, so that they can actually contribute to improving the course while it is running. During piloting, students seemed enthusiastic, and during the academic year 2011–2012 pulse-checking surveys were run at the request of teachers/teams for their courses. During the academic year 2012–2013, pulse-checking surveys were applied categorically to all courses. This measure, however, dropped the response rate dramatically, from 39% down to 20%. Thus, this year pulse-checking surveys are again run only when teachers/teams order them for their courses. Currently things look a bit more promising, with a response rate around 30%, which is still not satisfactory.
The discussion was largely dominated by the issue of paper vs. online surveys. Classically, paper surveys yield considerably higher response rates than online surveys. However, this framing does not get at the actual problem; it is not necessarily a matter of presentation or delivery format. As an alternative, the organizers suggested focusing on the response setting, i.e. remote vs. in-class. In our experience, this does not necessarily solve the problem either: teachers have reported booking a computer classroom in order to give students a genuine opportunity to complete the survey. The result was that the vast majority of students spent the time on Facebook or reading newspapers’ web pages instead of completing the survey, and the response rate still ended up too low to be considered reliable.
In a short workshop during the seminar, we reached the conclusion that “ownership” is one of the crucial factors in motivating students to respond: although surveys are centrally administered, students should perceive the teacher (or team) as the owner of the survey. Students are apparently reluctant to submit feedback to some anonymous bureaucrat in the school administration.
“What’s in it for me?” is another issue to be addressed. Student motivation can only be increased by feeding the survey results back to the students and explaining what changes have been made to the course based on their feedback — or at least by discussing the results, explaining the course setup, and clarifying why suggested changes are perhaps not possible.
Seminar participants from Université de Pau et des Pays de l’Adour presented a good practice that has proved successful: the surveys are centrally administered but run at the request of teachers/teams and, perhaps most importantly, fully tailored to the teacher’s or team’s needs and wishes. It sounds good, but labor intensive.
We still have a long way to go at Arcada. The challenge is that we want to listen to our students, but when we ask, we get no answers — whether through course surveys or barometers.
I’m curious to see whether there will be any comments on this post — I’d really appreciate a discussion!
Tore