AN ACADEMIC IN AMERICA
Do Students' Online Ratings of Courses 'Suck' (or 'Rock')?
A look at whether evaluations of teaching can be administered effectively via the Internet
Several years ago, our medium-sized liberal-arts college shifted from using in-class evaluations of teaching — Scantron forms and handwritten narratives — to an online system that students were supposed to complete on their own time.
The change had many attractions. Online evaluations could be customized; faculty members could add their own questions about particular goals for a course. The loss of class time and the awkwardness of administering teaching evaluations in class would be eliminated: "Students, please enjoy the cookies and brownies I've left for you, while I wait in the hall — like a pathetic loser — for you to complete these forms."
Most important, the substantial costs of administering paper evaluations — stuffing envelopes, sharpening pencils, feeding forms into machines, sending copies of the results and the narrative portions to the professors and their department chairs — would be almost entirely eliminated. With online evaluations, the development and maintenance of the software was the only major cost. Or so we thought.
The response rate for the old, in-class approach was as close to 100 percent as one could expect: It's a rare student who will refuse to spend class time criticizing a teacher. But students see an online evaluation as impinging on their time, and they have to make a special effort to complete it, usually when they already have too much to do.
Almost immediately after we moved to online evaluations, faculty members here found it difficult to get students to respond, even with incentives like giving the whole class a grade bonus if 90 percent of the students participated. After a few years, most of us gave up; many of our students now have no idea that there is a system of online evaluations for all courses.
For most classes, the response rate has become so low that the online evaluations have no value for individual faculty members or for the systematic assessment of our curriculum. Department chairs still have access to the results but have generally stopped consulting them, relying instead on personal observations, enrollment patterns, student-exit interviews, and — I guess — the word on the street for evidence of teaching effectiveness.
For several years, I continued to encourage my students to submit online evaluations, mainly by appealing to their sense of duty to other students and the college — an opportunity to "pay it forward." I sincerely wanted to know how I could improve my teaching and the content of my courses. (I held out against giving grade bonuses because I believe it's wrong to give credit for something unrelated to demonstrated learning.) Even so, I could rarely break the 75-percent barrier — the response rate considered necessary for statistically reliable results — even with repeated pleading via e-mail.
Most of the time, my results have been as polarized and useful as what you find on another voluntary online system of teaching evaluation: RateMyProfessors.com. The results I get are generally split between the uncritically enthusiastic and the obviously disgruntled, with — if I am lucky — a handful of conscientious students in between.
How helpful is it to be told by students, repeatedly, with little elaboration, that your class "totally rocks" or that "you suck"?
I think most faculty members and administrators concluded some time ago that our first experiment with online evaluations had failed. But what should we do?
Almost no one supports the idea of abandoning the online system. So much effort has gone into creating it, and returning to paper seems like a technological step backward.
There has been a lot of talk about ways to increase participation in the online system, such as escorting students to computer labs to complete the evaluations during class time. But the logistics are impossibly complicated. Maybe someday all of our students can complete evaluations online in class, using their laptops or cellphones, but we're not ready for that yet, and such an approach may present technological complications that we cannot anticipate.
Recently I was on a committee charged with fixing the participation problem. We proposed a "nuclear option": Withhold grades from students until they complete the evaluations. And, as you might expect, there were explosions all over the campus. Faculty critics said that approach would anger students and skew the data. Moreover, professors did not want to have to monitor compliance, holding students' grades hostage until they completed the online form.
At a general faculty meeting, one professor flatly refused to have anything to do with such a system. One open resister surely signaled the presence of others who would resist passive-aggressively: They would simply ignore every administrative request — unless we extended the nuclear option to, say, their annual pay raises.
In any case, that conflict sent the problem of enforcement to the registrar's office, since it would be easy enough for grades to be withheld automatically until the student completed the evaluations. As it turns out, the systems used for student records and online evaluations are nearly incompatible with each other, and it would take hundreds of hours to program a bridge between them — a bridge that just might collapse under the pressure of so many grades and evaluations being submitted at nearly the same time. Just talking about it gave our computing director fits.
Added to that, our registrar was opposed in principle to withholding grades from students who had completed their actual course work and who, no doubt, would bombard his office with legitimate complaints and who knows what else. The problem had to go back to the students and the faculty members.
We had no choice but to re-examine the existing system of voluntary online evaluation and find a way to make it work.
Working with students and Web designers, we could make the online evaluation form more visually appealing and fun to complete. We could give the system a new brand name and logo, with a matching Web site, posters, and glossy fliers to circulate in classes. Finally, we could hold a raffle every semester to give away iPods and bookstore coupons to some students who had completed their evaluations.
We also surveyed faculty members who have been able to achieve reliably high response rates and developed a list of best practices:
- Highlight the online evaluation system in your syllabi and course Web sites. Make it clear that participation is expected.
- Tell your students about the ways that their feedback has improved your teaching and changed the content of your courses. Give them specific examples of things you used to do that you've stopped doing because of their feedback.
- Send students an e-mail message with a link to the evaluation system and a note repeating how important it is for them to participate. Send them another reminder a few days later.
- Devise your own midterm evaluation system and make changes in your courses based on the results to demonstrate your commitment to the process. If students are convinced that their feedback is important, they may be more willing at the end of the semester to complete the online evaluation.
- Offer a small incentive (a few points of extra credit) for students who participate; possibly provide a grade bonus to the entire class if 80-percent compliance is attained.
As I mentioned, I am conflicted about that last suggestion, but I know from others' experiences that it will generate a higher response rate.
My larger concern — and it remains unresolved — is finding incentives that will motivate faculty members to adopt some of those practices. I don't think e-mail messages or raffles will do the trick.
To understand faculty resistance, you have to understand how professors view student evaluations in general, and online ones in particular.
For one thing, student evaluations of teaching have always been based on one-way accountability: It's assumed that a professor's knowledge of what a student writes in an evaluation will color his or her grading of that student's work, but that students' knowledge of how they are being graded will not color their evaluation of the professor.
The approach assumes that students have integrity but professors do not — notwithstanding that professors are presumably trained in the ethics of grading, while students receive no training in the ethics of evaluations or even the basics of what constitutes effective teaching. That's why we see students who, out of petty resentment, fill in all of the lowest numbers on an evaluation form.
More and more, professors feel that they have to walk on eggshells in their criticisms of students — poised at all times for legalistic grade appeals — while students are given multiple venues in which to attack their professors in unprofessional ways with absolutely no accountability, not even using their own handwriting.
In the days of handwritten evaluations completed in class, the comments I received were more frequently constructive and almost always respectful.
But something about completing an anonymous evaluation in the privacy of one's room, with no other students around — and who knows in what state of mind — seems to lead increasing numbers of students to write things they would never own up to in public: Every batch of evaluations seems to include a few hateful personal remarks that are unfit to print here.
Perhaps today's students are angrier than ever before, but placing evaluations online seems to take them out of the zone of civilized conversation — the classroom — and into the realm of the unmoderated Internet.
The psychological cost to a faculty member of being "flamed" in the most personal terms by a few students every semester can overshadow any consideration of how one might enhance a course.
If we are going to continue online evaluations, then there should be some student accountability: Even if students remain anonymous to the teaching professor, someone should know who has written any given evaluation. Students should know that inappropriate remarks are as unacceptable on an evaluation as they would be if a professor wrote such remarks on a student's term paper. There should be disciplinary consequences for unprofessional behavior. In addition, we need to make a concerted effort, from the very beginning of college, to educate students about the evaluation process and not assume they are experts on teaching merely by virtue of being students.
I hope our revitalized online-evaluation system will be successful. Maybe a culture of regular assessment can be established in a few years and sustained over time, rather than backsliding into the indifference from which we now suffer. But unless there is a system of mutual accountability and fairness — something that recognizes the concerns of faculty members as much as those of students and administrators — I do not think an online system can succeed for long.
And then — once the assessment juggernaut has passed — we may have to try something radical, such as emphasizing peer evaluations, mentoring, and collegiality, as if professors were, in fact, conscientious professionals.
Thomas H. Benton is the pen name of William Pannapacker, an associate professor of English at Hope College, in Holland, Mich. He writes a monthly column about academic culture and welcomes comments from readers directed to his attention at firstname.lastname@example.org. For an archive of his previous columns, see http://chronicle.com/jobs/news/archives/columns/an_academic_in_america