Lest ye be judged: Student Evaluations

My semester plan tells me I need to read my student evaluations this week. They were just released to us a couple weeks ago, so it’s not like I’ve been sitting on these since mid-December, but this is still the part of my job I put off the most. In fact, for the last few years I’ve mostly ignored them unless I had to include them in a job application. But now, for the first time, they actually matter to my career. And I’m dreading it.

It’s not like my evals are scathing. Even when I teach 8am business calc classes, the scores are positive and so are (most of) the remarks. But reading that one comment – from the student who has spent the last 15 weeks hating my guts, biding their time until they’re granted this one prized moment of retribution – just kills me. Especially when their comment shows they’ve clearly missed the point of why I teach the way I do.

Talking to friends and colleagues, I know I’m not alone in feeling this way. I see it blow up online every semester, as people post choice excerpts from student comments. Interestingly, most people seem to like posting the bad ones more than the good. Do we find publishing praise too gauche, or just prefer moral support for our perceived shortcomings?

Then comes the eternal debate: are student evaluations even valid instruments for assessing instructor quality? I’m not qualified to weigh in on the research – for one thing, there’s just too much to wade through, all of it muddied with arguments about study designs and response rates. I’m also inherently suspicious of our efforts to study ourselves, especially on such an emotionally charged topic.

What I will comment on is the mounting evidence of at least one thing student evaluations are good at: exposing societal gender bias. Inside Higher Ed posted an article on the latest bit of evidence just last week. In an online class, students gave higher evaluations to the instructor they thought was male – even when the two professors (one male, one female) switched identities. The bias showed up even in seemingly objective rating criteria: though both faculty members returned homework at the same time, the male identity was rated 4.35 out of 5 for promptness, while the female identity received only 3.55.

This study was small, but a larger one (without the same blind design) confirmed the gap in ratings between male and female instructors. Interestingly, there was no statistically significant difference in student learning between the two groups, as measured by scores on an anonymously graded final exam, though the paper did note that students of female instructors performed slightly better than those of male instructors.

It’s unclear to me what role these evaluations will play in the future of academia. Everyone has opinions on their flaws, and yet they still get used for high-stakes decisions on employment and promotion. One of the authors of the above paper suspects this use, together with these issues of gender bias, may lead to class action lawsuits as early as this year. While I’m not explicitly required to include mine in my tenure dossier, it’s been made clear to me that not doing so would be a major red flag. In a way they remind me of polygraph tests – the data received might be virtually meaningless, but meaningless numbers are somehow better than no numbers at all.

I’m not arrogant enough to assume my students have no valid feedback on my performance; on the other hand, having once been twenty years old myself, I know they may not have the perspective on their education that I might like them to have. I’m sure some of their comments will be useful and help me find weaknesses in my teaching that I need to address. But that doesn’t make it any easier.


4 Responses to Lest ye be judged: Student Evaluations

  1. bmalmskog says:

    Thanks for writing about this, Sara! I believe that using student evaluations as the main metric to assess teaching is deeply problematic. I admit that it’s hard to find a really good way to evaluate teaching. Other methods of evaluating teaching effectiveness, like peer evaluation, seem to involve much more work on the part of everyone on the faculty, and to introduce subjectivity into this supposedly objective measure. What these studies indicate, though, is that using student evaluation data systematically discriminates against female instructors. This is a really big deal, because getting jobs, tenure, promotion, and raises can all depend explicitly on teaching evaluation data. I think this is a conversation every university should be having, and probably a conversation we should be having with our students.

    • smalec says:

      Agree 100%. There are so many issues to unpack within this that it almost seems an intractable problem. At least these conversations are starting to happen on a higher level than just griping with colleagues.

  2. Mark says:

    There have been repeated studies showing that students who have instructors with high evaluations do worse in follow-up classes. This shows that such evals are worse than useless. Don’t fall into the trap of thinking that because a number exists, it’s meaningful.

    See, e.g., Carrell & West, Journal of Political Economy, 2010.

  3. OldLadyRocker says:

    Your blog posts are beautiful and concise. I do not read student comments unless forced to. I can tell what worked and what didn’t by the time I finish the course. Thank you for connecting this issue to the huge, ever-growing body of social science research that documents the near-futility of including student evaluation scores in tenure/job candidate assessment due to gender discrimination.
