How I learned to stop worrying and love the teaching evaluation

It’s that time of year again: not just the time for giving and receiving Christmas presents, but also the time for giving final grades and receiving teaching evaluations. In this post, I focus on the latter, offering alternative methods of evaluation and discussing why we even do this in the first place.

Most schools do some form of teaching evaluation at the end of the semester, but the methods vary widely. Bates has a college-wide standard form, with 13 questions ranging from “My instructor graded all assignments in a timely manner” (I don’t do so well on this one) to “My instructor fostered my interest in the subject” (which varies wildly, depending on whether I’m teaching Calculus or an upper-level course). My favorite is any variation on “My instructor was outstanding” (which thankfully Bates doesn’t have). How is one supposed to interpret a “Strongly disagree”? I would say that if you’re great but not outstanding, someone might strongly disagree with calling you outstanding, right? Anyway. Each question has a comments box, which is completely optional and which only the instructor sees at the end of the semester (the department chair and the committee on personnel can see the numbers). The form is filled out online after the semester has ended, and the way we “encourage” students to fill it in is that we hold their final grades hostage until they do the evaluation.

I have asked many of my peers how their institutions do teaching evaluations, and as mentioned earlier, the methods are quite different. For example, some schools still do bubble sheets (Scantron) in the last week of class (this is how it was at Texas when I was a graduate student). Some schools do department-wide evals, but not college-wide ones. Many do online evals like Bates, but without the “encouragement”, and so the response rate is pretty low. Some schools don’t do student evaluations of teaching at all, only peer-to-peer evaluations. I think each place has a good justification for doing things the way they do them, but it seems that in many instances the answer is “this is the way it’s been done for X years”.

In my case, some of the questions just don’t work well for the types of classes I teach and my style of teaching. For example, we have one that says “This class increased my knowledge of the subject”. In a class like Calculus, which many students take in high school and then retake in college, the answer is mostly “Disagree”. Does that mean I’m not a good teacher? I also teach a lot of Inquiry Based Learning (IBL) courses (mostly upper-level courses, though). Many of the questions seem to be geared toward lecture-based courses, and so they don’t address my teaching style very well. In fact, the main comment from my fourth-year review was that I should try to collect data on how my students feel about the IBL teaching style, since it is not addressed in the teaching evaluations Bates currently has. I’m sure there are other courses, in the Humanities for example, that are more discussion- and inquiry-based than lecture-style, and that are not evaluated well with these forms. So, what does one do?

One option is to supplement the teaching evaluations with other questions. At first I was thinking of writing my own questions and using Survey Monkey or some other free survey service (try saying that fast…). However, I opted to try SALG, which stands for Student Assessment of their Learning Gains, and wrote my survey through them. This is also a free service, but more importantly it was designed by experts in pedagogy, and it has a nice “survey wizard” feature that helps with designing the questions. The survey splits the questions into categories: understanding of class content, increases in skills, class impact on attitudes, integration of learning, class overall, class activities, assignments and tests, class resources, information given, and support as an individual learner. I have not yet received results, so I can’t report on the success of this experiment. Unfortunately, the only incentive I could give my students was “it would really help me and the next group of students who take this course”, so I expect the response numbers to be kind of low. However, any responses will give me data for my next review, and will probably also provide good feedback for next year.

One common complaint I’ve heard when I talk about the SALG survey is “Sure, we might be assessing learning gains, but how do we assess teacher quality?”. This question used to upset me very much, because my thinking was that if the students are learning, then I’m a good teacher. But I see what people mean: they want to know more specifically what it is that I was doing that was helping the students’ learning. And this is why I like the SALG survey: it doesn’t just ask “did you learn a lot?”, but asks specific questions about the class. For example, the question at the beginning of the “Support for you as an individual learner” category is “HOW MUCH did each of the following aspects of the class HELP YOUR LEARNING?”, and the aspects are “Interacting with the instructor during class”, “Interacting with the instructor during office hours”, “Working with the teaching assistant outside of class”, “Working with peers during class”, and “Working with peers outside of class”, to which the possible responses are no help, a little help, moderate help, much help, great help, or not applicable. This works well with an IBL course, but it would also work really well for a lecture-based course. The point is that we as teachers are trying to create the ideal environment for student learning, however we choose to do it. These questions are also asked in the Bates form, in a way, except that the angle of our form is more focused on what the instructor did to help the understanding of the material, putting the onus on the instructor alone. In an interactive course (and I’m sure most of us teach interactively to some degree, even in lecture-based courses), it might be good to share the responsibility for the learning with the students.

Anyway, this is a matter for a long discussion, and there are no easy answers. This semester I have been on the Committee for the Evaluation of Teaching, and we have been thinking very hard about how, or whether, we want to change the evaluations. Faculty seem to be pretty evenly split: some people (I suspect people with tenure) like things the way they are, whereas others think there is room for improvement. There are other concerns, for example regarding gender and minority bias. There are many studies on the topic, and although they are not completely conclusive (like many studies on these sorts of things), there seems to be a lot of evidence that evaluations are biased in terms of age, race, and gender. Recently I was pointed to an article called “Are Student Teaching Evaluations Holding Back Women and Minorities?: The Perils of ‘Doing’ Gender and Race in the Classroom” by Sylvia R. Lazos, which appears in a new book, Presumed Incompetent: The Intersections of Race and Class for Women in Academia. There is a lot more food for thought in the University of Michigan’s Center for Research on Learning and Teaching’s section on teaching evaluations (a lot of which I should probably read before my next committee meeting).

So in the end, why do we do teaching evaluations? It is easy to feel judged, and therefore upset, when students say not-so-nice things about us, but we need to take a step back from our hurt pride and see these evaluations for what they are. The main personal reason to do them is to improve our own teaching methods and make the class as useful and as worthwhile as possible. If we think we are perfect and there is nothing left for us to learn, then we’ve already failed as teachers.

There are other people interested in seeing our evaluations, which seems scarier, but it shouldn’t be. There has to be some accountability: people want to make sure that you’re putting effort and thought into your teaching, that you care about your students as people, and that you are interested in improving. At Bates, luckily, for both tenure and promotion reviews, student evaluations are only a piece of the puzzle, complemented by peer evaluations (individual letters from each person in your department) and letters from students, both solicited and unsolicited. I especially love the student letters, as I think they help put your teaching evaluations in context and give students the chance to reflect on your teaching years after they were in your class. In IBL classes, for example, the skills students learn are often not obvious right away, and most of what they may remember is that they had to do a lot of work on their own (and that they were not happy about it). A few years later, their memory of the experience might be different (or maybe not!), but either way this perspective is very useful in determining whether you are an effective teacher.

So dear readers, what do you think of teaching evaluations? Do you think they are meaningless, or important? How do you use the feedback? Have you tried alternative surveys, like SALG, or IDEA? Any other thoughts or suggestions? Please share in the comments section below.


2 Responses to How I learned to stop worrying and love the teaching evaluation

  1. Michelle says:

    Our school uses online evals, which I don’t know anything about because I’ve never used them. My impression is that there’s general dissatisfaction and low response rate, and they’re trying to figure out something else.

    My department gives open-ended evals: “Please comment below on special strengths and weaknesses of the instructor and his or her conduct of the course. Constructive criticism is solicited.” And then a blank page. Student comments are (mostly) surprisingly thoughtful.

    These are totally optional for the instructor, but if you give them out they go to the department chair for reading / summary before they go back to you. (You “own” them.) If you don’t do them, you have to have *something* to demonstrate effectiveness & improvement come renewal / tenure time.

    You say we do evaluations to get feedback on our teaching. But really that can’t be true, right? Because if that were the goal, we would do them much earlier (so we could actually address the feedback that semester rather than in the future), more thoughtfully, and more often.

    I think they serve two purposes:
    (1) It’s an easy way to gather data for folks who need to evaluate if you’re doing a good job. And people like to believe that data has meaning, even when it doesn’t. And people like easy. See: high-stakes testing in K-12 education.

    (2) It makes students feel like they have a voice. Even if these evaluations were thrown in the trash and no one looked at them, if you *didn’t* do them, students would be angry. They want to feel like their opinion matters. See: ratemyprofessors.com for another example.

    If we really cared about doing student evaluations to improve our teaching, it would be hard work. This is one thing our university offers through our center on teaching:

    “Mid-semester evaluation: A consultant from CTE will visit a class for 50-75 minutes and, without the instructor being present, ask the students three questions about the class: 1) what has helped them learn, 2) what has made learning difficult, and 3) what suggestions do they have for change. Small groups of students discuss and answer these questions. The consultant meets with the instructor and passes on this information.”

    This is a very brief description of what is actually a lengthy and well-thought-out process. Guess how many faculty take advantage of this service? Note: the feedback goes only to the instructor, never to anyone else. It is the instructor’s decision to include that feedback (or not) in any official documents.

    • Adriana Salerno says:

      Well, I DO use them as feedback on my teaching, because I teach the same classes many times. I have taught Calculus 4 times at this point, and I will be teaching IBL Real Analysis for a third time next Fall, so I will be looking through the SALG responses for ideas on how to improve the course next time. Also, our semesters are short (12 weeks), and I always mean to do mid-semester evaluations but never get to them in time. I agree that mid-semester evaluation is probably a better way to get feedback during the semester, but end-of-semester feedback can certainly be useful afterwards.

      Secondly, thanks for your comment on the student perspective on this. Students appreciate that we value their input, and I do think that’s important. Some of our students have mentioned wanting evaluations to be public, but there is a host of reasons why that might not be the best idea (see ratemyprofessors.com for some pretty useless feedback).
