Lectureless Modern Algebra and Foundations, Part III: Through the Evaluations and What I Found There

No better way to move past last semester’s evaluations than get a chance to teach the class again. I restructured and created some new activities for Algebra this time around. For example, this semester’s Algebra class made this awesome color-coded group table for D_4.

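For readers who want to play with the same object, here is a minimal sketch (my own illustration in Python, not the class’s actual activity) of how a Cayley table for D_4 can be generated by composing vertex permutations. The element labels r0, r1s, and so on are just my own naming convention for the rotations and reflections.

```python
# Sketch: build the Cayley table of D_4 (symmetries of a square).
# Each symmetry is a permutation of the vertices (0, 1, 2, 3),
# stored as a tuple p where p[i] is the image of vertex i.
r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (3, 2, 1, 0)   # a reflection

def compose(p, q):
    """Return p * q, meaning: apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

# The 8 elements of D_4 are r^k and r^k s for k = 0, 1, 2, 3.
elements = {}
power = (0, 1, 2, 3)   # identity
for k in range(4):
    elements[f"r{k}"] = power
    elements[f"r{k}s"] = compose(power, s)
    power = compose(r, power)

# Print the table: the entry in row a, column b is the name of a * b.
name_of = {perm: name for name, perm in elements.items()}
names = list(elements)
print("    | " + " ".join(f"{b:>4}" for b in names))
for a in names:
    row = " ".join(f"{name_of[compose(elements[a], elements[b])]:>4}" for b in names)
    print(f"{a:>3} | " + row)
```

Coloring each distinct entry (as the students did on paper) makes the coset and subgroup structure jump out visually.
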
The 10 minutes of the semester when we give student teaching evaluations have repercussions that can last for an entire career. The numbers students use to rate us can become the major or sole metric of our quality as teachers, and can be central to tenure, promotion, and performance-based raises. Many people (including me) have been hesitant to incorporate non-lecture strategies in the classroom because they are afraid of the effects on their teaching evaluations.* Last semester I tried many non-standard and active learning strategies in my Modern Algebra and Foundations classes. My students and I did so many things—including pre-reading, structured group work in class, blogs, and proof portfolios—and I found some struggles and some great successes in the classes (see my previous two posts here: description of the class, and my spring retrospective on how it went). But the story isn’t complete until I address the thing that people are most nervous about: what happened with my evaluations.

No lying: some of the numbers were not great. The students rated me highly in many categories, but in at least one course they rated me medium to low in some of the big ones: uses class time effectively, organizes course effectively, explains class material clearly, and overall quality of instruction. The scores weren’t horrible, but they were well below average, and low enough to make me feel pretty bad. After I suffered for a while, I looked back at the numbers in those categories and, well, they were still not good. But in most categories I was at least average, and several other categories made me think more about my success. For example, above-average (in some cases way above average) numbers of students said that they learned a great deal in the course, that hard work was required to get a good grade, and that they spent many hours each week working on the course; they also said that I encouraged student participation, was available for help outside of class, and treated students respectfully. And every student gave me the highest score for enthusiasm, always a great category for me. I was happy that my intentions had come through so clearly in these areas.

It was hard to take that the students didn’t think my instruction was high quality, but then again, how did I expect that category to turn out?** A central goal had been to avoid what most students think of as instruction. Same with explaining course material—I answered questions, but turned the responsibility for a lot of the explaining onto the students themselves. Why be surprised when the evaluations reflect that? And the class time and organization of course material—I used class time in the opposite way from almost all of their other classes, and organized the course material from behind the scenes. Of course that did not appear effective to everyone. Unfortunately, those particular categories just seem like a judgment of me personally in ways that other categories don’t. I understand why the students would make those ratings, but I think that they really don’t reflect the quality of my teaching. The surveys are just not designed to evaluate the kind of course I taught.

I didn’t get to read the students’ comments on the courses until late in the summer, and the sting was mostly gone from the numbers. I was prepared for some harshness. In fact, the comments were surprisingly good. A couple of students complained, as I expected, that I “didn’t really teach” and they had to “learn everything on their own,” but many others were positive about the experience and said they could tell I really cared about their learning. They said that the class was hard, which I was fine with. They made many of the same useful critiques that I myself had made looking back: too many assignments, hard to keep up with the cascading deadlines, not quite enough structure in the group work. Overall, the students were really respectful and said a lot of positive things about how much they learned in the course. I think that the quality of their comments indicates that I earned their respect and that they responded to and appreciated my high expectations. In the end, even with the low numbers in the categories I mentioned, I am proud of the classes.

After reading the evaluations, my thoughts turned to my upcoming third-year review. At Villanova, this is a sort of “halfway to tenure” evaluation. As part of this process, I need to discuss my teaching methods and respond to student evaluations. Even with the processing I discussed above, I was still not feeling great about discussing the numbers in my response. I realized that while my reasoning about the evaluations was, well, reasonable, it would be wonderful if there were some way that I could really show that the course had been effective. Luckily, my Algebra course had coincided with the department’s internal assessment of one of our curriculum goals, essentially that students should be familiar with the roles of definitions and theorems in mathematics. I had volunteered to share my students’ anonymized proof portfolios for use in this assessment. Each proof portfolio consisted of 10 proofs from the course, typeset in LaTeX: revised versions of homework or test responses. I chose 12 portfolios from math majors in my Algebra course, 4 each from the top, middle, and bottom thirds of the course (ranked by overall course grade). I shared them with the assessment committee at the beginning of the summer and then mostly forgot about them.

When I read my evaluations, I decided to ask the committee whether they had found the students’ work to be proficient, so I could cite their opinion in my response. This is where my colleagues proved themselves, yet again, to be wonderfully, incredibly supportive. They not only encouraged me in person by saying they were very impressed with the portfolios, but they also wrote a letter to my department chair for my file, describing their assessment and their opinion that the portfolios reflected my effectiveness as a teacher (as well as the students’ tremendous efforts). I can hardly describe how much this effort from my colleagues means to me. It makes me feel like part of a really healthy, supportive community.

Also, it tempers the anxiety that comes from the central role of student teaching evaluations in professor assessment. Too often, it seems that these numbers are all that matter in assessing our teaching. The fact that my colleagues were willing to document other evidence of my teaching effectiveness gave me a spark of excitement: maybe it is possible to undermine the hegemony of student evaluation numbers. Supported by our colleagues, we can create our own multi-faceted portfolios of teaching effectiveness, and just maybe they will mean something to our departments, colleges, and tenure committees. I don’t know yet whether it will work, but it is something to try, and a way to channel my frustration with the shortcomings of using student evaluations as the main metric for teaching quality.

What do you think? Let me know in the comments.


* For many reasons, as outlined in these articles from Inside Higher Ed, The Chronicle of Higher Education, and Slate, I do not think that student teaching evaluations are a good way to assess teaching in general, but that’s another blog post. I will just focus on the practical issue of how I responded to the evaluations that I got.

** As a quick aside, I have heard from seasoned active-learning practitioners that you can improve student evaluations in these categories by carefully and consistently explaining the reasons behind your methods. I did strive to do this, but I could probably have done more. Next time I will share more of the science behind these methods before I start, and check in more throughout the semester.
