In the last few weeks, I have been driven, from different sources, to think about potential biases in grading. From what we can do to prevent bias in our grading, to how we can protect ourselves from accusations of bias, these ideas have been floating around in my head. I decided to share some of my thoughts and questions in this blog post.
Of course, we all like to think that we are perfectly fair to all of our students, and we may even try to be, but the reality is, as I have said in other contexts, that the biases we have may not even be conscious. One of my students recently did a class presentation on how one can use the Vigenere cipher to encrypt student IDs and anonymize the grading process. The paper she read and discussed was written by Karen Ayres, from the University of Reading, and published in the British Journal of Educational Technology (I have access through Bates, but I don’t think it can be downloaded for free, unfortunately). In it, Ayres makes a good point (the first thing that came into my head when my student started presenting): that to give tailored feedback and help students individually, anonymous grading is perhaps not the best approach. I felt particularly strongly about this in the case of a school like mine, where personalized feedback is not only encouraged but expected. On the other hand, it seems like the Student Union in the UK was asking for more anonymous “marking” (British for “grading”) methods, and this was an attempt to provide them.
The Vigenere cipher is a substitution cipher (a polyalphabetic cipher, to be more precise) that is not really used anymore because it is not that difficult to break: each character is shifted by an amount determined by a repeating keyword. Ayres suggests using this method to encrypt student IDs, so that professors can grade papers with only those encrypted numbers on them. Student IDs alone could be enough to keep students anonymous, in my opinion, but Ayres points out that since these are freely available, one could still figure out who the student is just from the number. Anyway, the benefit of this encryption method is that it hides student IDs quickly and efficiently, and it is easy to decrypt once the grades have been assigned. Also, since professors don’t actually want to be seen as biased (at least I don’t), no one has an incentive to try to “break” this encryption, so it is not such a problem that the Vigenere is sort of breakable.
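To make the idea a bit more concrete, here is my own minimal sketch (not the actual scheme from Ayres’ paper, whose implementation details I don’t have), assuming numeric student IDs and a short numeric keyword that the department keeps secret:

```python
def vigenere_digits(student_id: str, key: str, decrypt: bool = False) -> str:
    """Shift each digit of the ID by the matching digit of the repeating key (mod 10)."""
    sign = -1 if decrypt else 1
    return "".join(
        str((int(d) + sign * int(key[i % len(key)])) % 10)
        for i, d in enumerate(student_id)
    )

# The grader would only ever see the encrypted number written on the paper;
# whoever holds the key can recover the original ID once grades are assigned.
encrypted = vigenere_digits("20514937", "3141")                # "51927078"
recovered = vigenere_digits(encrypted, "3141", decrypt=True)   # "20514937"
print(encrypted, recovered)
```

The “easy to decrypt once one has assigned the grades” part is just the same shift run in reverse with the key, which is what makes this so much lighter-weight than real cryptography.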
I do have doubts about the implementation, though. I mostly grade handwritten assignments, so I can definitely recognize handwriting in some cases. Even if I don’t know whose work it is, I still know when I’m grading the same handwriting, so if I had some handwriting bias, this scheme would not solve that problem. Also, how do we assign the encrypted ID numbers to the homework sets or exams? Do we first encrypt the IDs and then send them to the students to use? The more I think about this, the more it seems like this is all probably meant for computer-based homework and exam systems, in which case I wonder if there really are that many problems with bias to begin with. It seems like the University of Reading Math and Stats department has been using this since October 2011, according to Ayres’ article, although I can’t find too many details on how exactly they do it. I am really curious to know how this works in practice.
Another reason I have been thinking about bias in grading is that Bates is reviewing a new policy on how students can appeal or contest a grade. It is moving pretty slowly, because of all the “can of worms” possibilities, but I also think this is a policy that needs to exist and needs to be very well thought out (I mean, not because of anything at Bates, but I have heard horror stories from other schools where it does seem like bias played into a student’s grade). On the other hand, my professor colleagues are afraid that now every time they give an F they will be accused of bias and have to go through a long appeals process (even if the grade was fully deserved). A grading policy like the one above (if I could figure out how it works) would prevent students from being targeted by biased professors, but also protect professors from unfounded accusations. If the grading were truly anonymous, then receiving an F could not be construed as a personal vendetta or anything of that sort. So I like the idea of this system; I just think it is not quite feasible.
This reminds me of a previous post I wrote about double-blind refereeing in mathematics. Many people thought that this would be a good way to avoid bias, or to guarantee that papers are reviewed only on their merits and not on who wrote them. But what most people argued is that in mathematics it is still very easy to figure out who wrote a paper (through the arXiv or just by area of expertise), so what is the point in hiding someone’s name if it is so easy to find out on your own? Anyway, the MAA still decided to institute a double-blind refereeing system (starting this calendar year), so I look forward to seeing how it actually changes things (or doesn’t). At the very least, it might just make people feel better about the process, which is still a good thing.
So, dear readers, any thoughts on bias in grading? Can anyone explain to me how the system described by Ayres can actually be implemented? Any strategies you yourselves use to anonymize your grading? Please share your thoughts in the comments section below.
I’ve been grading as anonymously as possible for a long time now, just by keeping the student’s name entirely separate from their work (for one-page quizzes, I put the spot for the name on the opposite side, and for exams I use a cover sheet for the name that I flip over on all the exams before I start grading). It is not foolproof since, as you mentioned, the handwriting sometimes gives a student away, but I’ve found that happens less often than you might think. Sometimes I’ll find myself grading something and wondering whose it is; if I make a guess based on the handwriting, I’m generally wrong. What I’ve liked about this is that it makes me feel like I am doing what I can to eliminate some biases on my end, and I find that a relief. I have found myself surprised many times, when I turn over a quiz after grading it and see a name, because the grade isn’t in line with my perceptions of that student’s work. I *like* being surprised, because it has helped me become less and less attached to any notions I carry about any particular student’s ability level. I know for certain my perceptions aren’t spot on, because they are regularly set right by these surprises, and that breaks down the very human tendency to believe too much that your own limited experience of a person (and even worse, just the bits you actually end up remembering) is the full picture of that person.
Thanks for the tip, Heather!
The cipher seems to me to be overkill if the argument is: Since profs don’t want to be biased, they won’t try to break the cipher.
Student IDs work just as well and would be far easier on everyone. If the prof won’t try to break the cipher, then they won’t look up the easily accessible IDs either. Conversely, if they care enough to look up the ID, then they probably would break the easy cipher too.
I agree with you, in the sense that we as professors do believe that we are as unbiased as we can be (and there are already two suggestions in the comments on how to do this). The student ID approach is worse when you know that Jane Doe has the ID 111222333 or something else really easy to remember. Also, from the point of view of the student, maybe just knowing that it is really hard for you to decipher who they are makes them feel safer from bias. Finally, the Vigenere still requires some work (and some guesswork) to be broken. So you would not want to encrypt social security numbers this way, but it is secure enough for other things.
I find that by marking question by question, I tend to avoid the name bias.
As to research papers, isn’t that what Bourbaki was all about?