By Cody L. Patterson and Priya V. Prasad, Department of Mathematics, University of Texas at San Antonio
We frequently use writing assignments to encourage students to examine topics in greater depth than what we cover in class, and we emphasize to our students that writing assignments constitute one of the most important dimensions on which students’ thinking will be assessed. Yet in our early implementation of these assignments, we frequently received work that did not reflect students’ full potential for understanding the topic explored in the assignment. In these cases, because we were using a roughly linear scale to assign each submission a numerical grade, which would then become part of a student’s overall grade in the course, we faced a difficult decision.
- We could assign low grades to work that did not meet our expectations; this would have the advantage of sending students a clear message about whether their work meets the standards of the course, but it might demotivate students or limit the potential of an otherwise competent student to earn a good grade in the course.
- We could assign moderate-to-high grades to such work; this would lower the stakes of failure for students, but it would also require us to endorse work that does not meet a high standard.
Neither option, however, seemed to address our greatest concern: that some of our students had not explored and communicated about the topic of the assignment with the desired depth. Moreover, numerical grades allowed many students to decide that they had gained enough, grade-wise, out of the assignment, and did not need to take advantage of opportunities to revise their work. In this article, we'll talk about our journey toward crafting and implementing a grading scheme for writing assignments that provides greater opportunity for student learning and growth. While we use writing assignments specifically in the context of content courses for preservice teachers, we believe much of our advice is adaptable to other mathematics courses.
One of the major breakthroughs that helped us support students in submitting higher quality work was to develop clear expectations for these assignments and share them with students, an idea consistent with Braun’s (2014) essay on mathematical writing in PRIMUS and the recent MAA Instructional Practices Guide (2018). (See also two articles by Ben Braun on the blog.) However, it was not enough to be explicit about our expectations. In order to ensure that each student turned in work that met high quality standards, we adopted two principles:
- To pass the course, each student must complete a specified number of writing assignments successfully; an assignment is not successful until it meets a set of predetermined standards.
- Whether or not a student's attempt at an assignment is successful – and especially when it is not – we provide specific feedback, aligned with the stated standards for the assignment, that gives the student a clear direction for improvement so that they can revise and resubmit their work. One of the key mathematical practices we want to instill in our students is the understanding that their mathematical thinking can and should be revised, and that this revision process is an important part of intellectual growth.
Feedback and opportunities for growth
In order to help students learn to produce higher-quality writing assignments, we had to improve the quality of the feedback we gave. In our own efforts to learn more about assessment, we learned about a study by Butler (1987) in which fifth- and sixth-graders were given a sequence of divergent thinking tasks, and periodically given either numerical grades or individual comments related to their performance on the tasks. Butler found that of these two groups, the students who received comments were more likely to maintain interest in the tasks, and more likely to attribute their success to their potential to grow through sustained effort. On the other hand, students who received grades were more likely to attribute success to innate ability, and tended to maintain interest in the tasks only as long as they received positive messages about their ability relative to other students’. This agreed with our own experience as college mathematics teachers: we knew that given both grades and comments, our students often glanced at the grades and discarded or ignored the feedback. Thus we concluded that the first step we needed to take was to reduce our dependency (and with it, our students’ dependency) on numerical grades. In Specifications Grading, Nilson (2015) discussed the potential of minimal, non-graded feedback on writing assignments, which provided the seed from which our grading systems grew.
In addition to reducing the role of numerical grades, we needed to learn to give useful written feedback efficiently. Black and Wiliam (1998) found that feedback is more effective when it is focused on specific characteristics of tasks rather than simply on whether a learner’s response to a task is correct or incorrect, or on characteristics of the learner. Hattie and Timperley (2007) reinforced these findings, reporting that feedback that concerns a learner’s processing of a task, or their self-regulatory and metacognitive processes, can be more effective than feedback that focuses on characteristics of the learner or of the learner’s performance on a specific task. For example, we will often circle or highlight a paragraph in which a student’s mathematical reasoning is flawed or unclear, and ask a question aimed at prompting them to think more deeply about what they have written or request a clarification. The goal with the feedback is not to provide a clear roadmap of what students need to do to “fix” their work, but instead to prompt further thinking and motivate students to talk to us, a classmate, or a tutor about their work.
Implementation of an assignment
Once we decided to provide process-level feedback rather than simply giving an overall evaluation of students’ work, the next step was to implement a framework for our assignments that would encourage students to interact with our feedback and pursue suggested avenues for further investigation. Over several semesters, through a process of trial and error, we each independently converged toward the following format.
- Assign work and specify expectations. When we first hand out the writing assignments, we remind students to refer to a list of general expectations that we have set for their work. These expectations cover aspects such as professionalism, completeness, and mathematical thinking. We try to be as explicit with these as possible, with examples. Here’s one example: “Every mathematical statement you make should be justified. For example, if you say, ‘the sum of two even numbers is even,’ you should explain briefly why that statement is true. This is doubly important for statements upon which you later rely to explain larger or more complicated concepts.” For students who have little experience writing in mathematics or even with justifying their thinking, this level of detail can be helpful, since they are often unsure of the level of mathematical depth we expect from their work.
- Initial assessment. Once students turn in their writing assignments by a set deadline, we read them over and provide feedback based on the principles outlined above. If a student’s work is clear, well-reasoned, and meets the stated expectations for the assignment (with minimal spelling and grammatical errors), we will assign them credit for having completed the assignment. Otherwise, we ask them to revise their work.
- Assessing the revision. The most important aspect of the revised writing assignment is whether students have addressed all of the comments that were made on their first draft. We stress that these assignments should be considered as cohesive essays, and even though only one part of the assignment may have a comment next to it, addressing that comment can affect the whole essay. For example, one of us (Priya) has observed that preservice elementary teachers often struggle with the implications of the definition of a rhombus, which can have pervasive effects on their writing assignment about quadrilaterals. In such a case, we may only comment once on a paper that a student needs to check her understanding of the definition of a rhombus; but in the revision, we expect that she will revise the entire paper with this comment in mind. Revisions that meet expectations are assigned credit at this stage. If a revised version of a paper substantially improves upon the previous version but still does not earn credit, we may ask the student to submit another revision or to speak with us in person so that we can address any lingering issues in greater detail.
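To make the justification expectation concrete, here is the kind of brief argument we hope to see for a statement like "the sum of two even numbers is even." (The algebra below is our own illustration of the standard, not an excerpt from a student paper.)

```latex
% If a and b are even, each is twice an integer:
\[
a = 2m, \qquad b = 2n \qquad (m, n \in \mathbb{Z}).
\]
% Then their sum is
\[
a + b = 2m + 2n = 2(m + n),
\]
% which is twice the integer m + n, and hence even.
```

A sentence or two of this kind, attached to each mathematical claim, is usually all that the expectation requires.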
Pass/revise/fail grading can take a little getting used to. Now that we’ve done it for several semesters, our grading time is differently distributed, as well as more purposeful and better aimed at student learning, than it would be if we assigned numerical grades and focused on giving enough feedback to justify those grades. The need for students to resubmit assignments means that the end of the semester can be hectic; however, the quality of work that we get from the students after revisions makes it worth the effort. In particular, we find that grading a revised version of an assignment requires significantly less work than grading the original: the newer version usually contains fewer errors and is written more clearly; and we often remember the issues with the original version well enough to focus our attention on the parts of the paper that have been altered.
As far as students are concerned, although they initially find the requirement of revision and the all-or-nothing grading of these assignments onerous, many have come to see how this grading scheme supports our assertion that mathematics requires revision. Over multiple semesters of implementing these assignments, as we have improved at communicating our expectations, we have seen students' work become more reflective, more thorough, and more professional. In addition, we have seen that by seizing opportunities to rethink and revise their work, students develop a more reliable command of some of the key ideas in a course; for example, preservice secondary teachers who complete a writing assignment on geometric proof are less likely to make unsupported claims in a geometric proof task on their final exam, and those who complete an assignment on the reasoning behind equation-solving procedures typically give a more mathematically precise explanation for extraneous solutions in an equation-solving task on the final. We believe that these learning gains would not be attained if we allowed students to settle for imprecise thinking on the writing assignments.
Personal notes on implementation
We include here some brief practical observations that have come from our individual implementation of these assignments in courses that we teach. Examples of our writing assignments may be found here.
Priya: My writing assignments for preservice elementary teachers are actually structured reflections/extensions on assigned readings. Students are asked to read about a mathematical topic in an excerpt from van de Walle et al. (2007) or Tobey and Minton (2010), for example, and answer some specific but interrelated questions about it. When I first instituted these assignments, I chose 12 readings addressing key concepts that I wanted to assess; trial and error, and a greater understanding of what I felt was truly important in the course, have reduced that number to seven. I should also note that the mode of feedback in my class is not comprehensive written feedback, as it is in Cody's class. Instead, I often provide quite minimal written feedback by simply circling a paragraph that needs to be rethought and allow students to work through the reasoning on their own. If they need further guidance, I encourage them to ask me or their peers questions.
Cody: I use writing assignments in my capstone course for preservice secondary teachers; each assignment asks students to do a thorough conceptual “unpacking” of a problem or procedure that can be found in high school mathematics. My assignments contrast with Priya’s in that each assignment has a specific set of expectations for mathematical reasoning; these expectations are enumerated in the document explaining the assignment. When writing these, I leave myself a bit of room to interpret the expectations flexibly depending on the specific direction a student takes with the assignment. For example, the assignment I have attached to this article asks students to identify two different ways of solving a rational equation – one that appears to lead to one real solution, and one that appears to lead to two (due to a transformation that changes the domain of the expression on each side). In addition to resolving this apparent contradiction and identifying the transformation that alters the solution set, students must explain each method in terms of properties of equality. Thus if a student uses “cross-multiplication” in one of their approaches and does not provide an algebraic explanation of why this strategy is valid, I ask them to revise the response to include such an explanation.
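A hypothetical equation of the kind this assignment describes (not necessarily the one in the attached document) shows how two valid-looking methods can appear to disagree:

```latex
% Solve the rational equation (domain excludes x = 0):
\[
\frac{x+2}{x} = \frac{3}{x}, \qquad x \neq 0.
\]
% Method A: multiply both sides by x (legitimate, since x is nonzero on the domain):
\[
x + 2 = 3 \quad\Longrightarrow\quad x = 1.
\]
% Method B: ``cross-multiply'':
\[
x(x+2) = 3x \quad\Longrightarrow\quad x^2 - x = 0 \quad\Longrightarrow\quad x \in \{0, 1\}.
\]
% Here x = 0 is extraneous: the cross-multiplication replaced each side with a
% polynomial that is defined at x = 0, silently enlarging the domain and
% introducing a root that does not satisfy the original equation.
```

Resolving the apparent contradiction, and explaining each step in terms of properties of equality, is exactly the kind of conceptual "unpacking" the assignment asks for.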
We have now both embraced the transition from grade-based feedback to process-based feedback on writing assignments. We find that this reorientation has allowed us to identify certain “non-negotiable” learning outcomes we want our students to achieve; and our practice of requiring revision and resubmission of papers allows us to provide appropriate assistance to each student, whether that assistance takes the form of a brief comment, a more elaborate marking of a paper, or a one-on-one consultation. One of the greatest benefits of this approach is that the progression in students’ papers provides us with clearer evidence of what students are learning as they work on these assignments. This evidence allows for further fine-tuning of the assignments so that each one provides an appropriate level of challenge for students, and so that success on an assignment clearly signifies attainment of the desired learning outcomes.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74.
Braun, B. (2014). Personal, expository, critical, and creative: Using writing in mathematics courses. PRIMUS, 24(6), 447-464.
Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Mathematical Association of America. (2018). MAA Instructional Practices Guide. Washington, DC: Mathematical Association of America.
Nilson, L. (2015). Specifications grading: Restoring rigor, motivating students, and saving faculty time. Sterling, VA: Stylus Publishing.
Tobey, C.R. & Minton, L. (2010). Uncovering student thinking in mathematics, grades K-5: 25 formative assessment probes for the elementary classroom. Corwin.
Van de Walle, J. A., Karp, K. S., Bay-Williams, J. M., Wray, J. A., & Brown, E. T. (2007). Elementary and middle school mathematics: Teaching developmentally. Pearson.