I just returned from an all-years reunion of the Hampshire College Summer Studies in Mathematics (HCSSiM) program, a six-week program I attended during the summer between my sophomore and junior years of high school. It has been run by David C. Kelly, whom everyone refers to just as Kelly, since he started it in 1971. There are several other summer high school math programs around the country (a good start is this list from the AMS), which likely share some characteristics with Hampshire, but since Hampshire is the one I have personal experience with, this is the one I am compelled to talk about. And while several people and experiences were instrumental in my path to becoming a mathematician, Hampshire is the one that stands out most prominently in my mind, the one mathematical encounter that changed my life. And from talking to other people at the reunion last weekend, I know that many other program alumni feel the same way.

There are other accounts written about Hampshire. The AMS has a nice commentary on Hampshire and other similar programs, and Jim Propp has a very nice blog post about it, from the perspective of someone who has been a student, and has taught at the program as junior staff and senior staff. I was only a student one summer, and never taught there, but the program had such a profound effect on me that I want to share my personal reflection on the experience. Now, over 34 years later, I am a professor at UTEP, and I hope that the benefit of my looking back with the hindsight of years of learning and teaching math will outweigh the loss of some details through those same years. But, as with any transformative experience, some details remain crystal clear.

I’d loved math from an early age, but when I showed up at Hampshire as a rising high school junior, the only proofs I’d seen were the two-column proofs in high school geometry that previous year. From the outset, it was clear that this program was going to be … different from high school. The first day of class (we met in 4 classes with about 17 students for 4 hours in the morning, 6 days a week; each class with different instructors, but discussing similar topics), Kelly started immediately with a problem, which I still remember: If we have a (three-dimensional) hunk of cheese, and we slice it with some number of cuts with a knife, how many regions will we have? This was not at all like my math classes in school, and a little like some of the puzzles I read about in mathematical puzzle books on my own, but it was somehow a little deeper than those puzzles, and it was an entire class spent exploring the problem together.

We spent almost all morning (four hours!) on just this one problem. It served as a vehicle to develop for ourselves (with guiding questions from the staff, to be sure) mathematical problem-solving strategies we were to return to all summer: Work examples and gather data; formulate your assumptions carefully (early idea, soon discarded: What if all the cuts are parallel? That’s not an interesting problem anymore); don’t assume that patterns always continue (in this case, 1,2,4,8 was followed not by 16, but by 15); when problems are too hard, try a simpler problem first (we moved from 3-dimensions to 2-dimensions); draw pictures; make conjectures; try to prove your conjectures, or make new ones if necessary.
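For the record, the data we gathered that morning agrees with the standard closed form, which we did not state in class and which assumes the cuts are in "general position": \(n\) cuts of a \(d\)-dimensional hunk of cheese produce at most \(\sum_{k=0}^{d}\binom{n}{k}\) regions. A quick sketch:

```python
from math import comb

def regions(n, d=3):
    """Maximum number of regions produced by n planar cuts of a
    d-dimensional hunk of cheese, assuming cuts in general position.
    Standard result: the sum of binomial coefficients C(n, 0..d)."""
    return sum(comb(n, k) for k in range(d + 1))

# Three dimensions: the pattern 1, 2, 4, 8 is followed by 15, not 16.
print([regions(n) for n in range(6)])        # [1, 2, 4, 8, 15, 26]

# The "try a simpler problem first" move: a 2-D pancake cut by n lines.
print([regions(n, d=2) for n in range(6)])   # [1, 2, 4, 7, 11, 16]
```

Note how the computation also illustrates the strategy of moving between dimensions: the same formula covers both the pancake and the cheese.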

The rest of the summer proceeded similarly. Topics were introduced by way of problems, some of them imaginative, which we discussed and refined, with lots of student input. Methods of proof were woven into discussions about how to verify our conjectures. Classes were engaging, and even fun, because of the interesting problems and because of the interaction among students and staff. Looking back at it now I would describe it as active learning, with an additional ingredient: It *never* felt like anyone (student or staff) was doing something because they had to, or for a grade, or really for any reason other than that it was inherently interesting. But at the time, I only knew that I really liked it. If this was math, I could spend all day doing math!

Evenings were devoted to 3-hour problem sessions. Problem sets ranged in difficulty from working examples to writing proofs. Problems reviewed that day’s material, expanded on ideas, or let us play with ideas that had come up during the day. Sometimes they previewed upcoming material. During the problem sessions, we could work on whichever problems struck our fancy, and, when we solved something we were proud of, we could turn it in for constructive feedback. In a change from my previous learning/student experience, nothing was ever awarded a grade. (Instead, at the end of summer, Kelly wrote a detailed descriptive letter of recommendation for each student.) When I looked back many years later at some of the proofs I’d written, they looked very rudimentary to my more experienced eyes, but I know that by the end of the summer I really had learned at least the basics of writing proofs, and that I loved it.

Another remarkable difference from my previous experiences was that we were strongly encouraged to work in groups on problems. In fact, throughout the program there was a strong sense of cooperation instead of competition. I quickly grew to embrace this cooperative view of mathematics and of education, and never turned back.

Afternoons were free time, but I spent most of my afternoons working with other students on the weekly “program journal”, hanging out in the room where we put it together. Like many aspects of the program, it was almost entirely student-run. We wrote summaries of the week’s activities in the classes and of the daily “Prime Time Theorem” lectures (self-contained hour-long talks given by visiting mathematicians or by the staff), and ran a problems section (pose problems, solicit solutions, print solutions the next week). Of course, we also had some less serious features, such as reviews of the weekly math movies, cartoons, and silly math songs. I wrote some of the Prime Time Theorem write-ups, and I distinctly remember noting that by the final week of the program I was paying more attention to the precision I had to use to get the details correct.

Eventually we saw many different topics, none requiring much prerequisite knowledge beyond some high school algebra and, more importantly, an intense curiosity and a willingness to experiment and learn. I don’t recall precisely all the topics, but we certainly covered a lot of number theory, the basics of group theory, some combinatorics and probability, some topology, and some notions of infinity. It very much felt like any subject might come up on any day. Halfway through the program, we finished the overview class, and each student could pick a class focused on one of four specific topics; I picked the class on large prime numbers and factoring large numbers, but later wished I’d picked the class on group theory centered around Rubik’s Cube (a few months before the Cube became wildly popular in the United States). I got to see lots of other things I would later take for granted (the geometry of how complex numbers multiply; how to think of GCD in terms of buying postage stamps; etc.). It took me some time to realize that not all math students learned these things in high school!
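One common version of the postage-stamp idea is this: with stamps of denominations \(a\) and \(b\), and allowing stamps to be "refunded" (negative counts), the smallest positive postage you can assemble is exactly \(\gcd(a,b)\), which is Bézout's identity. A minimal sketch via the extended Euclidean algorithm (the function name and example denominations are mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y (Bezout)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Unwind: if g = b*x + (a % b)*y, then g = a*y + b*(x - (a//b)*y).
    return g, y, x - (a // b) * y

# With 6-cent and 10-cent stamps, the smallest positive postage of the
# form 6x + 10y (a negative coefficient means stamps "refunded") is
# gcd(6, 10) = 2: no combination can ever produce 1 cent.
g, x, y = extended_gcd(6, 10)
print(g, 6 * x + 10 * y)  # both equal 2
```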

As with any good educational experience, I also learned a lot from my fellow students. Many were attending selective high schools in New York City and elsewhere (I was attending the public high school in my suburban town), and they generally had much higher expectations than I’d even thought about. They went to national math competitions, and did well. They planned to go to Ivy League colleges. Being around them raised my own expectations of college and my future. I entered Hampshire thinking I would be a meteorologist (because I liked looking at clouds), and left thinking I would be, well, if not a mathematician, at least an engineer. But I was also certain I would take any math class I could. (Which I did, and then eventually switched to math.) And I also believed I could go to the best programs in the country to pursue further education.

Even though we were studying serious and advanced mathematics (even without having taken calculus!), everything was infused with a sense of playfulness. From Kelly on down, the staff conveyed the idea that what we were doing was *inherently* interesting, and that it was fun simply to play around with and explore ideas and problems. Though there were jokes and kidding around, it was the wonder of the mathematics that always took center stage. One of the ways in which this playfulness was transmitted was through the program’s adopted mascot and number.

You probably were expecting me to get to this part if you have heard of Hampshire before. We quickly found out that Kelly, and everyone else at the program, had a thing about yellow pigs and the number 17. Yellow pigs were everywhere, including on our t-shirts once Yellow Pigs Day happened on July 17, when Kelly gave his talk on the mathematical and social history of 17. (For instance, a regular 17-sided polygon can be constructed with ruler and compass because 17 is a Fermat prime, \(17=2^{2^n}+1\) with \(n=2\).) Soon all the students found and used YP’s and 17’s everywhere (for instance, look again carefully at the first sentence of this paragraph, especially the first two words and the number of words).
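For readers who want to check the arithmetic: \(2^{2^2}+1=17\), and the five known Fermat primes are \(3, 5, 17, 257,\) and \(65537\). A quick sketch (the function names are mine):

```python
def fermat_number(n):
    """The n-th Fermat number, F_n = 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

def is_prime(m):
    """Trial division; fine for numbers this small."""
    if m < 2:
        return False
    k = 2
    while k * k <= m:
        if m % k == 0:
            return False
        k += 1
    return True

print(fermat_number(2))  # 17
# The only known Fermat primes are F_0 through F_4: 3, 5, 17, 257, 65537.
print([fermat_number(n) for n in range(5) if is_prime(fermat_number(n))])
# F_5 = 4294967297 is already composite (divisible by 641, as Euler found).
```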

Of course, one purpose of yellow pigs and 17 was program identity and cohesion (and letting alumni recognize one another), but 17 had another useful purpose. If you wanted to give a proof but start with an example, you could pick 17 as the value of a variable such as \(n\). Everyone (at Hampshire) would recognize you were using it just as a placeholder, and the next step, replacing all the 17’s by \(n\)’s, might not be too hard. (Kelly also showed us how this transition could be achieved typographically; see the illustration below.)

Some years after I’d been teaching at the university level, I realized that **almost every innovation I tried to implement was based on some aspect of Hampshire**. Well, I didn’t try to use yellow pigs, but I do use 17 in examples in class when I can. (And I do reflexively look for 17’s everywhere.) More than that, the Hampshire way of thinking about mathematics remains at the core of how I approach mathematics and education. It remains remarkable to me that just six weeks could change my life so profoundly, but I remain eternally grateful to Kelly and HCSSiM that it did.

*Let us know in the comments what your most significant mathematical experience was, and what effect it had on your life.*

“Can you recommend a good math tutor?” I hear this question from friends with children in local schools, academic support staff at my institution, and my own students. Once or twice I’ve even heard it from a student on the first day of class. Although tutoring has much in common with other educational settings, it presents its own opportunities and challenges. In this post, I explore why one-on-one instruction is so appealing as a supplement to classroom instruction, and how effective tutors make the most of tutoring sessions.

As Lepper and Woolverton point out in setting the stage for “The Wisdom of Practice: Lessons Learned from the Study of Highly Effective Tutors” [2, p. 138], “tutorials provide a venue for learning that is inherently more individualized, more immediate, and more interactive than most common school settings.” Specifically, individualization ensures more focused attention from both tutor and tutee. Immediacy allows for instantaneous feedback. Interactivity means that the tutor can make real-time decisions and adjustments as the student’s comprehension level and emotional state become more clear.

The authors go on to identify specific practices of expert tutors. While the overview is limited to studies of tutors for elementary school students studying mathematics, many of the effective practices it describes are also applicable to secondary and college mathematics settings. For example, “our best tutors seem to prefer a Socratic to a more didactic approach” [2, p. 146]. Naturally this approach involves asking questions and providing hints rather than providing quick answers. It also includes making a distinction between “productive” and “nonproductive” errors [2, p. 147] and responding accordingly. A productive error is one that the student can self-correct, with the long-term learning benefits that ensue, while a nonproductive error is best corrected immediately by the tutor.

Readers of *How People Learn* and related works will recognize a metacognition theme in this observation of Lepper and Woolverton: “more effective tutors are more likely to ask students to articulate what they are learning, to explain their reasoning and their answers, and to generalize or relate their work in the tutoring session to other contexts and problems.” An expert tutor, then, guides the interaction not only for strong communication with the tutee, but also to strengthen and reinforce learning.

The question of tutor-student communication is a complex one. In a research review, Graesser et al. [1, p. 418] point out five “illusions” that tutors may hold: the illusions of *grounding*, *feedback accuracy*, *discourse alignment*, *student mastery*, and *knowledge transfer*. These categorize the misunderstandings that tutors often have about their students’ thinking. Have you ever been asked whether you understood something, and said “yes” even though you weren’t sure? You were giving inaccurate feedback, and your questioner may not have caught on. Of course even a sincere “yes, I understand” may be inaccurate, as “it is the knowledgeable students who tend to say ‘No, I don’t understand.’ This result suggests that deeper learners have higher standards of comprehension” (p. 414).

For an example of poor discourse alignment, note that “tutors sometimes give hints, but the students do not realize they are hints” [p. 418]. Now that is a reality check. In our previous post, Jess Ellis Hagman wrote, “Mathematics education research is the systematic study of the teaching and learning of mathematics.” Sometimes a seemingly small detail emerging from such study can have profound implications.

More from Graesser et al.: “A good tutor is sufficiently skeptical of the student’s level of understanding. … A good tutor assumes that the student understands very little of what the tutor says and that knowledge transfer approaches zero … (E)xpert tutors are more likely to verify that the student understands what the tutor expresses by asking follow-up questions or giving follow-up troubleshooting problems” [1, p. 419]. I recall working with an algebra student who insisted that he understood the relationship between the graphs of $y = x^2$ and $y = x^2 +2$, even though his graphs intersected. Rather than pointing at the intersection and explaining my concern, I should have suggested that he add the graph of $y = x^2 + 1$ and tell me what he noticed.

Given recent research on the effects of students’ emotions and mindsets on learning, how do good tutors attend to those factors? For one thing, while they are supportive and kind, they are sparing with praise. When these tutors do offer compliments, they refer to the work, not the person. The compliment might be an indirect one, such as a simple, “That was a hard problem you just did.” Good tutors also find ways to turn control over to their students by, for example, letting the tutee choose between two equally challenging problems [2].

Many of the above observations about effective tutoring, and potential pitfalls, are relevant to considerations of classroom instruction, especially active learning environments in which instructors have frequent, though short, interactions with individual students and small groups. In addition, faculty office hours are often sequences of tutoring sessions. Occasionally I’ve had the sense that a meeting with a student didn’t go well because I said too much or corrected an interesting mistake too soon. The research seems to confirm my impressions.

Still, tutoring is different from classroom instruction in significant ways. Most obviously, perhaps, tutoring usually happens when someone determines that special intervention is required. A student is struggling, or not doing as well as expected. Perhaps the student’s parents see tutoring as a way to improve grades or test scores for college applications. Under these conditions, it is especially important for the tutor to attend to the student’s affective state.

Additionally, although the appeal of tutoring as a remedy springs from the one-on-one nature of tutoring sessions, there are usually other people on the periphery. There’s the classroom instructor, who may have recommended tutoring, or may not know that it is happening. Perhaps the student’s parents are involved. Many school districts coordinate tutoring programs in cooperation with local organizations. It seems reasonable to conclude that communication challenges come along with those added relationships.

For one thing, the tutor may not know or understand the instructor’s learning objectives for the student. A peer tutor for my Calculus I students may have taken AP Calculus in high school, which can be a very different course from mine. A volunteer tutor in a public school might remember shortcuts for working with fractions, while the teacher wants to *Nix the Tricks*.

Further, the tutor might not have a deep understanding of the relevant mathematical content. As a sophomore in college, I signed up to be a peer tutor. A junior came to me for help with multivariable calculus. She was baffled by parametric curves, which hadn’t been covered in my multivariable course the previous year. At the time I was mortified, feeling somehow that I’d failed personally. But she and I tried to work through that section of the textbook together, which (I now recognize) was probably good for both of us. According to [1], “tutors in …same-age and cross-age collaborations tend to learn more than the tutees” (p. 412). It’s probably important that I knew what I didn’t know about parametric curves. In contrast, a colleague once overheard a peer tutor say, “the individual terms of the series go to zero, so it has to converge” in our department common room. Fortunately, our peer tutors now undergo appropriate training before they start.

Can I recommend a good math tutor? Yes, but I would want that tutor to get training first. It wouldn’t hurt to also read [1] and [2]! (Other resource suggestions are welcome in the comments.) Good tutors know that showing and telling should be used sparingly, and only after careful listening.

*Thanks to Steve Klee for directing me to* [2].

**REFERENCES**

[1] Graesser, A. C., D’Mello, S., & Cade, W. (2011). Instruction based on tutoring. *Handbook of research on learning and instruction*, 408-426.

[2] Lepper, M. R., & Woolverton, M. (2002). The wisdom of practice: Lessons learned from the study of highly effective tutors. *Improving academic achievement: Impact of psychological factors on education*, 135-158.


I’ve recently finished my third year as an assistant professor in the mathematics department at Colorado State University. Since my research area is mathematics education, I am often asked what it is like to be a math-ed researcher in a math department. Such curiosity points to a cultural difference between mathematicians and mathematics-education researchers, and alludes to a specific culture where it may be difficult to be an education researcher in a mathematics department. To me, this question sometimes feels akin to being asked what it is like to work at Hogwarts as a Muggle, surrounded by real witches and wizards. Certainly, this comparison carries with it some information about how I perceive the question: that mathematicians are the real researchers, and that as a mathematics-education researcher I am lurking in their world. While this may be how I hear the question, it is very far from my experience in my math department with my colleagues. There are about 30 faculty in my department and three of us are active mathematics-education researchers. I have had overwhelmingly positive interactions in my department and feel valued as a teacher and as a researcher. When asked how I have had such a positive experience in my department (i.e. how I have gained acceptance at Hogwarts by the wizards and witches), my answer is both that my colleagues are just great people and that we have good relationships because we have gotten to know each other and each other’s work through conversations rooted in curiosity. I think it’s been valuable that we respect each other both as people and as researchers. In this blog post, I want to share some of the substance of what I have shared with them about mathematics education research.

**Overview of mathematics education research**

Mathematics education research is the systematic study of the teaching and learning of mathematics. This means that the types of questions we ask are about how people (students of all ages, non-students of all ages) think about and do mathematics, and about how people teach, why they teach that way, and what students might learn from that instruction. We use quantitative (typically survey-based) and qualitative (typically interviews and observations) research methods to answer these questions. By its nature, math-ed research is extremely broad. All sorts of people think about and do mathematics – including school children, college students, graduate students, nurses, research mathematicians, mathematics teachers, food cart vendors, etc. So when we ask questions about how people learn mathematics, we can attend to different conceptions of learning (I’ll say more about this below), different populations of people, different types of mathematics, where the people are thinking about or doing the mathematics, how their experience with mathematics relates to other things, and more. When we ask questions about the teaching of mathematics, we can also attend to the components above, as well as different ways that we can teach effectively and how teaching is impacted by internal and external factors.

**Flavors of Math-Ed Research**

There are a number of ways of categorizing different types of math-ed research. I’ll go over a few, specifically: topic of focus, pure versus applied, and two different strands emphasizing the post-secondary level.

*Topic of Focus*

The most obvious and common ways to split up different areas within math-ed research are by population of students/learners (elementary, secondary, post-secondary, future teachers, in-service teachers, graduate students, mathematicians, etc.) and by content area (geometry, algebra, calculus, etc.). Thus, it is common to describe a math-ed researcher as: “She studies proof and reasoning of students across age levels” or “He studies how teachers understand proportions and fractions.” While these are overly simplified versions of how we might actually describe these two specific math-ed researchers, they illustrate my point.

*Pure and Applied*

Just like mathematics researchers differentiate between research conducted without any practical end-use in mind and research conducted in order to solve a specific problem, math-ed research has pure and applied flavors. Because math-ed research is focused on education, it may be easier to find applications for its pure strands, but there is certainly an abundance of math-ed research done without an intended concrete application. Alan Schoenfeld (2000) delineates pure and applied math-ed research by their goals: pure math-ed research is done in order “to understand the nature of mathematical thinking, teaching, and learning,” and applied math-ed research is done in order “to use such understandings to improve mathematics instruction” (p. 641). Often in math-ed research, pure investigations quickly become relevant, and other researchers are able to leverage such work directly in concrete settings.

Pure math-ed research may look at topics such as the cognitive structures people hold and develop surrounding calculus (e.g. Pat Thompson’s work). Applied math-ed research, on the other hand, is more directly focused on how to improve mathematics instruction. I consider much of the MAA Calculus Project’s work to fall under this category – we have focused on investigating what makes a calculus program especially good for students and how to support other mathematics departments in improving their programs. Often I find that pure math-ed research relies on and extends theory (this will be explained more below) much more than applied work does.

*RUME and SoTL*

One subfield within applied math-ed research comes from college mathematics faculty who do scholarly work around their teaching, called the Scholarship of Teaching and Learning (SoTL). This contrasts with the Research in Undergraduate Mathematics Education (RUME) community, which is the primary academic home for mathematics education researchers (pure and applied) who focus our work on undergraduate mathematics, undergraduate mathematics students, teachers of undergraduate mathematics, or undergraduate mathematics programs. SoTL is a community of academics from different disciplines who are interested in scholarly inquiry into their own teaching of their discipline. In mathematics, this community is primarily populated by mathematicians who engage in scholarship related to college-level mathematics. Since both communities use scholarly principles to investigate the teaching and learning of undergraduate mathematics, there are many overlapping questions of interest. However, there are also some key differences. Curtis Bennett and Jacqueline Dewar, prominent leaders in the mathematics SoTL community, describe the differences between “teaching tips”, SoTL, and RUME as follows:

Teaching tips refers to a description of a teaching method or innovation that an instructor reports having tried “successfully” and that the students “liked.” If the instructor begins to systematically gather evidence from students about what, if any, cognitive or affective effects the method had on their learning, she is moving toward scholarship of teaching and learning. When this evidence is sufficient to draw conclusions, and those conclusions are situated in the literature, peer reviewed, and made public, the instructor has produced a piece of SoTL work…. Mathematics education research or RUME is more in line with Boyer’s “scholarship of discovery” wherein research methodologies, theoretical frameworks, empirical studies, and reproducible results would command greater importance. This naturally influences the questions asked or considered worth asking, the methods used to investigate them, and what the community accepts as valid. (Bennett & Dewar, 2012, p. 461)

To carry their progression, from a teacher’s description of a good teaching innovation to her production of SoTL work, on to a RUME study, I will put this in the context of teaching proofs. A nice example of a teaching tip related to proofs is found in a blog post called “How to teach someone how to prove something,” where the author describes that when she teaches proof she has asked “each student to give a presentation to the class on some proof they particularly enjoyed, and I sat through a preview of their presentation and gave them extensive advice on board work and eye contact.” She says that though this took a lot of work on her end, it was beneficial for the students, claiming that it “really helped them prepare and also boosted their egos while at the same time increased their sympathy with each other and with me.” The author shares a teaching approach with an (unsubstantiated) claim about how it positively affected her students. Both SoTL scholars and RUME researchers would agree that this claim is unsubstantiated because she did not collect data (either from her classroom or others) to support it.

Suppose this same teacher wanted to provide some evidence for this claim that might convince others that her approach is beneficial. She may survey her students’ mathematical confidence before and after the class, and interview them to understand the role of the classroom presentations in their confidence. She could write a paper describing her approach and her findings, connect her work to other literature, and submit this work to a SoTL outlet (such as PRIMUS). The result may look similar to Robert Talbert’s 2015 PRIMUS publication describing the benefits of inverting the transition-to-proof class, based on the author’s personal reflections as the teacher of the course and on responses to a questionnaire about the class from about 30 of the 100 students in the class. One of the author’s conclusions from this work was that a student-centered introduction-to-proof course shows promise for “helping students emerge as competent, confident, self-regulating learners”.

If this teacher then wanted to pursue this work in a way more aligned with RUME, she would have to identify a specific research question. In RUME, the research question is a necessary component of the work: it identifies the scope of the inquiry and ensures that the research methods are aligned with the question and that the results answer it. One such question exploring the role of proof in students’ beliefs could be: “What are undergraduate students’ beliefs about the nature of proof, about themselves as learners of proof, and about the teaching of proof?”, explored on a scale larger than her own classroom. Such a research question partially guided the work of Despina A. Stylianou, Maria L. Blanton, and Ourania Rotou in their 2015 publication in the International Journal of Research in Undergraduate Mathematics Education. To answer their research questions, the authors surveyed 535 early undergraduate students from six universities and then conducted follow-up written tests and interviews with a subset of the students to better understand the survey results. One of the findings from this work was a strong positive relationship between students’ beliefs about the role of proof and their views of themselves as learners.

The claims made by the teaching tip, the SoTL work, and the RUME work all shed light on the positive relationship between engaging in mathematical proofs and students’ beliefs about themselves. The differences lie in the audience for the claims, the degree to which the argument may convince others of the claims’ validity, and the role of theory in the arguments.

**Role of Theory**

Since mathematics education researchers are concerned with the teaching and learning of mathematics, math-ed research draws on theories of learning, often from psychology. In pure math-ed research, this theory is often made very explicit, as in Pat Thompson’s work, where he draws explicitly on Jean Piaget’s constructivist perspective. In applied math-ed research, such as the work through the MAA’s Calculus projects (which I am involved with), the theory may be more implicit, meaning that it is not at the forefront of the work but that there is an underlying theory guiding it. In SoTL work, there is often no guiding theory of learning, implicit or explicit. This is mostly due to a combination of differences in the expectations and goals of SoTL versus mathematics education research. To wrap up this post, I will give a (very!) brief overview of this idea.

A theory of learning is an explanation of how people learn – observe that this is subtle, as other ways of phrasing this sentence carry with them different assumptions about learning.

- “…how people gain knowledge” assumes that knowledge is something to be acquired, drawing on an **acquisition metaphor** of learning, where our brains are vessels for carrying around our knowledge.
- “…how people develop knowledge” draws on a **constructivist perspective**, where each individual reconstructs mental images based on interactions in the world.
- “…how people become more proficient in certain practices” is a more **participation-oriented** phrasing, highlighting that knowledge is not something one owns but something one does, by participating in the practices of a community at a growing level of expertise.

Based on which explanation of how people learn a researcher subscribes to, they will ask different research questions and answer those questions using different approaches. For instance, suppose a researcher were interested in exploring student learning of derivatives.

Taking the first approach to learning (i.e., drawing on an acquisition metaphor), a math-ed researcher may design a study to investigate “How much do students learn about the derivative in teaching approach A?” To answer this question, the researcher could develop a test with a number of derivative questions, administer the same test to students at the beginning and end of the class, and compare the results. This approach assumes that what students have learned in the classroom setting is carried with them into a testing situation, and that how they do on the exam is indicative of what they know.

If, instead, the researcher draws on the second approach to learning (a constructivist approach), then she may ask the question “What are different student conceptions of derivative?” To answer this question, she may create a think-aloud interview in which students are filmed or recorded working on various problems about the derivative and asked to explain how they are thinking about the problems. This approach assumes that an interview setting can recreate a situation where students can access and share how they make sense of the derivative.

Lastly, if the researcher subscribes to the third perspective on learning (a participation-oriented approach), then the question asked may be “How do students use their understanding of derivative in mechanical engineering classes?” This question could be answered by observing student interactions in the classroom as they work in groups on problems that rely on the derivative. This approach assumes that it does not make sense to decontextualize student thinking from the real learning environment.

**Conclusions**

This brief introduction to mathematics education research was written to shed some light on the aspects of math-ed research that are often of interest to mathematics researchers, from my perspective as a Muggle in a Math department. If you have more questions about what we do – ask us! For more information, check out the SIGMAA on RUME page, which lists publication venues and conferences related to RUME. This book about SoTL describes the community, and much more can be found online. Lastly, both RUME and SoTL sessions appear at the Joint Meetings, which are great ways to get a small taste of this work. In closing, please enjoy these musings about Muggles:

“Muggles have garden gnomes, too, you know,” Harry told Ron as they crossed the lawn… “Yeah, I’ve seen those things they think are gnomes,” said Ron, bent double with his head in a peony bush, “like fat little Santa Clauses with fishing rods…”

― J.K. Rowling, *Harry Potter and the Chamber of Secrets*

“The wizards represent all that the true ‘muggle’ most fears: They are plainly outcasts and comfortable with being so. Nothing is more unnerving to the truly conventional than the unashamed misfit!”

― J.K. Rowling, Salon, 1999

For several years I’ve been incorporating active-learning and inquiry-based learning activities in my teaching. There is ample documented evidence of the benefits of these approaches for students, but equally important, they make teaching and learning more fun! Shifting class time from lecturing to having students work on problems, present their solutions to the class, and explain answers to each other has a dramatic effect: students become more engaged, learn communication skills, and gain confidence. These soft skills are in high demand in the job market. In this article, I will describe my use of these approaches and my experience teaching in a classroom designed for collaborative learning.

So far I’ve mostly been doing these active-learning activities in traditional classrooms, but for smaller classes of about 25 students I’ve used collaborative classrooms with great success. The main differences between a “traditional classroom” and a “collaborative classroom” are (A) the seating arrangement and (B) the presence of integrated technology. In a collaborative classroom, students usually sit around tables, often facing each other, which facilitates working in small groups. Many collaborative classrooms do not have an obvious “central location” where the instructor can stand, so teaching in such a classroom takes some getting used to (see picture below). The main hesitation I had with using a collaborative classroom was this lack of a central location from which to lecture. I normally don’t use slides when lecturing, so I wanted a way of emulating writing on a blackboard. I used a tablet computer with writing software to project what I would write on overhead screens. It ended up working very well. Students took notes as I wrote them, and I made the notes available to them after class. As can be seen from the pictures, collaborative classrooms tend to have many screens so students can see at least one of them easily.

As instructors, we are aware that “traditional classrooms” can come with different seating arrangements. Some have individual desks that one can move around, some have tables that are fixed and chairs that can be moved, some have multiple tables and chairs that cannot be moved, and some have a typical auditorium setting. I have taught in all of these types of classrooms and have tried to incorporate active learning techniques with different degrees of success. It is significantly harder to have students work on a problem collaboratively if they can’t really face each other in a natural way. Collaborative classrooms, on the other hand, are designed to foster discussion by having multiple tables where one can move chairs as needed. In the particular classroom I was using, the tables were distributed in such a way that it was easy for the instructor and the teaching staff (composed mostly of undergraduate students who had done well in the class in prior semesters) to circulate in the room to answer questions and address students.

Many of these collaborative classrooms also have multiple screens where the instructor can project information in a way that all students can see easily, without rearranging the way they are seated. So, a collaborative classroom accomplishes two goals: it allows students to work in groups, thus allowing the teaching staff easy access to every student, and allows for multiple displays so that the entire class has an easy view of what the instructor is projecting. There is no need to rearrange the seating every time one transitions from group-work time to “instruction” time and back.

This past summer I had the opportunity to teach in a collaborative classroom for a larger class of 59 students. This class was a proof-based introductory discrete mathematics course that emphasized logic, proof techniques, and both oral and written communication of mathematical ideas. The class did better overall than the same class in the regular semester. I was happy about how things went, and I decided to share my experience in case other instructors are considering utilizing more collaborative approaches to teaching. To take advantage of the collaborative space, I incorporated the following components.

**Course staff helped me answer questions while students worked during class.** To make this process work, it was important to have more than one teaching staff member in the classroom. To accomplish this, I recruited a few undergraduate students who had taken the class previously and had done exceptionally well. When it was time to work on the worksheet problems, we had five people walking around (one instructor and four undergraduate instructors), answering questions and talking to the students about the class material. These undergraduate instructors also held office hours, so we ended up having about 13 office hours every week.

Choosing the right undergraduate instructors is extremely important. I selected students who I knew could do the job, understood the material reasonably well, and were able to express mathematical ideas. Seeing them work with students was also a rewarding experience, as I was able to notice a significant improvement in their mathematical ability since they had taken the class. There is no better way to learn a topic than to teach it! We also had graduate assistants, who were in charge of grading homework, but in my experience undergraduate instructors do an excellent job understanding student questions, even if they are not perfectly formulated. There is something about talking to a peer that makes everyone, student and teaching assistant, more comfortable.

**Reading quizzes, both individual and team-based.** The idea of these reading quizzes comes from team-based learning (TBL), where instructors assign a reading before class, and at the beginning of the class they give an individual quiz (referred to as an individual readiness assessment test, or iRAT) and a team-based quiz (referred to as a team readiness assessment test, or tRAT). Both the iRAT and tRAT for a given day have the same questions. At the beginning of the term, students were placed in teams according to a brief survey asking them about their level of comfort with teamwork as well as with logical and mathematical thinking. Then groups of four were formed according to their answers in such a way as to have “balanced teams.” These teams were used for the team quizzes and in-class work. For the reading quizzes, I assigned a specific section from the textbook for each class, and then gave a quiz on that section before it was officially covered in class. For many students, the idea of being asked questions before seeing a topic in class is preposterous. Nonetheless, reading comprehension is an important skill to develop. So that they wouldn’t greatly affect the students’ grades, the topics were carefully chosen, and these quizzes didn’t count for a large portion of the final grade (but did count for something, as otherwise students might not be motivated enough to do the reading). The students would first do the quiz individually, and then would get together in their teams and work on the team quiz. Not surprisingly, students did better on the team version of the quiz than on the individual version. I witnessed many spirited discussions as members of the same team were choosing their answers: students were indeed teaching each other!

**Worksheets containing a summary of the major concepts for a given class, along with problems to test student knowledge.** I prepared a worksheet for every class that included the basic definitions and then several problems for students to work on. Students were given time to work on the problems while the teaching staff walked around, answered questions, and discussed the problems with students (without giving them the answers). Most of the lecture time was spent clarifying concepts from the reading and providing examples that would inevitably bring more questions. But I tried to avoid talking continuously for more than 10 minutes and would provide several “breaks” where students would work on the problems in the worksheets.

**Opportunities for students to explain their work to others.** After several students had worked on a problem, we selected someone to present the solution to the rest of the class. We utilized a document camera to project students’ work on an overhead screen and had the student walk us through their solution. Sometimes the instructor or other students would ask questions. I would often compare the work of multiple students, which was a great way to highlight the fact that there are multiple correct ways of solving a problem or proving a proposition. I would also show work that wasn’t quite complete or correct, without revealing which student had done the work, and ask the class how to fix the mistake or how to complete the problem.

Overall, teaching in a collaborative classroom was a great experience. I will be politely requesting these kinds of rooms from the powers-that-be for all my future classes!

Every university instructor would be thrilled if their students came to their mathematics classes with the ability to make viable arguments and to critique the reasoning of others; if their inclination were

- to persevere through difficult problems,
- to look for and make use of mathematical structures, and
- to strategically use tools in their mathematical toolbox.

But how do students develop these mathematical practices? The foundation is laid during a student’s 13 years of mathematics classes in K-12 – learning from their teachers and engaging in mathematics with their peers. The eight Mathematical Practice Standards, an integral part of the Common Core State Standards (CCSS) for Mathematics, have elevated the importance and visibility of productive mathematical habits of mind in K-12 education. They are now an expectation, not a bonus. But are teachers equipped to help their students develop the practices until they become habits? Do teachers even have productive mathematical habits of mind themselves?

We actually know quite a bit about pre-service teachers’ habits of mind from research (Karen King: Because I love mathematics, Mathfest 2012, Madison). For example, pre-service teachers who hold mathematics degrees have an inclination to first state rules (Floden & Meniketti, 2005). They are not in the habit of seeking meaning, which is such an important mathematical habit of mind. We can think of habits as acquired actions that we have practiced so much that we eventually do them without thinking. At first they are deliberately chosen, but at some point they become automatic.

This has important implications for teaching at the university level, especially for pre-service teachers. Many professors and policy makers assume that completing a major in mathematics builds some kind of maturity. Undergraduate courses should be an opportunity to further refine productive mathematical habits of mind. Instead, this coursework often appears to reinforce unproductive habits of mind for engaging in mathematical practice. So I think we college/university faculty should take a serious look at what we are doing in our classes—not just in specific classes for future teachers, but in all our math classes. Mathematics faculty have a tendency to assign responsibility for K-12 math teacher quality to math education courses. But let’s think about that for a moment. In California, future high school teachers take 4 credit hours of math methods courses in their credential program. If they are lucky, they take at most a handful of courses as part of a math major specifically designed for future teachers, maybe 6 more credit hours. And they complete about 40 credit hours of mathematics content courses that are part of the normal mathematics degree programs. If they don’t learn productive mathematical habits of mind from their professors in their 10 or more college math courses, then who is responsible for this?

This is our responsibility and our opportunity! Pre-service teachers come to college with already formed ideas of what mathematics is and how the game of mathematics is played. They have already developed mathematical habits of mind—for good or for bad. It is up to us to help them replace unhelpful habits and develop productive habits, and we have approximately 4 years to do it.

When we are trying to change habits and practices, we often focus on directly changing actions and we hope this will lead to better results. In this case, we want teachers to change their teaching practice so that all students will develop productive mathematical habits of mind. But actions are affected by beliefs and beliefs are based on experiences. So it would be much more productive for us to provide pre-service teachers (and all students) with a series of compelling and positive experiences to change their beliefs. This, in turn, will lead to more coherent, consistent, grounded, and therefore stable results.

In my work with in-service teachers around transitioning to the CCSS, we have explored a variety of productive pedagogical ideas that provide students with experiences where they engage in mathematical practices. I have adopted several into my college classroom to better prepare my students for their work as teachers but also because I think this is simply good teaching for everybody. I’ll give two examples that focus on “Make a viable argument and critique the reasoning of others”.

**Gallery Walks**

In many of our courses, students write proofs; this is a mathematician’s idea of a viable argument. How do students learn how to write a proof? What are characteristics of a good proof? How do you critique other people’s arguments? On the first day of my combinatorics and graph theory class we worked on the following problem:

Students first collaborated on the problem in groups of 3—4. After students solved the problem, they made a poster to explain how they found their solution and how they knew that they had found all solutions. We then did a gallery walk: With a stack of sticky-notes in hand, students studied each poster. They asked questions about parts that they did not understand and they made suggestions when they found something that could be improved. They also pointed out aspects of the posters they found helpful in understanding the argument.

(sample posters with sticky note feedback)

Next, students went back to their own posters and studied the feedback they had received. They discussed revisions, and for homework each student individually wrote up an improved version of their proofs.

Before we finished the class, we had a discussion about the purpose of this activity. Students were surprised about the variety of proofs they had seen. After reading each other’s solutions, they were able to decide if there were gaps in arguments and describe what made a proof easy to read. They saw that there are a variety of ways to structure the argument, that a complete proof is not necessarily a good proof, and that a “proof by example” is not a proof but could possibly be revised into a general proof. They recognized the value of their peers’ feedback; and that they did not need the instructor to validate their proofs—rather, they possessed the mathematical authority to do so themselves.

You may ask: Our students write proofs and have to show their work all the time, why is this activity useful? In this case, it set the tone for the semester, and it made expectations clear to the students. Aside from seeing that they would be expected to actively work with their peers in class, they also experienced giving feedback and then using feedback to revise their work. They learned that an important goal of mathematics is communicating solutions, not just getting answers, and for the future teachers in the room, they saw a pedagogical structure they can use at any grade level and in any subject.

I do variations of the gallery walk in most of my classes a few times each semester. It works with modeling problems in calculus just as well as with proofs in real analysis.

**Re-engagement Lessons**

Every instructor knows the following situation very well: Students have done a task. You assess it. There are major gaps. What do you do? You could

- Re-teach the topic or do more examples.
- Offer review sessions or office hours for students with gaps and work with them separately.
- Ignore the gap, go on, and hope the students will pick the content up later.

I want to describe another option: re-engage the students with the task and the concepts, using their responses to move everybody forward.

While learning how to write proofs involving the algebra of sets in my “Intro to Proofs” class, students did the following standard problem on a homework assignment: Given sets \(A\) and \(B\), prove that \(A \cup (B - A) = A \cup B\). While grading the homework, I found myself writing the same comments over and over again: “Pick a point,” “double set inclusion,” etc. I decided to use the proofs that students had written as the basis for the next day’s activity. To prepare, I compiled a collection of students’ proofs. In class, I handed out copies of these proofs to pairs of students. I asked them to discuss:

- What is good about each proof?
- Are there actual mistakes? Gaps?
- What makes a proof easy to understand? Hard to understand?
- Fill in gaps, correct mistakes.

Then we had a whole class discussion, keeping track on a document camera of changes students suggested.
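For readers who want the target argument, here is a sketch of the pick-a-point, double-set-inclusion proof those marginal comments were steering students toward (my own write-up, not one of the student proofs):

```latex
% Double-inclusion proof of A ∪ (B − A) = A ∪ B
\begin{proof}
($\subseteq$) Let $x \in A \cup (B - A)$. If $x \in A$, then $x \in A \cup B$.
Otherwise $x \in B - A$, so $x \in B$, and hence $x \in A \cup B$.

($\supseteq$) Let $x \in A \cup B$. If $x \in A$, then $x \in A \cup (B - A)$.
Otherwise $x \in B$ and $x \notin A$, so $x \in B - A$, and hence
$x \in A \cup (B - A)$.

Since each set is contained in the other, $A \cup (B - A) = A \cup B$.
\end{proof}
```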

Why was this activity better than just going over the proof again on the board or doing a similar problem, which would certainly have been faster?

Because the activity used a compilation of actual student work, students were invested in the exercise from the start. They had already engaged with this problem, so even if they had not written a perfect proof, they had a basis to build on. The examples I chose included good and bad features of proofs. The contrast and repetition allowed the students to transfer ideas from one to the other. The setup of the activity allowed students at every level to engage and benefit. One of my top students told me after the class that he had learned a lot about reading and critiquing others’ work. Finally, by contrasting several proofs, we had an excellent discussion about the structure of proofs, not just small details.

Research is compelling that students learn more from making and then confronting mistakes than from avoiding them (Boaler, 2016). My goal as a teacher is shifting from providing clear explanations so students don’t make mistakes to creating situations that are likely to produce important mistakes, and then helping the entire class confront and learn from those mistakes. Re-engagement lessons are a great method for this confrontation.

This is just one example of a re-engagement lesson. David Foster from the Silicon Valley Math Project contrasts re-teaching and re-engagement:

| Re-teaching | Re-engagement |
| --- | --- |
| Teach the unit again. | Revisit student thinking. |
| Address basic skills that are missing. | Address conceptual understanding. |
| Do the same or similar problems over. | Examine the task from a different perspective. |
| Practice more to make sure students learn the procedures. | Critique student approaches/solutions to make connections. |
| Focus mostly on underachievers. | The entire class is engaged in the math. |
| Cognitive level is usually lower. | Cognitive level is usually higher. |

(Foster & Poppers, 2009)

I offer the two classroom activities as examples to help us start talking about changing the mathematics culture in our classrooms and schools so that all students, including future teachers, have experiences that support them in forming productive mathematical habits of mind.

To educate our students to become mathematicians and teachers we have to do more than role-model mathematical practices, we have to create the environment where students engage in them, and we have to talk more about what we are doing and why. We have 4 years to help our students replace bad mathematical habits (speed, answer-getting, anxiety) with productive ones (sense-making, perseverance, use of tools and structure). This is our responsibility, but maybe even more importantly, this is our opportunity.

**References:**

Boaler, J., & Dweck, C. S. (2016). Mathematical mindsets: Unleashing students’ potential through creative math, inspiring messages and innovative teaching.

Connors, R., & Smith, T. (2012). Change the culture, change the game: The breakthrough strategy for energizing your organization and creating accountability for results. [Also https://www.partnersinleadership.com/insights-publications/changing-your-culture/]

Common Core State Standards: http://www.corestandards.org/Math/Practice/

Floden, R., and Meniketti, M. (2005). Research on the effects of coursework in the arts and sciences and in the foundations of education. In M. Cochran-Smith and K. Zeichner (Eds.), Studying teacher education: The report of the AERA panel on research and teacher education. Mahwah, NJ: Lawrence Erlbaum Associates

Foster, D. and Poppers, A. (2009). Using Formative Assessment to Drive Learning: http://www.svmimac.org/images/Using_Formative_Assessment_to_Drive_Learning_Reduced.pdf

“I am so glad you made that mistake,” I’ve come to realize, is one of the most important things I say to my students.

When I first started using inquiry-based learning (IBL) teaching methods, I had a tough time creating an atmosphere where students felt comfortable getting up in front of class and presenting their work. It is a natural human instinct to not want to expose your weaknesses in front of others. Making a mistake while presenting the solution to a problem at the board is a huge potential source of embarrassment and shame, and hence also anxiety. So how do we—as educators who understand the critical importance in the learning process of making and learning from mistakes—diminish the fear of public failure in our students? For me, the answer involves persistent encouragement. It also relies on setting the right tone on the first day of class.

To prepare my students on Day One of class, I talk about the importance of making and learning from mistakes. I often refer to one of my favorite books on this subject, *The Talent Code* by Daniel Coyle [1]. Coyle has studied several hotbeds of “genius,” places where an unreasonable number of virtuosos—e.g., world-famous violinists, baseball players, and writers of fiction—emerge. He is interested in discovering just how people like Charlotte Brontë, Pelé, and Michelangelo learn to perform at the top of their fields. The answer involves a simple idea: talented people are those who have made far more mistakes than others and who have deliberately learned from those mistakes. For my students, the takeaway is that the most accomplished people have made many more mistakes than the average person. Consequently, it is of high value for us to make our mistakes public and discover how to correct them together. (As a side note, Francis Su employs the same strategy in his article “The Value of Struggle” [2].)

After the first day of class, whether I am teaching Quantitative Reasoning, Calculus, or a more advanced course such as Introduction to Knot Theory, nearly every class period begins with presentations of homework problems by student volunteers. Students have homework due each day, and they are required to present problems a certain number of times during the term. The number of problems we do depends on how long the class period is, how complex the problems are, and what I need to teach in the remainder of class. In a course like Introduction to Knot Theory, we might spend 45 minutes or an hour on student presentations, while we will spend 20-30 minutes on calculus homework presentations in an 85-minute class period. This general structure could be modified to fit shorter class periods or weekly recitation sections at universities with larger lecture courses. For instance, we used to teach calculus classes four days a week in 50-minute blocks at Seattle University. Within this structure, I had a weekly “Problem Day” for my calculus classes instead of having daily student presentations of homework. After students volunteer to present problems at the board on a typical class day, all students who are chosen to present simultaneously write up problem solutions while their classmates review the homework or work on another activity. Once all solutions have been written up, we reconvene; one by one, students come to the board to walk us through their solution. *This is where supportive facilitation becomes critical.*

Encouraging students to make mistakes in the abstract—as I do on Day One—is one thing, but helping students accept their mistakes in front of class is quite another. This is where my new catch phrase comes in. Let’s say, for example, a student is computing the derivative of \(y=x^2\sin x\) at the board and writes \(y’=2x\cos x\). I might say, “I am so glad you made that mistake! You’ve just made one of the most common mistakes I’ve seen on this type of problem, so it’s worth us spending some time talking about. Can anyone point out what the mistake is?” If someone in the class comments that the presenter should have used the Product Rule, I might follow up with, “That’s a good idea. How can we see that this function is a product? Let’s work together to break the problem down into pieces.” Going forward, I facilitate the process of the class coming up with their collective correction of the mistake. Collaboratively working to correct mistakes like this tends to help students observe more subtle differences between different types of problems while building a more sophisticated mental problem-solving framework.
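For reference, the correction the class is being guided toward is the Product Rule computation (the mistake above differentiated each factor separately and multiplied the results):

```latex
% Correct derivative of y = x^2 sin x via the Product Rule
y' = \frac{d}{dx}\!\left(x^2\right)\cdot \sin x
   + x^2 \cdot \frac{d}{dx}\!\left(\sin x\right)
   = 2x\sin x + x^2\cos x
```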

Making and correcting mistakes together can also help address more basic misconceptions. Suppose a student—let’s call them Riley—writes, in the middle of a calculus problem, a line like the following.

\(1/(x+x^2) = 1/x + 1/x^2\)

This mistake will most likely lead to an incorrect final answer. Many of the presenter’s classmates will discover the final answer is wrong, and some will even be able to pinpoint where the computation went awry. How would I address this? Once a classmate has identified the problem, I might say, “Riley, I’m so glad you made that mistake! This is one of the most common algebraic mistakes students make in calculus—I’m willing to bet others in the class made this same exact mistake, so it’ll be really helpful for us to talk about it together. This is a question for anyone in the class: How can we prove that this equality doesn’t hold, in general?” Suppose a student, Dana, in the audience suggests we try plugging in some numbers to see what happens. I’d follow up with, “Riley, could you be a scribe for this part of the discussion? Please write up Dana’s suggestion beside your work. Dana, can you tell Riley exactly what to write?” Once we’ve cleared up the confusion with Riley’s algebra, I might ask them to work through the rest of their problem again at the board, fixing their work accordingly. On the other hand, if Riley appears to be too shaken or confused to fix the rest of the problem or if the actual problem was much more complex than the one that resulted from the algebraic error, I might ask the class to collectively help Riley figure out what to write each step of the way. A third option I frequently use is the “phone a friend” option. I could see if Riley wants to “phone a friend” in the class to dictate a correct answer.
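Dana’s plug-in-numbers suggestion settles the question immediately; substituting \(x = 1\), for instance, gives different values on the two sides:

```latex
% The claimed identity fails at x = 1
\left.\frac{1}{x + x^2}\right|_{x=1} = \frac{1}{1 + 1} = \frac{1}{2},
\qquad
\left.\left(\frac{1}{x} + \frac{1}{x^2}\right)\right|_{x=1} = 1 + 1 = 2,
\qquad
\frac{1}{2} \neq 2.
```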

Mistakes can be common in class presentations, but I occasionally have a class that is so risk-averse that very few people offer to present their work unless they know it’s perfect. If I have too many correct solutions presented, but I know some in the class are struggling, I might follow up with a comment like: “That was perfect! Too bad there were no mistakes in your work for us to learn more from. I’d like to hear from someone who tried a method for solving this problem that *didn’t* work out so well. Would anyone be willing to share something they tried with the class?” At this point, someone may come forward with another (incorrect, or partially correct) way to attempt the problem. If nobody comes forward, I could offer a common wrong way to do the problem and ask my students to identify the misunderstanding revealed by my “solution.” I might even tell a little white lie and say something like, “When I first learned this concept, I had a lot of trouble understanding it. I made the following mistake all the time before I figured out why I was confused.” Alternatively, I could mention, “The last time I taught this class, someone made the following mistake. What’s wrong with this approach to solving this problem?”

Now, let’s say one of my students has just presented a problem at the board. Perhaps they made a mistake, or perhaps they did everything perfectly. What happens next? I will ask the class, “Any questions, comments, or *compliments*?” The request for compliments is one of the most important parts of this solicitation of feedback. It is so important that, during the first several weeks of class, I make my students give each presenter at least one compliment. Some of the best compliments I’ve heard from students follow some of the worst presentations. For instance, after a disastrous presentation where the presenter appeared clueless and needed their peers to help them complete all parts of a problem, a student of mine once observed, “That took a lot of guts to get up there and make mistakes. I thought you did a great job fixing the solution and taking constructive criticism from us!” If nobody offers up such a supportive compliment after a bad presentation, I might give this feedback myself to publicly recognize the presenter’s courage. What’s more, if a student appears shaken by the experience of messing up so thoroughly, I’ll follow up again after class, reinforcing my appreciation for their bravery. Over time, this strategy helps build a supportive classroom environment.

Looking back on how my classes have evolved, I can see that it is difficult to convince students to be vulnerable in a math class without the three following elements:

(1) setting the stage by sharing my expectations of students making mistakes and being clear about the *reasons* for these expectations,

(2) encouraging students to help each other come to the right answer while recognizing the benefits of making specific mistakes, and

(3) acknowledging students’ willingness to make mistakes both publicly and privately.

We’ve been primarily focused on *how* to encourage students to make mistakes, but let’s turn our attention to *why* it might be important in our math classes. One thing that I found to be particularly striking when I started teaching this way was my students’ exam performance. I typically ask a mixture of conceptual and computational questions on exams. I was surprised to see how much more sophisticated students’ responses were to conceptual questions in courses where I spent a great deal of class time on student presentations. At first, this was surprising to me since we spent quite a lot of time in class working through computational problems. The more I reflected on this phenomenon, though, the more it made sense. The repairing of computational mistakes in class often led to a discussion of the more conceptual mathematics underlying the computations. What’s more, these discussions were sparked by students grappling with problems that they cared about—problems they had spent time outside of class trying to solve—and not simply problems they had just been introduced to in the course of a lecture. Discussion that takes place during a homework presentation session seems to stick with students in a way that a “discussion” (where the instructor is doing much of the talking) during a lecture does not.

There are myriad other benefits I’ve observed, including development of a tight-knit classroom community, increased student self-confidence, and more engaged student participation in all aspects of class. In short, I’m convinced. I’m all in. The benefits of teaching this way far outweigh the costs of redistributing precious class time, making room for students to publicly make and collaboratively fix their delightful mathematical mistakes.


What happens to the data from your teaching evaluations? Who sees the data? Are your numbers compared with other data? What interpretations or conclusions result? How well informed is everyone, including you, about the limitations of this data, and conditions that should be satisfied before it is used in evaluating teaching?

Despite many shortcomings of student ratings of teaching (SRT), some of which I mention below, their use is likely to continue indefinitely because the data are easy to collect, and gathering them requires little time from students or faculty. I refer to them as student ratings, not evaluations, because “evaluation” indicates that a judgment of value or worth has been made (by the students), while “ratings” denote data that need interpretation (by the faculty member, colleagues, or administrators) (Benton & Cashin, 2011).

Readers may be asked to interpret the data from their SRT on their annual reviews or in their applications for tenure or promotion. They may even find themselves on committees charged with reviewing the overall teaching evaluation process or the particular form that students use at their institutions, as I did. For these reasons, I thought it might be helpful to discuss some general issues concerning SRT and then present a few practical guidelines for using and interpreting SRT data.

My career as a mathematics professor spanned four decades (1973-2013) at Loyola Marymount University, a comprehensive private institution in Los Angeles. During that time, my teaching was assessed each semester by student “evaluations.” For nearly all of those 40 years this was the only method used on a regular basis. If there were student complaints, a classroom observation by a senior faculty member might take place, which happened to me once as an untenured faculty member. Later on, as a senior faculty member, I myself was called upon to perform a few classroom observations.

During 2006–2011, I also directed a number of faculty development programs on campus, including the Center for Teaching Excellence. In that role, I served as a resource person to a Faculty Senate committee appointed in 2010 to develop a comprehensive system for evaluating teaching. Prior to that, I had participated in a successful faculty-led effort to revise the form students used to rate our teaching, and I worked to develop and disseminate guidelines about how that data should be interpreted. During that two-year process (2007-2009), I discovered that my colleagues and I, and even faculty developers on other campuses, had a lot to learn about the limitations of this data (Dewar, 2011).

Because teaching is such a complex and multi-faceted task, its evaluation requires the use of multiple measures. Classroom observations, peer review of teaching materials (syllabus, exams, assignments, etc.), course portfolios, student interviews (group or individual), and alumni surveys are other measures that could be employed (Arreola, 2007; Chism, 2007; Seldin, 2004). In practice, SRT are the most commonly used measure (Seldin, 1999) and, frequently, the primary measure (Ellis, Deshler, & Speer, 2016; Loeher, 2006). Even worse, “many institutions reduce their assessment of the complex task of teaching to data from one or two questions” (Fink, 2008, p. 4).

The use of SRT has garnered many critics (e.g., Stark & Freishtat, 2014) and supporters (e.g., Benton & Cashin, 2011; Benton & Ryalls, 2016) of their reliability and validity. Back-and-forth discussions about SRT occur frequently on the listserv maintained by the professional society for faculty developers known as the POD (Professional and Organizational Development) Network (see http://podnetwork.org). Earlier this month, in just one 24-hour period, there were 18 postings by 12 individuals on the topic (see https://groups.google.com/a/podnetwork.org/forum/#!topic/discussion/pBpkkck_xEk).

The advent of online courses has provided new opportunities to investigate gender bias in SRT, leading to new calls for banishing them from use in personnel decisions (MacNell, Driscoll, & Hunt, 2015; Boring, Ottoboni, & Stark, 2016). Still, as noted above, experts continue to argue their merits.

Setting aside questions of bias, readers should be aware of the many factors that can affect the reliability and validity of SRT. These include the content and wording of the items on the form and how the data are reported.

Some issues related to the items on the form are:

- They must address qualities that students are capable of rating (e.g., students would not be qualified to judge an instructor’s knowledge of the subject matter).
- The students’ interpretation of the wording should be the same as the intended meaning (e.g., students and instructors may have very different understandings of words like “fair” and “challenging”).
- The wording of items should not privilege or be more applicable to certain types of instruction than others (e.g., references to the instructor’s “presentations” or the “classroom” may inadvertently favor traditional lecture over pedagogies such as IBL, cooperative learning in small groups, flipped classrooms, or community-based learning).
- The items should follow the principles of good survey design (e.g., no item should be “double-barreled,” that is, ask for a rating of two distinct factors, such as *The instructor provided timely and useful feedback.* See Berk (2006) for more practical and entertaining advice.)
- Inclusion of global items, such as *Rate this course as a learning experience*, may be efficient for personnel committees, but data obtained from such items provide no insight into specific aspects of teaching and can be misleading (Stark & Freishtat, 2014).

Regarding how the data are reported:

#1. *Sufficient Response Ratio*

There must be an appropriately high response ratio. If the response rate is low, the data cannot be considered representative of the class as a whole. For classes with 5 to 20 students enrolled, 80% is recommended; for classes with between 21 and 50 students, 75% is recommended. For still larger classes, 50% is acceptable. Data should not be considered in personnel decisions if the response rate falls below these levels (Stark & Freishtat, 2014; Theall & Franklin, 1991, p. 89). (NOTE: Items left blank or marked Not Applicable should not be included in the count of the number of responses. Therefore, the response ratio for an individual instructor may vary from item to item.)
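As a quick illustration, these recommended thresholds can be expressed as a short check. This is a minimal sketch with hypothetical inputs (`enrolled`, `responses`), using only the cutoffs cited above:

```python
def response_rate_ok(enrolled: int, responses: int) -> bool:
    """Check whether an SRT response rate meets the recommended threshold.

    Thresholds follow the guidelines above: 80% for classes of 5-20
    students, 75% for 21-50, and 50% for larger classes.
    """
    if enrolled < 5:
        return False  # too few students for the data to be representative
    if enrolled <= 20:
        threshold = 0.80
    elif enrolled <= 50:
        threshold = 0.75
    else:
        threshold = 0.50
    return responses / enrolled >= threshold

# A 30-student class needs at least 23 responses (75% of 30 = 22.5).
print(response_rate_ok(30, 23))  # True
print(response_rate_ok(30, 22))  # False
```

Remember that, because blank and Not Applicable responses are excluded, this check would have to be applied item by item.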

#2. *Appropriate Comparisons*

Because students tend to give higher ratings to courses in their majors or to electives than they do to courses required for graduation, the most appropriate comparisons are made between courses of a similar nature (Pallett, 2006). For example, the average across all courses in a College of Arts and Sciences or even across all mathematics department courses would *not* be a valid comparison for a quantitative literacy course.

#3. *When Good Teaching is the Average*

When interpreting an instructor’s rating on a particular item, it is more appropriate to look at the descriptor corresponding to the rating, or the rating’s location along the scale, instead of comparing it to an average of ratings (Pallett, 2006). In other words, a good rating is still good, even when the numerical value falls below the average (for example, getting a 4.0 on a scale of 5 when the average is 4.2). Stark and Freishtat (2014) go even further, recommending reporting the distribution of scores, the number of responders, and the response rate, but not averages.

#4. *Written Comments*

Narrative comments are often given great consideration by administrators, but this practice is problematic. Only about 10% of students write comments (unless there is an extreme situation), and the first guideline recommends a minimum 50% response threshold. Thus, decisions should not rest on a 10% sample just because the comments were written rather than given in numerical form! Student comments can be valuable for the insights they provide into classroom practice, and they can guide further investigation or be used along with other data, but they should not be used by themselves to make decisions (Theall & Franklin, 1991, pp. 87-88).

#5. *Other Considerations*

- Class size can affect ratings. Students tend to rate instructors teaching small classes (fewer than 10 or 15 students) most highly, followed by those with 16 to 35 students and then those with over 100 students. Thus, the least favorably rated are classes with 35 to 100 students (Theall & Franklin, 1991, p. 91).
- There are disciplinary differences in ratings. Humanities courses tend to be rated more highly than those in the physical sciences (Theall & Franklin, 1991, p. 91).

Many basic, and difficult, issues related to the use of SRT for evaluating teaching effectiveness have not been addressed here, such as how to *define* “teaching effectiveness.” I hope even this limited discussion has helped make readers more aware of issues surrounding the use of SRT, and that they will sample the resources and links provided.

**References**

Arreola, R. (2007). *Developing a comprehensive faculty evaluation system: A handbook for college faculty and administrators on designing and operating a comprehensive faculty evaluation system* (3rd ed.). San Francisco: Anker Publishing.

Benton, S. L., & Cashin, W. E. (2011). *IDEA Paper No. 50: Student ratings of teaching: A summary of research and literature.* Manhattan, KS: The IDEA Center. Retrieved from http://ideaedu.org/wp-content/uploads/2014/11/idea-paper_50.pdf

Benton, S. L., & Ryalls, K. R. (2016). *IDEA Paper No. 58: Challenging misconceptions about student ratings of instruction.* Manhattan, KS: The IDEA Center. Retrieved from http://www.ideaedu.org/Portals/0/Uploads/Documents/IDEA%20Papers/IDEA%20Papers/PaperIDEA_58.pdf

Berk, R. A. (2006). *Thirteen strategies to measure college teaching*. Sterling, VA: Stylus.

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. *ScienceOpen Research*. DOI: 10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1

Chism, N. (2007). *Peer review of teaching: A sourcebook* (2nd ed). Bolton, MA: Anker.

Dewar, J. (2011). Helping stakeholders understand the limitations of SRT data: Are we doing enough? *Journal of Faculty Development, 25*(3), 40-44.

Ellis, J., Deshler, J., & Speer, N. (2016, August). How do mathematics departments evaluate their graduate teaching assistant professional development programs? Paper presented at the 40th Conference of the International Group for the Psychology of Mathematics Education, Szeged, Hungary.

Fink, L. D. (2008). Evaluating teaching: A new approach to an old problem. In D. Robertson & L. Nilson (Eds.), *To improve the academy: Vol. 26* (pp. 3-21). San Francisco, CA: Jossey-Bass.

Loeher, L. (2006, October). *An examination of research university faculty evaluation policies and practices.* Paper presented at the 31st annual meeting of the Professional and Organizational Development Network in Higher Education, Portland, OR.

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. *Innovative Higher Education, 40*(4), 291-303. DOI: 10.1007/s10755-014-9313-4

McKeachie, W. J. (2007). Good teaching makes a difference—and we know what it is. In R. P. Perry and J.C. Smart, (Eds.), *The scholarship of teaching and learning in higher education: An evidence-based approach *(pp. 457-474). New York, NY: Springer.

Pallett, W. (2006). Uses and abuses of student ratings. In P. Seldin (Ed.), *Evaluating faculty performance: A practical guide to assessing teaching*, *research, and service*. Bolton, MA: Anker Publishing.

Seldin, P. (Ed.). (1999). *Changing practices in evaluating teaching*. Bolton, MA: Anker Publishing.

Seldin, P. (2004). *The teaching portfolio: A practical guide to improved performance and promotion/tenure decisions* (3rd ed.). Bolton, MA: Anker Publishing.

Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. *ScienceOpen Research*. DOI: 10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Theall, M., & Franklin, J. (Eds.). (1991). *New directions for teaching and learning: No. 48. Effective practices for improving teaching.* San Francisco, CA: Jossey-Bass.


Over the years I have been asked the questions: Why do you direct undergraduate research? How do you pick a research problem for your students? How do you manage a research group? In this blog post I would like to present my personal points of view regarding these questions.

I have been involved in research with undergraduates since 2001. I have worked with students as part of REU programs at large research universities, at mostly undergraduate state universities, and at programs in mathematics institutes. I have also worked with small groups of local students. In 2001, I was a graduate TA at the Summer Institute of Mathematics for Undergraduates (SIMU), an REU program hosted at the University of Puerto Rico – Humacao that received the first ever Mathematics Programs that Make a Difference award from the AMS in 2006. This program fundamentally shaped my view regarding working on research with undergraduate students.

There are several resources for students to learn about undergraduate research programs. There are webpages with lists of active REU programs, some maintained by the AMS, NSF, and the Math Alliance, and also some other independent websites. The webpages of many of these programs also relate the points of view of previous students and what makes these programs successful and special. There is an MAA website that, while outdated, does a nice job of answering the question: Is an REU for You? There are also more recent articles and blogs describing the relevance of undergraduate research from the students’ own point of view.

In contrast, there is only a limited amount of information targeting faculty who are interested in leading an REU but lack experience working with undergraduate students on research. Currently, the American Institute of Mathematics (AIM) and the Institute for Computational and Experimental Research in Mathematics (ICERM) offer a one-week workshop on Research Experiences for Undergraduate Faculty (REUF). The goal of this workshop is to equip faculty at primarily undergraduate institutions with tools to engage in research with undergraduate students. In 2014, Leslie Hogben and Ulrica Wilson published an article in Involve detailing this program. The REUF program has been very successful, and I encourage anyone who wants to start advising undergraduate students in research to apply. As another example, for several years the Center for Undergraduate Research in Mathematics (CURM) has promoted academic-year undergraduate research in mathematics. This program, located at Brigham Young University, has provided training and funds to professors to establish undergraduate research groups. Some of the materials developed in this program are accessible online. For those who are interested in reading more about leading REUs, the journal PRIMUS published a Special Issue on Undergraduate Research in the Mathematical Sciences in 2013. All the articles in this issue are great reads for anyone interested in the topic. In the rest of this article, I will provide some personal reflections about leading REUs.

**Why do you direct undergraduate research?**

In my opinion, the job of a mathematician consists of learning, discovering, and disseminating mathematical knowledge. More than 60% of undergraduate mathematics degrees are awarded by colleges and universities that do not have doctoral programs. At these institutions, students may not necessarily get enough training focused on discovering and presenting mathematical knowledge. So working with students on research is an essential complement to their undergraduate education.

Some mathematicians feel that students interested in research should simply continue to graduate programs and do “real research” there. First, I think that undergraduate research can be real research, and I will talk more about this later. But as a general response to this perspective, I offer an analogy. When a child starts to learn how to ride a bike, they go in stages. First, they have training wheels, then the wheels are removed but the very concerned adults remain jogging right next to the kids to catch them in case they are about to fall, and finally the stage arrives when the kids ride free and unassisted. The second stage is very short but important in building confidence and self-assurance. Undergraduate research plays the role of this second stage. In our classes, students learn mathematics using training wheels. The problems are not too difficult and they have all the tools that they need to solve the problem. Undergraduate research is a short experience where the safety net is removed, students explore their capabilities, but the faculty is nearby to make sure that students do not fall or, if they do, to encourage them to get up and continue working.

I also believe that it is important to mentor students from a diverse range of backgrounds and demographics. Doing research with students who are ready and prepared to do research is very exciting. Doing research with students who will greatly benefit from having companionship, guidance, and mentoring is just as fulfilling. For this reason, I have focused my attention on first-generation students and other groups underrepresented in the STEM sciences. At the end of the day, these students have produced mathematics that is just as beautiful and significant as that of any “top students.” But this group usually lacks knowledge about graduate school and the diverse jobs outside of academia that require advanced degrees, and they tend to be more aware of their perceived mathematical limitations. They also have to fight against preconceived notions in their families and society about their future careers.

**How do you pick a research problem?**

A research problem must be carefully selected to provide an honest “real research” experience for the student that is also meaningful and productive. I do not think that the ultimate product must necessarily be a research paper, but at the end of the program a student should be able to pinpoint some specific contribution to the subject that is entirely their own. I have always been open to both concrete problems and open-ended investigations that start without a clear target. The main thing is to pick a problem that is flexible enough to be adjusted to students’ needs so that there can be a successful outcome at the end of the experience. Partial results, conjectures, databases of non-trivial computations, or even a detailed report on the pitfalls of a certain approach are all great examples of positive outcomes.

Yet how does one find such problems? It is usually difficult to find problems that satisfy the above constraints and also fit within the faculty member’s research program, so one has to be willing to expand the search. In my opinion, there are three main sources: articles, talks, and conversations. Read undergraduate research journals like Involve, Principia, the Rose-Hulman Undergraduate Mathematics Journal, SIURO, and the Minnesota Journal of Undergraduate Mathematics; CURM maintains a more complete list of undergraduate math journals. Travel to conferences like the Joint Mathematics Meetings, MathFest, Field of Dreams, SACNAS, or the NSF Mathematics Institutes’ Modern Math Workshop (at SACNAS); most of these conferences have poster and/or talk sessions devoted to undergraduate research. And talk to colleagues or presenters at conferences or workshops.

Through the years, I have used a combination of the three activities detailed above. I usually write a note on the main area of research, a certain open problem, and some references. I then read a couple of introductory papers, write a short introduction to the problem, and perform some computations. At the end, I have a 4-5 page self-contained note that I can use to remember the problem or give to a group of students. Some of these notes get refined over the years as students work on aspects of the problem or discover new avenues to pursue.

**How do you manage a research group?**

First, I always start a collaboration with a crash course on the subject. My goal is to provide all the information the students need to understand the given problem and be able to do some experiments. I do this to cover the background material in the shortest possible time, but also to establish a relationship with the students. Once students start working on their problem, I meet with them every day. In a short 15-20 minute meeting, each group presents the advances and challenges of the previous day. Only one student in the group presents on a given day, and that student must discuss the advances of the entire group. Students rotate through the week, and at the end of the week they give a Beamer presentation on their weekly advances. After this presentation, I discuss the goals for the weekend and the following week, as well as improvements to their presentation and their report. I have found that at the end of the program, it is much easier to compile the partial reports into the final report and the weekly presentations into a poster or a final talk.

As I mentioned above, the experience must be a real research experience. So I listen to the students, point them to useful references, and give them suggestions. But mostly I act as a cheerleader. Returning to the bike analogy, I am no longer close enough to catch them when they fall, but I am always right there to cheer them on and keep them looking at the road ahead.

**Final Comments**

Undergraduate research is usually a short-term event, but faculty involvement is a long race. It takes time to find good problems. It takes time to learn how to interact with students in ways that improve their abilities and confidence while making sure that they retain ownership of their own work. It takes time to find funding to support this activity. And despite all the time it takes, one reaches only a few students each year. Even so, for me it has been one of the most rewarding activities I have been involved in.

Several years ago, I took up running. At first, I wasn’t particularly good at it, but I persisted: about two or three times each week, I would go for a jog, increasing my pace or distance in small increments. This measurable growth in my running ability and physical fitness was a great motivator for me, and I increased the frequency of my workouts. After about a year, I was able to complete a local 5K race; this remains among the proudest achievements of my life to date. This was the most authentic experience I’ve had of putting sustained effort into a domain in which I had little natural ability, observing my own growth, and working toward a specific, achievable goal. I attribute my success to two factors:

- I didn’t measure my own performance against others’. I knew that many people were more accomplished at running than I was when I got started. I set this thought aside and enjoyed the fresh air and the feel of the pavement under my feet.
- I took notice of any growth in my distance or speed, no matter how small. I took pleasure in being able to observe so many improvements in such a short time.

I have often wondered how I can create a similar experience for students in my mathematics classes, especially for those students who lack confidence in their mathematical knowledge and skills. These are the students who are in danger of developing the mindset that the sustained effort they need to master challenging topics indicates that they are not qualified for advanced study in mathematics. Therefore, one goal of every class I teach is to help students let go of concerns about how they are performing relative to their peers, and enjoy observing their own growth and learning. In his September 2015 article in this blog, Benjamin Braun described some of the mindset interventions he uses to help focus students’ attention on their mathematical growth. In this article, I’ll describe how the recent work on growth mindset has influenced assessment practices in my own courses.

**Mathematical mindsets**

In her research, Carol Dweck describes implicit theories of intellectual and social traits that influence how and whether people choose to invest effort in developing skills (see, for example, Dweck, 2008). Dweck uses the term *entity theory *to refer to the idea that traits such as mathematical skill are innate, and that adversity and failure are indications that one does not possess these traits. She uses the term *incremental theory* to refer to the idea that traits such as mathematical skill are malleable and can be developed through sustained effort. Students who have an entity theory of mathematical intelligence often demonstrate a “fixed mindset” in mathematics classes, interpreting challenges as opportunities to display their innate abilities in mathematics, or as threats to their mathematical identity. On the other hand, students who have an incremental theory often demonstrate a “growth mindset,” embracing challenging and open-ended tasks as opportunities to discover and develop new ideas.

Jo Boaler’s recent book *Mathematical Mindsets* (2015) provides a wealth of advice on how to structure mathematics instruction to promote the development of growth mindsets. She recommends practices that recent research has proven successful, such as the use of low-threshold-high-ceiling problems that are accessible to all students but require extended effort to solve completely, and strategies for managing groupwork that are consistent with Complex Instruction (Cohen *et al.*, 1999).

**Integrating the growth mindset into assessment and grading**

I have taken some steps to reframe assessment and grading in my courses as a way of stimulating growth and providing guidance for learning, rather than rewarding success or punishing failure.

*Specifications grading.* Specifications (specs) grading (Nilson, 2015) is a system in which students earn course grades by meeting a set of clearly defined criteria rather than by achieving a certain weighted average across exams, homework, and other assignments. I now use specs grading in all of my courses; in most cases, to earn a grade of A, students must pass exams with specified scores, give a successful presentation in class, and earn a passing score on homework problem sets. I also include class attendance and participation in my specs grading scheme; since I started doing this, I have had over 90% attendance in my classes. I have an “exception clause” in which students who fall short of a standard in one category can compensate by exceeding standards in another category. This provides flexibility and sets the tone that there are many ways to demonstrate mastery. In November 2015, Kate Owens wrote about a similar system called standards-based grading (SBG); while SBG is generally organized around learning goals rather than assignment types, both systems have the essential feature of providing opportunities for students to deepen their own mastery of course content.
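To make the structure of such a scheme concrete, here is a minimal sketch in Python. The categories, cutoffs, and the margin used for “exceeding” a standard are all hypothetical illustrations, not the author’s actual rubric:

```python
# Hypothetical specs-grading check for a grade of A. Each category is
# judged pass/fail against its specification; the "exception clause"
# lets one shortfall be offset by clearly exceeding another standard.

SPECS_FOR_A = {"exams": 85, "presentation": 80, "homework": 80, "participation": 90}
EXCEED_MARGIN = 10  # how far above a cutoff counts as "exceeding" (assumed)

def earns_a(scores: dict) -> bool:
    shortfalls = [c for c, cut in SPECS_FOR_A.items() if scores.get(c, 0) < cut]
    exceeded = [c for c, cut in SPECS_FOR_A.items()
                if scores.get(c, 0) >= cut + EXCEED_MARGIN]
    # All specs met, or exactly one shortfall offset by exceeding elsewhere.
    return not shortfalls or (len(shortfalls) == 1 and bool(exceeded))
```

The point of the sketch is that the grade comes from meeting discrete specifications rather than from a weighted average, so no amount of strength in one category can silently paper over failure in several others.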

I’ve found that specs grading provides greater clarity for students; at the end of the semester, there is little mystery about what students must do in order to earn a desired grade. The grading scheme also allows me to be serious about things that matter: since I’ve adopted this system, I’ve never had to award a course grade of B to a student who did extremely well on homework (by getting external assistance) but did not demonstrate any mastery of the course material on exams.

*Revision policy.* I knew that if I implemented a specifications grading scheme and did nothing else, I would only end up being stingier with grades. I wanted to reshape my course policy into one that embraces mistakes as opportunities for growth and learning. Therefore, I have the policy that any written homework assignment in my course can be revised. Students get constructive feedback on problem sets; if they read the feedback and submit revisions, I replace the old grade with a better one. I used to impose a nominal penalty (say, one point out of ten) on revisions, but I stopped doing this because I could no longer defend a practice that punished students for making mistakes. The revision policy allows me to be much more consistent in holding students accountable for producing high-quality work. This does not cause too great an increase in my overall grading load, because students’ revised work is usually of higher quality and therefore easier to grade.

*Exam scores as “work in progress.”* Exams have a way of bringing students’ sense of non-belonging in mathematics into sharp relief. I try to manage exams in my classes in ways that encourage growth and do not position students as competing with one another. First, I set cut scores for each exam based on the difficulty of the test itself, not on a “curve” or on how the class happens to perform. I don’t use a 90-80-70-60 scale to interpret exam scores; there is nothing mathematically natural about this scale (Reeves, 2006), and it offers little hope for students who earn a score in the 20s or 30s on a test.

I make it clear that exam scores are “in progress” until the end of the semester, as each student earns a number of “extra lives” that can be used to retry exam questions at the end of the course. Students earn “extra lives” by doing things that will help them succeed in the course, such as completing the homework, doing practice problems, and completing short “Lesson Launch” assignments in which they watch video examples prior to class and write summaries (as in some “flipped” instruction models). On the last day of class, students take a customized test with questions covering topics on which they didn’t demonstrate mastery during the midterm exams. Their scores on these questions replace their old midterm question scores.
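The bookkeeping behind this replacement policy can be sketched as follows; the data shapes and the one-life-per-question rule are my own illustrative assumptions, not the author's exact implementation.

```python
# Illustrative sketch of the "extra lives" retake bookkeeping.
# One extra life is assumed to buy one retried exam question.

def apply_retakes(midterm_scores, retake_scores, extra_lives):
    """Replace old midterm question scores with end-of-semester retake scores.

    midterm_scores, retake_scores: dicts mapping question id -> score.
    Retake scores *replace* the old ones; they are not averaged in.
    """
    final = dict(midterm_scores)
    for question, new_score in retake_scores.items():
        if extra_lives <= 0:
            break  # no lives left: remaining questions keep their old scores
        final[question] = new_score
        extra_lives -= 1
    return final
```

The key design choice the sketch highlights is replacement rather than averaging: a student's final record reflects what they eventually mastered, not a penalty-weighted history of early attempts.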

Finally, I try to make sure my own messaging about exams is consistent. After each exam, I send a brief e-mail summarizing a few places on the exam where I thought the class as a whole did well, and reinforcing the “growth mindset” message. My most recent e-mail contained the following:

I don’t make it a practice to give class-wide statistics from the exam … My purpose in giving exams (as with all the other work) is to give you opportunities to discover where your knowledge is already strong, and where you still have room to grow. I’d sooner see you spend your energy and effort on learning the material you haven’t mastered yet, rather than positioning yourself with respect to your classmates. My view is that this class has 19 terrific mathematical thinkers, and your current score on this exam is an indicator of your current level of mastery of this material, not of how smart you are in mathematics. I say “current score” because as you know, under the Extra Life system your score on this exam may well improve at the end of the semester if you do a good job of learning the material you haven’t mastered yet.

**Impact on students**

Students in my courses seem to appreciate the various opportunities to revisit and improve their work. In a typical semester, I will receive homework revisions from about 75% of my students, with some students submitting revisions for as many as 50% of the problems. The revised solutions that students submit are usually substantial improvements; the majority of revisions earn at least two additional points on a five-point scale.

I asked students in my Fall 2016 capstone course for preservice secondary teachers for feedback on how the course policy influenced their learning and their identity as mathematics learners. One student responded,

A huge benefit was that we could correct our assessments after being graded. It made us go back and actually think about every problem and how we could correct it. You gave great feedback and showed that you were willing to help us.

Another student commented not only on the revision policy, but on the overall tone of collaboration and personal growth it set:

Feedback – Fantastic. No other course has allowed me to continuously correct my work. Although grading must be time consuming, it’s greatly appreciated.

Engagement/Involvement/Interpersonal Connection – I felt a unique atmosphere of collaboration in MAT 4303. The trichotomy of student, instructor, and group motivated me to consistently work to the best of my ability. This is especially true in homework. In most courses, if I have an imperfect solution to a problem that I can’t resolve, it remains imperfect. This was not true for MAT 4303.

Above all else, MAT 4303 helped me to mature as a student and take a collaborative, selfless approach to courses. I enjoyed helping other students learn just as much as I enjoyed learning.

**What’s next?**

In the future, I hope to study these growth mindset assessment practices more formally and investigate their effects on students’ learning and mathematical identity. Every program in which I have taught has wanted students to be more confident in their potential to solve challenging problems and more motivated to pursue learning opportunities on their own. I believe that an indispensable step in this direction is to help students develop the mindset that even when initial efforts fail, their hard work will result in powerful and long-lasting intellectual growth.

**Acknowledgement:** I would like to thank my colleague Dr. Priya V. Prasad, who uses a similar system in her courses at UTSA and whose feedback led to improvements in my own implementation.

**References**

Aronson, J., Fried, C. B., & Good, C. (2002). Reducing the effects of stereotype threat on African American college students by shaping theories of intelligence. *Journal of Experimental Social Psychology*, *38*(2), 113-125.

Blackwell, L. S., Trzesniewski, K. H., & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. *Child Development*, *78*(1), 246-263.

Boaler, J. (2015). *Mathematical mindsets: Unleashing students’ potential through creative math, inspiring messages and innovative teaching*. John Wiley & Sons.

Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. *Journal of Educational Psychology*, *79*(4), 474.

Cohen, E. G., Lotan, R. A., Scarloss, B. A., & Arellano, A. R. (1999). Complex instruction: Equity in cooperative learning classrooms. *Theory Into Practice*, *38*(2), 80-86.

Dweck, C. S. (2008). *Mindset: The new psychology of success*. Random House Digital, Inc.

Good, C., Rattan, A., & Dweck, C. S. (2012). Why do women opt out? Sense of belonging and women’s representation in mathematics. *Journal of Personality and Social Psychology*, *102*(4), 700.

Leslie, S. J., Cimpian, A., Meyer, M., & Freeland, E. (2015). Expectations of brilliance underlie gender distributions across academic disciplines. *Science*, *347*(6219), 262-265.

Nilson, L. (2015). *Specifications grading: Restoring rigor, motivating students, and saving faculty time*. Stylus Publishing, LLC.

Reeves, D. B. (2006). *The learning leader: How to focus school improvement for better results*. ASCD.

What teaching practices support a diverse student body in your mathematics classroom? In this post, I suggest six concrete teaching practices you can implement today to help make your classroom a more inclusive environment for your students:

- Use students’ interest in contextualized tasks
- Expose students to a diverse group of mathematicians
- Design assessments and assignments with a variety of response types
- Use systematic grading and participation methods
- Consider your course logistics
- Encourage students to embrace a growth mindset

I hope these strategies can spark conversation with colleagues on how we, as educators, can support a diverse and inclusive mathematics classroom.

**1. Use students’ interest in contextualized tasks.**

What communities and interests are represented in the problems you assign students? How do these backgrounds align with those of your students? Researchers have shown that students are more motivated to engage with material when it connects to their own interests and communities (Carlone & Johnson, 2007; Jones, Howe, & Rua, 2000). To identify your students’ interests, consider giving them a survey asking about their hobbies, motivations for taking the course, and career goals (for example, a form I made for my own students is available here). Use what you learn about your students when framing mathematical tasks and problems, and consider whether the tasks you assign represent *all* of the interests in your classroom and which students might be left out.

In a traditional calculus course, for example, a common topic is related rates. Some typical related rates problems involve falling ladders, hemispherical reservoirs, and cars and trucks and things that go. These applications may be well suited to students with those interests, or with interests in certain types of engineering. However, exposing students exclusively to such applications signals to students who do not share those interests that mathematics is not relevant to them. Consider diversifying such tasks and (depending on the interests of your students) including applications to medicine, biology, conservation, music, baking, etc. Here are a few suggestions I have used with my own students.

*Chris and Jake are **cooking** pancakes. Jake ladles the pancake batter into the fry pan. While the pancake cooks, the radius of the circular pancake formed increases at a rate of 1 cm per minute. How fast is the circumference changing when the radius is 7 cm?*
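A quick worked solution (my addition, not part of the original problem set): since the circumference is \(C = 2\pi r\),

\[ \frac{dC}{dt} = 2\pi \frac{dr}{dt} = 2\pi \ \text{cm/min} \approx 6.28 \ \text{cm/min}, \]

independent of the current radius; the “7 cm” turns out to be extra information, which can itself spark a good class discussion.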

*At a **conservation** site in the Amazon rainforest, a hyacinth macaw **parrot** is spotted flying horizontally 37 feet above a research site. The parrot is flying at 20 ft/sec. How fast is the straight-line distance from the parrot to the research site changing when the parrot is 35 feet (measured horizontally) from the site?*
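A solution sketch (my addition; note that the straight-line distance can never be less than the 37-foot altitude, so the “35 feet” must be the horizontal distance): let \(x\) be the horizontal distance from the parrot to the site and \(s=\sqrt{x^2+37^2}\) the straight-line distance. Then

\[ s^2 = x^2 + 37^2 \quad\Longrightarrow\quad \frac{ds}{dt} = \frac{x}{s}\,\frac{dx}{dt}, \]

so at \(x = 35\) we have \(s = \sqrt{35^2 + 37^2} = \sqrt{2594} \approx 50.9\) ft, giving \(\frac{ds}{dt} \approx \frac{35}{50.9}(20) \approx 13.7\) ft/sec (positive if the parrot is flying away from the site).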

*The **velocity of blood** in a human’s blood vessels is related to the radius \(R\) of the blood vessel and the radius \(r\) of the layer of blood in the blood vessel. This relationship, known as Poiseuille’s law, is given by \(v=375(R^2-r^2)\). Assume the radius of the layer of blood \(r\) is constant, but cold weather causes the radius \(R\) of the blood vessel to contract at a rate of 0.01 mm per minute. How fast is the velocity of the blood changing when the radius \(R\) of the blood vessel has contracted to 0.03 mm?*
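Since the problem supplies \(dR/dt\), the quantity of interest is the rate of change of \(v\); a worked sketch (my addition): with \(r\) constant,

\[ \frac{dv}{dt} = 375 \cdot 2R\,\frac{dR}{dt} = 750\,(0.03)(-0.01) = -0.225, \]

so the velocity is decreasing at \(0.225\) (in the units of \(v\)) per minute at the moment when \(R = 0.03\) mm.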

**2. Expose students to a diverse group of mathematicians.**

Who are the mathematicians you tell your students about? Are they white, male, and introverted? These common stereotypes make students who do not identify with such qualities feel they do not belong in mathematics (Carlone & Johnson, 2007; Cheryan & Plaut, 2010; Good, Aronson, & Harder, 2008; Thoman, Arizaga, Smith, Story, & Soncuya, 2014). Diversify your students’ image of mathematicians by highlighting mathematicians who do not fit the typical stereotype. Describe mathematicians as multidimensional individuals with struggles, hobbies, and families. Communicating short biographies to students and showing students pictures of mathematicians from underrepresented groups are great ways to do this. If students are able to see mathematicians as genuine individuals, they are more able to identify with them and see themselves in mathematics. For resources to increase your own exposure to individuals in the mathematics community, consider perusing books and websites that highlight important contributions from women or individuals from underrepresented ethnicities in the field. For example, check out the Mathematically Gifted and Black website, recent articles such as Lathisms: Latin@s and Hispanics in Mathematical Sciences and The Black Female Mathematicians Who Sent Astronauts to Space, or books such as Kenschaft, 2005 and Murray, 2000.

Communicate stories of mathematicians to students while engaging with the mathematical contributions those individuals have made. For example, in a calculus course, consider discussing with students the mathematics related to the curve known as the “Witch of Agnesi” (the name comes from a mistranslation: the Italian *la versiera*, “the turning curve,” was confused with *avversiera*, “witch”), shown in Figure 1 from Weiqing Gu’s website. This curve was studied in the calculus textbook *Analytical Institutions*, written by the Italian mathematician Maria Gaetana Agnesi (1718-1799). Agnesi published this text at the age of thirty; she began writing the book at age twenty, originally as a resource for her brothers (Osen, 1975). The curve can be constructed by tracing the points \(P\) obtained from the \(x\) (horizontal) and \(y\) (vertical) coordinates of the points \(A\) and \(Q\) (respectively) in Figure 1 below. The curve can be given parametrically as \(x(t)=2a \cot(t)\) and \(y(t)=a[1-\cos(2t)]\) (for \(0 < t < \pi\) and a suitable positive constant \(a\)). Activities related to this curve could have students construct the parametric equations (from a more suitable description of the curve) or derive an equation of the tangent line at any point \(P\) on the curve (see MathForum for a construction of the curve).
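As a quick check (my addition), the parameter can be eliminated from these equations to recover the curve’s familiar Cartesian form: from \(x = 2a\cot(t)\),

\[ \sin^2(t) = \frac{1}{1+\cot^2(t)} = \frac{4a^2}{4a^2 + x^2}, \]

and since \(y = a[1-\cos(2t)] = 2a\sin^2(t)\), substituting gives

\[ y = \frac{8a^3}{x^2 + 4a^2}. \]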

**3. Design assessments and assignments with a variety of response types.**

We as mathematics instructors have been successful mathematics students. Thus, many of us have likely found success with traditional mathematics assessments in traditional settings. However, not all students succeed in such environments. Create and structure assignments to include a variety of types of problems as well as settings. For example, consider including problems that ask students to write long responses to explain their thinking or draw a visual to demonstrate an argument. Vary the test environment by allowing students to work in groups or give a take-home assessment in order to give students flexibility in the amount of time for completion. Consider allowing students to retest. This strategy has been shown to provide students who experience math anxiety with a mental “safety net” that can help alleviate some of the pressures involved in testing and improve their test performance (Juhler, Rech, From, & Brogan, 1998). If you are not able to vary the assessments in your course (possibly due to departmental or other constraints), consider using these suggestions in class assignments or quizzes.

In my own courses, I encourage students to express and develop their thinking outside of class through something I call “Try it” opportunities. These have included responding to open-ended questions I post on our course discussion board, posting practice test solutions, or responding to fellow students’ practice test solutions. I also encourage students to tweet class summaries to my professional Twitter account, bring to class a picture or news article showing a math concept we have discussed that they see in their own lives, or take a photo of their math study group. The following is an example of an open-ended question I posted on the discussion board as a “Try it” opportunity in a recent geometry course for teachers.

*Discuss the following two statements: “two triangles put together always make a square” and “a square cut in half always forms two triangles.” Are these statements true? Why might a student think this? What would you tell this student?*

The question provided students an opportunity to think about their own conceptions of triangles and shapes before we formally discussed the topic in the course. The discussion-board setting gave students flexibility in the timing of their responses and an opportunity to reflect on other students’ thinking.

**4. Use systematic grading and participation methods.**

Who are the students that you expect to succeed in your course? Who are the students whose contributions you encourage in class? Teachers often have expectations and judgments of different groups of students based on student identity (Anderson-Clark, Green, & Henley, 2008; Riegle-Crumb & Humphries, 2012; Van den Bergh, Denessen, Hornstra, Voeten, & Holland, 2010). It has also been reported that teachers provide a “warmer” academic climate to students for whom they hold higher expectations, in the form of in-class interactions and assignment feedback (Rosenthal, 2002). Such treatment has positive effects on student performance.

Attempt to hold all of your students to the same high standard. Consider implementing systematic methods of eliciting student participation and of grading. Keep a record of which students participate in your class and make an effort to draw contributions from all students. When grading, use a rubric to evaluate student work. After grading, look over the comments and feedback you give your students. Do all students receive feedback of similar depth and specificity? Consider having a colleague who is unfamiliar with the identities of your students look over a sample of the work you have graded and provide *you* feedback on the types of responses you give to your students.

**5. Consider your course logistics.**

*Office Hours. *What time do you host office hours? Are they immediately after class when a student might have to rush off to work in the campus cafeteria? Or early in the morning when a student might be commuting into campus? Another useful item for a pre-semester survey is a question about the best times for office hours.

*Deadlines. *When are your assignments due? What obligations do your students have outside of your course? Requiring students to turn in a homework set to your office door by 5PM might not be doable for a student who has to work until 6PM. Having an online homework set due on Sunday evening might not be feasible for a student without access to a computer on the weekends.

*Technology. *What technologies do your assignments require? If your department requires online quizzes or homework, is there technology on campus that students can use to complete these assignments? Know when such resources are available to students and be sure your students know as well.

**6. Encourage students to embrace a growth mindset.**

Carol Dweck’s popular work shows that individuals’ mind-sets regarding intelligence can influence their academic motivation and performance (Dweck, 2008). Dweck describes students with a fixed mindset as viewing intelligence as static, while students with a growth mindset believe intelligence can be developed; the latter students are better able to persist in the face of challenges and setbacks, and to grow in the process.

Remind students that mistakes are an essential part of learning and a vehicle for growth. Provide feedback on students’ strategies and reasoning, rather than just their answers. Celebrate students’ effort and persistence and avoid praising a student for getting an answer quickly. Treat exams as an opportunity for students to demonstrate their effort and understanding rather than their intelligence and ability. Allow students to engage in productive failure by providing limited scaffolding and challenging students to collaborate with each other (Kapur & Bielaczyc, 2012).

In my own classes, I begin the semester by asking students to watch and write a short reflection on the TED talk by Eduardo Briceño on “Mindsets and Success.” My students’ responses suggest they find the talk encouraging; many express a shift from believing they are “not good at mathematics” to believing they are “not good at mathematics *yet*.” I then emphasize to students that they *are* already mathematical thinkers, and that with persistence and effort they can find success in our mathematics course.

I hope these strategies invite you to reflect on your teaching practices and consider the influence we can have in creating inclusive classrooms that support diversity. As a final recommendation, I hope this post can start or continue conversations with colleagues on the topic of diversity and inclusion in mathematics. Having a community to discuss and develop the ways we teach and interact with students is essential for making such efforts lasting and productive.

**Acknowledgements:** I would like to express my gratitude to Laura Provolt, Debbie R. Hale, and Dr. Kecia M. Thomas for their helpful feedback and many insightful discussions.

**References**

Anderson-Clark, T. N., Green, R. J., & Henley, T. B. (2008). The relationship between first names and teacher expectations for achievement motivation. *Journal of Language and Social Psychology, 27*(1), 94-99.

Carlone, H. B., & Johnson, A. (2007). Understanding the science experiences of successful women of color: Science identity as an analytic lens. *Journal of research in science teaching, 44*(8), 1187-1218.

Cheryan, S., & Plaut, V. C. (2010). Explaining underrepresentation: A theory of precluded interest. *Sex roles, 63*(7-8), 475-488.

Dweck, C. S. (2008). *Mindset: The new psychology of success*. Random House Digital, Inc.

Good, C., Aronson, J., & Harder, J. A. (2008). Problems in the pipeline: Stereotype threat and women’s achievement in high-level math courses. *Journal of Applied Developmental Psychology, 29*(1), 17-28.

Jones, M. G., Howe, A., & Rua, M. J. (2000). Gender differences in students’ experiences, interests, and attitudes toward science and scientists. *Science education, 84*(2), 180-192.

Juhler, S. M., Rech, J. F., From, S. G., & Brogan, M. M. (1998). The effect of optional retesting on college students’ achievement in an individualized algebra course. *The Journal of experimental education, 66*(2), 125-137.

Kapur, M., & Bielaczyc, K. (2012). Designing for Productive Failure. *Journal of the Learning Sciences, *21(1), 45-83.

Kenschaft, P. C. (2005). *Change is possible: Stories of women and minorities in mathematics*. American Mathematical Society.

Lopez, A. D., Sosa, G., Langarica, A. P., & Harris, P. E. (2016). Lathisms: Latin@s and Hispanics in the Mathematical Sciences. *Notices of the American Mathematical Society, 63*(9), 1019-1022.

Murray, M. A. M. (2000). *Women Becoming Mathematicians: Creating a Professional Identity in Post-World War II America*. Cambridge, MA: MIT Press.

Osen, L. M. (1975). *Women in mathematics*. MIT Press.

Riegle-Crumb, C., & Humphries, M. (2012). Exploring bias in math teachers’ perceptions of students’ ability by gender and race/ethnicity. *Gender & Society, 26*(2), 290-322.

Rosenthal, R. (2002). Covert communication in classrooms, clinics, courtrooms, and cubicles. *American Psychologist, 57*(11), 839.

Thoman, D. B., Arizaga, J. A., Smith, J. L., Story, T. S., & Soncuya, G. (2014). The Grass Is Greener in Non-Science, Technology, Engineering, and Math Classes: Examining the Role of Competing Belonging to Undergraduate Women’s Vulnerability to Being Pulled Away From Science. *Psychology of Women Quarterly, 38*(2), 246-258.

Van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. *American Educational Research Journal, 47*(2), 497-527.

Whitney, A. K. (2015). The Black Female Mathematicians Who Sent Astronauts to Space. Retrieved 2017, from http://mentalfloss.com/article/71576/black-female-mathematicians-who-sent-astronauts-space