By Gavin LaRose, University of Michigan
It could be the punchline of a joke that at any given college or university, at some point, the administration will lean on departments to be more “efficient” by teaching classes in larger sections, or online, or with some technology or another. By the metric of student credit hour to faculty work hour, of course, large lectures are tremendously efficient, and scale admirably. One may argue that there is little difference between an instructor lecturing to 100 or to 200 students, and little difference between an instructor rendered small by the distance to the front of a large lecture hall and one rendered small in the pixels of a video screen. This is the Massive Open Online Course (MOOC) model, which extends this efficiency of scale from 200 to 20,000. Anecdotally, the MOOC tide seems to be receding, but the pressures that argue for this efficiency are not going away. Many departments are being asked to teach, with fewer resources and greater accountability, more students whose mathematical preparation is weaker than in the past [2].
The difficulty here is that the student credit hour metric is easy to measure, while student learning is not. Research says that our efficient passive lecture does not result in the student learning gains we can see with more active teaching techniques [6,7,8]. Indeed, through the Conference Board of the Mathematical Sciences, the presidents of fifteen professional societies in the mathematical sciences have recognized this conclusion and endorsed the use of active teaching methods [4]. But neither the research nor the endorsement provides us with a simple, usable measure by which to demonstrate the effectiveness of these techniques. So by easily applied metrics our endeavor of teaching remains an inefficient one, and I increasingly think one that is inevitably inefficient by even more measures.
This is probably true for all education, but there is a case to be made that it is particularly true for mathematics. For over twenty years, we have watched an accelerating rush to calculus in American high schools [3], and I recall a discussion with Project NExT [9] Fellows in about 1995 in which the observation was made that as calculus is pushed down into the high school curriculum, algebra is pushed up into college. But it feels worse than a one-to-one exchange, because while calculus is, more-or-less, well-defined―though this characterization deserves further evaluation―what bubbles up is not.
As a result, our efficient lecture, in which all students are assumed equally well-prepared and equally well-served by a uniform delivery method, becomes even more poorly suited to reach our students. We are stuck, again, with our most effective―and perhaps only effective―remediation being inherently inefficient: we need the instruction for these students to be individually responsive, not broadly scalable. Further, mathematics is not a field in which students’ understanding is built up from only a small number of physical laws. Thus, the responsive diagnosis and remediation must be nimble and able to evaluate individual students daily, as we navigate in class a varying mathematical terrain that requires varying prerequisite knowledge. As a discipline, the thinking and logic we demand of all practitioners, students included, are those of a science, but in its interconnected myriad details our subject may be more akin to the humanities. (The need for languages to be taught in small sections is rarely questioned; perhaps we may argue that the mathematics education research is really demanding the same for mathematics.)
So to be effective instructors, especially in our present environment, we may claim that we are inevitably inefficient. But I think that this premise extends beyond the classroom, infiltrating even the systemic support that effective instruction requires.
I wrote above that calculus is, more-or-less, well-defined, and I think this is true. Comparing two arguably different calculus textbooks―Stewart’s Calculus and the text of the same name by Hughes Hallett, et al.―reveals 28 sections that cover essentially the same material and only four that are demonstrably different in mathematical content. But calculus courses themselves are, by dint of institutional constraints (student preparation and needs foremost among them), far less uniform, and this difference in courses between institutions gets only more pronounced as we move on from calculus. To some degree this has limited import when students stay at one institution through graduation, but that is becoming less and less the case. The push, and need, for greater affordability of higher education means that increasing numbers of students may be transferring between colleges (especially two-year colleges) and universities. Thus there are increasing numbers of students entering our classrooms who have taken courses at other institutions, and if they have unexpected gaps in their background knowledge, instructors need the time and contact to diagnose and remediate them. For this to be at all a feasible undertaking, we need first to ensure that students are in the right course in the first place, which requires a “diagnosis” of the courses with which our students arrive. I think that this diagnosis is also one that is necessarily inefficient. It is not one that someone without knowledge of the subject can do reliably, and thus for it to be done well we must use expert faculty time to do it. This use of faculty time is also not efficient by any standard business model: we are using the most highly trained employees in the organization to evaluate these course requests on a student-by-student basis.
I won’t argue that this is the only way to evaluate transfer credit, but I think it is an effective way to do it, even as it is by some metrics inefficient. I’ve evaluated perhaps 200 courses for equivalency to courses at the University of Michigan in the past year. And while this doesn’t show up in the list of teaching duties that I have performed in that time, I believe it to be a service that allows our teaching to be effective.
I watched a similar, and similarly inefficient, evaluation unfold this summer in the course of several meetings of the faculty who are most directly involved in the administration of our Introductory Program (loosely, our course preceding calculus, calculus I, and calculus II). The University has a summer program to promote diversity in STEM subjects, and asked if we could designate some sections of calculus I for those students so that they could enroll in calculus with other members of their cohort, and in sections taught by instructors the students already know. From the perspective of this program these are obviously desirable outcomes. However, the arrangement also has the potential to isolate students in those sections who are not part of the cohort, and it requires that the Introductory Program Directors establish these sections far enough in advance to make it possible. Both of these are drawbacks, and the potential isolation is a significant concern for the learning of the other students in those sections. As a result, this evaluation was not a straightforward one, and it unfolded over the course of two or three meetings of the five to seven people involved in these decisions. There are perhaps 18 students involved in this program for the fall. To me this seems to fall in the category of inefficient processes, at least as measured by the time spent by the decision makers.
At any given college or university that is trying to do a good job in teaching mathematics, I suspect there will be similarly inefficient systems supporting the inefficient work of the mathematics teaching itself. They may not―arguably, will not―involve the tasks I’ve picked here. Because of the nearly infinite variation in the colleges and universities across the country and world, the systemic challenges at each will be correspondingly different and varied. But because the difficulty of dealing with differently prepared students is the one thing we are certain will be constant in all of these environments, it’s hard to imagine the systemic issues will ever be absent.
It is said of at least some theoretical mathematicians that they are proud that their chosen studies do not have (visible) practical applications. I think that we as mathematicians who are concerned with the effectiveness of our institutions at educating students in our chosen field should perhaps be similarly proud of our inefficiency by practical measures. Active learning really is better, and is better done on a scale at which there is significant student-faculty interaction [5]. And our students learn best when the systemic support for these active learning classrooms allows them to operate at their best. Insofar as these are inefficient, these inefficiencies are inevitable. Thus they are also an essential characteristic of the effective teaching of mathematics.
[1] ALEKS. (2016). Accessed Aug. 31, 2016.
[2] Bressoud, D. (2015). Calculus at Crisis I: The Pressures. Launchings (May 1, 2015). Accessed Jul. 7, 2016.
[3] Bressoud, D. (2015). Calculus at Crisis II: The Rush to Calculus. Launchings (Jun. 1, 2015). Accessed May 16, 2016.
[4] CBMS Statement on Active Learning in Post-Secondary Mathematics Education (Jul. 15, 2016). Accessed Aug. 24, 2016.
[5] Chickering, A.W. and Z.F. Gamson (1991). Seven Principles for Good Practice in Undergraduate Education. New Directions for Teaching and Learning #47. Jossey-Bass.
[6] Freeman, S., et al. (2014). Active Learning Increases Student Performance in Science, Engineering and Mathematics. Proceedings of the National Academy of Sciences, 111(23):8410–8415.
[7] Kogan, M. and S.L. Laursen (2014). Assessing Long-term Effects of Inquiry-based Learning: A Case Study from College Mathematics. Innovative Higher Education, 39(3):183–199.
[8] Laursen, S.L., M.L. Hassi, M. Kogan and T. Weston (2014). Benefits for Women and Men of Inquiry-based Learning in College Mathematics: A Multi-institutional Study. Journal for Research in Mathematics Education, 45(4):406–418.
[9] Project NExT. Accessed Aug. 24, 2016.