By Benjamin Braun, Editor-in-Chief, University of Kentucky; Priscilla Bremser, Contributing Editor, Middlebury College; Art Duval, Contributing Editor, University of Texas at El Paso; Elise Lockwood, Contributing Editor, Oregon State University; and Diana White, Contributing Editor, University of Colorado Denver.
Editor’s note: This is the second article in a series devoted to active learning in mathematics courses. The other articles in the series can be found here.
Mathematics faculty are well aware that students face challenges when encountering difficult problems, and it is common to hear instructors remark that successful students have high levels of “mathematical maturity,” or are particularly “creative,” or write “elegant” solutions to problems. To appreciate research results regarding active learning, it is useful to make these ideas more precise. Motivated by research in education, psychology, and sociology, language has been developed that can help mathematicians clarify what we mean when we talk about the difficulty of problems and the different types of difficulty that problems can involve. This expanded vocabulary is in large part motivated by…
…the “cognitive revolution” [of the 1970’s and 1980’s]… [which] produced a significant reconceptualization of what it means to understand subject matter in different domains. There was a fundamental shift from an exclusive emphasis on knowledge — what does the student know? — to a focus on what students know and can do with their knowledge. The idea was not that knowledge is unimportant. Clearly, the more one knows, the greater the potential for that knowledge to be used. Rather, the idea was that having the knowledge was not enough; being able to use it in the appropriate circumstances is an essential component of proficiency.
— Alan Schoenfeld, Assessing Mathematical Proficiency [17]
In this article, we will explore the concept and language of “level of cognitive demand” for tasks that students encounter. A primary motivation for our discussion is the important observation in the 2014 Proceedings of the National Academy of Sciences (PNAS) article “Active learning increases student performance in science, engineering, and mathematics” by Freeman, et al. [8], that active learning has a greater impact on student performance on concept inventories than on instructor-written examinations. Concept inventories are “tests of the most basic conceptual comprehension of foundations of a subject and not of computation skill” and are “quite different from final exams and make no pretense of testing everything in a course” [5]. The Calculus Concept Inventory is the best-known such inventory in mathematics, though concept inventories in mathematics are at a relatively early stage of development and are less robust than their counterparts in disciplines such as physics. Freeman et al. state:
Although student achievement was higher under active learning for both [instructor-written course examinations and concept inventories], we hypothesize that the difference in gains for examinations versus concept inventories may be due to the two types of assessments testing qualitatively different cognitive skills. This is consistent with previous research indicating that active learning has a greater impact on student mastery of higher- versus lower-level cognitive skills…
In this article we introduce levels of cognitive demand; in the next article in this series, we will connect this topic directly to active learning techniques that are frequently used and promoted in postsecondary mathematics courses.
Bloom’s Taxonomy and Its Variants
A well-known and long-established framework in educational psychology is Bloom’s taxonomy [2]. In 1956, Benjamin Bloom and a team of educational psychologists outlined multiple levels of skills in the cognitive domain of learning, increasing from simple to complex. These are often simplified into six skill levels: knowledge, comprehension, application, analysis, synthesis, evaluation. By associating verbal cue words with each level, they categorized test questions over a variety of topics at the college level, and found that over 95% of these questions were at the very lowest level, “recall of knowledge” [11, p. 1]. Since these original findings, which were further developed in a second volume published in 1964, the core ideas of Bloom’s taxonomy have been widely used in education across disciplines.
The original taxonomy has been extended and adapted by many researchers in educational psychology. For example, Anderson et al. [1] developed a two-dimensional extension of Bloom’s taxonomy with a cognitive process dimension (remember, understand, apply, analyze, evaluate, create) similar to Bloom’s original levels, together with a knowledge dimension (factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge); a taxonomy table encoding this appears below. When categorizing a task by this taxonomy, the cognitive process dimension is represented by the verb used when specifying the task (what the student is doing), and the knowledge dimension corresponds to the noun (what kind of knowledge the student is working with). In 2002, a special volume of the journal Theory Into Practice was devoted to this revised taxonomy; examples of applications of this taxonomy can be found throughout the volume.
|                         | Remember | Understand | Apply | Analyze | Evaluate | Create |
|-------------------------|----------|------------|-------|---------|----------|--------|
| Factual Knowledge       |          |            |       |         |          |        |
| Conceptual Knowledge    |          |            |       |         |          |        |
| Procedural Knowledge    |          |            |       |         |          |        |
| Metacognitive Knowledge |          |            |       |         |          |        |
An important shortcoming of each of these taxonomies for mathematicians is that the specific descriptors used for the different levels are not always appropriate for mathematics. For instance, in Bloom’s taxonomy, application comes after comprehension, which makes sense in a general context. When this is applied to mathematics, however, it is too easy to put routine word problems in the “application” category. The idea of “application” in the general sense is to take ideas presented in one context and use them in a somewhat new setting. In mathematics, by contrast, the word “application” can refer both to the development of a mathematical model to fit a situation or data set and to the “cookbook” application of a previously-established mathematical model; most word problems in textbooks fit into the latter category.
Specialized Cognitive Taxonomies and General Student Intellectual Development
Around the same time as [1], several papers appeared that used taxonomies specialized to mathematics, e.g., [15, 19, 20, 21]. These have the two-dimensional nature of [1], with the columns or verbs replaced by labels that are specific to mathematics, while the rows or nouns simply correspond to different topics in mathematics. In 2006, Andrew Porter [14] explained it this way:
Unfortunately, defining content in terms of topics has proven to be insufficient at least if explaining variance in student achievement is the goal [9]. For example, knowing whether or not a teacher has taught linear equations, while providing some useful information, is insufficient. What about linear equations was taught? Were students taught to distinguish a linear equation from a non-linear equation? Were students taught that a linear equation represents a unique line in a two space and how to graph the line? For every topic, content can further be defined according to categories of cognitive demand. In mathematics cognitive demand might distinguish memorize; perform procedures; communicate understanding; solve non-routine problems; conjecture, generalize, prove.
More details about this taxonomy of levels of cognitive demand, as well as a comparison of various such taxonomies, can be found in [15].
Similarly, several papers by Mary Kay Stein and various co-authors [19, 20, 21] analyze mathematical tasks and how they are implemented, focusing on middle school, using four levels of cognitive demand: memorization, procedures without connections, procedures with connections, and “doing mathematics.” They identify the first two levels as “low-level,” matching the first two levels of [15], and the last two levels as “high-level,” matching the last three levels of [15].
There are also broad models for student intellectual development across not only individual topics but their entire college experience. One of the first such models is due to William Perry, and it can be (overly) simplified into the following description. Most college students begin with the belief that there are right and wrong answers to questions, and that their professors hold the knowledge of which is which. As students progress through their studies, they realize that their teachers do not always know the answers to questions, and also that answers can be more subtle than merely “right” or “wrong.” After this realization, students often enter a phase of relativism, in which everyone’s opinions are equally valid. In the final stages of intellectual development, students recognize that different areas of intellectual inquiry have different standards, and (some students) develop a balance between intellectual independence and commitment to the discipline. The Perry model has been refined and revised by many psychologists to account for diverse student experiences with respect to gender and other factors; an excellent survey of these developments, with pedagogical implications, has been given by Felder and Brent [6, 7].
As Thomas Rishel points out [16], students in the early stages of the Perry model or one of its variants often enjoy mathematics precisely because all the answers are perceived as known, and they frequently value mathematical problems that focus on verification of these truths. As these students begin to encounter complicated modeling problems, or as they are first asked to seriously participate in proof-based mathematical reasoning, the cognitive load of such tasks can be much higher than for students who have developed further along this model. Thus, the intellectual stage of development for a given student can impact the level of cognitive demand for various tasks and problems they will encounter in mathematics courses.
Practical Issues: Level Identification and Task Assessment
Given these theoretical frameworks for both cognitive engagement and intellectual development, a practical challenge for instructors is to use these frameworks effectively to increase the quality of teaching and learning in the classroom. With any of the cognitive taxonomies, it can be hard to assess precisely which level(s) a given student task targets. The taxonomy tables discussed in previous sections provide instructors with tools for producing a reasonable cognitive demand analysis of the tasks they give students. Engagement with all cognitive levels is necessary for deep learning to take place, so it is important that mathematics faculty identify and provide students with tasks representing a range of levels. Since lower-level tasks are typically already the most prevalent, and the easiest to assess in terms of both time and resources, faculty have to make a deliberate effort to bring in the higher levels. As a result, three challenges for instructors are to identify high-quality mathematical activities for students at higher levels of cognitive demand, to develop methods for assessing student work on such activities, and to create or make use of institutional programs, culture, and resources that support the use of high-quality activities. We will comment on the third issue in our next article in this series.
Some mathematics problems afford a wide range of cognitive engagement. For example, in the K-12 setting Jo Boaler and others have promoted activities described as “low-floor, high-ceiling” (LFHC). These are activities that can give students practice at lower levels of cognitive demand, but that are also open-ended enough to eventually lead to (grade-appropriate) mathematical investigations with high cognitive demand. Good examples of problems that students can engage with all the way from elementary school procedures to the highest levels of cognitive demand, leading to college-level abstract topics, can be found on the youcubed website, on sites for Math Circles, and on sites for Math Teachers’ Circles. When students are working on LFHC problems, they have flexibility in how they navigate through the problem. Unless explicit guidance is given regarding how students should investigate an LFHC problem, it is possible for them to spend most of their time working inside a small range of cognitive demand. Consequently, it is important for instructors to provide some pathways or scaffolding for students to use when first engaging with such problems.
Though such techniques are not as common as they deserve to be, mathematicians have developed a wide range of techniques for assessing high-cognitive-demand tasks, including written assignments, group work, projects, portfolios, presentations, and more [3, 4, 10, 12, 13]. However, task-appropriate techniques for assessing a given high-cognitive-demand task can be challenging to identify and put into practice. It is important that the method for assessing specific tasks be selected in the context of overall course assessment. Some mathematicians have been experimenting with grading schemes that more directly support high-cognitive-demand assignments, such as specifications grading and standards-based grading. The fact remains, however, that there is much to be learned about the efficacy of different methods of assessment [17].
Active Learning and Theories of Learning
Implicit in our discussion has been an important point that should be made explicit: as Stein et al. state [19], “…cognitive demands are analyzed as the cognitive processes in which students actually engage as they go about working on the task” as opposed to what students witness others doing. Thus, it is not possible to discuss the cognitive level of a mathematical proof itself, though proofs certainly vary in level of sophistication. Rather, one focuses on the cognitive level of what a student is asked to do with the proof: memorize the proof verbatim? construct a concrete example illustrating the proof method? derive a similar result using the same technique? analyze the proof in order to identify the key steps? compare the proof to a different proof of the same result? These tasks are all different from the perspective of cognitive demand, hence they are not interchangeable from the perspective of student learning, yet they exhibit superficial similarities and would each generally be considered valuable for students to complete. It is worth remarking that the verbs used in describing each of the tasks are helpful indicators of level of cognitive demand, as the taxonomies suggest.
This observation brings us back to active learning, which by definition has as a primary goal the engagement of students through explicit mathematical tasks in the classroom, in view of peers, instructors, and teaching assistants. One major effect of active learning techniques is that the mathematical processes and practices of students, which are tightly interwoven with high-cognitive achievement, are brought into direct confluence with peer and instructor feedback. Thus, active learning techniques complement the shift in the primary goal of student learning described by Schoenfeld, from received knowledge to committed engagement. Active learning techniques are also well-aligned with contemporary theories of learning, for example constructivism, behaviorism, sociocultural theory, and others [18, 22].
As one example of this alignment, constructivism is based on the idea that people construct their own understanding and knowledge through their experiences rather than through the passive transfer of knowledge from one individual to another. This is a prominent theory of learning among mathematics education researchers, and it has many refinements: for example, radical constructivism remains agnostic about whether there is any objective truth or reality, while social constructivism views individual thought and social interaction as inseparable, with no model for a socially isolated mind. Generally, constructivism’s emphasis on the actions of the learner reinforces the need to carefully consider the cognitive demands placed on students.
Conclusion
As research regarding the teaching and learning of postsecondary mathematics and science matures and becomes better known, both inside the mathematics community and beyond, significant evidence is building that active learning techniques have a strong impact on student achievement on high-cognitive-demand tasks. We began this article by recognizing that mathematicians are fully aware of students’ difficulties with mathematical tasks at all levels, and by observing that the mathematical community has developed language such as “mathematical maturity” and “elegance” that is often applied to distinguish successful from unsuccessful student work. Our main purpose in writing this survey of concepts related to levels of cognitive demand is to introduce mathematicians to the rich and complex set of ideas that have been developed in an attempt to distinguish different types of student activities and actions related to learning. Given the current evidence supporting the positive impact of active learning techniques, mathematics faculty will have an increasing need for a refined language with which to discuss both the successes and failures of our students and the efficacy of the large variety of active learning techniques that are available. In our next post in this series, we will discuss the most prominent of these active learning techniques and environments, with an eye toward both institutional constraints (as discussed in Part I of this series) and student learning in the context of levels of cognitive demand.
References
[1] Anderson, L.W. (Ed.), Krathwohl, D.R. (Ed.), Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., and Wittrock, M.C. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Complete edition). New York: Longman, 2001.
[2] Bloom, B., et al. (Eds.). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. New York: Longmans, Green, 1956.
[3] Braun, B. Personal, Expository, Critical, and Creative: Using Writing in Mathematics Courses. PRIMUS 24 (6), 2014, 447–464.
[4] Crannell, A., LaRose, G., and Ratliff, T. Writing Projects for Mathematics Courses: Crushed Clowns, Cars, and Coffee to Go. Mathematical Association of America, 2004.
[5] Epstein, J. The Calculus Concept Inventory—Measurement of the Effect of Teaching Methodology in Mathematics. Notices of the American Mathematical Society 60 (8), 2013, 1018–1026.
[6] Felder, R.M., and Brent, R. The Intellectual Development of Science and Engineering Students. Part 1: Models and Challenges. Journal of Engineering Education 93 (4), October 2004, 269–277.
[7] Felder, R.M., and Brent, R. The Intellectual Development of Science and Engineering Students. Part 2: Teaching to Promote Growth. Journal of Engineering Education 93 (4), October 2004, 279–291.
[8] Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H., and Wenderoth, M.P. Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. U.S.A. 111 (23), 2014, 8410–8415.
[9] Gamoran, A., Porter, A.C., Smithson, J., and White, P.A. Upgrading high school mathematics instruction: Improving learning opportunities for low-achieving, low-income youth. Educational Evaluation and Policy Analysis 19 (4), Winter 1997, 325–338.
[10] Gold, B., Keith, S.Z., and Marion, W.A. (Eds.). Assessment Practices in Undergraduate Mathematics. MAA Notes #49, Mathematical Association of America, 1999.
[11] Hess, K.K. Exploring Cognitive Demand in Instruction and Assessment. National Center for Assessment, Dover, NH, 2008. http://www.nciea.org/publications/DOK_ApplyingWebb_KH08.pdf
[12] Kasman, R. Critique That! Analytic writing assignments in advanced mathematics courses. PRIMUS 16, 2006, 1–15.
[13] Meier, J., and Rishel, T. Writing in the Teaching and Learning of Mathematics. MAA Notes #48, Mathematical Association of America, 1998.
[14] Porter, A. Curriculum Assessment. In J.L. Green, G. Camilli, and P.B. Elmore (Eds.), Complementary Methods for Research in Education (3rd edition). Washington, DC: American Educational Research Association, 2006. http://www.andyporter.org/sites/andyporter.org/files/papers/CurriculumAssessment.pdf
[15] Porter, A.C., and Smithson, J.L. Defining, Developing, and Using Curriculum Indicators. CPRE Research Report Series RR-048, Consortium for Policy Research in Education, University of Pennsylvania Graduate School of Education, December 2001. https://secure.wceruw.org/seconline/Reference/rr48.pdf
[16] Rishel, T. Teaching First: A Guide for New Mathematicians. MAA Notes #54, Mathematical Association of America, 2000.
[17] Schoenfeld, A.H. (Ed.). Assessing Mathematical Proficiency. MSRI Book Series, Volume 53, 2007.
[18] Sriraman, B., and English, L. (Eds.). Theories of Mathematics Education. New York: Springer, 2010.
[19] Stein, M.K., Grover, B.W., and Henningsen, M. Building Student Capacity for Mathematical Thinking and Reasoning: An Analysis of Mathematical Tasks Used in Reform Classrooms. American Educational Research Journal 33 (2), Summer 1996, 455–488.
[20] Stein, M.K., and Smith, M.S. Mathematical Tasks as a Framework for Reflection: From Research to Practice. Mathematics Teaching in the Middle School 3 (4), January 1998, 268–275.
[21] Stein, M.K., and Smith, M.S. Reflections on Practice: Selecting and Creating Mathematical Tasks: From Research to Practice. Mathematics Teaching in the Middle School 3 (5), February 1998, 344–350.
[22] Rowland, T., and Andrews, P. (Eds.). Master Class in Mathematics Education: International Perspectives on Teaching and Learning. London: Continuum, 2014.
Comments

Hello – I am a senior studying math education at the University of Illinois. Reading this post really gave me a chance to think about the levels of cognitive demand I’ve been exposed to in my various college math classes. For example, in my Real Analysis class over the summer, our exams mainly involved restating and reproving theorems from the lectures. We would be given a list of about 10 theorems a week before the exam, and about 3-4 of these would show up on the test. While it was nice having a heads-up as to what we would be tested on, I honestly know this hasn’t benefited me and did not involve anything more than memorization of a limited scope of the class’s topics. If you were to come to me today and ask me to prove many of these theorems that I had to know over the summer, I wouldn’t be able to recall even how to begin the proofs…which demonstrates what a low level of cognitive demand the class was structured around. However, I’ll contrast this with the Abstract Algebra class I took last semester. My professor constantly put us in small groups to complete discovery-based worksheets, which allowed us to really engage in high-level cognitive skills. If we got stuck, he would always walk around and use scaffolding techniques to get us on the right track, but without actually giving us the answer. This was probably my most memorable math class because of how engaging he made the learning process for us. When I am student teaching next semester, I want my classes to be memorable for students, like this class was for me. My goal for next semester will be to provide students with a memorable math class that involves a balance of low and high cognitive demand.
Hey Cam, I think that anybody with a mathematics concentration knows what you mean when you talk about how certain proof-based classes do not provide much cognitive demand. This is unfortunate, since these are the classes that should provide the most cognitive stimulation. The problem, I think, is that the professors do not give students a chance to think over the problems and work through the proofs alone. Struggling through problems is one of the best ways to develop cognitive skills. Rote memorization, which is what students are being asked to do before one of these tests, is not beneficial to cognitive development. Similar to you, I had an Abstract Algebra class that required us to work through the problems on our own beforehand; then during class we would go over them. This was a flipped class, and even though it was extremely difficult to adapt to, I felt it was beneficial to actually learning the material. Are there any ways you, as a teacher, could promote cognitive development within your classroom, especially for those problems that require just memorizing a formula?
Hi Cam,
I’m glad the post is helping you reflect on the cognitive demands you have faced in your own classes. This is a good first step towards thinking about getting the right mix of cognitive demands you will make of your students when you have your own classroom.
But I want to speak up for the idea of learning and being able to reproduce a major theorem as *one piece* of a package of assessment. (As you’ve noted, you want a mix of different cognitive demands.) Here’s the value of this exercise: The proofs of these theorems tend to be longer and more complex than we might expect students to come up with on their own at this level, though we might hope for them to be able to do this in the future. The proofs also tend to be long enough that you can’t simply memorize them word-for-word (though I did once have a student who did exactly that, while getting no other points on that particular exam), so you have to figure out how to break them down into the big pieces. I contend this is a valuable skill. The point is not necessarily that you will remember the proof forever, but that you had to work with it intensively for a while.
Of course, we want to have other components of assessment. When I was a student, in some of my classes we would have to reproduce just *one* of these theorems per exam, and I’ve continued that rate in my own advanced courses.