# The Battle of Numbers

Our topic is the game called rithmomachia or rithmomachy—literally, the battle of numbers…

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

This month, we’re going to explore a very old—indeed, medieval—educational game and correct a mathematical error in a sixteenth-century game manual. But before we delve into the past, let me remind you that the Feature Column is seeking new columnists. If you’re interested in sharing your writing about intriguing mathematical ideas, please get in touch!

### Pleasant Utility and Useful Pleasantness

Our topic is the game called rithmomachia or rithmomachy—literally, the battle of numbers. The game is played with pieces shaped like geometric figures and labeled with different numbers, on a board like a double chessboard.

A rithmomachia set. Photo by Justin du Coeur.

The twelfth-century scholar Fortolfus described the experience of rithmomachia as the pinnacle of educated leisure:

Indeed, in this art, which you will admire in two ways, is pleasant utility and useful pleasantness. Not only does it not cause tedium, but rather it removes it; it usefully occupies one uselessly idle, and usefully un-occupies the person uselessly busy.

The game’s rules are elaborate. Their importance, and their draw for medieval intellectuals, lies in their connection to the quadrivium. Arithmetic, geometry, astronomy, and music were the four advanced arts in the medieval liberal arts curriculum. All four required an understanding of ratios and sequences. Playing rithmomachia allowed medieval people to practice their math skills and show off their erudition.

Some rithmomachia proponents even claimed the game made you a better person. They often quoted the late Roman philosopher Boethius’ “demonstration of how every inequality proceeds from equality,” which makes grand claims:

Now it remains for us to treat of a certain very profound discipline which pertains with sublime logic to every force of nature and to the very integrity of things. There is a great fruitfulness in this knowledge, if one does not neglect it, because it is goodness itself defined which thus comes to knowable form, imitable by the mind.

Boethius describes a specific procedure for creating different types of sequences and ratios, beginning with the same number:

Let there be put down for us three equal terms, that is three unities, or three twos, or however many you want to put down. Whatever happens in one, happens in the others.

Now these are the three rules: that you make the first number equal to the first, then put down a number equal to the first and the second, finally one equal to the first, twice the second, and the third.

For example, if we begin with $1, 1, 1$ we obtain $1, 2, 4$.

This is the beginning of a geometric sequence where the numbers double at each step: in Boethius’s language, it is a duplex. Applying the same rule to the new list of numbers will create a list with more complicated relationships.
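Boethius's three rules can be written as a single step acting on a triple of numbers. Here is a minimal sketch in Python (the function name is my own):

```python
def boethius_step(terms):
    """Apply Boethius's three rules to a triple (a, b, c):
    the new first term equals the first, the new second equals
    the first plus the second, and the new third equals the
    first, plus twice the second, plus the third."""
    a, b, c = terms
    return (a, a + b, a + 2 * b + c)

print(boethius_step((1, 1, 1)))  # (1, 2, 4), the duplex
print(boethius_step((1, 2, 4)))  # (1, 3, 9), with more complicated ratios
```

Iterating the step from three equal terms produces successive geometric sequences: doubling, then tripling, and so on.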

### Rithmomachia Pieces

Every rithmomachia piece has its own number (or, in some cases, a stack of numbers):

A 1556 illustration of a rithmomachia board from Claude de Boissière’s book Le tres excellent et ancien jeu pythagorique, dict Rythmomachie

The choice of numbers is not arbitrary; they are generated by rules similar to Boethius’ rules for creating inequality from equality. Traditionally, the white gamepieces are considered the “evens” team and the black pieces are considered the “odds” team, though as we will see, this split between even and odd only applies to the circles.

#### Circles

Each side has eight circle pieces, given by the first four even or odd numbers and their perfect squares. (The odd numbers skip 1, which is a more mystical “unity” in the Boethian scheme.)

Evens

| 2 | 4 | 6 | 8 |
|---|---|---|---|
| 4 | 16 | 36 | 64 |

Odds

| 3 | 5 | 7 | 9 |
|---|---|---|---|
| 9 | 25 | 49 | 81 |

#### Triangles

The triangles in a rithmomachia set appear in pairs that demonstrate superparticular proportions. These are ratios of the form $n+1:n$, such as $3:2$ or $4:3$. Practically speaking, one can lay out the triangle and circle pieces in a table. Each number in the first row of triangles is the sum of the two circle numbers above it in the same column. The numbers in the second row of triangles come from ratios: in each column, the ratio of the first triangle number to the second circle number equals the ratio of the second triangle number to the first triangle number.

I’ll start with partially completed tables, in case you want to try finding the values yourself:

Evens

| Circles I | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| Circles II | 4 | 16 | 36 | 64 |
| Triangles I | 6 | 20 | | |
| Triangles II | 9 | | | |
| Ratio | 3:2 | | | |

Odds

| Circles I | 3 | 5 | 7 | 9 |
|---|---|---|---|---|
| Circles II | 9 | 25 | 49 | 81 |
| Triangles I | | | | |
| Triangles II | | | | |
| Ratio | | | | |

In medieval and Renaissance music, different ratios were used to create different musical scales and analyze the differences between musical notes within those scales. For example, the Pythagorean temperament is based on the ratio $3:2$, which appears when finding the first Team Evens triangle values.

Here are all the triangle values:

Evens

| Circles I | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| Circles II | 4 | 16 | 36 | 64 |
| Triangles I | 6 | 20 | 42 | 72 |
| Triangles II | 9 | 25 | 49 | 81 |
| Ratio | 3:2 | 5:4 | 7:6 | 9:8 |

Odds

| Circles I | 3 | 5 | 7 | 9 |
|---|---|---|---|---|
| Circles II | 9 | 25 | 49 | 81 |
| Triangles I | 12 | 30 | 56 | 90 |
| Triangles II | 16 | 36 | 64 | 100 |
| Ratio | 4:3 | 6:5 | 8:7 | 10:9 |
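The construction of the triangle rows from the circle rows can be checked mechanically. A minimal sketch (the function name is my own):

```python
from fractions import Fraction

def triangle_rows(circles1, circles2):
    """Build the two triangle rows from the two circle rows.
    Each Triangles I entry is the sum of the two circle numbers in its
    column; each Triangles II entry continues the column in the same
    ratio that Triangles I bears to Circles II."""
    tri1 = [a + b for a, b in zip(circles1, circles2)]
    tri2 = [int(t * Fraction(t, c)) for t, c in zip(tri1, circles2)]
    return tri1, tri2

print(triangle_rows([2, 4, 6, 8], [4, 16, 36, 64]))
# ([6, 20, 42, 72], [9, 25, 49, 81])
print(triangle_rows([3, 5, 7, 9], [9, 25, 49, 81]))
# ([12, 30, 56, 90], [16, 36, 64, 100])
```

Using exact fractions rather than floating point keeps the ratio condition honest: every entry comes out as a whole number, as in the tables.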

#### Squares

The triangular rithmomachia gamepieces used ratios of the form $n+1:n$. The squares use ratios of the form $n+(n-1):n$, which we may simplify to the less evocative form $2n-1:n$. This is a special case of the more general superpartient proportions. A superpartient proportion is any ratio of the form $n+a:n$ where $a$ is an integer greater than 1 and $a$ and $n$ are relatively prime (that is, their greatest common divisor is 1).

The numbers for the square pieces may be found by repeating the method for finding the numbers for triangular pieces, now shifted two rows down. Each number in the first row of squares is the sum of the two triangle numbers above it in the same column. The numbers in the second row of squares come from ratios: in each column, the ratio of the first square number to the second triangle number equals the ratio of the second square number to the first square number.

Evens

| Circles I | 2 | 4 | 6 | 8 |
|---|---|---|---|---|
| Circles II | 4 | 16 | 36 | 64 |
| Triangles I | 6 | 20 | 42 | 72 |
| Triangles II | 9 | 25 | 49 | 81 |
| Squares I | 15 | 45 | 91 (pyramid) | 153 |
| Squares II | 25 | 81 | 169 | 289 |
| Ratio | 5:3 | 9:5 | 13:7 | 17:9 |

Odds

| Circles I | 3 | 5 | 7 | 9 |
|---|---|---|---|---|
| Circles II | 9 | 25 | 49 | 81 |
| Triangles I | 12 | 30 | 56 | 90 |
| Triangles II | 16 | 36 | 64 | 100 |
| Squares I | 28 | 66 | 120 | 190 (pyramid) |
| Squares II | 49 | 121 | 225 | 361 |
| Ratio | 7:4 | 11:6 | 15:8 | 19:10 |
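The square rows follow from the triangle rows by the same column-sum-and-ratio step, shifted two rows down. A sketch of that single shared step (the function name is my own):

```python
from fractions import Fraction

def next_two_rows(row1, row2):
    """One step of the shared construction: the new first row is the
    columnwise sum of the previous two rows, and the new second row
    continues each column in the same ratio."""
    new1 = [a + b for a, b in zip(row1, row2)]
    new2 = [int(s * Fraction(s, b)) for s, b in zip(new1, row2)]
    return new1, new2

# Even triangles give the even squares (91 is the Even pyramid's value):
print(next_two_rows([6, 20, 42, 72], [9, 25, 49, 81]))
# ([15, 45, 91, 153], [25, 81, 169, 289])

# Odd triangles give the odd squares (190 is the Odd pyramid's value):
print(next_two_rows([12, 30, 56, 90], [16, 36, 64, 100]))
# ([28, 66, 120, 190], [49, 121, 225, 361])
```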

#### Pyramids

The pyramids or kings are sums of perfect squares. Ideally, they should be built out of spare pieces of the appropriate color with these values. The Even team's pyramid has the value $1 + 4 + 9 + 16 + 25 + 36 = 91$. The Odd team's pyramid has the value $16 + 25 + 36 + 49 + 64 = 190$.

### Moving Pieces

We have already seen the starting board layout, in the illustration from Claude de Boissière’s manual. Black (Team Odds) always moves first. Each shape of piece follows a different movement rule. The following guidelines are based on the 1563 English rithmomachia manual by Lever and Fulke, which was in turn based on de Boissière’s book in French.

• The circles move one space diagonally.
• The triangles move two spaces horizontally or vertically. If not taking a piece, they may also make a chess knight’s move (“flying”).
• The squares move three spaces horizontally or vertically. If not taking a piece, they may also make a “flying” knight-like move that crosses four squares total. This may be either three horizontal squares followed by one vertical square, or three vertical squares followed by one horizontal square.
• The pyramids may move in the same way as any of the circles, triangles, or squares.

Lever and Fulke give the following diagram illustrating potential moves:

Diagram from Lever and Fulke, 1563

They illustrate the square’s knight-like move by pointing out a square may move from P to Y or T in their diagram.

### Capturing Pieces

When a player takes a piece, they change its color to their team’s color (ideally, rithmomachia pieces are two-sided!) The transformed piece moves to the row of the board closest to the player, and may now be used like other pieces. There are many ways to take pieces, using different mathematical properties. Lever and Fulke mention Equality, Obsidion (in some editions, Oblivion), Addition, Subtraction, Multiplication, and Division, as well as an optional Proportion rule.

The simplest capture method is Equality. If a piece could move to another piece with the same number, it takes that piece. The Obsidion capture is a trap: if four pieces prevent another piece from moving horizontally or vertically, it is taken.

If two pieces from one team can each move to the same piece of the other team, and those two pieces can add, subtract, multiply, or divide to make the number on the opposing piece, they capture that piece. Whether one of the two attacking pieces has to move into the space they are attacking depends on when the possible capture appears. If a player moves a piece on their turn, bringing it into position for an addition, subtraction, multiplication, or division capture, then they immediately take the other player’s piece without having to move their piece again. On the other hand, if a player notices a possible capture at the start of their turn, before they have moved a piece, they must place one of their attacking pieces in the captured piece’s space in order to take a piece by addition, subtraction, multiplication, or division.

Pyramids may not be taken by equality. They may be taken by obsidion, by addition, subtraction, multiplication, or division, by the optional proportion capture if this is in play, or by taking the pieces with square numbers that make up the pyramid one by one.

### Capture by Proportion

What is the optional rule for taking pieces by proportion? Lever and Fulke refer to arithmetic, geometric, and musical or harmonic proportion, so this optional rule has three sub-rules.

Capture by arithmetic proportion is similar to capture by addition: if two pieces may move into the space of a third and the numbers on all three pieces fit into a partial arithmetic sequence of the form $n, n+a, n+2a$, then the third piece is captured. Three pieces may also capture a fourth by arithmetic proportion. Capture by geometric proportion uses the same idea, but using partial geometric sequences of the form $n, an, a^2n$ or $n, an, a^2n, a^3n$.

Musical proportion only applies to three-term sequences. Lever and Fulke give a “definition” of musical proportion:

Musicall proportion is when the differences of the first and last from the middes, are the same, that is betwene the first and the last, as .3.4.6., betwene .3. and .4. is .1. betwene .4. and .6. is .2. the whole difference is .3. which is the difference betwene .6. and .3. the first and the last.

Unfortunately, this “definition” of musical proportion would apply to any three numbers $a, b, c$. We are comparing $(b-a) + (c-b)$ with $c-a$, but these values are the same! The correct definition of musical proportion (perhaps better known as harmonic proportion) uses ratios. Three numbers $a,b,c$ with $a < b < c$ are in harmonic proportion if $c:a = (c-b):(b-a)$. For example, $4,6,12$ is a musical proportion, because $12:4 = 3:1$ and $(12-6):(6-4) = 6:2 = 3:1$. We can now say that capture by musical proportion happens when two pieces may move into the space of a third and all three pieces fit into a harmonic proportion.
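All three proportions can be checked directly with exact arithmetic, using the corrected definition of musical proportion. A minimal sketch (the function names are my own):

```python
from fractions import Fraction

def is_arithmetic(a, b, c):
    """a, b, c form an arithmetic proportion: b - a == c - b."""
    return b - a == c - b

def is_geometric(a, b, c):
    """a, b, c form a geometric proportion: b/a == c/b."""
    return Fraction(b, a) == Fraction(c, b)

def is_musical(a, b, c):
    """a < b < c form a musical (harmonic) proportion:
    c : a == (c - b) : (b - a)."""
    return Fraction(c, a) == Fraction(c - b, b - a)

print(is_musical(4, 6, 12))   # True
print(is_musical(3, 4, 6))    # True: Lever and Fulke's example really is harmonic
print(is_arithmetic(3, 5, 7)) # True
print(is_geometric(4, 6, 9))  # True
```

Note that Lever and Fulke's example $3, 4, 6$ passes the corrected test, even though their stated definition was vacuous.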

### Victory Conditions

Even figuring out how to take pieces in rithmomachia is complex! Thus, players may agree on any of several victory conditions. These are divided into “common” victories, which are based on capturing enough pieces by some measure, and “proper” victories (also known as triumphs) which involve capturing the enemy’s pyramid and then arranging three or four of one’s own pieces to create an arithmetic, geometric, or harmonic proportion.

Here are the common victories.

• Victory of bodies: The first player to take a certain number of pieces wins.
• Victory of goods: The first player to take pieces adding to at least a certain number wins.
• Victory of quarrel: The first player to take pieces adding to at least a certain number and using a certain total number of digits wins. (This prevents a player from winning by taking a single very high value piece, as might be possible in the victory of goods.)
• Victory of honor: The first player to take a specified number of pieces adding to at least a certain number wins.

Let us quote Lever and Fulke on how to complete a proper victory or triumph:

When the king is taken, the triumph must be prepared to be set in the adversaries campe. The adversaries campe is called al the space, that is betweene the first front of his men, as they were first placed, unto the neither ende of the table, conteyning .40. spaces or as some wil .48. When you entend to make a triumph you must proclaime it, admonishing your adversarie, that he medle not with anye man to take hym, whiche you have placed for youre triumphe. Furthermore, you must bryng all your men that serve for the triumph in their direct motions, and not in theyr flying draughtes.

To triumphe therefore, is to place three or foure men within the adversaries campe, in proportion Arithmeticall, Geometricall, or Musicall, as wel of your owne men, as of your enemyes men that be taken, standing in a right lyne, direct or crosse, as in .D.A.B. or els .5.1.3. if it consist of three numbers, but if it stande of foure numbers, they maye be set lyke a square two agaynst two.

Anyone who attained a proper victory would indeed feel triumphant!

• Rafe Lever and William Fulke, The Philosophers Game, posted by Justin du Coeur.
• Michael Masi, Boethian Number Theory: A Translation of the De Institutione Arithmetica (Rodopi, 1996).
• Ann E. Moyer, The Philosophers’ Game: Rithmomachia in Medieval and Renaissance Europe (Ann Arbor: University of Michigan Press, 2001).

# The Once and Future Feature Column

We’re going to look back at the Column’s history, revisit some of our favorite columns, and talk about what comes next. Spoiler alert: We’re recruiting new columnists!

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

The number 24 has many charming properties. For instance, it can be written as $4!$ (that is, $24 = 4 \times 3 \times 2 \times 1$), and it is one of the first nonagonal numbers (the number of dots that can be evenly arranged on the sides of a regular nine-sided polygon). This year, 24 has an even more charming feature: the Feature Column is celebrating its 24th birthday (or, if you prefer, the Feature Column is 4!)

The first three nonagonal numbers. From Eric W. Weisstein, “Nonagonal Number.” (MathWorld–A Wolfram Web Resource.)

Loyal readers of the Column may have noticed some recent changes: a new address (https://blogs.ams.org/featurecolumn/), a shiny new banner incorporating artwork by Levi Qışın, and new navigational tools. This month, we’re going to look back at the Column’s history, revisit some of our favorite columns, and talk about what comes next. Spoiler alert: We’re recruiting new columnists! Check out the last section for information about how to get involved.

### The First Feature Columns

The Feature Column was founded in 1997. Its goals were to increase public awareness of the impact of mathematics and to take advantage of the functionality the then-new World Wide Web offered for sharing pictures. The Column appeared before blogs were invented: indeed, the Oxford English Dictionary dates the very first use of the long form “weblog” for a blog-like enterprise to December 1997. By that time, the Column had been running for months.

A visualization of the 1997 internet, from opte.org. (CC BY-NC 4.0.)

Steven H. Weintraub wrote the first Feature Columns. The early columns were focused on images, including intertwined knots and pictures taken by the Pathfinder Mars rover. Weintraub also took advantage of the internet’s capability to spread news quickly: Feature Columns could be posted right away, rather than adhering to the publication schedule of the AMS Notices or the Bulletin.

Steven H. Weintraub.

Some of the early columns had an element of adventure. Steven Weintraub recalls:

One that I remember in particular was the September 1998 Feature Column “Prize Winners at the 1998 International Congress of Mathematicians”. The column itself was a listing of the winners of the various prizes, with links to further information about them and their work. But there is a more interesting back story. In 1998 email and the internet were far less widespread than they are today. I attended the 1998 ICM in Berlin, and, feeling like an old-time reporter, as soon as the prize ceremony was over, I rushed out to a phone booth, called up Sherry O’Brien, the AMS staff member with whom I worked on WNIM (“What’s New in Mathematics”), and told her who the winners were. She promptly posted the information on the AMS website, and that was how many people first found out the news.

Over the course of the next few years, Tony Phillips, Bill Casselman, and Joe Malkevitch took on roles as regular Feature Columnists. They explored the web’s potential for longer columns, serializing some explorations over multiple months. They were later joined by Dave Austin, and eventually by me. For much of the Column’s existence, it was ably edited by Michael Breen, the longtime AMS Public Affairs expert. I took over the editorial role in 2020.

### Some Favorite Columns

The Feature Column has always been curiosity-driven. Though individual columns may riff on current events, the underlying mathematics is enduring. Thus, individual columns have enduring popularity: some have been read and re-read for decades. Here are some of our most popular Feature Columns.

• In 2009, David Austin made a stirring case for a key concept in applied linear algebra: We Recommend a Singular Value Decomposition. Austin illustrates the Singular Value Decomposition with clear diagrams and inviting applications, from data compression to the $\$1$ million Netflix prize.

• In 2016, Bill Casselman investigated The Legend of Abraham Wald. We’ve all seen the meme: a diagram of a fighter plane, with red marks for bullet holes on the wings and tail, but not the engine. The legend says that Abraham Wald identified this phenomenon as an example of survivorship bias: airplanes with damage in other places did not survive to be measured. Casselman explores what we do and do not know about the real, historical Abraham Wald.

Image by Martin Grandjean, McGeddon, and Cameron Moll. (CC 4.0.)

• In 2004, Joe Malkevitch wrote about Euler’s Polyhedral Formula, describing it as one of his all-time favorite mathematical theorems and also one of the all-time most influential mathematical theorems. Joe Malkevitch’s research focuses on graph theory and discrete geometry, including the properties of polyhedra. His description showcases his expertise, enthusiasm, and long-standing interest in the history of mathematics.

• In 2008, Tony Phillips described The Mathematics of Surveying. His discussion offers practical, tangible applications for key concepts in school mathematics, from similar triangles to estimated volumes.

• In the summer of 2020, I (Ursula Whitcher) wrote Quantifying Injustice, describing statistical strategies for assessing predictive policing algorithms. These algorithms can both obscure and magnify police injustices: new research provides tools to identify problems and measure their impact.

### New Columnists

We’re looking for mathematicians who are enthusiastic about communicating mathematics in written and visual form to join the Feature Column! The typical commitment is two columns per year, though we occasionally welcome guest columnists.
We are particularly excited about involving columnists with a variety of backgrounds and experiences. Please send questions and letters of interest to uaw@ams.org. If you’re ready to apply, include a CV and a writing sample!

# Risk Analysis and Romance

Happily ever after for Courtney Milan’s math-major heroine Maria Camilla Lopez involves a master’s degree focused on risk analysis. Let’s explore real-world research in risk and management, from food bank strategies to the moons of Jupiter.

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

February brings Valentine’s Day, and with it an opportunity to play one of my favorite games: what research would this fictional character be working on? This time, our protagonist is Maria Camilla Lopez, the heroine of Courtney Milan’s novel Hold Me.

Courtney Milan is a genuine polymath. She has bachelor’s degrees in mathematics and chemistry, and a master’s degree in physical chemistry. She then switched gears to earn a law degree, clerked for Supreme Court Justices Sandra Day O’Connor and Anthony Kennedy, and worked as a law professor before leaving academia for a full-time writing career.

Courtney Milan (Photo by Jovanka Novakovic)

The fictional Maria Lopez is just finishing her own bachelor’s degree in math. Maria is a nontraditional student. She took time off between high school and college to work, saving money for hormones and gender affirmation surgery. To keep herself intellectually engaged, Maria started an anonymous blog about hypothetical disasters. She funnels her real anxiety and wide-ranging curiosity into mathematical models of subjects such as international cyberattacks and zombie plagues. As the book begins, she’s running a Monte Carlo simulation of grocery supply chain failures during an apocalyptic pandemic. (Hold Me was published in 2016, but Maria’s puzzles have all-too-enduring relevance!)
Maria is becoming closer and closer friends with one of the regular commenters on her blog, a man who goes by the handle ActualPhysicist. They share science jokes and pictures of their day. Maria even shares a photo of the gorgeous, bright red, hand-decorated high heels she’s wearing as a sort of armor, to meet with an acquaintance who has been dismissive and rude. The only problem is, her acquaintance, Jay Thalang, is ActualPhysicist.

Hold Me is a romantic comedy, so eventually Maria and Jay work things out. This entails Jay admitting what a jerk he has been. His rudeness stemmed from a combination of stress, sexism, youthful trauma, and a form of loneliness many mathematicians will relate to—the loneliness of having your closest friends scattered all over the world.

The cover of Hold Me

I want to focus on the resolution of another of Maria’s problems, the question of what to do after graduation. She applies to entry-level positions in actuarial science, but she wants to do something weirder and riskier: use her expertise in imaginary disasters to advise companies on preventing real ones. To do so, she needs credentials. She seeks them in a very specific place: Stanford’s Management Science and Engineering (MS&E) department.

Novels are full of fictional departments at fictional universities, but Management Science and Engineering is a real program. It focuses on mathematically informed approaches to solving business and policy problems, drawing on disciplines such as operations research, statistics, and computer science.

What kinds of projects would Maria find intriguing? Let’s explore some of the real research at MS&E that could engage someone with a strong mathematical background and experience modeling a wide range of scenarios.

### Elisabeth Paté-Cornell and the Europa Clipper

The Engineering Risk Research Group headed by Professor Elisabeth Paté-Cornell, the founding chair of MS&E, provides an obvious source of projects for Maria.
Paté-Cornell, whose father was an officer in the French Marine Corps, was born in Dakar, Senegal in 1948. Growing up, she was interested in both mathematics and literature, but decided that a more technical career would offer her more job opportunities while still allowing her to indulge her literary interests. Thus, she majored in mathematics and physics at Aix-Marseille University, earning bachelor’s degrees in both subjects in 1968.

Though her undergraduate program was highly theoretical, Paté-Cornell knew she wanted to attack more applied problems. She did a master’s degree in the exciting new field of computer science at the Institut Polytechnique in Grenoble. Based on advice from one of her professors there, she came to Stanford for a second master’s degree in operations research. She combined all of this experience for her PhD from Stanford’s Engineering-Economic Systems department, where she worked on risk analysis and models of earthquakes.

Over the course of her career, Paté-Cornell has pursued research on a huge variety of topics, including space shuttle heat shielding and the risk of nuclear war. She has analyzed lessons from disasters such as the failure of the Fukushima Daiichi nuclear plant and the Deepwater Horizon oil spill, studied hospital trauma centers, and considered terrorism risks.

Elisabeth Paté-Cornell (Professional photo used under CC-BY-SA 4.0)

Recently, Paté-Cornell mentored Stanford mechanical engineering PhD student Yiqing Ding and four MS&E master’s students, Sean Duggan, Matthew Ferranti (now an economics PhD student at Harvard), Michael Jagadpramana, and Rushal Rege, in a study of radiation risk in outer space. Their subject was NASA’s Europa Clipper spacecraft, which is due to launch toward Jupiter’s moon Europa in 2024.

Europa is covered in smooth water ice, streaked with lines or cracks. Scientists hypothesize that a moon-wide liquid ocean layer lies between the ice and Europa’s rocky core.
Learning about Europa’s structure is made more difficult because the moon orbits within a belt of radiation trapped by Jupiter’s magnetic field. Radiation is a danger to both spacecraft and the scientific instruments they carry. To manage the radiation risk, instead of orbiting Europa itself, the Clipper will enter an elliptical orbit around Jupiter. Each time the Clipper flies by Europa it will pass by at a different angle, slowly building a detailed picture of the moon’s surface.

Schematic illustration of Europa Clipper flybys

Ding, Paté-Cornell, and their group write that past quantitative analyses of radiation risk in space exploration have focused on possible radiation exposure to individual astronauts. The radiation risk to the Europa Clipper is different because of its cumulative exposure over multiple flybys, and because the difficulties of exploration near Jupiter limit our information about how intense those exposures might be. Furthermore, different parts of the spacecraft and its payload may have different radiation tolerances.

Instead of assuming a constant radiation dose on each flyby, the MS&E group built a probabilistic model that allowed radiation to be higher or lower, according to a log-normal distribution. In other words, the logarithm of the radiation dose was normally distributed. In one example they considered, the most likely radiation dose in a single twelve-hour flyby was just under 2000 rad, but the potential dose was far higher. (For comparison, doctors treating cancer might target a tumor with 2000 rad over the course of five days.)

After constructing their model, the group ran simulations, modeling approximately 1000 missions with about 70 flybys in each mission. The extra flybys allowed them to see how long it might take for multiple instruments to fail. Their model showed that multiple instruments were likely to fail in quick succession as radiation accumulated.
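The shape of such a simulation is easy to sketch. Everything numerical below is my own illustrative assumption except the roughly 2000-rad most likely single-flyby dose, which comes from the example in the text; the instrument names and cumulative tolerances are invented:

```python
import math
import random

random.seed(0)

# Log-normal dose per flyby whose mode (most likely value) is 2000 rad:
# for a log-normal, mode = exp(mu - sigma^2).
sigma = 0.5
mu = math.log(2000) + sigma ** 2

# Hypothetical instruments with invented cumulative tolerances, in rad.
tolerances = {"camera": 60_000, "spectrometer": 90_000, "radar": 120_000}

def failure_flybys(n_flybys=70):
    """Run one mission: accumulate a random dose on each flyby and
    record the flyby on which each instrument's tolerance is first
    exceeded."""
    dose, failed = 0.0, {}
    for flyby in range(1, n_flybys + 1):
        dose += random.lognormvariate(mu, sigma)
        for name, tol in tolerances.items():
            if name not in failed and dose > tol:
                failed[name] = flyby
    return failed

# Simulate roughly 1000 missions, as the MS&E group did.
missions = [failure_flybys() for _ in range(1000)]
```

With these made-up tolerances the instruments typically fail within a few flybys of one another once the cumulative dose climbs, echoing the group’s qualitative finding.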
Log-normal distribution curves for different parameters

Of course, radiation is only one form of risk to the Europa Clipper mission. Paté-Cornell has written elsewhere about the importance of incorporating multiple types of error, including human error, in complete risk analyses. Systematic attempts to measure risk encourage us to contemplate dangers we might otherwise ignore. In an essay entitled “Improving Risk Management: From Lame Excuses to Principled Practice,” Paté-Cornell and Louis Anthony Cox Jr. (University of Colorado) write:

Deliberate exercises in applying “prospective hindsight” (i.e., assuming that a failure will occur in the future, and envisioning scenarios of how it could happen) and probabilistic analysis using systems analysis, event trees, fault trees, and simulation can be used to overcome common psychological biases that profoundly limit our foresight. These include anchoring and availability biases, confirmation bias, group-think, status quo bias, or endowment effects.

### Volunteers and Food Systems

Maria hopes that graduate school will help her connect with businesses and community groups that are trying to make better choices. She would find new opportunities to do so by collaborating with Professor Irene Lo, who researches ways to use operations research for social good. Lo majored in math at Princeton University and received a PhD from Columbia’s Industrial Engineering and Operations Research department in 2018. She has studied school choice algorithms and problems in graph theory.

Irene Lo (Professional photo used by permission)

Recently, Lo put her expertise in matching to the test in a collaboration with Food Rescue U.S. (FRUS), a nonprofit that connects businesses that have extra food with food banks that need it. Coordinating food pickup is a hard problem. Food Rescue U.S. uses an app to connect volunteers who want to help with donor businesses that have food ready to share.
Lo, Yale School of Management professor Vahideh Manshadi, and the PhD students Scott Rodilitz (Yale) and Ali Shameli (Stanford) teamed up with Food Rescue U.S. to look for ways to maximize volunteer engagement. Volunteers are more likely to keep contributing to an organization when it’s easy for them to find ways to participate.

One strategy Food Rescue U.S. uses to keep volunteers involved is “adoption”: a volunteer can promise to visit a particular site at the same time every week. Adoption makes food delivery more predictable for both volunteers and businesses. But if too many sites are adopted, volunteers logging into the app for the first time won’t have anything to do. This conundrum illustrates an economic concept called market thickness: buyers and sellers (or, here, volunteers and donors) can only accomplish their goals when sufficient numbers of people participate in the process.

Lo, Manshadi, Rodilitz, and Shameli built a mathematical model to study matching between volunteers and donor sites. Choose a scaling parameter $n$ that controls the overall size of the market, and suppose there are $na$ donor sites and $nb$ volunteers. Suppose the probability that a volunteer likes an available donor site is $c/n$, where $c$ is another fixed parameter (so matching is easier when $c$ is large, and tougher when $c$ is small). When the first volunteer arrives, the probability that none of the sites works for them is $(1-\frac{c}{n})^{na}$. Thus, the probability that the volunteer can find a good match is $1-(1-\frac{c}{n})^{na}$. If they are successful, the number of available sites drops by 1. Write $M$ for the total number of matches after all volunteers have arrived. Lo, Manshadi, Rodilitz, and Shameli showed that as the scaling factor $n$ grows large, $M/n$ converges (almost surely) to $a + b - \frac{1}{c} \log(e^{ca}+e^{cb}-1)$.
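The limiting formula can be checked against a direct Monte Carlo simulation of the arrival process described above; a minimal sketch, with parameter values of my own choosing:

```python
import math
import random

random.seed(0)

def matched_fraction(n, a, b, c, trials=100):
    """Estimate M/n for the matching model: n*a donor sites and n*b
    volunteers arriving one at a time, each matching (and removing)
    some available site with probability 1 - (1 - c/n)**sites."""
    total = 0
    for _ in range(trials):
        sites = round(n * a)
        for _ in range(round(n * b)):
            if random.random() < 1 - (1 - c / n) ** sites:
                total += 1
                sites -= 1
    return total / (trials * n)

a, b, c = 1.0, 1.5, 2.0
limit = a + b - math.log(math.exp(c * a) + math.exp(c * b) - 1) / c
print(round(matched_fraction(500, a, b, c), 3), "vs limit", round(limit, 3))
```

Even at moderate $n$ the simulated fraction sits close to the almost-sure limit.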
Using this mathematical model, Lo and her collaborators then considered multiple rounds of volunteer and donor matching, and explored how removing some donor sites due to adoption would change the overall matching process. They identified two simple and appealing optimal strategies, depending on market characteristics: either all of the donor sites should be adopted, or none of them should be removed from the pool. More complicated efforts at optimization did not increase the number of overall matches. They point out that this theoretical prediction matches real-world observations about the differences between volunteer pools in different places:

Our interviews with site directors reveal that there are inherent differences between the volunteer pools in different locations. For example, some FRUS sites are in college towns, and thus, the volunteer base consists of many engaged students who are more likely to be attentive to last-minute needs. In other sites, however, a majority of volunteers are professionals who may not be as flexible in their level of engagement.

Nonprofits could use this insight to find new, subtle ways to encourage their volunteers to keep coming back. For example, instead of showing the same “adopt” button to everyone logging into the app, Food Rescue U.S. could encourage adoption in big cities and discourage it in college towns. Our heroine Maria Lopez, who knows a lot about building online communities, might have other ideas to test!

# Quantifying Injustice

Just as a YouTube algorithm might recommend videos with more and more extremist views, machine learning techniques applied to crime data can magnify existing injustice. …

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

### What is predictive policing?

Predictive policing is a law enforcement technique in which officers choose where and when to patrol based on crime predictions made by computer algorithms.
This is no longer the realm of prototype or thought experiment: predictive policing software is commercially available in packages with names such as HunchLab and PredPol, and has been adopted by police departments across the United States. Algorithmic advice might seem impartial. But decisions about where and when police should patrol are part of the edifice of racial injustice. As the political scientist Sandra Bass wrote in an influential 2001 article, “race, space, and policing” are three factors that “have been central in forwarding race-based social control and have been intertwined in public policy and police practices since the earliest days” of United States history. One potential problem with predictive policing algorithms is the data used as input. What counts as a crime? Who is willing to call the police, and who is afraid to report? What areas do officers visit often, and what areas do they avoid without a specific request? Who gets pulled over, and who is let off with a warning? Just as a YouTube algorithm might recommend videos with more and more extremist views, machine learning techniques applied to crime data can magnify existing injustice. ### Measuring bias in predictive policing algorithms In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. This program is based on research by the anthropologist P. Jeffrey Brantingham, the mathematician Andrea Bertozzi, and other members of their UCLA-based team. The PredPol algorithm was inspired by efforts to predict earthquakes. It is specifically focused on spatial locations, and its proponents describe an effort to prevent “hotspots” of concentrated crime. In contrast to many other predictive policing programs, the algorithms behind PredPol have been published. 
Such transparency makes it easier to evaluate a program’s effects and to test the advice it would give in various scenarios. Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called the synthetic population. The term “synthetic population” brings to mind a city full of robots, or perhaps Blade Runner-style androids, but the actual technique is simpler. The idea is to create an anonymized collection of profiles that has the same demographic properties as a real-world population. For example, suppose you are interested in correlations between choices for a major and favorite superhero movies in a university’s freshman class. A synthetic population for a ten-person freshman seminar might look something like this:

1. Education major; Thor: Ragnarok
2. Education major; Wonder Woman
3. History major; Wonder Woman
4. Math major; Black Panther
5. Music major; Black Panther
6. Music major; Black Panther
7. Music major; Thor: Ragnarok
8. Undeclared; Black Panther
9. Undeclared; Thor: Ragnarok
10. Undeclared; Wonder Woman

This is a toy model using just a couple of variables. In practice, synthetic populations can include much more detail. A synthetic population of students might include information about credits completed, financial aid status, and GPA for each individual, for example. Lum and Isaac created a synthetic population for the city of Oakland. This population incorporated information about gender, household income, age, race, and home location, using data drawn from the 2010 US Census. Next, they used the 2011 National Survey on Drug Use and Health (NSDUH) to estimate the probability that somebody with a particular demographic profile had used illegal drugs in the past year, and randomly assigned each person in the synthetic population to the status of drug user or non-user based on this probabilistic model.
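The recipe translates directly into code. The sketch below (Python; the profiles, weights, and probabilities are invented for illustration and are not Lum and Isaac’s census or NSDUH numbers) builds a toy synthetic population the same way: draw a demographic profile at random according to its population weight, then assign drug-use status at random according to that profile’s estimated rate.

```python
import random

rng = random.Random(42)

# Hypothetical demographic profiles with made-up population weights.
profiles = [
    {"age": "18-25", "income": "low",  "weight": 0.30},
    {"age": "18-25", "income": "high", "weight": 0.20},
    {"age": "26-64", "income": "low",  "weight": 0.25},
    {"age": "26-64", "income": "high", "weight": 0.25},
]

# Hypothetical past-year drug-use probabilities for each profile.
use_prob = {
    ("18-25", "low"): 0.20,
    ("18-25", "high"): 0.18,
    ("26-64", "low"): 0.12,
    ("26-64", "high"): 0.11,
}


def make_population(size):
    """Draw profiles by weight, then flip a biased coin per person
    to assign drug-use status from the profile's probability."""
    people = []
    for _ in range(size):
        p = rng.choices(profiles, weights=[q["weight"] for q in profiles])[0]
        uses = rng.random() < use_prob[(p["age"], p["income"])]
        people.append({"age": p["age"], "income": p["income"], "uses": uses})
    return people


pop = make_population(10_000)
```

Aggregating over the whole synthetic population gives an overall use rate near the weighted average of the profile rates (about 15% with these made-up numbers), even though no individual record corresponds to a real person.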
They noted that this assignment included some implicit assumptions. For example, they were assuming that drug use in Oakland paralleled drug use nationwide. However, it’s possible that local public health initiatives or differences in regulatory frameworks could affect how and when people actually use drugs. They also pointed out that some people lie about their drug use on public health surveys; however, they reasoned that people have less incentive to lie to public health workers than to law enforcement.

A West Oakland transit stop. (Photo by Thomas Hawk, CC BY-NC 2.0.)

According to Lum and Isaac’s probabilistic model, individuals living anywhere in Oakland were likely to use illegal drugs at about the same rate. Though the absolute number of drug users was higher in some locations than others, this was due to greater population density: more people meant more potential drug users. Lum and Isaac compared this information to data about 2010 arrests for drug possession made by the Oakland Police Department. Those arrests were clustered along International Boulevard and in an area of West Oakland near the 980 freeway. The variations in arrest levels were significant: Lum and Isaac wrote that these neighborhoods “experience about 200 times more drug-related arrests than areas outside of these clusters.” These were also neighborhoods with higher proportions of non-white and low-income residents.

The PredPol algorithm predicts crime levels in grid locations, one day ahead, and flags “hotspots” for extra policing. Using the Oakland Police crime data, Lum and Isaac generated PredPol crime “predictions” for every day in 2011. The locations flagged for extra policing were the same locations that already had disproportionate numbers of arrests in 2010.
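The danger of reusing arrest data as input can be seen in a stripped-down toy model (my own illustration, not Lum and Isaac’s simulation and not the PredPol algorithm): two neighborhoods with identical true incident rates, where patrols go wherever past records are highest, and new records are generated only where patrols go.

```python
import random

rng = random.Random(1)

# Two neighborhoods with IDENTICAL true incident rates.
true_rate = [0.1, 0.1]
# Recorded incidents; neighborhood 0 starts with one extra record.
records = [11, 10]

for day in range(2000):
    # Patrol the neighborhood with the most recorded incidents so far.
    patrolled = 0 if records[0] >= records[1] else 1
    # Incidents are only recorded where police are present.
    if rng.random() < true_rate[patrolled]:
        records[patrolled] += 1
```

After a few thousand steps, the arbitrarily favored neighborhood has accumulated hundreds of records while the other stays frozen at its starting count, even though both have the same underlying rate. The recorded data ends up measuring where patrols went, not where incidents happened.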
Combining this information with their demographic data, Lum and Isaac found that Black people were roughly twice as likely as white people to be targeted by police efforts under this system, and people who were neither white nor Black were one-and-a-half times as likely to be targeted as white people. Meanwhile, estimated use of illegal drugs was similar across all of these categories (white people’s estimated drug use was slightly higher, at just a bit more than 15%). This striking disparity is already present under the assumption that increased police presence does not increase arrests. When Lum and Isaac modified their simulation to add arrests in targeted “hotspots,” they observed a feedback effect, in which the algorithm predicted more and more crimes in the same places. In turn, this led to more police presence and more intense surveillance of just a few city residents.

In a follow-up paper, the computer scientists Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian worked with a pair of University of Utah undergraduate students to explore feedback effects. They found that if crime reports were weighted differently, with crime from areas outside the algorithm’s “hotspots” given more emphasis, intensified surveillance on just a few places could be avoided. But such adjustments to one algorithm cannot solve the fundamental problem with predictions based on current crime reports. As Lum and Isaac observed, predictive policing “is aptly named: it is predicting future policing, not future crime.”

### Further reading

• Sandra Bass, “Policing Space, Policing Race: Social Control Imperatives and Police Discretionary Decisions,” Social Justice, Vol. 28, No. 1 (83), Welfare and Punishment in the Bush Era (Spring 2001), pp. 156–176. (JSTOR.)
• Danielle Ensign, Sorelle A. Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian, “Runaway Feedback Loops in Predictive Policing,” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 2018.
• Kristian Lum and William Isaac, “To predict and serve?” Significance, October 10, 2016. (The Royal Statistical Society.)
• Cathy O’Neil, Weapons of Math Destruction.

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

## From Strings to Mirrors

To tell you where mirror symmetry came from, I have to tell you about string theory. And to do that, I have to tell you why you should care about string theory in the first place. That story starts with an old question: what is the smallest piece of the universe? …

Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan

### Introduction

Scientists often use mathematical ideas to make discoveries. The area of research mathematics known as mirror symmetry does the reverse: it uses ideas from theoretical physics to create mathematical discoveries, linking apparently unconnected areas of pure mathematics.

### String theory

To tell you where mirror symmetry came from, I have to tell you about string theory. And to do that, I have to tell you why you should care about string theory in the first place. That story starts with an old question: what is the smallest piece of the universe?

Here's a rapid summary of a couple of thousand years of answers to this question in the West. The ancient Greeks theorized that there were four elements (earth, air, fire, water), which combined in different ways to create the different types of matter that we see around us. Later, alchemists discovered that recombination was not so easy: though many chemicals can be mixed to create other chemicals, there was no way to mix other substances and create gold. Eventually, scientists (now calling themselves chemists) decided that gold was itself an element, that is, a collection of indivisible components of matter called gold atoms.
Continued scientific experimentation prompted longer and longer lists of elements. By arranging these elements in a specific way, Dmitri Mendeleev produced a periodic table that captured common properties of the elements and suggested new, yet-to-be-discovered ones. Why were there so many different elements? Because (scientists deduced) each atom was composed of smaller pieces: protons, neutrons, and electrons. Different combinations of these sub-atomic particles produced the chemical properties that we ascribe to different elements.

This story is tidy and satisfying. But there are still some weird things about it: for example, protons and neutrons are huge compared to electrons. Also, experiments around the beginning of the twentieth century suggested that we shouldn't just be looking for components of matter. The electromagnetic energy that makes up light has its own fundamental component, called a photon. The fact that light sometimes acts like a particle, the photon, and sometimes like a wave is one of the many weird things about quantum physics. (The word "quantum" is related to "quantity"—the idea that light might be something we can count!)

Lots and lots of work by lots and lots of physicists trying to understand matter and energy particles, over the course of the twentieth century, produced the "Standard Model." Protons and neutrons are made up of even smaller components, called quarks. Quarks are held together by the strong force, whose particle is a gluon. The weak force, which governs certain kinds of radioactive decay, has its own force particles. The full Standard Model includes seventeen different fundamental particles.

There are two theoretical issues with the Standard Model. One is essentially aesthetic: it's really complicated. Based on their experience with the periodic table, scientists suspect that there should be some underlying principle or structure relating the different types of particles. The second issue is more pressing: there's no gravity particle.
Every other force in the universe can be described by one of the "force carrier" particles in the Standard Model. Why is gravity different? The best description we have of gravity is Einstein's theory of general relativity, which says gravitational effects come from curvature in the fabric of spacetime. This is an excellent way to describe the behavior of huge objects, such as stars and galaxies, over large distances. But at small distance scales (like atoms) or high energies (such as those seen in a particle accelerator or in the early universe), this description breaks down.

People have certainly tried to create a quantum theory of gravity. This would involve a force carrier particle called a graviton. But the theory of quantum physics and the theory of general relativity don't play well together. The problem is the different ways they treat energy. Quantum physics says that when you have enough energy in a system, force-carrier particles can be created. (The timing of their appearance is random, but it's easy to predict what will happen on average, just as we know that when we flip a coin over and over, we'll get tails about half the time.) General relativity says that the shape of spacetime itself contains energy. So why aren't we detecting random bursts of energy from outer space, as gravitons are created and destroyed?

String theory is one possible answer to this question. String theory says that the smallest things in the universe are not point particles. They extend in one dimension, like minuscule loops or squiggles—hence the name string. Strings with different amounts of energy correspond to the particles with different properties that we can detect in a lab. The simplicity of this setup is compelling. Even better, it solves the infinite energy problem: interactions that would occur at a particular moment in spacetime, in the point particle model, are smoothed out over a wider area of spacetime if we model those interactions with strings.
But string theory does pose some conceptual complications. To explain them, let's look at the underlying mathematical ideas more carefully. In general relativity, we think of space and time together as a multidimensional geometric object, four-dimensional spacetime. Abstractly, the evolution of a single particle in time is a curve in spacetime that we call its worldline. If we start with a string instead of a point particle, over time it will trace out something abstractly two-dimensional, like a piece of paper or a floppy cylinder. We call this the worldsheet. One can imagine embedding that worldsheet into higher-dimensional spacetime. From there, we have a standard procedure to create a quantum theory, called quantization.

If we work with four-dimensional spacetime, we run into a problem at this point. In general relativity, the difference between time and the other, spatial dimensions is encoded by a negative sign. Combine that negative sign with the standard quantization procedure, and you end up predicting quantum states—potential states of our universe, in this model—whose probability of occurring is the square root of a negative number. That's unphysical, which is a nice way of saying "completely ridiculous." Since every spatial dimension gives us a positive sign, we can potentially cancel out the negatives and remove the unphysical states if we allow our spacetime to have more than four dimensions. If we're trying to build a theory that is physically realistic, in the sense of having both bosonic and fermionic states (things like photons and things like electrons), it turns out that the magic number of spacetime dimensions is ten.

If there are ten dimensions in total, we have six extra dimensions! Since we see no evidence of these dimensions in everyday life, they must be tiny (on a scale comparable to the string length), and compact or curled up.
Since this theory is supposed to be compatible with general relativity, they should be "flat" in a precise mathematical sense, so their curvature doesn't contribute extra gravitational energy. And to allow for both bosons and fermions, they should be highly symmetric. Such six-dimensional spaces do exist. They're called Calabi-Yau manifolds: Calabi for the mathematician who predicted their existence, Yau for the mathematician who proved they really are flat.

### String dualities

One of the surprising things about string theory, and one of the most interesting from a mathematical perspective, is that fundamentally different assumptions about the setup can produce models of the universe that look identical. These correspondences are called string dualities. The simplest string duality is called T-duality (T is for torus, the mathematical name for doughnut shapes and their generalizations). Suppose the extra dimensions of the universe were just a circle (a one-dimensional torus). A string's energy is proportional to its length; we can't directly measure the length of a string, but we can measure the energy it has. However, a string wrapped once around a big circle and a string wrapped many times around a small circle can have the same length! So the universe where the extra circle has radius 2 and the universe where the radius is ½ look the same to us. The same holds for the universes of radius 3 and 1/3, 10 and 1/10, or generally $R$ and $1/R$.

But what if we want a more physically realistic theory, where there are six extra dimensions of the universe? Well, we assume that the two-dimensional string worldsheet is mapping into these six extra dimensions. Our theory will have various physical fields, similar to the electromagnetic field. To keep track of what a particular field is doing back on the worldsheet, we use coordinates $x$ and $y$. We can combine those coordinates into a single complex number $z = x + iy$. That $i$ there is an imaginary number.
When I first learned about imaginary numbers, I was certain they were the best numbers, since they used the imagination; I know that "Why are you wasting my time with numbers that don't even exist?" is a more typical reaction. In this case, though, $i$ is standing in for a very concrete concept, direction: changing $x$ moves right or left, while changing $iy$ moves up or down. If we simultaneously increase $x$ a little bit and $y$ a little bit, we'll move diagonally right and up; we can think of that small diagonal shift as a little change in $z = x + iy$. If you want to be able to move all around the plane, just increasing or decreasing $z$ like this isn't enough. Mathematicians use $\bar{z} = x - iy$ to talk about motion that goes diagonally right and down.

Now, back to building our string theory. The fields depend on $x$ and $y$, but they're highly symmetric: to figure out how they act on the whole worldsheet, it's enough to know how they change either based on a little change in $z$, or based on a little change in $\bar{z}$ (so we don't have to measure right-and-up and left-and-down changes separately). If you have two fields like this, they might change in similar ways (both varying a lot due to small changes in $z$, say), or they might change in different ways (one depending on $z$ and the other on $\bar{z}$). From the physics point of view, this choice is not a big deal. You're just choosing either two plus signs (this choice is called the B-model) or a plus and a minus sign (the A-model). Either way, you can carry on from there and start working out all the physical characteristics of these fields, trying to understand predictions about gravity, and so on and so forth. Because this choice really doesn't matter, it shouldn't make any difference to your eventual predictions.
In particular, any type of universe you can describe by choosing two plus signs and working out the details should also be a type of universe you can describe by choosing one plus and one minus, then working out those details. How do we match up those two types of universes? By choosing different shapes for the six extra dimensions. Using this logic, physicists predicted that if you picked a specific shape for the extra dimensions of the universe and worked out the details of the A-model, you should be able to find a different shape that would give you the same physical details once you worked out its B-model theory.

Now, I said the sign choice wasn't a big deal from the physical perspective. But it's a huge deal from the mathematical perspective. If you only choose plus signs, you can rewrite everything that happens in terms of just powers of $z$, and start doing algebra. Algebra is great! You can program your computer to do algebra, and find lots of information about your six-dimensional space really fast! On the other hand, if you choose one plus and one minus sign, you're stuck doing calculus (a very special kind of multivariable, multidimensional calculus, where experts engage in intense arguments about what sorts of simplifying assumptions are valid).
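The distinction between "depends on $z$" and "depends on $\bar{z}$" can even be tested numerically, using the derivative $\frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right)$, which vanishes exactly on functions of $z$ alone. Here is a small finite-difference sketch (Python; the helper name is mine):

```python
# Numerically check that f(z) = z**2 depends only on z (its zbar-derivative
# vanishes), while g(z) = conj(z)**2 depends only on zbar (it doesn't).
h = 1e-6  # finite-difference step


def d_zbar(f, z):
    """Wirtinger derivative d/d(zbar) = (d/dx + i d/dy) / 2, by central differences."""
    dx = (f(z + h) - f(z - h)) / (2 * h)
    dy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return (dx + 1j * dy) / 2


z0 = 1.3 + 0.7j
holo = d_zbar(lambda z: z**2, z0)               # ~ 0: a function of z alone
anti = d_zbar(lambda z: z.conjugate()**2, z0)   # ~ 2 * conj(z0): depends on zbar
```

For $f(z) = z^2$ the result is numerically zero, while for $g = \bar{z}^2$ it is not; this is the precise sense in which the all-plus-signs choice lets you work purely with powers of $z$.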

Thus, when physicists came along and said, "Hey, these two kinds of math give you the same kinds of physical predictions," that told mathematicians they could turn incredibly difficult calculus problems into algebra problems (and thereby relate two branches of mathematics that had previously seemed completely different). Mathematicians call this insight, and the research it inspired, "mirror symmetry."