Let’s Take Responsibility For Our Math

In an open letter to the AMS Notices, a collaboration of prominent mathematicians and other stakeholders insists that mathematicians and universities suspend any relationship with law enforcement (see here). Their reasoning, as the letter makes clear, is that the tools mathematicians have developed for law enforcement have exacerbated harm and furthered discrimination against historically subjugated communities in the United States. Given the current national conversation concerning the treatment of Black Americans, spurred by the murder of George Floyd and others, it's only natural that these letter writers would re-evaluate mathematics' relationship to law enforcement.

For some, it may initially seem far-fetched that mathematics, often lauded as a pure and objective discipline, might play a role in divisiveness and even harm. Surely, they argue, mathematicians don't need to revisit our presheaves and morphisms for fear that we've somehow played a role in societal destruction. Well, yes and no. The line between pure and applied mathematics is blurry at best. A theorem in category theory might have implications in optimization and control. An analyst might dabble in applied probability and machine learning. Even abstract mathematicians might relate their work to important scientific motifs in other fields when applying for a grant. Simply put, mathematics does not exist in a vacuum, and as its students, we are all partially responsible for its use.

The letter writers focus on PredPol, a clear example of advanced mathematics directly impacting society. PredPol, "the predictive policing company," boasts that visitors can "join 1,000s of other Law Enforcement and Security Professionals" in using its service (1). Indeed, in 2019, PredPol algorithms were in use by more than fifty police departments (2, 3). In addition to PredPol, many companies (Palantir and Third Eye Labs, for instance) serve as a bridge between mathematics and its applications in policing. Unfortunately, these companies often treat the algorithms and AI they employ as a black box for profit, just as law enforcement treats them as a black box for arrests. As one police captain succinctly put it, "It's PredPol, and it's going to reduce crime" (3).

So, what exactly is the failure of these predictive policing algorithms? As one team of researchers puts it, such algorithms have been "empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate" (4). The mechanism is straightforward. Over-policed areas produce over-reported crime: the Stanford Open Policing Project, for example, documents a significant disparity in traffic stops for Black Americans compared to non-Black Americans (5), and activists point to the long history of racist policing policy that further skews the frequency of recorded incidents involving Black Americans (6, 7). The result is biased crime statistics. Companies then use these biased statistics to train reinforcement learning algorithms or to measure the "accuracy" of their crime models. Law enforcement, in turn, purchases these flawed models and uses them to inform its policing practices, which generates still more skewed data.
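To see how such a loop arises, consider the following toy simulation, a minimal sketch in the spirit of the urn model analyzed in (4) and not any vendor's actual algorithm. Both neighborhoods have the same true crime rate; the only asymmetry is a slight skew in the historical records, yet the allocation never corrects itself.

```python
import random

# Toy model of the "runaway feedback loop" described in (4): two
# neighborhoods with IDENTICAL true crime rates, where patrols are
# allocated in proportion to historically recorded incidents, and
# crime is only recorded where a patrol is present.
TRUE_CRIME_RATE = 0.1  # identical in both neighborhoods

# Historical records start slightly skewed toward A (say, from past
# over-policing); this is the model's only asymmetry.
recorded = {"A": 60, "B": 40}

random.seed(0)
for _ in range(10_000):
    # "Predictive" allocation: patrol neighborhood A with probability
    # equal to its share of past recorded incidents.
    share_A = recorded["A"] / (recorded["A"] + recorded["B"])
    patrol = "A" if random.random() < share_A else "B"

    # Crime occurs at the same rate everywhere, but it only enters the
    # data set where an officer is present to record it.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrol] += 1

final_share = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of records attributed to A: {final_share:.2f}")
# The true crime rates are equal, so an unbiased record would tend
# toward 0.50. In this urn-like scheme, the initial 60/40 skew instead
# persists and can drift further; the loop never self-corrects.
```

The point is structural rather than numerical: because the model is trained on data whose collection it itself directs, any initial bias is fed back into future measurements instead of being averaged away.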

A natural rejoinder from advocates of predictive policing is to acknowledge the bias and potential harm, but to argue that the existence of bias only underscores the need for continued refinement of the algorithms in play. As any applied mathematician will tell you, however, every model is an approximation of reality. Since policing deals with life-altering situations, are we really comfortable when the error of that approximation is measured in human lives? I am not. And if the reality we attempt to approximate is structurally racist, is it ethical to build models that reflect that structural inequity?

Moreover, we should remain wary of arguments alleging that predictive policing algorithms merely require further refinement. From a purely mathematical perspective, a problem that optimizes for one outcome is an interesting publication; a problem that optimizes for many outcomes is a field of research. The claim that mathematical research can eventually resolve the problems with predictive policing seems grandiose at best. After all, the implementation of such tools relies on the discretion of law enforcement in the first place. And much like a game of telephone, the intent of the original mathematicians who create predictive models is inevitably obscured through company adaptation and police implementation.

Unsurprisingly, the misuse of mathematics extends beyond the current predictive policing debate. Internationally, law enforcement agencies have faced censure for their use of flawed facial recognition software (8, 9, 10). Beyond the obvious privacy concerns, activists have pointed out that facial recognition misidentifies people with darker complexions far more often (8). While predictive policing has been the genesis of productive calls to action within mathematics, there are clearly other ethical concerns that require continued attention.

Mathematics, powerful as it may be, has never been a panacea for society's ills. So as mathematicians, what can we do? Ideologically, we must humbly accept that the conversation about predictive policing requires a diverse coalition of experts in order to avoid perpetuating harm. In doing so, we acknowledge the limitations of our mathematical expertise and allow ourselves to learn about issues outside our field. Crucially, we must sign our support for the petition to suspend cooperation with law enforcement and work to see that its goals are realized within our home institutions. Going forward, we must ensure we engage only with responsible companies and organizations. In deciding which companies are "responsible," we must solicit the opinions of a diverse array of colleagues and peers, in addition to doing our own research. More broadly, we must support initiatives to diversify our science in our classrooms, at our universities, and nationally.

If you have suggestions for additional actionable ways to address these challenges, please leave a comment.

Edit: Since the writing of this article, the Association for Women in Mathematics has also written an extensively sourced petition (see here) and I encourage people to lend their support. Also, thank you to Michael Breen for providing this New York Times article, which is a humanizing case study detailing the actual harm that results from the misuse and inaccuracy of facial recognition software.

(1) PredPol

(2) "Predictive Policing Using AI Tested by Bay Area Cops"

(3) "Predictive Policing Lacks Accuracy Tests"

(4) "Runaway Feedback Loops in Predictive Policing"

(5) The Stanford Open Policing Project

(6) "Black Lives Matter: Police Departments Have a Long History of Racism"

(7) 13th (documentary)

(8) "Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use"

(9) "How China Is Using Facial Recognition Technology"

(10) "The Global Expansion of AI Surveillance"

Disclaimer: The opinions expressed on this blog are the views of the writer(s) and do not necessarily reflect the views and opinions of the American Mathematical Society.

Comments Guidelines: The AMS encourages your comments, and hopes you will join the discussions. We review comments before they are posted, and those that are offensive, abusive, off-topic or promoting a commercial product, person or website will not be posted. Expressing disagreement is fine, but mutual respect is required.
