What is going on with the Supreme Court vis-à-vis gerrymandering?
The Supreme Court justices are busy finishing up their current term and the past weeks have seen decisions handed down on gerrymandering cases.
To get you up to speed, the court considered cases from three states during its 2017 term:
- from Wisconsin, Gill v. Whitford,
- Benisek v. Lamone, the Maryland case, and
- the Texas case Abbott v. Perez.
Information about these, and all Supreme Court cases for the 2017 term, can be found at www.scotusblog.com/case-files/terms/ot2017/
The first two cases focused on partisan gerrymandering and the third on racial gerrymandering. Decisions on the partisan claims had the potential to usher in sweeping changes in the way map-makers draw district lines and, in fact, Justice Ruth Bader Ginsburg said that Gill v. Whitford was the most important case the court would hear this term.
I wrote a bit about the Wisconsin case in August, and more in the January Notices of the AMS.
Incidentally, the court considered other voting-rights cases this term as well, including one examining the process Ohio uses to remove voters on change-of-residence grounds and one having to do with what voters can wear to polling places in Minnesota.
But what happened with the important Wisconsin and Maryland cases? In short, the Wisconsin case charged that Republicans had done the rigging, while in Maryland the Democrats were accused. The justices’ opinions in both were announced on June 18; by announcing the two decisions at the same time, the court avoided appearing to favor one party over the other. Chief Justice John Roberts delivered the Wisconsin opinion, and the Maryland opinion was per curiam.
Why are these decisions important to the mathematics and statistics community?
Amy Howe writes that with these two cases on their docket, “there were high hopes that the justices would finally weigh in definitively on challenges to the practice of purposely drawing maps to favor one party at the expense of another – either by holding that courts should steer clear of such claims or by laying out standards for courts to use in evaluating them.”
Justice Anthony Kennedy was the pivotal justice; he has indicated that courts could have a role in partisan gerrymandering cases if a workable standard for evaluating them were found (see pages 10-11 of the Gill v. Whitford opinion). Justice Kennedy was not persuaded by the arguments this term, and the status quo remains.
Had the Wisconsin case gone differently, it could, for example, have deemed the efficiency gap to be a workable standard. The mathematics community has not come together around this particular standard, nor any other quantitative measure of partisan fairness. However, our community is focusing on a general approach, which we refer to as “outlier analysis.” See below for more on this approach.
Over the past year, the AMS Council has approved a joint statement with the American Statistical Association on the role of the mathematical sciences in redistricting, and the AMS has become a partner in the ASA’s Count on Stats initiative, which educates about, supports, and advocates for the use of sound statistical science by federal agencies (including, of course, the Census Bureau, whose work is foundational for redistricting).
Researchers could – and should – take these Supreme Court decisions as a “call to action.”
Read the last paragraph of this article in The Hill!
We need to continue to:
- Improve our methods so that – one day – map-drawers, Supreme Court justices, and other key players will be able to effectively use mathematics and statistics to create (partisanly speaking) “fair” voting districts. (There are, of course, teams of researchers working on this problem already.)
- Educate – before 2020 – about the strengths of our approaches to the problem of partisan gerrymandering so that map-drawers will choose to adopt them, even without a Supreme Court decision (by, for example, giving talks in our local communities at schools, churches, and senior centers; writing op-eds; and getting involved in actually drawing maps where possible).
And, finally, where do we mathematicians and statisticians go from here?
As mentioned, researchers in the mathematics and statistics community have been developing a general approach to evaluate partisan gerrymandering that can be explained – very roughly – as follows.
The starting point is census data and a proposed map you are trying to evaluate for partisan qualities. Then:
- Create a large “ensemble” of possible alternative maps.
- Apply a metric to each map in the ensemble to assess its partisan bias.
- Make a histogram of this metric (horizontal axis shows range of values of metric; vertical axis shows percentage of maps in the ensemble with the different values of the metric).
- Ask the question: Is the proposed map an outlier in this histogram? If yes, consider it designed with partisan bias and reject it.
If we knew the partisan metric(s) of all possible redistricting maps, then we could say definitively whether the partisan metric(s) of the proposed map is an outlier. However, there are far too many possible maps to enumerate, which is why we sample an ensemble instead; a toy sketch of the whole pipeline follows.
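To make the four steps concrete, here is a toy sketch in Python. Everything in it is hypothetical – the stand-in ensemble generator, the seats-won metric, the vote shares, and the 5% outlier cutoff are illustrative choices, not anyone’s published standard – but the shape of the computation is the one described above.

```python
import random

NUM_DISTRICTS = 8
ENSEMBLE_SIZE = 10_000

def random_alternative_map():
    """Stand-in for step 1. A real ensemble generator would draw
    contiguous, equal-population districts from census geography;
    here a 'map' is just a list of Party-A vote shares by district."""
    statewide = 0.52  # hypothetical statewide Party-A vote share
    return [min(max(random.gauss(statewide, 0.08), 0.0), 1.0)
            for _ in range(NUM_DISTRICTS)]

def seats_won(district_shares):
    """Step 2 metric: how many districts Party A carries."""
    return sum(1 for share in district_shares if share > 0.5)

# Steps 1 and 2: build the ensemble and score every map in it.
ensemble_scores = [seats_won(random_alternative_map())
                   for _ in range(ENSEMBLE_SIZE)]

# Step 3: the histogram, printed as a percentage per seat count.
for seats in range(NUM_DISTRICTS + 1):
    share = 100 * ensemble_scores.count(seats) / ENSEMBLE_SIZE
    print(f"{seats} seats: {share:5.1f}% of ensemble")

# Step 4: is the proposed map an outlier?
# Hypothetical 'cracked' plan: thin Party-A majorities everywhere.
proposed = [0.53, 0.52, 0.54, 0.51, 0.55, 0.52, 0.53, 0.51]
score = seats_won(proposed)
tail = sum(1 for s in ensemble_scores if s >= score) / ENSEMBLE_SIZE
print(f"Proposed map wins {score} seats; "
      f"fraction of ensemble at least as extreme: {tail:.3f}")
if tail < 0.05:  # illustrative cutoff, not a legal standard
    print("Flag: the proposed map looks like a partisan outlier.")
```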
Central to this “outlier analysis” is the ability to generate a large number of different possible alternative maps (step 1). The goal is to create a large ensemble of maps, each of which is “reasonable” in the sense that it:
- comes within some small margin of error of equal population across districts,
- is composed of districts that are compact according to some measure (Polsby-Popper is often used; see the sketch after this list),
- is composed of districts that are contiguous, and
- respects political (e.g. county) boundaries.
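As one concrete example of a “reasonableness” check, here is a hedged sketch of the Polsby-Popper score mentioned in the list: 4π·Area/Perimeter², which equals 1 for a disc and drops toward 0 as a district’s boundary grows more contorted. The polygon below is made up; real checks run on census shapefiles.

```python
import math

def polsby_popper(vertices):
    """Compactness of a simple polygon given as (x, y) vertices in order:
    4 * pi * Area / Perimeter^2."""
    n = len(vertices)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polsby_popper(square))  # pi/4, about 0.785: fairly compact
```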
There are competing algorithms out there for generating the ensemble, and this is an active research area. Look for the work of Jowei Chen (Michigan) and Jonathan Rodden (Stanford); Wendy Cho (UIUC); Kosuke Imai and Benjamin Fifield (Princeton); Alan Frieze (Carnegie Mellon), Wesley Pegden (Carnegie Mellon), and Maria Chikina (Pittsburgh); Jonathan Mattingly (Duke).
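For a flavor of how such generators work, here is a minimal, hypothetical sketch of a single-cell “flip” Markov chain on a toy grid, loosely in the spirit of the Chikina-Frieze-Pegden line of work. The grid size, district count, and population tolerance are all illustrative; research implementations add careful mixing arguments and many more constraints.

```python
import random

ROWS, COLS, DISTRICTS = 6, 6, 3
TARGET = ROWS * COLS // DISTRICTS  # 12 cells per district
POP_TOL = 4                        # allow 12 +/- 4 cells (illustrative)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < ROWS and 0 <= c + dc < COLS:
            yield (r + dr, c + dc)

def contiguous(assignment, district):
    """True if the district's cells form one connected piece."""
    cells = {cell for cell, d in assignment.items() if d == district}
    if not cells:
        return False
    seen, stack = set(), [next(iter(cells))]
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(n for n in neighbors(cur) if n in cells)
    return seen == cells

def flip_step(assignment):
    """Propose reassigning one cell to a neighboring district,
    keeping the change only if contiguity and population bounds hold."""
    cell = random.choice(list(assignment))
    old = assignment[cell]
    options = {assignment[n] for n in neighbors(cell)} - {old}
    if not options:
        return  # interior cell: nothing to flip to
    new = random.choice(sorted(options))
    sizes = {}
    for d in assignment.values():
        sizes[d] = sizes.get(d, 0) + 1
    if sizes[old] - 1 < TARGET - POP_TOL or sizes[new] + 1 > TARGET + POP_TOL:
        return  # would violate the population tolerance
    assignment[cell] = new
    if not (contiguous(assignment, old) and contiguous(assignment, new)):
        assignment[cell] = old  # revert: proposal broke contiguity

# Start from an obviously valid plan: three vertical strips of columns.
plan = {(r, c): c // (COLS // DISTRICTS)
        for r in range(ROWS) for c in range(COLS)}
for _ in range(5_000):
    flip_step(plan)
print(plan)  # one sampled alternative map; repeat to build an ensemble
```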
The metric (step 2) could be the well-established mean-median score or the relatively new efficiency gap (the measure proposed in the Wisconsin case). The metric could instead be the number of seats each party would win, if the map were adopted, based on past election results. Sketches of the first two metrics appear below.
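Here are hedged sketches of those first two metrics, using the textbook definitions; sign conventions for the mean-median score vary across the literature, and real analyses work from precinct-level election returns rather than these made-up inputs.

```python
def mean_median(shares):
    """Party A's mean district vote share minus its median share;
    values near zero suggest symmetry, large values suggest skew."""
    s = sorted(shares)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return sum(s) / n - median

def efficiency_gap(votes):
    """votes: list of (party_a_votes, party_b_votes) per district.
    A vote is 'wasted' if cast for the loser, or cast for the winner
    beyond the 50% needed to win; the gap is net waste / total votes."""
    wasted_a = wasted_b = total = 0.0
    for a, b in votes:
        district_total = a + b
        total += district_total
        threshold = district_total / 2
        if a > b:
            wasted_a += a - threshold
            wasted_b += b
        else:
            wasted_a += a
            wasted_b += b - threshold
    return (wasted_a - wasted_b) / total

# Hypothetical three-district example (Party A packed into district 3):
print(mean_median([0.45, 0.47, 0.70]))          # mean 0.54, median 0.47
print(efficiency_gap([(45, 55), (47, 53), (70, 30)]))
```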
This procedure and analysis can be done – up front – as map-drawers draw their maps in 2020, or in court later, when maps are challenged. I would prefer that it be done up front, so that court challenges are avoided. Researchers can continue to refine the algorithms used for creating ensembles and continue to run simulations to evaluate the pros and cons of the various metrics.
All this said, we have to be very careful: Chief Justice Roberts wrote (pages 20-21 of the Gill v. Whitford opinion) about the efficiency gap and other such metrics:
The difficulty for standing purposes is that these calculations are an average measure. They do not address the effect that a gerrymander has on the votes of particular citizens. Partisan-asymmetry metrics such as the efficiency gap measure something else entirely: the effect that a gerrymander has on the fortunes of political parties. … this Court is not responsible for vindicating generalized partisan preferences. The Court’s constitutionally prescribed role is to vindicate the individual rights of the people appearing before it.
This tells me that at least some justices may never adopt our outlier analysis, because it can only determine whether the parties are treated “fairly” or “symmetrically” by the maps in question. If you are going for a “deep” read of this opinion, be sure to also read Justice Elena Kagan’s concurrence, which addresses the issue of individual harm done by partisan gerrymandering (the “one person, one vote” context) and lays out a plan for how future claims could be argued successfully.