There is this joke that people like to make. It’s something about how real mathematicians don’t use numbers. It’s a little bit funny, and a lot true. As a number theorist, I, more so than anybody, should be using numbers, but the truth is that I rarely do. In my research, almost everything I do is symbolic. You know, like “let *p* be a prime,” or “Π is an element of the permutation group on *p* many elements”. For better or worse, we love to use symbols to simplify the way we communicate math.

But are we really simplifying things, or just making them a whole lot more confusing?

In the newest episode of the mathematics podcast *Relatively Prime*, host Samuel Hansen has a conversation with Joseph Mazur, author of *Enlightening Symbols*, a history of symbols in mathematics. Hansen and Mazur discuss the relatively recent — well, bearing in mind that in terms of mathematics the 16th century was just yesterday — creep of symbols into the mathematical lexicon, and what that means for understanding mathematical ideas. If the father of algebra, Hansen points out, “had picked up your algebra textbook, he would have no idea what he was looking at.”

According to Mazur, “a symbol is something that is graphic, but is also something that doesn’t look like the thing it represents.” Such a distinction is important to make, lest we start thinking that words or even numbers themselves are symbols. The letter π (that’s the lowercase one, not to be confused with the uppercase Π in the first paragraph), Mazur says, doesn’t count as a symbol: π denotes the irrational number 3.14159…, which is just the perimeter of a circle divided by its diameter; perimeter starts with p, and the Greek equivalent of p is π… you get the idea.
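Incidentally, that “perimeter of a circle divided by its diameter” description can be checked numerically without ever assuming a value for π, by inscribing polygons in a circle the way Archimedes did. Here is a minimal sketch (the function name and the number of side-doublings are my own choices for illustration):

```python
import math

def pi_from_perimeter(doublings=20):
    """Approximate pi as (perimeter of a circle) / (diameter).

    Start with a regular hexagon inscribed in a unit circle (side
    length 1) and repeatedly double the number of sides; no value
    of pi is assumed anywhere in the computation.
    """
    n, side = 6, 1.0
    for _ in range(doublings):
        # Side length after doubling the number of sides, written in a
        # numerically stable form of s' = sqrt(2 - sqrt(4 - s^2)).
        side = side / math.sqrt(2 + math.sqrt(4 - side * side))
        n *= 2
    perimeter = n * side
    return perimeter / 2  # the unit circle's diameter is 2

print(pi_from_perimeter())  # agrees with math.pi to about 12 digits
```

The polygon’s perimeter approaches the circle’s from below, so after twenty doublings (about six million sides) the ratio matches π to roughly a dozen digits.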

I’m not sure that I totally agree with Mazur’s definition of what constitutes a symbol. By his logic, Π as an element of the permutation group isn’t a symbol, since permutation starts with *p*; nor is the Σ for summation, since sum starts with *s*; nor, by extension, is the ∫ for integral, which is just an elongated S. So I’m going to say that in math, a symbol is any non-number or non-word representing a mathematical idea.

The good thing is that you can convey a great deal of information very quickly, and to the initiated audience, very little preamble is necessary. For example, my head nearly exploded when I took my first course in graduate number theory, taught by my advisor, who used the following three versions of the letter *p* all to denote distinct objects.

At first I was horrified. But after a few hours it started to make sense. You begin by letting *p* be a prime, and then the other two versions of *p* are just describing the way that same *p* looks in a different stage of life. Without getting too jargony on you, it’s just like having baby, mama, and grandma *p*, all from the same lineage.

This, of course, brings with it the collateral problem of writing these letters on the board, which is tantamount to doing high-speed calligraphy with a dull crayon. I’ve found that you just sort of find your personal style after a while, but for those just starting, *Old Pappus’ Book of Mathematical Calligraphy* is the ultimate style guide for writing math symbols.

So, are we really simplifying things, or are we just building a gigantic paywall around mathematics to make everything we do look as scary as possible? I’m not sure: when I write math I’m certainly happy for the symbols, but when I stare at a page of math, I know that my eyes always scan for the written prose first. What do you think? Let me know @extremefriday.

Your comment that you always look for the written word first connects with the work that math ed people have been doing that shows that students get into trouble because they DON’T look at the words enough. In fact they skip over the words and just look at the formulas. This is true even at the college level.

Yes! In fact, I just found the link to an article that inspired much of this; it’s from the August 2015 AMS Notices. I wonder if that’s the research you’re thinking of: http://www.ams.org/notices/201507/rnoti-p742.pdf

Well, don’t open it too much, as this is the way most scientists get their papers published nowadays. They tend to use such tricks to make it work. Quite ironic.

Weierstrass p

https://en.wikipedia.org/wiki/Weierstrass_p

As an undergraduate, I was in love with all those cool symbols, even if my awful calligraphy made it all look like chicken scratching. Sometime in graduate school, my perspective began to change (a cynical person might point to the struggles of explaining all the symbols to the typist); I began to focus more on the content and meaning of the things the symbols are intended to represent. Years of dealing with Very Bad Numerical programming — not mine, that of the engineers with whom I worked! — made me value long, descriptive names (“first_coefficient”, “specific_impulse”) instead of cryptic symbols (“c”, “z321”). I still use symbols, of course, but as sparingly as possible, and firmly believe that if the thing can’t be explained without recourse to multiple alphabets and fonts, then there is a fundamental problem with the presentation.
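The contrast this commenter describes is easy to see side by side. A minimal sketch, with hypothetical names (“first_coefficient” and “specific_impulse” are the comment’s own examples; the formula itself is invented purely for illustration):

```python
# The cryptic style the commenter warns against: nothing about
# "c" or "z321" tells the reader what is being computed.
def f(c, z321):
    return c * z321

# The same (made-up) computation with descriptive names, in the
# style the commenter recommends for numerical code.
def scaled_impulse(first_coefficient, specific_impulse):
    return first_coefficient * specific_impulse

# Both functions compute the same thing; only the readability differs.
print(f(2, 300))               # prints 600
print(scaled_impulse(2, 300))  # prints 600
```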