JMM: Wednesday, January 13th

Talks by Wilkinson, Harris, Stanley, Blum, Bhargava, Shor on Wednesday.

by Nicholas Neumann-Chun

I am now one full day of math lectures older.  The invited addresses are fairly accessible; that is, they’re aimed at a large audience (quite literally!).  That doesn’t mean I understood all, or even most, of what was said, but I was able to engage with bits of each of the six talks I saw throughout the day.  I’ll try to give a feeling for my immediate reactions to each of these talks.

First, I got to hear Professor Amie Wilkinson, from Northwestern University, talk about Chaos and Symmetry in Partially Hyperbolic Systems.  At first, I was somewhat intimidated by the title, but I soon found there were several ideas that I could latch on to, and so get a general understanding of the lecture.  Basically, the subject was the three-body problem, which I had heard of before.  Professor Wilkinson began by talking about the 1885 contest to solve a certain case of this problem, the winner of the contest, Poincaré, and his discovery of chaos.  Then followed a lot of more recent mathematics, especially work by Smale in the mid-20th century.  I came away with a vague, pictorial sense of what “hyperbolic,” “ergodic,” and “partially hyperbolic” mean.  The talk ended with a discussion of the Pugh-Shub Conjecture, and a brief explanation of how symmetry relates to all of this.

Encouraged that I wasn’t completely lost in the first talk, I eagerly awaited the next speaker: Professor Joseph Harris from Harvard University.  He spoke on The Interpolation Problem.  I was unable to guess at the nature of the talk from the title, but the key is just to go in with an open mind.  I even understood his very first slide, which I believe was just a slightly different statement of the Fundamental Theorem of Algebra from the one I am used to.  This deals with the zeros of polynomials in one variable.  I think the rest of the talk was an attempt to explain ongoing efforts to generalize this statement to polynomials in several variables.  Professor Harris was an exceptionally engaging speaker, lively and animated, so it’s a shame that I didn’t understand anything past that first slide!
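For the record, here is the sort of statement I have in mind, together with the one-variable interpolation fact that I imagine the rest of the talk was generalizing.  This is my own reconstruction, not a copy of Professor Harris’s slide:

```latex
% My reconstruction, not Professor Harris's actual slide.
Every nonzero polynomial $f \in \mathbb{C}[x]$ of degree $n$ has exactly $n$
zeros in $\mathbb{C}$, counted with multiplicity.  Relatedly, given any $n+1$
points $(x_0, y_0), \dots, (x_n, y_n)$ in the plane with the $x_i$ distinct,
there is exactly one polynomial $f$ of degree at most $n$ with $f(x_i) = y_i$
for every $i$.
```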

To start off the afternoon, Professor Richard Stanley, from MIT, delivered the first of three talks that he’s giving on permutations.  The first was on Increasing and Decreasing Subsequences.  I’m getting used to the pattern of these talks: I’ll understand the first couple of minutes very well, then a combination of new notation and concepts will leave me grasping only the bare outline of the talk, and finally a few key theorems will be introduced, of which I’ll understand only small bits and pieces.  I gather that this is as it should be!  Learning mathematics, I have heard, is in many respects like learning a language.  I have learned many words already, and this allows me to partially understand many of the talks, but I will always have more to learn; I am familiar with some common syntactical constructs, allowing me to understand some common mathematical arguments, but, again, there is always more to learn!

Back to Professor Stanley’s talk on subsequences.  First, he defined increasing and decreasing subsequences and showed how these notions can be used to naïvely model situations such as passengers boarding an airplane.  Then he introduced SYT (standard Young tableaux).  I think I understand what these are, but I didn’t see what purpose they serve in studying subsequences, nor did I follow the explanation of the RSK algorithm.  Interestingly, we somehow wound up with two new interpretations of Catalan numbers.
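Just to fix the basic definition for myself, here is a small Python sketch (entirely my own toy example, not anything from the talk) that computes the length of the longest increasing subsequence of a permutation; as I understand it, the RSK correspondence relates this number to the length of the first row of the associated standard Young tableaux.

```python
from bisect import bisect_left

def longest_increasing_subsequence_length(perm):
    """Length of the longest increasing subsequence of perm (patience sorting).

    As I understand it, under the RSK correspondence this number also equals
    the length of the first row of the standard Young tableaux attached to the
    permutation, but the code below only uses the elementary definition.
    """
    tails = []  # tails[i] = smallest possible last entry of an increasing
                # subsequence of length i + 1 seen so far
    for x in perm:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

if __name__ == "__main__":
    # e.g. 1, 3, 5, 6 is a longest increasing subsequence here, so we get 4
    print(longest_increasing_subsequence_length([2, 7, 1, 8, 3, 5, 6, 4]))
```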

Next came a talk by Professor Lenore Blum from Carnegie Mellon titled The Real Computation Controversy: Is It Real?  This was about computer science, about which I know very little; as far as I could tell, the subject was basically the difference between computation grounded in discrete mathematics (i.e., Turing machines) and computation grounded in analysis over the real numbers.

My favorite talk of the day was from Professor Manjul Bhargava (of Princeton University), entitled The Factorial Function, Integer-valued Polynomials, and p-adic Analysis.  The basis of the talk was Professor Bhargava’s own work on generalizing the concept of the factorial to rings other than the usual integers.  This generalized factorial function is constructed in a non-trivial, non-obvious way, but it is not too difficult to understand, and it works as one would hope.  That is, many of the theorems from number theory and elsewhere that involve factorials extend in a natural way to general rings (or subsets of rings) using this new factorial function.
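Here is my attempt at writing down the construction for subsets of the integers, as I understood it from the talk (so the phrasing, and any errors, are mine):

```latex
% My notes on the construction, as I understood it; any errors are mine.
Fix a subset $S \subseteq \mathbb{Z}$ and a prime $p$.  A \emph{$p$-ordering}
of $S$ is a sequence $a_0, a_1, a_2, \dots$ of elements of $S$ in which each
$a_k$ is chosen to minimize the power of $p$ dividing
$(a_k - a_0)(a_k - a_1)\cdots(a_k - a_{k-1})$.  Writing $p^{\nu_k(S,p)}$ for
that minimal power (which, remarkably, does not depend on the choices made),
the generalized factorial is
\[
  k!_S \;=\; \prod_{p \ \mathrm{prime}} p^{\nu_k(S,p)},
\]
and for $S = \mathbb{Z}$ the sequence $0, 1, 2, \dots$ is a $p$-ordering for
every prime $p$, so $k!_{\mathbb{Z}}$ is just the ordinary $k!$.
```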

I had never been introduced to p-adic numbers before, but Professor Bhargava’s simplified description made sense.  Instead of the usual notion of “distance” between two numbers on the real number line or in the complex plane, we say that two numbers are “close” if their difference is divisible by a high power of p.  How exactly this gives rise to a whole new area of analysis is unclear to me, but it seems at least plausible.
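To make that concrete for myself, here is a tiny Python sketch of the p-adic distance as I understood it (the function names are my own, and this is only meant as an illustration):

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x): the exponent of the prime p in the rational x (with v_p(0) = infinity)."""
    x = Fraction(x)
    if x == 0:
        return float("inf")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_distance(x, y, p):
    """|x - y|_p = p^(-v_p(x - y)): small when x - y is divisible by a high power of p."""
    v = p_adic_valuation(Fraction(x) - Fraction(y), p)
    return 0.0 if v == float("inf") else float(p) ** (-v)

if __name__ == "__main__":
    # 5-adically, 626 and 1 are very close, since 626 - 1 = 625 = 5^4 ...
    print(p_adic_distance(626, 1, 5))   # 0.0016, i.e. 5^(-4)
    # ... while 2 and 1 are at distance 1.
    print(p_adic_distance(2, 1, 5))     # 1.0
```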

If you wish to clarify any of my confusions or perhaps correct some incorrect statement I made, you are more than welcome to do so!


2 Responses to JMM: Wednesday, January 13th

  1. Steven Sam says:

    SYT usually means “standard Young tableau”, and I think this is what Stanley had in mind. And tableaux is the plural of tableau, so no s is necessary at the end. The connection between increasing / decreasing subsequences in permutations and SYT usually goes through the RSK correspondence in the form of Greene’s theorem. If you’re curious about this stuff, I think an excellent reference is Stanley’s book Enumerative Combinatorics Vol.2. For these things, section 7.11 covers the RSK algorithm and Appendix A.1 covers Greene’s theorem (those sections can probably be read independently).

    By the way, who wrote this article?

    Nicholas: Thank you very much for the corrections and the reference suggestion!

  2. Nick Salter says:

    In regards to p-adic analysis and how the p-adic valuation introduces a new flavor of analysis, remember that every metric induces a topology – the standard topology on Q (and by extension R) is the one induced by the metric you’re used to: the absolute value metric, which is also called an “Archimedean valuation” (to contrast, the p-adic valuations are “non-Archimedean”). Things get interesting in the p-adic case because the induced topology is, to quote William Stein, weird. This is a consequence of the fact that they are non-Archimedean, which means that they satisfy the “ultrametric inequality”, which is much stronger than the triangle inequality that holds for Archimedean valuations.

    The ultrametric inequality is that for x,y,z, d(x,z) ≤ max{d(x,y), d(y,z)}. Here’s one consequence: given x,y in Q, let the distance between them with respect to some p-adic metric be r. Then, for any s≤r, the balls of radius s around x and y are disjoint!* Compare this to the Archimedean case.

    As to why anybody cares about this stuff, number theory is one good answer. Analytic methods are central in number theory, and it’s often useful to have *every* possible metric at one’s disposal. For a reference, try the second half of William Stein’s book: “A Brief Introduction to Classical and Adelic Algebraic Number Theory” (http://modular.math.washington.edu/papers/ant/)

    *This is Lemma 16.2.11 in the Stein book. The reason why this is interesting is because this quickly implies that every open ball is also closed, which in turn gives that Q_p (the completion of Q with respect to the p-adic valuation) is totally disconnected.
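
    Nicholas: Thank you, this clears up a lot!  To convince myself of the ultrametric inequality, I ran a quick spot check of my own in Python for the 5-adic distance (only random samples, of course, which is not a proof):

```python
from fractions import Fraction
from itertools import combinations
import random

def d5(x, y):
    """5-adic distance between rationals x and y, i.e. 5^(-v_5(x - y))."""
    t = Fraction(x) - Fraction(y)
    if t == 0:
        return 0.0
    v, num, den = 0, t.numerator, t.denominator
    while num % 5 == 0:
        num, v = num // 5, v + 1
    while den % 5 == 0:
        den, v = den // 5, v - 1
    return 5.0 ** (-v)

random.seed(0)
samples = [Fraction(random.randint(-50, 50), random.randint(1, 50)) for _ in range(25)]
for x, y, z in combinations(samples, 3):
    # the ultrametric inequality: d(x, z) <= max{d(x, y), d(y, z)}
    assert d5(x, z) <= max(d5(x, y), d5(y, z))
print("the ultrametric inequality held for every sampled triple")
```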
