Dan Spielman: Miracles of algebraic graph theory

I’m at this talk by Dan Spielman of Yale University, who’s giving us an introduction to spectral and algebraic graph theory.  I’m here because he was my friend’s undergraduate advisor and my friend said that “Professor Dan” is great!  Dan has won a ton of fancy prizes, and the audience is packed with people here to watch him.

This talk wove together computer science, graph theory, and physics in a very engaging manner.  It was a lot of fun, and any errors in this blog post are mine alone, which is embarrassing because I should know a fair amount of spectral graph theory and group theory (I even did my senior math seminar at Yale in spectral graph theory in 2010.  Woof.)

In spectral graph theory, we relate graphs to matrices.  The first example is an adjacency matrix, where you label the vertices of a graph and then use those labels as row/column labels for a square matrix, and put a “0” when there is no edge between the corresponding vertices, and a “1” if there is such an edge.
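
Here’s a tiny sketch of that (mine, not from the talk), in Python with numpy:

```python
import numpy as np

# A triangle with a pendant vertex: edges (0,1), (1,2), (0,2), (2,3)
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

A = np.zeros((n, n))
for a, b in edges:
    A[a, b] = 1  # put a 1 where vertices a and b share an edge
    A[b, a] = 1  # the graph is undirected, so A is symmetric

print(A)
```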

Dan: “I think it’s an accident that the adjacency matrix is useful.” He’ll go on to talk about associated matrices, linear equations, quadratic forms, and operators that are less accidental.

Quadratic forms give us some beautiful theorems about graphs.

Ugh people in this ballroom keep walking to stage right and I keep sitting on stage left.  Anyway, here’s what Dan’s head looks like from the other side of the stage.

First we’re going to try to make a linear system from a graph.  Imagine that in a graph, every edge represents an ideal spring.  If we nail down some vertices and let the rest of them settle where they will, then every node in this spring network will end up at the average of its neighbors’ positions.  He then used some physics to say that the positions the nodes settle into are the ones that minimize the total potential energy.

The equation for potential energy is (let’s hope tex works in this blog!):

$\frac{1}{2}\sum_{(a,b)\in E} (x(a)-x(b))^2$

where E is the set of edges, (a,b) is an edge, and x(v) is the position of vertex v.
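
Just to make that formula concrete, here’s a little Python sketch of my own (the graph and positions are made up for illustration):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

def potential_energy(x, edges):
    # 1/2 * sum over edges of the squared stretch (x(a) - x(b))^2
    return 0.5 * sum((x[a] - x[b]) ** 2 for a, b in edges)

x = np.array([0.0, 1.0, 0.5, 2.0])  # one position per vertex
print(potential_energy(x, edges))
```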

He showed us a cool animation of how to get a planar drawing of a 3-connected planar graph: you take the nodes of one face (say three of them), pin them to a huge convex polygon outside the rest, and then treat the graph as a spring network with those nodes nailed down.  After letting the springs settle, you get a planar drawing!

(Planar means the graph can be drawn so that edges don’t cross each other; n-connected means that if you remove any (n-1) vertices, the graph remains connected.)

I thought wanting to draw a graph with no crossing edges was a human thing, but apparently springs want to do it too.

He also showed us why this algorithm runs into trouble if the graph isn’t 3-connected.
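
Here’s a minimal sketch of the spring computation (my own illustration, not Dan’s code; I pin the outer triangle of a triangular prism rather than whatever he animated).  The key point is that “every free node sits at the average of its neighbors” is a linear system, so we can just solve it:

```python
import numpy as np

# Triangular prism: outer triangle {0,1,2}, inner triangle {3,4,5}
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5),
         (0, 3), (1, 4), (2, 5)]
n = 6

# Build the Laplacian L = D - A
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1
    L[b, b] += 1
    L[a, b] -= 1
    L[b, a] -= 1

fixed = [0, 1, 2]                 # nail these down at a big triangle
free = [3, 4, 5]
pos = np.zeros((n, 2))
pos[fixed] = [(0, 0), (4, 0), (2, 3)]

# "Each free vertex is the average of its neighbors" becomes:
# L[free, free] @ x_free = -L[free, fixed] @ x_fixed
pos[free] = np.linalg.solve(L[np.ix_(free, free)],
                            -L[np.ix_(free, fixed)] @ pos[fixed])
print(pos)  # the inner triangle settles inside the outer one, no crossings
```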

The Laplacian quadratic form

This is the fundamental object of study for the rest of the talk.  It’s also the thing we saw up above.  We can also come up with a Laplacian matrix $L_G$ associated with this quadratic form (here we’re adding a weight $w_{a,b}$ to each edge):

$x^T L_G x = \sum_{(a,b)\in E} w_{a,b}(x(a)-x(b))^2$
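
Concretely (my sketch, with unit weights): the Laplacian is $L = D - A$, degrees on the diagonal minus the adjacency matrix, and plugging any vector into $x^T L x$ gives exactly the sum above:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4

# Laplacian L = D - A: degrees on the diagonal, -w_ab off the diagonal
L = np.zeros((n, n))
for a, b in edges:
    w = 1.0  # unit weights for this example
    L[a, a] += w
    L[b, b] += w
    L[a, b] -= w
    L[b, a] -= w

x = np.array([0.0, 1.0, 0.5, 2.0])
quad_form = x @ L @ x
by_hand = sum((x[a] - x[b]) ** 2 for a, b in edges)
print(quad_form, by_hand)  # same number
```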

Now we’re using spectral graph theory! Eigenvalues and eigenvectors are amazing! Any n-by-n symmetric matrix has n real eigenvalues $\lambda_1 \le \dots \le \lambda_n$ and corresponding eigenvectors $v_i$ such that $L v_i = \lambda_i v_i$.

Dan likes to use the Courant-Fischer theorem, which tells you about the smallest eigenvalue: it’s the minimum possible value of the Laplacian quadratic form over unit vectors, and the corresponding eigenvector is where that minimum is achieved.

This gives you that the first eigenvalue is 0 and the first eigenvector is the constant vector.  So whatevs.  But you can keep using the theorem by adding conditions: $\lambda_2$ is still the minimum possible value of the Laplacian quadratic form over unit vectors, but now with the added condition that you’re orthogonal to the first eigenvector.  And you can keep going like this for all the eigenvalues.
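
You can watch this happen numerically (my sketch): numpy’s eigh returns eigenvalues in increasing order, and for a connected graph the first one is (numerically) zero with a constant eigenvector:

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

vals, vecs = np.linalg.eigh(L)  # eigenvalues in increasing order
print(vals)        # first entry is ~0
print(vecs[:, 0])  # a constant vector (up to sign and scale)
```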

Spectral graph drawing

Now we’re using an idea of Hall’s: use the eigenvectors of the Laplacian to make a spectral drawing of the graph.  You plot each vertex a at the point whose coordinates come from the eigenvectors, say $(v_2(a), v_3(a))$, to get a prettier picture of the graph.

These drawings look super nice, and we don’t entirely know why! Dan said maybe we need someone who understands differential equations a lot better than he does to explain why they come out looking so nicely planar.
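
Here’s roughly what such a drawing looks like in code (my sketch, using matplotlib; I plot vertex a at $(v_2(a), v_3(a))$):

```python
import numpy as np
import matplotlib.pyplot as plt

# A 10-cycle: its spectral drawing should recover a circle
n = 10
edges = [(i, (i + 1) % n) for i in range(n)]

L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

vals, vecs = np.linalg.eigh(L)
xs, ys = vecs[:, 1], vecs[:, 2]  # second and third eigenvectors

plt.scatter(xs, ys)
for a, b in edges:
    plt.plot([xs[a], xs[b]], [ys[a], ys[b]])
plt.show()  # the cycle comes out as a circle
```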

Spectral graph drawing in this manner doesn’t always work great, because sometimes eigenvalues come with multiplicity and you have trouble picking the right eigenvectors.  He gave examples: a squashed-up dodecahedron and Erdős’s co-authorship graph.

A random graph shouldn’t have a good drawing, because a good drawing shows structure, and a random graph doesn’t have much structure.

Next he makes that idea precise.  One undergraduate assignment he gives is: prove that if there’s a nice drawing of a graph, then $\lambda_2$ is small.  “Nice” here means the edges are mostly short and the vertices are spread out rather than clumped.  (The idea: the coordinates of a nice drawing give you a test vector, and by Courant-Fischer, $\lambda_2$ is at most its Rayleigh quotient $x^T L x / x^T x$, which is small exactly when edges are short and vertices are spread out.)

The Laplacian quadratic form helps us measure boundary.

The boundary of a set of vertices is the set of edges from that set to the rest of the vertices.  If you make a vector that gives a 1 to vertices inside your desired set and a 0 to vertices outside it, then plugging that vector into the Laplacian quadratic form gives you the size of the boundary.
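
Why: an edge with both endpoints in the set contributes $(1-1)^2 = 0$, an edge with both endpoints outside contributes $(0-0)^2 = 0$, and a boundary edge contributes $(1-0)^2 = 1$.  A quick numerical check (my sketch):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

S = {0, 1}                      # our chosen set of vertices
x = np.array([1.0 if v in S else 0.0 for v in range(n)])
print(x @ L @ x)                # 2.0: edges (1,2) and (0,2) leave S
```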

One thing you might want to do is find a big set of vertices with a small boundary; that is, we’re looking for a good partition of our set of vertices.  And you can use the Laplacian quadratic form to help figure out that partition, and that will solve a problem!  Now we need to figure out what that problem is.

Augh, too fast! Alright, this has to do with Cheeger’s inequality.  Unfortunately I have some background in this, which means that I should know what the conductance of a graph is (roughly: the minimum, over vertex subsets, of the ratio of boundary edges to the total degree of the smaller side).  Sorry!  Go look up conductance of a graph on Wikipedia!
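
For the record (quoting the standard statement rather than the slide, so caveat lector), Cheeger’s inequality relates the conductance $\phi(G)$ to the second eigenvalue $\lambda_2$ of the normalized Laplacian:

$\frac{\lambda_2}{2} \leq \phi(G) \leq \sqrt{2\lambda_2}$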

Both sides of this inequality are incredibly useful to me.

Sometimes you want to know that there isn’t a community structure in your graph (that’s the left-hand side).  It can also help you with random graphs and expanders (this is what I should know about).  The right-hand side is what makes spectral partitioning work: if $\lambda_2$ is small, you can actually find a sparse cut.

The graph isomorphism problem

You’d think that if you know anything about graphs, you should be able to tell whether two given graphs are the same.

But actually this is a “disturbingly hard problem”! There aren’t any great algorithms for labeling the vertices of two given graphs to show that they’re the same.  In fact, Babai showed in 2017 that there’s an algorithm that runs in quasipolynomial time, somewhere between polynomial and exponential.

The fact that this is hard means that many other things are hard.

An equivalent problem: if I give you two sets of points in d-dimensional space, can you tell whether one is a rotation of the other?  So the graph isomorphism problem is wide-ranging.  But eigenvalues and eigenvectors can help us.  If you relabel the vertices of a graph, you don’t really change its spectral drawing (this is a theorem).  So if you make the spectral drawings of two graphs, you might be able to tell that they’re isomorphic if you can map one drawing exactly onto the other.
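
A concrete, if weak, version of the eigenvalue idea (my sketch): relabeling vertices doesn’t change the sorted eigenvalues, so comparing spectra is a quick necessary test for isomorphism (necessary but not sufficient; non-isomorphic graphs can be cospectral):

```python
import numpy as np

def laplacian_spectrum(edges, n):
    # Sorted Laplacian eigenvalues: a labeling-invariant fingerprint
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    return np.linalg.eigvalsh(L)

# Two labelings of the same path on four vertices
g1 = [(0, 1), (1, 2), (2, 3)]
g2 = [(2, 0), (0, 3), (3, 1)]  # same path, vertices relabeled

print(np.allclose(laplacian_spectrum(g1, 4),
                  laplacian_spectrum(g2, 4)))  # True
```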

There’s a different approach: solve the graph automorphism problem instead.  Let’s try to compute a set of generators of the automorphism group of a graph.  We can use the eigenvectors to find classes of vertices, and then use some group theory to stitch the automorphisms of each class into the automorphism group of the entire graph.  If the eigenvalue multiplicities are bounded, you can do this pretty quickly!

Approximating graphs

All I’ve got is this slide for you on definitions of approximating graphs.
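
From what I remember (this is the standard definition, possibly not verbatim from the slide): a graph H is a $\kappa$-approximation of G if their Laplacian quadratic forms agree up to a factor of $\kappa$ on every vector:

$\frac{1}{\kappa}\, x^T L_H x \leq x^T L_G x \leq \kappa\, x^T L_H x \quad \text{for all } x$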

A theorem he proved with Josh Batson (who, incidentally, once spoke with me for an hour about my career and life plans, thank you very much! And was an AAAS Mass Media Fellow at WIRED! And founded the Yale Undergraduate Math Society, where I ended up treasurer at some point as an undergrad) and Nikhil Srivastava in 2012 says that every graph can be approximated by a sparse graph.  And the sparse approximations of complete graphs are expanders.

We can learn more at Dan Spielman’s web page!
