The long exact sequence of a pair of topological spaces \((X,A)\) is one of those thoroughly abstract algebraic results that end up being very useful in calculating the homology of various spaces. Here I want to explain how we can make this result much more concrete: in fact, it’s a statement about how Gaussian elimination behaves in certain kinds of matrices.

In the last post, I talked about how to compute the homology of a simplicial complex by performing elementary row and column operations. The long exact sequence arises from doing the same sorts of operations with a little bit more information.

When \(A\) is a subcomplex of \(X\), the boundary matrix for \(X\) splits into blocks. The boundary of a simplex in \(A\) stays in \(A\), but the boundary of a simplex not in \(A\) may lie partly in \(A\) and partly outside it. So we can split the standard basis for \(C_k(X)\) into two sets: simplices in \(A\) and simplices not in \(A\). With respect to this division, the boundary matrix has a block we know is zero. Since the basis elements not in \(A\) serve as a basis for \(C_k(X,A)\), this zero block corresponds to the vanishing of the composite \(C_{k+1}(A) \xrightarrow{\partial} C_k(X) \to C_k(X,A)\).
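Here is a minimal sketch of that block structure on a toy pair of my own choosing (not from the post): \(X\) is the boundary of a triangle and \(A\) is the closed edge \(\{0,1\}\), with mod-2 coefficients.

```python
import numpy as np

# Toy example: X = boundary of a triangle with vertices 0, 1, 2;
# A = the edge {0,1} together with its vertices.
# Order both bases A-first: vertices (v0, v1 | v2), edges (e01 | e02, e12).
# Boundary matrix over GF(2): rows index vertices, columns index edges.
d1 = np.array([
    [1, 1, 0],   # v0 lies in the boundary of e01 and e02
    [1, 0, 1],   # v1 lies in the boundary of e01 and e12
    [0, 1, 1],   # v2 lies in the boundary of e02 and e12
])

n_A_verts, n_A_edges = 2, 1  # sizes of the A-blocks

# The block sending edges of A to vertices *outside* A must vanish,
# since the boundary of a simplex in A stays in A.
lower_left = d1[n_A_verts:, :n_A_edges]
assert not lower_left.any()
```

The A-first ordering is what makes the zero block sit in the lower-left corner; any ordering that separates \(A\)-simplices from the rest works the same way.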

Block boundary operator matrix

The two “diagonal” blocks each in fact qualify as boundary maps of their own. That is, \(\partial_{k}^A\partial_{k+1}^A = 0\) and \(\partial_k^{(X,A)}\partial_{k+1}^{(X,A)} = 0\). This can be seen by evaluating \(\partial^2\) and noting that the blocks corresponding to maps \(C_{k+2}(A) \to C_{k}(A)\) and \(C_{k+2}(X,A) \to C_{k}(X,A)\) are the squares of the diagonal blocks.
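In block form this is a short computation. Writing \(f_k\) for the off-diagonal block \(C_k(X,A) \to C_{k-1}(A)\) (my notation, for the sake of the formula), we have:

```latex
\[
\partial_k =
\begin{pmatrix}
  \partial_k^A & f_k \\
  0            & \partial_k^{(X,A)}
\end{pmatrix},
\qquad
\partial_k \partial_{k+1} =
\begin{pmatrix}
  \partial_k^A \partial_{k+1}^A &
    \partial_k^A f_{k+1} + f_k \partial_{k+1}^{(X,A)} \\
  0 &
    \partial_k^{(X,A)} \partial_{k+1}^{(X,A)}
\end{pmatrix}
= 0.
\]
```

Setting each block of the product to zero gives the two diagonal relations, plus the off-diagonal relation \(\partial_k^A f_{k+1} + f_k \partial_{k+1}^{(X,A)} = 0\), which is exactly the constraint tracked in the steps below.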

As a result, the full boundary map \(\partial\) also encodes the boundary map \(\partial^A\) for the subcomplex \(A\) and the relative boundary map \(\partial^{(X,A)}\). We can compute both homologies at once by performing the same operations as before, just restricted to the diagonal blocks of the full boundary \(\partial\). This leaves us with a basis for each \(\ker \partial^A\) containing a basis for \(\text{im } \partial^A\), and likewise for \(\partial^{(X,A)}\). But there’s a bit more left in the matrix: a block corresponding to a map \(C_{k+1}(X,A) \to C_k(A)\).

Step 2: Homology of A and (X,A)

What do we know about this block? Quite a bit can be deduced from the fact that this reduced matrix squares to zero. Let’s look at what happens when we multiply the two relevant blocks.

Most of the blocks of the product are automatically zero by the structure of the factors, but the upper right block, corresponding to a map \(C_{k+1}(X,A) \to C_{k-1}(A)\), has two potentially nonzero regions. One comes from the composition of the block \(C_{k+1}(X,A) \to C_{k}(A)\) with the identity submatrix in the block \(C_{k}(A) \to C_{k-1}(A)\), and the other comes from the composition of the identity submatrix in \(C_{k+1}(X,A) \to C_{k}(X,A)\) with the block \(C_{k}(X,A) \to C_{k-1}(A)\).

Here’s what that looks like. There’s a vertical strip in \(\partial_{k}\) and a horizontal strip in \(\partial_{k+1}\) that contribute to the product. Where their contributions to the product don’t overlap, the corresponding strip entries must be zero. Where they do overlap, the two contributions must sum to zero.

Tracking where the zeros must be

This translates to some known zeros in the full matrix, which we will be able to leverage in our reduction operations. In particular, we know that the red block and the green block sum to zero here.

Step 3: Some known zeros in the reduced matrix

We can use the identity block for \(C_{k+1}(A) \to C_{k}(A)\) to clear out the red block using column operations. Since this is the negative of the green block, the corresponding row operations clear out the green block.

What are we doing here? The column operations amount to subtracting chains in \(C_{k+1}(A)\) from the basis vectors for \(C_{k+1}(X,A)\), which makes sense when we think of the latter space as a quotient.
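This change of representatives can be checked concretely. In a toy pair of my own choosing (not from the post: \(X\) the boundary of a triangle, \(A\) the closed edge \(\{0,1\}\), mod-2 coefficients), adding an \(A\)-chain to a relative basis vector leaves its relative boundary untouched and shifts its \(A\)-part by a boundary in \(A\):

```python
import numpy as np

# Bases ordered A-first: vertices (v0, v1 | v2), edges (e01 | e02, e12).
d1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]])

e01 = np.array([1, 0, 0])  # an edge in A
e02 = np.array([0, 1, 0])  # a representative for a relative chain

# Change the representative of [e02] in C_1(X, A) by an A-chain.
e02_new = (e02 + e01) % 2

old_bd = (d1 @ e02) % 2
new_bd = (d1 @ e02_new) % 2

# The relative boundary (the part outside A, here the v2 coordinate)
# is unchanged ...
assert old_bd[2] == new_bd[2]
# ... while the A-part changes by the boundary of e01, a boundary in A.
assert ((old_bd + new_bd) % 2 == (d1 @ e01) % 2).all()
```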

We continue clearing out columns until the remaining block directly records a map from the basis for \(H_{k+1}(X,A)\) to the basis for \(H_k(A)\). This is our connecting map. The corresponding row operations involve adding multiples of zero rows to other zero rows, so this manipulation has no side effects. We are in effect choosing different representatives in \(C_*(X)\) for chains in \(C_*(X,A)\).
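The connecting map obtained this way is the usual one: lift a relative cycle to \(C_*(X)\), take its boundary, and read off the resulting class in \(H_{k-1}(A)\). A sketch with a toy pair of my own choosing (not from the post: \(X\) the boundary of a triangle, \(A\) the closed edge \(\{0,1\}\), mod-2 coefficients):

```python
import numpy as np

# Bases ordered A-first: vertices (v0, v1 | v2), edges (e01 | e02, e12).
d1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]])

# z = e02 + e12 is a relative 1-cycle: its boundary has no component
# outside A (the v2 row), so it vanishes in C_0(X, A).
z = np.array([0, 1, 1])
bd = (d1 @ z) % 2
assert bd[2] == 0                 # z is a cycle relative to A
assert (bd[:2] == [1, 1]).all()   # its boundary v0 + v1 lies in C_0(A)

# The connecting map sends [z] to the class of v0 + v1 in H_0(A).
# Here v0 + v1 is itself the boundary of e01, an edge of A, so that
# class happens to be zero for this particular pair.
e01 = np.array([1, 0, 0])
assert ((d1 @ e01) % 2 == bd).all()
```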

Step 4: A matrix representation of the connecting map is revealed

Can we tell that the sequence this describes is exact? We need to finish reducing the matrix so we know what the rest of the maps in the sequence look like. Once we reduce the connecting map to Smith normal form, we have in fact calculated \(H_k(X)\) as well.

Step 5: The fully reduced boundary matrix reveals exactness

We can now extract all the maps in the long exact sequence from this matrix, along with bases for \(H_k(A)\), \(H_k(X,A)\), and \(H_k(X)\) (keeping track of the column operations we performed). The map \(H_k(A) \to H_k(X)\) sends each basis vector for \(H_k(A)\) either to itself (if it is not in \(\text{im } \partial\)) or to zero (if it is in \(\text{im } \partial\)). Similarly, the map \(H_k(X) \to H_k(X,A)\) sends a basis vector to itself if it is in the basis for \(H_k(X,A)\), and to zero otherwise (in which case it is in the basis for \(H_k(A)\)).

Exactness of the sequence is almost immediate. The bases for \(H_k(A)\), \(H_k(X)\), and \(H_k(X,A)\) decompose each space into a direct sum of two subspaces: a piece that is mapped isomorphically onto its image in the next term of the sequence, and a complement consisting of the image of the previous term. So we have actually gotten a bit more than just the long exact sequence: we’ve produced a decomposition of each term of the sequence. In fact, this expresses \(H_k(X)\) as a quotient of \(H_k(A)\) plus a subspace of \(H_k(X,A)\). We’ve performed the calculation involved in the spectral sequence of \(X\) associated to the two-step filtration \(A \subset X\). Without too much more effort, we should be able to extend this to a computation of the spectral sequence for a longer filtration, which I hope to write about soon.
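As a numerical sanity check on exactness, here is a small computation with a toy pair of my own choosing (not from the post: \(X\) the boundary of a triangle, \(A\) one closed edge). Along any finite exact sequence the alternating sum of dimensions is zero, and the mod-2 Betti numbers computed from boundary-matrix ranks satisfy exactly that:

```python
import numpy as np

def gf2_rank(rows):
    """Rank of a matrix over GF(2) via Gaussian elimination."""
    m = np.array(rows, dtype=int) % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]      # move the pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] = (m[r] + m[rank]) % 2      # clear the column
        rank += 1
    return rank

# X = boundary of a triangle, A = the edge {0,1} with its vertices.
# There are no 2-simplices, so over GF(2):
#   b0 = #vertices - rank(d1)   and   b1 = #edges - rank(d1).
dX = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]  # 3 vertices x 3 edges
dA = [[1], [1]]                         # 2 vertices x 1 edge
dR = [[1, 1]]                           # relative: 1 vertex x 2 edges

b0_A, b1_A = 2 - gf2_rank(dA), 1 - gf2_rank(dA)
b0_X, b1_X = 3 - gf2_rank(dX), 3 - gf2_rank(dX)
b0_R, b1_R = 1 - gf2_rank(dR), 2 - gf2_rank(dR)

# 0 -> H1(A) -> H1(X) -> H1(X,A) -> H0(A) -> H0(X) -> H0(X,A) -> 0
assert b1_A - b1_X + b1_R - b0_A + b0_X - b0_R == 0
```

Here \(b_1(X) = b_1(X,A) = 1\) and the connecting map happens to vanish, matching the lifting computation for this pair.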