An introduction to incidence matrices

Last time we saw some algebraic properties of adjacency matrices. Today I’d like to discuss another related representation of graphs using matrices: the incidence matrix. It is defined as follows:

Definition. Given a graph \(G = (V, E)\), with \(|V| = n, |E| = m\), and \(V = \{v_1, \cdots, v_n\}\), \(E = \{e_1, \cdots, e_m\}\), we define the incidence matrix \(B\) of \(G\) as the \(\{0, 1\}^{n \times m}\) matrix such that \(B_{i, j} = 1 \iff v_i \in e_j\); in other words, \(B_{i, j} = 1\) exactly when edge \(e_j\) is incident to node \(v_i\).

To aid understanding, let us look at an example. Suppose \(G = (V, E)\) is the graph with \(V = \{v_1, \cdots, v_5\}\) and edges \(e_1 = \{v_1, v_2\}\), \(e_2 = \{v_2, v_3\}\), \(e_3 = \{v_1, v_3\}\), \(e_4 = \{v_3, v_4\}\), \(e_5 = \{v_4, v_5\}\), \(e_6 = \{v_3, v_5\}\).

Since we have \(n = 5\) nodes and \(m = 6\) edges, the incidence matrix \(B\) of \(G\) will be an element of \(\{0, 1\}^{5 \times 6}\). Specifically, we will have the following as its incidence matrix:

\[ B = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 \end{pmatrix} \]
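As a quick sanity check, here is a minimal sketch in Python (using numpy; the helper `incidence_matrix` and the 0-indexed edge list are my own conventions, nothing standard) that builds this matrix from the edge list:

```python
import numpy as np

def incidence_matrix(n, edges):
    """Build the n x m incidence matrix B with B[i, j] = 1
    iff node i is an endpoint of edge j (nodes are 0-indexed)."""
    B = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        B[u, j] = 1
        B[v, j] = 1
    return B

# The example graph above: v1, ..., v5 become indices 0, ..., 4.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
B = incidence_matrix(5, edges)
print(B)  # matches the matrix displayed above
```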

Unlike the adjacency matrix, the incidence matrix is rarely used as a data structure in programming. Several factors make it unattractive in many cases: it takes \(n \times m\) space, which for dense graphs (where \(m\) can be on the order of \(n^2\)) is far worse than the \(n \times n\) of an adjacency matrix, and even a basic query like “are \(v_i\) and \(v_j\) adjacent?” requires scanning all \(m\) columns.

For all its faults, however, the incidence matrix does have some very interesting properties. In some sense, it gives more of a personality to edges, as opposed to considering them “something that may happen between nodes”. We will see why this intuition makes sense.

As a first property, I’d like to show a well-known one that relates incidence matrices to adjacency matrices. It was somewhat unexpected to me :)

A connection between incidence and adjacency matrices

Since we have these two ways to represent graphs, it is natural to ask oneself if they are related in some algebraic way. The answer is yes, and it is a rather simple relationship:

Lemma. Let \(G = (V, E)\) be a graph, \(n = |V|\), \(m = |E|\), \(A(G)\) its adjacency matrix, and \(B = B(G)\) its incidence matrix. Then there exists a diagonal matrix \(\Delta \in \mathbb{N}^{n \times n}\) such that

\[ B B^t = A(G) + \Delta \]

First, let’s check that the dimensions match. \(B \in \{0, 1\}^{n \times m}\), so \(B B^t \in \mathbb{N}^{n \times n}\). Since \(A \in \{0, 1\}^{n \times n}\) and \(\Delta \in \mathbb{N}^{n \times n}\), the dimensions match. For convenience, let’s say \(V = \{v_1, \cdots, v_n\}\), \(E = \{e_1, \cdots, e_m\}\). Let us also denote \(A(G)\) by just \(A\).

Now let’s see what happens to a given entry of \(B B^t\). Let’s call the \(i\)th row of \(B\) \(B_i\). Then we have \[ (B B^t)_{i, j} = \sum_{k=1}^m b_{i, k} b^t_{k, j} = \sum_{k=1}^m b_{i, k} b_{j, k} = \langle B_i, B_j \rangle \]

So it is the inner product of the \(i\)th and \(j\)th rows. Since every factor in this inner product is either \(0\) or \(1\), we can look at when both \(b_{i, k}\) and \(b_{j, k}\) are nonzero. This is the case exactly when edge \(e_k\) connects nodes \(v_i\) and \(v_j\), that is, when \(e_k = \{v_i, v_j\}\). In a simple graph this happens at most once. In fact, if \(i \ne j\), the sum is \(1\) if and only if \(A_{i, j} = 1\). So for all \(i \ne j\), \((B B^t)_{i, j} = A_{i, j}\).

If \(i = j\), then what we have is

\[ (B B^t)_{i, i} = \langle B_i, B_i \rangle = \sum_{k=1}^m b_{i, k}^2 \]

\(b_{i, k}\) will be \(0\) if \(e_k\) is not incident to \(v_i\), and \(1\) otherwise. Hence, this summation equals \(\deg(v_i)\).

Then, if we set \(\Delta_{i, j} = \delta_{i, j} \deg(v_i)\), so that \(\Delta\) is the diagonal matrix of degrees, we have that

\[ B B^t = A + \Delta \]
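On the example graph from before, this identity is easy to verify numerically. A small sketch, reusing the `B` and `edges` from the earlier snippet (the names are mine):

```python
# Build the adjacency matrix A of the example graph.
A = np.zeros((5, 5), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

# Delta is the diagonal matrix of degrees: deg(v_i) = number of incident edges.
Delta = np.diag(B.sum(axis=1))

assert np.array_equal(B @ B.T, A + Delta)  # B B^t = A + Delta
```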

A connection between \(G\) and its line graph \(L(G)\)

Lemma. Given a graph \(G\) and its incidence matrix \(B\), call \(A(L(G))\) the adjacency matrix of its line graph, \(L(G)\). Then \[ B^t B = 2I_m + A(L(G)) \]

With \(I_m\) the identity matrix in \(\{0, 1\}^{m \times m}\). The proof is straightforward.

Call \(n = |V|, m = |E|\). \(B \in \{0, 1\}^{n \times m}\), so \(B^t B \in \mathbb{N}^{m \times m}\). Let \(1 \le i, j \le m\). \[ (B^t B)_{i, j} = \sum_{k = 1}^n b^t_{i, k} b_{k, j} = \sum_{k = 1}^n b_{k, i} b_{k, j} = \langle B_i, B_j \rangle \]

where we now denote by \(B_i\) the \(i\)th column of \(B\) (unlike before, where it was a row).

Suppose \(i = j\). Then \(\langle B_i, B_j \rangle = \langle B_i, B_i \rangle = \sum_{k = 1}^n b_{k, i}^2 = 2\), since \(e_i\) has exactly two endpoints.

Suppose \(i \ne j\). For \(b_{k, i} b_{k, j}\) to be nonzero, both \(e_i\) and \(e_j\) must be incident to \(v_k\). In a simple graph, two distinct edges can share at most one node, so there is at most one such \(k\); for every other \(k'\), \(b_{k', i} b_{k', j} = 0\). Hence \(\langle B_i, B_j \rangle \in \{0, 1\}\), and it is \(1\) exactly when \(e_i\) and \(e_j\) share an endpoint.

Consider the line graph of \(G\), \(L(G)\). \(L(G)\) has, as nodes, the edges of \(G\). Two nodes of \(L(G)\) are connected by an edge in \(L(G)\) if and only if the edges of \(G\) they represent share an incidence to a vertex in \(G\) (which we saw above is unique if it exists). Hence, if we take the adjacency matrix \(A(L(G))\) of \(L(G)\), then for \(1 \le i, j \le m\), \(A(L(G))_{i, j} = 1 \iff e_i\) and \(e_j\) share an incidence to some node \(v_k\). This was the same as \(\langle B_i, B_j \rangle\).

Then \(B^t B = 2I_m + A(L(G))\).

From this, we can extract some useful information. For instance, we can tell that every eigenvalue of a line graph is at least \(-2\). The proof is simple: \(B^t B\) is a Gram matrix, hence positive semidefinite, so all of its eigenvalues are \(\ge 0\); since \(A(L(G)) = B^t B - 2I_m\), every eigenvalue of \(A(L(G))\) is an eigenvalue of \(B^t B\) shifted down by \(2\), and therefore at least \(-2\).
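Both the identity and the eigenvalue bound can be checked numerically on the example graph. A short sketch, again reusing the `B` built earlier:

```python
# Recover the adjacency matrix of the line graph from B^t B = 2 I_m + A(L(G)).
m = B.shape[1]
AL = B.T @ B - 2 * np.eye(m, dtype=int)

# A(L(G)) is a symmetric 0/1 matrix with zero diagonal...
assert np.array_equal(AL, AL.T) and np.all(np.diag(AL) == 0)

# ...and all of its eigenvalues are at least -2.
eigenvalues = np.linalg.eigvalsh(AL.astype(float))
assert eigenvalues.min() >= -2 - 1e-9
```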

Ranks and determinants of incidence matrices

This relation between an incidence matrix and a line graph is already an interesting algebraic property, and we will make use of it in future expositions. I’d now like to introduce longer, but maybe more interesting results regarding incidence matrices.

Theorem. Let \(G = (V, E)\) be a graph, and let \(B\) be its incidence matrix. Call \(rk(B)_K\) the rank of \(B\) over the field \(K\). Then

\[ rk(B)_{GF(2)} = |V| - 1 \iff G\text{ is connected.} \]

\(\Leftarrow\)) We will first show that the \(n\) rows of \(B\) are linearly dependent, and then that any subset of at most \(n-1\) rows is linearly independent. The result follows.

For \(1 \le i \le n\), take \(v_i\) to be the \(i\)th row of \(B\). We have that \(\sum_{i = 1}^n v_i = 0\), since the \(j\)th entry of the result is \(\sum_{i = 1}^n B_{i, j}\), the sum of the entries of the \(j\)th column of \(B\), and every column has exactly two \(1\)s. Hence, every entry in this vector is even (in particular, \(2\)), and since we are working over \(GF(2)\), this is the \(0\) vector. Hence the rows are linearly dependent, and the matrix \(B\) does not have full row rank over \(GF(2)\). Then \(rk(B)_{GF(2)} \le n-1\).
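This row-sum fact takes one line to check on the example matrix from before (assuming the `B` built in the earlier sketches):

```python
# Every column of B has exactly two 1s, so the rows of B sum to 0 over GF(2).
print(B.sum(axis=0) % 2)  # [0 0 0 0 0 0]
```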

To see that the rank over \(GF(2)\) is exactly \(n-1\), let us show that no nonempty proper subset of the rows sums to zero; over \(GF(2)\), every linear dependence is a sum of a nonempty subset of the rows, so this shows that any \(n-1\) rows are linearly independent. So suppose we had \(d \le n-1\) rows \(S = \{v_{r_1}, \cdots, v_{r_d}\}\) with \(x = \sum_{i=1}^d v_{r_i} = 0\). Since \(G\) is connected, there is at least one edge \(e_j\) connecting one of the \(d\) nodes represented by these rows to some node outside of them. Then the \(j\)th entry of \(x\) must be nonzero, because exactly one of the two \(1\)s in column \(j\) lies in a row of \(S\). But then \(x\) was nonzero, which is absurd.

Therefore, any subset of \(d \le n-1\) rows of the matrix \(B\) is linearly independent over \(GF(2)\), and thus \(rk(B)_{GF(2)} \ge n-1\). Since we also had \(rk(B)_{GF(2)} \le n-1\), the result follows.

\(\Rightarrow\)) We know that \(rk(B)_{GF(2)} = |V|-1\), and we want to prove that \(G\) is connected. As always, call \(n = |V|\). Because \(rk(B)_{GF(2)} = n-1\), there are no \(d < n\) rows \(v_1, \cdots, v_d\) such that \(\sum_{i=1}^d v_i = 0\): if there were, then together with the relation among all \(n\) rows shown above, we would have two independent linear relations, forcing \(rk(B)_{GF(2)} \le n-2\). So if we take any nonempty proper subset of the nodes, there must be at least one edge connecting it to the remaining nodes: if there weren’t, every edge would contribute either \(0\) or \(2\) to each entry of the sum of the corresponding rows, making that sum \(0\) over \(GF(2)\). Thus, no set of \(d < n\) nodes can be isolated from the rest of the graph, meaning \(G\) is connected.

Lemma. Let \(G = (V, E)\) be a graph with \(c\) connected components, and let \(B\) be its incidence matrix. Then \(rk(B)_{GF(2)} = |V| - c\).

The proof is easy given the above theorem: re-order the nodes and edges so that the incidence matrix becomes block diagonal,

\[ B = \begin{pmatrix} C_1 & 0 & \cdots & 0 \\ 0 & C_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_c \end{pmatrix} \]

where each \(C_i\) is the incidence matrix of one connected component on, say, \(n_i\) nodes. By the theorem, \(C_i\) has rank \(n_i - 1\) over \(GF(2)\), and the rank of a block-diagonal matrix is the sum of the ranks of its blocks, so \(rk(B)_{GF(2)} = \sum_{i=1}^c (n_i - 1) = |V| - c\).
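This lemma suggests a cute way to count connected components: compute the rank of \(B\) over \(GF(2)\) and subtract it from \(|V|\). A minimal sketch (the `gf2_rank` helper is my own, written as plain Gaussian elimination mod 2):

```python
def gf2_rank(M):
    """Rank of a 0/1 integer matrix over GF(2), via Gaussian elimination mod 2."""
    M = M.copy() % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue  # no pivot available in this column
        M[[rank, pivot]] = M[[pivot, rank]]  # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2  # eliminate the 1 in this column
        rank += 1
    return rank

# The example graph is connected: |V| - rank = 5 - 4 = 1 component.
print(5 - gf2_rank(B))
```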

Definition. Given a graph \(G = (V, E)\) and its incidence matrix \(B\), if we remove one row from \(B\), we call the result a reduced incidence matrix. Clearly we will have \(|V|\) different reduced incidence matrices for every incidence matrix \(B\).

Lemma. Let \(G\) be a graph, and \(B'\) be any of its reduced incidence matrices. Then \(B'\) is invertible over \(GF(2)\) \(\iff G\) is a tree.

Call \(n = |V|\). If \(G\) is a tree, it has \(n-1\) edges, so its incidence matrix is in \(\{0, 1\}^{n \times (n-1)}\), and since a tree is connected, it has rank \(n-1\) over \(GF(2)\); moreover, we showed above that any \(n-1\) of its rows are linearly independent. By removing one row, we are left with an \((n-1) \times (n-1)\) matrix of rank \(n-1\), and hence it is invertible. The converse holds just as easily.
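For instance, a path is a tree, so any of its reduced incidence matrices should be invertible over \(GF(2)\). A quick check with the helpers from the earlier sketches:

```python
# A path on 5 nodes is a tree with 4 edges, so B is 5 x 4.
path_B = incidence_matrix(5, [(i, i + 1) for i in range(4)])

# Dropping the first row leaves a 4 x 4 matrix of full GF(2) rank: invertible.
reduced = np.delete(path_B, 0, axis=0)
print(gf2_rank(reduced))  # 4
```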

Finally, a broader theorem, this time applying over the reals rather than \(GF(2)\).

Theorem. Let \(G = (V, E)\) be a graph, with \(n = |V|\), and let \(B\) be its incidence matrix. Then \(rk_{\mathbb{R}}(B) = n \iff\) no connected component of \(G\) is bipartite.

This result is interesting, since we usually consider linear independence over the reals. Furthermore, it gives a characterization of the presence of odd-length cycles in \(G\). Recall that we previously characterized bipartite graphs as exactly those lacking odd-length cycles; in matrix terms, if \(A\) is the adjacency matrix of a graph \(G\), then \(G\) is bipartite \(\iff A^k_{i, i} = 0\) for all \(1 \le i \le n\) and all odd \(k\).

In essence, this is a bridge of the form algebra \(\to\) graph theory \(\to\) algebra. :)

Without further ado, the proof.

\(\Rightarrow\)) We see at once that if the graph has a bipartite connected component, then \(B\) cannot have full row rank. Indeed, suppose \(G\) has such a component, and let \(V_1 \sqcup V_2\) be its bipartition. Every edge of the component has exactly one endpoint in each part, so the rows corresponding to \(V_1\) add up to the same vector as the rows corresponding to \(V_2\) (both sums equal the indicator vector of the component’s edges). This is a nontrivial linear dependence among the rows, hence \(rk_{\mathbb{R}}(B) \ne n\).

The following proof I learned from Chris Godsil.

\(\Leftarrow\)) Now we want to see that if no connected component of \(G\) is bipartite, then \(B\) has full row rank. Since, after reordering, \(B\) is block diagonal with one block per connected component, it suffices to prove this for each component; so assume \(G\) is connected and not bipartite.

First, take a spanning tree \(T\) of \(G\), and consider the edges of \(G\) that lie outside \(T\). There must be at least one such edge that, when added to \(T\), creates a non-bipartite subgraph \(H\) of \(G\). Indeed, \(T\) is bipartite (every tree is), so let \(V = V_1 \sqcup V_2\) be its bipartition; if every edge outside \(T\) also went from \(V_1\) to \(V_2\), then no edge of \(G\) at all would go from one part to itself, making \(G\) bipartite, which we know is not true.

So let \(e\) be that edge, and let \(H = T + e\).

What we see is that \(H\) is a single odd-length cycle, plus trees coming out of its nodes. We will prove our proposition by induction on the number of edges outside the cycle.

The base case is when we have \(0\) edges outside of our cycle, i.e. \(H\) is an odd-length cycle. In that case, we can reorder the nodes and edges of \(H\) so we have this incidence matrix \(B_H\) for \(H\) (noting that rearranging/relabeling nodes and edges is simply a permutation of the matrix, so it does not change its rank):

\[ B_H = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 1\\ 1 & 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & 1 & \cdots & 0 & 0\\ \vdots & & \ddots & \ddots & & \vdots\\ 0 & 0 & \cdots & 1 & 1 & 0\\ 0 & 0 & \cdots & 0 & 1 & 1 \end{pmatrix} \]

Expanding the determinant along the first row, and using that \(n\) is odd (so the cofactor sign \((-1)^{1+n}\) is \(+1\)), we get \(\det(B_H) = \det(X) + \det(Y)\), where \(X\) and \(Y\) are the minors obtained by deleting the first row together with the first and last column, respectively. In fact, \(Y = X^t\), and \(X\) is triangular with \(1\)s on the diagonal, so \(\det(X) = \det(Y) = 1\). Then \(\det(B_H) = 2 \ne 0\), and \(B_H\) has full rank.
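We can watch this determinant behave as predicted by computing it for small cycles. A sketch using the `incidence_matrix` helper from before (the `cycle_incidence` name is mine):

```python
def cycle_incidence(n):
    """Incidence matrix of the cycle C_n, edge j joining nodes j and j+1 (mod n)."""
    return incidence_matrix(n, [(i, (i + 1) % n) for i in range(n)])

for n in (3, 4, 5, 6):
    det = round(np.linalg.det(cycle_incidence(n).astype(float)))
    print(n, det)  # 2 when n is odd, 0 when n is even
```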

Now let’s assume our result is true for \(k\) edges outside the cycle, and let’s look at the case where we have \(k+1\) edges. Since \(H\) is nothing more than an odd-length cycle plus trees sprouting from its nodes, we can consider any leaf \(x\) of these trees. Removing \(x\) (along with its single pendant edge) neither disconnects \(H\) nor makes it bipartite. Now, however, \(H \setminus x\) has only \(k\) edges outside the odd-length cycle, and thus our hypothesis guarantees its incidence matrix has full row rank.

Now let’s add the node \(x\) back, together with its pendant edge. We are adding one row and one column to the matrix \(B_{H \setminus x}\), which we knew had full row rank. The new column is the only one with a nonzero entry in the row of \(x\), so it cannot be a linear combination of the other columns, and the column rank increases by one. But the column rank equals the row rank, so the row rank increased by one as well.

Then, our new matrix has full row rank, and our result follows by induction. Finally, since the incidence matrix of \(H\) consists of a subset of the columns of \(B\), the matrix \(B\) itself has full row rank as well.
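The theorem is easy to test numerically: an odd cycle is not bipartite and its incidence matrix has full rank over the reals, while an even cycle is bipartite and loses one rank. A quick sketch with the helpers from before:

```python
# Odd cycle C_5: not bipartite, so rk_R(B) = n = 5.
print(np.linalg.matrix_rank(cycle_incidence(5).astype(float)))  # 5

# Even cycle C_6: bipartite, so the rank drops to n - 1 = 5.
print(np.linalg.matrix_rank(cycle_incidence(6).astype(float)))  # 5
```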

I hope you found these results interesting; I sure did. I especially like how, using algebraic graph theory, we can jump back and forth between algebra and graph theory, using results from one to prove results in the other. These sorts of connections are part of the beauty I find in mathematics.