A few days ago, the hydra game came up in discussion with Brian Lawrence; we'd both run into a somewhat mystifying analysis of the game via ordinals. Brian and I came up with a much more down-to-earth analysis (which experts will recognize as the same as the usual one); Brian decided to write it up, and I asked him if I could post his write-up here.

*Guest post by Brian Lawrence:*

I've read in a number of places about the Hydra game, which one plays by applying a certain type of move to a rooted tree. The theorem (surprising at first) is that the Hydra game always terminates: there is no infinite sequence of legal moves. The theorem is used in mathematical logic as an example of a result that can only be proven in a sufficiently strong proof system. I'm not an expert in logic, so I won't say any more about that. Instead, I'd like to give a straightforward proof of the Hydra theorem.

The game is based on the mythical Hydra, a many-headed monster. In myth, every time you cut off one of the Hydra's heads, it would grow two more.

Our Hydra is a (finite) rooted tree. One performs a sequence of moves; each move is as follows. Choose a leaf \(x_0\) of the rooted tree, and remove it. Then, let \(x_1\) be the parent node of \(x_0,\) and \(x_2\) the parent node of \(x_1\) (assuming it exists). Let \(S\) be the subtree consisting of \(x_1\) and all its descendants, after \(x_0\) has been removed. When you remove \(x_0,\) the Hydra chooses an integer \(N > 0,\) and attaches \(N\) new copies of \(S\) as children of \(x_2.\) If \(x_0\) doesn't have a grandparent, the Hydra doesn't get to add anything to the tree. (Note that the Hydra can choose a different \(N\) at every step.) The game ends when only a single vertex (the root) remains.

There's a link to a Java version at the bottom of this page. (The in-website applet doesn't work for me, but the downloadable version does.)

Here's the remarkable fact: no matter how we choose which leaves to cut off, and no matter how large the Hydra chooses its positive integers \(N,\) the game is guaranteed to terminate after finitely many steps.
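To see the claim in action, here is a minimal Python sketch (the representation and the cutting strategy are my own, not part of the original game description): a hydra is a nested list, each node being the list of its children, and at each step we cut the leaf found by always descending into the first child, while the Hydra regrows a fixed \(N = 2\) copies.

```python
import copy

def step(tree, N):
    """Perform one hydra move; return False if only the root remains."""
    path = [tree]
    while path[-1]:                # descend into the first child until a leaf
        path.append(path[-1][0])
    if len(path) == 1:
        return False               # the tree is a single vertex: game over
    parent = path[-2]
    parent.pop(0)                  # cut off the head (the leaf is parent[0])
    if len(path) >= 3:             # the leaf had a grandparent: regrowth
        grandparent = path[-3]
        for _ in range(N):         # N copies of the damaged subtree
            grandparent.append(copy.deepcopy(parent))
    return True

# Root -> one child -> two leaves: a tree of depth 2.
hydra = [[[], []]]
steps = 0
while step(hydra, N=2):
    steps += 1
```

Despite the regrowth, the loop terminates with `hydra == []` (a bare root). Larger starting trees or larger \(N\) only lengthen the game; they never make it infinite.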

Let's define the "depth" of a rooted tree to be the length of the longest path from root to leaf. If a tree has only one vertex, it has depth \(0;\) if all leaves are adjacent to the root, the tree has depth \(1;\) and so forth. I'll prove the theorem by induction on the depth, starting with the case of depth \(2.\) (The theorem is trivial for trees of depth \(0\) and \(1.\))
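With a rooted tree represented as a nested list of children (an encoding of my own choosing, purely for illustration), the depth is a one-line recursion:

```python
def depth(tree):
    """Number of edges on the longest path from the root down to a leaf."""
    return 1 + max(map(depth, tree)) if tree else 0

assert depth([]) == 0              # a single vertex
assert depth([[], []]) == 1        # all leaves adjacent to the root
assert depth([[[], []]]) == 2
```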

For any rooted tree, an "immediate subtree" is the subtree consisting of a child \(x\) of the root together with all of \(x\)'s descendants.

Now suppose we have a tree \(T\) of depth at most \(2.\) Its immediate subtrees have depth at most \(1.\) For every integer \(r \geq 0,\) let \(T_r\) denote the tree of depth at most \(1\) in which the root has \(r\) children. (The depth of \(T_0\) is \(0.\)) To describe our tree \(T,\) we just have to describe its immediate subtrees; in fact, if \(a_r\) is the number of immediate subtrees isomorphic to \(T_r,\) then \(T\) is determined completely by the tuple \((a_0, a_1, a_2, \ldots).\)

Let \(\mathcal{I}_2\) be the set of tuples \((a_0, a_1, a_2, \ldots)\) of nonnegative integers, such that all but finitely many \(a_r\) are zero. We impose the lexicographic total ordering on \(\mathcal{I}_2:\) given \(a = (a_0, a_1, a_2, \ldots)\) and \(b = (b_0, b_1, b_2, \ldots)\) in \(\mathcal{I}_2,\) take the largest \(r\) such that \(a_r \neq b_r,\) and define \(a > b\) if and only if \(a_r > b_r.\)
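Here is a quick Python sketch of this ordering (the helper name `rlex_less` is mine). Since all but finitely many entries are zero, two tuples can be padded to a common length and then compared starting from the rightmost entry:

```python
def rlex_less(a, b):
    """a < b in the ordering on I_2: pad with zeros to equal length,
    then decide at the largest index where the tuples differ."""
    n = max(len(a), len(b))
    a = a + (0,) * (n - len(a))
    b = b + (0,) * (n - len(b))
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return x < y
    return False  # the tuples are equal

# (5, 1) < (0, 2): the largest differing index is 1, where 1 < 2;
# the larger entry at index 0 is irrelevant.
assert rlex_less((5, 1), (0, 2))
assert not rlex_less((0, 2), (5, 1))
```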

We have described a bijection between trees of depth at most \(2\) and tuples in \(\mathcal{I}_2.\) The theorem (for depth \(2\)) now follows from the following two lemmas, which we leave as exercises for the reader.

**Lemma.** Any legal move changes a tuple \(a \in \mathcal{I}_2\) to another tuple \(b\) such that \(b < a\) in our total ordering.

**Lemma.** The set \(\mathcal{I}_2\) is well-ordered: it contains no infinite descending chain.
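The first lemma can be checked numerically. In the tuple encoding, cutting a leaf that hangs directly off the root just decrements \(a_0,\) while cutting a leaf of an immediate subtree of type \(T_r\) (for \(r \geq 1\)) turns that subtree into a \(T_{r-1}\) and adds \(N\) fresh copies of it, so \(a_r\) drops by \(1\) and \(a_{r-1}\) gains \(N + 1.\) A sketch (function names mine, tuples of a fixed length for simplicity):

```python
def rlex_less(a, b):
    """a < b: decide at the largest index where the tuples differ."""
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            return x < y
    return False

def move(a, r, N):
    """Cut a leaf belonging to an immediate subtree of type T_r.
    r == 0: the leaf hangs off the root, nothing regrows.
    r >= 1: one T_r is replaced by N + 1 copies of T_{r-1}."""
    b = list(a)
    b[r] -= 1
    if r >= 1:
        b[r - 1] += N + 1
    return tuple(b)

a = (0, 0, 3)                  # three immediate subtrees of type T_2
b = move(a, 2, N=1000)         # becomes (0, 1001, 2)
assert rlex_less(b, a)         # the tuple went down, however large N is
assert rlex_less(move(b, 1, N=10**6), b)
```

However enormous \(N\) is, the entry at the largest affected index strictly drops, which is all the lexicographic ordering cares about.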

We will construct recursively a well-ordered set \(\mathcal{I}_d\) which classifies the isomorphism types of trees of depth at most \(d.\) We build \(\mathcal{I}_{d+1}\) as a set of tuples of nonnegative integers, indexed by \(\mathcal{I}_d\) instead of by \(\mathbb{N}.\) Precisely, \(\mathcal{I}_{d+1}\) is the set of functions \(a: \mathcal{I}_d \rightarrow \mathbb{N}\) (we notate such a function \(i \mapsto a_i\)) such that \(a_i = 0\) for all but finitely many \(i \in \mathcal{I}_d.\)

Again, we impose on \(\mathcal{I}_{d+1}\) the lexicographic total ordering: given any \(a, b \in \mathcal{I}_{d+1},\) choose \(i \in \mathcal{I}_d\) maximal such that \(a(i) \neq b(i);\) and take \(a > b\) if and only if \(a(i) > b(i).\) (Only finitely many \(i\) satisfy \(a(i) \neq b(i),\) so there is no difficulty in choosing the maximal one.)

Just like before, there is a bijection between elements of \(\mathcal{I}_{d+1}\) and isomorphism types of trees of depth at most \(d+1:\) given a tree \(T,\) produce the tuple which records, for each \(i \in \mathcal{I}_d,\) the number of immediate subtrees of \(T\) whose isomorphism type is classified by \(i.\)
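The recursive classification is easy to express in code. The following Python sketch (my own encoding, not from the original discussion) computes a canonical key for a rooted tree; one can check that Python's left-to-right comparison of these sorted-largest-first tuples agrees with the lexicographic order defined above, since the largest child type at which two trees differ decides, and extra copies of an otherwise-equal child make a tree bigger.

```python
def key(tree):
    """Canonical form of a rooted tree, as a nested tuple:
    the keys of the children, sorted largest-first under this
    same (recursively defined) ordering."""
    return tuple(sorted((key(c) for c in tree), reverse=True))

# Root with one T_2 child; cutting one of its leaves with N = 3
# turns the single T_2 into four copies of T_1.
before = [[[], []]]
after = [[[]], [[]], [[]], [[]]]
assert key(after) < key(before)
```

This nesting of tuples inside tuples is exactly the recursion \(\mathcal{I}_{d+1} =\) (finitely supported tuples indexed by \(\mathcal{I}_d\)) made concrete.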

The result now follows, by induction on the depth, from the following two lemmas. The first we again leave as an exercise; in the proof of the second we may assume inductively that \(\mathcal{I}_d\) is well-ordered.

**Lemma.** Any legal move changes a tuple \(a \in \mathcal{I}_{d+1}\) to another tuple \(b\) such that \(b < a\) in our total ordering.

**Lemma.** The set \(\mathcal{I}_{d+1}\) is well-ordered: it contains no infinite descending chain.

**Proof.** Suppose \(a^{(1)} \geq a^{(2)} \geq a^{(3)} \geq \cdots\) is a nonincreasing sequence in \(\mathcal{I}_{d+1}.\) We will show that the sequence eventually stabilizes: there exists \(N\) such that \(a^{(n)} = a^{(N)}\) for all \(n \geq N.\) (In particular, there can be no strictly descending chain.)

(A remark about notation: \(a^{(N)} \in \mathcal{I}_{d+1}\) is one tuple in our chain of tuples; for \(i \in \mathcal{I}_d,\) the \(i\)-th entry of this tuple is \(a^{(N)}_i.\))

Let \(j \in \mathcal{I}_d\) be an index; we say that the sequence \((a^{(n)})\) *stabilizes to the right of index \(j\)* if there exists a positive integer \(N\) such that

$$a^{(n)}_i = a^{(N)}_i$$

whenever \(i \geq j\) and \(n > N.\)

Such \(j\) exist: take any \(j\) larger than every index at which \(a^{(1)}\) is nonzero (there are only finitely many such indices, and \(\mathcal{I}_d\) has no largest element); the lexicographic ordering then forces \(a^{(n)}_i = 0\) whenever \(i \geq j,\) for every \(n.\) Since \(\mathcal{I}_d\) is well-ordered, we can choose \(j\) minimal such that our sequence stabilizes to the right of \(j;\) also choose \(N\) as above, so that \(a^{(n)}_i = a^{(N)}_i\) whenever \(i \geq j\) and \(n > N.\)

The tuple \(a^{(N)}\) has only finitely many nonzero entries. If \(a^{(N)}_k = 0\) for all \(k < j\), then the sequence stabilizes at \(a^{(N)}\) and we are done. Otherwise, choose \(k\) maximal such that \(k < j\) and \(a^{(N)}_k > 0.\) (There are only finitely many \(k\) with \(a^{(N)}_k > 0,\) so again there is no harm in choosing the maximum.)

I claim that in fact the sequence stabilizes to the right of index \(k\), contradicting our choice of \(j.\) Consider an index \(i \geq k.\) If \(i \geq j\) then we know \(a^{(n)}_i = a^{(N)}_i\) for \(n > N.\) If \(k < i < j\) then \(a^{(N)}_i = 0;\) and by the lexicographic ordering on \(\mathcal{I}_{d+1}\) we know that \(a^{(n)}_i = 0\) for \(n > N\) as well. So we only need to worry about \(i = k.\) Again by lexicographic ordering, the sequence

$$a^{(N)}_k, a^{(N+1)}_k, a^{(N+2)}_k, \ldots$$

is a nonincreasing sequence of natural numbers. Any such sequence in \(\mathbb{N}\) eventually stabilizes. So our sequence \((a^{(n)})\) stabilizes to the right of \(k\) as well, contradicting our choice of \(j.\) \(\blacksquare\)

In closing I'll just mention that the sets \(\mathcal{I}_d\) are examples of ordinals; interpreting them as such gives rise to the usual analysis mentioned at the very beginning of this post.