
Formalizing Ordinal Partition Relations Using Isabelle/HOL


ABSTRACT

This is an overview of a formalization project in the proof assistant Isabelle/HOL of a number of research results in infinitary combinatorics and set theory (more specifically in ordinal partition relations) by Erdős–Milner, Specker, Larson and Nash-Williams, leading to Larson’s proof of the unpublished result by E.C. Milner asserting that for all m ∈ ℕ, ω^ω → (ω^ω, m). This material has been recently formalised by Paulson and is available on the Archive of Formal Proofs; here we discuss some of the most challenging aspects of the formalization process. This project is also a demonstration of working with Zermelo–Fraenkel set theory in higher-order logic.


1 Introduction

Higher-order logic theorem proving was originally intended for proving the correctness of digital circuit designs. The focus at first was bits and integers. But in 1994, a bug in the Pentium floating point division unit cost Intel nearly half a billion dollars [Citation32], and verification tools suddenly needed a theory of the real numbers. The formal analysis of more advanced numerical algorithms, e.g., for the exponential function [Citation20], required derivatives, limits, series and other mathematical topics. Later, the desire to verify probabilistic algorithms required a treatment of probability, and therefore of Lebesgue measure and all of its prerequisites [Citation23]. The more verification engineers wanted to deal with real-world phenomena, the more mathematics they needed to formalise. And it was the spirit of the field to reduce everything to a minimal foundational core rather than working on an axiomatic basis. So a great body of mathematics came to be formalised in higher-order logic, including advanced results such as the central limit theorem [Citation1], the prime number theorem [Citation21], the Jordan curve theorem [Citation19] and the proof of the Kepler conjecture [Citation18].

The material we have formalized for this case study [Citation12, Citation28, Citation45] comes from the second half of the 20th century and concerns a field so far unexamined by the formalization community: infinitary combinatorics and, more specifically, ordinal partition relations. This field deals with generalizations of Ramsey’s theorem to transfinite ordinals. It was of special interest to the legendary Paul Erdős, and it is particularly lacking in intuition, to such an extent that even he could make many errors [Citation13]. Moreover, because our example requires ordinals such as ω^ω, it is a demonstration of working with Zermelo–Fraenkel set theory in higher-order logic.

From a mathematical point of view, this work has the ambitious aim to be more than a case study of formalization. We hope that it is a first step in a programme of finding ordinal partition relations by new methods, using the techniques developed in the formalization. The reader familiar with Ramsey theory on cardinal numbers, including the encyclopaedic book by Paul Erdős et al. [Citation11] or more recent works on the use of partition relations in topology and other celebrated applications, e.g., by Stevo Todorčević [Citation43–45], might be doubtful about the need for new methods in discovering partition relations. But none of the powerful set-theoretic methods for studying cardinal partition relations apply to ordinal partition relations. The difficulty is that in addition to the requirement on the monochromatic set to have a given size, which we would ask of a cardinal partition relation, the ordinal case also has the requirement of preserving the order structure through having a fixed order type. In fact, the difference between the order structure versus an unstructured set shows up already in the case of addition: the addition of infinite cardinals is trivial, whereas with ordinals we do not even have 1+α=α+1. Ordinal partition relations are the first instance of structural Ramsey theory, which is a growing and complex area of combinatorics.

The fact is that everything we know about ordinal partition relations—which is short enough to be reviewed in our Section 2—has been proven painfully and laboriously. Ingenious constructions by several authors since the 1950s have chipped the edges off the most important problem in the subject, which is to characterise the countable ordinals α such that α → (α, m) for a given natural number m (for the notation see Section 2). The simplest nontrivial case of m = 3 is the subject of a $1000 open problem of Erdős, posed back in 1987; see Section 2.

We may ask why it is that modern set theory is so silent on the subject of ordinal partitions. Perhaps it is the case of the chicken and the egg. In the case of cardinal numbers, whose study has been at the heart of almost everything done in set theory since the time of Cantor, one of the first important advances was exactly the understanding of partition relations. They are the backbone of infinite combinatorics: many theorems in set theory can be formulated in terms of partitions, an attitude well supported by the work of Todorčević cited above. So to better understand the ordinals, we first have to understand their partition properties, rather than expecting that powerful general methods will be developed first and then yield an understanding of ordinal partitions. The most interesting problems about ordinal partitions are about countable ordinals, while modern combinatorial set theory really only starts at the first uncountable cardinal. Even the popular method of using countable elementary submodels is only useful if one applies it to cardinals, since the intersection of a countable elementary submodel M with the ordinals is a countable highly indecomposable limit δ, such that δ is actually a subset of M and δ = ω_1 ∩ M. So M reflects everything about the ordinals below δ and nothing about the ones above, not giving any room for the reflection arguments that elementary submodels are used for. Consequently, it cannot be used to argue about countable ordinals in the way that it can be used to argue about ω_1.

The notation in the paper is quite standard, and every new notion is explained in the relevant part of the paper. Throughout we use the identification of the ordinal ω with the set of natural numbers and of each natural number n > 0 with the set of its predecessors {0, 1, …, n−1}.

The plan of this paper is as follows: in the next section we give a short but comprehensive introduction to ordinal partition relations; in Section 3, we present the material formalised, including brief sketches of Larson’s proofs of partition theorems for ω^2 and ω^ω; in Section 4, we give a more detailed exposition on the Nash-Williams partition theorem including two different proofs; in Section 5, we present an introduction to the Isabelle/HOL theorem prover; Section 6 presents our formalization of the Nash-Williams theorem; Section 7 discusses the formalization of Larson’s proofs; finally, Section 8 summarizes what we learned from this project.

2 Ordinal partition relations

As a side lemma in his work on decidability, Frank Ramsey [Citation39] in 1929 proved what we now call Ramsey’s theorem. It states that for any two natural numbers m and n, if we divide the unordered m-tuples of an infinite set A into n pieces, there will be an infinite subset B of A all of whose unordered m-tuples are in the same piece of the division. This theorem has since been generalized in many directions and Ramsey theory now forms an important part of combinatorics, both in the finite and in the infinite case. We shall only discuss Ramsey theory of cardinal and ordinal numbers, although a vast theory exists extending Ramsey theory to various structures; see, for example, the work of Todorčević [Citation44, Citation45]. Also, we shall only be interested in partitions of pairs of ordinals, as this case already proves to be quite challenging. In this section, we review what is known about such ordinal partitions.

It is convenient to introduce the notation coming from Erdős and his school, for example in combinatorial set theory [Citation11]. Writing

(1)   α → (β, γ)

means that for every partition of the set [α]^2 of unordered pairs of elements of α into two parts (called colors, say 0 and 1), there is either a subset B of α of order type β whose pairs are all colored by 0, or a subset C of α of order type γ whose pairs are all colored by 1. Such a B is said to be 0-monochromatic, while C is 1-monochromatic. The notation implies that the situation is trivial unless β, γ ≤ α, as we shall assume. Note also that for all ordinals, the following rules of monotonicity hold: if α → (β, γ) then α′ → (β′, γ′) for α′ ≥ α, β′ ≤ β, γ′ ≤ γ.

Some authors, such as Larson [Citation28], use the notation α → (β, γ)^2 to emphasize that it is pairs that are colored, but since we shall only ever work with pairs, we omit the superscript. The negation of α → (β, γ) is written α ↛ (β, γ).

It turns out that finding the triples α, β, γ for which the relation (1) holds is highly non-trivial. It is even non-trivial when all of the ordinals α, β, γ are countable, which is the case to which we shall restrict our attention. On the other hand, for α ≤ ω the situation is already understood by the classical Ramsey theorem, so we shall assume α > ω. We should also assume that γ is finite [Citation17, p. 177] (note that all arithmetic in the paper is ordinal arithmetic):

Observation 2.1. For any ordinal α > ω we have α ↛ (|α|+1, ω).

By definition we have α → (α, 2) for any α, so the first nontrivial case is the following question, to which Erdős attached a prize of $1000 in 1987 [Citation9]:

Erdős’s problem. Characterize the set of all countable ordinals α such that α → (α, 3).

This problem is still very much open. In fact, the more general problem of characterizing the countable ordinals α and natural numbers m such that α → (α, m) holds was asked already by Erdős and Richard Rado [Citation10] in 1956, and it was the slow progress on it that made Erdős reiterate the simplest case of this problem in his 1987 problem list [Citation9].

The first progress toward solving Erdős’s problem came from Ernst Specker [Citation42], whose results were continued by Chen-Chung Chang [Citation5]. Chang gave a very involved proof of ω^ω → (ω^ω, 3) and gained $250 from Erdős. In an unpublished manuscript, Eric Milner improved Chang’s result to say that ω^ω → (ω^ω, m) for all natural numbers m. The main proof that we have formalized is Jean Larson’s proof of Milner’s result. Comparing her paper [Citation28] with earlier proofs explains why the paper is called “A short proof …,” but it is not a short proof and formalizing it was a challenge.

It turns out that the behavior of countable ordinals with respect to partition relations is influenced by their Cantor Normal Form, so we take the opportunity to remind the reader of that concept.

Theorem 2.1

(Cantor Normal Form). Every ordinal number can be written in a unique way as an ordinal sum of the form

ω^{β_0}·m_0 + ω^{β_1}·m_1 + ⋯ + ω^{β_n}·m_n,

where n is a non-negative integer, the m_i for i ≤ n are positive integers, and β_0 > β_1 > ⋯ > β_n ≥ 0 are ordinals.
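To make the Cantor Normal Form concrete for the ordinals that matter most in this paper (those below ω^ω, whose exponents are finite), here is a small illustrative Python sketch; it is ours, not part of the Isabelle development, and the representation choice is an assumption made only for the example. Ordinal addition on this representation also exhibits the failure of 1 + α = α + 1 mentioned in the introduction.

# Illustrative sketch (not part of the formal development): ordinals below
# omega^omega, represented by their Cantor Normal Form as a list of
# (exponent, coefficient) pairs with strictly decreasing exponents and
# positive coefficients; e.g. omega^2*3 + 4 is [(2, 3), (0, 4)].

def cnf_add(a, b):
    """Ordinal addition on CNF representations: summands of `a` whose
    exponent is below the leading exponent of `b` are absorbed by `b`."""
    if not b:
        return a
    lead = b[0][0]
    kept = [(e, c) for (e, c) in a if e > lead]
    same = [c for (e, c) in a if e == lead]
    if same:                       # merge equal leading exponents
        return kept + [(lead, same[0] + b[0][1])] + list(b[1:])
    return kept + list(b)

one, omega = [(0, 1)], [(1, 1)]
print(cnf_add(one, omega))                    # [(1, 1)]          : 1 + omega = omega
print(cnf_add(omega, one))                    # [(1, 1), (0, 1)]  : omega + 1, strictly bigger
print(cnf_add([(2, 3), (0, 4)], [(1, 2)]))    # omega^2*3 + omega*2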

The state of the art regarding the known positive instances of Erdős’s problem is the following theorem of Rene Schipperus from his 1999 Ph.D. thesis [Citation40], published many years later, in 2010, as a journal article [Citation41]:

Theorem 2.2

(Schipperus 1999). Suppose that β is a countable ordinal whose Cantor Normal Form has at most two summands. Then ω^{ω^β} → (ω^{ω^β}, 3).

The delay between the thesis and the paper is indicative of the difficulty of the proof and the process of checking its correctness. A sketch of Schipperus’ proof, divided in seven subsections, is given on pages 188–209 of the excellent survey article [Citation17] by András Hajnal and Larson. The reason that Schipperus focused on ordinals of the type ω^{ω^β} is that if the ordinal α is not a power of ω then it cannot satisfy α → (α, 3), as shown in Observation 2.2. Hence, only the powers of ω are of interest. Fred Galvin showed [Citation14] that for an ordinal of the form α = ω^β where β ≥ 2 is not itself a power of ω, we have α ↛ (α, 3). Hence Schipperus’ choice of ordinals. Still open is the case of α = ω^{ω^β} where β has at least three summands in its Cantor Normal Form.

Observation 2.2. Suppose that α is an ordinal which is not a power of ω. Then α ↛ (α, 3).

Proof.

It follows from the Cantor Normal Form that any α which is not a power of ω is additively decomposable: there exist ordinals β, γ < α such that α = β + γ. Fixing such β and γ, we define c on [α]^2 by letting c(x, y) = 0 if either x, y < β or x, y ≥ β. Otherwise, we let c(x, y) = 1.

Then it suffices to note that any 0-monochromatic subset of α is either contained in β or in [β, β+γ) and hence has order type at most max(β, γ), which is strictly less than α. On the other hand, if we have distinct x, y, z < α, then at least two of them will be < β or at least two of them will be ≥ β; in either case, those two form a pair that c maps to 0. Hence the set {x, y, z} is not 1-monochromatic. □ (2.2)
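The coloring in this proof is easy to experiment with. The following illustrative Python sketch (ours, not from the paper) takes the smallest decomposable case α = ω + ω, coding points as pairs (i, j) ≈ ω·i + j with i ∈ {0, 1}, and checks on a finite sample that no three points are 1-monochromatic, while every 0-monochromatic set stays inside one half.

# Illustrative finite check of the coloring from Observation 2.2 for
# alpha = omega + omega (so beta = gamma = omega). Points are coded as
# pairs (i, j) ~ omega*i + j with i in {0, 1}; "below beta" means i = 0.
from itertools import combinations

def c(x, y):
    return 0 if (x[0] == 0) == (y[0] == 0) else 1   # same half -> 0, else 1

points = [(i, j) for i in range(2) for j in range(20)]   # finite sample

# Among any three points, two lie in the same half, so some pair gets
# color 0: no 3-element set is 1-monochromatic.
assert all(any(c(x, y) == 0 for x, y in combinations(tri, 2))
           for tri in combinations(points, 3))

# A 0-monochromatic set is contained in one half, hence (in the full
# ordinal) has order type at most max(beta, gamma) < alpha.
print("no 1-monochromatic triangle found")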

In this section, we have mostly concentrated on the result we formalized and Erdős’s problem. Information on some additional instances of α → (α, m) for m > 3 can be found in the Hajnal-Larson paper [Citation17].

3 Theorems formalized

The ultimate objective of the project was to formalize Larson’s proof [Citation28] of the following unpublished result by E. C. Milner:

Theorem 3.1.

For all m ∈ ℕ, ω^ω → (ω^ω, m).

While working toward that objective, many set-theoretic prerequisites had to be formalized, notably Cantor Normal Form, indecomposable ordinals and many elementary properties of order types. Her paper contains, as a simpler example of the methods she employed, a proof of Specker’s theorem [Citation42]:

Theorem 3.2

(Specker). For all m < ω, ω^2 → (ω^2, m).

Although not strictly necessary, the proof of Theorem 3.2 was formalized as a warmup exercise, as it is structured similarly to the proof of Theorem 3.1. The project also required the formalization of a short but difficult (and error-filled) proof by Erdős and Milner [Citation13]:

Theorem 3.3

(Erdős-Milner). For all n < ω and for all α < ω_1, ω^{1+α·n} → (ω^{1+α}, 2^n).

The last significant side project necessitated by Larson’s proof of Theorem 3.1 was to formalize the Nash-Williams partition theorem, as presented by Todorčević [Citation45]. The main objects in Nash-Williams’ theorem are families of finite subsets of ω. We introduce some notation and definitions regarding such sets.

Notation 3.1. (1) Let A be any subset of ω. We write [A]^{<ω} for the set of all finite subsets of A and [A]^ω for the set of all infinite subsets of A. For an integer k ≥ 0, let [A]^k be the set of all the k-element subsets of A.

(2) We identify sets in [ω]^{<ω} with their increasing enumerations, and hence the set [ω]^{<ω} becomes a subset of the set ^{<ω}ω of finite sequences in ω. Therefore, we can consider the relation of being an initial segment on P(ω), writing s ⊑ t when s is an initial segment of t.

We shall be interested in subsets F of [ω]^{<ω} that are dense in the sense that every element of [ω]^ω has an element of F as an initial segment. In particular, we shall consider such sets that are minimal, meaning that we cannot take away an element of F and still satisfy the density requirement. This is the same as to say that if we have an element of F then none of its proper initial segments are in F. Larson [Citation28] calls the sets given by the latter requirement thin, and Todorčević [Citation45, Def. 1.1.2 (2)] calls them Nash-Williams. Here is the formal definition.

Definition 3.2.

(Thin families) A thin or Nash-Williams family on an infinite set A ⊆ ω is a subset F of [A]^{<ω} such that for every s, t ∈ F, if s ⊑ t then s = t. A set A is thin if for all s, t ∈ A, s is not a proper initial segment of t.

Theorem 3.4

(Nash-Williams). For any infinite set M ⊆ ω, for any thin set A, and for any function h : {s ∈ A : s ⊆ M} → {0, 1}, there exist i ∈ {0, 1} and an infinite set N ⊆ M so that h({s ∈ A : s ⊆ N}) ⊆ {i}.

A more detailed analysis of the formalization of the Nash-Williams theorem will be given in the following section. This theorem is a generalization of Ramsey’s theorem [Citation39]: Footnote1

Theorem 3.5

(Ramsey). For every nonzero p < ω, every infinite set M ⊆ ω and every function h : [M]^p → {0, 1}, there exist i ∈ {0, 1} and an infinite set N ⊆ M so that h([N]^p) = {i}.

For simplicity, both theorems are presented in their 2-color versions. We obtain Ramsey’s theorem from Nash-Williams simply by noting that the set [M]^p is thin. While Ramsey’s theorem (Theorem 3.5) was used in the proof of Theorem 3.2, the Nash-Williams partition theorem (Theorem 3.4) was used in a similar fashion in the proof of Theorem 3.1.
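The key point of the reduction, that [M]^p is thin, is easy to check mechanically on a finite fragment; the following small Python sketch (ours, purely illustrative) does so for p = 3, identifying each p-element set with its increasing enumeration.

# Illustrative check that [M]^p is thin: no p-element set is a proper
# initial segment of another once both are identified with their
# increasing enumerations (they all have the same length p).
from itertools import combinations

def initial_segment(s, t):
    return t[:len(s)] == s

p, M = 3, range(8)
family = list(combinations(M, p))          # tuples are already increasing
assert not any(initial_segment(s, t) and s != t
               for s in family for t in family)
print("[M]^3 restricted to {0,...,7} is thin")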

We give brief sketches of the proofs of Theorems 3.1 and 3.2. For details, the reader may refer to Larson’s paper [Citation28] or to the formalized versions of the proofs, where every step is made explicit, in the entries by Paulson at the Archive of Formal Proofs [Citation36, Citation37].

3.1 Sketch of Larson’s proof of Specker’s Theorem

It is sufficient to prove Theorem 3.2 for functions f : [U]^2 → {0, 1} where U = {(a, b) : a < b < ω} is ordered lexicographically and has order type ω^2. The following is Larson’s Definition 2.2 [Citation28].

Definition 3.3

(Interaction Scheme). A pair A = {(a, b), (c, d)} of elements from U with a ≤ c is of form 0 if a < b < c < d, form 1 if a < c < b < d, form 2 if a < c < d < b, and form 3 if a = c and b ≠ d. If A has one of these forms, then the interaction scheme of A is defined by i(A) = {a, b, c, d}.
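For readers who want to experiment with Definition 3.3, here is an illustrative Python sketch (ours, not Larson’s definition nor the Isabelle code) that classifies a pair of elements of U and returns its form together with the interaction scheme.

# Illustrative sketch of Definition 3.3: classify a pair {(a,b), (c,d)}
# of elements of U = {(a,b) : a < b < omega} and return its form and the
# interaction scheme {a, b, c, d}; returns None if the pair has no form.

def form_and_scheme(p, q):
    (a, b), (c, d) = sorted([p, q])        # lexicographic order ensures a <= c
    if a < b < c < d:
        k = 0
    elif a < c < b < d:
        k = 1
    elif a < c < d < b:
        k = 2
    elif a == c and b != d:
        k = 3
    else:
        return None                        # e.g. b = c: none of the four forms
    return k, {a, b, c, d}

print(form_and_scheme((1, 2), (3, 4)))     # (0, {1, 2, 3, 4})
print(form_and_scheme((1, 3), (2, 4)))     # (1, {1, 2, 3, 4})
print(form_and_scheme((1, 4), (2, 3)))     # (2, {1, 2, 3, 4})
print(form_and_scheme((1, 2), (1, 5)))     # (3, {1, 2, 5})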

Let m < ω and f : [U]^2 → {0, 1} be given so that there is no m-element set M for which f([M]^2) = {1}. It is enough to find a set X ⊆ U order isomorphic to ω^2 for which f([X]^2) = {0}.

It follows from Ramsey’s theorem (Theorem 3.5) that we can obtain an infinite set N ⊆ ω and j_0, j_1, j_2, j_3 ∈ {0, 1} so that for any k < 4 and any pair {x, y} ⊆ U of form k with i({x, y}) ⊆ N, we have f({x, y}) = j_k. Then it is shown that for N and each k = 0, 1, 2, 3 we can construct four m-element sets M_0, M_1, M_2, M_3 such that for any k < 4 and any pair {x, y} ⊆ M_k, {x, y} has form k and i({x, y}) ⊆ N, so f({x, y}) = j_k. As we had assumed that there is no m-element set M for which f([M]^2) = {1}, it follows that j_0 = j_1 = j_2 = j_3 = 0. It is then shown that for the infinite set N we may obtain a set X ⊆ U which is order isomorphic to ω^2 so that for any pair {x, y} ⊆ X there is a k < 4 so that {x, y} has form k and i({x, y}) ⊆ N. Therefore, f({x, y}) = j_0 = j_1 = j_2 = j_3 = 0.

Thus we have shown that f([X]^2) = {0}.

3.2 Sketch of Larson’s proof of Milner’s theorem

The reader will notice that the proof of Theorem 3.1 follows a pattern similar to the one above. For each n < ω, define

W(n) = {(a_0, a_1, …, a_{n−1}) : a_0 < a_1 < ⋯ < a_{n−1} < ω},

ordered lexicographically. W(n) is thus order isomorphic to ω^n. Let W = W(0) ∪ W(1) ∪ ⋯ be ordered first by length of sequence and then lexicographically, so that W is order isomorphic to ω^ω. It now suffices to prove the theorem for functions f : [W]^2 → {0, 1}.

Some notational conventions:

  • s and t denote increasing finite sequences of elements of ω, and we write s < t to mean every element of s is less than every element of t.

  • s*t denotes the concatenation of two finite sequences.

  • |s| denotes the length of s.

  • n_k is the kth term in the enumeration of N in increasing order.

We now present Larson’s Definition 3.5 [Citation28].

Definition 3.4

(Interaction Scheme). A pair {x, y} ⊆ W is of form 0 if |x| = |y|. Let k < ω with k > 0. A pair {x, y} ⊆ W with |x| < |y| is of form 2k−1 (form 2k) if there are non-empty sequences a_1, a_2, …, a_k (, a_{k+1}), b_1, b_2, …, b_k and c and d such that

  1. x = a_1*a_2*⋯*a_k (*a_{k+1}),

  2. y = b_1*b_2*⋯*b_k,

  3. c = (|a_1|, |a_1|+|a_2|, …, |a_1|+|a_2|+⋯+|a_k| (+|a_{k+1}|)),

  4. d = (|b_1|, |b_1|+|b_2|, …, |b_1|+|b_2|+⋯+|b_k|),

  5. c < a_1 < d < b_1 < a_2 < b_2 < ⋯ < a_k < b_k (< a_{k+1}).

If {x, y} is of form 2k−1 (form 2k) and a_1, a_2, …, a_k (, a_{k+1}), b_1, b_2, …, b_k, c and d are as above, then we call

i({x, y}) = c*a_1*d*b_1*a_2*b_2*⋯*a_k*b_k (*a_{k+1})

the interaction scheme of {x,y}.
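The bookkeeping in this definition is easier to digest with a concrete instance. The following illustrative Python sketch (ours, not part of the formalization; the helper names are hypothetical) assembles x, y and the interaction scheme from given block decompositions.

# Illustrative sketch of Definition 3.4: from the blocks a_1,...,a_k(,a_{k+1})
# of x and b_1,...,b_k of y, build x, y, the length sequences c and d, and
# i({x,y}) = c*a_1*d*b_1*a_2*b_2*...*a_k*b_k(*a_{k+1}).

def acc_lens(blocks):
    out, acc = [], 0
    for blk in blocks:
        acc += len(blk)
        out.append(acc)
    return out

def interaction_scheme(a_blocks, b_blocks):
    x = [v for blk in a_blocks for v in blk]
    y = [v for blk in b_blocks for v in blk]
    c, d = acc_lens(a_blocks), acc_lens(b_blocks)
    zs = c + a_blocks[0] + d + b_blocks[0]
    for a_blk, b_blk in zip(a_blocks[1:], b_blocks[1:]):
        zs += a_blk + b_blk
    if len(a_blocks) == len(b_blocks) + 1:   # even form 2k: trailing a_{k+1}
        zs += a_blocks[-1]
    return x, y, zs

# A pair of form 3 (= 2k-1 with k = 2): condition 5 forces the pattern
# c < a_1 < d < b_1 < a_2 < b_2, which holds for these blocks.
x, y, zs = interaction_scheme([[4, 5], [20]], [[9, 10, 11, 12, 13, 14], [25, 30]])
print(x)    # [4, 5, 20]
print(y)    # [9, 10, 11, 12, 13, 14, 25, 30]
print(zs)   # [2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 20, 25, 30]
assert zs == sorted(zs)   # the scheme is an increasing sequence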

Let m < ω. The theorem is trivially true for m = 0 and m = 1, so we assume that m > 1. In a similar fashion as before, we assume that f : [W]^2 → {0, 1} is given so that there is no m-element set M for which f([M]^2) = {1}. So, to prove the theorem, we will find a set X of type ω^ω for which f([X]^2) = {0}.

To this end, from the Erdős-Milner theorem (Theorem 3.3), we infer that for every k, n < ω with n > 0, ω^{n·k} → (ω^n, k); then, considering f restricted to W(n·m), we obtain a set W′(n) order isomorphic to ω^n for which f([W′(n)]^2) = {0}. Let W′ := W′(0) ∪ W′(1) ∪ W′(2) ∪ ⋯. It is order isomorphic to ω^ω and f has value zero on pairs of sequences in W′ of the same length.

Larson [Citation28, p. 134] remarks “Without loss of generality, we may assume that W′ is our original set W. Thus to prove the theorem, we must find a set X ⊆ W of type ω^ω for which f has value zero on pairs of sequences of different lengths.” This identification of W′ and W is possible because there is an order isomorphism between them that preserves lengths. Strictly speaking, the bulk of the elaborate construction is done using W, then finally mapped over to W′; this can be seen in the formal version [Citation37].

By applying the Nash-Williams partition theorem (Theorem 3.4) to f, we can obtain an infinite set N and a sequence {j_k : k < ω} so that for any k < ω with k > 0 and any pair {x, y} of form k with (n_k) < i({x, y}) ⊆ N, f({x, y}) = j_k. Then it can be shown that for each k < ω with k > 0 we obtain an m-element set M_k, so that for any {x, y} ⊆ M_k we have f({x, y}) = j_k. Thus, for any k < ω with k > 0 it follows that j_k = 0. It is then shown that we may obtain a set X ⊆ W order isomorphic to ω^ω, so that for each {x, y} ⊆ X there is an l < ω for which {x, y} has form l and, if l > 0, then (n_l) < i({x, y}) ⊆ N. Thus, for pairs {x, y} ⊆ X which are not of form 0, we have f({x, y}) = j_l = 0 for some l. In the case l = 0, for any pair of form 0 we have by assumption f({x, y}) = 0. So we have shown that f([X]^2) = {0}.

4 A proof of the Nash-Williams theorem

In this section, we shall give a proof of a fundamental theorem due to Crispin Nash-Williams [Citation31] which we stated as Theorem 3.4 above. This result was discovered while studying the notion of well quasi orders (wqo) P, notably distinguishing those that have the property that for every countable ordinal α, the set P^α of all sequences of length α from P is wqo when ordered by the embeddability relation. Such orders are called bqo or better quasi orders [Citation31]. Neither wqo nor bqo are relevant here, but the theorem proved by Nash-Williams is of use in the study of ordinal partition relations, as well as in many other contexts, for example reverse mathematics [Citation27, Citation30]. In particular, the theorem was used by Larson in the article we formalized [Citation28].

Nash-Williams’ theorem has seen many different proofs. While above we have steered away from presenting full proofs about ordinal partition relations as they are too long, we do present a proof of Nash-Williams’ theorem, to give the reader the flavour of the way that proofs are constructed in this subject. Many other proofs of the theorem are known: Alberto Marcone gives one [Citation30] and refers to several others, including one by Stephen G. Simpson using descriptive set theory and the notion of bad arrays, which is perhaps the most popular proof these days (it originates in the methods of Fred Galvin and Karel Prikry [Citation15] and Richard Laver [Citation29]), and a proof using well-founded trees, which can be found in the survey paper [Citation4] by Raphaël Carroy and Yann Pequignot. Paulson formalized Nash-Williams’ theorem as described in Section 6. The formalization [Citation36] corresponds to Todorčević’s [Citation45] presentation of the original Nash-Williams proof, which is the proof we give; it uses a notion of combinatorial forcing due to Galvin and Prikry [Citation15]. The proof of Nash-Williams’ theorem using combinatorial forcing appeared as early as 1985 in a note by Ian Hodkinson on a Ph.D. course given by Wilfrid Hodges [Citation22] (which the authors have specifically asked not to use as a primary reference), but in fact Galvin and Prikry [Citation15] prove a stronger theorem by the same method.

The theorem basically says that if we divide a thin set on a set A of natural numbers into a finite number of pieces, then one of them will contain a thin set on some infinite subset A′ of A. This is the analogue of the version of the pigeonhole principle which says that if an infinite set is divided into a finite number of pieces, then one of the pieces is infinite. Notice that Theorem 3.4 yields this by applying the theorem finitely many times.

Many results in infinite combinatorics can be seen as instances of the fact that one can do set-theoretic forcing over a countable family of dense sets without changing the underlying universe, as proved by Paul Cohen [Citation7] and explained more carefully by others [Citation8, Citation26]. This approach is known as combinatorial forcing and is used in the proof we present. We use capital letters close to the beginning of the Latin alphabet B,C to denote infinite subsets of A and lowercase letters close to s such as s, u, t to denote finite sequences in A. Here comes a key definition of this particular instance of combinatorial forcing:

Definition 4.1

(Comparable, accepts, rejects, decides).

  1. s and t are comparable if either s ⊑ t or t ⊑ s.

  2. B accepts s if there is t ∈ F comparable to s such that t ⊆ s ∪ B (equivalently, t ∖ s ⊆ B). Moreover, B strongly accepts s if every C ∈ [B]^ω accepts s.

  3. B rejects s if B does not accept s.

  4. B decides s if B either strongly accepts s or rejects s.

  5. If F′ is a subset of F, we make definitions similar to (1)–(4) taking F′ as a parameter, and then we add the qualification with respect to F′ to the notions of accepting, rejecting and so on. Footnote2

Let us make some simple observations about the notions introduced.

Observation 4.2. Let B and s be given.

  1. If B rejects (strongly accepts) s, then so does every C ∈ [B]^ω. It follows that the analogue is true for the notion of deciding.

  2. There exists C ⊆ B which decides s and where max(s) < min(C).

  3. B accepts (rejects, strongly accepts, decides) s iff B ∖ max(s) accepts (rejects, strongly accepts, decides) s.

Proof.

(of Observation 4.2). (1) is evident from the definitions. For (2), let C = B ∖ (max(s)+1). If C strongly accepts s, then C is as required. If C does not strongly accept s, then there is D ∈ [C]^ω which rejects s, and then D is as required.

For (3), if B accepts s then there is t ∈ F comparable with s such that t ⊆ s ∪ B. But then it follows that t ⊆ s ∪ (B ∖ max(s)), since if t ⊑ s this is vacuously true, and if s ⊑ t then t ∖ s is disjoint from max(s). The rest of the cases are proved similarly. □ (4.2)

Lemma 4.3.

There is B ∈ [A]^ω which decides all its finite subsets.

Proof.

(of Lemma 4.3). By recursion on n < ω we shall choose pairs (s_n, A_n) so that

  • A_n ∈ [A]^ω and s_n ∈ [A]^{<ω},

  • A_n decides every subset of s_n,

  • max(s_n) < min(A_n) and

  • A_{n+1} ∈ [A_n]^ω.

We let s_0 = ∅ and we choose A_0 using Observation 4.2(2). Given (s_n, A_n), let s_{n+1} = s_n ∪ {min(A_n)} and let A_{n+1} ∈ [A_n]^ω be a set which decides every subset of s_{n+1}. Such a set is obtained by a finite sequence of applications of Observation 4.2(2). By cutting off the first several elements of A_{n+1}, which we can do by applying Observation 4.2(1), we can assume that max(s_{n+1}) < min(A_{n+1}).

At the end of this recursion, let B = ⋃_{n<ω} s_n. Since we have made sure that |s_n| = n for every n, we can conclude that B is infinite. If s is a finite subset of B, then there is a first n such that s ⊆ s_n. We have that B ∖ max(s_n) ⊆ A_n and therefore B ∖ max(s_n) decides s. By Observation 4.2(3), we conclude that B decides s. □ (4.3)

Let B be as provided by Lemma 4.3. The final lemma we need is the following:

Lemma 4.4.

If s ⊆ B is strongly accepted by B, then B strongly accepts s ∪ {n} for all but finitely many n ∈ B. In particular, there is m such that B ∖ m strongly accepts s ∪ {n} for all n ∈ B ∖ m.

Proof.

(of Lemma 4.4). Suppose, for a contradiction, that there is s ⊆ B for which the first claim fails, so that the set

C = {n ∈ B ∖ (max(s)+1) : B rejects s ∪ {n}}

is infinite (here we use that B decides every s ∪ {n}). Hence C, being an infinite subset of B, accepts s by the assumption on B, as exemplified by some t ∈ F. If t ⊑ s, then t is comparable with s ∪ {n} and t ⊆ (s ∪ {n}) ∪ B for any n, so B accepts every s ∪ {n}, a contradiction. Hence, s is a proper initial segment of t. Let n = min(t ∖ s). We claim that C accepts s ∪ {n}, which will give a contradiction with the choice of C. Indeed, s ∪ {n} ⊑ t, t ⊆ (s ∪ {n}) ∪ C and t ∈ F, so we are done with the first claim of the lemma.

The second claim follows by taking m large enough so that B strongly accepts s ∪ {n} for all n ∈ B with n > m and then using the hereditary nature of strong acceptance, as per Observation 4.2(1). □ (4.4)

We now go back to the proof of Theorem 3.4, writing F for the given thin family, A for the given infinite set and c for the given coloring. Let F_i = c^{−1}(i) for i < 2. Clearly, both F_i are thin sets, so all the observations and lemmas we proved about F apply also to each F_i. In particular, by applying Lemmas 4.3 and 4.4 to F_0, we can find A′ ∈ [A]^ω which decides every one of its finite subsets with respect to F_0 and moreover, for every s ∈ [A′]^{<ω} which A′ strongly accepts, A′ also strongly accepts s ∪ {n} for all n ∈ A′ ∖ (max(s)+1). If A′ rejects ∅, then clearly no finite subset of A′ is in F_0 and hence we have c(s) = 1 for every element s of [A′]^{<ω} ∩ F.

Now suppose that A′ strongly accepts ∅ with respect to F_0. It follows by the choice of A′ (and an inductive argument) that A′ strongly accepts all its finite subsets with respect to F_0. If there were to exist an element s ∈ [A′]^{<ω} with s ∈ F_1, then the strong acceptance by A′ of s would yield a t ∈ F_0 (so t ≠ s) comparable with s, which is impossible since F is thin. Therefore we have c(s) = 0 for every element s of [A′]^{<ω} ∩ F. □ (3.4)

5 Introduction to Isabelle

Isabelle is an interactive theorem prover originally developed in the 1980s with the aim of supporting multiple logical formalisms. These include first-order logic (intuitionistic as well as classical) and higher-order logic as well as Zermelo–Fraenkel set theory. However, the 1990s saw higher-order logic take a dominant role in the field of interactive theorem proving, particularly in hardware verification [Citation20, Citation25, Citation34]. While Isabelle/ZF and Isabelle/HOL share the entire Isabelle code base (basic inference procedures, a sophisticated user interface, etc.), Isabelle/HOL [Citation33] has much additional automation: so much so that it’s the best choice even for set theory.

Unlike proof assistants based on constructive type theories, Isabelle/HOL implements simple type theory. Types can take types as parameters, but not, for example, integers. The type of finite sequences (known as lists) takes the component type as a parameter, but there is no type of n-element lists. Logical predicates form the basis of a basic typed set theory, where any desired set can be expressed by comprehension over a formula. We can define the set of n-element lists where the elements are drawn from some other set. Thus, the simple framework given by types can be refined through the use of sets.

There are always tradeoffs between expressiveness of a formalism and ease of automation. Reliance on a simple classical formalism frees us from the many technical difficulties which seem to plague constructive type theories, such as intensional equality, difficulties with the concept of set, and performance issues in space and time.

Isabelle employs the time-honored LCF approach [Citation16]. This architecture ensures soundness through the use of a small proof kernel that implements the rules of inference and has the sole right to declare a statement to be a theorem. Upon this foundation, Isabelle/HOL provides many forms of powerful automation [Citation38]:

  • Simplification, i.e., systematic directed rewriting using identities.

  • Sophisticated logical reasoning even with quantifiers.

  • Sledgehammer: strong integration with external theorem provers.

  • Automatic counterexample finding for many problem domains.

We found that some of this automation works effectively with Larson’s elaborate constructions on sequences.

5.1 Simple type theory in Isabelle/HOL

Isabelle’s higher-order logic is closely based on Church’s simple type theory [Citation6]. It includes the following elements:

  • Types and type operators, for example int (the type of integers), or α ⇒ β (the type of functions from α to β), or α list (the type of finite sequences whose elements have type α). Note the postfix syntax: (int list) set is the type of sets of lists of integers.

  • Terms built of constants, variables, λ-abstractions, and function applications.

  • Formulas: terms of the truth value type, bool (Church’s o), with the usual logical connectives and quantifiers.

  • The axiom of choice (AC) for all types via Hilbert’s operator εx. φ, denoting some a such that φ(a) if such exists. Footnote3 The Isabelle syntax is SOME x. P x.

The typed set theory essentially identifies sets with predicates, with type α set essentially the same as α ⇒ bool. On this foundation, recursive definitions of types, functions and predicates/sets are provided through programmed procedures that reduce such definitions to primitive constructions and automatically prove the essential properties. This basis is expressive enough for the formalization of the numerous advanced results mentioned in the introduction.

5.2 ZFC in simple type theory

Since the set type operator can be iterated only finitely many times, simple type theory turns out to be weaker than Zermelo set theory (let alone ZF). For work requiring the full power of ZFC, it is convenient to assume some version of the ZF axioms within Isabelle/HOL. The approach adopted here [Citation35] seeks a smooth integration between ZFC and simple type theory. We introduce a type V (the type of all ZF sets), and then V set is the type of classes. Our ZF axioms characterize the small classes, those that can be embedded into elements of V. The axiom of choice is inherited from Isabelle/HOL. On this basis, it is straightforward to define the usual elements of Cantor’s paradise, including ordinals, cardinals, alephs, and order types. We borrow large formal developments from the existing Isabelle/HOL framework: recursion on ∈ is just an instance of well-founded recursion, and it’s easy to deduce that the type real corresponds to some element of V without redoing the construction of the real numbers.

We define order types on wellorderings only (yielding ordinals). These wellorderings can be defined on any Isabelle/HOL type: we can consider orderings defined on type nat rather than on the equivalent ordinal, ω. We started with a small library of facts about order types, which grew and grew in accordance with the demands of the case study.

The point of adopting Isabelle/HOL over Isabelle/ZF is the possibility of making use of its aforementioned automation (Sledgehammer) and the counterexample-finding tools (Nitpick [Citation2] and Quickcheck [Citation3]). The main drawback of doing set theory in Isabelle/HOL is the impossibility of working without AC: the axiom is inherently part of the framework. That drawback has no bearing on the present project, however.

6 Formalizing the Nash-Williams theorem

Although the Nash-Williams partition theorem is only a minor part of the project, it’s a key result and its formalization is brief enough to present in reasonable detail. In the next section, we’ll turn to Larson’s paper.

6.1 Preliminaries

As we saw in Section 4 above, the theorem is concerned with sets of natural numbers. Finite sets of natural numbers can be identified with ascending finite sequences; typical treatments of Nash-Williams use sets, while Larson uses sequences. For sets S and T, we write S < T to express that every element of S is less than every element of T (it holds vacuously if either set is empty). This is essentially the same statement as the s < t mentioned in the previous section. Our formalization [Citation36] follows Todorčević [Citation45] as in the proof in Section 4.

S is an initial segment of T if T can be written in the form S ∪ S′ such that S < S′, written S ≪ S′ in Isabelle syntax.

definition init_segment :: "nat set ⇒ nat set ⇒ bool"
  where "init_segment S T ≡ ∃S'. T = S ∪ S' ∧ S ≪ S'"

The Ramsey property expresses the conclusion of the theorem for the general case of r components. Its definition for a family ℱ of sets and an integer r is straightforward. A partition of ℱ into r disjoint sets is expressed as a map f : ℱ → {0, …, r−1}. Partition j is expressed as the inverse image f^{−1}(j), written f -` {j} in Isabelle syntax.

Now the Ramsey property for ℱ and r holds if for every partition map f and every infinite set M, there exist an infinite N ⊆ M and i < r such that for all j < r, if j ≠ i then partition j does not contain any subsets of N. (Pow N is the powerset of N.)

definition Ramsey :: "[nat set set, nat] ⇒ bool"
  where "Ramsey ℱ r ≡
           ∀f ∈ ℱ → {..<r}. ∀M. infinite M ⟶
              (∃N i. N ⊆ M ∧ infinite N ∧ i < r ∧
                     (∀j < r. j ≠ i ⟶ f -` {j} ∩ ℱ ∩ Pow N = {}))"

Recall that a family ℱ of sets is thin provided every element of ℱ is finite (expressed as ℱ ⊆ {X. finite X}) and it does not contain distinct elements S and T where one is an initial segment of the other (see Definition 3.2).

definition thin_set :: "nat set set ⇒ bool"
  where "thin_set ℱ ≡ ℱ ⊆ {X. finite X} ∧ (∀S∈ℱ. ∀T∈ℱ. init_segment S T ⟶ S = T)"

These definitions provide the necessary vocabulary to state the theorem, although its proof appears at the end of the development, after those of all prerequisite lemmas.

theorem Nash_Williams:
  assumes "thin_set ℱ" "r > 0"
  shows "Ramsey ℱ r"

We now formalize the concepts of rejecting, strongly accepting and deciding a set, as described in Section 4. We regard ℱ as fixed and say M decides S, etc. Note that the identification of “M accepts S” with “M does not reject S” is formalized as an abbreviation rather than a definition, a distinction that affects proof procedures but makes no difference mathematically.

definition comparables :: "nat set ⇒ nat set ⇒ nat set set"
  where "comparables S M ≡
           {T. finite T ∧ (init_segment T S ∨ init_segment S T ∧ T-S ⊆ M)}"

definition "rejects ℱ S M ≡ comparables S M ∩ ℱ = {}"

abbreviation "accepts ℱ S M ≡ ¬ rejects ℱ S M"
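To see these notions in action on finite data, here is a small Python mirror (an illustrative sketch of ours, not the Isabelle code): since every comparable set is contained in S ∪ M, enumerating subsets of S ∪ M suffices for the rejection test, while the notion of strong acceptance defined next quantifies over all infinite subsets of M and cannot be tested this way.

# Illustrative Python mirror of init_segment, comparables and rejects on
# finite data. Sets are frozensets of naturals; M stands for a finite
# fragment of the intended infinite set.
from itertools import combinations

def init_segment(S, T):
    return S <= T and all(max(S, default=-1) < x for x in T - S)

def comparables(S, M):
    universe = sorted(S | M)               # every comparable T is inside S | M
    for r in range(len(universe) + 1):
        for T in map(frozenset, combinations(universe, r)):
            if init_segment(T, S) or (init_segment(S, T) and T - S <= M):
                yield T

def rejects(F, S, M):
    return all(T not in F for T in comparables(S, M))

# A thin family: all 2-element sets of even numbers below 20.
F = {frozenset(p) for p in combinations(range(0, 20, 2), 2)}
print(rejects(F, frozenset({1}), {2, 4, 6}))   # True: no member of F is comparable with {1}
print(rejects(F, frozenset(), {2, 4, 6}))      # False: {2, 4} is in F and comparable with {}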

M strongly accepts S provided all infinite subsets of M accept S.

definition "strongly_accepts ℱ S M ≡ ∀N ⊆ M. rejects ℱ S N ⟶ finite N"

definition "decides ℱ S M ≡ rejects ℱ S M ∨ strongly_accepts ℱ S M"

definition "decides_subsets ℱ M ≡ ∀T. T ⊆ M ⟶ finite T ⟶ decides ℱ T M"

6.2 The proofs

A great many obvious facts about these primitives can be proved automatically. But there are some nontrivial properties not mentioned in the text that require elaborate proofs. A key technique in this field is called diagonalization, involving the construction of a sequence of infinite sets M_0 ⊇ M_1 ⊇ ⋯ ⊇ M_k ⊇ ⋯ from which something can be obtained.

The following proposition states that an infinite set M can be refined to an infinite N ⊆ M that decides all subsets of the given finite set S. The formal proof is 49 lines long and involves an inductive construction along with multiple inductive subproofs.

proposition ex_infinite_decides_finite:
  assumes "infinite M" "finite S"
  obtains N where "N ⊆ M" "infinite N" "⋀T. T ⊆ S ⟹ decides ℱ T N"

Todorčević’s Lemma 1.18 states that an infinite set M can be refined to an infinite N ⊆ M that decides all of its finite subsets. He notes that it follows by some “immediate properties” of the definitions “and a simple diagonalization procedure”. The formal equivalent of his one-line remark is 190 lines. Of this, nearly 90 lines are devoted to the diagonalization argument, including the construction of M_0 ⊇ M_1 ⊇ ⋯ ⊇ M_k ⊇ ⋯ and then the set {m_0, m_1, …, m_k, …} of the corresponding least elements; proving that this set is the desired N takes up the remaining lines. Perhaps a shorter formal proof could be found, given a few more hints.

proposition ex_infinite_decides_subsets:
  assumes "thin_set ℱ" "infinite M"
  obtains N where "N ⊆ M" "infinite N" "decides_subsets ℱ N"

Todorčević’s Lemma 1.19 states that M strongly accepts {n} ∪ S for all but finitely many n in M, under the given conditions.

proposition strongly_accepts_1_19:
  assumes acc: "strongly_accepts ℱ S M"
    and "thin_set ℱ" "infinite M" "S ⊆ M" "finite S"
    and dsM: "decides_subsets ℱ M"
  shows "finite {n ∈ M. ¬ strongly_accepts ℱ (insert n S) M}"

He gives a four-line proof that leaves out any details. The formal equivalent is 69 lines and bears little resemblance to the original, despite starting with the key definitionFootnote4

define N where "N = {n ∈ M. rejects ℱ (insert n S) M} ∩ {Sup S<..}"

The formal proof is by contradiction, and the key claim rejects F S N is never established but rather its negation, from which False is tediously squeezed. Perhaps an expert could find a much neater proof.

Lemma 1.19 turns out to be too weak for its intended use in the sequel. The following strengthening is necessary. It yields an infinite N ⊆ M such that N strongly accepts {n} ∪ S for all n in N such that n > max(S). Todorčević emailed a six-line proof sketch (he also helped with 1.19); the formal proof is 167 lines. It includes a diagonalization argument preceded by 50 lines of elaborate preamble, using Lemma 1.19.

proposition strongly_accepts_1_19_plus:
  assumes "thin_set ℱ" "infinite M"
    and dsM: "decides_subsets ℱ M"
  obtains N where "N ⊆ M" "infinite N"
    "⋀S n. ⟦S ⊆ N; finite S; strongly_accepts ℱ S N; n ∈ N; S ≪ {n}⟧
           ⟹ strongly_accepts ℱ (insert n S) N"

The proof of Nash-Williams itself is given for the case of r = 2, and the informal text is 11 lines long. Thanks to this more detailed proof, the formal equivalent is only 65 lines long, shorter than those of several supposedly obvious lemmas. The straightforward generalization to r > 0 by induction is, at 66 lines, slightly longer but follows a standard argument.

theorem Nash_Williams_2:
  assumes "thin_set ℱ"
  shows "Ramsey ℱ 2"

6.3 A short proof, in detail

Major formal proofs are too long to include in full, so we present a trivial proof in order to illustrate the style. Isabelle proofs are written in a structured language containing nested scopes in which variables may be introduced along with assumptions and local definitions. Each such scope shows some explicitly stated conclusion, perhaps establishing intermediate results along the way.

Here is a trivial lemma stating that {n} ∪ S is an initial segment of T if and only if S is an initial segment of T and n ∈ T, provided S < {n} and n ≤ x for all x ∈ T − S. The keyword proof sets up both directions of the equivalence. The right-to-left direction (appearing after the keyword next) is more interesting. From the right-hand side we obtain some R such that T = S ∪ R and S < R, and after a few calculations, we derive the left-hand side.

lemma init_segment_insert_iff:
  assumes Sn: "S ≪ {n}" and TS: "⋀x. x ∈ T-S ⟹ n ≤ x"
  shows "init_segment (insert n S) T ⟷ init_segment S T ∧ n ∈ T"
proof
  assume "init_segment (insert n S) T"
  then have "init_segment ({n} ∪ S) T"
    by auto
  then show "init_segment S T ∧ n ∈ T"
    by (metis Sn Un_iff init_segment_def init_segment_trans insertI1 sup_commute)
next
  assume rhs: "init_segment S T ∧ n ∈ T"
  then obtain R where R: "T = S ∪ R" "S ≪ R"
    by (auto simp: init_segment_def less_sets_def)
  then have "S ∪ R = insert n (S ∪ (R-{n})) ∧ insert n S ≪ R-{n}"
    unfolding less_sets_def using rhs TS nat_less_le by auto
  then show "init_segment (insert n S) T"
    using R init_segment_Un by force
qed

Intermediate results are inserted using the keyword have and existential claims with obtain, while the conclusion is presented using show. Justifications are introduced with by. A justification can consist of a proof method such as auto, or a series of proof methods, or a full proof structure enclosed within the brackets proof and qed. Thus the various proof elements (there are many others) can be nested to any depth.

Because proofs in Isabelle’s Isar language are structured, they can be much more readable than proofs in other theorem provers. Explicit statements of the assumptions and conclusions make structured proofs more verbose than the tactic-style proofs that predominate with other proof assistants, but infinitely more legible. The ideal is not merely to formalize mathematical results but to create a document that makes the proof clear to the reader, where—without having to trust the software—a knowledgeable reader could decide for herself whether the claims follow from the assumptions. A human mathematician would not like to see an incomprehensible “black box” proof. The possibility to recreate a ‘traditional’ proof from Isabelle code does help the user feel more at ease.

6.4 On the length of formal proofs

The de Bruijn factor [Citation46] is defined as the ratio of the size of the formal mathematics to the size of the corresponding mathematical exposition. It can be regarded as measuring the cost of formalization.

Unfortunately, it’s highly inexact. Mathematical writing varies greatly in its level of detail. The first ever de Bruijn factor was calculated for Jutting’s translation of Landau’s Grundlagen der Analysis—on the construction of the complex numbers—into AUTOMATH.

An aspect which has not been mentioned so far is the ratio between the length of pieces of AUT-QE text and the length of the corresponding German texts. Our claim at the outset was that this ratio can be kept constant. …As a measure of the lengths the number of stored AUT-QE expressions … and (rough estimates of) the number of German words. [Citation24, p. 46]

The highest ratio, 6.4, is obtained for the translation of Landau’s Chapter 4, in which the real numbers are constructed from the positive real numbers (in the form of Dedekind cuts, which had been defined in Chapter 3). But this is a book that devotes 173 pages to the development of complex number arithmetic from logic. The proof that a/b = c/d ⟺ ad = bc includes two references to previous theorems about complex arithmetic. The material could be covered in 10% of the space.

Formal proofs can also be more or less compact, the price of compactness generally being a loss of legibility. Wiedijk [Citation46] has proposed to deal with some of the arbitrariness by comparing the sizes of compressed text files, but this requires retyping possibly lengthy texts into LaTeX. He and others report de Bruijn factors in the range of 3–6, but for the Nash-Williams proof above it is 20 and upward (crudely counting lines rather than symbols). Clearly one reason is that the source text is highly concise, only sketching out the key points. One of the lemmas (the strengthening of Todorčević’s 1.19) is not even stated. But also, our proof style is more prolix than is strictly necessary.

7 Formalizing Larson’s proof

Larson’s proof [Citation28, pp. 133–140] of Theorem 3.1 is an intricate tour de force and the formal proof development is almost 4600 lines long. Here, we can only cover its main features, focusing on a few typical constructions and some particular technical issues. We also take a brief look at a few of the formal theorem statements. Our objective is simply to highlight aspects of the formalization task and the strengths and weaknesses of today’s formal verification tools.

7.1 Preliminary remarks

Much of the task of formalization is simply labor: translating the definitions and arguments of the mathematical exposition into a formal language and generating proofs using the available automated methods. Particular difficulties arise when the exposition appeals to intuition or presents a construction. In the case of a construction there are two further sources of difficulties: when properties of the construction are claimed without further argument (as if the construction itself were sufficient proof), or worse, when proofs later in the exposition depend on properties of the construction that are never even stated explicitly.

An example of appeal to intuition is when Larson constructs the set W′ and remarks that “without loss of generality”—a chilling phrase to the formaliser—we can regard it as the same as W. This turns out to mean that the bulk of the argument will be carried on using W, which is simply the set of increasing sequences of integers. It is presumed obvious to the reader that none of the main lemmas could possibly be proved using W′, about which little can be known. W′ is introduced toward the end, and the necessary adaptations to the main proof aren’t that hard to figure out. But a little hint would have been helpful.

For an example of properties claimed without argument, consider the notation i({x, y}) from Definition 3.4, suggesting that i({x, y}) is uniquely determined by x and y. And so it turns out to be, though the formal version makes explicit its dependence on the “form” of {x, y}, namely l. That i_l({x, y}) is well-defined is perhaps obvious, since the decomposition of x and y into concatenations of sequences turns out to be unique, but the formal proof of this “obvious” fact is an elaborate induction, around 200 lines long. The function is also injective, which is never claimed explicitly but is required for the proof of Lemma 3.6, which defines a function g_k on interaction schemes by g_k(i({x, y})) = g({x, y}), where g is the given coloring of pairs. Proving this additional fact requires another substantial formal proof (100 lines).

For another example of the difficulty of formalization, consider the following construction, typical of this problem domain. We are given a positive integer k and an infinite set N = {n_i : i < ω} of natural numbers, where the n_i are an increasing enumeration of N. Larson [Citation28, Lemma 3.7] defines sequences d^1, d^2, …, d^m and a_1^1, a_2^1, …, a_{k+1}^1, a_1^2, …, a_1^m, …, a_{k+1}^m as follows:

Let d^1 = (n_1, n_2, …, n_{k+1}) = (d_1^1, d_2^1, …, d_{k+1}^1) and let a_1^1 be the sequence of the first d_1^1 elements of N greater than d_{k+1}^1. Now suppose we have constructed d^1, a_1^1, …, d^i, a_1^i. Let d^{i+1} = (d_1^{i+1}, …, d_{k+1}^{i+1}) be the first k + 1 elements of N greater than the last element of a_1^i, and let a_1^{i+1} be the first d_1^{i+1} elements of N greater than d_{k+1}^{i+1}. This defines d^1, d^2, …, d^m, a_1^1, a_1^2, …, a_1^m. Let the rest of the sequences be defined in the order that follows, so that for any i and j, a_j^i is the sequence of the least (d_j^i − d_{j−1}^i) elements of N all of which are larger than the largest element of the sequence previously defined:

(a_1^m,)  a_2^1, a_2^2, a_2^3, …, a_2^m, a_3^1, …, a_3^m, …, a_k^1, …, a_k^m, a_{k+1}^m, a_{k+1}^{m−1}, …, a_{k+1}^1.

This construction is carefully crafted, particularly in the reversal at the end: …, a_k^{m−1}, a_k^m, a_{k+1}^m, a_{k+1}^{m−1}, …. The point is to achieve the conclusion of Larson’s Lemma 3.7, namely that if m and l are natural numbers with l > 0, then there is an m-element set M such that for every {x, y} ⊆ M, {x, y} has form l and i({x, y}) ⊆ N.

It turns out that if l = 2k−1 then we can put

M = {a_1^i * a_2^i * ⋯ * a_k^i : 1 ≤ i ≤ m},

while if l = 2k then we can put

M = {a_1^i * a_2^i * ⋯ * a_{k+1}^i : 1 ≤ i ≤ m}.

The reversal noted above turns out to be crucial to the second case, where x is the concatenation of k + 1 segments while y is the concatenation of only k: the last segment of y has the form a_k^i * a_{k+1}^i, and this meets all the requirements of Definition 3.4 above.

The formalization of such a construction amounts to writing a tiny but delicately crafted computer program.Footnote5 Significant effort is needed to prove fairly obvious properties of this construction, such as that the a_j^i are nonempty, or that a_j^i < a_{j′}^{i′} if j < j′ ≤ k and i, i′ ≤ m. These are immediate by construction, since elements are drawn from the set N in increasing order. But the formal proofs require fully worked out inductions.
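To give an impression of what that “tiny but delicately crafted program” looks like, here is our reading of the construction as an illustrative Python sketch (hypothetical helper names, not the Isabelle formalization): it draws the d^i and the a_j^i from the enumeration of N in exactly the order displayed above, and then assembles the m-element set M for a given form l.

# Illustrative sketch (our reading of the Lemma 3.7 construction): draw the
# d^i and a_j^i from the increasing enumeration of N in Larson's order,
# then assemble the m-element set M for form l = 2k-1 or l = 2k.
from itertools import count

def larson_blocks(k, m, N):
    N = iter(N)                    # increasing enumeration of N
    taken = []                     # everything drawn so far, in increasing order
    def take(howmany, above):
        out = []
        for x in N:
            if x > above:
                out.append(x)
                if len(out) == howmany:
                    break
        taken.extend(out)
        return out

    d, a, last = {}, {}, -1
    for i in range(1, m + 1):      # d^i and a_1^i, alternately
        d[i] = take(k + 1, last)                     # next k+1 elements of N
        a[(1, i)] = take(d[i][0], d[i][-1])          # first d_1^i elements beyond d_{k+1}^i
        last = a[(1, i)][-1]
    # remaining blocks in Larson's order; a_j^i has d_j^i - d_{j-1}^i elements
    order = [(j, i) for j in range(2, k + 1) for i in range(1, m + 1)]
    order += [(k + 1, i) for i in range(m, 0, -1)]   # the final reversal
    for (j, i) in order:
        a[(j, i)] = take(d[i][j - 1] - d[i][j - 2], taken[-1])
    return d, a

def M_for_form(l, m, N):
    k = (l + 1) // 2                                 # l = 2k-1 or l = 2k
    blocks = k if l == 2 * k - 1 else k + 1
    _, a = larson_blocks(k, m, N)
    return [sum((a[(j, i)] for j in range(1, blocks + 1)), [])
            for i in range(1, m + 1)]

print(M_for_form(3, 2, count(1)))
# [[4, 13], [8, 9, 10, 11, 12, 14]] : two sequences forming a pair of form 3 (k = 2)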

A similar situation arises with Larson’s Lemma 3.8 [Citation28, p. 139]. She defines three collections of sequences:

{d^j : 0 < j < ω},  {a_j : 0 < j < ω},  {b(i, j, k) : 1 ≤ i ≤ j ≤ k < ω}.

The ordering constraints guarantee that the set

{a_j * b(1, j, k_1) * b(2, j, k_2) * ⋯ * b(j, j, k_j) : j < k_1 < k_2 < ⋯ < k_j < ω}

has order type ω^j, and the union over j of these has order type ω^ω. These order type claims seem plausible, but the text contains no hint of how to prove them. Relevant is that the construction ensures that for i, j and k with 1 ≤ i ≤ j ≤ k, the sequence b(i, j, k) has length d_i^j − d_{i−1}^j and is therefore independent of k. For the ω^j claim, an induction on j seems to be indicated; our formal proof is nearly 300 lines long, including 200 lines of auxiliary definitions and lemmas, yet a much shorter proof may exist.

It’s finally time to look at the formalization itself. We do not present actual proofs—they are long and not especially intelligible—nor even all of the numerous definitions and technical lemmas required for the formalization.

7.2 Preliminary definitions and results

We begin with something simple: the set W, which is written WW and is the set of all strictly sorted (increasing) sequences of natural numbers. We do not work with W′ except in the body of the main theorem.

definition WW :: "nat list set"
  where "WW ≡ {l. strict_sorted l}"

Type nat list set is the type of sets of lists (sequences) of natural numbers.

Next comes the notion of an interaction scheme, Definition 3.4. We need to build up to this. First, more basic Isabelle/HOL definitions:

  • length l is |l|, the length of l

  • x#l is the list consisting of l prefixed with x as its first element

  • u@v is the concatenation of lists u and v, like Larson’s u*v

  • List.set maps a list to the corresponding finite set

The function acc_lengths formalises the accumulation of the lengths of lists, for the variables c and d there. The integer argument acc is a necessary generalization for the sake of the recursion but will initially be zero.

fun acc_lengths :: "nat ⇒ 'a list list ⇒ nat list"
  where "acc_lengths acc [] = []"
  | "acc_lengths acc (l#ls) = (acc + length l) # acc_lengths (acc + length l) ls"

Many trivial properties of acc_lengths must be proved. Here lists (- {[]}) denotes the set of lists of nonempty lists, and the claim is that acc_lengths yields an element of W.

lemma strict_sorted_acc_lengths:
  assumes "ls ∈ lists (- {[]})"
  shows "strict_sorted (acc_lengths acc ls)"

The built-in function concat joins a list of lists. But we also need a function to concatenate two lists of lists, interleaving corresponding elements:

fun interact :: "'a list list ⇒ 'a list list ⇒ 'a list"
  where "interact [] ys = concat ys"
  | "interact xs [] = concat xs"
  | "interact (x#xs) (y#ys) = x @ y @ interact xs ys"
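For readers not fluent in Isabelle’s recursive function definitions, the two helpers correspond to the following illustrative Python (ours, not part of the development), shown with a small evaluation.

# Illustrative Python counterparts of acc_lengths and interact.
def acc_lengths(acc, ls):
    out = []
    for l in ls:
        acc += len(l)
        out.append(acc)
    return out

def interact(xs, ys):
    if not xs:
        return [v for y in ys for v in y]
    if not ys:
        return [v for x in xs for v in x]
    return xs[0] + ys[0] + interact(xs[1:], ys[1:])

print(acc_lengths(0, [[4, 5], [20]]))             # [2, 3]
print(interact([[4, 5], [20]], [[9, 10], [25]]))  # [4, 5, 9, 10, 20, 25]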

We are finally ready to define interaction schemes. We split up the long list of conditions in Definition 3.4 as two inductive definitions, although there is no actual induction: this form of definition works well when there are additional variables and conditions, which would otherwise have to be expressed by a big existentially quantified conjunction. Here xs and ys are the x and y of the definition, ka and kb are the lengths of the a-lists and b-lists, and zs is the interaction scheme.

inductive Form_Body :: "[nat, nat, nat list, nat list, nat list] ⇒ bool"
  where "Form_Body ka kb xs ys zs"
    if "length xs < length ys" "xs = concat (a#as)" "ys = concat (b#bs)"
       "a#as ∈ lists (- {[]})" "b#bs ∈ lists (- {[]})"
       "length (a#as) = ka" "length (b#bs) = kb"
       "c = acc_lengths 0 (a#as)"
       "d = acc_lengths 0 (b#bs)"
       "zs = concat [c, a, d, b] @ interact as bs"
       "strict_sorted zs"

The following definition allows us to write Form l U to express that U has form l. Even and odd forms are treated differently; if l = 2k for k > 0, the numeric parameters of Form_Body are k + 1 and k. A naive treatment of the definition would force a lot of proofs to be written out twice. The two cases, while not identical, are similar enough to be treated uniformly in terms of the more general Form_Body ka kb.

inductive Form :: "[nat, nat list set] ⇒ bool"
  where "Form 0 {xs,ys}" if "length xs = length ys" "xs ≠ ys"
  | "Form (2*k-1) {xs,ys}" if "Form_Body k k xs ys zs" "k > 0"
  | "Form (2*k) {xs,ys}" if "Form_Body (Suc k) k xs ys zs" "k > 0"

Finally we can define the interaction scheme itself, writing inter_scheme k U for i_k(U). Recall that SOME denotes Hilbert’s epsilon; the zs mentioned below is actually unique.

definition inter_scheme :: "nat ⇒ nat list set ⇒ nat list"
  where "inter_scheme l U ≡
           SOME zs. ∃k xs ys. l > 0 ∧
                      (l = 2*k-1 ∧ U = {xs,ys} ∧ Form_Body k k xs ys zs
                     ∨ l = 2*k ∧ U = {xs,ys} ∧ Form_Body (Suc k) k xs ys zs)"

It turns out to be injective in the following sense, for two sets U and U′ that have the same form l > 0. The proof is a painstaking reversal of the steps shown in Definition 3.4 and is about 50 lines long.

proposition inter_scheme_injective:
  assumes "Form l U" "Form l U'" "l > 0" "inter_scheme l U' = inter_scheme l U"
  shows "U' = U"

A considerable effort is needed to show that the interaction scheme is defined uniquely. The following lemma takes a long set of assumptions derived from the possibility of two distinct interaction schemes and shows that the a-lists and b-lists necessarily coincide. The proof is by induction on the length of as.

proposition interaction_scheme_unique_aux:
  assumes "concat as = concat as'" "concat bs = concat bs'"
    and "as ∈ lists (- {[]})" "bs ∈ lists (- {[]})"
    and "strict_sorted (interact as bs)"
    and "length bs ≤ length as" "length as ≤ Suc (length bs)"
    and "as' ∈ lists (- {[]})" "bs' ∈ lists (- {[]})"
    and "strict_sorted (interact as' bs')"
    and "length bs' ≤ length as'" "length as' ≤ Suc (length bs')"
    and "length as = length as'" "length bs = length bs'"
  shows "as = as' ∧ bs = bs'"

It is now fairly straightforward to show that the first four arguments of Form_Body determine the fifth, which is the interaction scheme. The inequality kb ≤ ka ≤ kb + 1 eliminates the need to treat the cases of even and odd forms separately.

proposition Form_Body_unique:
  assumes "Form_Body ka kb xs ys zs" "Form_Body ka kb xs ys zs'"
    and "kb ≤ ka" "ka ≤ Suc kb"
  shows "zs' = zs"

And so we find that the interaction scheme is uniquely defined for every valid instance of the predicate Form_Body. The full proof of this more-or-less obvious statement, for which Larson [Citation28] gives no justification, is longer than 240 lines.

lemma Form_Body_imp_inter_scheme:
  assumes "Form_Body ka kb xs ys zs" "0 < kb" "kb ≤ ka" "ka ≤ Suc kb"
  shows "zs = inter_scheme ((ka + kb) - 1) {xs,ys}"

7.3 Major lemmas

The material presented thus far (plus much else not presented) serves to make sense of Larson’s definitions. Now we turn to the results that make up the proof of her main result, which we sketched in Section 3.2 above. But first, we need some Isabelle notation:

  • [A]^k, the set of all k-element subsets of A, is [A]^k in Isabelle

  • n_k, the kth element of the infinite set N, is enum N k

  • Larson's a < b for ordered lists is simply a < b in Isabelle

  • (n_k) < A ⊆ N is the conjunction of [enum N k] < A and A ⊆ N.

An inductive definition expresses initial segments in terms of list concatenation, yielding a definition of thin sets (Footnote 6).

inductive initial_segment :: "'a list ⇒ 'a list ⇒ bool"
  where "initial_segment xs (xs@ys)"

definition thin where "thin A ≡ ¬(∃x y. x∈A ∧ y∈A ∧ x ≠ y ∧ initial_segment x y)"
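As a quick check of our own (not part of the development), a list is an initial segment of any of its extensions:

lemma "initial_segment [1,2] [1,2,3,4::nat]"
  using initial_segment.intros [of "[1,2::nat]" "[3,4]"] by simp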

The first of Larson's technical lemmas [Citation28, Lemma 3.11] states that the set of interaction schemes {i_l(U) : U has form l} is thin for l > 0. This set is formalized using the image operator (`). The proof involves deriving a contradiction from the existence of U and U' with distinct interaction schemes, one an initial segment of the other. As with previous results, the proof involves breaking things down according to Definition 3.4, fairly straightforwardly (under 75 lines).

lemma lemma_3_11:
  assumes "l > 0" shows "thin (inter_scheme l ` {U. Form l U})"

The next step in Larson's development [Citation28, Lemma 3.6] is proved in 150 lines from the original 11-line text. It uses the Nash-Williams partition theorem to obtain an infinite set N and a sequence j_k such that if k > 0, U has form k and (n_k) < i_k(U) ⊆ N, then g(U) = j_k.

proposition lemma_3_6:
  fixes g :: "nat list set ⇒ nat"
  assumes g: "g ∈ [WW]^2 → {0,1}"
  obtains N j where "infinite N"
    and "⋀k U. [[k > 0; U ∈ [WW]^2; Form k U; [enum N k] < inter_scheme k U;
                 List.set (inter_scheme k U) ⊆ N]] ⇒ g U = j k"

Larson next proves that for every infinite set N and all m, l < ω with l > 0, there is an m-element set M such that M ⊆ W (necessary but omitted in the text) and, for every {x,y} ⊆ M, {x,y} has form l and i_l({x,y}) ⊆ N [Citation28, Lemma 3.7]. In the proof of the main theorem, the N above is derived from the one obtained from the previous lemma. The formalization of this one-page proof takes nearly 900 lines, including some preparatory lemmas. About 240 of those lines are devoted to establishing the basic properties of the sequences d_1, …, d_m and a_1^1, a_2^1, …, a_{k+1}^1, …, a_1^m, …, a_{k+1}^m outlined in Section 7.1 above. About 130 lines were devoted to the degenerate cases l = 1 and l = 2, which needed to be treated separately.

proposition lemma_3_7:
  assumes "infinite N" "l > 0"
  obtains M where "M ∈ [WW]^m"
    and "⋀U. U ∈ [M]^2 ⇒ Form l U ∧ List.set (inter_scheme l U) ⊆ N"

Larson's next result states that for every infinite set N, there is a set X ⊆ W of order type ω^ω such that for any {x,y} ⊆ X, there is an l such that {x,y} has form l and, if l > 0, then (n_l) < i_l({x,y}) ⊆ N. Her proof is slightly longer than a page and the full formalization is about 1700 lines long. Of this, approximately 400 lines concern the construction and properties of the sequences, with a further 400 for the order type calculation and about 600 lines to formalize the last paragraph of the proof (nine lines of text). Recall that [X]^2 denotes the set of all two-element subsets of X.

The lemmas described above constitute the main body of Larson's development. Building on them, the main theorem can be formally proved with just 360 lines of code. The formulation below, in terms of the lexicographic ordering on W, trivially leads to the standard formulation in terms of the ordinal ω^ω.

To state the main theorem, we use the Isabelle definition of a more general partition relation, β → (α_1, …, α_k)^n, which is concerned with n-element subsets of β rather than only pairs and allows k colors rather than two. In the Isabelle version, B is any set, r is a well-founded relation on B used for order types and α is a list of ordinals. The expression {..<length α} is the set {0, …, k-1}, while f ` [H]^n ⊆ {i} expresses that every element of [H]^n has the color i.

definition partn_lst where
  "partn_lst r B α n ≡
     ∀f ∈ [B]^n → {..<length α}.
        ∃i < length α. ∃H. H ⊆ B ∧ ordertype H r = (α!i) ∧ f ` [H]^n ⊆ {i}"
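In conventional partition-calculus notation, our reading of this definition (writing otp(H) for the order type of H computed relative to r) is:

\[
\beta \longrightarrow (\alpha_1,\ldots,\alpha_k)^n \;\Longleftrightarrow\;
\forall f\colon [\beta]^n \to \{0,\ldots,k-1\}\;\, \exists i<k\; \exists H\subseteq\beta\;
\bigl(\mathrm{otp}(H)=\alpha_i \,\wedge\, f''[H]^n \subseteq \{i\}\bigr).
\]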

The proof begins by assuming a partition f of the set [W]^2 such that there is no m-element set M for which f([M]^2) = {1}. It takes about 200 lines to construct the set W' and to 'replace' W by W'; the trick is to find a length-preserving order isomorphism between the two. The result of this work is a partition f' of [W']^2 that assigns the color 0 to all pairs of sequences in W' of the same length, a condition necessary to make the proof go through. The remainder of the proof, approximately 90 lines, completes the argument using the previously proved lemmas and with no mention of W.

theorem partition_ωω_aux:
  assumes "α ∈ elts ω"
  shows "partn_lst (lenlex less_than) WW [ω↑ω, α] 2"
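Read through the definition of partn_lst, with α instantiated to a finite ordinal m, this is the Isabelle counterpart of the headline result (the passage from the lexicographic ordering on W to the ordinal ω^ω being the routine step mentioned above):

\[
\omega^\omega \longrightarrow (\omega^\omega, m)^2 \qquad \text{for every } m < \omega.
\]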

7.4 On some tricky spots in the proofs

The most frustrating aspect of formalization is the need to spell out the proofs of obvious statements. We could not escape this phenomenon, and discuss a few examples below, some of them positive.

The function interact, defined above, concatenates alternating elements of two lists. A key property is that if the two lists satisfy the constraints given in the definition of a form (Definition 3.4), then the result of interact will be strictly ordered (and therefore in W). The proof is a messy induction and we would like to state the theorem in the simplest possible way.

Fortunately, Isabelle/HOL provides counterexample finding tools: Nitpick [Citation2] and Quickcheck [Citation3]. Their purpose is to identify invalid conjectures before any time is wasted in proof attempts. Nitpick works by abstracting the conjecture to a propositional formula and attempting to find a model with the help of a satisfiability checker, while Quickcheck simply tries to evaluate the conjecture at intelligently chosen values. Both need the conjecture to be computational, in a broad sense. As much of our work here is concerned with finite sets or sequences of integers, these tools can be effective.
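For instance, stating a property without its side conditions and invoking the counterexample finder takes only seconds. A sketch of the workflow (assuming, as here, that interact and strict_sorted are executable; the exact counterexample reported will vary):

lemma "strict_sorted (interact xs (ys :: nat list list))"
  quickcheck  (* reports a tiny counterexample, e.g. xs = [[1, 0]], ys = [] *)
  oops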

We were able to use Nitpick to formulate this theorem correctly, including nuances such as Suc n < length xs, while omitting irrelevant conditions.

lemma strict_sorted_interact_I:
  assumes "length ys ≤ length xs" "length xs ≤ Suc (length ys)"
    "⋀x. x ∈ list.set xs ⇒ strict_sorted x"
    "⋀y. y ∈ list.set ys ⇒ strict_sorted y"
    "⋀n. n < length ys ⇒ xs!n < ys!n"
    "⋀n. Suc n < length xs ⇒ ys!n < xs!Suc n"
    "xs ∈ lists (- {[]})" "ys ∈ lists (- {[]})"
  shows "strict_sorted (interact xs ys)"

There is another challenge at the very end of Lemma 3.8 [Citation28, p. 139], when Larson constructs families of sequences satisfying the conditions of a form for a given pair {x, y}:

x = {a_j * b(1,j,k_1) * … * b(j,j,k_j)}
y = {a_r * b(1,r,p_1) * … * b(r,r,p_r)}.

Here, we know that a_j < a_r, hence j < r, and this turns out to guarantee the disjointness of all the segments shown. Now Larson [Citation28, p. 140] remarks, "for some l < 2j, {x, y} has form l." Referring to the definition of form, this again is clear: the sequences for x and y need to be interleaved in strict order, which may force adjacent sequences in the expression above to be concatenated. The form can be as small as 1, if x < y, when all the sequences get concatenated; it can be as high as 2j+1 if no concatenations occur for x, as in this example (Footnote 7):

a_j < a_r < b(1,j,k_1) < b(1,r,p_1) < b(2,j,k_2) < … < b(j,r,p_j) * … * b(r,r,p_r).

To ask for a justification of this obvious claim would be unreasonable. And yet to formalize the process of examining the interleavings of these sequences and arranging them so as to satisfy the conditions of Definition 3.4—so that those conditions can be proved—turned out to require weeks of work.

Our first attempt involved the following function, which took as arguments a sequence of sequences coupled with a set B, which in practice would contain the elements of the opposite sequence. The idea was to concatenate consecutive subsequences unless some element of B separated them.

fun coalesce where
  "coalesce [] B = []"
| "coalesce [a] B = [a]"
| "coalesce (a1#a2#as) B =
     (if ∃y∈B. a1 < [y] ∧ [y] < a2
      then a1 # coalesce (a2#as) B
      else coalesce ((a1@a2)#as) B)"

Nearly all the necessary properties could be proved easily, but there seemed to be no way to show that the resulting interaction scheme (obtained by applying coalesce to both sequences of sequences) was correctly ordered. Thanks to the counterexample finder, many conjectures could be rejected without attempting a proof.

Therefore coalesce was abandoned in favor of the following predicate, which deals with both sequences of sequences simultaneously, considering each of them as cut into two parts (the arguments as1@as2 and bs1@bs2 represent arbitrary cut points for both). Then the two leading parts, as1 and bs1, are concatenated provided all the ordering properties are satisfied.

inductive merge where
  NullNull: "merge [] [] [] []"
| Null: "as ≠ [] ⇒ merge as [] [concat as] []"
| App: "[[as1 ≠ []; bs1 ≠ []; concat as1 < concat bs1; concat bs1 < concat as2; merge as2 bs2 as bs]]
          ⇒ merge (as1@as2) (bs1@bs2) (concat as1 # as) (concat bs1 # bs)"

The conditions concat as1 < concat bs1 and concat bs1 < concat as2 ensure that the elements of concat bs1 lie between the two halves of the first sequence. Thus, just enough of the a-sequence is taken so that it lies before the start of the b-sequence, from which just enough elements are taken to allow the a-sequence to resume. This formulation avoids any direct expression of iteration or computation (the root of the problems with coalesce) in favor of writing the a and b-sequences as each divided at an arbitrary point, the rule applying only subject to the ordering constraints shown.

With this approach, most of the required properties are shown easily enough. The most difficult is the following statement, which expresses that any two sequences, subject to certain conditions, can be successfully merged. Again, counterexample checking was crucial to find the simplest formulation of the necessary conditions. The proof is by induction on the sum of the lengths of as and bs.

proposition merge_exists:
  assumes "strict_sorted (concat as)" "strict_sorted (concat bs)"
    "as ∈ lists (- {[]})" "bs ∈ lists (- {[]})"
    "hd as < hd bs" "as ≠ []" "bs ≠ []"
    and disj: "⋀a b. [[a ∈ list.set as; b ∈ list.set bs]] ⇒ a < b ∨ b < a"
  shows "∃us vs. merge as bs us vs"

7.5 Final remarks on Larson’s proof

The formalization of Larson's proof of ω^ω → (ω^ω, m) took approximately six months. This includes a month and a half spent formalizing Erdős–Milner [Citation12] and half a month proving the Nash-Williams partition theorem. Her Lemma 3.8 required two months, 11 days of which were devoted to the order type proof mentioned in Section 7.1. The remaining two months were devoted to Lemma 3.7 and the main theorem. Due to COVID-19, most of the work was undertaken at home, not the best environment for doing mathematics.

Having looked at Larson's work in excruciating detail for months, we can only be impressed by the intricacy, delicacy and fragility of her constructions and wonder how she kept so many details in mind. She deserves her reputation for being careful and clear. Although formalization efforts regularly identify flaws in mathematical exposition, we found no serious errors in hers. Her narrative proof is seven pages long [Citation28, p. 133–140] and the corresponding formalization is some 4600 lines, not counting prerequisites such as Nash-Williams and Erdős–Milner, which are proved elsewhere. Estimating 30 lines per page, this suggests a de Bruijn factor of roughly 23.

8 Conclusion

Our work shows that ordinal partition theory is clearly formalisable within Isabelle/HOL augmented with a straightforward axiomatization of ZFC. We have found no serious errors in the original mathematical material, and although we struggled in some places it is quite hard to fault Larson’s exposition [Citation28] beyond noting that a few hints here and there could have saved us quite a bit of effort. The reader of a mathematical proof is expected to invest much thought.

As usual, a concern is the disproportionate effort needed to prove some simple observations. The inductive constructions of sequences that appear in Larson’s Lemmas 3.7 and 3.8 [Citation28] must surely be regarded as straightforward and yet we struggled to find the right language in which to express them and derive their obvious properties. The same can be said of the order type calculation in 3.8. It is also unfortunate that the degenerate cases l = 1 and l = 2 in 3.7 required so much work. However, given the inherent complexity of the subject matter, it is reassuring to know that the entire development has been checked formally, with a proof text [Citation37] that is available for inspection or automated analysis.

This case study also demonstrates the diversity of mathematical topics that can be formalized in Isabelle/HOL: we have formalized material that is light years away from what is usually formalised.

Ordinal partition relations seem to be at the same time formalisable in Isabelle/HOL and at a point of their mathematical development where human advances seem rare and not forthcoming. None of the high-powered techniques of set theory and model theory, such as large cardinals, forcing, pcf or elementary submodels, seem to be relevant. Therefore, we hope that some advances in this subject might be obtained through automation. However, we retain the humble conviction that doing enough preparatory work with Isabelle to be able to produce such results will require a considerable intellectual effort.

Acknowledgments

All three authors thank the London Mathematical Society for support through their Grant SC7-1920-11. Thanks to Stevo Todorčević for advice and to the anonymous reviewers for their helpful feedback on the first submitted version of this article.

Conflict of Interest

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

Angeliki Koutsoukou-Argyraki and Lawrence C. Paulson thank the ERC for their support through the Advanced Grant ALEXANDRIA (Project GA 742178). Mirna Džamonja's research was supported by the GAČR project EXPRO 20-31529X and RVO: 67985840 at the Czech Academy of Sciences; she received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 1010232.

Notes

1 The word generalization deserves explanation. It is often said, including by Larson [Citation28], that the Nash-Williams theorem is a Ramsey-type statement; however, this is not immediately apparent. Ramsey's theorem applies to colorings of pairs of elements of ω, while the Nash-Williams theorem applies to colorings of singletons in a thin set, not of pairs of its elements. The crude analogue of the theorem for pairs of elements of a thin set is easily seen to be false; however, there is a finer version of the Nash-Williams theorem, using fronts and the shift-initial-segment relation, which does apply to pairs and n-tuples of elements of a thin set.

2 For readers familiar with forcing: we may see these definitions as coming from a forcing notion consisting of the pairs (s, B) with max(s) < min(B), where the extension relation is given by (s,B) ≤ (t,C) if s ⊆ t, C ⊆ B and t ∖ s ⊆ B. This resembles Prikry or Mathias forcing. In Hodkinson's notes such pairs are called Prikry pairs and the idea of using them in combinatorics comes from the Galvin-Prikry partition theorem [Citation15].

3 An undefined value is simply regarded as underspecified. It has the expected type, and we always have (ϵx.ϕ)=(ϵx.ϕ) for example.

4 {Sup S<..} is the set of integers greater than max S.

5 Which is even executable, if N is effectively enumerable.

6 Conceptually the same as thin_set, it is a property of sets of lists, not sets of sets.

7 Thus it seems that Larson should have written l<2(j+1). This seems to be the only error in her paper.

References

  • Avigad, J., Hölzl, J., Serafin, L. (2017). A formally verified proof of the central limit theorem. J. Autom. Reason. 59(4): 389–423.
  • Blanchette, J. C., Nipkow, T. (2010). Nitpick: A counterexample generator for higher-order logic based on a relational model finder. In: Kaufmann, M., Paulson, L. C., eds. Interactive Theorem Proving, volume 6172 of Lecture Notes in Computer Science, Springer; pp. 131–146.
  • Bulwahn, L. (2012). The new Quickcheck for Isabelle. In Hawblitzel, C., Miller, D., eds. Certified Programs and Proofs, LNCS Vol. 7679; pp. 92–108. Springer.
  • Carroy, R., Pequignot, Y. (2020). Well, better and in-between. In: Monika Seisenberger, Peter Schuster, Andreas Weiermann, eds. Well Quasi-orders in Computation, Logic, Language and Reasoning. Springer; pp. 1–27.
  • Chang, C-C. (1972). A partition theorem for the complete graph on ω^ω. J. Comb. Theory (A). 12: 396–452.
  • Church, A. (1940). A formulation of the simple theory of types. J. Symb. Log. 5: 56–68. doi:https://doi.org/10.2307/2266170
  • Cohen, P. (1966). Set Theory and the Continuum Hypothesis. New York: Benjamin.
  • Džamonja, M. (2020). Fast Track to Forcing. Cambridge: Cambridge University Press.
  • Erdős, P. (1987). Some problems on finite and infinite graphs. In Logic and combinatorics (Arcata, Calif., 1985), Volume 65 of Contemp. Math. Providence, RI: Amer. Math. Soc.; pp. 223–228
  • Erdős, P., Rado, R. (1956). A partition calculus in set theory. Bull. Amer. Math. Soc. 62: 427–489. doi:https://doi.org/10.1090/S0002-9904-1956-10036-0
  • Erdős, P., Hajnal, A., Máté, A., Rado, R. (1984). Combinatorial Set Theory: Partition Relations for Cardinals. Studies in Logic and the Foundations of Mathematics 106. Elsevier Science Ltd.
  • Erdős, P., Milner, E. C. (1972). A theorem in the partition calculus. Canad. Math. Bull. 15(4): 501–505.
  • Erdős, P., Milner, E. C. (1974). A theorem in the partition calculus corrigendum. Canad. Math. Bull. 17(2): 305.
  • Galvin, F., Larson, J. A. (1974). Pinning countable ordinals. Fund. Math. 82: 357–361. doi:https://doi.org/10.4064/fm-82-4-357-361
  • Galvin, F., and Prikry, Karel L. (1973). Borel sets and Ramsey’s theorem. J. Symb. Log. 38: 192–198. doi:https://doi.org/10.2307/2272055
  • Gordon, M. J. C. (2015). Tactics for mechanized reasoning: A commentary on Milner (1984) ‘The use of machines to assist in rigorous proof’. Philos. Trans. R. Soc. A. 373(2039).
  • Hajnal, A., Larson, J. A. (2010). Partition relations. In M. Foreman, A. Kanamori, eds. Handbook of Set Theory, Vol. 1, Springer; pp. 120–213.
  • Hales, T., Adams, M., Bauer, G., Dang, T. D., Harrison, J., Hoang, L. T., Kaliszyk, C., Magron, V., Mclaughlin, S., Nguyen, T. T., et al. (2017). A formal proof of the Kepler conjecture. Forum Math. Pi. 5:e2.
  • Hales, T. C. (2007). The Jordan curve theorem, formally and informally. Amer. Math. Monthly. 114(10): 882–894.
  • Harrison, J. (2000). Floating point verification in HOL: Light: the exponential function. Form. Methods Syst. Des. 16: 271–305.
  • Harrison, J. (2009). Formalizing an analytic proof of the prime number theorem. J. Autom. Reason. 43(3): 243–261.
  • Hodkinson, I. (2003). Kruskal’s theorem and Nash-Williams theory, after Wilfrid Hodges. version 3.6. Available at: www.doc.ic.ac.uk/imh/papers/bar.pdf, February.
  • Hölzl, J., Heller, A. (2011). Three chapters of measure theory in Isabelle/HOL. In: Eekelen, M., Geuvers, H., Schmaltz, J., Wiedijk, F., eds. Interactive Theorem Proving — Second International Conference, LNCS 6898. Springer; pp. 135–151.
  • van Benthem Jutting, L.S. (1977). Checking Landau’s “Grundlagen” in the AUTOMATH System. PhD thesis, Eindhoven University of Technology. doi:https://doi.org/10.6100/IR23183.
  • Kalvala, S. (1991). HOL around the world. In: Archer, M., Joyce, J. J., Levitt, K. N., Windley, P. J., eds. International Workshop on the HOL Theorem Proving System and its Applications. IEEE Computer Society; pp. 4–12.
  • Kunen, K. (1980). Set Theory, volume 102 of Studies in Logic and the Foundations of Mathematics. North-Holland.
  • Kříž, I., Thomas, R. (1991). Analyzing Nash-Williams’ partition theorem by means of ordinal types. Discrete Math. 95(1–3): 135–167. Directions in infinite graph theory and combinatorics (Cambridge, 1989).
  • Larson, J. A. (1973). A short proof of a partition theorem for the ordinal ω^ω. Ann. Math. Logic. 6: 129–145.
  • Laver, R. (1971). On Fraïssé’s order type conjecture. Ann. Math. 93(1): 89–111.
  • Marcone, A. (1996). On the logical strength of Nash-Williams’ theorem on transfinite sequences. In: W. Hodges, M. Hyland, C. Steinhorn, J. Truss, eds. Logic: from Foundations to Applications; European logic colloquium. Clarendon Press, 1996, pp. 327–351.
  • Nash-Williams, C. St. J. A. (1965). On well-quasi-ordering transfinite sequences. Proc. Cambridge Philos. Soc. 61: 33–39. doi:https://doi.org/10.1017/S0305004100038603
  • Nicely, T. R. (2011). Pentium FDIV flaw. FAQ. Available at: https://faculty.lynchburg.edu/nicely/pentbug/pentbug.html.
  • Nipkow, T., Paulson, L. C., Wenzel, M. (2002). Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Springer. Available at http://isabelle.in.tum.de/dist/Isabelle/doc/tutorial.pdf.
  • Owre, S., Rajan, S., Rushby, J. M., Shankar, N., Srivas, M. K. (1996). PVS: Combining specification, proof checking, and model checking. In: Alur, R., Henzinger, T. A. eds. Computer Aided Verification: 8th International Conference, CAV ’96, LNCS 1102, Springer; pp. 411–414.
  • Paulson, L. C. (2019). Zermelo Fraenkel set theory in higher-order logic. Archive of Formal Proofs, October. Available at: http://isa-afp.org/entries/ZFC_in_HOL.html, Formal proof development.
  • Paulson, L. C. (2020). The Nash-Williams partition theorem. Archive of Formal Proofs, May. Available at: http://isa-afp.org/entries/Nash_Williams.html, Formal proof development.
  • Paulson, L. C. (2020).Ordinal partitions. Archive of Formal Proofs, August. Available at: http://isa-afp.org/entries/Ordinal_Partitions.html, Formal proof development.
  • Paulson, L. C., Nipkow, T., Wenzel, M. (2019). From LCF to Isabelle/HOL. Formal Aspects Comput. 31(6): 675–698. doi:https://doi.org/10.1007/s00165-019-00492-1
  • Ramsey, F. P. (1929). On a problem of formal logic. Proc. London Math. Soc. (2), 30(4): 264–286.
  • Schipperus, R. (1999). Countable partition ordinals. PhD thesis, University of Colorado-Boulder.
  • Schipperus, R. (2010). Countable partition ordinals. Ann. Pure Appl. Log. 161(10): 1195–1215.
  • Specker, E. (1956). Teilmengen von Mengen mit Relationen. Commentarii Mathematici Helvetici. 31: 302–314. doi:https://doi.org/10.1007/BF02564361
  • Todorčević, S. (1987). Partitioning pairs of countable ordinals. Acta Math. 159(3/4): 261–294.
  • Todorčević, S. (1989). Partition Problems in Topology. Providence, RI: Amer. Math. Soc.
  • Todorcevic, S. (2010). Introduction to Ramsey spaces, volume 174 of Annals of Mathematics Studies. Princeton, NJ: Princeton University Press.
  • Wiedijk, F. (2000). The de Bruijn factor. Technical report. Nijmegen, Netherlands: Department of Computer Science, Nijmegen University. Available at: http://www.cs.ru.nl/freek/factor/.