
Types are Not Harmless

Introduction

Types have become ubiquitous in computer science. Because the advantages of typed programming languages are so obvious, most computer scientists assume that mathematics should be formalized with types. They seem unaware of the harm that types inflict on the simplicity and elegance of mathematics. Some even feel that mathematics cannot be formalized without types. I believe that types do more harm than good in a formal system for mathematical reasoning. I do not expect many readers to agree with this conclusion. But, I hope to demonstrate that types are neither harmless nor necessary, and that their use in mathematics needs to be justified.

It is quite easy to formalize mathematics using sets and operators. Section 2 describes a formalism that I call ZFM (ZF for Mathematics). This section will seem excruciatingly obvious to some readers, since ZFM is a straightforward formalization of everyday mathematics. However, I have found that many computer scientists think that a typeless set theory must harbor logical inconsistencies.

Section 3 describes how the concepts of mathematics are expressed in typed formalisms, and Section 4 discusses some problems that types introduce. These problems arise in an area of particular concern to computer scientists–formal reasoning about programs. I expect them to occur in more conventional branches of mathematics as well.

The problems raised by types are not insurmountable. Each problem that I describe can be solved in most type systems. But, the solutions add complexity and make a formalism harder to use. The question I wish to raise is not how type systems can solve these problems, but whether the benefits types provide offset the harm they do. Section 5 discusses what those benefits might be.

Math Without Types

There are a number of ways to formalize mathematics without using types. The formalism I call ZFM is the one I find to be the simplest and most natural way to capture ordinary mathematical reasoning. I describe it informally; the formal definition of ZFM should be a straightforward exercise for a mathematician familiar with the foundations of logic and set theory.

Logic

ZFM is based on first-order predicate logic with equality. Its formulas contain variables, quantifiers, and operators. Each operator has a signature, defined in terms of the primitive signatures term and formula. For example, the equality operator (=) has signature term × term → formula. Quantification is only over variables, which have signature term.1

ZFM also includes Hilbert’s ε operator⟦1⟧, which I call choose. The expression choose 𝑥 : 𝑃(𝑥) denotes an arbitrary value 𝑥 that satisfies 𝑃(𝑥), if one exists; otherwise it denotes a completely arbitrary value. The choose operator satisfies the following axiom schemas (⇒ is implication and ≡ is the boolean operator “if and only if”).

⊢ (∃𝑥 : 𝑃(𝑥)) ⇒ 𝑃(choose x : 𝑃(𝑥))

⊢ (∀𝑥 : 𝑃(𝑥) ≡ 𝑄(𝑥)) ⇒ (choose x : 𝑃(𝑥)) = (choose x : 𝑄(𝑥))
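
To make the two schemas concrete, here is a toy Python model of choose over a small finite universe. ZFM's choose is not computable in general, so this is only an illustration; the universe U, the fixed enumeration order, and the UNSPECIFIED default are artifacts of the sketch, not part of ZFM.

    # A toy model of Hilbert's choose over a finite universe.
    U = list(range(10))          # a stand-in for "all values"
    UNSPECIFIED = object()       # one fixed, completely arbitrary value

    def choose(P):
        """Return some fixed x in U satisfying P, else the arbitrary value."""
        for x in U:              # fixed enumeration => deterministic choice
            if P(x):
                return x
        return UNSPECIFIED

    # First schema: if a witness exists, the chosen value is a witness.
    assert choose(lambda x: x > 3) > 3
    # Second schema: predicates with the same extension choose the same value.
    assert choose(lambda x: x % 2 == 0) == choose(lambda x: x % 2 == 0 and x >= 0)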

Mathematicians need to define new operators in terms of the primitive ones provided by a formalism. ZFM takes the simple view that definitions are purely syntactic. For example, writing 𝐹(𝑥) ≜ ∃𝑦 : 𝐺(𝑥,𝑦) makes 𝐹(𝑒) an abbreviation for ∃𝑦 : 𝐺(𝑒, 𝑦), for any expression 𝑒. An operator can be defined only in terms of primitive operators and operators that have already been defined. (Recursion is discussed below.) Thus, by replacing defined symbols with their definitions, any expression can be reduced to one containing only the primitive operators of ZFM. A definition cannot introduce unsoundness, so we never have to prove a theorem in order to make a definition. However, we may have to prove that a defined operator has the properties we want.

One useful operator is the if/then/else construct, which has signature formula × term × term → term. It is defined by

if 𝑝 then 𝑒₁ else 𝑒₂ ≜ choose 𝑥 : (𝑝 ⋀ (𝑥 = 𝑒₁)) ⋁ (¬𝑝 ⋀ (𝑥 = 𝑒₂))

Set Theory

The primitive operators of ZFM include the ones of Zermelo-Fraenkel set theory⟦2⟧. ZFM uses the simplest form of set theory, in which everything is a set. Thus, the number 2 is a set, though we usually don’t think of it as one because we don’t care what its elements are. The phrase “the set 2” seems strange, so I let value be synonymous with set and refer to 2 as a value.

Figure 1² describes a collection of operators from set theory that I have found adequate for ordinary mathematics. Some of these operators are defined in terms of the others; the rest are primitive and are effectively defined by axioms. For example, the Separation Axiom⟦2, page 325⟧ asserts that, for any set 𝑆, the collection {𝑥 ∈ 𝑆 : 𝑃(𝑥)} of all elements of 𝑆 satisfying 𝑃(𝑥) is a set.3 It makes no practical difference which of these operators are taken to be primitive. (They can all be defined in terms of ∈.)

For most uses of ZFM, it suffices to understand the meanings of the operators of Figure 1, and to know that two sets are equal iff they have the same elements. Formally, we need axioms to guarantee the existence of “enough” distinct sets. For computer science, it seems necessary only to assume an axiom of infinity that implies the existence of the natural numbers. More sophisticated mathematical applications may need the Axiom of Choice⟦2, page 335⟧.4

Naive informal reasoning about sets can be unsound. In ZFM, soundness is guaranteed by restricting how one can construct sets. First-order logic with the operators of Figure 1, and the corresponding axioms, is sound. For example, Russell’s paradox is avoided because the “set” 𝓡 of all sets that are not elements of themselves, which satisfies 𝓡 ∈ 𝓡 iff it satisfies 𝓡 ∉ 𝓡, cannot be expressed with the operators of Figure 1.
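
The restriction is easy to see in an executable toy. In the Python sketch below (frozensets standing in for sets of sets, an encoding of mine), the only available comprehension is Separation's, which must start from an already-constructed set:

    # Separation: build {x ∈ S : P(x)} only from an EXISTING set S.
    def separation(S, P):
        return frozenset(x for x in S if P(x))

    S = frozenset({frozenset(), frozenset({frozenset()})})
    # "All elements of S that are not members of themselves" is harmless:
    R_in_S = separation(S, lambda x: x not in x)
    # But "all sets x with x ∉ x" cannot even be requested: every
    # comprehension needs some already-constructed S to draw from.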

Functions

Mathematicians usually define a function to be a set of ordered pairs. Formally, one can define the operator Apply by

Apply (𝑓, 𝑥) ≜ choose 𝑦 : ⟨𝑥, 𝑦⟩ ∈ 𝑓

and let 𝑓(𝑥) be an abbreviation for Apply(𝑓, 𝑥). It doesn’t matter how functions are defined. I prefer simply to regard the four operators of Figure 2⁵ as primitive, where I write 𝑓[𝑥] instead of the customary 𝑓(𝑥) to distinguish function application from operator application.6 A function 𝑓 has a domain, which is the set written dom 𝑓. The set [𝑆 → 𝑇] consists of all functions 𝑓 such that dom 𝑓 = 𝑆 and 𝑓[𝑥] ∈ 𝑇 for all 𝑥 ∈ 𝑆. The notation [𝑥 ∈ 𝑆 ↦ 𝑒(𝑥)] is used to describe a function explicitly. For example, if 𝐑 is the set of real numbers, then [𝑟 ∈ 𝐑\{0} ↦ 1/𝑟] is the reciprocal function, whose domain is the set 𝐑\{0} of nonzero reals.
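
For readers who like executable models, here is a minimal Python sketch of the four operators of Figure 2, with a ZFM function encoded as a dict from domain elements to values; the encoding and the restriction to finite sets are assumptions of the sketch, not something ZFM prescribes.

    from itertools import product

    def apply_fn(f, x):        # f[x]
        return f[x]

    def dom(f):                # dom f
        return set(f)

    def fn(S, e):              # [x ∈ S ↦ e(x)]
        return {x: e(x) for x in S}

    def fn_space(S, T):        # [S → T], enumerated for finite S and T
        S = list(S)
        return [dict(zip(S, ys)) for ys in product(list(T), repeat=len(S))]

    reciprocal = fn({1, 2, 4}, lambda r: 1 / r)   # [r ∈ {1,2,4} ↦ 1/r]
    assert apply_fn(reciprocal, 4) == 0.25
    assert all(dom(f) == {0, 1} for f in fn_space({0, 1}, {True, False}))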

Functions can be defined recursively using the choose operator. For example, the factorial function fact is defined by

𝑓𝑎𝑐𝑡 ≜ choose 𝑓 : 𝑓 = [𝑛 ∈ 𝐍 ↦ if 𝑛 = 0 then 1 else 𝑛 · 𝑓[𝑛−1]]

where 𝐍 is the set of natural numbers. I write this definition as

𝑓𝑎𝑐𝑡[𝑛:𝐍] ≜ if 𝑛 = 0 then 1 else 𝑛 · 𝑓𝑎𝑐𝑡[𝑛−1]

In general, 𝑓[𝑥:𝑆] ≜ 𝑒(𝑥) defines 𝑓 to equal choose 𝑓 : 𝑓 = [𝑥 ∈ 𝑆 ↦ 𝑒(𝑥)]. To deduce that 𝑓 is a function with domain 𝑆, we must prove that there exists a function 𝑓 that equals [𝑥 ∈ 𝑆 ↦ 𝑒(𝑥)]. This is the same proof that is often cited as a requirement for the definition to be legal. In ZFM, definitions are always legal; proofs are needed only to deduce properties of the definiendum.
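
The proof obligation has a concrete flavor. A hedged Python sketch, restricting 𝐍 to a finite initial segment (my simplification), exhibits a candidate function and checks that it equals [𝑛 ∈ 𝑆 ↦ 𝑒(𝑛)], i.e. that it satisfies the defining equation at every point of its domain:

    # Exhibit a function satisfying fact's defining equation on 0..19.
    S = range(20)
    fact = {}
    for n in S:                    # construct a candidate f
        fact[n] = 1 if n == 0 else n * fact[n - 1]

    # f = [n ∈ S ↦ if n = 0 then 1 else n · f[n−1]] holds pointwise:
    assert all(fact[n] == (1 if n == 0 else n * fact[n - 1]) for n in S)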

Functions are different from operators.7 A function 𝑓 has a domain, and we can define the value of 𝑓[𝑥] only for elements 𝑥 in its domain. The expression 𝑓𝑎𝑐𝑡[√2], which is an abbreviation for Apply(𝑓𝑎𝑐𝑡, √2), is syntactically a term, so it denotes a value. However, we don’t know what value. It need not equal √2 · 𝑓𝑎𝑐𝑡[√2 − 1]. It need not even be a number. But, whatever its value, it must equal 𝑓𝑎𝑐𝑡[2/√2], since √2 equals 2/√2. Functions are just like other values; for example, 𝑓𝑎𝑐𝑡 by itself is syntactically a term. We can quantify over sets of functions, writing expressions such as ∀𝑓 ∈ [𝐑 → 𝐑] : |𝑓| ≥ 0.8

Operators are different from functions. Consider the operator ∪, where ∪𝑆 is the union of all elements of 𝑆. We cannot define a function union so that union[𝑆] equals ∪𝑆 for all sets 𝑆. The domain of union would have to be a set that contains all sets, and there is no such set. (If there were, we could express Russell’s paradox.) The symbol ∪ by itself is not a term, so it does not denote a value. We cannot quantify over operators. The string ∃𝑈 : 𝐑 ∈ 𝑈(𝐑) is not syntactically well formed, since we can write 𝐑 ∈ 𝑈(𝐑) only if the signature of 𝑈 is term → term, and bound variables are terms.

Examples

ZFM is easy to use in practice. As an example of something that can pose difficulties for a typed system but is trivial in ZFM, let us define len(𝑠) to be the length of 𝑠, for any finite sequence 𝑠. Formally, a sequence 𝑠₁, …, 𝑠ₙ is a function 𝑠 whose domain is the set {1, …, 𝑛}, such that 𝑠[𝑖] = 𝑠ᵢ for 1 ≤ 𝑖 ≤ 𝑛. It is impossible to define len to be a function, since its domain would have to be a set consisting of all finite sequences, and there is no such set in ZFM. But, the operator len is easily defined by

len(𝑠) ≜ choose 𝑛 : (𝑛 ∈ 𝐍) ⋀ (dom 𝑠 = {𝑖 ∈ 𝐍 : 1 ≤ 𝑖 ≤ 𝑛})

This defines len(𝑠) for all 𝑠, even if 𝑠 is not a sequence. In that case, dom 𝑠 is not a set of the form {1, …, 𝑛} for any 𝑛 in 𝐍, and the value of len(𝑠) is some unspecified value.
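
With sequences encoded as dicts as above, the len operator is easy to mimic in Python; the UNSPECIFIED sentinel below is my stand-in for ZFM's arbitrary value, and nothing is promised about it.

    UNSPECIFIED = object()

    def length(s):
        # choose n ∈ N with dom s = {1, ..., n}, if such an n exists
        n = len(s)                 # the only candidate worth checking
        if isinstance(s, dict) and set(s) == set(range(1, n + 1)):
            return n
        return UNSPECIFIED         # s is not a sequence: no promises

    assert length({1: 'a', 2: 'b', 3: 'c'}) == 3
    assert length({0: 'a', 1: 'b'}) is UNSPECIFIED   # not a sequence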

As a further example of how easily mathematics can be formalized with ZFM, Figure 3⁹ defines the Riemann integral of elementary calculus. The definition uses the operators of Figures 1 and 2, the set 𝐑 of real numbers, the subset 𝐍 of naturals, the usual arithmetic operations and ordering relations on numbers, and the operator len. The integral ∫ₐᵇ 𝑓 is defined to be the limit of trapezoidal approximations 𝑆ₚ(𝑓) determined by partitions 𝑝 of the interval [𝑎, 𝑏], as the diameter (maximum subinterval length) of 𝑝 goes to zero. Figure 3 makes the following preliminary definitions: seq(𝑆) is the set of sequences of elements of the set 𝑆; Σₘⁿ 𝑓, an abbreviation for Σ(𝑚, 𝑓)[𝑛], equals 𝑓[𝑚] + 𝑓[𝑚+1] + ⋯ + 𝑓[𝑛]; 𝐑⁺ is the set of positive reals; |𝑟| is the absolute value of 𝑟; and 𝑃ₐᵇ(δ) is the set of partitions of [𝑎, 𝑏] (sequences 𝑎 = 𝑝[1], 𝑝[2], …, 𝑏) of diameter less than δ.

It is often best to define something axiomatically rather than constructing it explicitly. The choose operator makes this easy. For example, to define the natural numbers 𝐍 with successor function succ and zero element 0, we first define Peano(𝑁, 𝑠, 𝑧) to be the formula asserting that the set 𝑁 with successor function 𝑠 and zero element 𝑧 satisfies Peano’s axioms.12

We then define

𝐍 ≜ (choose Ψ : Peano(Ψ[1], Ψ[2], Ψ[3]))[1]
succ ≜ (choose Ψ : Peano(Ψ[1], Ψ[2], Ψ[3]))[2]
0 ≜ (choose Ψ : Peano(Ψ[1], Ψ[2], Ψ[3]))[3]

“Higher-order” operators pose no problem. For example, consider the operator Γ defined by

Γ(𝐹)(𝑥, 𝑦) ≜ 𝐹(𝑥) = 𝑦

It has signature (term → term) → (term × term → formula). If 𝐺 has signature term → term, then Γ(𝐺) has signature term × term → formula, and Γ(𝐺)(𝑒, 𝑓) equals the formula 𝐺(𝑒) = 𝑓, for any terms 𝑒 and 𝑓. We are still in the domain of first-order logic, because we can quantify only over terms, not over arbitrary operators.
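
A two-line Python rendering may make the signatures concrete; the example operator G is mine.

    def Gamma(F):                  # (term → term) → (term × term → formula)
        return lambda x, y: F(x) == y

    G = lambda e: e + 1            # an operator of signature term → term
    assert Gamma(G)(2, 3)          # Γ(G)(2, 3) is the formula G(2) = 3
    assert not Gamma(G)(2, 4)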

What is 1/0?

Elementary school children and programmers are taught that 1/0 is meaningless, and that they are committing an error by even writing it. More sophisticated logicians say that 1/0 equals the nonvalue ⊥, which acts like a virus, turning infected expressions to ⊥. They devise complicated rules to describe how it spreads–for example, declaring that (0 = 1) ∨ (0 = ⊥) equals ⊥, but (0 = 1) ∧ (0 = ⊥) equals false. ZFM provides a simpler answer to the question of what 1/0 is: we don’t know and we don’t care.

Let us take / to be a function with domain 𝐑 × (𝐑\{0}), the set of pairs ⟨𝑟, 𝑠⟩ of real numbers with 𝑠 nonzero. Then 1/0 is an abbreviation for /[⟨1, 0⟩]. Since ⟨1, 0⟩ is not in the domain of /, we know nothing about the value of 1/0; it might equal √2, it might equal 𝐑, or it might equal anything else. We don’t care what it equals. For example, consider

(𝑥 ∈ 𝐑) ∧ (𝑥 ≠ 0) ⇒ (𝑥 · (1/𝑥) = 1)   (1)

This formula is valid, meaning that it is satisfied by all values of 𝑥. Substituting 0 for 𝑥 yields the formula false ⇒ (0 · (1/0) = 1), which equals true regardless of the value of 1/0, and regardless of whether or not 0 · (1/0) equals 1. The subformula 𝑥 · (1/𝑥) = 1 of (1) may or may not be valid; we don’t know what 0 · (1/0) or 𝐑 · (1/𝐑) equal, so we don’t know whether or not they equal 1.
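
A small executable illustration: division modeled as a partial function that returns one arbitrary fixed object outside its domain (the sentinel and the sample values are my choices), under which formula (1) checks out even at 𝑥 = 0.

    UNSPECIFIED = object()

    def div(r, s):                 # defined only for s ≠ 0
        return r / s if s != 0 else UNSPECIFIED

    def formula_1(x):              # (x ∈ R) ∧ (x ≠ 0) ⇒ (x · (1/x) = 1)
        if x != 0:
            return x * div(1, x) == 1
        return True                # antecedent false: implication true

    assert all(formula_1(x) for x in [-2.0, -1.0, 0.0, 0.5, 1.0, 4.0])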

Semantic Types

We can define what it means for a ZFM formula to be semantically type correct. For each operator 𝐹, we must define an operator T_𝐹 that is true iff the arguments of 𝐹 are “sensible”. For example, T_Apply(𝑓, 𝑥) equals true iff 𝑓 is a function with 𝑥 in its domain, in which case we say that the pair 𝑓, 𝑥 is in the semantic type of Apply. A formula is semantically type correct iff its validity does not depend on the value of any operator applied to values not in its semantic type. Formula (1) is semantically type correct, but its subformula 𝑥 · (1/𝑥) = 1 is not.
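
For Apply, the semantic-type test is easy to state operationally. A toy Python check, with ZFM functions again encoded as dicts (an assumption of the sketch):

    def T_Apply(f, x):
        return isinstance(f, dict) and x in f

    fact5 = {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120}
    assert T_Apply(fact5, 3)           # sensible: 3 is in dom fact5
    assert not T_Apply(fact5, -1)      # not sensible: value unspecified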

Theorems should be semantically type correct, but arbitrary formulas that appear within them need not be. If we know nothing about a value, then we can’t prove a theorem whose validity depends on that value. Thus, proving a theorem shows that it is semantically type correct. ZFM has rules for proving theorems; there is no need to introduce semantic types and semantic type checking.

Types

There is no consensus on what constitutes a type. Although everyone agrees that the prime natural numbers form a set, computer scientists disagree on whether they should form a type. There are many different type systems; they differ in what kinds of types can be declared and what those type declarations mean. However, most type systems share some features, which I now describe.

Type systems allow some sets as types. Although one might consider ZFM to be a typed logic with only two basic types, term and formula, we expect a type system to include such types as Nat (natural numbers) and Real (real numbers). Type systems can be classified according to how rich a collection of sets can be expressed as types. A fine type system allows the set 𝐑\{0} of nonzero reals to be a type; a coarse one allows only “simpler” sets like 𝐑 as types.

Type systems generally replace the separate concepts of functions and operators with the single concept of a typed function. For example, the cube root function ∛ might have type Real → Real. Replacing an operator like len with a typed function requires the concept of polymorphism, discussed in Section 3.1.

Types are used for type checking. Typed formalisms prevent errors by allowing one to write only formulas that are accepted by the type checker.

Other uses of types are discussed in Section 3.2.

Polymorphism

Advocates of type systems might claim that eliminating the distinction between functions and operators produces a simpler formalism. However, this simplification would be of no interest if it were achieved by eliminating useful operators like len. In most type systems, len is a polymorphic function, often thought of as a collection of simple functions of type Seq(T) → Nat, for certain types T. What those “certain types” T are depends on the type system; the type of len itself may or may not be one of them. Just replacing functions and operators with simple functions and polymorphic functions does not constitute a simplification. Devising a type system in which there is no essential difference between ∛ and len is a nontrivial task. Such a type system is unlikely to be simpler than just distinguishing between functions and operators.

A form of polymorphism that is often called overloading is used to formalize some informal notation. For example, a mathematician might write × for both the Cartesian product and the vector product on 𝐑³, so 𝑥 × 𝑦 ∈ 𝑆 × 𝑇 asserts that the vector product 𝑥 × 𝑦 is an element of the Cartesian product 𝑆 × 𝑇. This can be formalized by declaring × to be a polymorphic function. In a typeless system, one could do the same thing by defining the operator × as

𝑎 × 𝑏 ≜ if 𝑎 ∈ 𝐑³ then … else …

Both approaches seem complicated and potentially confusing. If a rigorous treatment is required, it is probably best to use two different symbols and write 𝑥 ⊗ 𝑦 ∈ 𝑆 × 𝑇.
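
In an untyped programming language, the overloaded definition above is just runtime dispatch. A hedged Python sketch, modeling vectors in 𝐑³ as 3-tuples and Cartesian products as sets of pairs (both encodings mine):

    # a × b ≜ if a ∈ R³ then (vector product) else (Cartesian product)
    def times(a, b):
        if isinstance(a, tuple) and len(a) == 3:      # vector product on R³
            return (a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0])
        return {(x, y) for x in a for y in b}         # Cartesian product

    assert times((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
    assert times({1, 2}, {'a'}) == {(1, 'a'), (2, 'a')}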

In some formalisms, + is regarded as polymorphic because Nat and Real are disjoint types, with separate definitions of addition. In such a formalism, the real number 2.0 does not equal the natural number 2. Most mathematicians would regard this as bizarre, but some computer scientists believe that the naturals should not be a subset of the reals. Typed formalisms usually permit either approach. However, with many type systems, it is impossible first to define the naturals, and then to define the reals as a superset. This is easy to do in ZFM by defining 𝐑 axiomatically, with 𝐍 ⊆ 𝐑 as one of the axioms.

Types as Abbreviations

The type declaration 𝑟 : Real often means that, within its scope, ∀𝑟 is an abbreviation for ∀𝑟 ∈ 𝐑. Such an abbreviation can be convenient, and one could add a similar convention to ZFM. However, I have found that the small advantage of eliminating the “∈ 𝐑” is outweighed by the disadvantage of having to declare bound variables. Many potentially confusing expressions¹⁴ can be ruled out by the syntactic restriction that the same variable cannot appear free and bound in the same expression. Since variables may appear implicitly through the use of previously defined operators, an easy way to enforce this restriction is by requiring that free variables be declared and bound variables be undeclared.

The type declaration 𝑟 : Real can serve as an assertion that, within its scope, 𝑟 is assumed to be an element of 𝐑. Such an assertion is often used in proofs; it is easily expressed without types by writing “assume 𝑟 ∈ 𝐑”. Moreover, one may want to introduce a variable 𝑟 satisfying certain properties and then prove 𝑟 ∈ 𝐑. Requiring a type declaration for 𝑟 simply to introduce it could make writing a formal proof difficult.

The Trouble with Types

Formal reasoning is especially important when proving properties of programs. The social process by which mathematicians check each other’s proofs does not exist in the realm of program verification, so formal reasoning is crucial for eliminating errors. I will therefore discuss the problems introduced by types when reasoning about programs.

Formalisms for reasoning about programs usually provide some extension to conventional mathematics–for example, Hoare triples or weakest precondition operators. For my examples, I assume the kind of formalism typically used to reason about concurrency, in which an execution of a program is represented by a sequence of states or events, and the formula □𝑃 asserts that 𝑃 is true throughout an execution. However, nothing I say is peculiar to this formalism; types cause exactly the same problems in other formalisms for reasoning about programs.

Consider the program of Figure 4. It uses Dijkstra’s do construct

do 𝑔₁ → 𝑠₁ ▯ ⋯ ▯ 𝑔ₙ → 𝑠ₙ od

which is executed by repeatedly choosing an arbitrary 𝑖 such that 𝑔ᵢ is true, and executing 𝑠ᵢ. The statement terminates when all the 𝑔ᵢ are false. Tail and Append are the usual operations on sequences, and ⟨⟩ is the empty sequence. The program of Figure 4 loops forever, nondeterministically appending and removing 42’s from the sequence 𝑠, while keeping 𝑛 equal to the length of 𝑠. The property I want to prove of this program is that it never sets 𝑠 to Tail(⟨⟩).
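
A small randomized Python simulation of the algorithm (the step bound and the random scheduler are mine, not part of the program) checks at every step the invariant used in the proof below, namely that 𝑠 is a sequence of 42’s with 𝑛 = len(𝑠):

    import random

    def tail(s):
        assert s, "Tail(<>) is unspecified; the guard must prevent this"
        return s[1:]

    n, s = 0, []                               # initially n = 0, s = <>
    for _ in range(10_000):
        enabled = [1] + ([2] if n > 0 else []) # guards: true, n > 0
        if random.choice(enabled) == 1:
            n, s = n + 1, s + [42]             # n := n+1; s := Append(s, 42)
        else:
            n, s = n - 1, tail(s)              # n := n-1; s := Tail(s)
        assert n == len(s) and all(x == 42 for x in s)   # the invariant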

Figure 4¹⁵ is meant to describe an abstract program, not a real one. The symbols +, −, Append, and Tail represent mathematical operations on natural numbers and sequences, not procedures for manipulating bits in a computer. The program permits executions in which the value of 𝑛 becomes arbitrarily large; a real program would produce an error if 𝑛 became too large. We want to write and reason about abstract descriptions of programs, without having to introduce the complexity of a real programming language.

In a logic based on ZFM, one can prove that the program satisfies the property □(𝑠 ∈ seq(𝐍)), which asserts that the value of 𝑠 is always a finite sequence of natural numbers. Since Tail(⟨⟩) is unspecified, one cannot prove that it is in seq(𝐍). Thus, proving □(𝑠 ∈ seq(𝐍)) shows that 𝑠 is never set to Tail(⟨⟩). We prove this result by using induction to show that the program satisfies □((𝑠 ∈ seq(𝐍)) ∧ (𝑛 = len(𝑠))). This is the same method used to prove any property of the form □𝑃, such as the partial correctness property □(terminated ⇒ 𝑄), which asserts that 𝑄 holds if and when the program terminates.

Semantic type correctness of a program means that the value of each variable is always an element of the appropriate set. It is a property of the form □((𝑥₁ ∈ 𝑆₁) ∧ ⋯ ∧ (𝑥ₙ ∈ 𝑆ₙ)). As the example indicates, proving such a property can be hard. In fact, semantic type correctness is undecidable.

With a typed formalism, one must declare types for the variables 𝑛 and 𝑠 and the “functions” +, −, Append, and Tail. The representation of the program is then type checked. Different type systems will require different type declarations, and will differ in the meanings of those declarations. In almost all systems, one would declare 𝑛 to be of type Nat and 𝑠 to be of type Seq(Nat). Fine and coarse type systems would use different declarations for Tail, which cause different kinds of problems.

Fine Type Systems

In a fine type system, Tail will be of type Seq≠⟨⟩(Nat) → Seq(Nat), where Seq≠⟨⟩(Nat) is the type of nonempty sequences of naturals. Type checking the program means proving that the program satisfies □(𝑠 ∈ seq(𝐍)). Type correctness means semantic type correctness, and type checking requires the same kinds of proofs needed to prove other program properties, such as partial correctness. Type checking is undecidable for a fine type system.

In a fine type system, there seems to be no logical justification for giving type declarations a different status than other correctness properties. Still, if a type system did nothing more than introduce a redundant way of expressing properties, it would be relatively harmless. However, fine type systems introduce a worse problem.

In reasoning about programs, as in any kind of mathematics, complexity is handled by combining simple results about small expressions to obtain more difficult results about big expressions. We therefore want to write and reason about parts of programs and then compose those parts. For example, in proving semantic type correctness for the program of Figure 4, we prove that executing the statement

𝑛 > 0 → 𝑛 := 𝑛 − 1; 𝑠 := Tail(𝑠)   (2)

leaves the assertion (𝑠 ∈ seq(𝐍)) ∧ (𝑛 = len(𝑠)) true. However, if 𝑠 is of type Seq(Nat) and Tail is of type Seq≠⟨⟩(Nat) → Seq(Nat), then (2) is not type correct. A logic based on a fine type system would not let us define and reason about this statement.

One can solve the problem of handling statement (2) by introducing the notion of dependent types and declaring the type of 𝑠 to be all sequences of naturals whose length equals 𝑛. However, the substatement 𝑠 := Tail(𝑠) of (2) still does not type check.

This example illustrates a fundamental problem with type systems that try to capture semantic type correctness. Type systems type check definitions. But, just as semantic correctness in ZFM holds only for complete theorems, semantic correctness in a programming logic holds only for complete programs. Type checking definitions can make it impossible in practice to build up complicated theorems or programs by defining their components separately.

Coarse Type Systems

In a coarse type system, Tail has type Seq(Nat) → Seq(Nat). With such a system, there is no problem separately type checking statement (2) or its substatements. However, we are left with the question of what it means for Tail to have this type. There seem to be three possibilities: Tail is a partial function that is defined on a subset of seq(𝐍), it really is a function from seq(𝐍) to seq(𝐍), or the declaration is a piece of syntax that doesn’t really mean anything.

Partial Functions

If Tail is a partial function, and Tail(⟨⟩) is undefined, then what is the meaning of a program that assigns Tail(⟨⟩) to 𝑠–for example, the program obtained by changing the initial value of 𝑛 to 1 in Figure 4? Simply declaring a program to be illegal if it might set 𝑠 to Tail(⟨⟩) would leave us unable to reason about substatement (2) by itself, since it would be illegal.

One possibility is that Tail(⟨⟩) is the infectious nonvalue ⊥. In this case, type checking does not guarantee semantic type correctness. We still have to prove that the original program satisfies □(𝑠 ∈ seq(𝐍)). However, as noted earlier, we would have to do this in a more complicated formalism than ZFM. For example, in some methods of coping with ⊥, the formula 𝑎 ≠ 𝑏 is not equivalent to ¬(𝑎 = 𝑏).

Another possibility is that declaring 𝑠 to be of type Seq(Nat) means that the program is guaranteed never to set 𝑠 to Tail(⟨⟩). The type declaration implies that substatement (2) really means

𝑛 > 0 ∧ 𝑠 ≠ ⟨⟩ → 𝑛 := 𝑛 − 1; 𝑠 := Tail(𝑠)   (3)

This possibility is unsatisfactory because it provides no way to distinguish the incorrect program obtained from Figure 4 by initializing 𝑛 to 1 from the correct program obtained by then replacing statement (2) with (3).

Total Functions

If Tail is a function from seq(𝐍) to seq(𝐍), then Tail(⟨⟩) is some sequence of naturals. It might be defined to equal ⟨⟩, or it might be some unspecified sequence of naturals. Semantic type correctness is trivially satisfied. This is unsatisfactory because we want a program to be incorrect if it sets 𝑠 to Tail(⟨⟩). We don’t want to be able to prove reasonable properties about such a program.
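
To see why, consider a quick Python experiment; the choice Tail(⟨⟩) = ⟨⟩ and the encoding of sequences as lists are mine. Make Tail total, run the incorrect variant of Figure 4 that initializes 𝑛 to 1, and nothing detects the error:

    def tail_total(s):
        return s[1:]                # tail_total([]) == [], by fiat

    n, s = 1, []                    # the buggy initialization
    n, s = n - 1, tail_total(s)     # applies Tail to <> without complaint
    assert all(isinstance(x, int) for x in s)   # still "type correct"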

Pure Syntax

One can interpret the declaration that Tail has type Seq(Nat) → Seq(Nat) to be just an informal comment meaning that Tail should be used only in a context in which its argument and its result are both expected to be sequences of naturals. Type checking provides a sanity check for detecting errors, but type declarations have no semantic significance. Type declarations might also indicate implicit ranges of quantification and of index sets, but these are just syntactic abbreviations.

Syntactic type checking is harmless. It is probably used unconsciously by mathematicians. If 𝑠 is supposed to be a sequence of naturals, a mathematician notices that the expression 𝑠 ∈ 𝐍 must be wrong because it doesn’t type check. As observed above, there is no reason for mathematicians to formalize this kind of type checking, because proving a theorem demonstrates that the theorem is semantically type correct.

Syntactic type checking is redundant when proving theorems, but mathematics is not used only to prove theorems. A formal specification of a program is a mathematical formula, and large specifications are often written without proving anything about them. Type checking is a good way to catch errors in these specifications, and a purely syntactic type system seems like a good choice for a specification language. Its type checker would never declare a specification to be illegal; it would just issue warnings about formulas that it suspects to be incorrect. Since its type declarations have no semantic content, a syntactic type system would be freed from many of the constraints that hinder conventional type systems. In principle, it could be better at detecting errors. Unfortunately, I know of no such type system. Designing one seems to be a good topic for research.

Do Types Help?

The usual justification for types is that type checking catches errors. But, is type checking better than other methods for catching those errors?

For programming languages, the answer seems to be yes. Types allow the compiler to catch errors that are more difficult to find at run time, when debugging the program. They also allow more efficient code to be generated. Although types can be inconvenient–mainly by making it difficult or impossible to write some correct programs–it seems clear that the advantages of types can outweigh their inconvenience.

For mathematics, the answer is not so clear. Type checking allows errors to be caught when making definitions. But those errors should be caught when writing a proof. Mathematicians who are so careless in writing proofs that they miss the obvious errors found by type checking are unlikely to detect the subtle errors that lead to incorrect theorems. Most computer scientists believe that type checking helps because it catches errors sooner. But, this belief does not seem to be based on any evidence. I have written many rigorous proofs using an untyped formalism; a few of them were mechanically checked. I have never felt that a type checker could have saved me any significant amount of effort–even if it had required me to do no extra work.

Another justification for types is that type checking isolates the part of theorem proving that can be done automatically. When reasoning in an untyped logic, we must prove type correctness as part of a proof. To prove anything about the expression 𝑚 + 𝑛, we must first prove that it is a number. This is what type checking proves. In general, when semantic types coincide with coarse types, as they do for +, semantic type correctness can be proved by the type checker.

This argument for type checking can be expressed as follows. A type checker can deduce automatically that, if 𝑛 and 𝑚 have type Nat and + has type Nat × Nat → Nat, then 𝑛 + 𝑚 has type Nat. A theorem prover needs human interaction to deduce that, if 𝑛 and 𝑚 are elements of 𝐍 and + is a function from 𝐍 × 𝐍 to 𝐍, then 𝑛 + 𝑚 is an element of 𝐍. The absurdity of this argument is evident. Yet, it has become a self-fulfilling prophecy. Some theorem provers based on a typed formalism will automatically type check the expression 𝑚 + 𝑛, but will require a great deal of human effort to prove

(𝑚 ∈ 𝐍) ∧ (𝑛 ∈ 𝐍) ∧ (+ ∈ [𝐍 × 𝐍 → 𝐍]) ⇒ (𝑚 + 𝑛 ∈ 𝐍)   (4)

This is a problem in the design of theorem provers. When type checking and proving (4) are equivalent, there is no reason why type checking should be any easier than proving semantic type correctness. When the two are not equivalent, type checking does not prove any useful mathematical result. If Tail(⟨⟩) need not be a number, then certifying that the expression Tail(𝑠) + 𝑛 is type correct does not prove that it is a number.

It can be argued that type declarations provide needed guidance to a theorem prover. There are many theorems a prover could conceivably try to prove automatically; how can it tell that it should try to prove (4)? This suggests adding type declarations as hints to the theorem prover, with no semantic content. It is another possible argument in favor of syntactic types. However, it is not clear that theorem provers really need this help.

Conclusion

Types are very useful in programming languages. They are useful in mathematics too. Type declarations can help the reader understand a formula, and type checking can catch errors. But types have their cost. The only form of type system that seems to have no significant drawbacks is a purely syntactic one in which type declarations are just comments, and type checking serves only to ensure that the formulas appear to be consistent with those comments. I know of no formal system that uses such a type system.

Computer scientists seem to have embraced types without giving them much thought. There is a general feeling that everyone uses types, so they must be good. But types are not harmless. Their benefits must be weighed against the problems they introduce. For programming languages, there is a great deal of evidence to demonstrate the benefits of types. I know of no similar evidence for mathematical formalisms. Mathematicians reason informally without using types, and their style of reasoning can easily be formalized without types. Mathematics is not programming, and the use of types in mathematics is not justified by the success of typed programming languages. Typed formalisms may turn out to be good for mathematics, but this has yet to be demonstrated.

Acknowledgments

Robert Boyer, Luca Cardelli, Peter Hancock, Peter Ladkin, Denis Roegel, Fred Schneider, and Andrzej Trybulec suggested improvements to previous versions.

References

⟦1⟧

A. C. Leisenring. Mathematical Logic and Hilbert’s ε-Symbol. Gordon and Breach, New York, 1969.

⟦2⟧

J. R. Shoenfield. The axioms of set theory. In Jon Barwise, editor, Handbook of Mathematical Logic, chapter B1, pages 317–344. North-Holland, Amsterdam, 1977.

Footnotes:

1

Bound variables introduce complications. A simpler formalism can be obtained by eliminating them. For example, the formula ∃𝑥 : ∃𝑦 : 𝑃(𝑥, 𝑦) can be replaced by ∃(∃(𝑄)), where 𝑄 is the operator of sort term → (term → formula) such that 𝑄(𝑥)(𝑦) equals 𝑃(𝑥,𝑦). Such a formalism is less convenient in practice, so ZFM uses bound variables. However, thinking of ZFM as based on an underlying formalism with no bound variables can help clarify some subtle points, such as the meaning of substitution.

2

Figure 1: The operators of set theory.

= ≠ ∈ ∉ ∅ ∪ ∩ \ [set difference]

{𝑒₁, …, 𝑒ₙ} [Set consisting of the elements 𝑒ᵢ]
{𝑥 ∈ 𝑆 : 𝑃(𝑥)} [Set of elements 𝑥 in 𝑆 satisfying 𝑃(𝑥)]
{𝑒(𝑥) : 𝑥 ∈ 𝑆} [Set of elements 𝑒(𝑥) such that 𝑥 in 𝑆]
2^𝑆 [Set of subsets of 𝑆]
∪𝑆 [Union of all elements of 𝑆]
⟨𝑒₁, …, 𝑒ₙ⟩ [The 𝑛-tuple whose 𝑖th component is 𝑒ᵢ]
𝑆₁ × … × 𝑆ₙ [The set of all 𝑛-tuples with 𝑖th component in 𝑆ᵢ]
3

This axiom is valid only because 𝑆 is considered to lie outside the scope of the bound variable 𝑥. Similarly, 𝑆 lies outside the scope of 𝑥 in {𝑒(𝑥) : 𝑥 ∈ 𝑆}.

4

The axioms for the choose operator are independent of the Axiom of Choice.

5

Figure 2: Operators for expressing functions

𝑓[𝑒] [Function application]
dom 𝑓 [Domain of the function 𝑓 ]
[𝑆 → 𝑇] [Set of functions with domain 𝑆 and range a subset of 𝑇 ]
[𝑥 ∈ 𝑆 ↦ 𝑒(𝑥)] [Function 𝑓 such that 𝑓[𝑥] = 𝑒(𝑥) for 𝑥 ∈ 𝑆]
6

One could use 𝑓(𝑥) for both, since the signature of 𝑓 determines which is meant.

7

More precisely, a function is an operator of signature term, so it has no (operator) arguments.

8

∀𝑥 ∈ 𝑆 : 𝑃(𝑥) and ∃𝑥 ∈ 𝑆 : 𝑃(𝑥) are abbreviations for ∀𝑥 : (𝑥 ∈ 𝑆) ⇒ 𝑃(𝑥) and ∃𝑥 : (𝑥 ∈ 𝑆) ∧ 𝑃(𝑥), respectively.

9

Figure 3: The definition of the Riemann integral.

seq(𝑆) ≜ ∪{[{𝑖 ∈ 𝐍 : 1 ≤ 𝑖 ≤ 𝑛} → 𝑆] : 𝑛 ∈ 𝐍}

Σ(𝑚, 𝑓)[𝑛:𝐍] ≜ if 𝑛 < 𝑚 then 0 else 𝑓[𝑛] + Σ(𝑚, 𝑓)[𝑛−1]

𝐑⁺ ≜ {𝑟 ∈ 𝐑 : 0 < 𝑟}

|𝑟| ≜ if 𝑟 < 0 then −𝑟 else 𝑟

𝑃ₐᵇ(δ) ≜ {𝑝 ∈ seq(𝐑) : (𝑝[1] = 𝑎) ∧ (𝑝[len(𝑝)] = 𝑏)
          ∧ ∀𝑖 ∈ {1 … len(𝑝)−1} :
              (|𝑝[𝑖+1] − 𝑝[𝑖]| < δ)
              ∧ if 𝑎 ≤ 𝑏 then 𝑝[𝑖] ≤ 𝑝[𝑖+1] else 𝑝[𝑖] ≥ 𝑝[𝑖+1]}

𝑆ₚ(𝑓) ≜ Σ(1, [𝑖 ∈ 𝐍 ↦ (𝑝[𝑖+1] − 𝑝[𝑖]) · (𝑓[𝑝[𝑖]] + 𝑓[𝑝[𝑖+1]]) / 2])[len(𝑝)−1]

∫ₐᵇ 𝑓 ≜ choose 𝑟 : (𝑟 ∈ 𝐑) ∧ (∀ε ∈ 𝐑⁺ : ∃δ ∈ 𝐑⁺ : ∀𝑝 ∈ 𝑃ₐᵇ(δ) : |𝑟 − 𝑆ₚ(𝑓)| < ε)

12

With first-order logic alone, one cannot rule out nonstandard models of the naturals. This is seldom a problem in practice, since we are usually content to prove theorems that are valid for any model satisfying Peano’s axioms.

14

For example, {𝑥 ∈ {𝑥} : 𝑥 ∉ 𝑥}, in which the second 𝑥 is not the same variable as the other three 𝑥’s.

15

Figure 4: A simple algorithm.

initially 𝑛 = 0; 𝑠 = ⟨⟩
do true → 𝑛 := 𝑛 + 1; 𝑠 := Append(𝑠, 42)
 ▯ 𝑛 > 0 → 𝑛 := 𝑛 − 1; 𝑠 := Tail(𝑠)
od