\chapter{Zermelo-Fraenkel Set Theory}
The theory~\thydx{ZF} implements Zermelo-Fraenkel set
theory~\cite{halmos60,suppes72} as an extension of~\texttt{FOL}, classical
first-order logic. The theory includes a collection of derived natural
deduction rules, for use with Isabelle's classical reasoner. Much
of it is based on the work of No\"el~\cite{noel}.

A tremendous amount of set theory has been formally developed, including the
basic properties of relations, functions, ordinals and cardinals. Significant
results have been proved, such as the Schr\"oder-Bernstein Theorem, the
Wellordering Theorem and a version of Ramsey's Theorem. \texttt{ZF} provides
both the integers and the natural numbers. General methods have been
developed for solving recursion equations over monotonic functors; these have
been applied to yield constructions of lists, trees, infinite lists, etc.
\texttt{ZF} has a flexible package for handling inductive definitions,
such as inference systems, and datatype definitions, such as lists and
trees. Moreover it handles coinductive definitions, such as
bisimulation relations, and codatatype definitions, such as streams. It
provides a streamlined syntax for defining primitive recursive functions over
datatypes.
Because {\ZF} is an extension of {\FOL}, it provides the same
packages, namely \texttt{hyp_subst_tac}, the simplifier, and the
classical reasoner. The default simpset and claset are usually
satisfactory.
Published articles~\cite{paulson-set-I,paulson-set-II} describe \texttt{ZF}
less formally than this chapter. Isabelle employs a novel treatment of
non-well-founded data structures within the standard {\sc zf} axioms including
the Axiom of Foundation~\cite{paulson-mscs}.
\section{Which version of axiomatic set theory?}
The two main axiom systems for set theory are Bernays-G\"odel~({\sc bg})
and Zermelo-Fraenkel~({\sc zf}). Resolution theorem provers can use {\sc
bg} because it is finite~\cite{boyer86,quaife92}. {\sc zf} does not
have a finite axiom system because of its Axiom Scheme of Replacement.
This makes it awkward to use with many theorem provers, since instances
of the axiom scheme have to be invoked explicitly. Since Isabelle has no
difficulty with axiom schemes, we may adopt either axiom system.

These two theories differ in their treatment of {\bf classes}, which are
collections that are `too big' to be sets. The class of all sets,~$V$,
cannot be a set without admitting Russell's Paradox. In {\sc bg}, both
classes and sets are individuals; $x\in V$ expresses that $x$ is a set. In
{\sc zf}, all variables denote sets; classes are identified with unary
predicates. The two systems define essentially the same sets and classes,
with similar properties. In particular, a class cannot belong to another
class (let alone a set).
Modern set theorists tend to prefer {\sc zf} because they are mainly concerned
with sets, rather than classes. {\sc bg} requires tiresome proofs that various
collections are sets; for instance, showing $x\in\{x\}$ requires showing that
$\{x\}$ is a set.
\it name      &\it meta-type  & \it description \\
\cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder\\
\cdx{0}       & $i$           & empty set\\
\cdx{cons}    & $[i,i]\To i$  & finite set constructor\\
\cdx{Upair}   & $[i,i]\To i$  & unordered pairing\\
\cdx{Pair}    & $[i,i]\To i$  & ordered pairing\\
\cdx{Inf}     & $i$   & infinite set\\
\cdx{Pow}     & $i\To i$      & powerset\\
\cdx{Union} \cdx{Inter} & $i\To i$    & set union/intersection \\
\cdx{split}   & $[[i,i]\To i, i] \To i$ & generalized projection\\
\cdx{fst} \cdx{snd}   & $i\To i$      & projections\\
\cdx{converse}& $i\To i$      & converse of a relation\\
\cdx{succ}    & $i\To i$      & successor\\
\cdx{Collect} & $[i,i\To o]\To i$     & separation\\
\cdx{Replace} & $[i, [i,i]\To o] \To i$       & replacement\\
\cdx{PrimReplace} & $[i, [i,i]\To o] \To i$   & primitive replacement\\
\cdx{RepFun}  & $[i, i\To i] \To i$   & functional replacement\\
\cdx{Pi} \cdx{Sigma}  & $[i,i\To i]\To i$     & general product/sum\\
\cdx{domain}  & $i\To i$      & domain of a relation\\
\cdx{range}   & $i\To i$      & range of a relation\\
\cdx{field}   & $i\To i$      & field of a relation\\
\cdx{Lambda}  & $[i, i\To i]\To i$    & $\lambda$-abstraction\\
\cdx{restrict}& $[i, i] \To i$        & restriction of a function\\
\cdx{The}     & $[i\To o]\To i$       & definite description\\
\cdx{if}      & $[o,i,i]\To i$        & conditional\\
\cdx{Ball} \cdx{Bex}  & $[i, i\To o]\To o$    & bounded quantifiers
\subcaption{Constants}
\index{*"-"`"` symbol}
\index{*"` symbol}\index{function applications!in \ZF}
\begin{tabular}{rrrr}
\it symbol  & \it meta-type & \it priority & \it description \\
\tt ``        & $[i,i]\To i$  &  Left 90      & image \\
\tt -``       & $[i,i]\To i$  &  Left 90      & inverse image \\
\tt `         & $[i,i]\To i$  &  Left 90      & application \\
\sdx{Int}     & $[i,i]\To i$  &  Left 70      & intersection ($\int$) \\
\sdx{Un}      & $[i,i]\To i$  &  Left 65      & union ($\un$) \\
\tt -         & $[i,i]\To i$  &  Left 65      & set difference ($-$) \\[1ex]
\tt:          & $[i,i]\To o$  &  Left 50      & membership ($\in$) \\
\tt <=        & $[i,i]\To o$  &  Left 50      & subset ($\subseteq$)
\caption{Constants of {\ZF}} \label{zf-constants}
\section{The syntax of set theory}
The language of set theory, as studied by logicians, has no constants. The
traditional axioms merely assert the existence of empty sets, unions,
powersets, etc.; this would be intolerable for practical reasoning. The
Isabelle theory declares constants for primitive sets. It also extends
\texttt{FOL} with additional syntax for finite sets, ordered pairs,
comprehension, general union/intersection, general sums/products, and
bounded quantifiers. In most other respects, Isabelle implements precisely
Zermelo-Fraenkel set theory.

Figure~\ref{zf-constants} lists the constants and infixes of~\ZF, while
Figure~\ref{zf-trans} presents the syntax translations. Finally,
Figure~\ref{zf-syntax} presents the full grammar for set theory, including
the constructs of \FOL.
Local abbreviations can be introduced by a \texttt{let} construct whose
syntax appears in Fig.\ts\ref{zf-syntax}. Internally it is translated into
the constant~\cdx{Let}. It can be expanded by rewriting with its
definition, \tdx{Let_def}.

Apart from \texttt{let}, set theory does not use polymorphism. All terms in
{\ZF} have type~\tydx{i}, which is the type of individuals and has class~{\tt
term}. The type of first-order formulae, remember, is~\textit{o}.
Infix operators include binary union and intersection ($A\un B$ and
$A\int B$), set difference ($A-B$), and the subset and membership
relations. Note that $a$\verb|~:|$b$ is translated to $\neg(a\in b)$. The
union and intersection operators ($\bigcup A$ and $\bigcap A$) form the
union or intersection of a set of sets; $\bigcup A$ means the same as
$\bigcup@{x\in A}x$. Of these operators, only $\bigcup A$ is primitive.
The constant \cdx{Upair} constructs unordered pairs; thus {\tt
Upair($A$,$B$)} denotes the set~$\{A,B\}$ and \texttt{Upair($A$,$A$)}
denotes the singleton~$\{A\}$. General union is used to define binary
union. The Isabelle version goes on to define the constant~\cdx{cons}:
A\cup B              & \equiv &       \bigcup(\texttt{Upair}(A,B)) \\
\texttt{cons}(a,B)      & \equiv &        \texttt{Upair}(a,a) \un B
The $\{a@1, \ldots\}$ notation abbreviates finite sets constructed in the
obvious manner using~\texttt{cons} and~$\emptyset$ (the empty set):
\{a,b,c\} & \equiv & \texttt{cons}(a,\texttt{cons}(b,\texttt{cons}(c,\emptyset)))
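These definitions can be replayed over finite sets. The following Python sketch (our illustration, not Isabelle code) models ZF sets with \texttt{frozenset}, and builds \texttt{cons} from \texttt{Upair} and general union exactly as above:

```python
# Illustrative model of Upair, Union and cons over finite sets.
# Python frozensets stand in for ZF sets; nothing here is Isabelle syntax.

def Upair(a, b):
    """Unordered pair {a,b}; Upair(a,a) is the singleton {a}."""
    return frozenset({a, b})

def Union(C):
    """General union: the members of the members of C."""
    return frozenset(x for s in C for x in s)

def cons(a, B):
    """cons(a,B) == Upair(a,a) Un B, where A Un B == Union(Upair(A,B))."""
    return Union(Upair(Upair(a, a), B))

empty = frozenset()
# {1,2,3} built as cons(1, cons(2, cons(3, 0)))
abc = cons(1, cons(2, cons(3, empty)))
```

Evaluating \texttt{abc} yields the three-element set, matching the expansion of the finite-set notation shown above.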
The constant \cdx{Pair} constructs ordered pairs, as in {\tt
Pair($a$,$b$)}. Ordered pairs may also be written within angle brackets,
as {\tt<$a$,$b$>}. The $n$-tuple {\tt<$a@1$,\ldots,$a@{n-1}$,$a@n$>}
abbreviates the nest of pairs\par\nobreak
\centerline{\texttt{Pair($a@1$,\ldots,Pair($a@{n-1}$,$a@n$)\ldots).}}
In {\ZF}, a function is a set of pairs. A {\ZF} function~$f$ is simply an
individual as far as Isabelle is concerned: its Isabelle type is~$i$, not
say $i\To i$. The infix operator~{\tt`} denotes the application of a
function set to its argument; we must write~$f{\tt`}x$, not~$f(x)$. The
syntax for image is~$f{\tt``}A$ and that for inverse image is~$f{\tt-``}A$.
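Since a function is just a set of pairs, application, image and inverse image can all be computed directly over finite sets. A Python sketch (illustrative only; the names mirror the operators above but are not Isabelle syntax):

```python
# A ZF function modeled as a frozenset of (argument, value) pairs.

def apply_(f, x):
    """f`x: the unique y with <x,y> in f (a definite description)."""
    ys = [y for (a, y) in f if a == x]
    assert len(ys) == 1, "f`x is unspecified unless exactly one y exists"
    return ys[0]

def image(f, A):
    """f``A: all y such that <x,y> is in f for some x in A."""
    return frozenset(y for (x, y) in f if x in A)

def vimage(f, B):
    """f-``B: the image of B under the converse relation."""
    return frozenset(x for (x, y) in f if y in B)

square = frozenset({(0, 0), (1, 1), (2, 4)})
```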
\index{lambda abs@$\lambda$-abstractions!in \ZF}
\begin{center} \footnotesize\tt\frenchspacing
\it external          & \it internal  & \it description \\
$a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm negated membership\\
\ttlbrace$a@1$, $\ldots$, $a@n$\ttrbrace  &  cons($a@1$,$\ldots$,cons($a@n$,0)) &
        \rm finite set \\
<$a@1$, $\ldots$, $a@{n-1}$, $a@n$> &
        Pair($a@1$,\ldots,Pair($a@{n-1}$,$a@n$)\ldots) &
        \rm ordered $n$-tuple \\
\ttlbrace$x$:$A . P[x]$\ttrbrace    &  Collect($A$,$\lambda x. P[x]$) &
        \rm separation \\
\ttlbrace$y . x$:$A$, $Q[x,y]$\ttrbrace  &  Replace($A$,$\lambda x\,y. Q[x,y]$) &
        \rm replacement \\
\ttlbrace$b[x] . x$:$A$\ttrbrace    &  RepFun($A$,$\lambda x. b[x]$) &
        \rm functional replacement \\
\sdx{INT} $x$:$A . B[x]$      & Inter(\ttlbrace$B[x] . x$:$A$\ttrbrace) &
        \rm general intersection \\
\sdx{UN}  $x$:$A . B[x]$      & Union(\ttlbrace$B[x] . x$:$A$\ttrbrace) &
        \rm general union \\
\sdx{PROD} $x$:$A . B[x]$     & Pi($A$,$\lambda x. B[x]$) &
        \rm general product \\
\sdx{SUM}  $x$:$A . B[x]$     & Sigma($A$,$\lambda x. B[x]$) &
        \rm general sum \\
$A$ -> $B$            & Pi($A$,$\lambda x. B$) &
        \rm function space \\
$A$ * $B$             & Sigma($A$,$\lambda x. B$) &
        \rm binary product \\
\sdx{THE}  $x . P[x]$ & The($\lambda x. P[x]$) &
        \rm definite description \\
\sdx{lam}  $x$:$A . b[x]$     & Lambda($A$,$\lambda x. b[x]$) &
        \rm $\lambda$-abstraction\\[1ex]
\sdx{ALL} $x$:$A . P[x]$      & Ball($A$,$\lambda x. P[x]$) &
        \rm bounded $\forall$ \\
\sdx{EX}  $x$:$A . P[x]$      & Bex($A$,$\lambda x. P[x]$) &
        \rm bounded $\exists$
\caption{Translations for {\ZF}} \label{zf-trans}
term & = & \hbox{expression of type~$i$} \\
     & | & "let"~id~"="~term";"\dots";"~id~"="~term~"in"~term \\
     & | & "if"~term~"then"~term~"else"~term \\
     & | & "{\ttlbrace} " term\; ("," term)^* " {\ttrbrace}" \\
     & | & "< "  term\; ("," term)^* " >" \\
     & | & "{\ttlbrace} " id ":" term " . " formula " {\ttrbrace}" \\
     & | & "{\ttlbrace} " id " . " id ":" term ", " formula " {\ttrbrace}" \\
     & | & "{\ttlbrace} " term " . " id ":" term " {\ttrbrace}" \\
     & | & term " `` " term \\
     & | & term " -`` " term \\
     & | & term " ` " term \\
     & | & term " * " term \\
     & | & term " Int " term \\
     & | & term " Un " term \\
     & | & term " - " term \\
     & | & term " -> " term \\
     & | & "THE~~" id " . " formula \\
     & | & "lam~~" id ":" term " . " term \\
     & | & "INT~~" id ":" term " . " term \\
     & | & "UN~~~" id ":" term " . " term \\
     & | & "PROD~" id ":" term " . " term \\
     & | & "SUM~~" id ":" term " . " term \\[2ex]
formula & = & \hbox{expression of type~$o$} \\
     & | & term " : " term \\
     & | & term " \ttilde: " term \\
     & | & term " <= " term \\
     & | & term " = " term \\
     & | & term " \ttilde= " term \\
     & | & "\ttilde\ " formula \\
     & | & formula " \& " formula \\
     & | & formula " | " formula \\
     & | & formula " --> " formula \\
     & | & formula " <-> " formula \\
     & | & "ALL " id ":" term " . " formula \\
     & | & "EX~~" id ":" term " . " formula \\
     & | & "ALL~" id~id^* " . " formula \\
     & | & "EX~~" id~id^* " . " formula \\
     & | & "EX!~" id~id^* " . " formula

\caption{Full grammar for {\ZF}} \label{zf-syntax}
\section{Binding operators}
The constant \cdx{Collect} constructs sets by the principle of {\bf
separation}. The syntax for separation is
\hbox{\tt\ttlbrace$x$:$A$.\ $P[x]$\ttrbrace}, where $P[x]$ is a formula
that may contain free occurrences of~$x$. It abbreviates the set {\tt
Collect($A$,$\lambda x. P[x]$)}, which consists of all $x\in A$ that
satisfy~$P[x]$. Note that \texttt{Collect} is an unfortunate choice of
name: some set theories adopt a set-formation principle, related to
replacement, called collection.
The constant \cdx{Replace} constructs sets by the principle of {\bf
replacement}. The syntax
\hbox{\tt\ttlbrace$y$.\ $x$:$A$,$Q[x,y]$\ttrbrace} denotes the set {\tt
Replace($A$,$\lambda x\,y. Q[x,y]$)}, which consists of all~$y$ such
that there exists $x\in A$ satisfying~$Q[x,y]$. The Replacement Axiom
has the condition that $Q$ must be single-valued over~$A$: for
all~$x\in A$ there exists at most one $y$ satisfying~$Q[x,y]$. A
single-valued binary predicate is also called a {\bf class function}.
The constant \cdx{RepFun} expresses a special case of replacement,
where $Q[x,y]$ has the form $y=b[x]$. Such a $Q$ is trivially
single-valued, since it is just the graph of the meta-level
function~$\lambda x. b[x]$. The resulting set consists of all $b[x]$
for~$x\in A$. This is analogous to the \ML{} functional \texttt{map},
since it applies a function to every element of a set. The syntax is
\hbox{\tt\ttlbrace$b[x]$.\ $x$:$A$\ttrbrace}, which expands to {\tt
RepFun($A$,$\lambda x. b[x]$)}.
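The \texttt{map} analogy can be made concrete over finite sets; a minimal Python sketch (ours, purely illustrative):

```python
# RepFun(A,b) collects b(x) for every x in A, like map over a set.

def RepFun(A, b):
    return frozenset(b(x) for x in A)

doubles = RepFun(frozenset({1, 2, 3}), lambda x: 2 * x)
```

Unlike \texttt{map} over a list, the result is itself a set, so distinct arguments with equal values collapse: \texttt{RepFun(\{-1,1\}, abs)} is the singleton \texttt{\{1\}}.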
\index{*INT symbol}\index{*UN symbol}
General unions and intersections of indexed
families of sets, namely $\bigcup@{x\in A}B[x]$ and $\bigcap@{x\in A}B[x]$,
are written \hbox{\tt UN $x$:$A$.\ $B[x]$} and \hbox{\tt INT $x$:$A$.\ $B[x]$}.
Their meaning is expressed using \texttt{RepFun} as
\bigcup(\{B[x]. x\in A\}) \qquad\hbox{and}\qquad
\bigcap(\{B[x]. x\in A\}).
General sums $\sum@{x\in A}B[x]$ and products $\prod@{x\in A}B[x]$ can be
constructed in set theory, where $B[x]$ is a family of sets over~$A$. They
have as special cases $A\times B$ and $A\to B$, where $B$ is simply a set.
This is similar to the situation in Constructive Type Theory (set theory
has `dependent sets') and calls for similar syntactic conventions. The
constants~\cdx{Sigma} and~\cdx{Pi} construct general sums and
products. Instead of \texttt{Sigma($A$,$B$)} and \texttt{Pi($A$,$B$)} we may write
\hbox{\tt SUM $x$:$A$.\ $B[x]$} and \hbox{\tt PROD $x$:$A$.\ $B[x]$}.
\index{*SUM symbol}\index{*PROD symbol}%
The special cases \hbox{\tt$A$*$B$} and \hbox{\tt$A$->$B$} abbreviate
general sums and products over a constant family.\footnote{Unlike normal
infix operators, {\tt*} and {\tt->} merely define abbreviations; there are
no constants~\texttt{op~*} and~\hbox{\tt op~->}.} Isabelle accepts these
abbreviations in parsing and uses them whenever possible for printing.
As mentioned above, whenever the axioms assert the existence and uniqueness
of a set, Isabelle's set theory declares a constant for that set. These
constants can express the {\bf definite description} operator~$\iota
x. P[x]$, which stands for the unique~$a$ satisfying~$P[a]$, if such exists.
Since all terms in {\ZF} denote something, a description is always
meaningful, but we do not know its value unless $P[x]$ defines it uniquely.
Using the constant~\cdx{The}, we may write descriptions as {\tt
The($\lambda x. P[x]$)} or use the syntax \hbox{\tt THE $x$.\ $P[x]$}.
Function sets may be written in $\lambda$-notation; $\lambda x\in A. b[x]$
stands for the set of all pairs $\pair{x,b[x]}$ for $x\in A$. In order for
this to be a set, the function's domain~$A$ must be given. Using the
constant~\cdx{Lambda}, we may express function sets as {\tt
Lambda($A$,$\lambda x. b[x]$)} or use the syntax \hbox{\tt lam $x$:$A$.\ $b[x]$}.
Isabelle's set theory defines two {\bf bounded quantifiers}:
\forall x\in A. P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
\exists x\in A. P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
The constants~\cdx{Ball} and~\cdx{Bex} are defined
accordingly. Instead of \texttt{Ball($A$,$P$)} and \texttt{Bex($A$,$P$)} we may write
\hbox{\tt ALL $x$:$A$.\ $P[x]$} and \hbox{\tt EX $x$:$A$.\ $P[x]$}.
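Over a finite set the definitions can be read off directly; the following Python sketch (illustrative only) mirrors \texttt{Ball_def} and \texttt{Bex_def}:

```python
# Ball(A,P) is ALL x. x:A --> P(x);  Bex(A,P) is EX x. x:A & P(x).

def Ball(A, P):
    return all(P(x) for x in A)

def Bex(A, P):
    return any(P(x) for x in A)

evens = frozenset({0, 2, 4})
```

Note that \texttt{Ball} over the empty set is vacuously true, while \texttt{Bex} over the empty set is false, exactly as the abbreviations above dictate.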
\tdx{Let_def}            Let(s, f) == f(s)

\tdx{Ball_def}           Ball(A,P) == ALL x. x:A --> P(x)
\tdx{Bex_def}            Bex(A,P) == EX x. x:A & P(x)

\tdx{subset_def}         A <= B == ALL x:A. x:B
\tdx{extension}          A = B <-> A <= B & B <= A

\tdx{Union_iff}          A : Union(C) <-> (EX B:C. A:B)
\tdx{Pow_iff}            A : Pow(B) <-> A <= B
\tdx{foundation}         A=0 | (EX x:A. ALL y:x. ~ y:A)

\tdx{replacement}        (ALL x:A. ALL y z. P(x,y) & P(x,z) --> y=z) ==>
                   b : PrimReplace(A,P) <-> (EX x:A. P(x,b))
\subcaption{The Zermelo-Fraenkel Axioms}

\tdx{Replace_def}  Replace(A,P) ==
                   PrimReplace(A, \%x y. (EX!z. P(x,z)) & P(x,y))
\tdx{RepFun_def}   RepFun(A,f)  == {\ttlbrace}y . x:A, y=f(x)\ttrbrace
\tdx{the_def}      The(P)       == Union({\ttlbrace}y . x:{\ttlbrace}0{\ttrbrace}, P(y){\ttrbrace})
\tdx{if_def}       if(P,a,b)    == THE z. P & z=a | ~P & z=b
\tdx{Collect_def}  Collect(A,P) == {\ttlbrace}y . x:A, x=y & P(x){\ttrbrace}
\tdx{Upair_def}    Upair(a,b)   ==
               {\ttlbrace}y. x:Pow(Pow(0)), (x=0 & y=a) | (x=Pow(0) & y=b){\ttrbrace}
\subcaption{Consequences of replacement}

\tdx{Inter_def}    Inter(A) == {\ttlbrace}x:Union(A) . ALL y:A. x:y{\ttrbrace}
\tdx{Un_def}       A Un  B  == Union(Upair(A,B))
\tdx{Int_def}      A Int B  == Inter(Upair(A,B))
\tdx{Diff_def}     A - B    == {\ttlbrace}x:A . x~:B{\ttrbrace}
\subcaption{Union, intersection, difference}

\caption{Rules and axioms of {\ZF}} \label{zf-rules}
\tdx{cons_def}    cons(a,A) == Upair(a,a) Un A
\tdx{succ_def}    succ(i) == cons(i,i)
\tdx{infinity}    0:Inf & (ALL y:Inf. succ(y): Inf)
\subcaption{Finite and infinite sets}

\tdx{Pair_def}      <a,b>      == {\ttlbrace}{\ttlbrace}a,a{\ttrbrace}, {\ttlbrace}a,b{\ttrbrace}{\ttrbrace}
\tdx{split_def}     split(c,p) == THE y. EX a b. p=<a,b> & y=c(a,b)
\tdx{fst_def}       fst(p)     == split(\%x y. x, p)
\tdx{snd_def}       snd(p)     == split(\%x y. y, p)
\tdx{Sigma_def}     Sigma(A,B) == UN x:A. UN y:B(x). {\ttlbrace}<x,y>{\ttrbrace}
\subcaption{Ordered pairs and Cartesian products}

\tdx{converse_def}   converse(r) == {\ttlbrace}z. w:r, EX x y. w=<x,y> & z=<y,x>{\ttrbrace}
\tdx{domain_def}     domain(r)   == {\ttlbrace}x. w:r, EX y. w=<x,y>{\ttrbrace}
\tdx{range_def}      range(r)    == domain(converse(r))
\tdx{field_def}      field(r)    == domain(r) Un range(r)
\tdx{image_def}      r `` A      == {\ttlbrace}y : range(r) . EX x:A. <x,y> : r{\ttrbrace}
\tdx{vimage_def}     r -`` A     == converse(r)``A
\subcaption{Operations on relations}

\tdx{lam_def}    Lambda(A,b) == {\ttlbrace}<x,b(x)> . x:A{\ttrbrace}
\tdx{apply_def}  f`a         == THE y. <a,y> : f
\tdx{Pi_def}     Pi(A,B)     == {\ttlbrace}f: Pow(Sigma(A,B)). ALL x:A. EX! y. <x,y>: f{\ttrbrace}
\tdx{restrict_def}   restrict(f,A) == lam x:A. f`x
\subcaption{Functions and general product}

\caption{Further definitions of {\ZF}} \label{zf-defs}
\section{The Zermelo-Fraenkel axioms}
The axioms appear in Fig.\ts \ref{zf-rules}. They resemble those
presented by Suppes~\cite{suppes72}. Most of the theory consists of
definitions. In particular, bounded quantifiers and the subset relation
appear in other axioms. Object-level quantifiers and implications have
been replaced by meta-level ones wherever possible, to simplify use of the
axioms. See the file \texttt{ZF/ZF.thy} for details.
The traditional replacement axiom asserts
\[ y \in \texttt{PrimReplace}(A,P) \bimp (\exists x\in A. P(x,y)) \]
subject to the condition that $P(x,y)$ is single-valued for all~$x\in A$.
The Isabelle theory defines \cdx{Replace} to apply
\cdx{PrimReplace} to the single-valued part of~$P$, namely
\[ (\exists!z. P(x,z)) \conj P(x,y). \]
Thus $y\in \texttt{Replace}(A,P)$ if and only if there is some~$x$ such that
$P(x,-)$ holds uniquely for~$y$. Because the equivalence is unconditional,
\texttt{Replace} is much easier to use than \texttt{PrimReplace}; it defines the
same set, if $P(x,y)$ is single-valued. The nice syntax for replacement
expands to \texttt{Replace}.
Other consequences of replacement include functional replacement
(\cdx{RepFun}) and definite descriptions (\cdx{The}).
Axioms for separation (\cdx{Collect}) and unordered pairs
(\cdx{Upair}) are traditionally assumed, but they actually follow
from replacement~\cite[pages 237--8]{suppes72}.
The definitions of general intersection, etc., are straightforward. Note
the definition of \texttt{cons}, which underlies the finite set notation.
The axiom of infinity gives us a set that contains~0 and is closed under
successor (\cdx{succ}). Although this set is not uniquely defined,
the theory names it (\cdx{Inf}) in order to simplify the
construction of the natural numbers.
Further definitions appear in Fig.\ts\ref{zf-defs}. Ordered pairs are
defined in the standard way, $\pair{a,b}\equiv\{\{a\},\{a,b\}\}$. Recall
that \cdx{Sigma}$(A,B)$ generalizes the Cartesian product of two
sets. It is defined to be the union of all singleton sets
$\{\pair{x,y}\}$, for $x\in A$ and $y\in B(x)$. This is a typical usage of
general union.
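This construction is easy to replay over finite sets. In the Python sketch below (ours, with ordered pairs modeled as tuples), \texttt{Sigma(A,B)} is literally the union of the singletons $\{\pair{x,y}\}$:

```python
def Sigma(A, B):
    """Union of the singletons {<x,y>} for x in A and y in B(x)."""
    singletons = [frozenset({(x, y)}) for x in A for y in B(x)]
    return frozenset(p for s in singletons for p in s)

# A constant family B gives the binary product A * B.
prod = Sigma(frozenset({0, 1}), lambda x: frozenset({'a', 'b'}))
```

With a genuinely dependent family, e.g. $B(x) = \{0,\ldots,x-1\}$, the fibres have different sizes, which is exactly what the general sum allows.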
The projections \cdx{fst} and~\cdx{snd} are defined in terms of the
generalized projection \cdx{split}. The latter has been borrowed from
Martin-L\"of's Type Theory, and is often easier to use than \cdx{fst}
and~\cdx{snd}.
Operations on relations include converse, domain, range, and image. The
set ${\tt Pi}(A,B)$ generalizes the space of functions between two sets.
Note the simple definitions of $\lambda$-abstraction (using
\cdx{RepFun}) and application (using a definite description). The
function \cdx{restrict}$(f,A)$ has the same values as~$f$, but only
over the domain~$A$.
\tdx{ballI}       [| !!x. x:A ==> P(x) |] ==> ALL x:A. P(x)
\tdx{bspec}       [| ALL x:A. P(x);  x: A |] ==> P(x)
\tdx{ballE}       [| ALL x:A. P(x);  P(x) ==> Q;  ~ x:A ==> Q |] ==> Q

\tdx{ball_cong}   [| A=A';  !!x. x:A' ==> P(x) <-> P'(x) |] ==>
            (ALL x:A. P(x)) <-> (ALL x:A'. P'(x))

\tdx{bexI}        [| P(x);  x: A |] ==> EX x:A. P(x)
\tdx{bexCI}       [| ALL x:A. ~P(x) ==> P(a);  a: A |] ==> EX x:A. P(x)
\tdx{bexE}        [| EX x:A. P(x);  !!x. [| x:A; P(x) |] ==> Q |] ==> Q

\tdx{bex_cong}    [| A=A';  !!x. x:A' ==> P(x) <-> P'(x) |] ==>
            (EX x:A. P(x)) <-> (EX x:A'. P'(x))
\subcaption{Bounded quantifiers}

\tdx{subsetI}       (!!x. x:A ==> x:B) ==> A <= B
\tdx{subsetD}       [| A <= B;  c:A |] ==> c:B
\tdx{subsetCE}      [| A <= B;  ~(c:A) ==> P;  c:B ==> P |] ==> P
\tdx{subset_refl}   A <= A
\tdx{subset_trans}  [| A<=B;  B<=C |] ==> A<=C

\tdx{equalityI}     [| A <= B;  B <= A |] ==> A = B
\tdx{equalityD1}    A = B ==> A<=B
\tdx{equalityD2}    A = B ==> B<=A
\tdx{equalityE}     [| A = B;  [| A<=B; B<=A |] ==> P |] ==> P
\subcaption{Subsets and extensionality}

\tdx{emptyE}          a:0 ==> P
\tdx{empty_subsetI}   0 <= A
\tdx{equals0I}        [| !!y. y:A ==> False |] ==> A=0
\tdx{equals0D}        [| A=0;  a:A |] ==> P

\tdx{PowI}            A <= B ==> A : Pow(B)
\tdx{PowD}            A : Pow(B) ==> A<=B
\subcaption{The empty set; power sets}

\caption{Basic derived rules for {\ZF}} \label{zf-lemmas1}
\section{From basic lemmas to function spaces}
Faced with so many definitions, it is essential to prove lemmas. Even
trivial theorems like $A \int B = B \int A$ would be difficult to
prove from the definitions alone. Isabelle's set theory derives many
rules using a natural deduction style. Ideally, a natural deduction
rule should introduce or eliminate just one operator, but this is not
always practical. For most operators, we may forget their definitions
and use their derived rules instead.
\subsection{Fundamental lemmas}
Figure~\ref{zf-lemmas1} presents the derived rules for the most basic
operators. The rules for the bounded quantifiers resemble those for the
ordinary quantifiers, but note that \tdx{ballE} uses a negated assumption
in the style of Isabelle's classical reasoner. The \rmindex{congruence
rules} \tdx{ball_cong} and \tdx{bex_cong} are required by Isabelle's
simplifier, but have few other uses. Congruence rules must be specially
derived for all binding operators, and henceforth will not be shown.
Figure~\ref{zf-lemmas1} also shows rules for the subset and equality
relations (proof by extensionality), and rules about the empty set and the
powerset operator.
Figure~\ref{zf-lemmas2} presents rules for replacement and separation.
The rules for \cdx{Replace} and \cdx{RepFun} are much simpler than
comparable rules for \texttt{PrimReplace} would be. The principle of
separation is proved explicitly, although most proofs should use the
natural deduction rules for \texttt{Collect}. The elimination rule
\tdx{CollectE} is equivalent to the two destruction rules
\tdx{CollectD1} and \tdx{CollectD2}, but each rule is suited to
particular circumstances. Although too many rules can be confusing, there
is no reason to aim for a minimal set of rules. See the file
\texttt{ZF/ZF.ML} for a complete listing.
Figure~\ref{zf-lemmas3} presents rules for general union and intersection.
The empty intersection should be undefined. We cannot have
$\bigcap(\emptyset)=V$ because $V$, the universal class, is not a set. All
expressions denote something in {\ZF} set theory; the definition of
intersection implies $\bigcap(\emptyset)=\emptyset$, but this value is
arbitrary. The rule \tdx{InterI} must have a premise to exclude
the empty intersection. Some of the laws governing intersections require
their arguments to be non-empty.
%the [p] gives better page breaking for the book

\tdx{ReplaceI}      [| x: A;  P(x,b);  !!y. P(x,y) ==> y=b |] ==>
              b : {\ttlbrace}y. x:A, P(x,y){\ttrbrace}
\tdx{ReplaceE}      [| b : {\ttlbrace}y. x:A, P(x,y){\ttrbrace};
              !!x. [| x: A;  P(x,b);  ALL y. P(x,y)-->y=b |] ==> R
           |] ==> R
\tdx{RepFunI}       [| a : A |] ==> f(a) : {\ttlbrace}f(x). x:A{\ttrbrace}
\tdx{RepFunE}       [| b : {\ttlbrace}f(x). x:A{\ttrbrace};
              !!x.[| x:A;  b=f(x) |] ==> P |] ==> P

\tdx{separation}     a : {\ttlbrace}x:A. P(x){\ttrbrace} <-> a:A & P(a)
\tdx{CollectI}       [| a:A;  P(a) |] ==> a : {\ttlbrace}x:A. P(x){\ttrbrace}
\tdx{CollectE}       [| a : {\ttlbrace}x:A. P(x){\ttrbrace};  [| a:A; P(a) |] ==> R |] ==> R
\tdx{CollectD1}      a : {\ttlbrace}x:A. P(x){\ttrbrace} ==> a:A
\tdx{CollectD2}      a : {\ttlbrace}x:A. P(x){\ttrbrace} ==> P(a)

\caption{Replacement and separation} \label{zf-lemmas2}
\tdx{UnionI}    [| B: C;  A: B |] ==> A: Union(C)
\tdx{UnionE}    [| A : Union(C);  !!B.[| A: B;  B: C |] ==> R |] ==> R

\tdx{InterI}    [| !!x. x: C ==> A: x;  c:C |] ==> A : Inter(C)
\tdx{InterD}    [| A : Inter(C);  B : C |] ==> A : B
\tdx{InterE}    [| A : Inter(C);  A:B ==> R;  ~ B:C ==> R |] ==> R

\tdx{UN_I}      [| a: A;  b: B(a) |] ==> b: (UN x:A. B(x))
\tdx{UN_E}      [| b : (UN x:A. B(x));  !!x.[| x: A;  b: B(x) |] ==> R
             |] ==> R
\tdx{INT_I}     [| !!x. x: A ==> b: B(x);  a: A |] ==> b: (INT x:A. B(x))
\tdx{INT_E}     [| b : (INT x:A. B(x));  a: A |] ==> b : B(a)

\caption{General union and intersection} \label{zf-lemmas3}
\tdx{pairing}      a:Upair(b,c) <-> (a=b | a=c)
\tdx{UpairI1}      a : Upair(a,b)
\tdx{UpairI2}      b : Upair(a,b)
\tdx{UpairE}       [| a : Upair(b,c);  a = b ==> P;  a = c ==> P |] ==> P

\caption{Unordered pairs} \label{zf-upair1}
\tdx{UnI1}      c : A ==> c : A Un B
\tdx{UnI2}      c : B ==> c : A Un B
\tdx{UnCI}      (~c : B ==> c : A) ==> c : A Un B
\tdx{UnE}       [| c : A Un B;  c:A ==> P;  c:B ==> P |] ==> P

\tdx{IntI}      [| c : A;  c : B |] ==> c : A Int B
\tdx{IntD1}     c : A Int B ==> c : A
\tdx{IntD2}     c : A Int B ==> c : B
\tdx{IntE}      [| c : A Int B;  [| c:A; c:B |] ==> P |] ==> P

\tdx{DiffI}     [| c : A;  ~ c : B |] ==> c : A - B
\tdx{DiffD1}    c : A - B ==> c : A
\tdx{DiffD2}    c : A - B ==> c ~: B
\tdx{DiffE}     [| c : A - B;  [| c:A; ~ c:B |] ==> P |] ==> P

\caption{Union, intersection, difference} \label{zf-Un}
\tdx{consI1}      a : cons(a,B)
\tdx{consI2}      a : B ==> a : cons(b,B)
\tdx{consCI}      (~ a:B ==> a=b) ==> a: cons(b,B)
\tdx{consE}       [| a : cons(b,A);  a=b ==> P;  a:A ==> P |] ==> P

\tdx{singletonI}  a : {\ttlbrace}a{\ttrbrace}
\tdx{singletonE}  [| a : {\ttlbrace}b{\ttrbrace};  a=b ==> P |] ==> P

\caption{Finite and singleton sets} \label{zf-upair2}
\tdx{succI1}      i : succ(i)
\tdx{succI2}      i : j ==> i : succ(j)
\tdx{succCI}      (~ i:j ==> i=j) ==> i: succ(j)
\tdx{succE}       [| i : succ(j);  i=j ==> P;  i:j ==> P |] ==> P
\tdx{succ_neq_0}  [| succ(n)=0 |] ==> P
\tdx{succ_inject} succ(m) = succ(n) ==> m=n

\caption{The successor function} \label{zf-succ}
\tdx{the_equality}     [| P(a);  !!x. P(x) ==> x=a |] ==> (THE x. P(x)) = a
\tdx{theI}             EX! x. P(x) ==> P(THE x. P(x))

\tdx{if_P}             P ==> (if P then a else b) = a
\tdx{if_not_P}        ~P ==> (if P then a else b) = b

\tdx{mem_asym}         [| a:b;  b:a |] ==> P
\tdx{mem_irrefl}       a:a ==> P

\caption{Descriptions; non-circularity} \label{zf-the}
\subsection{Unordered pairs and finite sets}
Figure~\ref{zf-upair1} presents the principle of unordered pairing, along
with its derived rules. Binary union and intersection are defined in terms
of unordered pairs (Fig.\ts\ref{zf-Un}). Set difference is also included. The
rule \tdx{UnCI} is useful for classical reasoning about unions,
like \texttt{disjCI}\@; it supersedes \tdx{UnI1} and
\tdx{UnI2}, but these rules are often easier to work with. For
intersection and difference we have both elimination and destruction rules.
Again, there is no reason to provide a minimal rule set.
Figure~\ref{zf-upair2} is concerned with finite sets: it presents rules
for~\texttt{cons}, the finite set constructor, and rules for singleton
sets. Figure~\ref{zf-succ} presents derived rules for the successor
function, which is defined in terms of~\texttt{cons}. The proof that {\tt
succ} is injective appears to require the Axiom of Foundation.
Definite descriptions (\sdx{THE}) are defined in terms of the singleton
set~$\{0\}$, but their derived rules fortunately hide this
(Fig.\ts\ref{zf-the}). The rule~\tdx{theI} is difficult to apply
because of the two occurrences of~$\Var{P}$. However,
\tdx{the_equality} does not have this problem and the files contain
many examples of its use.
Finally, the impossibility of having both $a\in b$ and $b\in a$
(\tdx{mem_asym}) is proved by applying the Axiom of Foundation to
the set $\{a,b\}$. The impossibility of $a\in a$ is a trivial consequence.
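The Foundation argument can be spelt out briefly (our paraphrase of the standard proof, not the Isabelle proof script):

```latex
\begin{quote}
Assume $a\in b$ and $b\in a$, and apply Foundation to $\{a,b\}$: since
$\{a,b\}\neq\emptyset$, some $x\in\{a,b\}$ satisfies
$(\forall y\in x)\,y\notin\{a,b\}$. If $x=a$ then $b\in x$ contradicts
$b\in\{a,b\}$; if $x=b$ then $a\in x$ contradicts $a\in\{a,b\}$.
\end{quote}
```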
See the file \texttt{ZF/upair.ML} for full proofs of the rules discussed in
this section.
\tdx{Union_upper}       B:A ==> B <= Union(A)
\tdx{Union_least}       [| !!x. x:A ==> x<=C |] ==> Union(A) <= C

\tdx{Inter_lower}       B:A ==> Inter(A) <= B
\tdx{Inter_greatest}    [| a:A;  !!x. x:A ==> C<=x |] ==> C <= Inter(A)

\tdx{Un_upper1}         A <= A Un B
\tdx{Un_upper2}         B <= A Un B
\tdx{Un_least}          [| A<=C;  B<=C |] ==> A Un B <= C

\tdx{Int_lower1}        A Int B <= A
\tdx{Int_lower2}        A Int B <= B
\tdx{Int_greatest}      [| C<=A;  C<=B |] ==> C <= A Int B

\tdx{Diff_subset}       A-B <= A
\tdx{Diff_contains}     [| C<=A;  C Int B = 0 |] ==> C <= A-B

\tdx{Collect_subset}    Collect(A,P) <= A

\caption{Subset and lattice properties} \label{zf-subset}
\subsection{Subset and lattice properties}
The subset relation is a complete lattice. Unions form least upper bounds;
non-empty intersections form greatest lower bounds. Figure~\ref{zf-subset}
shows the corresponding rules. A few other laws involving subsets are
included. Proofs are in the file \texttt{ZF/subset.ML}.

Reasoning directly about subsets often yields clearer proofs than
reasoning about the membership relation. Section~\ref{sec:ZF-pow-example}
below presents an example of this, proving the equation ${{\tt Pow}(A)\cap
{\tt Pow}(B)}= {\tt Pow}(A\cap B)$.
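The equation can also be checked mechanically on small finite sets. A Python sketch (ours, purely illustrative) computes powersets via \texttt{itertools.combinations}:

```python
# Check Pow(A) Int Pow(B) = Pow(A Int B) on small finite sets.
from itertools import combinations

def Pow(A):
    """All subsets of the finite set A, as frozensets."""
    elems = tuple(A)
    return frozenset(frozenset(c)
                     for r in range(len(elems) + 1)
                     for c in combinations(elems, r))

A, B = frozenset({1, 2, 3}), frozenset({2, 3, 4})
```

Such finite checks are no substitute for the subset-based proof, but they make a useful sanity test before attempting it.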
\tdx{Pair_inject1}    <a,b> = <c,d> ==> a=c
\tdx{Pair_inject2}    <a,b> = <c,d> ==> b=d
\tdx{Pair_inject}     [| <a,b> = <c,d>;  [| a=c; b=d |] ==> P |] ==> P
\tdx{Pair_neq_0}      <a,b>=0 ==> P

\tdx{fst_conv}        fst(<a,b>) = a
\tdx{snd_conv}        snd(<a,b>) = b
\tdx{split}           split(\%x y. c(x,y), <a,b>) = c(a,b)

\tdx{SigmaI}          [| a:A;  b:B(a) |] ==> <a,b> : Sigma(A,B)

\tdx{SigmaE}          [| c: Sigma(A,B);
                !!x y.[| x:A;  y:B(x);  c=<x,y> |] ==> P |] ==> P

\tdx{SigmaE2}         [| <a,b> : Sigma(A,B);
                [| a:A;  b:B(a) |] ==> P |] ==> P

\caption{Ordered pairs; projections; general sums} \label{zf-pair}
779 \subsection{Ordered pairs} \label{sec:pairs}
781 Figure~\ref{zf-pair} presents the rules governing ordered pairs,
782 projections and general sums. File \texttt{ZF/pair.ML} contains the
783 full (and tedious) proof that $\{\{a\},\{a,b\}\}$ functions as an ordered
784 pair. This property is expressed as two destruction rules,
785 \tdx{Pair_inject1} and \tdx{Pair_inject2}, and equivalently
786 as the elimination rule \tdx{Pair_inject}.
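As a concrete (non-Isabelle) illustration, the pairing laws can be checked in a small Python model of hereditarily finite sets; the names \texttt{kpair}, \texttt{kfst} and \texttt{ksnd} are ours, not part of \texttt{ZF}.

```python
# Toy model of Kuratowski pairs: kpair(a,b) builds {{a},{a,b}} with frozensets.
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def kfst(p):
    # the first component is the sole member of the intersection of p
    (a,) = frozenset.intersection(*p)
    return a

def ksnd(p):
    union = frozenset.union(*p)          # {a, b}
    if len(union) == 1:                  # degenerate case a = b
        return kfst(p)
    (b,) = union - {kfst(p)}
    return b
```

The pair is a non-empty set, mirroring \texttt{Pair_neq_0}, and both components are recoverable, mirroring the two destruction rules.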
788 The rule \tdx{Pair_neq_0} asserts $\pair{a,b}\neq\emptyset$. This
789 is a property of $\{\{a\},\{a,b\}\}$, and need not hold for other
790 encodings of ordered pairs. The non-standard ordered pairs mentioned below
791 satisfy $\pair{\emptyset;\emptyset}=\emptyset$.
793 The natural deduction rules \tdx{SigmaI} and \tdx{SigmaE}
794 assert that \cdx{Sigma}$(A,B)$ consists of all pairs of the form
795 $\pair{x,y}$, for $x\in A$ and $y\in B(x)$. The rule \tdx{SigmaE2}
796 merely states that $\pair{a,b}\in \texttt{Sigma}(A,B)$ implies $a\in A$ and $b\in B(a)$.
799 In addition, it is possible to use tuples as patterns in abstractions:
801 {\tt\%<$x$,$y$>. $t$} \quad stands for\quad \texttt{split(\%$x$ $y$.\ $t$)}
803 Nested patterns are translated recursively:
804 {\tt\%<$x$,$y$,$z$>. $t$} $\leadsto$ {\tt\%<$x$,<$y$,$z$>>. $t$} $\leadsto$
805 \texttt{split(\%$x$.\%<$y$,$z$>. $t$)} $\leadsto$ \texttt{split(\%$x$. split(\%$y$
806 $z$.\ $t$))}. The reverse translation is performed upon printing.
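A rough model of this translation scheme, with Python closures standing in for the object-level abstractions (the names are ours):

```python
# split applies a curried two-argument body to a pair:
#   split(%x y. c(x,y), <a,b>)  =  c(a,b)
def split(c, p):
    x, y = p
    return c(x)(y)

# %<x,y>. <y,x>  ~>  split(%x. %y. <y,x>)
swap = lambda p: split(lambda x: lambda y: (y, x), p)

# %<x,y,z>. t  ~>  split(%x. split(%y. %z. t))  on right-nested pairs
def triple(body):
    return lambda p: split(
        lambda x: lambda q: split(lambda y: lambda z: body(x, y, z), q), p)
```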
808 The translation between patterns and \texttt{split} is performed automatically
809 by the parser and printer. Thus the internal and external form of a term
810 may differ, which affects proofs. For example the term {\tt
811 (\%<x,y>.<y,x>)<a,b>} requires the theorem \texttt{split} to rewrite to {\tt<b,a>}.
814 In addition to explicit $\lambda$-abstractions, patterns can be used in any
815 variable binding construct which is internally described by a
816 $\lambda$-abstraction. Here are some important examples:
818 \item[Let:] \texttt{let {\it pattern} = $t$ in $u$}
819 \item[Choice:] \texttt{THE~{\it pattern}~.~$P$}
820 \item[Set operations:] \texttt{UN~{\it pattern}:$A$.~$B$}
821 \item[Comprehension:] \texttt{{\ttlbrace}~{\it pattern}:$A$~.~$P$~{\ttrbrace}}
829 \tdx{domainI} <a,b>: r ==> a : domain(r)
830 \tdx{domainE} [| a : domain(r); !!y. <a,y>: r ==> P |] ==> P
831 \tdx{domain_subset} domain(Sigma(A,B)) <= A
833 \tdx{rangeI} <a,b>: r ==> b : range(r)
834 \tdx{rangeE} [| b : range(r); !!x. <x,b>: r ==> P |] ==> P
835 \tdx{range_subset} range(A*B) <= B
837 \tdx{fieldI1} <a,b>: r ==> a : field(r)
838 \tdx{fieldI2} <a,b>: r ==> b : field(r)
839 \tdx{fieldCI} (~ <c,a>:r ==> <a,b>: r) ==> a : field(r)
841 \tdx{fieldE} [| a : field(r);
846 \tdx{field_subset} field(A*A) <= A
848 \caption{Domain, range and field of a relation} \label{zf-domrange}
853 \tdx{imageI} [| <a,b>: r; a:A |] ==> b : r``A
854 \tdx{imageE} [| b: r``A; !!x.[| <x,b>: r; x:A |] ==> P |] ==> P
856 \tdx{vimageI} [| <a,b>: r; b:B |] ==> a : r-``B
857 \tdx{vimageE} [| a: r-``B; !!x.[| <a,x>: r; x:B |] ==> P |] ==> P
859 \caption{Image and inverse image} \label{zf-domrange2}
863 \subsection{Relations}
864 Figure~\ref{zf-domrange} presents rules involving relations, which are sets
865 of ordered pairs. The converse of a relation~$r$ is the set of all pairs
866 $\pair{y,x}$ such that $\pair{x,y}\in r$; if $r$ is a function, then
867 {\cdx{converse}$(r)$} is its inverse. The rules for the domain
868 operation, namely \tdx{domainI} and~\tdx{domainE}, assert that
869 \cdx{domain}$(r)$ consists of all~$x$ such that $r$ contains
870 some pair of the form~$\pair{x,y}$. The range operation is similar, and
871 the field of a relation is merely the union of its domain and range.
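These operations are easy to model concretely, taking a relation to be a Python set of pairs (the function names are ours):

```python
# A relation is a set of ordered pairs.
def converse(r):  return {(y, x) for (x, y) in r}
def domain(r):    return {x for (x, _) in r}
def range_(r):    return {y for (_, y) in r}
def field(r):     return domain(r) | range_(r)
def image(r, A):  return {y for (x, y) in r if x in A}    # r``A
def vimage(r, B): return image(converse(r), B)            # r-``B

r = {(1, 'a'), (2, 'b'), (2, 'c')}    # a sample relation
```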
873 Figure~\ref{zf-domrange2} presents rules for images and inverse images.
874 Note that these operations are generalisations of range and domain,
875 respectively. See the file \texttt{ZF/domrange.ML} for derivations of the rules.
883 \tdx{fun_is_rel} f: Pi(A,B) ==> f <= Sigma(A,B)
885 \tdx{apply_equality} [| <a,b>: f; f: Pi(A,B) |] ==> f`a = b
886 \tdx{apply_equality2} [| <a,b>: f; <a,c>: f; f: Pi(A,B) |] ==> b=c
888 \tdx{apply_type} [| f: Pi(A,B); a:A |] ==> f`a : B(a)
889 \tdx{apply_Pair} [| f: Pi(A,B); a:A |] ==> <a,f`a>: f
890 \tdx{apply_iff} f: Pi(A,B) ==> <a,b>: f <-> a:A & f`a = b
892 \tdx{fun_extension} [| f : Pi(A,B); g: Pi(A,D);
893 !!x. x:A ==> f`x = g`x |] ==> f=g
895 \tdx{domain_type} [| <a,b> : f; f: Pi(A,B) |] ==> a : A
896 \tdx{range_type} [| <a,b> : f; f: Pi(A,B) |] ==> b : B(a)
898 \tdx{Pi_type} [| f: A->C; !!x. x:A ==> f`x: B(x) |] ==> f: Pi(A,B)
899 \tdx{domain_of_fun} f: Pi(A,B) ==> domain(f)=A
900 \tdx{range_of_fun} f: Pi(A,B) ==> f: A->range(f)
902 \tdx{restrict} a : A ==> restrict(f,A) ` a = f`a
903 \tdx{restrict_type} [| !!x. x:A ==> f`x: B(x) |] ==>
904 restrict(f,A) : Pi(A,B)
906 \caption{Functions} \label{zf-func1}
912 \tdx{lamI} a:A ==> <a,b(a)> : (lam x:A. b(x))
913 \tdx{lamE} [| p: (lam x:A. b(x)); !!x.[| x:A; p=<x,b(x)> |] ==> P
916 \tdx{lam_type} [| !!x. x:A ==> b(x): B(x) |] ==> (lam x:A. b(x)) : Pi(A,B)
918 \tdx{beta} a : A ==> (lam x:A. b(x)) ` a = b(a)
919 \tdx{eta} f : Pi(A,B) ==> (lam x:A. f`x) = f
921 \caption{$\lambda$-abstraction} \label{zf-lam}
927 \tdx{fun_empty} 0: 0->0
928 \tdx{fun_single} {\ttlbrace}<a,b>{\ttrbrace} : {\ttlbrace}a{\ttrbrace} -> {\ttlbrace}b{\ttrbrace}
930 \tdx{fun_disjoint_Un} [| f: A->B; g: C->D; A Int C = 0 |] ==>
931 (f Un g) : (A Un C) -> (B Un D)
933 \tdx{fun_disjoint_apply1} [| a:A; f: A->B; g: C->D; A Int C = 0 |] ==>
936 \tdx{fun_disjoint_apply2} [| c:C; f: A->B; g: C->D; A Int C = 0 |] ==>
939 \caption{Constructing functions from smaller sets} \label{zf-func2}
943 \subsection{Functions}
944 Functions, represented by graphs, are notoriously difficult to reason
945 about. The file \texttt{ZF/func.ML} derives many rules, which overlap more
946 than they ought. This section presents the more important rules.
948 Figure~\ref{zf-func1} presents the basic properties of \cdx{Pi}$(A,B)$,
949 the generalized function space. For example, if $f$ is a function and
950 $\pair{a,b}\in f$, then $f`a=b$ (\tdx{apply_equality}). Two functions
951 are equal provided they have equal domains and deliver equal results
952 (\tdx{fun_extension}).
954 By \tdx{Pi_type}, a function typing of the form $f\in A\to C$ can be
955 refined to the dependent typing $f\in\prod@{x\in A}B(x)$, given a suitable
956 family of sets $\{B(x)\}@{x\in A}$. Conversely, by \tdx{range_of_fun},
957 any dependent typing can be flattened to yield a function type of the form
958 $A\to C$; here, $C={\tt range}(f)$.
960 Among the laws for $\lambda$-abstraction, \tdx{lamI} and \tdx{lamE}
961 describe the graph of the generated function, while \tdx{beta} and
962 \tdx{eta} are the standard conversions. We essentially have a
963 dependently-typed $\lambda$-calculus (Fig.\ts\ref{zf-lam}).
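The function laws can likewise be checked in a graph-based model. In the Python sketch below (our names), \texttt{lam} builds the graph of a $\lambda$-abstraction and \texttt{apply_} plays the role of \texttt{f`a}:

```python
# A ZF function is identified with its graph: a single-valued set of pairs.
def lam(A, b):
    # lam x:A. b(x)
    return frozenset((x, b(x)) for x in A)

def apply_(f, a):
    # f`a: the unique y with <a,y> in f  (apply_equality)
    (y,) = {v for (u, v) in f if u == a}
    return y

def fun_range(f):
    return {v for (_, v) in f}

sq = lam({0, 1, 2, 3}, lambda x: x * x)   # a sample function
```

Here \texttt{beta} is immediate, and \texttt{eta} says that rebuilding a graph from its own applications yields the graph itself.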
965 Figure~\ref{zf-func2} presents some rules that can be used to construct
966 functions explicitly. We start with functions consisting of at most one
967 pair, and may form the union of two functions provided their domains are disjoint.
973 \tdx{Int_absorb} A Int A = A
974 \tdx{Int_commute} A Int B = B Int A
975 \tdx{Int_assoc} (A Int B) Int C = A Int (B Int C)
976 \tdx{Int_Un_distrib} (A Un B) Int C = (A Int C) Un (B Int C)
978 \tdx{Un_absorb} A Un A = A
979 \tdx{Un_commute} A Un B = B Un A
980 \tdx{Un_assoc} (A Un B) Un C = A Un (B Un C)
981 \tdx{Un_Int_distrib} (A Int B) Un C = (A Un C) Int (B Un C)
983 \tdx{Diff_cancel} A-A = 0
984 \tdx{Diff_disjoint} A Int (B-A) = 0
985 \tdx{Diff_partition} A<=B ==> A Un (B-A) = B
986 \tdx{double_complement} [| A<=B; B<= C |] ==> (B - (C-A)) = A
987 \tdx{Diff_Un} A - (B Un C) = (A-B) Int (A-C)
988 \tdx{Diff_Int} A - (B Int C) = (A-B) Un (A-C)
990 \tdx{Union_Un_distrib} Union(A Un B) = Union(A) Un Union(B)
991 \tdx{Inter_Un_distrib} [| a:A; b:B |] ==>
992 Inter(A Un B) = Inter(A) Int Inter(B)
994 \tdx{Int_Union_RepFun} A Int Union(B) = (UN C:B. A Int C)
996 \tdx{Un_Inter_RepFun} b:B ==>
997 A Un Inter(B) = (INT C:B. A Un C)
999 \tdx{SUM_Un_distrib1} (SUM x:A Un B. C(x)) =
1000 (SUM x:A. C(x)) Un (SUM x:B. C(x))
1002 \tdx{SUM_Un_distrib2} (SUM x:C. A(x) Un B(x)) =
1003 (SUM x:C. A(x)) Un (SUM x:C. B(x))
1005 \tdx{SUM_Int_distrib1} (SUM x:A Int B. C(x)) =
1006 (SUM x:A. C(x)) Int (SUM x:B. C(x))
1008 \tdx{SUM_Int_distrib2} (SUM x:C. A(x) Int B(x)) =
1009 (SUM x:C. A(x)) Int (SUM x:C. B(x))
1011 \caption{Equalities} \label{zf-equalities}
1017 % \cdx{1} & $i$ & & $\{\emptyset\}$ \\
1018 % \cdx{bool} & $i$ & & the set $\{\emptyset,1\}$ \\
1019 % \cdx{cond} & $[i,i,i]\To i$ & & conditional for \texttt{bool} \\
1020 % \cdx{not} & $i\To i$ & & negation for \texttt{bool} \\
1021 % \sdx{and} & $[i,i]\To i$ & Left 70 & conjunction for \texttt{bool} \\
1022 % \sdx{or} & $[i,i]\To i$ & Left 65 & disjunction for \texttt{bool} \\
1023 % \sdx{xor} & $[i,i]\To i$ & Left 65 & exclusive-or for \texttt{bool}
1027 \tdx{bool_def} bool == {\ttlbrace}0,1{\ttrbrace}
1028 \tdx{cond_def} cond(b,c,d) == if b=1 then c else d
1029 \tdx{not_def} not(b) == cond(b,0,1)
1030 \tdx{and_def} a and b == cond(a,b,0)
1031 \tdx{or_def} a or b == cond(a,1,b)
1032 \tdx{xor_def} a xor b == cond(a,not(b),b)
1034 \tdx{bool_1I} 1 : bool
1035 \tdx{bool_0I} 0 : bool
1036 \tdx{boolE} [| c: bool; c=1 ==> P; c=0 ==> P |] ==> P
1037 \tdx{cond_1} cond(1,c,d) = c
1038 \tdx{cond_0} cond(0,c,d) = d
1040 \caption{The booleans} \label{zf-bool}
1044 \section{Further developments}
1045 The next group of developments is complex and extensive, and only
1046 highlights can be covered here. It involves many theories and ML files of proofs.
1049 Figure~\ref{zf-equalities} presents commutative, associative, distributive,
1050 and idempotency laws of union and intersection, along with other equations.
1051 See file \texttt{ZF/equalities.ML}.
1053 Theory \thydx{Bool} defines $\{0,1\}$ as a set of booleans, with the usual
1054 operators including a conditional (Fig.\ts\ref{zf-bool}). Although {\ZF} is a
1055 first-order theory, you can obtain the effect of higher-order logic using
1056 \texttt{bool}-valued functions, for example. The constant~\texttt{1} is
1057 translated to \texttt{succ(0)}.
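These definitions transliterate directly; here is a Python sketch, with 0 and 1 as the two booleans (names carry underscores only to avoid Python keywords):

```python
def cond(b, c, d):
    # cond(b,c,d) == if b=1 then c else d
    return c if b == 1 else d

def not_(b):    return cond(b, 0, 1)
def and_(a, b): return cond(a, b, 0)
def or_(a, b):  return cond(a, 1, b)
def xor_(a, b): return cond(a, not_(b), b)
```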
1062 \it symbol & \it meta-type & \it priority & \it description \\
1063 \tt + & $[i,i]\To i$ & Right 65 & disjoint union operator\\
1064 \cdx{Inl}~~\cdx{Inr} & $i\To i$ & & injections\\
1065 \cdx{case} & $[i\To i,i\To i, i]\To i$ & & conditional for $A+B$
1068 \tdx{sum_def} A+B == {\ttlbrace}0{\ttrbrace}*A Un {\ttlbrace}1{\ttrbrace}*B
1069 \tdx{Inl_def} Inl(a) == <0,a>
1070 \tdx{Inr_def} Inr(b) == <1,b>
1071 \tdx{case_def} case(c,d,u) == split(\%y z. cond(y, d(z), c(z)), u)
1073 \tdx{sum_InlI} a : A ==> Inl(a) : A+B
1074 \tdx{sum_InrI} b : B ==> Inr(b) : A+B
1076 \tdx{Inl_inject} Inl(a)=Inl(b) ==> a=b
1077 \tdx{Inr_inject} Inr(a)=Inr(b) ==> a=b
1078 \tdx{Inl_neq_Inr} Inl(a)=Inr(b) ==> P
1080 \tdx{sumE2} u: A+B ==> (EX x. x:A & u=Inl(x)) | (EX y. y:B & u=Inr(y))
1082 \tdx{case_Inl} case(c,d,Inl(a)) = c(a)
1083 \tdx{case_Inr} case(c,d,Inr(b)) = d(b)
1085 \caption{Disjoint unions} \label{zf-sum}
1089 \subsection{Disjoint unions}
1091 Theory \thydx{Sum} defines the disjoint union of two sets, with
1092 injections and a case analysis operator (Fig.\ts\ref{zf-sum}). Disjoint
1093 unions play a role in datatype definitions, particularly when there is
1094 mutual recursion~\cite{paulson-set-II}.
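A quick model of \texttt{sum_def} and its case operator in Python, with tags 0 and 1 as in the definition (the names are ours):

```python
def Inl(a): return (0, a)       # Inl(a) == <0,a>
def Inr(b): return (1, b)       # Inr(b) == <1,b>

def case(c, d, u):
    # apply c to an Inl value, d to an Inr value
    tag, v = u
    return c(v) if tag == 0 else d(v)

def sum_(A, B):
    # A+B == {0}*A Un {1}*B
    return {Inl(a) for a in A} | {Inr(b) for b in B}
```

The tags keep the two copies apart, so even \texttt{sum_(\{0\}, \{0\})} has two elements.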
1098 \tdx{QPair_def} <a;b> == a+b
1099 \tdx{qsplit_def} qsplit(c,p) == THE y. EX a b. p=<a;b> & y=c(a,b)
1100 \tdx{qfsplit_def} qfsplit(R,z) == EX x y. z=<x;y> & R(x,y)
1101 \tdx{qconverse_def} qconverse(r) == {\ttlbrace}z. w:r, EX x y. w=<x;y> & z=<y;x>{\ttrbrace}
1102 \tdx{QSigma_def} QSigma(A,B) == UN x:A. UN y:B(x). {\ttlbrace}<x;y>{\ttrbrace}
1104 \tdx{qsum_def} A <+> B == ({\ttlbrace}0{\ttrbrace} <*> A) Un ({\ttlbrace}1{\ttrbrace} <*> B)
1105 \tdx{QInl_def} QInl(a) == <0;a>
1106 \tdx{QInr_def} QInr(b) == <1;b>
1107 \tdx{qcase_def} qcase(c,d) == qsplit(\%y z. cond(y, d(z), c(z)))
1109 \caption{Non-standard pairs, products and sums} \label{zf-qpair}
1113 \subsection{Non-standard ordered pairs}
1115 Theory \thydx{QPair} defines a notion of ordered pair that admits
1116 non-well-founded tupling (Fig.\ts\ref{zf-qpair}). Such pairs are written
1117 {\tt<$a$;$b$>}. It also defines the eliminator \cdx{qsplit}, the
1118 converse operator \cdx{qconverse}, and the summation operator
1119 \cdx{QSigma}. These are completely analogous to the corresponding
1120 versions for standard ordered pairs. The theory goes on to define a
1121 non-standard notion of disjoint sum using non-standard pairs. All of these
1122 concepts satisfy the same properties as their standard counterparts; in
1123 addition, {\tt<$a$;$b$>} is continuous. The theory supports coinductive
1124 definitions, for example of infinite lists~\cite{paulson-mscs}.
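The definition $\pair{a;b} \equiv a+b$ is itself executable on finite sets; the Python sketch below tags the elements of the two components, as in \texttt{qsum_def}:

```python
def qpair(a, b):
    # <a;b> == a+b == ({0}*a) Un ({1}*b): tag every element of each component
    return frozenset({(0, x) for x in a} | {(1, y) for y in b})

empty = frozenset()
```

Unlike Kuratowski pairs, $\pair{\emptyset;\emptyset}=\emptyset$, which is the non-standard behaviour mentioned above.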
1128 \tdx{bnd_mono_def} bnd_mono(D,h) ==
1129 h(D)<=D & (ALL W X. W<=X --> X<=D --> h(W) <= h(X))
1131 \tdx{lfp_def} lfp(D,h) == Inter({\ttlbrace}X: Pow(D). h(X) <= X{\ttrbrace})
1132 \tdx{gfp_def} gfp(D,h) == Union({\ttlbrace}X: Pow(D). X <= h(X){\ttrbrace})
1135 \tdx{lfp_lowerbound} [| h(A) <= A; A<=D |] ==> lfp(D,h) <= A
1137 \tdx{lfp_subset} lfp(D,h) <= D
1139 \tdx{lfp_greatest} [| bnd_mono(D,h);
1140 !!X. [| h(X) <= X; X<=D |] ==> A<=X
1141 |] ==> A <= lfp(D,h)
1143 \tdx{lfp_Tarski} bnd_mono(D,h) ==> lfp(D,h) = h(lfp(D,h))
1145 \tdx{induct} [| a : lfp(D,h); bnd_mono(D,h);
1146 !!x. x : h(Collect(lfp(D,h),P)) ==> P(x)
1149 \tdx{lfp_mono} [| bnd_mono(D,h); bnd_mono(E,i);
1150 !!X. X<=D ==> h(X) <= i(X)
1151 |] ==> lfp(D,h) <= lfp(E,i)
1153 \tdx{gfp_upperbound} [| A <= h(A); A<=D |] ==> A <= gfp(D,h)
1155 \tdx{gfp_subset} gfp(D,h) <= D
1157 \tdx{gfp_least} [| bnd_mono(D,h);
1158 !!X. [| X <= h(X); X<=D |] ==> X<=A
1159 |] ==> gfp(D,h) <= A
1161 \tdx{gfp_Tarski} bnd_mono(D,h) ==> gfp(D,h) = h(gfp(D,h))
1163 \tdx{coinduct} [| bnd_mono(D,h); a: X; X <= h(X Un gfp(D,h)); X <= D
1166 \tdx{gfp_mono} [| bnd_mono(D,h); D <= E;
1167 !!X. X<=D ==> h(X) <= i(X)
1168 |] ==> gfp(D,h) <= gfp(E,i)
1170 \caption{Least and greatest fixedpoints} \label{zf-fixedpt}
1174 \subsection{Least and greatest fixedpoints}
1176 The Knaster-Tarski Theorem states that every monotone function over a
1177 complete lattice has a fixedpoint. Theory \thydx{Fixedpt} proves the
1178 Theorem only for a particular lattice, namely the lattice of subsets of a
1179 set (Fig.\ts\ref{zf-fixedpt}). The theory defines least and greatest
1180 fixedpoint operators with corresponding induction and coinduction rules.
1181 These are essential to many definitions that follow, including the natural
1182 numbers and the transitive closure operator. The (co)inductive definition
1183 package also uses the fixedpoint operators~\cite{paulson-CADE}. See
1184 Davey and Priestley~\cite{davey-priestley} for more on the Knaster-Tarski
1185 Theorem and my paper~\cite{paulson-set-II} for discussion of the Isabelle proofs.
1188 Monotonicity properties are proved for most of the set-forming operations:
1189 union, intersection, Cartesian product, image, domain, range, etc. These
1190 are useful for applying the Knaster-Tarski Fixedpoint Theorem. The proofs
1191 themselves are trivial applications of Isabelle's classical reasoner. See
1192 file \texttt{ZF/mono.ML}.
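For intuition, \texttt{lfp_def} can be executed by brute force over the powerset of a small bounding set (Python sketch; the particular monotone operator \texttt{h} is our own example):

```python
from itertools import chain, combinations

def powerset(D):
    xs = list(D)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(xs, n)
                                         for n in range(len(xs) + 1))]

def lfp(D, h):
    # lfp(D,h) == Inter({X : Pow(D). h(X) <= X})
    pre = [X for X in powerset(D) if h(X) <= X]
    return frozenset(D).intersection(*pre)

# a bounded monotone operator: adjoin 0 and close under +2 inside D
D = {0, 1, 2, 3, 4, 5}
h = lambda X: frozenset({0} | {x + 2 for x in X if x + 2 in D})
```

The result is a fixedpoint of \texttt{h}, as \texttt{lfp_Tarski} asserts for bounded monotone operators.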
1195 \subsection{Finite sets and lists}
1197 Theory \texttt{Finite} (Figure~\ref{zf-fin}) defines the finite set operator;
1198 ${\tt Fin}(A)$ is the set of all finite sets over~$A$. The theory employs
1199 Isabelle's inductive definition package, which proves various rules
1200 automatically. The induction rule shown is stronger than the one proved by
1201 the package. The theory also defines the set of all finite functions
1202 between two given sets.
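Over a finite base set the closure described by the two introduction rules can be computed directly (Python sketch; the names are ours):

```python
def fin(A):
    # least collection containing 0 and closed under cons(a,b) = {a} Un b
    fins = {frozenset()}                           # Fin.emptyI
    while True:
        new = {b | {a} for a in A for b in fins}   # Fin.consI
        if new <= fins:
            return fins
        fins |= new

F3 = fin({1, 2, 3})    # all finite subsets of {1,2,3}
```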
1206 \tdx{Fin.emptyI} 0 : Fin(A)
1207 \tdx{Fin.consI} [| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)
1212 !!x y. [| x: A; y: Fin(A); x~:y; P(y) |] ==> P(cons(x,y))
1215 \tdx{Fin_mono} A<=B ==> Fin(A) <= Fin(B)
1216 \tdx{Fin_UnI} [| b: Fin(A); c: Fin(A) |] ==> b Un c : Fin(A)
1217 \tdx{Fin_UnionI} C : Fin(Fin(A)) ==> Union(C) : Fin(A)
1218 \tdx{Fin_subset} [| c<=b; b: Fin(A) |] ==> c: Fin(A)
1220 \caption{The finite set operator} \label{zf-fin}
1225 \it symbol & \it meta-type & \it priority & \it description \\
1226 \cdx{list} & $i\To i$ && lists over some set\\
1227 \cdx{list_case} & $[i, [i,i]\To i, i] \To i$ && conditional for $list(A)$ \\
1228 \cdx{map} & $[i\To i, i] \To i$ & & mapping functional\\
1229 \cdx{length} & $i\To i$ & & length of a list\\
1230 \cdx{rev} & $i\To i$ & & reverse of a list\\
1231 \tt \at & $[i,i]\To i$ & Right 60 & append for lists\\
1232 \cdx{flat} & $i\To i$ & & append of list of lists
1235 \underscoreon %%because @ is used here
1237 \tdx{NilI} Nil : list(A)
1238 \tdx{ConsI} [| a: A; l: list(A) |] ==> Cons(a,l) : list(A)
1243 !!x y. [| x: A; y: list(A); P(y) |] ==> P(Cons(x,y))
1246 \tdx{Cons_iff} Cons(a,l)=Cons(a',l') <-> a=a' & l=l'
1247 \tdx{Nil_Cons_iff} ~ Nil=Cons(a,l)
1249 \tdx{list_mono} A<=B ==> list(A) <= list(B)
1251 \tdx{map_ident} l: list(A) ==> map(\%u. u, l) = l
1252 \tdx{map_compose} l: list(A) ==> map(h, map(j,l)) = map(\%u. h(j(u)), l)
1253 \tdx{map_app_distrib} xs: list(A) ==> map(h, xs@ys) = map(h,xs) @ map(h,ys)
1255 [| l: list(A); !!x. x: A ==> h(x): B |] ==> map(h,l) : list(B)
1257 ls: list(list(A)) ==> map(h, flat(ls)) = flat(map(map(h),ls))
1259 \caption{Lists} \label{zf-list}
1263 Figure~\ref{zf-list} presents the set of lists over~$A$, ${\tt list}(A)$. The
1264 definition employs Isabelle's datatype package, which defines the introduction
1265 and induction rules automatically, as well as the constructors, case operator
1266 (\verb|list_case|) and recursion operator. The theory then defines the usual
1267 list functions by primitive recursion. See theory \texttt{List}.
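The primitive recursions for \texttt{map}, append and \texttt{flat} can be modelled with right-nested pairs in Python (\texttt{Nil} as the empty tuple; the names are ours):

```python
Nil = ()
def Cons(a, l): return (a, l)

def map_(h, l):
    return Nil if l == Nil else Cons(h(l[0]), map_(h, l[1]))

def app(xs, ys):                      # xs @ ys
    return ys if xs == Nil else Cons(xs[0], app(xs[1], ys))

def flat(ls):                         # append of a list of lists
    return Nil if ls == Nil else app(ls[0], flat(ls[1]))

xs = Cons(1, Cons(2, Nil))
ys = Cons(3, Nil)
```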
1270 \subsection{Miscellaneous}
1274 \it symbol & \it meta-type & \it priority & \it description \\
1275 \sdx{O} & $[i,i]\To i$ & Right 60 & composition ($\circ$) \\
1276 \cdx{id} & $i\To i$ & & identity function \\
1277 \cdx{inj} & $[i,i]\To i$ & & injective function space\\
1278 \cdx{surj} & $[i,i]\To i$ & & surjective function space\\
1279 \cdx{bij} & $[i,i]\To i$ & & bijective function space
1283 \tdx{comp_def} r O s == {\ttlbrace}xz : domain(s)*range(r) .
1284 EX x y z. xz=<x,z> & <x,y>:s & <y,z>:r{\ttrbrace}
1285 \tdx{id_def} id(A) == (lam x:A. x)
1286 \tdx{inj_def} inj(A,B) == {\ttlbrace} f: A->B. ALL w:A. ALL x:A. f`w=f`x --> w=x {\ttrbrace}
1287 \tdx{surj_def} surj(A,B) == {\ttlbrace} f: A->B . ALL y:B. EX x:A. f`x=y {\ttrbrace}
1288 \tdx{bij_def} bij(A,B) == inj(A,B) Int surj(A,B)
1291 \tdx{left_inverse} [| f: inj(A,B); a: A |] ==> converse(f)`(f`a) = a
1292 \tdx{right_inverse} [| f: inj(A,B); b: range(f) |] ==>
1293 f`(converse(f)`b) = b
1295 \tdx{inj_converse_inj} f: inj(A,B) ==> converse(f): inj(range(f), A)
1296 \tdx{bij_converse_bij} f: bij(A,B) ==> converse(f): bij(B,A)
1298 \tdx{comp_type} [| s<=A*B; r<=B*C |] ==> (r O s) <= A*C
1299 \tdx{comp_assoc} (r O s) O t = r O (s O t)
1301 \tdx{left_comp_id} r<=A*B ==> id(B) O r = r
1302 \tdx{right_comp_id} r<=A*B ==> r O id(A) = r
1304 \tdx{comp_func} [| g:A->B; f:B->C |] ==> (f O g):A->C
1305 \tdx{comp_func_apply} [| g:A->B; f:B->C; a:A |] ==> (f O g)`a = f`(g`a)
1307 \tdx{comp_inj} [| g:inj(A,B); f:inj(B,C) |] ==> (f O g):inj(A,C)
1308 \tdx{comp_surj} [| g:surj(A,B); f:surj(B,C) |] ==> (f O g):surj(A,C)
1309 \tdx{comp_bij} [| g:bij(A,B); f:bij(B,C) |] ==> (f O g):bij(A,C)
1311 \tdx{left_comp_inverse} f: inj(A,B) ==> converse(f) O f = id(A)
1312 \tdx{right_comp_inverse} f: surj(A,B) ==> f O converse(f) = id(B)
1314 \tdx{bij_disjoint_Un}
1315 [| f: bij(A,B); g: bij(C,D); A Int C = 0; B Int D = 0 |] ==>
1316 (f Un g) : bij(A Un C, B Un D)
1318 \tdx{restrict_bij} [| f:inj(A,B); C<=A |] ==> restrict(f,C): bij(C, f``C)
1320 \caption{Permutations} \label{zf-perm}
1323 The theory \thydx{Perm} is concerned with permutations (bijections) and
1324 related concepts. These include composition of relations, the identity
1325 relation, and three specialized function spaces: injective, surjective and
1326 bijective. Figure~\ref{zf-perm} displays many of their properties that
1327 have been proved. These results are fundamental to a treatment of
1328 equipollence and cardinality.
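Composition and the inverse laws are again checkable in the set-of-pairs model (Python sketch; \texttt{comp(r, s)} models \texttt{r O s}):

```python
def comp(r, s):
    # r O s: <x,z> whenever <x,y> : s and <y,z> : r for some y
    return {(x, z) for (x, y1) in s for (y2, z) in r if y1 == y2}

def converse(r): return {(y, x) for (x, y) in r}
def id_(A):      return {(x, x) for x in A}

f = {(0, 'a'), (1, 'b')}                  # an injection {0,1} -> {'a','b'}
r1, r2, r3 = {(1, 2)}, {(0, 1)}, {(9, 0)}
```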
1330 Theory \thydx{Univ} defines a `universe' $\texttt{univ}(A)$, which is used by
1331 the datatype package. This set contains $A$ and the
1332 natural numbers. Vitally, it is closed under finite products: ${\tt
1333 univ}(A)\times{\tt univ}(A)\subseteq{\tt univ}(A)$. This theory also
1334 defines the cumulative hierarchy of axiomatic set theory, which
1335 traditionally is written $V@\alpha$ for an ordinal~$\alpha$. The
1336 `universe' is a simple generalization of~$V@\omega$.
1338 Theory \thydx{QUniv} defines a `universe' ${\tt quniv}(A)$, which is used by
1339 the datatype package to construct codatatypes such as streams. It is
1340 analogous to ${\tt univ}(A)$ (and is defined in terms of it) but is closed
1341 under the non-standard product and sum.
1344 \section{Automatic Tools}
1346 {\ZF} provides the simplifier and the classical reasoner. Moreover it
1347 supplies a specialized tool to infer `types' of terms.
1349 \subsection{Simplification}
1351 {\ZF} inherits simplification from {\FOL} but adapts it for set theory. The
1352 extraction of rewrite rules takes the {\ZF} primitives into account. It can
1353 strip bounded universal quantifiers from a formula; for example, ${\forall
1354 x\in A. f(x)=g(x)}$ yields the conditional rewrite rule $x\in A \Imp
1355 f(x)=g(x)$. Given $a\in\{x\in A. P(x)\}$ it extracts rewrite rules from $a\in
1356 A$ and~$P(a)$. It can also break down $a\in A\int B$ and $a\in A-B$.
1358 Simplification tactics such as \texttt{Asm_simp_tac} and
1359 \texttt{Full_simp_tac} use the default simpset (\texttt{simpset()}), which
1360 works for most purposes. A small simplification set for set theory is
1361 called~\ttindexbold{ZF_ss}, and you can even use \ttindex{FOL_ss} as a minimal
1362 starting point. \texttt{ZF_ss} contains congruence rules for all the binding
1363 operators of {\ZF}\@. It contains all the conversion rules, such as
1364 \texttt{fst} and \texttt{snd}, as well as the rewrites shown in
1365 Fig.\ts\ref{zf-simpdata}. See the file \texttt{ZF/simpdata.ML} for a fuller list.
1369 \subsection{Classical Reasoning}
1371 As for the classical reasoner, tactics such as \texttt{Blast_tac} and {\tt
1372 Best_tac} refer to the default claset (\texttt{claset()}). This works for
1373 most purposes. Named clasets include \ttindexbold{ZF_cs} (basic set theory)
1374 and \ttindexbold{le_cs} (useful for reasoning about the relations $<$ and
1375 $\le$). You can use \ttindex{FOL_cs} as a minimal basis for building your own
1376 clasets. See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
1377 {Chap.\ts\ref{chap:classical}} for more discussion of classical proof methods.
1382 a\in \emptyset & \bimp & \bot\\
1383 a \in A \un B & \bimp & a\in A \disj a\in B\\
1384 a \in A \int B & \bimp & a\in A \conj a\in B\\
1385 a \in A-B & \bimp & a\in A \conj \neg (a\in B)\\
1386 \pair{a,b}\in {\tt Sigma}(A,B)
1387 & \bimp & a\in A \conj b\in B(a)\\
1388 a \in {\tt Collect}(A,P) & \bimp & a\in A \conj P(a)\\
1389 (\forall x \in \emptyset. P(x)) & \bimp & \top\\
1390 (\forall x \in A. \top) & \bimp & \top
1392 \caption{Some rewrite rules for set theory} \label{zf-simpdata}
1396 \subsection{Type-Checking Tactics}
1397 \index{type-checking tactics}
1399 Isabelle/{\ZF} provides simple tactics to help automate those proofs that are
1400 essentially type-checking. Such proofs are built by applying rules such as
1403 [| ?P ==> ?a: ?A; ~?P ==> ?b: ?A |] ==> (if ?P then ?a else ?b): ?A
1405 [| ?m : nat; ?n : nat |] ==> ?m #+ ?n : nat
1407 ?a : ?A ==> Inl(?a) : ?A + ?B
1409 In typical applications, the goal has the form $t\in\Var{A}$: in other words,
1410 we have a specific term~$t$ and need to infer its `type' by instantiating the
1411 set variable~$\Var{A}$. Neither the simplifier nor the classical reasoner
1412 does this job well. The if-then-else rule, and many similar ones, can make
1413 the classical reasoner loop. The simplifier refuses (on principle) to
1414 instantiate variables during rewriting, so goals such as \texttt{i\#+j :\ ?A} remain unsolved.
1417 The simplifier calls the type-checker to solve rewritten subgoals: this stage
1418 can indeed instantiate variables. If you have defined new constants and
1419 proved type-checking rules for them, then insert the rules using
1420 \texttt{AddTCs} and the rest should be automatic. In particular, the
1421 simplifier will use type-checking to help satisfy conditional rewrite rules.
1422 Call the tactic \ttindex{Typecheck_tac} to break down all subgoals using
1423 type-checking rules.
1425 Though the easiest way to invoke the type-checker is via the simplifier,
1426 specialized applications may require more detailed knowledge of
1427 the type-checking primitives. They are modelled on the simplifier's:
1428 \begin{ttdescription}
1429 \item[\ttindexbold{tcset}] is the type of tcsets: sets of type-checking rules.
1431 \item[\ttindexbold{addTCs}] is an infix operator to add type-checking rules to a tcset.
1434 \item[\ttindexbold{delTCs}] is an infix operator to remove type-checking rules from a tcset.
1437 \item[\ttindexbold{typecheck_tac}] is a tactic for attempting to prove all
1438 subgoals using the rules given in its argument, a tcset.
1441 Tcsets, like simpsets, are associated with theories and are merged when
1442 theories are merged. There are further primitives that use the default tcset.
1443 \begin{ttdescription}
1444 \item[\ttindexbold{tcset}] is a function to return the default tcset; use the
1445 expression \texttt{tcset()}.
1447 \item[\ttindexbold{AddTCs}] adds type-checking rules to the default tcset.
1449 \item[\ttindexbold{DelTCs}] removes type-checking rules from the default tcset.
1452 \item[\ttindexbold{Typecheck_tac}] calls \texttt{typecheck_tac} using the default tcset.
1456 To supply some type-checking rules temporarily, using \texttt{Addrules} and
1457 later \texttt{Delrules} is the simplest way. There is also a high-tech
1458 approach. Call the simplifier with a new solver expressed using
1459 \ttindexbold{type_solver_tac} and your temporary type-checking rules.
1462 (simpset() setSolver type_solver_tac (tcset() addTCs prems)) 2);
1466 \section{Natural number and integer arithmetic}
1468 \index{arithmetic|(}
1470 \begin{figure}\small
1471 \index{#*@{\tt\#*} symbol}
1474 \index{#+@{\tt\#+} symbol}
1475 \index{#-@{\tt\#-} symbol}
1477 \it symbol & \it meta-type & \it priority & \it description \\
1478 \cdx{nat} & $i$ & & set of natural numbers \\
1479 \cdx{nat_case}& $[i,i\To i,i]\To i$ & & conditional for $nat$\\
1480 \tt \#* & $[i,i]\To i$ & Left 70 & multiplication \\
1481 \tt div & $[i,i]\To i$ & Left 70 & division\\
1482 \tt mod & $[i,i]\To i$ & Left 70 & modulus\\
1483 \tt \#+ & $[i,i]\To i$ & Left 65 & addition\\
1484 \tt \#- & $[i,i]\To i$ & Left 65 & subtraction
1488 \tdx{nat_def} nat == lfp(lam r: Pow(Inf). {\ttlbrace}0{\ttrbrace} Un {\ttlbrace}succ(x). x:r{\ttrbrace})
1490 \tdx{nat_case_def} nat_case(a,b,k) ==
1491 THE y. k=0 & y=a | (EX x. k=succ(x) & y=b(x))
1493 \tdx{nat_0I} 0 : nat
1494 \tdx{nat_succI} n : nat ==> succ(n) : nat
1497 [| n: nat; P(0); !!x. [| x: nat; P(x) |] ==> P(succ(x))
1500 \tdx{nat_case_0} nat_case(a,b,0) = a
1501 \tdx{nat_case_succ} nat_case(a,b,succ(m)) = b(m)
1503 \tdx{add_0_natify} 0 #+ n = natify(n)
1504 \tdx{add_succ} succ(m) #+ n = succ(m #+ n)
1506 \tdx{mult_type} m #* n : nat
1507 \tdx{mult_0} 0 #* n = 0
1508 \tdx{mult_succ} succ(m) #* n = n #+ (m #* n)
1509 \tdx{mult_commute} m #* n = n #* m
1510 \tdx{add_mult_dist} (m #+ n) #* k = (m #* k) #+ (n #* k)
1511 \tdx{mult_assoc} (m #* n) #* k = m #* (n #* k)
1512 \tdx{mod_div_equality} m: nat ==> (m div n)#*n #+ m mod n = m
1514 \caption{The natural numbers} \label{zf-nat}
1517 \index{natural numbers}
1519 Theory \thydx{Nat} defines the natural numbers and mathematical
1520 induction, along with a case analysis operator. The set of natural
1521 numbers, here called \texttt{nat}, is known in set theory as the ordinal~$\omega$.
1523 Theory \thydx{Arith} develops arithmetic on the natural numbers
1524 (Fig.\ts\ref{zf-nat}). Addition, multiplication and subtraction are defined
1525 by primitive recursion. Division and remainder are defined by repeated
1526 subtraction, which requires well-founded recursion; the termination argument
1527 relies on the divisor's being non-zero. Many properties are proved:
1528 commutative, associative and distributive laws, identity and cancellation
1529 laws, etc. The most interesting result is perhaps the theorem $a \bmod b + (a/b)\times b = a$.
1532 To minimize the need for tedious proofs of $t\in\texttt{nat}$, the arithmetic
1533 operators coerce their arguments to be natural numbers. The function
1534 \cdx{natify} is defined such that $\texttt{natify}(n) = n$ if $n$ is a natural
1535 number, $\texttt{natify}(\texttt{succ}(x)) =
1536 \texttt{succ}(\texttt{natify}(x))$ for all $x$, and finally
1537 $\texttt{natify}(x)=0$ in all other cases. The benefit is that the addition,
1538 subtraction, multiplication, division and remainder operators always return
1539 natural numbers, regardless of their arguments. Algebraic laws (commutative,
1540 associative, distributive) are unconditional. Occurrences of \texttt{natify}
1541 as operands of those operators are simplified away. Any remaining occurrences
1542 can either be tolerated or else eliminated by proving that the argument is a natural number.
1545 The simplifier automatically cancels common terms on the opposite sides of
1546 subtraction and of relations ($=$, $<$ and $\le$). Here is an example:
1548 1. i #+ j #+ k #- j < k #+ l
1550 1. natify(i) < natify(l)
1552 Given the assumptions \texttt{i:nat} and \texttt{l:nat}, both occurrences of
1553 \cdx{natify} would be simplified away.
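The coercion behaviour and \texttt{mod_div_equality} (Fig.\ts\ref{zf-nat}) are easy to spot-check outside Isabelle (Python sketch; this \texttt{natify} maps any non-natural argument to 0, as described in the text):

```python
def natify(x):
    # identity on natural numbers; every other argument is coerced to 0
    return x if isinstance(x, int) and not isinstance(x, bool) and x >= 0 else 0

def nadd(m, n):  return natify(m) + natify(n)    # models #+
def nmult(m, n): return natify(m) * natify(n)    # models #*
```

With the coercion in place, the algebraic laws hold for arbitrary arguments, matching the unconditional laws of the theory.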
1556 \begin{figure}\small
1557 \index{$*@{\tt\$*} symbol}
1558 \index{$+@{\tt\$+} symbol}
1559 \index{$-@{\tt\$-} symbol}
1561 \it symbol & \it meta-type & \it priority & \it description \\
1562 \cdx{int} & $i$ & & set of integers \\
1563 \tt \$* & $[i,i]\To i$ & Left 70 & multiplication \\
1564 \tt \$+ & $[i,i]\To i$ & Left 65 & addition\\
1565 \tt \$- & $[i,i]\To i$ & Left 65 & subtraction\\
1566 \tt \$< & $[i,i]\To o$ & Left 50 & $<$ on integers\\
1567 \tt \$<= & $[i,i]\To o$ & Left 50 & $\le$ on integers
1571 \tdx{zadd_0_intify} 0 $+ n = intify(n)
1573 \tdx{zmult_type} m $* n : int
1574 \tdx{zmult_0} 0 $* n = 0
1575 \tdx{zmult_commute} m $* n = n $* m
1576 \tdx{zadd_zmult_dist} (m $+ n) $* k = (m $* k) $+ (n $* k)
1577 \tdx{zmult_assoc} (m $* n) $* k = m $* (n $* k)
1579 \caption{The integers} \label{zf-int}
1585 Theory \thydx{Int} defines the integers as equivalence classes of pairs of natural
1586 numbers. Figure~\ref{zf-int} presents a tidy collection of laws. In
1587 fact, a large library of facts is proved, including monotonicity laws for
1588 addition and multiplication, covering both positive and negative operands.
1590 As with the natural numbers, the need for typing proofs is minimized. All the
1591 operators defined in Fig.\ts\ref{zf-int} coerce their operands to integers by
1592 applying the function \cdx{intify}. This function is the identity on integers
1593 and maps other operands to zero.
Decimal notation is provided for the integers. Numbers, written as
\texttt{\#$nnn$} or \texttt{\#-$nnn$}, are represented internally in
two's-complement binary. Expressions involving addition, subtraction and
multiplication of numeral constants are evaluated (with acceptable efficiency)
by simplification. The simplifier also collects similar terms, multiplying
them by a numerical coefficient. It also cancels occurrences of the same
terms on opposite sides of the relational operators. Example:
\begin{ttbox}
 1. y $+ z $+ #-3 $* x $+ y $<= x $* #2 $+ z
\end{ttbox}
Simplification yields
\begin{ttbox}
 1. #2 $* y $<= #5 $* x
\end{ttbox}
For more information on the integers, please see the theories on directory
\index{arithmetic|)}
\section{Datatype definitions}
\label{sec:ZF:datatype}

The \ttindex{datatype} definition package of \ZF\ constructs inductive
datatypes similar to those of \ML. It can also construct coinductive
datatypes (codatatypes), which are non-well-founded structures such as
streams. It defines the set using a fixed-point construction and proves
induction rules, as well as theorems for recursion and case combinators. It
supplies mechanisms for reasoning about freeness. The datatype package can
handle both mutual and indirect recursion.
\subsection{Basics}
\label{subsec:datatype:basics}

A \texttt{datatype} definition has the following form:
\[ \begin{array}{llcl}
\mathtt{datatype} & t@1(A@1,\ldots,A@h) & = &
  constructor^1@1 ~\mid~ \ldots ~\mid~ constructor^1@{k@1} \\
 & & \vdots \\
\mathtt{and} & t@n(A@1,\ldots,A@h) & = &
  constructor^n@1 ~\mid~ \ldots ~\mid~ constructor^n@{k@n}
\end{array}
\]
Here $t@1$, \ldots,~$t@n$ are identifiers and $A@1$, \ldots,~$A@h$ are
variables: the datatype's parameters. Each constructor specification has the
form
\[ C \hbox{\tt~( } \hbox{\tt"} x@1 \hbox{\tt:} T@1 \hbox{\tt"},\;
   \ldots,\;
   \hbox{\tt"} x@m \hbox{\tt:} T@m \hbox{\tt"} \hbox{\tt~)} \]
Here $C$ is the constructor name, and variables $x@1$, \ldots,~$x@m$ are the
constructor arguments, belonging to the sets $T@1$, \ldots, $T@m$,
respectively. Typically each $T@j$ is either a constant set, a datatype
parameter (one of $A@1$, \ldots, $A@h$) or a recursive occurrence of one of
the datatypes, say $t@i(A@1,\ldots,A@h)$. More complex possibilities exist,
but they are much harder to realize. Often, additional information must be
supplied in the form of theorems.
A datatype can occur recursively as the argument of some function~$F$. This
is called a {\em nested} (or \emph{indirect}) occurrence. It is only allowed
if the datatype package is given a theorem asserting that $F$ is monotonic.
If the datatype has indirect occurrences, then Isabelle/ZF does not support
recursive function definitions.
A simple example of a datatype is \texttt{list}, which is built-in, and is
defined by
\begin{ttbox}
datatype "list(A)" = Nil | Cons ("a:A", "l: list(A)")
\end{ttbox}
Note that the datatype operator must be declared as a constant first.
However, the package declares the constructors. Here, \texttt{Nil} gets type
$i$ and \texttt{Cons} gets type $[i,i]\To i$.
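For instance, before the \texttt{datatype} declaration above, the theory
declares the operator as follows (compare the \texttt{tree}/\texttt{forest}
declarations below):
\begin{ttbox}
consts  list :: i=>i
\end{ttbox}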
Trees and forests can be modelled by the mutually recursive datatype
definition:
\begin{ttbox}
consts tree, forest, tree_forest :: i=>i
datatype "tree(A)" = Tcons ("a: A", "f: forest(A)")
and "forest(A)" = Fnil | Fcons ("t: tree(A)", "f: forest(A)")
\end{ttbox}
Here $\texttt{tree}(A)$ is the set of trees over $A$, $\texttt{forest}(A)$ is
the set of forests over $A$, and $\texttt{tree_forest}(A)$ is the union of
the previous two sets. All three operators must be declared first.
The datatype \texttt{term}, which is defined by
\begin{ttbox}
datatype "term(A)" = Apply ("a: A", "l: list(term(A))")
\end{ttbox}
is an example of nested recursion. (The theorem \texttt{list_mono} is proved
in file \texttt{List.ML}, and the \texttt{term} example is developed in
theory \texttt{ex/Term}.)

\subsubsection{Freeness of the constructors}
Constructors satisfy {\em freeness} properties. Constructors are distinct,
for example $\texttt{Nil}\not=\texttt{Cons}(a,l)$, and they are injective, for
example $\texttt{Cons}(a,l)=\texttt{Cons}(a',l') \bimp a=a' \conj l=l'$.
Because the number of freeness properties is quadratic in the number of
constructors, the datatype package does not prove them all in advance.
Instead, it ensures that simplification will prove them dynamically: when the
simplifier encounters a formula asserting the equality of two datatype
constructors, it performs freeness reasoning.

Freeness reasoning can also be done using the classical reasoner, but it is
more complicated. You have to add some safe elimination rules to the
claset. For the \texttt{list} datatype, they are called
\texttt{list.free_SEs}. Occasionally this exposes the underlying
representation of some constructor, which can be rectified using the command
\hbox{\tt fold_tac list.con_defs}.
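For example, a hypothetical session using these rules might run as follows
(assuming theory \texttt{List} is loaded):
\begin{ttbox}
AddSEs list.free_SEs;
Goal "Cons(a,l) = Cons(a',l') ==> a = a'";
by (Blast_tac 1);
\end{ttbox}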
\subsubsection{Structural induction}

The datatype package also provides structural induction rules. For datatypes
without mutual or nested recursion, the rule has the form exemplified by
\texttt{list.induct} in Fig.\ts\ref{zf-list}. For mutually recursive
datatypes, the induction rule is supplied in two forms. Consider datatype
\texttt{TF}. The rule \texttt{tree_forest.induct} performs induction over a
single predicate~\texttt{P}, which is presumed to be defined for both trees
and forests:
\begin{ttbox}
[| x : tree_forest(A);
   !!a f. [| a : A; f : forest(A); P(f) |] ==> P(Tcons(a, f));
   P(Fnil);
   !!f t. [| t : tree(A); P(t); f : forest(A); P(f) |]
          ==> P(Fcons(t, f))
|] ==> P(x)
\end{ttbox}
The rule \texttt{tree_forest.mutual_induct} performs induction over two
distinct predicates, \texttt{P_tree} and \texttt{P_forest}.
\begin{ttbox}
[| !!a f. [| a : A; f : forest(A); P_forest(f) |] ==> P_tree(Tcons(a, f));
   P_forest(Fnil);
   !!f t. [| t : tree(A); P_tree(t); f : forest(A); P_forest(f) |]
          ==> P_forest(Fcons(t, f))
|] ==> (ALL za. za : tree(A) --> P_tree(za)) &
       (ALL za. za : forest(A) --> P_forest(za))
\end{ttbox}
For datatypes with nested recursion, such as the \texttt{term} example from
above, things are a bit more complicated. The rule \texttt{term.induct}
refers to the monotonic operator, \texttt{list}:
\begin{ttbox}
!!a l. [| a: A; l: list(Collect(term(A), P)) |] ==> P(Apply(a, l))
\end{ttbox}
The file \texttt{ex/Term.ML} derives two higher-level induction rules, one of
which is particularly useful for proving equations:
\begin{ttbox}
!!x zs. [| x : A; zs : list(term(A)); map(f, zs) = map(g, zs) |]
        ==> f(Apply(x, zs)) = g(Apply(x, zs))
\end{ttbox}
How this can be generalized to other nested datatypes is a matter for future
research.
\subsubsection{The \texttt{case} operator}

The package defines an operator for performing case analysis over the
datatype. For \texttt{list}, it is called \texttt{list_case} and satisfies
the equations
\begin{ttbox}
list_case(f_Nil, f_Cons, [])         = f_Nil
list_case(f_Nil, f_Cons, Cons(a, l)) = f_Cons(a, l)
\end{ttbox}
Here \texttt{f_Nil} is the value to return if the argument is \texttt{Nil} and
\texttt{f_Cons} is a function that computes the value to return if the
argument has the form $\texttt{Cons}(a,l)$. The function can be expressed as
an abstraction, over patterns if desired (\S\ref{sec:pairs}).

For mutually recursive datatypes, there is a single \texttt{case} operator.
In the tree/forest example, the constant \texttt{tree_forest_case} handles all
of the constructors of the two datatypes.
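As a sketch, a hypothetical test for the empty list could be defined in terms
of \texttt{list_case} (the name \texttt{is_Nil} is an illustration, not part
of the library):
\begin{ttbox}
constdefs
  is_Nil :: i=>i
  "is_Nil(xs) == list_case(1, \%a l. 0, xs)"
\end{ttbox}
By the equations above, \texttt{is_Nil(Nil) = 1} and
\texttt{is_Nil(Cons(a,l)) = 0}.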
\subsection{Defining datatypes}

The theory syntax for datatype definitions is shown in
Fig.~\ref{datatype-grammar}. In order to be well-formed, a datatype
definition has to obey the rules stated in the previous section. As a result
the theory is extended with the new types, the constructors, and the theorems
listed in the previous section. The quotation marks are necessary because
they enclose general Isabelle formul\ae.

\begin{figure}
datatype : ( 'datatype' | 'codatatype' ) datadecls;

datadecls: ( '"' id arglist '"' '=' (constructor + '|') ) + 'and'

constructor : name ( () | consargs ) ( () | ( '(' mixfix ')' ) )

consargs : '(' ('"' var ':' term '"' + ',') ')'
\caption{Syntax of datatype declarations}
\label{datatype-grammar}
\end{figure}
Codatatypes are declared like datatypes and are identical to them in every
respect except that they have a coinduction rule instead of an induction rule.
Note that while an induction rule has the effect of limiting the values
contained in the set, a coinduction rule gives a way of constructing new
elements of the set.

Most of the theorems about datatypes become part of the default simpset. You
never need to see them again because the simplifier applies them
automatically. Induction or exhaustion is usually invoked by hand,
via these special-purpose tactics:
\begin{ttdescription}
\item[\ttindexbold{induct_tac} {\tt"}$x${\tt"} $i$] applies structural
  induction on variable $x$ to subgoal $i$, provided the type of $x$ is a
  datatype. The induction variable should not occur among the other
  assumptions of the subgoal.
\end{ttdescription}
In some cases, induction is overkill and a case distinction over all
constructors of the datatype suffices.
\begin{ttdescription}
\item[\ttindexbold{exhaust_tac} {\tt"}$x${\tt"} $i$]
  performs an exhaustive case analysis for the variable~$x$.
\end{ttdescription}

Both tactics can only be applied to a variable whose typing is given in
some assumption, for example the assumption \texttt{x:\ list(A)}. The tactics
also work for the natural numbers (\texttt{nat}) and disjoint sums, although
these sets were not defined using the datatype package. (Disjoint sums are
not recursive, so only \texttt{exhaust_tac} is available.)
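For example, a hypothetical case analysis on a list variable might proceed as
follows:
\begin{ttbox}
Goal "xs: list(A) ==> xs = Nil | (EX a l. xs = Cons(a,l))";
by (exhaust_tac "xs" 1);
by Auto_tac;
\end{ttbox}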
Here are some more details for the technically minded. Processing the
theory file produces an \ML\ structure which, in addition to the usual
components, contains a structure named $t$ for each datatype $t$ defined in
the file. Each structure $t$ contains the following elements:
\begin{ttbox}
val intrs         : thm list  \textrm{the introduction rules}
val elim          : thm       \textrm{the elimination (case analysis) rule}
val induct        : thm       \textrm{the standard induction rule}
val mutual_induct : thm       \textrm{the mutual induction rule, or \texttt{True}}
val case_eqns     : thm list  \textrm{equations for the case operator}
val recursor_eqns : thm list  \textrm{equations for the recursor}
val con_defs      : thm list  \textrm{definitions of the case operator and constructors}
val free_iffs     : thm list  \textrm{logical equivalences for proving freeness}
val free_SEs      : thm list  \textrm{elimination rules for proving freeness}
val mk_free       : string -> thm  \textrm{a function for proving freeness theorems}
val mk_cases      : string -> thm  \textrm{case analysis, see below}
val defs          : thm list  \textrm{definitions of operators}
val bnd_mono      : thm list  \textrm{monotonicity property}
val dom_subset    : thm list  \textrm{inclusion in `bounding set'}
\end{ttbox}
Furthermore there is the theorem $C$\texttt{_I} for every constructor~$C$; for
example, the \texttt{list} datatype's introduction rules are bound to the
identifiers \texttt{Nil_I} and \texttt{Cons_I}.

For a codatatype, the component \texttt{coinduct} is the coinduction rule,
replacing the \texttt{induct} component.

See the theories \texttt{ex/Ntree} and \texttt{ex/Brouwer} for examples of
infinitely branching datatypes. See theory \texttt{ex/LList} for an example
of a codatatype. Some of these theories illustrate the use of additional,
undocumented features of the datatype package. Datatype definitions are
reduced to inductive definitions, and the advanced features should be
understood in that light.
\subsection{Examples}

\subsubsection{The datatype of binary trees}

Let us define the set $\texttt{bt}(A)$ of binary trees over~$A$. The theory
must contain these lines:
\begin{ttbox}
consts   bt :: i=>i
datatype "bt(A)" = Lf | Br ("a: A", "t1: bt(A)", "t2: bt(A)")
\end{ttbox}
After loading the theory, we can prove, for example, that no tree equals its
left branch. To ease the induction, we state the goal using quantifiers.
\begin{ttbox}
Goal "l : bt(A) ==> ALL x r. Br(x,l,r) ~= l";
{\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
{\out  1. l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
\end{ttbox}
This can be proved by the structural induction tactic:
\begin{ttbox}
by (induct_tac "l" 1);
{\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
{\out  1. ALL x r. Br(x, Lf, r) ~= Lf}
{\out  2. !!a t1 t2.}
{\out       [| a : A; t1 : bt(A);}
{\out          ALL x r. Br(x, t1, r) ~= t1; t2 : bt(A);}
{\out          ALL x r. Br(x, t2, r) ~= t2 |]}
{\out       ==> ALL x r. Br(x, Br(a, t1, t2), r) ~= Br(a, t1, t2)}
\end{ttbox}
Both subgoals are proved using \texttt{Auto_tac}, which performs the necessary
freeness reasoning.
\begin{ttbox}
by Auto_tac;
{\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
\end{ttbox}
To remove the quantifiers from the induction formula, we save the theorem using
\ttindex{qed_spec_mp}.
\begin{ttbox}
qed_spec_mp "Br_neq_left";
{\out val Br_neq_left = "?l : bt(?A) ==> Br(?x, ?l, ?r) ~= ?l" : thm}
\end{ttbox}

When there are only a few constructors, we might prefer to prove the freeness
theorems for each constructor. This is trivial, using the function
\texttt{bt.mk_free}:
\begin{ttbox}
bt.mk_free "Br(a,l,r)=Br(a',l',r') <-> a=a' & l=l' & r=r'";
{\out "Br(?a, ?l, ?r) = Br(?a', ?l', ?r') <->}
{\out  ?a = ?a' & ?l = ?l' & ?r = ?r'" : thm}
\end{ttbox}
The purpose of \ttindex{mk_cases} is to generate instances of the elimination
(case analysis) rule that have been simplified using freeness reasoning. For
example, this instance of the elimination rule propagates type-checking
information from the premise $\texttt{Br}(a,l,r)\in\texttt{bt}(A)$:
\begin{ttbox}
val BrE = bt.mk_cases "Br(a,l,r) : bt(A)";
{\out "[| Br(?a, ?l, ?r) : bt(?A);}
{\out     [| ?a : ?A; ?l : bt(?A); ?r : bt(?A) |] ==> ?Q |]}
{\out  ==> ?Q" : thm}
\end{ttbox}
\subsubsection{Mixfix syntax in datatypes}

Mixfix syntax is sometimes convenient. The theory \texttt{ex/PropLog} makes a
deep embedding of propositional logic:
\begin{ttbox}
datatype "prop" = Fls
                | Var ("n: nat")                ("#_" [100] 100)
                | "=>" ("p: prop", "q: prop")   (infixr 90)
\end{ttbox}
The second constructor has a special $\#n$ syntax, while the third constructor
is an infixed arrow.
\subsubsection{A giant enumeration type}

This example shows a datatype that consists of 60 constructors:
\begin{ttbox}
datatype
  "enum" = C00 | C01 | C02 | C03 | C04 | C05 | C06 | C07 | C08 | C09
         | C10 | C11 | C12 | C13 | C14 | C15 | C16 | C17 | C18 | C19
         | C20 | C21 | C22 | C23 | C24 | C25 | C26 | C27 | C28 | C29
         | C30 | C31 | C32 | C33 | C34 | C35 | C36 | C37 | C38 | C39
         | C40 | C41 | C42 | C43 | C44 | C45 | C46 | C47 | C48 | C49
         | C50 | C51 | C52 | C53 | C54 | C55 | C56 | C57 | C58 | C59
\end{ttbox}
The datatype package scales well. Even though all properties are proved
rather than assumed, full processing of this definition takes under 15 seconds
(on a 300\,MHz Pentium). The constructors have a balanced representation,
essentially binary notation, so freeness properties can be proved fast.
You need not derive inequalities such as $\texttt{C00}\not=\texttt{C01}$
explicitly; the simplifier will dispose of them automatically.
\subsection{Recursive function definitions}\label{sec:ZF:recursive}
\index{recursive functions|see{recursion}}
\index{recursion!primitive|(}

Datatypes come with a uniform way of defining functions, {\bf primitive
recursion}. Such definitions rely on the recursion operator defined by the
datatype package. Isabelle proves the desired recursion equations as
theorems.

In principle, one could introduce primitive recursive functions by asserting
their reduction rules as new axioms. Here is a dangerous way of defining the
append function for lists:
\begin{ttbox}\slshape
consts "\at" :: [i,i]=>i (infixr 60)
rules
  app_Nil   "[] \at ys = ys"
  app_Cons  "(Cons(a,l)) \at ys = Cons(a, l \at ys)"
\end{ttbox}
Asserting axioms brings the danger of accidentally asserting nonsense. It
should be avoided at all costs!

The \ttindex{primrec} declaration is a safe means of defining primitive
recursive functions on datatypes:
\begin{ttbox}
consts "\at" :: [i,i]=>i (infixr 60)
primrec
  "[] \at ys = ys"
  "(Cons(a,l)) \at ys = Cons(a, l \at ys)"
\end{ttbox}
Isabelle will now check that the two rules do indeed form a primitive
recursive definition. For example, a declaration whose right-hand side
mentions a variable that does not occur on the left
is rejected with the error message ``\texttt{Extra variables on rhs}''.
\subsubsection{Syntax of recursive definitions}

The general form of a primitive recursive definition is
\begin{ttbox}
primrec
    {\it reduction rules}
\end{ttbox}
where \textit{reduction rules} specify one or more equations of the form
\[ f \, x@1 \, \dots \, x@m \, (C \, y@1 \, \dots \, y@k) \, z@1 \,
\dots \, z@n = r \] such that $C$ is a constructor of the datatype, $r$
contains only the free variables on the left-hand side, and all recursive
calls in $r$ are of the form $f \, \dots \, y@i \, \dots$ for some $i$.
There must be at most one reduction rule for each constructor. The order is
immaterial. For missing constructors, the function is defined to return zero.
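The missing-constructor convention can be sketched with a hypothetical head
function (this declaration is an illustration, not part of the built-in
\texttt{List} theory):
\begin{ttbox}
consts  hd :: i=>i
primrec
  "hd(Cons(a,l)) = a"
\end{ttbox}
Since no equation is given for \texttt{Nil}, the value \texttt{hd(Nil)} is
defined to be \texttt{0}.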
All reduction rules are added to the default simpset.
If you would like to refer to some rule by name, then you must prefix
the rule with an identifier. These identifiers, like those in the
\texttt{rules} section of a theory, will be visible at the \ML\ level.
The reduction rules for {\tt\at} become part of the default simpset, which
leads to short proof scripts:
\begin{ttbox}\underscoreon
Goal "xs: list(A) ==> (xs @ ys) @ zs = xs @ (ys @ zs)";
by (induct\_tac "xs" 1);
by (ALLGOALS Asm\_simp\_tac);
\end{ttbox}

You can even use the \texttt{primrec} form with non-recursive datatypes and
with codatatypes. Recursion is not allowed, but it provides a convenient
syntax for defining functions by cases.
\subsubsection{Example: varying arguments}

All arguments, other than the recursive one, must be the same in each equation
and in each recursive call. To get around this restriction, use explicit
$\lambda$-abstraction and function application. Here is an example, drawn
from the theory \texttt{Resid/Substitution}. The type of redexes is declared
as follows:
\begin{ttbox}
  "redexes" = Var ("n: nat")
            | Fun ("t: redexes")
            | App ("b: bool", "f: redexes", "a: redexes")
\end{ttbox}
The function \texttt{lift} takes a second argument, $k$, which varies in
recursive calls:
\begin{ttbox}
  "lift(Var(i)) = (lam k:nat. if i<k then Var(i) else Var(succ(i)))"
  "lift(Fun(t)) = (lam k:nat. Fun(lift(t) ` succ(k)))"
  "lift(App(b,f,a)) = (lam k:nat. App(b, lift(f)`k, lift(a)`k))"
\end{ttbox}
Now \texttt{lift(r)`k} satisfies the required recursion equations.

\index{recursion!primitive|)}
\section{Inductive and coinductive definitions}
\index{*inductive|(}
\index{*coinductive|(}

An {\bf inductive definition} specifies the least set~$R$ closed under given
rules. (Applying a rule to elements of~$R$ yields a result within~$R$.) For
example, a structural operational semantics is an inductive definition of an
evaluation relation. Dually, a {\bf coinductive definition} specifies the
greatest set~$R$ consistent with given rules. (Every element of~$R$ can be
seen as arising by applying a rule to elements of~$R$.) An important example
is using bisimulation relations to formalise equivalence of processes and
infinite data structures.
A theory file may contain any number of inductive and coinductive
definitions. They may be intermixed with other declarations; in
particular, the (co)inductive sets {\bf must} be declared separately as
constants, and may have mixfix syntax or be subject to syntax translations.

Each (co)inductive definition adds definitions to the theory and also
proves some theorems. Each definition creates an \ML\ structure, which is a
substructure of the main theory structure.
This package is described in detail in a separate paper,%
\footnote{It appeared in CADE~\cite{paulson-CADE}; a longer version is
  distributed with Isabelle as \emph{A Fixedpoint Approach to
  (Co)Inductive and (Co)Datatype Definitions}.} %
which you might refer to for background information.
\subsection{The syntax of a (co)inductive definition}
An inductive definition has the form
\begin{ttbox}
inductive
  domains    {\it domain declarations}
  intrs      {\it introduction rules}
  monos      {\it monotonicity theorems}
  con_defs   {\it constructor definitions}
  type_intrs {\it introduction rules for type-checking}
  type_elims {\it elimination rules for type-checking}
\end{ttbox}
A coinductive definition is identical, but starts with the keyword
{\tt co\-inductive}.

The {\tt monos}, {\tt con\_defs}, {\tt type\_intrs} and {\tt type\_elims}
sections are optional. If present, each is specified either as a list of
identifiers or as a string. If the latter, then the string must be a valid
\textsc{ml} expression of type {\tt thm list}. The string is simply inserted
into the {\tt _thy.ML} file; if it is ill-formed, it will trigger \textsc{ml}
error messages. You can then inspect the file in the temporary directory.
\begin{description}
\item[\it domain declarations] are items of the form
  {\it string\/}~{\tt <=}~{\it string}, associating each recursive set with
  its domain. (The domain is some existing set that is large enough to
  hold the new set being defined.)

\item[\it introduction rules] specify one or more introduction rules in
  the form {\it ident\/}~{\it string}, where the identifier gives the name of
  the rule in the result structure.

\item[\it monotonicity theorems] are required for each operator applied to
  a recursive set in the introduction rules. There \textbf{must} be a theorem
  of the form $A\subseteq B\Imp M(A)\subseteq M(B)$, for each premise
  $t\in M(R_i)$ in an introduction rule!

\item[\it constructor definitions] contain definitions of constants
  appearing in the introduction rules. The (co)datatype package supplies
  the constructors' definitions here. Most (co)inductive definitions omit
  this section; one exception is the primitive recursive functions example;
  see theory \texttt{ex/Primrec}.

\item[\it type\_intrs] consists of introduction rules for type-checking the
  definition: for demonstrating that the new set is included in its domain.
  (The proof uses depth-first search.)

\item[\it type\_elims] consists of elimination rules for type-checking the
  definition. They are presumed to be safe and are applied as often as
  possible prior to the {\tt type\_intrs} search.
\end{description}
The package has a few restrictions:
\begin{itemize}
\item The theory must separately declare the recursive sets as
  constants.

\item The names of the recursive sets must be identifiers, not infix
  operators.

\item Side-conditions must not be conjunctions. However, an introduction rule
  may contain any number of side-conditions.

\item Side-conditions of the form $x=t$, where the variable~$x$ does not
  occur in~$t$, will be substituted through the rule \verb|mutual_induct|.
\end{itemize}
\subsection{Example of an inductive definition}

Two declarations, included in a theory file, define the finite powerset
operator. First we declare the constant~\texttt{Fin}. Then we declare it
inductively, with two introduction rules:
\begin{ttbox}
consts Fin :: i=>i
inductive
  domains "Fin(A)" <= "Pow(A)"
  intrs
    emptyI  "0 : Fin(A)"
    consI   "[| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)"
  type_intrs empty_subsetI, cons_subsetI, PowI
  type_elims "[make_elim PowD]"
\end{ttbox}
The resulting theory structure contains a substructure, called~\texttt{Fin}.
It contains the $\texttt{Fin}(A)$ introduction rules as the list
\texttt{Fin.intrs}, and also individually as \texttt{Fin.emptyI} and
\texttt{Fin.consI}. The induction rule is \texttt{Fin.induct}.
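The induction rule can be applied in the usual way; for example, a proof that
finite sets are subsets of~$A$ might begin like this (a hypothetical script):
\begin{ttbox}
Goal "b : Fin(A) ==> b <= A";
by (etac Fin.induct 1);
\end{ttbox}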
The chief problem with making (co)inductive definitions involves type-checking
the rules. Sometimes, additional theorems need to be supplied under
\texttt{type_intrs} or \texttt{type_elims}. If the package fails when trying
to prove your introduction rules, then set the flag \ttindexbold{trace_induct}
to \texttt{true} and try again. (See the manual \emph{A Fixedpoint Approach
\ldots} for more discussion of type-checking.)

In the example above, $\texttt{Pow}(A)$ is given as the domain of
$\texttt{Fin}(A)$, for obviously every finite subset of~$A$ is a subset
of~$A$. However, the inductive definition package can only prove this
inclusion if it is given suitable type-checking rules.
Here is the output that results (with the flag set) when the
\texttt{type_intrs} and \texttt{type_elims} are omitted from the inductive
definition above:
\begin{ttbox}
Inductive definition Finite.Fin
\%X. {z: Pow(A) . z = 0 | (EX a b. z = cons(a, b) & a : A & b : X)})
Proving monotonicity...

Proving the introduction rules...
The type-checking subgoal:
 1. 0 : Pow(A)
The subgoal after monos, type_elims:
 1. 0 : Pow(A)
*** prove_goal: tactic failed
\end{ttbox}
We see the need to supply theorems to let the package prove
$\emptyset\in\texttt{Pow}(A)$. Restoring the \texttt{type_intrs} but not the
\texttt{type_elims}, we again get an error message:
\begin{ttbox}
The type-checking subgoal:
 1. 0 : Pow(A)
The subgoal after monos, type_elims:
 1. 0 : Pow(A)

The type-checking subgoal:
 1. [| a : A; b : Fin(A) |] ==> cons(a, b) : Pow(A)
The subgoal after monos, type_elims:
 1. [| a : A; b : Pow(A) |] ==> cons(a, b) : Pow(A)
*** prove_goal: tactic failed
\end{ttbox}
The first rule has been type-checked, but the second one has failed. The
simplest solution to such problems is to prove the failed subgoal separately
and to supply it under \texttt{type_intrs}. The solution actually used is
to supply, under \texttt{type_elims}, a rule that changes
$b\in\texttt{Pow}(A)$ to $b\subseteq A$; together with \texttt{cons_subsetI}
and \texttt{PowI}, it is enough to complete the type-checking.
\subsection{Further examples}

An inductive definition may involve arbitrary monotonic operators. Here is a
standard example: the accessible part of a relation. Note the use
of~\texttt{Pow} in the introduction rule and the corresponding mention of the
rule \verb|Pow_mono| in the \texttt{monos} list. If the desired rule has a
universally quantified premise, usually the effect can be obtained using
\texttt{Pow}:
\begin{ttbox}
consts acc :: i=>i
inductive
  domains "acc(r)" <= "field(r)"
  intrs
    vimage "[| r-``{a}: Pow(acc(r)); a: field(r) |] ==> a: acc(r)"
  monos Pow_mono
\end{ttbox}
Finally, here is a coinductive definition. It captures (as a bisimulation)
the notion of equality on lazy lists, which are first defined as a codatatype:
\begin{ttbox}
consts llist :: i=>i
codatatype "llist(A)" = LNil | LCons ("a: A", "l: llist(A)")
\end{ttbox}

\begin{ttbox}
consts lleq :: i=>i
coinductive
  domains "lleq(A)" <= "llist(A) * llist(A)"
  intrs
    LNil  "<LNil, LNil> : lleq(A)"
    LCons "[| a:A; <l,l'>: lleq(A) |]
           ==> <LCons(a,l), LCons(a,l')>: lleq(A)"
  type_intrs "llist.intrs"
\end{ttbox}
This use of \texttt{type_intrs} is typical: the relation concerns the
codatatype \texttt{llist}, so naturally the introduction rules for that
codatatype will be required for type-checking the rules.
The Isabelle distribution contains many other inductive definitions. Simple
examples are collected in subdirectory \texttt{ZF/ex}. The directory
\texttt{Coind} and the theory \texttt{ZF/ex/LList} contain coinductive
definitions. Larger examples may be found in other subdirectories of
\texttt{ZF}, such as \texttt{IMP} and \texttt{Resid}.
\subsection{The result structure}

Each (co)inductive set defined in a theory file generates an \ML\ substructure
having the same name. The substructure contains the following elements:
\begin{ttbox}
val intrs         : thm list  \textrm{the introduction rules}
val elim          : thm       \textrm{the elimination (case analysis) rule}
val mk_cases      : string -> thm  \textrm{case analysis, see below}
val induct        : thm       \textrm{the standard induction rule}
val mutual_induct : thm       \textrm{the mutual induction rule, or \texttt{True}}
val defs          : thm list  \textrm{definitions of operators}
val bnd_mono      : thm list  \textrm{monotonicity property}
val dom_subset    : thm list  \textrm{inclusion in `bounding set'}
\end{ttbox}
Furthermore there is the theorem $C$\texttt{_I} for every constructor~$C$; for
example, the \texttt{list} datatype's introduction rules are bound to the
identifiers \texttt{Nil_I} and \texttt{Cons_I}.

For a codatatype, the component \texttt{coinduct} is the coinduction rule,
replacing the \texttt{induct} component.
Recall that \ttindex{mk_cases} generates simplified instances of the
elimination (case analysis) rule. It is as useful for inductive definitions
as it is for datatypes. There are many examples in the theory
\texttt{ex/Comb}, which is discussed at length
elsewhere~\cite{paulson-generic}. The theory first defines the datatype
\texttt{comb} of combinators:
\begin{ttbox}
datatype "comb" = K
                | S
                | "#" ("p: comb", "q: comb") (infixl 90)
\end{ttbox}
The theory goes on to define contraction and parallel contraction
inductively. Then the file \texttt{ex/Comb.ML} defines special cases of
contraction using \texttt{mk_cases}:
\begin{ttbox}
val K_contractE = contract.mk_cases "K -1-> r";
{\out val K_contractE = "K -1-> ?r ==> ?Q" : thm}
\end{ttbox}
We can read this as saying that the combinator \texttt{K} cannot reduce to
anything. Similar elimination rules for \texttt{S} and application are also
generated and are supplied to the classical reasoner. Note that
\texttt{comb.con_defs} is given to \texttt{mk_cases} to allow freeness
reasoning on datatype \texttt{comb}.

\index{*coinductive|)} \index{*inductive|)}
\section{The outer reaches of set theory}

The constructions of the natural numbers and lists use a suite of
operators for handling recursive function definitions. I have described
the developments in detail elsewhere~\cite{paulson-set-II}. Here is a brief
summary:
\begin{itemize}
\item Theory \texttt{Trancl} defines the transitive closure of a relation
  (as a least fixedpoint).

\item Theory \texttt{WF} proves the Well-Founded Recursion Theorem, using an
  elegant approach of Tobias Nipkow. This theorem permits general
  recursive definitions within set theory.

\item Theory \texttt{Ord} defines the notions of transitive set and ordinal
  number. It derives transfinite induction. A key definition is {\bf
  less than}: $i<j$ if and only if $i$ and $j$ are both ordinals and
  $i\in j$. As a special case, it includes less than on the natural
  numbers.

\item Theory \texttt{Epsilon} derives $\varepsilon$-induction and
  $\varepsilon$-recursion, which are generalisations of transfinite
  induction and recursion. It also defines \cdx{rank}$(x)$, which
  is the least ordinal $\alpha$ such that $x$ is constructed at
  stage $\alpha$ of the cumulative hierarchy (thus $x\in
  V\sb{\alpha+1}$).
\end{itemize}
Other important theories lead to a theory of cardinal numbers. They have
not yet been written up anywhere. Here is a summary:
\begin{itemize}
\item Theory \texttt{Rel} defines the basic properties of relations, such as
  (ir)reflexivity, (a)symmetry, and transitivity.

\item Theory \texttt{EquivClass} develops a theory of equivalence
  classes, not using the Axiom of Choice.

\item Theory \texttt{Order} defines partial orderings, total orderings and
  wellorderings.

\item Theory \texttt{OrderArith} defines orderings on sum and product sets.
  These can be used to define ordinal arithmetic and have applications to
  cardinal arithmetic.

\item Theory \texttt{OrderType} defines order types. Every wellordering is
  equivalent to a unique ordinal, which is its order type.

\item Theory \texttt{Cardinal} defines equipollence and cardinal numbers.

\item Theory \texttt{CardinalArith} defines cardinal addition and
  multiplication, and proves their elementary laws. It proves that there
  is no greatest cardinal. It also proves a deep result, namely
  $\kappa\otimes\kappa=\kappa$ for every infinite cardinal~$\kappa$; see
  Kunen~\cite[page 29]{kunen80}. None of these results assume the Axiom of
  Choice, which complicates their proofs considerably.
\end{itemize}
The following developments involve the Axiom of Choice (AC):
\begin{itemize}
\item Theory \texttt{AC} asserts the Axiom of Choice and proves some simple
  equivalent forms.

\item Theory \texttt{Zorn} proves Hausdorff's Maximal Principle, Zorn's Lemma
  and the Wellordering Theorem, following Abrial and
  Laffitte~\cite{abrial93}.

\item Theory \verb|Cardinal_AC| uses AC to prove simplified theorems about
  the cardinals. It also proves a theorem needed to justify
  infinitely branching datatype declarations: if $\kappa$ is an infinite
  cardinal and $|X(\alpha)| \le \kappa$ for all $\alpha<\kappa$ then
  $|\union\sb{\alpha<\kappa} X(\alpha)| \le \kappa$.

\item Theory \texttt{InfDatatype} proves theorems to justify infinitely
  branching datatypes. Arbitrary index sets are allowed, provided their
  cardinalities have an upper bound. The theory also justifies some
  unusual cases of finite branching, involving the finite powerset operator
  and the finite function space operator.
\end{itemize}
\section{The examples directories}
Directory \texttt{ZF/IMP} contains a mechanised version of a semantic
equivalence proof taken from Winskel~\cite{winskel93}. It formalises the
denotational and operational semantics of a simple while-language, then
proves the two equivalent. It contains several datatype and inductive
definitions, and demonstrates their use.
The directory \texttt{ZF/ex} contains further developments in {\ZF} set
theory. Here is an overview; see the files themselves for more details. I
describe much of this material in other
publications~\cite{paulson-set-I,paulson-set-II,paulson-CADE}.
\begin{itemize}
\item File \texttt{misc.ML} contains miscellaneous examples such as
  Cantor's Theorem, the Schr\"oder-Bernstein Theorem and the `Composition
  of homomorphisms' challenge~\cite{boyer86}.

\item Theory \texttt{Ramsey} proves the finite exponent 2 version of
  Ramsey's Theorem, following Basin and Kaufmann's
  presentation~\cite{basin91}.

\item Theory \texttt{Integ} develops a theory of the integers as
  equivalence classes of pairs of natural numbers.

\item Theory \texttt{Primrec} develops some computation theory. It
  inductively defines the set of primitive recursive functions and presents a
  proof that Ackermann's function is not primitive recursive.

\item Theory \texttt{Primes} defines the Greatest Common Divisor of two
  natural numbers and the ``divides'' relation.

\item Theory \texttt{Bin} defines a datatype for two's complement binary
  integers, then proves rewrite rules to perform binary arithmetic. For
  instance, $1359\times {-}2468 = {-}3354012$ takes under 14 seconds.

\item Theory \texttt{BT} defines the recursive data structure ${\tt
  bt}(A)$, labelled binary trees.

\item Theory \texttt{Term} defines a recursive data structure for terms
  and term lists. These are simply finite branching trees.

\item Theory \texttt{TF} defines primitives for solving mutually
  recursive equations over sets. It constructs sets of trees and forests
  as an example, including induction and recursion rules that handle the
  mutual recursion.

\item Theory \texttt{Prop} proves soundness and completeness of
  propositional logic~\cite{paulson-set-II}. This illustrates datatype
  definitions, inductive definitions, structural induction and rule
  induction.

\item Theory \texttt{ListN} inductively defines the lists of $n$
  elements~\cite{paulin-tlca}.

\item Theory \texttt{Acc} inductively defines the accessible part of a
  relation~\cite{paulin-tlca}.

\item Theory \texttt{Comb} defines the datatype of combinators and
  inductively defines contraction and parallel contraction. It goes on to
  prove the Church-Rosser Theorem. This case study follows Camilleri and
  Melham~\cite{camilleri92}.

\item Theory \texttt{LList} defines lazy lists and a coinduction
  principle for proving equations between them.
\end{itemize}
\section{A proof about powersets}\label{sec:ZF-pow-example}
To demonstrate high-level reasoning about subsets, let us prove the
equation ${{\tt Pow}(A)\cap {\tt Pow}(B)}= {\tt Pow}(A\cap B)$. Compared
with first-order logic, set theory involves a maze of rules, and theorems
have many different proofs. Attempting other proofs of the theorem might
be instructive. This proof exploits the lattice properties of
intersection. It also uses the monotonicity of the powerset operation,
from \texttt{ZF/mono.ML}:
\begin{ttbox}
\tdx{Pow_mono}    A<=B ==> Pow(A) <= Pow(B)
\end{ttbox}
We enter the goal and make the first step, which breaks the equation into
two inclusions by extensionality:\index{*equalityI theorem}
\begin{ttbox}
Goal "Pow(A Int B) = Pow(A) Int Pow(B)";
{\out Level 0}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A Int B) = Pow(A) Int Pow(B)}
by (resolve_tac [equalityI] 1);
{\out Level 1}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A Int B) <= Pow(A) Int Pow(B)}
{\out 2. Pow(A) Int Pow(B) <= Pow(A Int B)}
\end{ttbox}
Both inclusions could be tackled straightforwardly using \texttt{subsetI}.
A shorter proof results from noting that intersection forms the greatest
lower bound:\index{*Int_greatest theorem}
\begin{ttbox}
by (resolve_tac [Int_greatest] 1);
{\out Level 2}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A Int B) <= Pow(A)}
{\out 2. Pow(A Int B) <= Pow(B)}
{\out 3. Pow(A) Int Pow(B) <= Pow(A Int B)}
\end{ttbox}
Subgoal~1 follows by applying the monotonicity of \texttt{Pow} to $A\int
B\subseteq A$; subgoal~2 follows similarly:
\index{*Int_lower1 theorem}\index{*Int_lower2 theorem}
\begin{ttbox}
by (resolve_tac [Int_lower1 RS Pow_mono] 1);
{\out Level 3}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A Int B) <= Pow(B)}
{\out 2. Pow(A) Int Pow(B) <= Pow(A Int B)}
by (resolve_tac [Int_lower2 RS Pow_mono] 1);
{\out Level 4}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A) Int Pow(B) <= Pow(A Int B)}
\end{ttbox}
We are left with the opposite inclusion, which we tackle in the
straightforward way:\index{*subsetI theorem}
\begin{ttbox}
by (resolve_tac [subsetI] 1);
{\out Level 5}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. !!x. x : Pow(A) Int Pow(B) ==> x : Pow(A Int B)}
\end{ttbox}
The subgoal is to show $x\in {\tt Pow}(A\cap B)$ assuming $x\in{\tt
Pow}(A)\cap {\tt Pow}(B)$; eliminating this assumption produces two
subgoals. The rule \tdx{IntE} treats the intersection like a conjunction
instead of unfolding its definition.
\begin{ttbox}
by (eresolve_tac [IntE] 1);
{\out Level 6}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. !!x. [| x : Pow(A); x : Pow(B) |] ==> x : Pow(A Int B)}
\end{ttbox}
The next step replaces the \texttt{Pow} by the subset
relation~($\subseteq$).\index{*PowI theorem}
\begin{ttbox}
by (resolve_tac [PowI] 1);
{\out Level 7}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. !!x. [| x : Pow(A); x : Pow(B) |] ==> x <= A Int B}
\end{ttbox}
We perform the same replacement in the assumptions. This is a good
demonstration of the tactic \ttindex{dresolve_tac}:\index{*PowD theorem}
\begin{ttbox}
by (REPEAT (dresolve_tac [PowD] 1));
{\out Level 8}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. !!x. [| x <= A; x <= B |] ==> x <= A Int B}
\end{ttbox}
The assumptions are that $x$ is a lower bound of both $A$ and~$B$, but
$A\int B$ is the greatest lower bound:\index{*Int_greatest theorem}
\begin{ttbox}
by (resolve_tac [Int_greatest] 1);
{\out Level 9}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. !!x. [| x <= A; x <= B |] ==> x <= A}
{\out 2. !!x. [| x <= A; x <= B |] ==> x <= B}
\end{ttbox}
To conclude the proof, we clear up the trivial subgoals:
\begin{ttbox}
by (REPEAT (assume_tac 1));
{\out Level 10}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out No subgoals!}
\end{ttbox}
We could have performed this proof in one step by applying
\ttindex{Blast_tac}. Let us
go back to the start:
\begin{ttbox}
choplev 0;
{\out Level 0}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out 1. Pow(A Int B) = Pow(A) Int Pow(B)}
by (Blast_tac 1);
{\out Level 1}
{\out Pow(A Int B) = Pow(A) Int Pow(B)}
{\out No subgoals!}
\end{ttbox}
Past researchers regarded this as a difficult proof, as indeed it is if all
the symbols are replaced by their definitions.
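Readers without an Isabelle session to hand can spot-check the identity on
small finite sets. The following Python sketch is purely illustrative and
lies outside Isabelle entirely; the \texttt{powerset} helper is an ad-hoc
definition for this check, not part of any Isabelle tooling.

```python
# Illustrative check: Pow(A Int B) = Pow(A) Int Pow(B) on finite sets.
from itertools import chain, combinations

def powerset(s):
    """Return all subsets of the finite set s, as a set of frozensets."""
    elems = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(
                combinations(elems, r) for r in range(len(elems) + 1))}

A = {1, 2, 3}
B = {2, 3, 4}
# Both sides are the set of all subsets of A Int B = {2, 3}.
assert powerset(A & B) == powerset(A) & powerset(B)
```

Of course, such a finite check proves nothing; the Isabelle proof above
establishes the equation for arbitrary sets.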
\section{Monotonicity of the union operator}
For another example, we prove that general union is monotonic:
${C\subseteq D}$ implies $\bigcup(C)\subseteq \bigcup(D)$. To begin, we
tackle the inclusion using \tdx{subsetI}:
\begin{ttbox}
Goal "C<=D ==> Union(C) <= Union(D)";
{\out Level 0}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. C <= D ==> Union(C) <= Union(D)}
by (resolve_tac [subsetI] 1);
{\out Level 1}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x. [| C <= D; x : Union(C) |] ==> x : Union(D)}
\end{ttbox}
Big union is like an existential quantifier --- the occurrence in the
assumptions must be eliminated early, since it creates parameters.
\index{*UnionE theorem}
\begin{ttbox}
by (eresolve_tac [UnionE] 1);
{\out Level 2}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| C <= D; x : B; B : C |] ==> x : Union(D)}
\end{ttbox}
Now we may apply \tdx{UnionI}, which creates an unknown involving the
parameters. To show $x\in \bigcup(D)$ it suffices to show that $x$ belongs
to some element, say~$\Var{B2}(x,B)$, of~$D$.
\begin{ttbox}
by (resolve_tac [UnionI] 1);
{\out Level 3}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| C <= D; x : B; B : C |] ==> ?B2(x,B) : D}
{\out 2. !!x B. [| C <= D; x : B; B : C |] ==> x : ?B2(x,B)}
\end{ttbox}
Combining \tdx{subsetD} with the assumption $C\subseteq D$ yields
$\Var{a}\in C \Imp \Var{a}\in D$, which reduces subgoal~1. Note that
\texttt{eresolve_tac} has removed that assumption.
\begin{ttbox}
by (eresolve_tac [subsetD] 1);
{\out Level 4}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| x : B; B : C |] ==> ?B2(x,B) : C}
{\out 2. !!x B. [| C <= D; x : B; B : C |] ==> x : ?B2(x,B)}
\end{ttbox}
The rest is routine. Observe how~$\Var{B2}(x,B)$ is instantiated.
\begin{ttbox}
by (assume_tac 1);
{\out Level 5}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| C <= D; x : B; B : C |] ==> x : B}
by (assume_tac 1);
{\out Level 6}
{\out C <= D ==> Union(C) <= Union(D)}
{\out No subgoals!}
\end{ttbox}
Again, \ttindex{Blast_tac} can prove the theorem in one step.
\begin{ttbox}
by (Blast_tac 1);
{\out Level 1}
{\out C <= D ==> Union(C) <= Union(D)}
{\out No subgoals!}
\end{ttbox}
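The theorem can likewise be spot-checked on finite families of sets, again
outside Isabelle; this Python sketch is illustrative only, and its
\texttt{Union} is an ad-hoc helper rather than Isabelle's operator.

```python
# Illustrative check: C <= D implies Union(C) <= Union(D),
# with C and D finite families of frozensets.
def Union(family):
    """Big union: the set of all members of members of the family."""
    result = set()
    for s in family:
        result |= s
    return result

C = {frozenset({1}), frozenset({2, 3})}
D = C | {frozenset({4})}          # C <= D by construction
assert C <= D
assert Union(C) <= Union(D)       # {1, 2, 3} <= {1, 2, 3, 4}
```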
The file \texttt{ZF/equalities.ML} has many similar proofs. Reasoning about
general intersection can be difficult because of its anomalous behaviour on
the empty set. However, \ttindex{Blast_tac} copes well with these. Here is
a typical example, borrowed from Devlin~\cite[page 12]{devlin79}:
\begin{ttbox}
a:C ==> (INT x:C. A(x) Int B(x)) = (INT x:C. A(x)) Int (INT x:C. B(x))
\end{ttbox}
In traditional notation this is
\[ a\in C \,\Imp\, \inter@{x\in C} \Bigl(A(x) \int B(x)\Bigr) =
   \Bigl(\inter@{x\in C} A(x)\Bigr) \int
   \Bigl(\inter@{x\in C} B(x)\Bigr) \]
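The hypothesis $a\in C$ guards against the anomalous empty intersection. A
finite-family spot-check in Python (illustrative only; \texttt{Inter} is an
ad-hoc helper that, like the hypothesis, insists on a nonempty family):

```python
# Illustrative check of Devlin's identity on a finite indexed family.
def Inter(sets):
    """Intersection of a nonempty iterable of sets."""
    sets = list(sets)
    assert sets, "empty family: intersection not defined here"
    result = sets[0]
    for s in sets[1:]:
        result = result & s
    return result

C = {1, 2, 3}                                  # nonempty index set
A = {i: set(range(i + 2)) for i in C}          # A(1) = {0,1,2}, ...
B = {i: {0, i, 7} for i in C}
lhs = Inter(A[i] & B[i] for i in C)
rhs = Inter(A[i] for i in C) & Inter(B[i] for i in C)
assert lhs == rhs == {0}
```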
\section{Low-level reasoning about functions}
The derived rules \texttt{lamI}, \texttt{lamE}, \texttt{lam_type}, \texttt{beta}
and \texttt{eta} support reasoning about functions in a
$\lambda$-calculus style. This is generally easier than regarding
functions as sets of ordered pairs. But sometimes we must look at the
underlying representation, as in the following proof
of~\tdx{fun_disjoint_apply1}. This states that if $f$ and~$g$ are
functions with disjoint domains~$A$ and~$C$, and if $a\in A$, then
$(f\un g)`a = f`a$:
\begin{ttbox}
Goal "[| a:A; f: A->B; g: C->D; A Int C = 0 |] ==> \ttback
\ttback (f Un g)`a = f`a";
{\out Level 0}
{\out [| a : A; f : A -> B; g : C -> D; A Int C = 0 |]}
{\out ==> (f Un g) ` a = f ` a}
{\out 1. [| a : A; f : A -> B; g : C -> D; A Int C = 0 |]}
{\out ==> (f Un g) ` a = f ` a}
\end{ttbox}
Using \tdx{apply_equality}, we reduce the equality to reasoning about
ordered pairs. The second subgoal is to verify that $f\un g$ is a function.
To save space, the assumptions will be abbreviated below.
\begin{ttbox}
by (resolve_tac [apply_equality] 1);
{\out Level 1}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> <a,f ` a> : f Un g}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
We must show that the pair belongs to~$f$ or~$g$; by~\tdx{UnI1} we
need only show that it belongs to~$f$:
\begin{ttbox}
by (resolve_tac [UnI1] 1);
{\out Level 2}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> <a,f ` a> : f}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
To show $\pair{a,f`a}\in f$ we use \tdx{apply_Pair}, which is
essentially the converse of \tdx{apply_equality}:
\begin{ttbox}
by (resolve_tac [apply_Pair] 1);
{\out Level 3}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f : (PROD x:?A2. ?B2(x))}
{\out 2. [| \ldots |] ==> a : ?A2}
{\out 3. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
Using the assumptions $f\in A\to B$ and $a\in A$, we solve the two subgoals
from \tdx{apply_Pair}. Recall that a $\Pi$-set is merely a generalized
function space, and observe that~{\tt?A2} is instantiated to~\texttt{A}.
\begin{ttbox}
by (assume_tac 1);
{\out Level 4}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> a : A}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
by (assume_tac 1);
{\out Level 5}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
To construct functions of the form $f\un g$, we apply
\tdx{fun_disjoint_Un}:
\begin{ttbox}
by (resolve_tac [fun_disjoint_Un] 1);
{\out Level 6}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f : ?A3 -> ?B3}
{\out 2. [| \ldots |] ==> g : ?C3 -> ?D3}
{\out 3. [| \ldots |] ==> ?A3 Int ?C3 = 0}
\end{ttbox}
The remaining subgoals are instances of the assumptions. Again, observe how
unknowns are instantiated:
\begin{ttbox}
by (assume_tac 1);
{\out Level 7}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> g : ?C3 -> ?D3}
{\out 2. [| \ldots |] ==> A Int ?C3 = 0}
by (assume_tac 1);
{\out Level 8}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> A Int C = 0}
by (assume_tac 1);
{\out Level 9}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out No subgoals!}
\end{ttbox}
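The underlying representation is easy to mimic concretely: a function is
literally a set of ordered pairs. This Python sketch is illustrative only;
its \texttt{apply} is our ad-hoc stand-in for application~({\tt `}), not
anything provided by Isabelle.

```python
# Illustrative sketch: functions as sets of ordered pairs with
# disjoint domains; apply(fn, x) stands in for ZF's fn`x.
f = {("a", 1), ("b", 2)}     # f : {a,b} -> {1,2}
g = {("c", 3)}               # g : {c} -> {3}; domains are disjoint

def apply(fn, x):
    """The unique y with (x, y) in fn; fails if fn is not single-valued at x."""
    images = {y for (u, y) in fn if u == x}
    assert len(images) == 1
    return images.pop()

# fun_disjoint_apply1: (f Un g)`a = f`a for a in the domain of f
assert apply(f | g, "a") == apply(f, "a") == 1
```

Disjointness of the domains is what makes \texttt{f | g} single-valued,
exactly as the hypothesis $A\cap C = 0$ does in the Isabelle proof.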
See the files \texttt{ZF/func.ML} and \texttt{ZF/WF.ML} for more
examples of reasoning about functions.

\index{set theory|)}