%% $Id$
\chapter{Zermelo-Fraenkel Set Theory}
\index{set theory|(}

The theory~\thydx{ZF} implements Zermelo-Fraenkel set
theory~\cite{halmos60,suppes72} as an extension of~\texttt{FOL}, classical
first-order logic.  The theory includes a collection of derived natural
deduction rules, for use with Isabelle's classical reasoner.  Much
of it is based on the work of No\"el~\cite{noel}.

A tremendous amount of set theory has been formally developed, including the
basic properties of relations, functions, ordinals and cardinals.  Significant
results have been proved, such as the Schr\"oder-Bernstein Theorem, the
Wellordering Theorem and a version of Ramsey's Theorem.  \texttt{ZF} provides
both the integers and the natural numbers.  General methods have been
developed for solving recursion equations over monotonic functors; these have
been applied to yield constructions of lists, trees, infinite lists, etc.

\texttt{ZF} has a flexible package for handling inductive definitions,
such as inference systems, and datatype definitions, such as lists and
trees.  Moreover it handles coinductive definitions, such as
bisimulation relations, and codatatype definitions, such as streams.  It
provides a streamlined syntax for defining primitive recursive functions over
datatypes.

Because ZF is an extension of FOL, it provides the same packages, namely
\texttt{hyp_subst_tac}, the simplifier, and the classical reasoner.  The
default simpset and claset are usually satisfactory.

Published articles~\cite{paulson-set-I,paulson-set-II} describe \texttt{ZF}
less formally than this chapter.  Isabelle employs a novel treatment of
non-well-founded data structures within the standard {\sc zf} axioms,
including the Axiom of Foundation~\cite{paulson-mscs}.


\section{Which version of axiomatic set theory?}
The two main axiom systems for set theory are Bernays-G\"odel~({\sc bg})
and Zermelo-Fraenkel~({\sc zf}).  Resolution theorem provers can use
{\sc bg} because its axiom system is finite~\cite{boyer86,quaife92}.
{\sc zf} does not have a finite axiom system because of its Axiom Scheme of
Replacement.  This makes it awkward to use with many theorem provers, since
instances of the axiom scheme have to be invoked explicitly.  Since Isabelle
has no difficulty with axiom schemes, we may adopt either axiom system.

These two theories differ in their treatment of {\bf classes}, which are
collections that are `too big' to be sets.  The class of all sets,~$V$,
cannot be a set without admitting Russell's Paradox.  In {\sc bg}, both
classes and sets are individuals; $x\in V$ expresses that $x$ is a set.  In
{\sc zf}, all variables denote sets; classes are identified with unary
predicates.  The two systems define essentially the same sets and classes,
with similar properties.  In particular, a class cannot belong to another
class (let alone a set).

Modern set theorists tend to prefer {\sc zf} because they are mainly concerned
with sets, rather than classes.  {\sc bg} requires tiresome proofs that various
collections are sets; for instance, showing $x\in\{x\}$ requires showing that
$x$ is a set.


\begin{figure} \small
\begin{center}
\begin{tabular}{rrr}
  \it name      &\it meta-type  & \it description \\
  \cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder\\
  \cdx{0}       & $i$           & empty set\\
  \cdx{cons}    & $[i,i]\To i$  & finite set constructor\\
  \cdx{Upair}   & $[i,i]\To i$  & unordered pairing\\
  \cdx{Pair}    & $[i,i]\To i$  & ordered pairing\\
  \cdx{Inf}     & $i$           & infinite set\\
  \cdx{Pow}     & $i\To i$      & powerset\\
  \cdx{Union} \cdx{Inter} & $i\To i$ & set union/intersection \\
  \cdx{split}   & $[[i,i]\To i, i] \To i$ & generalized projection\\
  \cdx{fst} \cdx{snd} & $i\To i$ & projections\\
  \cdx{converse}& $i\To i$      & converse of a relation\\
  \cdx{succ}    & $i\To i$      & successor\\
  \cdx{Collect} & $[i,i\To o]\To i$ & separation\\
  \cdx{Replace} & $[i, [i,i]\To o] \To i$ & replacement\\
  \cdx{PrimReplace} & $[i, [i,i]\To o] \To i$ & primitive replacement\\
  \cdx{RepFun}  & $[i, i\To i] \To i$ & functional replacement\\
  \cdx{Pi} \cdx{Sigma} & $[i,i\To i]\To i$ & general product/sum\\
  \cdx{domain}  & $i\To i$      & domain of a relation\\
  \cdx{range}   & $i\To i$      & range of a relation\\
  \cdx{field}   & $i\To i$      & field of a relation\\
  \cdx{Lambda}  & $[i, i\To i]\To i$ & $\lambda$-abstraction\\
  \cdx{restrict}& $[i, i] \To i$ & restriction of a function\\
  \cdx{The}     & $[i\To o]\To i$ & definite description\\
  \cdx{if}      & $[o,i,i]\To i$ & conditional\\
  \cdx{Ball} \cdx{Bex} & $[i, i\To o]\To o$ & bounded quantifiers
\end{tabular}
\end{center}
\subcaption{Constants}

\begin{center}
\index{*"`"` symbol}
\index{*"-"`"` symbol}
\index{*"` symbol}\index{function applications!in ZF}
\index{*"- symbol}
\index{*": symbol}
\index{*"<"= symbol}
\begin{tabular}{rrrr}
  \it symbol  & \it meta-type & \it priority & \it description \\
  \tt ``        & $[i,i]\To i$  &  Left 90      & image \\
  \tt -``       & $[i,i]\To i$  &  Left 90      & inverse image \\
  \tt `         & $[i,i]\To i$  &  Left 90      & application \\
  \sdx{Int}     & $[i,i]\To i$  &  Left 70      & intersection ($\int$) \\
  \sdx{Un}      & $[i,i]\To i$  &  Left 65      & union ($\un$) \\
  \tt -         & $[i,i]\To i$  &  Left 65      & set difference ($-$) \\[1ex]
  \tt:          & $[i,i]\To o$  &  Left 50      & membership ($\in$) \\
  \tt <=        & $[i,i]\To o$  &  Left 50      & subset ($\subseteq$)
\end{tabular}
\end{center}
\subcaption{Infixes}
\caption{Constants of ZF} \label{zf-constants}
\end{figure}


\section{The syntax of set theory}
The language of set theory, as studied by logicians, has no constants.  The
traditional axioms merely assert the existence of empty sets, unions,
powersets, etc.; this would be intolerable for practical reasoning.  The
Isabelle theory declares constants for primitive sets.  It also extends
\texttt{FOL} with additional syntax for finite sets, ordered pairs,
comprehension, general union/intersection, general sums/products, and
bounded quantifiers.  In most other respects, Isabelle implements precisely
Zermelo-Fraenkel set theory.

Figure~\ref{zf-constants} lists the constants and infixes of~ZF, while
Figure~\ref{zf-trans} presents the syntax translations.  Finally,
Figure~\ref{zf-syntax} presents the full grammar for set theory, including the
constructs of FOL.

Local abbreviations can be introduced by a \texttt{let} construct whose
syntax appears in Fig.\ts\ref{zf-syntax}.  Internally it is translated into
the constant~\cdx{Let}.  It can be expanded by rewriting with its
definition, \tdx{Let_def}.
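The behaviour of \tdx{Let_def}, namely \texttt{Let(s, f) == f(s)}, can be
mimicked by a one-line Python sketch (an illustration only, not Isabelle
syntax):

```python
# Let(s, f) == f(s): bind a value locally, then evaluate the body on it
def Let(s, f):
    return f(s)

# models "let x = 3 in x + x"
print(Let(3, lambda x: x + x))   # 6
```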

Apart from \texttt{let}, set theory does not use polymorphism.  All terms in
ZF have type~\tydx{i}, which is the type of individuals and has
class~\texttt{term}.  The type of first-order formulae, remember,
is~\textit{o}.

Infix operators include binary union and intersection ($A\un B$ and
$A\int B$), set difference ($A-B$), and the subset and membership
relations.  Note that $a$\verb|~:|$b$ is translated to $\neg(a\in b)$.  The
union and intersection operators ($\bigcup A$ and $\bigcap A$) form the
union or intersection of a set of sets; $\bigcup A$ means the same as
$\bigcup@{x\in A}x$.  Of these operators, only $\bigcup A$ is primitive.

The constant \cdx{Upair} constructs unordered pairs; thus {\tt
  Upair($A$,$B$)} denotes the set~$\{A,B\}$ and \texttt{Upair($A$,$A$)}
denotes the singleton~$\{A\}$.  General union is used to define binary
union.  The Isabelle version goes on to define the constant
\cdx{cons}:
\begin{eqnarray*}
   A\cup B            & \equiv & \bigcup(\texttt{Upair}(A,B)) \\
   \texttt{cons}(a,B) & \equiv & \texttt{Upair}(a,a) \un B
\end{eqnarray*}
The $\{a@1, \ldots\}$ notation abbreviates finite sets constructed in the
obvious manner using~\texttt{cons} and~$\emptyset$ (the empty set):
\begin{eqnarray*}
 \{a,b,c\} & \equiv & \texttt{cons}(a,\texttt{cons}(b,\texttt{cons}(c,\emptyset)))
\end{eqnarray*}
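These definitions can be modelled concretely in Python, using
\texttt{frozenset} to stand for hereditarily finite sets (a sketch for
illustration only; the helper names are ours, not Isabelle's):

```python
# Model finite ZF sets with frozensets.
EMPTY = frozenset()                    # 0, the empty set

def Upair(a, b):
    return frozenset({a, b})           # Upair(A,A) is the singleton {A}

def Union(C):
    # big union: all members of members of C
    return frozenset(x for s in C for x in s)

def Un(A, B):
    return Union(Upair(A, B))          # A Un B == Union(Upair(A,B))

def cons(a, B):
    return Un(Upair(a, a), B)          # cons(a,B) == Upair(a,a) Un B

# {a,b,c} == cons(a, cons(b, cons(c, 0)))
def finite_set(*xs):
    s = EMPTY
    for x in reversed(xs):
        s = cons(x, s)
    return s
```

Here \texttt{finite\_set(1, 2, 3)} yields the same value as
\texttt{frozenset(\{1, 2, 3\})}, mirroring the expansion of the
$\{a@1, \ldots\}$ notation.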

The constant \cdx{Pair} constructs ordered pairs, as in {\tt
  Pair($a$,$b$)}.  Ordered pairs may also be written within angle brackets,
as {\tt<$a$,$b$>}.  The $n$-tuple {\tt<$a@1$,\ldots,$a@{n-1}$,$a@n$>}
abbreviates the nest of pairs\par\nobreak
\centerline{\texttt{Pair($a@1$,\ldots,Pair($a@{n-1}$,$a@n$)\ldots).}}
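The right-nesting convention is easy to state as code (a Python sketch using
native tuples to stand for \texttt{Pair}; \texttt{ntuple} is a hypothetical
helper, not part of the theory):

```python
from functools import reduce

def Pair(a, b):
    return (a, b)   # stands in for ZF's Pair(a,b)

def ntuple(*xs):
    # <a1,...,an> abbreviates Pair(a1, ..., Pair(a_{n-1}, a_n)...)
    *init, last = xs
    return reduce(lambda acc, x: Pair(x, acc), reversed(init), last)

print(ntuple(1, 2, 3))   # (1, (2, 3))
```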

In ZF, a function is a set of pairs.  A ZF function~$f$ is simply an
individual as far as Isabelle is concerned: its Isabelle type is~$i$, not say
$i\To i$.  The infix operator~{\tt`} denotes the application of a function set
to its argument; we must write~$f{\tt`}x$, not~$f(x)$.  The syntax for image
is~$f{\tt``}A$ and that for inverse image is~$f{\tt-``}A$.
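The function-as-set-of-pairs view can be sketched in Python, representing a
function set as a set of $(x,y)$ tuples (illustrative names; {\tt`}, {\tt``}
and {\tt-``} become \texttt{apply\_}, \texttt{image} and \texttt{vimage}):

```python
# A ZF function is just a set of ordered pairs; model pairs as tuples.
def apply_(f, x):
    # f`x: THE y such that <x,y> : f (assumes f is single-valued at x)
    ys = {b for (a, b) in f if a == x}
    (y,) = ys          # fails unless exactly one such y exists
    return y

def image(f, A):
    # f``A: all y with <x,y> : f for some x in A
    return {b for (a, b) in f if a in A}

def vimage(f, A):
    # f-``A: inverse image, i.e. converse(f)``A
    return {a for (a, b) in f if b in A}

square = {(0, 0), (1, 1), (2, 4), (3, 9)}
print(apply_(square, 3))        # 9
print(image(square, {2, 3}))    # {4, 9}
print(vimage(square, {0, 1}))   # {0, 1}
```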


\begin{figure}
\index{lambda abs@$\lambda$-abstractions!in ZF}
\index{*"-"> symbol}
\index{*"* symbol}
\begin{center} \footnotesize\tt\frenchspacing
\begin{tabular}{rrr}
  \it external          & \it internal  & \it description \\
  $a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm negated membership\\
  \ttlbrace$a@1$, $\ldots$, $a@n$\ttrbrace  &  cons($a@1$,$\ldots$,cons($a@n$,0)) &
        \rm finite set \\
  <$a@1$, $\ldots$, $a@{n-1}$, $a@n$> &
        Pair($a@1$,\ldots,Pair($a@{n-1}$,$a@n$)\ldots) &
        \rm ordered $n$-tuple \\
  \ttlbrace$x$:$A . P[x]$\ttrbrace        &  Collect($A$,$\lambda x. P[x]$) &
        \rm separation \\
  \ttlbrace$y . x$:$A$, $Q[x,y]$\ttrbrace &  Replace($A$,$\lambda x\,y. Q[x,y]$) &
        \rm replacement \\
  \ttlbrace$b[x] . x$:$A$\ttrbrace        &  RepFun($A$,$\lambda x. b[x]$) &
        \rm functional replacement \\
  \sdx{INT} $x$:$A . B[x]$      & Inter(\ttlbrace$B[x] . x$:$A$\ttrbrace) &
        \rm general intersection \\
  \sdx{UN}  $x$:$A . B[x]$      & Union(\ttlbrace$B[x] . x$:$A$\ttrbrace) &
        \rm general union \\
  \sdx{PROD} $x$:$A . B[x]$     & Pi($A$,$\lambda x. B[x]$) &
        \rm general product \\
  \sdx{SUM}  $x$:$A . B[x]$     & Sigma($A$,$\lambda x. B[x]$) &
        \rm general sum \\
  $A$ -> $B$            & Pi($A$,$\lambda x. B$) &
        \rm function space \\
  $A$ * $B$             & Sigma($A$,$\lambda x. B$) &
        \rm binary product \\
  \sdx{THE}  $x . P[x]$ & The($\lambda x. P[x]$) &
        \rm definite description \\
  \sdx{lam}  $x$:$A . b[x]$     & Lambda($A$,$\lambda x. b[x]$) &
        \rm $\lambda$-abstraction\\[1ex]
  \sdx{ALL} $x$:$A . P[x]$      & Ball($A$,$\lambda x. P[x]$) &
        \rm bounded $\forall$ \\
  \sdx{EX}  $x$:$A . P[x]$      & Bex($A$,$\lambda x. P[x]$) &
        \rm bounded $\exists$
\end{tabular}
\end{center}
\caption{Translations for ZF} \label{zf-trans}
\end{figure}


\begin{figure}
\index{*let symbol}
\index{*in symbol}
\dquotes
\[\begin{array}{rcl}
    term & = & \hbox{expression of type~$i$} \\
         & | & "let"~id~"="~term";"\dots";"~id~"="~term~"in"~term \\
         & | & "if"~term~"then"~term~"else"~term \\
         & | & "{\ttlbrace} " term\; ("," term)^* " {\ttrbrace}" \\
         & | & "< "  term\; ("," term)^* " >" \\
         & | & "{\ttlbrace} " id ":" term " . " formula " {\ttrbrace}" \\
         & | & "{\ttlbrace} " id " . " id ":" term ", " formula " {\ttrbrace}" \\
         & | & "{\ttlbrace} " term " . " id ":" term " {\ttrbrace}" \\
         & | & term " `` " term \\
         & | & term " -`` " term \\
         & | & term " ` " term \\
         & | & term " * " term \\
         & | & term " Int " term \\
         & | & term " Un " term \\
         & | & term " - " term \\
         & | & term " -> " term \\
         & | & "THE~~" id " . " formula \\
         & | & "lam~~" id ":" term " . " term \\
         & | & "INT~~" id ":" term " . " term \\
         & | & "UN~~~" id ":" term " . " term \\
         & | & "PROD~" id ":" term " . " term \\
         & | & "SUM~~" id ":" term " . " term \\[2ex]
 formula & = & \hbox{expression of type~$o$} \\
         & | & term " : " term \\
         & | & term " \ttilde: " term \\
         & | & term " <= " term \\
         & | & term " = " term \\
         & | & term " \ttilde= " term \\
         & | & "\ttilde\ " formula \\
         & | & formula " \& " formula \\
         & | & formula " | " formula \\
         & | & formula " --> " formula \\
         & | & formula " <-> " formula \\
         & | & "ALL " id ":" term " . " formula \\
         & | & "EX~~" id ":" term " . " formula \\
         & | & "ALL~" id~id^* " . " formula \\
         & | & "EX~~" id~id^* " . " formula \\
         & | & "EX!~" id~id^* " . " formula
  \end{array}
\]
\caption{Full grammar for ZF} \label{zf-syntax}
\end{figure}


\section{Binding operators}
The constant \cdx{Collect} constructs sets by the principle of {\bf
  separation}.  The syntax for separation is
\hbox{\tt\ttlbrace$x$:$A$.\ $P[x]$\ttrbrace}, where $P[x]$ is a formula
that may contain free occurrences of~$x$.  It abbreviates the set {\tt
  Collect($A$,$\lambda x. P[x]$)}, which consists of all $x\in A$ that
satisfy~$P[x]$.  Note that \texttt{Collect} is an unfortunate choice of
name: some set theories adopt a set-formation principle, related to
replacement, called collection.

The constant \cdx{Replace} constructs sets by the principle of {\bf
  replacement}.  The syntax
\hbox{\tt\ttlbrace$y$.\ $x$:$A$,$Q[x,y]$\ttrbrace} denotes the set {\tt
  Replace($A$,$\lambda x\,y. Q[x,y]$)}, which consists of all~$y$ such
that there exists $x\in A$ satisfying~$Q[x,y]$.  The Replacement Axiom
has the condition that $Q$ must be single-valued over~$A$: for
all~$x\in A$ there exists at most one $y$ satisfying~$Q[x,y]$.  A
single-valued binary predicate is also called a {\bf class function}.

The constant \cdx{RepFun} expresses a special case of replacement,
where $Q[x,y]$ has the form $y=b[x]$.  Such a $Q$ is trivially
single-valued, since it is just the graph of the meta-level
function~$\lambda x. b[x]$.  The resulting set consists of all $b[x]$
for~$x\in A$.  This is analogous to the \ML{} functional \texttt{map},
since it applies a function to every element of a set.  The syntax is
\hbox{\tt\ttlbrace$b[x]$.\ $x$:$A$\ttrbrace}, which expands to {\tt
  RepFun($A$,$\lambda x. b[x]$)}.
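Over a finite universe, the behaviour of both operators can be mimicked in
Python (a sketch only; the explicit \texttt{universe} parameter is our
device for finite search, ZF itself needs no such thing):

```python
def RepFun(A, b):
    # {b(x) . x:A} -- like ML's map, applied to a set
    return {b(x) for x in A}

def Replace(A, Q, universe):
    # {y . x:A, Q(x,y)}, keeping only the single-valued part of Q,
    # as in Replace_def: (EX! z. Q(x,z)) & Q(x,y)
    return {y
            for x in A
            for y in universe
            if Q(x, y) and all(z == y for z in universe if Q(x, z))}

print(RepFun({1, 2, 3}, lambda x: x * x))                      # {1, 4, 9}
print(Replace({1, 2, 3}, lambda x, y: y == x + 1, range(10)))  # {2, 3, 4}
```

Note how \texttt{Replace} silently discards any $x$ for which $Q(x,-)$ is
not single-valued, matching the unconditional equivalence discussed later
for \texttt{Replace} versus \texttt{PrimReplace}.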

\index{*INT symbol}\index{*UN symbol}
General unions and intersections of indexed
families of sets, namely $\bigcup@{x\in A}B[x]$ and $\bigcap@{x\in A}B[x]$,
are written \hbox{\tt UN $x$:$A$.\ $B[x]$} and \hbox{\tt INT $x$:$A$.\ $B[x]$}.
Their meaning is expressed using \texttt{RepFun} as
\[
\bigcup(\{B[x]. x\in A\}) \qquad\hbox{and}\qquad
\bigcap(\{B[x]. x\in A\}).
\]
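A Python sketch of these indexed operations (hypothetical helper names;
note that the intersection is only meaningful for a nonempty index set, a
point the derived rule \tdx{InterI} makes precise):

```python
def UN(A, B):
    # UN x:A. B(x)  ==  Union({B(x) . x:A})
    return set().union(*(B(x) for x in A))

def INT(A, B):
    # INT x:A. B(x)  ==  Inter({B(x) . x:A}); requires nonempty A
    sets = [set(B(x)) for x in A]
    assert sets, "empty intersection is left arbitrary in ZF"
    return set.intersection(*sets)

print(UN({1, 2, 3}, lambda x: {x, x + 1}))   # {1, 2, 3, 4}
print(INT({1, 2, 3}, lambda x: {x, 10}))     # {10}
```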
General sums $\sum@{x\in A}B[x]$ and products $\prod@{x\in A}B[x]$ can be
constructed in set theory, where $B[x]$ is a family of sets over~$A$.  They
have as special cases $A\times B$ and $A\to B$, where $B$ is simply a set.
This is similar to the situation in Constructive Type Theory (set theory
has `dependent sets') and calls for similar syntactic conventions.  The
constants~\cdx{Sigma} and~\cdx{Pi} construct general sums and
products.  Instead of \texttt{Sigma($A$,$B$)} and \texttt{Pi($A$,$B$)} we may
write
\hbox{\tt SUM $x$:$A$.\ $B[x]$} and \hbox{\tt PROD $x$:$A$.\ $B[x]$}.
\index{*SUM symbol}\index{*PROD symbol}%
The special cases \hbox{\tt$A$*$B$} and \hbox{\tt$A$->$B$} abbreviate
general sums and products over a constant family.\footnote{Unlike normal
  infix operators, {\tt*} and {\tt->} merely define abbreviations; there are
  no constants~\texttt{op~*} and~\hbox{\tt op~->}.}  Isabelle accepts these
abbreviations in parsing and uses them whenever possible for printing.

\index{*THE symbol} As mentioned above, whenever the axioms assert the
existence and uniqueness of a set, Isabelle's set theory declares a constant
for that set.  These constants can express the {\bf definite description}
operator~$\iota x. P[x]$, which stands for the unique~$a$ satisfying~$P[a]$,
if such exists.  Since all terms in ZF denote something, a description is
always meaningful, but we do not know its value unless $P[x]$ defines it
uniquely.  Using the constant~\cdx{The}, we may write descriptions as {\tt
  The($\lambda x. P[x]$)} or use the syntax \hbox{\tt THE $x$.\ $P[x]$}.
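The definition \tdx{the_def} (in the figure of rules below) makes the
description denote its unique witness when one exists, and the arbitrary
value~0 otherwise.  A Python sketch over a finite universe (the
\texttt{universe} argument is our device for finite search):

```python
def The(P, universe):
    # the_def: The(P) == Union({y . x:{0}, P(y)})
    # If P has a unique witness w, the inner set is {w} and Union gives w;
    # otherwise the inner set is empty and the result is 0 (arbitrary).
    witnesses = {y for y in universe if P(y)}
    if len(witnesses) == 1:
        return next(iter(witnesses))
    return frozenset()   # 0: meaningful, but its value tells us nothing

print(The(lambda y: y * y == 9, range(10)))   # 3
print(The(lambda y: y < 5, range(10)))        # frozenset(), i.e. 0
```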

\index{*lam symbol}
Function sets may be written in $\lambda$-notation; $\lambda x\in A. b[x]$
stands for the set of all pairs $\pair{x,b[x]}$ for $x\in A$.  In order for
this to be a set, the function's domain~$A$ must be given.  Using the
constant~\cdx{Lambda}, we may express function sets as {\tt
  Lambda($A$,$\lambda x. b[x]$)} or use the syntax \hbox{\tt lam $x$:$A$.\ $b[x]$}.

Isabelle's set theory defines two {\bf bounded quantifiers}:
\begin{eqnarray*}
   \forall x\in A. P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
   \exists x\in A. P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
\end{eqnarray*}
The constants~\cdx{Ball} and~\cdx{Bex} are defined
accordingly.  Instead of \texttt{Ball($A$,$P$)} and \texttt{Bex($A$,$P$)} we may
write
\hbox{\tt ALL $x$:$A$.\ $P[x]$} and \hbox{\tt EX $x$:$A$.\ $P[x]$}.
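For finite sets the two bounded quantifiers behave exactly like Python's
\texttt{all} and \texttt{any} (a sketch, using illustrative names):

```python
def Ball(A, P):
    # ALL x:A. P(x)  ==  ALL x. x:A --> P(x)
    return all(P(x) for x in A)

def Bex(A, P):
    # EX x:A. P(x)   ==  EX x. x:A & P(x)
    return any(P(x) for x in A)

print(Ball({2, 4, 6}, lambda x: x % 2 == 0))   # True
print(Bex({2, 4, 6}, lambda x: x > 5))         # True
```

In particular, \texttt{Ball} over the empty set is vacuously true and
\texttt{Bex} over the empty set is false, just as the abbreviations above
dictate.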


%%%% ZF.thy

\begin{figure}
\begin{ttbox}
\tdx{Let_def}      Let(s, f) == f(s)

\tdx{Ball_def}     Ball(A,P) == ALL x. x:A --> P(x)
\tdx{Bex_def}      Bex(A,P)  == EX x. x:A & P(x)

\tdx{subset_def}   A <= B == ALL x:A. x:B
\tdx{extension}    A = B <-> A <= B & B <= A

\tdx{Union_iff}    A : Union(C) <-> (EX B:C. A:B)
\tdx{Pow_iff}      A : Pow(B) <-> A <= B
\tdx{foundation}   A=0 | (EX x:A. ALL y:x. ~ y:A)

\tdx{replacement}  (ALL x:A. ALL y z. P(x,y) & P(x,z) --> y=z) ==>
                   b : PrimReplace(A,P) <-> (EX x:A. P(x,b))
\subcaption{The Zermelo-Fraenkel Axioms}

\tdx{Replace_def}  Replace(A,P) ==
                   PrimReplace(A, \%x y. (EX!z. P(x,z)) & P(x,y))
\tdx{RepFun_def}   RepFun(A,f)  == {\ttlbrace}y . x:A, y=f(x){\ttrbrace}
\tdx{the_def}      The(P)       == Union({\ttlbrace}y . x:{\ttlbrace}0{\ttrbrace}, P(y){\ttrbrace})
\tdx{if_def}       if(P,a,b)    == THE z. P & z=a | ~P & z=b
\tdx{Collect_def}  Collect(A,P) == {\ttlbrace}y . x:A, x=y & P(x){\ttrbrace}
\tdx{Upair_def}    Upair(a,b)   ==
                 {\ttlbrace}y. x:Pow(Pow(0)), (x=0 & y=a) | (x=Pow(0) & y=b){\ttrbrace}
\subcaption{Consequences of replacement}

\tdx{Inter_def}    Inter(A) == {\ttlbrace}x:Union(A) . ALL y:A. x:y{\ttrbrace}
\tdx{Un_def}       A Un  B  == Union(Upair(A,B))
\tdx{Int_def}      A Int B  == Inter(Upair(A,B))
\tdx{Diff_def}     A - B    == {\ttlbrace}x:A . x~:B{\ttrbrace}
\subcaption{Union, intersection, difference}
\end{ttbox}
\caption{Rules and axioms of ZF} \label{zf-rules}
\end{figure}


\begin{figure}
\begin{ttbox}
\tdx{cons_def}      cons(a,A) == Upair(a,a) Un A
\tdx{succ_def}      succ(i) == cons(i,i)
\tdx{infinity}      0:Inf & (ALL y:Inf. succ(y): Inf)
\subcaption{Finite and infinite sets}

\tdx{Pair_def}      <a,b> == {\ttlbrace}{\ttlbrace}a,a{\ttrbrace}, {\ttlbrace}a,b{\ttrbrace}{\ttrbrace}
\tdx{split_def}     split(c,p) == THE y. EX a b. p=<a,b> & y=c(a,b)
\tdx{fst_def}       fst(p) == split(\%x y. x, p)
\tdx{snd_def}       snd(p) == split(\%x y. y, p)
\tdx{Sigma_def}     Sigma(A,B) == UN x:A. UN y:B(x). {\ttlbrace}<x,y>{\ttrbrace}
\subcaption{Ordered pairs and Cartesian products}

\tdx{converse_def}  converse(r) == {\ttlbrace}z. w:r, EX x y. w=<x,y> & z=<y,x>{\ttrbrace}
\tdx{domain_def}    domain(r) == {\ttlbrace}x. w:r, EX y. w=<x,y>{\ttrbrace}
\tdx{range_def}     range(r) == domain(converse(r))
\tdx{field_def}     field(r) == domain(r) Un range(r)
\tdx{image_def}     r `` A == {\ttlbrace}y : range(r) . EX x:A. <x,y> : r{\ttrbrace}
\tdx{vimage_def}    r -`` A == converse(r)``A
\subcaption{Operations on relations}

\tdx{lam_def}       Lambda(A,b) == {\ttlbrace}<x,b(x)> . x:A{\ttrbrace}
\tdx{apply_def}     f`a == THE y. <a,y> : f
\tdx{Pi_def}        Pi(A,B) == {\ttlbrace}f: Pow(Sigma(A,B)). ALL x:A. EX! y. <x,y>: f{\ttrbrace}
\tdx{restrict_def}  restrict(f,A) == lam x:A. f`x
\subcaption{Functions and general product}
\end{ttbox}
\caption{Further definitions of ZF} \label{zf-defs}
\end{figure}




\section{The Zermelo-Fraenkel axioms}
The axioms appear in Fig.\ts \ref{zf-rules}.  They resemble those
presented by Suppes~\cite{suppes72}.  Most of the theory consists of
definitions.  In particular, bounded quantifiers and the subset relation
appear in other axioms.  Object-level quantifiers and implications have
been replaced by meta-level ones wherever possible, to simplify use of the
axioms.  See the file \texttt{ZF/ZF.thy} for details.

The traditional replacement axiom asserts
\[ y \in \texttt{PrimReplace}(A,P) \bimp (\exists x\in A. P(x,y)) \]
subject to the condition that $P(x,y)$ is single-valued for all~$x\in A$.
The Isabelle theory defines \cdx{Replace} to apply
\cdx{PrimReplace} to the single-valued part of~$P$, namely
\[ (\exists!z. P(x,z)) \conj P(x,y). \]
Thus $y\in \texttt{Replace}(A,P)$ if and only if there is some~$x$ such that
$P(x,-)$ holds uniquely for~$y$.  Because the equivalence is unconditional,
\texttt{Replace} is much easier to use than \texttt{PrimReplace}; it defines the
same set, if $P(x,y)$ is single-valued.  The nice syntax for replacement
expands to \texttt{Replace}.

Other consequences of replacement include functional replacement
(\cdx{RepFun}) and definite descriptions (\cdx{The}).
Axioms for separation (\cdx{Collect}) and unordered pairs
(\cdx{Upair}) are traditionally assumed, but they actually follow
from replacement~\cite[pages 237--8]{suppes72}.

The definitions of general intersection, etc., are straightforward.  Note
the definition of \texttt{cons}, which underlies the finite set notation.
The axiom of infinity gives us a set that contains~0 and is closed under
successor (\cdx{succ}).  Although this set is not uniquely defined,
the theory names it (\cdx{Inf}) in order to simplify the
construction of the natural numbers.
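The construction of naturals from~0 and \texttt{succ} can be sketched with
frozensets, in von Neumann style (\texttt{nat} is a hypothetical helper for
illustration):

```python
# von Neumann naturals: 0 = {}, succ(i) = cons(i,i) = {i} Un i
EMPTY = frozenset()

def succ(i):
    return frozenset({i}) | i

def nat(n):
    v = EMPTY
    for _ in range(n):
        v = succ(v)
    return v

# Each natural n is exactly the set of all smaller naturals:
print(len(nat(3)))                              # 3
print(nat(2) == frozenset({nat(0), nat(1)}))    # True
```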

Further definitions appear in Fig.\ts\ref{zf-defs}.  Ordered pairs are
defined in the standard way, $\pair{a,b}\equiv\{\{a\},\{a,b\}\}$.  Recall
that \cdx{Sigma}$(A,B)$ generalizes the Cartesian product of two
sets.  It is defined to be the union of all singleton sets
$\{\pair{x,y}\}$, for $x\in A$ and $y\in B(x)$.  This is a typical usage of
general union.
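Both constructions can be modelled faithfully with frozensets (a sketch;
here \texttt{Pair} really is the Kuratowski encoding, and \texttt{fst}
recovers the first component as the sole element common to both members of
the pair):

```python
# Kuratowski pairs: <a,b> = {{a},{a,b}}
def Pair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def Sigma(A, B):
    # union of all singletons {<x,y>} for x in A, y in B(x)
    return frozenset(Pair(x, y) for x in A for y in B(x))

def fst(p):
    # the unique element lying in every member of the pair
    (x,) = frozenset.intersection(*p)
    return x

print(fst(Pair(1, 2)))                        # 1
print(len(Sigma({1, 2}, lambda x: {x, 3})))   # 4
```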

The projections \cdx{fst} and~\cdx{snd} are defined in terms of the
generalized projection \cdx{split}.  The latter has been borrowed from
Martin-L\"of's Type Theory, and is often easier to use than \cdx{fst}
and~\cdx{snd}.

Operations on relations include converse, domain, range, and image.  The
set ${\tt Pi}(A,B)$ generalizes the space of functions between two sets.
Note the simple definitions of $\lambda$-abstraction (using
\cdx{RepFun}) and application (using a definite description).  The
function \cdx{restrict}$(f,A)$ has the same values as~$f$, but only
over the domain~$A$.


%%%% zf.ML

\begin{figure}
\begin{ttbox}
\tdx{ballI}       [| !!x. x:A ==> P(x) |] ==> ALL x:A. P(x)
\tdx{bspec}       [| ALL x:A. P(x);  x: A |] ==> P(x)
\tdx{ballE}       [| ALL x:A. P(x);  P(x) ==> Q;  ~ x:A ==> Q |] ==> Q

\tdx{ball_cong}   [| A=A';  !!x. x:A' ==> P(x) <-> P'(x) |] ==>
            (ALL x:A. P(x)) <-> (ALL x:A'. P'(x))

\tdx{bexI}        [| P(x);  x: A |] ==> EX x:A. P(x)
\tdx{bexCI}       [| ALL x:A. ~P(x) ==> P(a);  a: A |] ==> EX x:A. P(x)
\tdx{bexE}        [| EX x:A. P(x);  !!x. [| x:A; P(x) |] ==> Q |] ==> Q

\tdx{bex_cong}    [| A=A';  !!x. x:A' ==> P(x) <-> P'(x) |] ==>
            (EX x:A. P(x)) <-> (EX x:A'. P'(x))
\subcaption{Bounded quantifiers}

\tdx{subsetI}       (!!x. x:A ==> x:B) ==> A <= B
\tdx{subsetD}       [| A <= B;  c:A |] ==> c:B
\tdx{subsetCE}      [| A <= B;  ~(c:A) ==> P;  c:B ==> P |] ==> P
\tdx{subset_refl}   A <= A
\tdx{subset_trans}  [| A<=B;  B<=C |] ==> A<=C

\tdx{equalityI}     [| A <= B;  B <= A |] ==> A = B
\tdx{equalityD1}    A = B ==> A<=B
\tdx{equalityD2}    A = B ==> B<=A
\tdx{equalityE}     [| A = B;  [| A<=B; B<=A |] ==> P |] ==> P
\subcaption{Subsets and extensionality}

\tdx{emptyE}          a:0 ==> P
\tdx{empty_subsetI}   0 <= A
\tdx{equals0I}        [| !!y. y:A ==> False |] ==> A=0
\tdx{equals0D}        [| A=0;  a:A |] ==> P

\tdx{PowI}            A <= B ==> A : Pow(B)
\tdx{PowD}            A : Pow(B) ==> A<=B
\subcaption{The empty set; power sets}
\end{ttbox}
\caption{Basic derived rules for ZF} \label{zf-lemmas1}
\end{figure}


\section{From basic lemmas to function spaces}
Faced with so many definitions, it is essential to prove lemmas.  Even
trivial theorems like $A \int B = B \int A$ would be difficult to
prove from the definitions alone.  Isabelle's set theory derives many
rules using a natural deduction style.  Ideally, a natural deduction
rule should introduce or eliminate just one operator, but this is not
always practical.  For most operators, we can forget the definitions
and use the derived rules instead.
paulson@6121:
paulson@6121: \subsection{Fundamental lemmas}
paulson@6121: Figure~\ref{zf-lemmas1} presents the derived rules for the most basic
paulson@6121: operators. The rules for the bounded quantifiers resemble those for the
paulson@6121: ordinary quantifiers, but note that \tdx{ballE} uses a negated assumption
paulson@6121: in the style of Isabelle's classical reasoner. The \rmindex{congruence
paulson@6121: rules} \tdx{ball_cong} and \tdx{bex_cong} are required by Isabelle's
paulson@6121: simplifier, but have few other uses. Congruence rules must be specially
paulson@6121: derived for all binding operators, and henceforth will not be shown.
paulson@6121:
paulson@6121: Figure~\ref{zf-lemmas1} also shows rules for the subset and equality
paulson@6121: relations (proof by extensionality), and rules about the empty set and the
paulson@6121: power set operator.
paulson@6121:
paulson@6121: Figure~\ref{zf-lemmas2} presents rules for replacement and separation.
paulson@6121: The rules for \cdx{Replace} and \cdx{RepFun} are much simpler than
paulson@6121: comparable rules for \texttt{PrimReplace} would be. The principle of
paulson@6121: separation is proved explicitly, although most proofs should use the
paulson@6121: natural deduction rules for \texttt{Collect}. The elimination rule
paulson@6121: \tdx{CollectE} is equivalent to the two destruction rules
paulson@6121: \tdx{CollectD1} and \tdx{CollectD2}, but each rule is suited to
paulson@6121: particular circumstances. Although too many rules can be confusing, there
paulson@6121: is no reason to aim for a minimal set of rules. See the file
paulson@6121: \texttt{ZF/ZF.ML} for a complete listing.
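paulson@6121: 
paulson@6121: For instance, the separation rules make simple membership facts
paulson@6121: immediate.  The following goal is an illustrative script in the ML
paulson@6121: proof style, not taken from the distribution; the classical reasoner
paulson@6121: proves it outright:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "[| a:A;  P(a) |] ==> a : {\ttlbrace}x:A. P(x){\ttrbrace}";
paulson@6121: by (Blast_tac 1);
paulson@6121: \end{ttbox}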
paulson@6121:
paulson@6121: Figure~\ref{zf-lemmas3} presents rules for general union and intersection.
paulson@6121: The empty intersection should be undefined. We cannot have
paulson@6121: $\bigcap(\emptyset)=V$ because $V$, the universal class, is not a set. All
wenzelm@9695: expressions denote something in ZF set theory; the definition of
paulson@6121: intersection implies $\bigcap(\emptyset)=\emptyset$, but this value is
paulson@6121: arbitrary. The rule \tdx{InterI} must have a premise to exclude
paulson@6121: the empty intersection. Some of the laws governing intersections require
paulson@6121: similar premises.
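paulson@6121: 
paulson@6121: Such premises appear in even the simplest consequences.  For example,
paulson@6121: $\bigcap(C)\subseteq\bigcup(C)$ holds only for non-empty~$C$; an
paulson@6121: illustrative script (not from the distribution) is
paulson@6121: \begin{ttbox}
paulson@6121: Goal "c:C ==> Inter(C) <= Union(C)";
paulson@6121: by (Blast_tac 1);
paulson@6121: \end{ttbox}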
paulson@6121:
paulson@6121:
paulson@6121: %the [p] gives better page breaking for the book
paulson@6121: \begin{figure}[p]
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{ReplaceI} [| x: A; P(x,b); !!y. P(x,y) ==> y=b |] ==>
paulson@6121: b : {\ttlbrace}y. x:A, P(x,y){\ttrbrace}
paulson@6121:
paulson@6121: \tdx{ReplaceE} [| b : {\ttlbrace}y. x:A, P(x,y){\ttrbrace};
paulson@6121: !!x. [| x: A; P(x,b); ALL y. P(x,y)-->y=b |] ==> R
paulson@6121: |] ==> R
paulson@6121:
paulson@6121: \tdx{RepFunI} [| a : A |] ==> f(a) : {\ttlbrace}f(x). x:A{\ttrbrace}
paulson@6121: \tdx{RepFunE} [| b : {\ttlbrace}f(x). x:A{\ttrbrace};
paulson@6121: !!x.[| x:A; b=f(x) |] ==> P |] ==> P
paulson@6121:
paulson@6121: \tdx{separation} a : {\ttlbrace}x:A. P(x){\ttrbrace} <-> a:A & P(a)
paulson@6121: \tdx{CollectI} [| a:A; P(a) |] ==> a : {\ttlbrace}x:A. P(x){\ttrbrace}
paulson@6121: \tdx{CollectE} [| a : {\ttlbrace}x:A. P(x){\ttrbrace}; [| a:A; P(a) |] ==> R |] ==> R
paulson@6121: \tdx{CollectD1} a : {\ttlbrace}x:A. P(x){\ttrbrace} ==> a:A
paulson@6121: \tdx{CollectD2} a : {\ttlbrace}x:A. P(x){\ttrbrace} ==> P(a)
paulson@6121: \end{ttbox}
paulson@6121: \caption{Replacement and separation} \label{zf-lemmas2}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{UnionI} [| B: C; A: B |] ==> A: Union(C)
paulson@6121: \tdx{UnionE} [| A : Union(C); !!B.[| A: B; B: C |] ==> R |] ==> R
paulson@6121:
paulson@6121: \tdx{InterI} [| !!x. x: C ==> A: x; c:C |] ==> A : Inter(C)
paulson@6121: \tdx{InterD} [| A : Inter(C); B : C |] ==> A : B
paulson@6121: \tdx{InterE} [| A : Inter(C); A:B ==> R; ~ B:C ==> R |] ==> R
paulson@6121:
paulson@6121: \tdx{UN_I} [| a: A; b: B(a) |] ==> b: (UN x:A. B(x))
paulson@6121: \tdx{UN_E} [| b : (UN x:A. B(x)); !!x.[| x: A; b: B(x) |] ==> R
paulson@6121: |] ==> R
paulson@6121:
paulson@6121: \tdx{INT_I} [| !!x. x: A ==> b: B(x); a: A |] ==> b: (INT x:A. B(x))
paulson@6121: \tdx{INT_E} [| b : (INT x:A. B(x)); a: A |] ==> b : B(a)
paulson@6121: \end{ttbox}
paulson@6121: \caption{General union and intersection} \label{zf-lemmas3}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: %%% upair.ML
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{pairing} a:Upair(b,c) <-> (a=b | a=c)
paulson@6121: \tdx{UpairI1} a : Upair(a,b)
paulson@6121: \tdx{UpairI2} b : Upair(a,b)
paulson@6121: \tdx{UpairE} [| a : Upair(b,c); a = b ==> P; a = c ==> P |] ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Unordered pairs} \label{zf-upair1}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{UnI1} c : A ==> c : A Un B
paulson@6121: \tdx{UnI2} c : B ==> c : A Un B
paulson@6121: \tdx{UnCI} (~c : B ==> c : A) ==> c : A Un B
paulson@6121: \tdx{UnE} [| c : A Un B; c:A ==> P; c:B ==> P |] ==> P
paulson@6121:
paulson@6121: \tdx{IntI} [| c : A; c : B |] ==> c : A Int B
paulson@6121: \tdx{IntD1} c : A Int B ==> c : A
paulson@6121: \tdx{IntD2} c : A Int B ==> c : B
paulson@6121: \tdx{IntE} [| c : A Int B; [| c:A; c:B |] ==> P |] ==> P
paulson@6121:
paulson@6121: \tdx{DiffI} [| c : A; ~ c : B |] ==> c : A - B
paulson@6121: \tdx{DiffD1} c : A - B ==> c : A
paulson@6121: \tdx{DiffD2} c : A - B ==> c ~: B
paulson@6121: \tdx{DiffE} [| c : A - B; [| c:A; ~ c:B |] ==> P |] ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Union, intersection, difference} \label{zf-Un}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{consI1} a : cons(a,B)
paulson@6121: \tdx{consI2} a : B ==> a : cons(b,B)
paulson@6121: \tdx{consCI} (~ a:B ==> a=b) ==> a: cons(b,B)
paulson@6121: \tdx{consE} [| a : cons(b,A); a=b ==> P; a:A ==> P |] ==> P
paulson@6121:
paulson@6121: \tdx{singletonI} a : {\ttlbrace}a{\ttrbrace}
paulson@6121: \tdx{singletonE} [| a : {\ttlbrace}b{\ttrbrace}; a=b ==> P |] ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Finite and singleton sets} \label{zf-upair2}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{succI1} i : succ(i)
paulson@6121: \tdx{succI2} i : j ==> i : succ(j)
paulson@6121: \tdx{succCI} (~ i:j ==> i=j) ==> i: succ(j)
paulson@6121: \tdx{succE} [| i : succ(j); i=j ==> P; i:j ==> P |] ==> P
paulson@6121: \tdx{succ_neq_0} [| succ(n)=0 |] ==> P
paulson@6121: \tdx{succ_inject} succ(m) = succ(n) ==> m=n
paulson@6121: \end{ttbox}
paulson@6121: \caption{The successor function} \label{zf-succ}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{the_equality} [| P(a); !!x. P(x) ==> x=a |] ==> (THE x. P(x)) = a
paulson@6121: \tdx{theI} EX! x. P(x) ==> P(THE x. P(x))
paulson@6121:
paulson@6121: \tdx{if_P} P ==> (if P then a else b) = a
paulson@6121: \tdx{if_not_P} ~P ==> (if P then a else b) = b
paulson@6121:
paulson@6121: \tdx{mem_asym} [| a:b; b:a |] ==> P
paulson@6121: \tdx{mem_irrefl} a:a ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Descriptions; non-circularity} \label{zf-the}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Unordered pairs and finite sets}
paulson@6121: Figure~\ref{zf-upair1} presents the principle of unordered pairing, along
paulson@6121: with its derived rules. Binary union and intersection are defined in terms
paulson@6121: of unordered pairs (Fig.\ts\ref{zf-Un}).  Set difference is also included.  The
paulson@6121: rule \tdx{UnCI} is useful for classical reasoning about unions,
paulson@6121: like \texttt{disjCI}\@; it supersedes \tdx{UnI1} and
paulson@6121: \tdx{UnI2}, but these rules are often easier to work with. For
paulson@6121: intersection and difference we have both elimination and destruction rules.
paulson@6121: Again, there is no reason to provide a minimal rule set.
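paulson@6121: 
paulson@6121: These rules equip the classical reasoner to dispose of routine facts
paulson@6121: about the binary operators.  A typical illustrative goal (not from the
paulson@6121: distribution) is the distributive law:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "A Int (B Un C) = (A Int B) Un (A Int C)";
paulson@6121: by (Blast_tac 1);
paulson@6121: \end{ttbox}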
paulson@6121:
paulson@6121: Figure~\ref{zf-upair2} is concerned with finite sets: it presents rules
paulson@6121: for~\texttt{cons}, the finite set constructor, and rules for singleton
paulson@6121: sets. Figure~\ref{zf-succ} presents derived rules for the successor
paulson@6121: function, which is defined in terms of~\texttt{cons}. The proof that {\tt
paulson@6121: succ} is injective appears to require the Axiom of Foundation.
paulson@6121:
paulson@6121: Definite descriptions (\sdx{THE}) are defined in terms of the singleton
paulson@6121: set~$\{0\}$, but their derived rules fortunately hide this
paulson@6121: (Fig.\ts\ref{zf-the}). The rule~\tdx{theI} is difficult to apply
paulson@6121: because of the two occurrences of~$\Var{P}$. However,
paulson@6121: \tdx{the_equality} does not have this problem and the files contain
paulson@6121: many examples of its use.
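paulson@6121: 
paulson@6121: As a tiny illustration (not from the distribution), \tdx{the_equality}
paulson@6121: disposes of a description whose predicate determines a unique value:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "(THE x. x=a) = a";
paulson@6121: by (rtac the_equality 1);
paulson@6121: by (rtac refl 1);
paulson@6121: by (assume_tac 1);
paulson@6121: \end{ttbox}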
paulson@6121:
paulson@6121: Finally, the impossibility of having both $a\in b$ and $b\in a$
paulson@6121: (\tdx{mem_asym}) is proved by applying the Axiom of Foundation to
paulson@6121: the set $\{a,b\}$. The impossibility of $a\in a$ is a trivial consequence.
paulson@6121:
paulson@6121: See the file \texttt{ZF/upair.ML} for full proofs of the rules discussed in
paulson@6121: this section.
paulson@6121:
paulson@6121:
paulson@6121: %%% subset.ML
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{Union_upper} B:A ==> B <= Union(A)
paulson@6121: \tdx{Union_least} [| !!x. x:A ==> x<=C |] ==> Union(A) <= C
paulson@6121:
paulson@6121: \tdx{Inter_lower} B:A ==> Inter(A) <= B
paulson@6121: \tdx{Inter_greatest} [| a:A; !!x. x:A ==> C<=x |] ==> C <= Inter(A)
paulson@6121:
paulson@6121: \tdx{Un_upper1} A <= A Un B
paulson@6121: \tdx{Un_upper2} B <= A Un B
paulson@6121: \tdx{Un_least} [| A<=C; B<=C |] ==> A Un B <= C
paulson@6121:
paulson@6121: \tdx{Int_lower1} A Int B <= A
paulson@6121: \tdx{Int_lower2} A Int B <= B
paulson@6121: \tdx{Int_greatest} [| C<=A; C<=B |] ==> C <= A Int B
paulson@6121:
paulson@6121: \tdx{Diff_subset} A-B <= A
paulson@6121: \tdx{Diff_contains} [| C<=A; C Int B = 0 |] ==> C <= A-B
paulson@6121:
paulson@6121: \tdx{Collect_subset} Collect(A,P) <= A
paulson@6121: \end{ttbox}
paulson@6121: \caption{Subset and lattice properties} \label{zf-subset}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Subset and lattice properties}
paulson@6121: The subset relation is a complete lattice. Unions form least upper bounds;
paulson@6121: non-empty intersections form greatest lower bounds. Figure~\ref{zf-subset}
paulson@6121: shows the corresponding rules. A few other laws involving subsets are
paulson@6121: included. Proofs are in the file \texttt{ZF/subset.ML}.
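paulson@6121: 
paulson@6121: The lattice rules compose well with transitivity of the subset
paulson@6121: relation.  An illustrative script (not from the distribution):
paulson@6121: \begin{ttbox}
paulson@6121: Goal "A Int B <= A Un B";
paulson@6121: by (rtac subset_trans 1);
paulson@6121: by (rtac Int_lower1 1);
paulson@6121: by (rtac Un_upper1 1);
paulson@6121: \end{ttbox}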
paulson@6121:
paulson@6121: Reasoning directly about subsets often yields clearer proofs than
paulson@6121: reasoning about the membership relation. Section~\ref{sec:ZF-pow-example}
paulson@6121: below presents an example of this, proving the equation ${{\tt Pow}(A)\cap
paulson@6121: {\tt Pow}(B)}= {\tt Pow}(A\cap B)$.
paulson@6121:
paulson@6121: %%% pair.ML
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{Pair_inject1}    <a,b> = <c,d> ==> a=c
paulson@6121: \tdx{Pair_inject2}    <a,b> = <c,d> ==> b=d
paulson@6121: \tdx{Pair_inject}     [| <a,b> = <c,d>;  [| a=c; b=d |] ==> P |] ==> P
paulson@6121: \tdx{Pair_neq_0}      <a,b>=0 ==> P
paulson@6121: 
paulson@6121: \tdx{fst_conv}        fst(<a,b>) = a
paulson@6121: \tdx{snd_conv}        snd(<a,b>) = b
paulson@6121: \tdx{split}           split(\%x y. c(x,y), <a,b>) = c(a,b)
paulson@6121:
paulson@6121: \tdx{SigmaI}     [| a:A;  b:B(a) |] ==> <a,b> : Sigma(A,B)
paulson@6121: 
paulson@6121: \tdx{SigmaE}     [| c: Sigma(A,B);
paulson@6121:                     !!x y.[| x:A;  y:B(x);  c=<x,y> |] ==> P |] ==> P
paulson@6121: 
paulson@6121: \tdx{SigmaE2}    [| <a,b> : Sigma(A,B);
paulson@6121:                     [| a:A;  b:B(a) |] ==> P |] ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Ordered pairs; projections; general sums} \label{zf-pair}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Ordered pairs} \label{sec:pairs}
paulson@6121:
paulson@6121: Figure~\ref{zf-pair} presents the rules governing ordered pairs,
paulson@6121: projections and general sums. File \texttt{ZF/pair.ML} contains the
paulson@6121: full (and tedious) proof that $\{\{a\},\{a,b\}\}$ functions as an ordered
paulson@6121: pair. This property is expressed as two destruction rules,
paulson@6121: \tdx{Pair_inject1} and \tdx{Pair_inject2}, and equivalently
paulson@6121: as the elimination rule \tdx{Pair_inject}.
paulson@6121:
paulson@6121: The rule \tdx{Pair_neq_0} asserts $\pair{a,b}\neq\emptyset$. This
paulson@6121: is a property of $\{\{a\},\{a,b\}\}$, and need not hold for other
paulson@6121: encodings of ordered pairs. The non-standard ordered pairs mentioned below
paulson@6121: satisfy $\pair{\emptyset;\emptyset}=\emptyset$.
paulson@6121:
paulson@6121: The natural deduction rules \tdx{SigmaI} and \tdx{SigmaE}
paulson@6121: assert that \cdx{Sigma}$(A,B)$ consists of all pairs of the form
paulson@6121: $\pair{x,y}$, for $x\in A$ and $y\in B(x)$. The rule \tdx{SigmaE2}
paulson@6121: merely states that $\pair{a,b}\in \texttt{Sigma}(A,B)$ implies $a\in A$ and
paulson@6121: $b\in B(a)$.
paulson@6121:
paulson@6121: In addition, it is possible to use tuples as patterns in abstractions:
paulson@6121: \begin{center}
paulson@6121: {\tt\%<$x$,$y$>. $t$} \quad stands for\quad \texttt{split(\%$x$ $y$.\ $t$)}
paulson@6121: \end{center}
paulson@6121: Nested patterns are translated recursively:
paulson@6121: {\tt\%<$x$,$y$,$z$>. $t$} $\leadsto$ {\tt\%<$x$,<$y$,$z$>>. $t$} $\leadsto$
paulson@6121: \texttt{split(\%$x$.\%<$y$,$z$>. $t$)} $\leadsto$ \texttt{split(\%$x$. split(\%$y$
paulson@6121: $z$.\ $t$))}. The reverse translation is performed upon printing.
paulson@6121: \begin{warn}
paulson@6121: The translation between patterns and \texttt{split} is performed automatically
paulson@6121: by the parser and printer. Thus the internal and external form of a term
paulson@6121: may differ, which affects proofs. For example the term {\tt
paulson@6121:   (\%<x,y>.<y,x>)(<a,b>)} requires the theorem \texttt{split} to rewrite to
paulson@6121:   {\tt<b,a>}.
paulson@6121: \end{warn}
paulson@6121: In addition to explicit $\lambda$-abstractions, patterns can be used in any
paulson@6121: variable binding construct which is internally described by a
paulson@6121: $\lambda$-abstraction. Here are some important examples:
paulson@6121: \begin{description}
paulson@6121: \item[Let:] \texttt{let {\it pattern} = $t$ in $u$}
paulson@6121: \item[Choice:] \texttt{THE~{\it pattern}~.~$P$}
paulson@6121: \item[Set operations:] \texttt{UN~{\it pattern}:$A$.~$B$}
paulson@6121: \item[Comprehension:] \texttt{{\ttlbrace}~{\it pattern}:$A$~.~$P$~{\ttrbrace}}
paulson@6121: \end{description}
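paulson@6121: 
paulson@6121: For example, a pattern abstraction over pairs reduces by the theorem
paulson@6121: \texttt{split}.  The following illustrative goal (not from the
paulson@6121: distribution) should succeed, assuming \texttt{split} is among the
paulson@6121: default rewrite rules:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "split(\%x y. <y,x>, <a,b>) = <b,a>";
paulson@6121: by (Simp_tac 1);
paulson@6121: \end{ttbox}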
paulson@6121:
paulson@6121:
paulson@6121: %%% domrange.ML
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{domainI}        <a,b>: r ==> a : domain(r)
paulson@6121: \tdx{domainE}        [| a : domain(r);  !!y. <a,y>: r ==> P |] ==> P
paulson@6121: \tdx{domain_subset}  domain(Sigma(A,B)) <= A
paulson@6121: 
paulson@6121: \tdx{rangeI}         <a,b>: r ==> b : range(r)
paulson@6121: \tdx{rangeE}         [| b : range(r);  !!x. <x,b>: r ==> P |] ==> P
paulson@6121: \tdx{range_subset}   range(A*B) <= B
paulson@6121: 
paulson@6121: \tdx{fieldI1}        <a,x>: r ==> a : field(r)
paulson@6121: \tdx{fieldI2}        <x,a>: r ==> a : field(r)
paulson@6121: \tdx{fieldCI}        (~ <c,a>:r ==> <a,b>: r) ==> a : field(r)
paulson@6121: 
paulson@6121: \tdx{fieldE}         [| a : field(r);
paulson@6121:                         !!x. <a,x>: r ==> P;
paulson@6121:                         !!x. <x,a>: r ==> P
paulson@6121:                      |] ==> P
paulson@6121:
paulson@6121: \tdx{field_subset} field(A*A) <= A
paulson@6121: \end{ttbox}
paulson@6121: \caption{Domain, range and field of a relation} \label{zf-domrange}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{imageI}         [| <a,b>: r;  a:A |] ==> b : r``A
paulson@6121: \tdx{imageE}         [| b: r``A;  !!x.[| <x,b>: r;  x:A |] ==> P |] ==> P
paulson@6121: 
paulson@6121: \tdx{vimageI}        [| <a,b>: r;  b:B |] ==> a : r-``B
paulson@6121: \tdx{vimageE}        [| a: r-``B;  !!x.[| <a,x>: r;  x:B |] ==> P |] ==> P
paulson@6121: \end{ttbox}
paulson@6121: \caption{Image and inverse image} \label{zf-domrange2}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Relations}
paulson@6121: Figure~\ref{zf-domrange} presents rules involving relations, which are sets
paulson@6121: of ordered pairs. The converse of a relation~$r$ is the set of all pairs
paulson@6121: $\pair{y,x}$ such that $\pair{x,y}\in r$; if $r$ is a function, then
paulson@6121: {\cdx{converse}$(r)$} is its inverse. The rules for the domain
paulson@6121: operation, namely \tdx{domainI} and~\tdx{domainE}, assert that
paulson@6121: \cdx{domain}$(r)$ consists of all~$x$ such that $r$ contains
paulson@6121: some pair of the form~$\pair{x,y}$. The range operation is similar, and
paulson@6121: the field of a relation is merely the union of its domain and range.
paulson@6121:
paulson@6121: Figure~\ref{zf-domrange2} presents rules for images and inverse images.
paulson@6121: Note that these operations are generalisations of range and domain,
paulson@6121: respectively. See the file \texttt{ZF/domrange.ML} for derivations of the
paulson@6121: rules.
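paulson@6121: 
paulson@6121: Like the other natural deduction rules, those for images let the
paulson@6121: classical reasoner prove small identities automatically; an
paulson@6121: illustrative goal (not from the distribution) is
paulson@6121: \begin{ttbox}
paulson@6121: Goal "r``(A Un B) = (r``A) Un (r``B)";
paulson@6121: by (Blast_tac 1);
paulson@6121: \end{ttbox}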
paulson@6121:
paulson@6121:
paulson@6121: %%% func.ML
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{fun_is_rel} f: Pi(A,B) ==> f <= Sigma(A,B)
paulson@6121:
paulson@6121: \tdx{apply_equality}  [| <a,b>: f;  f: Pi(A,B) |] ==> f`a = b
paulson@6121: \tdx{apply_equality2} [| <a,b>: f;  <a,c>: f;  f: Pi(A,B) |] ==> b=c
paulson@6121:
paulson@6121: \tdx{apply_type} [| f: Pi(A,B); a:A |] ==> f`a : B(a)
paulson@6121: \tdx{apply_Pair}      [| f: Pi(A,B);  a:A |] ==> <a,f`a>: f
paulson@6121: \tdx{apply_iff}       f: Pi(A,B) ==> <a,b>: f <-> a:A & f`a = b
paulson@6121:
paulson@6121: \tdx{fun_extension} [| f : Pi(A,B); g: Pi(A,D);
paulson@6121: !!x. x:A ==> f`x = g`x |] ==> f=g
paulson@6121:
paulson@6121: \tdx{domain_type}     [| <a,b>: f;  f: Pi(A,B) |] ==> a : A
paulson@6121: \tdx{range_type}      [| <a,b>: f;  f: Pi(A,B) |] ==> b : B(a)
paulson@6121:
paulson@6121: \tdx{Pi_type} [| f: A->C; !!x. x:A ==> f`x: B(x) |] ==> f: Pi(A,B)
paulson@6121: \tdx{domain_of_fun} f: Pi(A,B) ==> domain(f)=A
paulson@6121: \tdx{range_of_fun} f: Pi(A,B) ==> f: A->range(f)
paulson@6121:
paulson@6121: \tdx{restrict} a : A ==> restrict(f,A) ` a = f`a
paulson@6121: \tdx{restrict_type} [| !!x. x:A ==> f`x: B(x) |] ==>
paulson@6121: restrict(f,A) : Pi(A,B)
paulson@6121: \end{ttbox}
paulson@6121: \caption{Functions} \label{zf-func1}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@8249: \tdx{lamI}      a:A ==> <a,b(a)> : (lam x:A. b(x))
paulson@8249: \tdx{lamE}      [| p: (lam x:A. b(x));  !!x.[| x:A; p=<x,b(x)> |] ==> P
paulson@8249: |] ==> P
paulson@8249:
paulson@8249: \tdx{lam_type} [| !!x. x:A ==> b(x): B(x) |] ==> (lam x:A. b(x)) : Pi(A,B)
paulson@8249:
paulson@8249: \tdx{beta} a : A ==> (lam x:A. b(x)) ` a = b(a)
paulson@8249: \tdx{eta} f : Pi(A,B) ==> (lam x:A. f`x) = f
paulson@6121: \end{ttbox}
paulson@6121: \caption{$\lambda$-abstraction} \label{zf-lam}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{fun_empty} 0: 0->0
paulson@6121: \tdx{fun_single}     {\ttlbrace}<a,b>{\ttrbrace} : {\ttlbrace}a{\ttrbrace} -> {\ttlbrace}b{\ttrbrace}
paulson@6121:
paulson@6121: \tdx{fun_disjoint_Un} [| f: A->B; g: C->D; A Int C = 0 |] ==>
paulson@6121: (f Un g) : (A Un C) -> (B Un D)
paulson@6121:
paulson@6121: \tdx{fun_disjoint_apply1} [| a:A; f: A->B; g: C->D; A Int C = 0 |] ==>
paulson@6121: (f Un g)`a = f`a
paulson@6121:
paulson@6121: \tdx{fun_disjoint_apply2} [| c:C; f: A->B; g: C->D; A Int C = 0 |] ==>
paulson@6121: (f Un g)`c = g`c
paulson@6121: \end{ttbox}
paulson@6121: \caption{Constructing functions from smaller sets} \label{zf-func2}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Functions}
paulson@6121: Functions, represented by graphs, are notoriously difficult to reason
paulson@6121: about. The file \texttt{ZF/func.ML} derives many rules, which overlap more
paulson@6121: than they ought. This section presents the more important rules.
paulson@6121:
paulson@6121: Figure~\ref{zf-func1} presents the basic properties of \cdx{Pi}$(A,B)$,
paulson@6121: the generalized function space. For example, if $f$ is a function and
paulson@6121: $\pair{a,b}\in f$, then $f`a=b$ (\tdx{apply_equality}). Two functions
paulson@6121: are equal provided they have equal domains and deliver equal results
paulson@6121: (\tdx{fun_extension}).
paulson@6121:
paulson@6121: By \tdx{Pi_type}, a function typing of the form $f\in A\to C$ can be
paulson@6121: refined to the dependent typing $f\in\prod@{x\in A}B(x)$, given a suitable
paulson@6121: family of sets $\{B(x)\}@{x\in A}$. Conversely, by \tdx{range_of_fun},
paulson@6121: any dependent typing can be flattened to yield a function type of the form
paulson@6121: $A\to C$; here, $C={\tt range}(f)$.
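paulson@6121: 
paulson@6121: For example, \tdx{apply_type} specialises to the ordinary function
paulson@6121: space, since \texttt{A->B} abbreviates \texttt{Pi(A,\%x.B)}.  An
paulson@6121: illustrative script (not from the distribution):
paulson@6121: \begin{ttbox}
paulson@6121: Goal "[| f: A->B;  a:A |] ==> f`a : B";
paulson@6121: by (etac apply_type 1);
paulson@6121: by (assume_tac 1);
paulson@6121: \end{ttbox}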
paulson@6121:
paulson@6121: Among the laws for $\lambda$-abstraction, \tdx{lamI} and \tdx{lamE}
paulson@6121: describe the graph of the generated function, while \tdx{beta} and
paulson@6121: \tdx{eta} are the standard conversions. We essentially have a
paulson@6121: dependently-typed $\lambda$-calculus (Fig.\ts\ref{zf-lam}).
paulson@6121:
paulson@6121: Figure~\ref{zf-func2} presents some rules that can be used to construct
paulson@6121: functions explicitly. We start with functions consisting of at most one
paulson@6121: pair, and may form the union of two functions provided their domains are
paulson@6121: disjoint.
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{Int_absorb} A Int A = A
paulson@6121: \tdx{Int_commute} A Int B = B Int A
paulson@6121: \tdx{Int_assoc} (A Int B) Int C = A Int (B Int C)
paulson@6121: \tdx{Int_Un_distrib} (A Un B) Int C = (A Int C) Un (B Int C)
paulson@6121:
paulson@6121: \tdx{Un_absorb} A Un A = A
paulson@6121: \tdx{Un_commute} A Un B = B Un A
paulson@6121: \tdx{Un_assoc} (A Un B) Un C = A Un (B Un C)
paulson@6121: \tdx{Un_Int_distrib} (A Int B) Un C = (A Un C) Int (B Un C)
paulson@6121:
paulson@6121: \tdx{Diff_cancel} A-A = 0
paulson@6121: \tdx{Diff_disjoint} A Int (B-A) = 0
paulson@6121: \tdx{Diff_partition} A<=B ==> A Un (B-A) = B
paulson@6121: \tdx{double_complement} [| A<=B; B<= C |] ==> (B - (C-A)) = A
paulson@6121: \tdx{Diff_Un} A - (B Un C) = (A-B) Int (A-C)
paulson@6121: \tdx{Diff_Int} A - (B Int C) = (A-B) Un (A-C)
paulson@6121:
paulson@6121: \tdx{Union_Un_distrib} Union(A Un B) = Union(A) Un Union(B)
paulson@6121: \tdx{Inter_Un_distrib} [| a:A; b:B |] ==>
paulson@6121: Inter(A Un B) = Inter(A) Int Inter(B)
paulson@6121:
paulson@6121: \tdx{Int_Union_RepFun} A Int Union(B) = (UN C:B. A Int C)
paulson@6121:
paulson@6121: \tdx{Un_Inter_RepFun} b:B ==>
paulson@6121: A Un Inter(B) = (INT C:B. A Un C)
paulson@6121:
paulson@6121: \tdx{SUM_Un_distrib1} (SUM x:A Un B. C(x)) =
paulson@6121: (SUM x:A. C(x)) Un (SUM x:B. C(x))
paulson@6121:
paulson@6121: \tdx{SUM_Un_distrib2} (SUM x:C. A(x) Un B(x)) =
paulson@6121: (SUM x:C. A(x)) Un (SUM x:C. B(x))
paulson@6121:
paulson@6121: \tdx{SUM_Int_distrib1} (SUM x:A Int B. C(x)) =
paulson@6121: (SUM x:A. C(x)) Int (SUM x:B. C(x))
paulson@6121:
paulson@6121: \tdx{SUM_Int_distrib2} (SUM x:C. A(x) Int B(x)) =
paulson@6121: (SUM x:C. A(x)) Int (SUM x:C. B(x))
paulson@6121: \end{ttbox}
paulson@6121: \caption{Equalities} \label{zf-equalities}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: %\begin{constants}
paulson@6121: % \cdx{1} & $i$ & & $\{\emptyset\}$ \\
paulson@6121: % \cdx{bool} & $i$ & & the set $\{\emptyset,1\}$ \\
paulson@6121: % \cdx{cond} & $[i,i,i]\To i$ & & conditional for \texttt{bool} \\
paulson@6121: % \cdx{not} & $i\To i$ & & negation for \texttt{bool} \\
paulson@6121: % \sdx{and} & $[i,i]\To i$ & Left 70 & conjunction for \texttt{bool} \\
paulson@6121: % \sdx{or} & $[i,i]\To i$ & Left 65 & disjunction for \texttt{bool} \\
paulson@6121: % \sdx{xor} & $[i,i]\To i$ & Left 65 & exclusive-or for \texttt{bool}
paulson@6121: %\end{constants}
paulson@6121: %
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{bool_def} bool == {\ttlbrace}0,1{\ttrbrace}
paulson@6121: \tdx{cond_def} cond(b,c,d) == if b=1 then c else d
paulson@6121: \tdx{not_def} not(b) == cond(b,0,1)
paulson@6121: \tdx{and_def} a and b == cond(a,b,0)
paulson@6121: \tdx{or_def} a or b == cond(a,1,b)
paulson@6121: \tdx{xor_def} a xor b == cond(a,not(b),b)
paulson@6121:
paulson@6121: \tdx{bool_1I} 1 : bool
paulson@6121: \tdx{bool_0I} 0 : bool
paulson@6121: \tdx{boolE} [| c: bool; c=1 ==> P; c=0 ==> P |] ==> P
paulson@6121: \tdx{cond_1} cond(1,c,d) = c
paulson@6121: \tdx{cond_0} cond(0,c,d) = d
paulson@6121: \end{ttbox}
paulson@6121: \caption{The booleans} \label{zf-bool}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@6121: \section{Further developments}
paulson@6121: The next group of developments is complex and extensive, and only
paulson@6121: highlights can be covered here. It involves many theories and ML files of
paulson@6121: proofs.
paulson@6121:
paulson@6121: Figure~\ref{zf-equalities} presents commutative, associative, distributive,
paulson@6121: and idempotency laws of union and intersection, along with other equations.
paulson@6121: See file \texttt{ZF/equalities.ML}.
paulson@6121:
paulson@6121: Theory \thydx{Bool} defines $\{0,1\}$ as a set of booleans, with the usual
wenzelm@9695: operators including a conditional (Fig.\ts\ref{zf-bool}). Although ZF is a
paulson@6121: first-order theory, you can obtain the effect of higher-order logic using
paulson@6121: \texttt{bool}-valued functions, for example. The constant~\texttt{1} is
paulson@6121: translated to \texttt{succ(0)}.
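paulson@6121: 
paulson@6121: For example, the conditional equations evaluate closed boolean
paulson@6121: expressions by rewriting.  The following illustrative goal (not from
paulson@6121: the distribution) should succeed, assuming the boolean laws are
paulson@6121: default rewrite rules:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "1 and (0 or 1) = 1";
paulson@6121: by (Simp_tac 1);
paulson@6121: \end{ttbox}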
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \index{*"+ symbol}
paulson@6121: \begin{constants}
paulson@6121: \it symbol & \it meta-type & \it priority & \it description \\
paulson@6121: \tt + & $[i,i]\To i$ & Right 65 & disjoint union operator\\
paulson@6121: \cdx{Inl}~~\cdx{Inr} & $i\To i$ & & injections\\
paulson@6121: \cdx{case} & $[i\To i,i\To i, i]\To i$ & & conditional for $A+B$
paulson@6121: \end{constants}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{sum_def} A+B == {\ttlbrace}0{\ttrbrace}*A Un {\ttlbrace}1{\ttrbrace}*B
paulson@6121: \tdx{Inl_def} Inl(a) == <0,a>
paulson@6121: \tdx{Inr_def} Inr(b) == <1,b>
paulson@6121: \tdx{case_def} case(c,d,u) == split(\%y z. cond(y, d(z), c(z)), u)
paulson@6121:
paulson@6121: \tdx{sum_InlI} a : A ==> Inl(a) : A+B
paulson@6121: \tdx{sum_InrI} b : B ==> Inr(b) : A+B
paulson@6121:
paulson@6121: \tdx{Inl_inject} Inl(a)=Inl(b) ==> a=b
paulson@6121: \tdx{Inr_inject} Inr(a)=Inr(b) ==> a=b
paulson@6121: \tdx{Inl_neq_Inr} Inl(a)=Inr(b) ==> P
paulson@6121:
paulson@6121: \tdx{sumE2} u: A+B ==> (EX x. x:A & u=Inl(x)) | (EX y. y:B & u=Inr(y))
paulson@6121:
paulson@6121: \tdx{case_Inl} case(c,d,Inl(a)) = c(a)
paulson@6121: \tdx{case_Inr} case(c,d,Inr(b)) = d(b)
paulson@6121: \end{ttbox}
paulson@6121: \caption{Disjoint unions} \label{zf-sum}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@9584: \subsection{Disjoint unions}
paulson@9584:
paulson@6121: Theory \thydx{Sum} defines the disjoint union of two sets, with
paulson@6121: injections and a case analysis operator (Fig.\ts\ref{zf-sum}). Disjoint
paulson@6121: unions play a role in datatype definitions, particularly when there is
paulson@6121: mutual recursion~\cite{paulson-set-II}.
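paulson@6121: 
paulson@6121: The case operator obeys the obvious equations, so small computations
paulson@6121: are immediate; an illustrative goal (not from the distribution), which
paulson@6121: should follow by rewriting with \tdx{case_Inl}:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "case(\%a. Inr(a), \%b. Inl(b), Inl(x)) = Inr(x)";
paulson@6121: by (Simp_tac 1);
paulson@6121: \end{ttbox}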
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{QPair_def}       <a;b> == a+b
paulson@6121: \tdx{qsplit_def}      qsplit(c,p)  == THE y. EX a b. p=<a;b> & y=c(a,b)
paulson@6121: \tdx{qfsplit_def}     qfsplit(R,z) == EX x y. z=<x;y> & R(x,y)
paulson@6121: \tdx{qconverse_def}   qconverse(r) == {\ttlbrace}z. w:r, EX x y. w=<x;y> & z=<y;x>{\ttrbrace}
paulson@6121: \tdx{QSigma_def}      QSigma(A,B)  == UN x:A. UN y:B(x). {\ttlbrace}<x;y>{\ttrbrace}
paulson@6121:
paulson@6121: \tdx{qsum_def} A <+> B == ({\ttlbrace}0{\ttrbrace} <*> A) Un ({\ttlbrace}1{\ttrbrace} <*> B)
paulson@6121: \tdx{QInl_def} QInl(a) == <0;a>
paulson@6121: \tdx{QInr_def} QInr(b) == <1;b>
paulson@6121: \tdx{qcase_def} qcase(c,d) == qsplit(\%y z. cond(y, d(z), c(z)))
paulson@6121: \end{ttbox}
paulson@6121: \caption{Non-standard pairs, products and sums} \label{zf-qpair}
paulson@6121: \end{figure}
paulson@6121:
paulson@9584:
paulson@9584: \subsection{Non-standard ordered pairs}
paulson@9584:
paulson@6121: Theory \thydx{QPair} defines a notion of ordered pair that admits
paulson@6121: non-well-founded tupling (Fig.\ts\ref{zf-qpair}). Such pairs are written
paulson@6121: {\tt<$a$;$b$>}. It also defines the eliminator \cdx{qsplit}, the
paulson@6121: converse operator \cdx{qconverse}, and the summation operator
paulson@6121: \cdx{QSigma}. These are completely analogous to the corresponding
paulson@6121: versions for standard ordered pairs. The theory goes on to define a
paulson@6121: non-standard notion of disjoint sum using non-standard pairs. All of these
paulson@6121: concepts satisfy the same properties as their standard counterparts; in
paulson@6121: addition, {\tt<$a$;$b$>} is continuous. The theory supports coinductive
paulson@6592: definitions, for example of infinite lists~\cite{paulson-mscs}.
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{bnd_mono_def} bnd_mono(D,h) ==
paulson@6121: h(D)<=D & (ALL W X. W<=X --> X<=D --> h(W) <= h(X))
paulson@6121:
paulson@6121: \tdx{lfp_def} lfp(D,h) == Inter({\ttlbrace}X: Pow(D). h(X) <= X{\ttrbrace})
paulson@6121: \tdx{gfp_def} gfp(D,h) == Union({\ttlbrace}X: Pow(D). X <= h(X){\ttrbrace})
paulson@6121:
paulson@6121:
paulson@6121: \tdx{lfp_lowerbound} [| h(A) <= A; A<=D |] ==> lfp(D,h) <= A
paulson@6121:
paulson@6121: \tdx{lfp_subset} lfp(D,h) <= D
paulson@6121:
paulson@6121: \tdx{lfp_greatest} [| bnd_mono(D,h);
paulson@6121: !!X. [| h(X) <= X; X<=D |] ==> A<=X
paulson@6121: |] ==> A <= lfp(D,h)
paulson@6121:
paulson@6121: \tdx{lfp_Tarski} bnd_mono(D,h) ==> lfp(D,h) = h(lfp(D,h))
paulson@6121:
paulson@6121: \tdx{induct} [| a : lfp(D,h); bnd_mono(D,h);
paulson@6121: !!x. x : h(Collect(lfp(D,h),P)) ==> P(x)
paulson@6121: |] ==> P(a)
paulson@6121:
paulson@6121: \tdx{lfp_mono} [| bnd_mono(D,h); bnd_mono(E,i);
paulson@6121: !!X. X<=D ==> h(X) <= i(X)
paulson@6121: |] ==> lfp(D,h) <= lfp(E,i)
paulson@6121:
paulson@6121: \tdx{gfp_upperbound} [| A <= h(A); A<=D |] ==> A <= gfp(D,h)
paulson@6121:
paulson@6121: \tdx{gfp_subset} gfp(D,h) <= D
paulson@6121:
paulson@6121: \tdx{gfp_least} [| bnd_mono(D,h);
paulson@6121: !!X. [| X <= h(X); X<=D |] ==> X<=A
paulson@6121: |] ==> gfp(D,h) <= A
paulson@6121:
paulson@6121: \tdx{gfp_Tarski} bnd_mono(D,h) ==> gfp(D,h) = h(gfp(D,h))
paulson@6121:
paulson@6121: \tdx{coinduct} [| bnd_mono(D,h); a: X; X <= h(X Un gfp(D,h)); X <= D
paulson@6121: |] ==> a : gfp(D,h)
paulson@6121:
paulson@6121: \tdx{gfp_mono} [| bnd_mono(D,h); D <= E;
paulson@6121: !!X. X<=D ==> h(X) <= i(X)
paulson@6121: |] ==> gfp(D,h) <= gfp(E,i)
paulson@6121: \end{ttbox}
paulson@6121: \caption{Least and greatest fixedpoints} \label{zf-fixedpt}
paulson@6121: \end{figure}
paulson@6121:
paulson@9584:
paulson@9584: \subsection{Least and greatest fixedpoints}
paulson@9584:
paulson@6121: The Knaster-Tarski Theorem states that every monotone function over a
paulson@6121: complete lattice has a fixedpoint. Theory \thydx{Fixedpt} proves the
paulson@6121: Theorem only for a particular lattice, namely the lattice of subsets of a
paulson@6121: set (Fig.\ts\ref{zf-fixedpt}). The theory defines least and greatest
paulson@6121: fixedpoint operators with corresponding induction and coinduction rules.
paulson@6121: These are essential to many definitions that follow, including the natural
paulson@6121: numbers and the transitive closure operator. The (co)inductive definition
paulson@6121: package also uses the fixedpoint operators~\cite{paulson-CADE}. See
wenzelm@6745: Davey and Priestley~\cite{davey-priestley} for more on the Knaster-Tarski
paulson@6121: Theorem and my paper~\cite{paulson-set-II} for discussion of the Isabelle
paulson@6121: proofs.
paulson@6121:
paulson@6121: Monotonicity properties are proved for most of the set-forming operations:
paulson@6121: union, intersection, Cartesian product, image, domain, range, etc. These
paulson@6121: are useful for applying the Knaster-Tarski Fixedpoint Theorem. The proofs
paulson@6121: themselves are trivial applications of Isabelle's classical reasoner. See
paulson@6121: file \texttt{ZF/mono.ML}.
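paulson@6121: 
paulson@6121: A typical monotonicity lemma, with a one-line proof in the style of
paulson@6121: that file (illustrative, not copied from it):
paulson@6121: \begin{ttbox}
paulson@6121: Goal "[| A<=C;  B<=D |] ==> A Un B <= C Un D";
paulson@6121: by (Blast_tac 1);
paulson@6121: \end{ttbox}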
paulson@6121:
paulson@6121:
paulson@9584: \subsection{Finite sets and lists}
paulson@9584:
paulson@9584: Theory \texttt{Finite} (Figure~\ref{zf-fin}) defines the finite set operator;
paulson@9584: ${\tt Fin}(A)$ is the set of all finite sets over~$A$. The theory employs
paulson@9584: Isabelle's inductive definition package, which proves various rules
paulson@9584: automatically. The induction rule shown is stronger than the one proved by
paulson@9584: the package. The theory also defines the set of all finite functions
paulson@9584: between two given sets.
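paulson@9584: 
paulson@9584: The introduction rules can be applied directly.  For instance (an
paulson@9584: illustrative script, not from the distribution), every doubleton
paulson@9584: over~$A$ is finite:
paulson@9584: \begin{ttbox}
paulson@9584: Goal "[| a:A;  b:A |] ==> {\ttlbrace}a,b{\ttrbrace} : Fin(A)";
paulson@9584: by (REPEAT (ares_tac [Fin.consI, Fin.emptyI] 1));
paulson@9584: \end{ttbox}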
paulson@9584:
paulson@9584: \begin{figure}
paulson@9584: \begin{ttbox}
paulson@9584: \tdx{Fin.emptyI} 0 : Fin(A)
paulson@9584: \tdx{Fin.consI} [| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)
paulson@9584:
paulson@9584: \tdx{Fin_induct}
paulson@9584:     [| b: Fin(A);
paulson@9584:        P(0);
paulson@9584:        !!x y. [| x: A; y: Fin(A); x~:y; P(y) |] ==> P(cons(x,y))
paulson@9584:     |] ==> P(b)
paulson@9584:
paulson@9584: \tdx{Fin_mono} A<=B ==> Fin(A) <= Fin(B)
paulson@9584: \tdx{Fin_UnI} [| b: Fin(A); c: Fin(A) |] ==> b Un c : Fin(A)
paulson@9584: \tdx{Fin_UnionI} C : Fin(Fin(A)) ==> Union(C) : Fin(A)
paulson@9584: \tdx{Fin_subset} [| c<=b; b: Fin(A) |] ==> c: Fin(A)
paulson@9584: \end{ttbox}
paulson@9584: \caption{The finite set operator} \label{zf-fin}
paulson@9584: \end{figure}
paulson@9584:
paulson@9584: \begin{figure}
paulson@9584: \begin{constants}
paulson@9584: \it symbol & \it meta-type & \it priority & \it description \\
paulson@9584: \cdx{list} & $i\To i$ && lists over some set\\
paulson@9584: \cdx{list_case} & $[i, [i,i]\To i, i] \To i$ && conditional for $list(A)$ \\
paulson@9584: \cdx{map} & $[i\To i, i] \To i$ & & mapping functional\\
paulson@9584: \cdx{length} & $i\To i$ & & length of a list\\
paulson@9584: \cdx{rev} & $i\To i$ & & reverse of a list\\
paulson@9584: \tt \at & $[i,i]\To i$ & Right 60 & append for lists\\
paulson@9584: \cdx{flat} & $i\To i$ & & append of list of lists
paulson@9584: \end{constants}
paulson@9584:
paulson@9584: \underscoreon %%because @ is used here
paulson@9584: \begin{ttbox}
paulson@9584: \tdx{NilI} Nil : list(A)
paulson@9584: \tdx{ConsI} [| a: A; l: list(A) |] ==> Cons(a,l) : list(A)
paulson@9584:
paulson@9584: \tdx{List.induct}
paulson@9584:     [| l: list(A);
paulson@9584:        P(Nil);
paulson@9584:        !!x y. [| x: A; y: list(A); P(y) |] ==> P(Cons(x,y))
paulson@9584:     |] ==> P(l)
paulson@9584:
paulson@9584: \tdx{Cons_iff} Cons(a,l)=Cons(a',l') <-> a=a' & l=l'
paulson@9584: \tdx{Nil_Cons_iff} ~ Nil=Cons(a,l)
paulson@9584:
paulson@9584: \tdx{list_mono} A<=B ==> list(A) <= list(B)
paulson@9584:
paulson@9584: \tdx{map_ident} l: list(A) ==> map(\%u. u, l) = l
paulson@9584: \tdx{map_compose} l: list(A) ==> map(h, map(j,l)) = map(\%u. h(j(u)), l)
paulson@9584: \tdx{map_app_distrib} xs: list(A) ==> map(h, xs@ys) = map(h,xs) @ map(h,ys)
paulson@9584: \tdx{map_type}
paulson@9584:     [| l: list(A); !!x. x: A ==> h(x): B |] ==> map(h,l) : list(B)
paulson@9584: \tdx{map_flat}
paulson@9584:     ls: list(list(A)) ==> map(h, flat(ls)) = flat(map(map(h),ls))
paulson@9584: \end{ttbox}
paulson@9584: \caption{Lists} \label{zf-list}
paulson@9584: \end{figure}
paulson@9584:
paulson@9584:
paulson@9584: Figure~\ref{zf-list} presents the set of lists over~$A$, ${\tt list}(A)$. The
paulson@9584: definition employs Isabelle's datatype package, which defines the introduction
paulson@9584: and induction rules automatically, as well as the constructors, case operator
paulson@9584: (\verb|list_case|) and recursion operator. The theory then defines the usual
paulson@9584: list functions by primitive recursion. See theory \texttt{List}.
paulson@9584:
paulson@9584:
paulson@9584: \subsection{Miscellaneous}
paulson@9584:
paulson@6121: \begin{figure}
paulson@6121: \begin{constants}
paulson@6121: \it symbol & \it meta-type & \it priority & \it description \\
paulson@6121: \sdx{O} & $[i,i]\To i$ & Right 60 & composition ($\circ$) \\
paulson@6121: \cdx{id} & $i\To i$ & & identity function \\
paulson@6121: \cdx{inj} & $[i,i]\To i$ & & injective function space\\
paulson@6121: \cdx{surj} & $[i,i]\To i$ & & surjective function space\\
paulson@6121: \cdx{bij} & $[i,i]\To i$ & & bijective function space
paulson@6121: \end{constants}
paulson@6121:
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{comp_def} r O s == {\ttlbrace}xz : domain(s)*range(r) .
paulson@6121:                         EX x y z. xz=<x,z> & <x,y>:s & <y,z>:r{\ttrbrace}
paulson@6121: \tdx{id_def} id(A) == (lam x:A. x)
paulson@6121: \tdx{inj_def} inj(A,B) == {\ttlbrace} f: A->B. ALL w:A. ALL x:A. f`w=f`x --> w=x {\ttrbrace}
paulson@6121: \tdx{surj_def} surj(A,B) == {\ttlbrace} f: A->B . ALL y:B. EX x:A. f`x=y {\ttrbrace}
paulson@6121: \tdx{bij_def} bij(A,B) == inj(A,B) Int surj(A,B)
paulson@6121:
paulson@6121:
paulson@6121: \tdx{left_inverse} [| f: inj(A,B); a: A |] ==> converse(f)`(f`a) = a
paulson@6121: \tdx{right_inverse} [| f: inj(A,B); b: range(f) |] ==>
paulson@6121:                     f`(converse(f)`b) = b
paulson@6121:
paulson@6121: \tdx{inj_converse_inj} f: inj(A,B) ==> converse(f): inj(range(f), A)
paulson@6121: \tdx{bij_converse_bij} f: bij(A,B) ==> converse(f): bij(B,A)
paulson@6121:
paulson@6121: \tdx{comp_type} [| s<=A*B; r<=B*C |] ==> (r O s) <= A*C
paulson@6121: \tdx{comp_assoc} (r O s) O t = r O (s O t)
paulson@6121:
paulson@6121: \tdx{left_comp_id} r<=A*B ==> id(B) O r = r
paulson@6121: \tdx{right_comp_id} r<=A*B ==> r O id(A) = r
paulson@6121:
paulson@6121: \tdx{comp_func} [| g:A->B; f:B->C |] ==> (f O g):A->C
paulson@6121: \tdx{comp_func_apply} [| g:A->B; f:B->C; a:A |] ==> (f O g)`a = f`(g`a)
paulson@6121:
paulson@6121: \tdx{comp_inj} [| g:inj(A,B); f:inj(B,C) |] ==> (f O g):inj(A,C)
paulson@6121: \tdx{comp_surj} [| g:surj(A,B); f:surj(B,C) |] ==> (f O g):surj(A,C)
paulson@6121: \tdx{comp_bij} [| g:bij(A,B); f:bij(B,C) |] ==> (f O g):bij(A,C)
paulson@6121:
paulson@6121: \tdx{left_comp_inverse} f: inj(A,B) ==> converse(f) O f = id(A)
paulson@6121: \tdx{right_comp_inverse} f: surj(A,B) ==> f O converse(f) = id(B)
paulson@6121:
paulson@6121: \tdx{bij_disjoint_Un}
paulson@6121:     [| f: bij(A,B); g: bij(C,D); A Int C = 0; B Int D = 0 |] ==>
paulson@6121:     (f Un g) : bij(A Un C, B Un D)
paulson@6121:
paulson@6121: \tdx{restrict_bij} [| f:inj(A,B); C<=A |] ==> restrict(f,C): bij(C, f``C)
paulson@6121: \end{ttbox}
paulson@6121: \caption{Permutations} \label{zf-perm}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121: The theory \thydx{Perm} is concerned with permutations (bijections) and
paulson@6121: related concepts. These include composition of relations, the identity
paulson@6121: relation, and three specialized function spaces: injective, surjective and
paulson@6121: bijective. Figure~\ref{zf-perm} displays many of their properties that
paulson@6121: have been proved. These results are fundamental to a treatment of
paulson@6121: equipollence and cardinality.
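As a small illustration, the composition theorems can be applied by resolution; the following hypothetical session sketch (assuming the standard \texttt{Goal} and \texttt{ares_tac} primitives) composes two bijections using \texttt{comp_bij}:
\begin{ttbox}
Goal "[| f: bij(A,B); g: bij(B,C) |] ==> (g O f): bij(A,C)";
by (REPEAT (ares_tac [comp_bij] 1));
\end{ttbox}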
paulson@6121:
paulson@9584: Theory \thydx{Univ} defines a `universe' $\texttt{univ}(A)$, which is used by
paulson@9584: the datatype package. This set contains $A$ and the
paulson@9584: natural numbers. Vitally, it is closed under finite products: ${\tt
paulson@9584: univ}(A)\times{\tt univ}(A)\subseteq{\tt univ}(A)$. This theory also
paulson@9584: defines the cumulative hierarchy of axiomatic set theory, which
paulson@9584: traditionally is written $V@\alpha$ for an ordinal~$\alpha$. The
paulson@9584: `universe' is a simple generalization of~$V@\omega$.
paulson@9584:
paulson@9584: Theory \thydx{QUniv} defines a `universe' ${\tt quniv}(A)$, which is used by
paulson@9584: the datatype package to construct codatatypes such as streams. It is
paulson@9584: analogous to ${\tt univ}(A)$ (and is defined in terms of it) but is closed
paulson@9584: under the non-standard product and sum.
paulson@9584:
paulson@9584:
paulson@9584: \section{Automatic Tools}
paulson@9584:
wenzelm@9695: ZF provides the simplifier and the classical reasoner. Moreover it supplies a
wenzelm@9695: specialized tool to infer `types' of terms.
paulson@9584:
paulson@9584: \subsection{Simplification}
paulson@9584:
wenzelm@9695: ZF inherits simplification from FOL but adapts it to set theory. The
wenzelm@9695: extraction of rewrite rules takes the ZF primitives into account. It can
paulson@9584: strip bounded universal quantifiers from a formula; for example, ${\forall
paulson@9584: x\in A. f(x)=g(x)}$ yields the conditional rewrite rule $x\in A \Imp
paulson@9584: f(x)=g(x)$. Given $a\in\{x\in A. P(x)\}$ it extracts rewrite rules from $a\in
paulson@9584: A$ and~$P(a)$. It can also break down $a\in A\int B$ and $a\in A-B$.
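For example, a bounded quantifier among the assumptions can rewrite the conclusion; the following is a hypothetical session sketch:
\begin{ttbox}
Goal "[| ALL x:A. f(x)=g(x); a: A |] ==> f(a) = g(a)";
by (Asm_simp_tac 1);
\end{ttbox}
Here the first assumption yields the conditional rewrite $x\in A \Imp f(x)=g(x)$, whose condition is discharged by the assumption \texttt{a:\ A}.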
paulson@9584:
paulson@9584: Simplification tactics such as \texttt{Asm_simp_tac} and
paulson@9584: \texttt{Full_simp_tac} use the default simpset (\texttt{simpset()}), which
paulson@9584: works for most purposes. A small simplification set for set theory is
paulson@9584: called~\ttindexbold{ZF_ss}, and you can even use \ttindex{FOL_ss} as a minimal
paulson@9584: starting point. \texttt{ZF_ss} contains congruence rules for all the binding
wenzelm@9695: operators of ZF. It contains all the conversion rules, such as \texttt{fst}
wenzelm@9695: and \texttt{snd}, as well as the rewrites shown in Fig.\ts\ref{zf-simpdata}.
wenzelm@9695: See the file \texttt{ZF/simpdata.ML} for a fuller list.
paulson@9584:
paulson@9584:
paulson@9584: \subsection{Classical Reasoning}
paulson@9584:
paulson@9584: As for the classical reasoner, tactics such as \texttt{Blast_tac} and {\tt
paulson@9584: Best_tac} refer to the default claset (\texttt{claset()}). This works for
paulson@9584: most purposes. Named clasets include \ttindexbold{ZF_cs} (basic set theory)
paulson@9584: and \ttindexbold{le_cs} (useful for reasoning about the relations $<$ and
paulson@9584: $\le$). You can use \ttindex{FOL_cs} as a minimal basis for building your own
paulson@9584: clasets. See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
paulson@9584: {Chap.\ts\ref{chap:classical}} for more discussion of classical proof methods.
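For example, the default claset disposes of simple set-theoretic identities in one step; a hypothetical session sketch:
\begin{ttbox}
Goal "A Un (B Int C) = (A Un B) Int (A Un C)";
by (Blast_tac 1);
\end{ttbox}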
paulson@9584:
paulson@9584:
paulson@9584: \begin{figure}
paulson@9584: \begin{eqnarray*}
paulson@9584: a\in \emptyset & \bimp & \bot\\
paulson@9584: a \in A \un B & \bimp & a\in A \disj a\in B\\
paulson@9584: a \in A \int B & \bimp & a\in A \conj a\in B\\
paulson@9584: a \in A-B & \bimp & a\in A \conj \neg (a\in B)\\
paulson@9584: \pair{a,b}\in {\tt Sigma}(A,B)
paulson@9584: & \bimp & a\in A \conj b\in B(a)\\
paulson@9584: a \in {\tt Collect}(A,P) & \bimp & a\in A \conj P(a)\\
paulson@9584: (\forall x \in \emptyset. P(x)) & \bimp & \top\\
paulson@9584: (\forall x \in A. \top) & \bimp & \top
paulson@9584: \end{eqnarray*}
paulson@9584: \caption{Some rewrite rules for set theory} \label{zf-simpdata}
paulson@9584: \end{figure}
paulson@9584:
paulson@9584:
paulson@9584: \subsection{Type-Checking Tactics}
paulson@9584: \index{type-checking tactics}
paulson@9584:
wenzelm@9695: Isabelle/ZF provides simple tactics to help automate those proofs that are
paulson@9584: essentially type-checking. Such proofs are built by applying rules such as
paulson@9584: these:
paulson@9584: \begin{ttbox}
paulson@9584: [| ?P ==> ?a: ?A; ~?P ==> ?b: ?A |] ==> (if ?P then ?a else ?b): ?A
paulson@9584:
paulson@9584: [| ?m : nat; ?n : nat |] ==> ?m #+ ?n : nat
paulson@9584:
paulson@9584: ?a : ?A ==> Inl(?a) : ?A + ?B
paulson@9584: \end{ttbox}
paulson@9584: In typical applications, the goal has the form $t\in\Var{A}$: in other words,
paulson@9584: we have a specific term~$t$ and need to infer its `type' by instantiating the
paulson@9584: set variable~$\Var{A}$. Neither the simplifier nor the classical reasoner
paulson@9584: does this job well. The if-then-else rule, and many similar ones, can make
paulson@9584: the classical reasoner loop. The simplifier refuses (on principle) to
paulson@9584: instantiate variables during rewriting, so goals such as \texttt{i\#+j :\ ?A}
paulson@9584: are left unsolved.
paulson@9584:
paulson@9584: The simplifier calls the type-checker to solve rewritten subgoals: this stage
paulson@9584: can indeed instantiate variables. If you have defined new constants and
paulson@9584: proved type-checking rules for them, then insert the rules using
paulson@9584: \texttt{AddTCs} and the rest should be automatic. In particular, the
paulson@9584: simplifier will use type-checking to help satisfy conditional rewrite rules.
paulson@9584: Call the tactic \ttindex{Typecheck_tac} to break down all subgoals using
paulson@9584: type-checking rules.
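For instance, a goal that is pure type-checking can typically be finished in one step; a sketch (assuming the arithmetic type-checking rules shown above are in the default tcset):
\begin{ttbox}
Goal "[| i: nat; j: nat |] ==> i #+ j #* i : nat";
by Typecheck_tac;
\end{ttbox}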
paulson@9584:
paulson@9584: Though the easiest way to invoke the type-checker is via the simplifier,
paulson@9584: specialized applications may require more detailed knowledge of
paulson@9584: the type-checking primitives. They are modelled on the simplifier's:
paulson@9584: \begin{ttdescription}
paulson@9584: \item[\ttindexbold{tcset}] is the type of tcsets: sets of type-checking rules.
paulson@9584:
paulson@9584: \item[\ttindexbold{addTCs}] is an infix operator to add type-checking rules to
paulson@9584: a tcset.
paulson@9584:
paulson@9584: \item[\ttindexbold{delTCs}] is an infix operator to remove type-checking rules
paulson@9584: from a tcset.
paulson@9584:
paulson@9584: \item[\ttindexbold{typecheck_tac}] is a tactic for attempting to prove all
paulson@9584: subgoals using the rules given in its argument, a tcset.
paulson@9584: \end{ttdescription}
paulson@9584:
paulson@9584: Tcsets, like simpsets, are associated with theories and are merged when
paulson@9584: theories are merged. There are further primitives that use the default tcset.
paulson@9584: \begin{ttdescription}
paulson@9584: \item[\ttindexbold{tcset}] is a function to return the default tcset; use the
paulson@9584: expression \texttt{tcset()}.
paulson@9584:
paulson@9584: \item[\ttindexbold{AddTCs}] adds type-checking rules to the default tcset.
paulson@9584:
paulson@9584: \item[\ttindexbold{DelTCs}] removes type-checking rules from the default
paulson@9584: tcset.
paulson@9584:
paulson@9584: \item[\ttindexbold{Typecheck_tac}] calls \texttt{typecheck_tac} using the
paulson@9584: default tcset.
paulson@9584: \end{ttdescription}
paulson@9584:
paulson@9584: To supply some type-checking rules temporarily, the simplest way is to add
paulson@9584: them using \texttt{AddTCs} and later remove them using \texttt{DelTCs}.
paulson@9584: There is also a high-tech
paulson@9584: approach. Call the simplifier with a new solver expressed using
paulson@9584: \ttindexbold{type_solver_tac} and your temporary type-checking rules.
paulson@9584: \begin{ttbox}
paulson@9584: by (asm_simp_tac
paulson@9584:     (simpset() setSolver type_solver_tac (tcset() addTCs prems)) 2);
paulson@9584: \end{ttbox}
paulson@9584:
paulson@9584:
paulson@9584: \section{Natural number and integer arithmetic}
paulson@9584:
paulson@9584: \index{arithmetic|(}
paulson@9584:
paulson@6121: \begin{figure}\small
paulson@6121: \index{#*@{\tt\#*} symbol}
paulson@6121: \index{*div symbol}
paulson@6121: \index{*mod symbol}
paulson@6121: \index{#+@{\tt\#+} symbol}
paulson@6121: \index{#-@{\tt\#-} symbol}
paulson@6121: \begin{constants}
paulson@6121: \it symbol & \it meta-type & \it priority & \it description \\
paulson@6121: \cdx{nat} & $i$ & & set of natural numbers \\
paulson@6121: \cdx{nat_case}& $[i,i\To i,i]\To i$ & & conditional for $nat$\\
paulson@6121: \tt \#* & $[i,i]\To i$ & Left 70 & multiplication \\
paulson@6121: \tt div & $[i,i]\To i$ & Left 70 & division\\
paulson@6121: \tt mod & $[i,i]\To i$ & Left 70 & modulus\\
paulson@6121: \tt \#+ & $[i,i]\To i$ & Left 65 & addition\\
paulson@6121: \tt \#- & $[i,i]\To i$ & Left 65 & subtraction
paulson@6121: \end{constants}
paulson@6121:
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{nat_def} nat == lfp(lam r: Pow(Inf). {\ttlbrace}0{\ttrbrace} Un {\ttlbrace}succ(x). x:r{\ttrbrace})
paulson@6121:
paulson@6121: \tdx{nat_case_def} nat_case(a,b,k) ==
paulson@6121:                    THE y. k=0 & y=a | (EX x. k=succ(x) & y=b(x))
paulson@6121:
paulson@9584: \tdx{nat_0I} 0 : nat
paulson@9584: \tdx{nat_succI} n : nat ==> succ(n) : nat
paulson@6121:
paulson@6121: \tdx{nat_induct}
paulson@6121:     [| n: nat; P(0); !!x. [| x: nat; P(x) |] ==> P(succ(x))
paulson@6121:     |] ==> P(n)
paulson@6121:
paulson@9584: \tdx{nat_case_0} nat_case(a,b,0) = a
paulson@9584: \tdx{nat_case_succ} nat_case(a,b,succ(m)) = b(m)
paulson@9584:
paulson@9584: \tdx{add_0_natify} 0 #+ n = natify(n)
paulson@9584: \tdx{add_succ} succ(m) #+ n = succ(m #+ n)
paulson@9584:
paulson@9584: \tdx{mult_type} m #* n : nat
paulson@9584: \tdx{mult_0} 0 #* n = 0
paulson@9584: \tdx{mult_succ} succ(m) #* n = n #+ (m #* n)
paulson@9584: \tdx{mult_commute} m #* n = n #* m
paulson@9584: \tdx{add_mult_dist} (m #+ n) #* k = (m #* k) #+ (n #* k)
paulson@9584: \tdx{mult_assoc} (m #* n) #* k = m #* (n #* k)
paulson@9584: \tdx{mod_div_equality} m: nat ==> (m div n)#*n #+ m mod n = m
paulson@6121: \end{ttbox}
paulson@6121: \caption{The natural numbers} \label{zf-nat}
paulson@6121: \end{figure}
paulson@6121:
paulson@9584: \index{natural numbers}
paulson@9584:
paulson@6121: Theory \thydx{Nat} defines the natural numbers and mathematical
paulson@6121: induction, along with a case analysis operator. The set of natural
paulson@6121: numbers, here called \texttt{nat}, is known in set theory as the ordinal~$\omega$.
paulson@6121:
paulson@6121: Theory \thydx{Arith} develops arithmetic on the natural numbers
paulson@6121: (Fig.\ts\ref{zf-nat}). Addition, multiplication and subtraction are defined
paulson@6121: by primitive recursion. Division and remainder are defined by repeated
paulson@6121: subtraction, which requires well-founded recursion; the termination argument
paulson@6121: relies on the divisor's being non-zero. Many properties are proved:
paulson@6121: commutative, associative and distributive laws, identity and cancellation
paulson@6121: laws, etc. The most interesting result is perhaps the theorem $a \bmod b +
paulson@6121: (a/b)\times b = a$.
paulson@6121:
paulson@9584: To minimize the need for tedious proofs of $t\in\texttt{nat}$, the arithmetic
paulson@9584: operators coerce their arguments to be natural numbers. The function
paulson@9584: \cdx{natify} is defined such that $\texttt{natify}(n) = n$ if $n$ is a natural
paulson@9584: number, $\texttt{natify}(\texttt{succ}(x)) =
paulson@9584: \texttt{succ}(\texttt{natify}(x))$ for all $x$, and finally
paulson@9584: $\texttt{natify}(x)=0$ in all other cases. The benefit is that the addition,
paulson@9584: subtraction, multiplication, division and remainder operators always return
paulson@9584: natural numbers, regardless of their arguments. Algebraic laws (commutative,
paulson@9584: associative, distributive) are unconditional. Occurrences of \texttt{natify}
paulson@9584: as operands of those operators are simplified away. Any remaining occurrences
paulson@9584: can either be tolerated or else eliminated by proving that the argument is a
paulson@9584: natural number.
paulson@9584:
paulson@9584: The simplifier automatically cancels common terms on the opposite sides of
paulson@9584: subtraction and of relations ($=$, $<$ and $\le$). Here is an example:
paulson@6121: \begin{ttbox}
paulson@9584: 1. i #+ j #+ k #- j < k #+ l
paulson@9584: > by (Simp_tac 1);
paulson@9584: 1. natify(i) < natify(l)
paulson@6121: \end{ttbox}
paulson@9584: Given the assumptions \texttt{i:nat} and \texttt{l:nat}, both occurrences of
paulson@9584: \cdx{natify} would be simplified away.
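The following sketch illustrates this behaviour with the typing assumptions present:
\begin{ttbox}
 1. [| i: nat; l: nat |] ==> i #+ j #+ k #- j < k #+ l
> by (Asm_simp_tac 1);
 1. [| i: nat; l: nat |] ==> i < l
\end{ttbox}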
paulson@9584:
paulson@9584:
paulson@9584: \begin{figure}\small
paulson@9584: \index{$*@{\tt\$*} symbol}
paulson@9584: \index{$+@{\tt\$+} symbol}
paulson@9584: \index{$-@{\tt\$-} symbol}
paulson@6121: \begin{constants}
paulson@6121: \it symbol & \it meta-type & \it priority & \it description \\
paulson@9584: \cdx{int} & $i$ & & set of integers \\
paulson@9584: \tt \$* & $[i,i]\To i$ & Left 70 & multiplication \\
paulson@9584: \tt \$+ & $[i,i]\To i$ & Left 65 & addition\\
paulson@9584: \tt \$- & $[i,i]\To i$ & Left 65 & subtraction\\
paulson@9584: \tt \$< & $[i,i]\To o$ & Left 50 & $<$ on integers\\
paulson@9584: \tt \$<= & $[i,i]\To o$ & Left 50 & $\le$ on integers
paulson@6121: \end{constants}
paulson@6121:
paulson@6121: \begin{ttbox}
paulson@9584: \tdx{zadd_0_intify} 0 $+ n = intify(n)
paulson@9584:
paulson@9584: \tdx{zmult_type} m $* n : int
paulson@9584: \tdx{zmult_0} 0 $* n = 0
paulson@9584: \tdx{zmult_commute} m $* n = n $* m
paulson@9584: \tdx{zadd_zmult_dist} (m $+ n) $* k = (m $* k) $+ (n $* k)
paulson@9584: \tdx{zmult_assoc} (m $* n) $* k = m $* (n $* k)
paulson@6121: \end{ttbox}
paulson@9584: \caption{The integers} \label{zf-int}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121:
paulson@9584: \index{integers}
paulson@9584:
paulson@9584: Theory \thydx{Int} defines the integers, as equivalence classes of natural
paulson@9584: numbers. Figure~\ref{zf-int} presents a tidy collection of laws. In
paulson@9584: fact, a large library of facts is proved, including monotonicity laws for
paulson@9584: addition and multiplication, covering both positive and negative operands.
paulson@9584:
paulson@9584: As with the natural numbers, the need for typing proofs is minimized. All the
paulson@9584: operators defined in Fig.\ts\ref{zf-int} coerce their operands to integers by
paulson@9584: applying the function \cdx{intify}. This function is the identity on integers
paulson@9584: and maps other operands to zero.
paulson@9584:
paulson@9584: Decimal notation is provided for the integers. Numbers, written as
paulson@9584: \texttt{\#$nnn$} or \texttt{\#-$nnn$}, are represented internally in
paulson@9584: two's-complement binary. Expressions involving addition, subtraction and
paulson@9584: multiplication of numeral constants are evaluated (with acceptable efficiency)
paulson@9584: by simplification. The simplifier also collects similar terms, multiplying
paulson@9584: them by a numerical coefficient. It also cancels occurrences of the same
paulson@9584: terms on opposite sides of the relational operators. Example:
paulson@6173: \begin{ttbox}
paulson@9584: 1. y $+ z $+ #-3 $* x $+ y $<= x $* #2 $+ z
paulson@9584: > by (Simp_tac 1);
paulson@9584: 1. #2 $* y $<= #5 $* x
paulson@6173: \end{ttbox}
paulson@9584: For more information on the integers, please see the theories on directory
paulson@9584: \texttt{ZF/Integ}.
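As a small illustration, numeral arithmetic is a matter of calling the simplifier; a hypothetical session sketch:
\begin{ttbox}
Goal "#3 $* #4 $+ #2 = #14";
by (Simp_tac 1);
\end{ttbox}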
paulson@9584:
paulson@9584: \index{arithmetic|)}
paulson@6173:
paulson@6173:
paulson@6121: \section{Datatype definitions}
paulson@6121: \label{sec:ZF:datatype}
paulson@6121: \index{*datatype|(}
paulson@6121:
wenzelm@9695: The \ttindex{datatype} definition package of ZF constructs inductive datatypes
wenzelm@9695: similar to those of \ML. It can also construct coinductive datatypes
wenzelm@9695: (codatatypes), which are non-well-founded structures such as streams. It
wenzelm@9695: defines the set using a fixed-point construction and proves induction rules,
wenzelm@9695: as well as theorems for recursion and case combinators. It supplies
wenzelm@9695: mechanisms for reasoning about freeness. The datatype package can handle both
wenzelm@9695: mutual and indirect recursion.
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Basics}
paulson@6121: \label{subsec:datatype:basics}
paulson@6121:
paulson@6121: A \texttt{datatype} definition has the following form:
paulson@6121: \[
paulson@6121: \begin{array}{llcl}
paulson@6121: \mathtt{datatype} & t@1(A@1,\ldots,A@h) & = &
paulson@6121: constructor^1@1 ~\mid~ \ldots ~\mid~ constructor^1@{k@1} \\
paulson@6121: & & \vdots \\
paulson@6121: \mathtt{and} & t@n(A@1,\ldots,A@h) & = &
paulson@6121:     constructor^n@1 ~\mid~ \ldots ~\mid~ constructor^n@{k@n}
paulson@6121: \end{array}
paulson@6121: \]
paulson@6121: Here $t@1$, \ldots,~$t@n$ are identifiers and $A@1$, \ldots,~$A@h$ are
paulson@6121: variables: the datatype's parameters. Each constructor specification has the
paulson@6121: form \dquotesoff
paulson@6121: \[ C \hbox{\tt~( } \hbox{\tt"} x@1 \hbox{\tt:} T@1 \hbox{\tt"},\;
paulson@6121: \ldots,\;
paulson@6121: \hbox{\tt"} x@m \hbox{\tt:} T@m \hbox{\tt"}
paulson@6121: \hbox{\tt~)}
paulson@6121: \]
paulson@6121: Here $C$ is the constructor name, and variables $x@1$, \ldots,~$x@m$ are the
paulson@6121: constructor arguments, belonging to the sets $T@1$, \ldots, $T@m$,
paulson@6121: respectively. Typically each $T@j$ is either a constant set, a datatype
paulson@6121: parameter (one of $A@1$, \ldots, $A@h$) or a recursive occurrence of one of
paulson@6121: the datatypes, say $t@i(A@1,\ldots,A@h)$. More complex possibilities exist,
paulson@6121: but they are much harder to realize. Often, additional information must be
paulson@6121: supplied in the form of theorems.
paulson@6121:
paulson@6121: A datatype can occur recursively as the argument of some function~$F$. This
paulson@6121: is called a {\em nested} (or \emph{indirect}) occurrence. It is only allowed
paulson@6121: if the datatype package is given a theorem asserting that $F$ is monotonic.
paulson@6121: If the datatype has indirect occurrences, then Isabelle/ZF does not support
paulson@6121: recursive function definitions.
paulson@6121:
paulson@6121: A simple example of a datatype is \texttt{list}, which is built-in, and is
paulson@6121: defined by
paulson@6121: \begin{ttbox}
paulson@6121: consts list :: i=>i
paulson@6121: datatype "list(A)" = Nil | Cons ("a:A", "l: list(A)")
paulson@6121: \end{ttbox}
paulson@6121: Note that the datatype operator must be declared as a constant first.
paulson@6121: However, the package declares the constructors. Here, \texttt{Nil} gets type
paulson@6121: $i$ and \texttt{Cons} gets type $[i,i]\To i$.
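By analogy, a hypothetical declaration of binary trees over~$A$ (not one of the built-in theories) could read
\begin{ttbox}
consts  bt :: i=>i
datatype "bt(A)" = Lf | Br ("a: A", "t1: bt(A)", "t2: bt(A)")
\end{ttbox}
Here \texttt{Lf} would get type $i$ and \texttt{Br} type $[i,i,i]\To i$.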
paulson@6121:
paulson@6121: Trees and forests can be modelled by the mutually recursive datatype
paulson@6121: definition
paulson@6121: \begin{ttbox}
paulson@6121: consts tree, forest, tree_forest :: i=>i
paulson@6121: datatype "tree(A)" = Tcons ("a: A", "f: forest(A)")
paulson@6121: and "forest(A)" = Fnil | Fcons ("t: tree(A)", "f: forest(A)")
paulson@6121: \end{ttbox}
paulson@6121: Here $\texttt{tree}(A)$ is the set of trees over $A$, $\texttt{forest}(A)$ is
paulson@6121: the set of forests over $A$, and $\texttt{tree_forest}(A)$ is the union of
paulson@6121: the previous two sets. All three operators must be declared first.
paulson@6121:
paulson@6121: The datatype \texttt{term}, which is defined by
paulson@6121: \begin{ttbox}
paulson@6121: consts term :: i=>i
paulson@6121: datatype "term(A)" = Apply ("a: A", "l: list(term(A))")
paulson@6121: monos "[list_mono]"
paulson@6121: \end{ttbox}
paulson@6121: is an example of nested recursion. (The theorem \texttt{list_mono} is proved
paulson@6121: in file \texttt{List.ML}, and the \texttt{term} example is developed in theory
paulson@6121: \thydx{ex/Term}.)
paulson@6121:
paulson@6121: \subsubsection{Freeness of the constructors}
paulson@6121:
paulson@6121: Constructors satisfy {\em freeness} properties: constructed values are distinct,
paulson@6121: for example $\texttt{Nil}\not=\texttt{Cons}(a,l)$, and they are injective, for
paulson@6121: example $\texttt{Cons}(a,l)=\texttt{Cons}(a',l') \bimp a=a' \conj l=l'$.
paulson@6121: Because the number of freeness theorems is quadratic in the number of constructors, the
paulson@6143: datatype package does not prove them. Instead, it ensures that simplification
paulson@6143: will prove them dynamically: when the simplifier encounters a formula
paulson@6143: asserting the equality of two datatype constructors, it performs freeness
paulson@6143: reasoning.
paulson@6143:
paulson@6143: Freeness reasoning can also be done using the classical reasoner, but it is
paulson@6143: more complicated. You have to add some safe elimination rules to the
paulson@6143: claset. For the \texttt{list} datatype, they are called
paulson@6143: \texttt{list.free_SEs}. Occasionally this exposes the underlying
paulson@6143: representation of some constructor, which can be rectified using the command
paulson@6143: \hbox{\tt fold_tac list.con_defs}.
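For example, the simplifier proves an instance of the freeness properties on demand; a session sketch:
\begin{ttbox}
Goal "Cons(a,l) = Cons(a',l') <-> a=a' & l=l'";
by (Simp_tac 1);
\end{ttbox}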
paulson@6143:
paulson@6121:
paulson@6121: \subsubsection{Structural induction}
paulson@6121:
paulson@6121: The datatype package also provides structural induction rules. For datatypes
paulson@6121: without mutual or nested recursion, the rule has the form exemplified by
paulson@6121: \texttt{list.induct} in Fig.\ts\ref{zf-list}. For mutually recursive
paulson@6121: datatypes, the induction rule is supplied in two forms. Consider datatype
paulson@6121: \texttt{TF}. The rule \texttt{tree_forest.induct} performs induction over a
paulson@6121: single predicate~\texttt{P}, which is presumed to be defined for both trees
paulson@6121: and forests:
paulson@6121: \begin{ttbox}
paulson@6121: [| x : tree_forest(A);
paulson@8249:    !!a f. [| a : A; f : forest(A); P(f) |] ==> P(Tcons(a, f));
paulson@8249:    P(Fnil);
paulson@6121:    !!f t. [| t : tree(A); P(t); f : forest(A); P(f) |]
paulson@6121:           ==> P(Fcons(t, f))
paulson@6121: |] ==> P(x)
paulson@6121: \end{ttbox}
paulson@6121: The rule \texttt{tree_forest.mutual_induct} performs induction over two
paulson@6121: distinct predicates, \texttt{P_tree} and \texttt{P_forest}.
paulson@6121: \begin{ttbox}
paulson@6121: [| !!a f.
paulson@6121:       [| a : A; f : forest(A); P_forest(f) |] ==> P_tree(Tcons(a, f));
paulson@6121:    P_forest(Fnil);
paulson@6121:    !!f t. [| t : tree(A); P_tree(t); f : forest(A); P_forest(f) |]
paulson@6121:           ==> P_forest(Fcons(t, f))
paulson@6121: |] ==> (ALL za. za : tree(A) --> P_tree(za)) &
paulson@6121:        (ALL za. za : forest(A) --> P_forest(za))
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121: For datatypes with nested recursion, such as the \texttt{term} example from
paulson@6121: above, things are a bit more complicated. The rule \texttt{term.induct}
paulson@6121: refers to the monotonic operator, \texttt{list}:
paulson@6121: \begin{ttbox}
paulson@6121: [| x : term(A);
paulson@8249:    !!a l. [| a: A; l: list(Collect(term(A), P)) |] ==> P(Apply(a, l))
paulson@6121: |] ==> P(x)
paulson@6121: \end{ttbox}
paulson@6121: The file \texttt{ex/Term.ML} derives two higher-level induction rules, one of
paulson@6121: which is particularly useful for proving equations:
paulson@6121: \begin{ttbox}
paulson@6121: [| t : term(A);
paulson@6121:    !!x zs. [| x : A; zs : list(term(A)); map(f, zs) = map(g, zs) |]
paulson@6121:            ==> f(Apply(x, zs)) = g(Apply(x, zs))
paulson@6121: |] ==> f(t) = g(t)
paulson@6121: \end{ttbox}
paulson@6121: How this can be generalized to other nested datatypes is a matter for future
paulson@6121: research.
paulson@6121:
paulson@6121:
paulson@6121: \subsubsection{The \texttt{case} operator}
paulson@6121:
paulson@6121: The package defines an operator for performing case analysis over the
paulson@6121: datatype. For \texttt{list}, it is called \texttt{list_case} and satisfies
paulson@6121: the equations
paulson@6121: \begin{ttbox}
paulson@6121: list_case(f_Nil, f_Cons, []) = f_Nil
paulson@6121: list_case(f_Nil, f_Cons, Cons(a, l)) = f_Cons(a, l)
paulson@6121: \end{ttbox}
paulson@6121: Here \texttt{f_Nil} is the value to return if the argument is \texttt{Nil} and
paulson@6121: \texttt{f_Cons} is a function that computes the value to return if the
paulson@6121: argument has the form $\texttt{Cons}(a,l)$. The function can be expressed as
paulson@6121: an abstraction, over patterns if desired (\S\ref{sec:pairs}).
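For instance, the case equations let the simplifier evaluate an application of \texttt{list_case} to a constructor; a hypothetical sketch:
\begin{ttbox}
Goal "list_case(0, \%a l. a, Cons(x,y)) = x";
by (Simp_tac 1);
\end{ttbox}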
paulson@6121:
paulson@6121: For mutually recursive datatypes, there is a single \texttt{case} operator.
paulson@6121: In the tree/forest example, the constant \texttt{tree_forest_case} handles all
paulson@6121: of the constructors of the two datatypes.
paulson@6121:
paulson@6121:
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Defining datatypes}
paulson@6121:
paulson@6121: The theory syntax for datatype definitions is shown in
paulson@6121: Fig.~\ref{datatype-grammar}. In order to be well-formed, a datatype
paulson@6121: definition has to obey the rules stated in the previous section. As a result
paulson@6121: the theory is extended with the new types, the constructors, and the theorems
paulson@6121: listed in the previous section. The quotation marks are necessary because
paulson@6121: they enclose general Isabelle formul\ae.
paulson@6121:
paulson@6121: \begin{figure}
paulson@6121: \begin{rail}
paulson@6121: datatype : ( 'datatype' | 'codatatype' ) datadecls;
paulson@6121:
paulson@6121: datadecls: ( '"' id arglist '"' '=' (constructor + '|') ) + 'and'
paulson@6121: ;
paulson@6121: constructor : name ( () | consargs ) ( () | ( '(' mixfix ')' ) )
paulson@6121: ;
paulson@6121: consargs : '(' ('"' var ':' term '"' + ',') ')'
paulson@6121: ;
paulson@6121: \end{rail}
paulson@6121: \caption{Syntax of datatype declarations}
paulson@6121: \label{datatype-grammar}
paulson@6121: \end{figure}
paulson@6121:
paulson@6121: Codatatypes are declared like datatypes and are identical to them in every
paulson@6121: respect except that they have a coinduction rule instead of an induction rule.
paulson@6121: Note that while an induction rule has the effect of limiting the values
paulson@6121: contained in the set, a coinduction rule gives a way of constructing new
paulson@6121: values of the set.
paulson@6121:
paulson@6121: Most of the theorems about datatypes become part of the default simpset. You
paulson@6121: never need to see them again because the simplifier applies them
paulson@6143: automatically. Induction or exhaustion must usually be invoked by hand,
paulson@6121: via these special-purpose tactics:
paulson@6121: \begin{ttdescription}
paulson@6121: \item[\ttindexbold{induct_tac} {\tt"}$x${\tt"} $i$] applies structural
paulson@6121: induction on variable $x$ to subgoal $i$, provided the type of $x$ is a
paulson@6121: datatype. The induction variable should not occur among other assumptions
paulson@6121: of the subgoal.
paulson@6121: \end{ttdescription}
paulson@6121: In some cases, induction is overkill and a case distinction over all
paulson@6121: constructors of the datatype suffices.
paulson@6121: \begin{ttdescription}
paulson@6121: \item[\ttindexbold{exhaust_tac} {\tt"}$x${\tt"} $i$]
paulson@6121: performs an exhaustive case analysis for the variable~$x$.
paulson@6121: \end{ttdescription}
paulson@6121:
paulson@6121: Both tactics can be applied only to a variable whose typing is given in
paulson@6121: some assumption, for example \texttt{x:\ list(A)}. The tactics
paulson@6121: also work for the natural numbers (\texttt{nat}) and disjoint sums, although
paulson@6121: these sets were not defined using the datatype package. (Disjoint sums are
paulson@6121: not recursive, so only \texttt{exhaust_tac} is available.)
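
For example, a case analysis over the natural numbers might run as follows.
This is only a sketch, assuming the standard theory of \texttt{nat} is
loaded:
\begin{ttbox}
Goal "n : nat ==> n = 0 | (EX m:nat. n = succ(m))";
by (exhaust_tac "n" 1);
by Auto_tac;
\end{ttbox}
The case analysis leaves one subgoal for each form of~$n$, namely \texttt{0}
and $\texttt{succ}(m)$, and \texttt{Auto_tac} disposes of both.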
paulson@6121:
paulson@6121: \bigskip
paulson@6121: Here are some more details for the technically minded. Processing the
paulson@6121: theory file produces an \ML\ structure which, in addition to the usual
paulson@6121: components, contains a structure named $t$ for each datatype $t$ defined in
paulson@6121: the file. Each structure $t$ contains the following elements:
paulson@6121: \begin{ttbox}
paulson@6121: val intrs : thm list \textrm{the introduction rules}
paulson@6121: val elim : thm \textrm{the elimination (case analysis) rule}
paulson@6121: val induct : thm \textrm{the standard induction rule}
paulson@6121: val mutual_induct : thm \textrm{the mutual induction rule, or \texttt{True}}
paulson@6121: val case_eqns : thm list \textrm{equations for the case operator}
paulson@6121: val recursor_eqns : thm list \textrm{equations for the recursor}
paulson@6121: val con_defs : thm list \textrm{definitions of the case operator and constructors}
paulson@6121: val free_iffs : thm list \textrm{logical equivalences for proving freeness}
paulson@6121: val free_SEs : thm list \textrm{elimination rules for proving freeness}
paulson@6121: val mk_free : string -> thm \textrm{a function for proving freeness theorems}
paulson@6141: val mk_cases : string -> thm \textrm{case analysis, see below}
paulson@6121: val defs : thm list \textrm{definitions of operators}
paulson@6121: val bnd_mono : thm list \textrm{monotonicity property}
paulson@6121: val dom_subset : thm list \textrm{inclusion in `bounding set'}
paulson@6121: \end{ttbox}
paulson@6121: Furthermore there is the theorem $C$\texttt{_I} for every constructor~$C$; for
paulson@6121: example, the \texttt{list} datatype's introduction rules are bound to the
paulson@6121: identifiers \texttt{Nil_I} and \texttt{Cons_I}.
paulson@6121:
paulson@6121: For a codatatype, the component \texttt{coinduct} is the coinduction rule,
paulson@6121: replacing the \texttt{induct} component.
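
For instance, the freeness equivalences for lists can be supplied to the
simplifier explicitly. This is merely a sketch, since the default simpset
already contains these rules:
\begin{ttbox}
Goal "Cons(a,l) ~= Nil";
by (simp_tac (simpset() addsimps list.free_iffs) 1);
\end{ttbox}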
paulson@6121:
paulson@6121: See the theories \texttt{ex/Ntree} and \texttt{ex/Brouwer} for examples of
paulson@6121: infinitely branching datatypes. See theory \texttt{ex/LList} for an example
paulson@6121: of a codatatype. Some of these theories illustrate the use of additional,
paulson@6121: undocumented features of the datatype package. Datatype definitions are
paulson@6121: reduced to inductive definitions, and the advanced features should be
paulson@6121: understood in that light.
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Examples}
paulson@6121:
paulson@6121: \subsubsection{The datatype of binary trees}
paulson@6121:
paulson@6121: Let us define the set $\texttt{bt}(A)$ of binary trees over~$A$. The theory
paulson@6121: must contain these lines:
paulson@6121: \begin{ttbox}
paulson@6121: consts bt :: i=>i
paulson@6121: datatype "bt(A)" = Lf | Br ("a: A", "t1: bt(A)", "t2: bt(A)")
paulson@6121: \end{ttbox}
paulson@6121: After loading the theory, we can prove, for example, that no tree equals its
paulson@6121: left branch. To ease the induction, we state the goal using quantifiers.
paulson@6121: \begin{ttbox}
paulson@6121: Goal "l : bt(A) ==> ALL x r. Br(x,l,r) ~= l";
paulson@6121: {\out Level 0}
paulson@6121: {\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
paulson@6121: {\out 1. l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
paulson@6121: \end{ttbox}
paulson@6121: This can be proved by the structural induction tactic:
paulson@6121: \begin{ttbox}
paulson@6121: by (induct_tac "l" 1);
paulson@6121: {\out Level 1}
paulson@6121: {\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
paulson@6121: {\out 1. ALL x r. Br(x, Lf, r) ~= Lf}
paulson@6121: {\out 2. !!a t1 t2.}
paulson@8249: {\out [| a : A; t1 : bt(A);}
paulson@8249: {\out ALL x r. Br(x, t1, r) ~= t1; t2 : bt(A);}
paulson@6121: {\out ALL x r. Br(x, t2, r) ~= t2 |]}
paulson@6121: {\out ==> ALL x r. Br(x, Br(a, t1, t2), r) ~= Br(a, t1, t2)}
paulson@6121: \end{ttbox}
paulson@6143: Both subgoals are proved using \texttt{Auto_tac}, which performs the necessary
paulson@6143: freeness reasoning.
paulson@6121: \begin{ttbox}
paulson@6143: by Auto_tac;
paulson@6121: {\out Level 2}
paulson@6121: {\out l : bt(A) ==> ALL x r. Br(x, l, r) ~= l}
paulson@6121: {\out No subgoals!}
paulson@6121: \end{ttbox}
paulson@6121: To remove the quantifiers from the induction formula, we save the theorem using
paulson@6121: \ttindex{qed_spec_mp}.
paulson@6121: \begin{ttbox}
paulson@6121: qed_spec_mp "Br_neq_left";
paulson@6121: {\out val Br_neq_left = "?l : bt(?A) ==> Br(?x, ?l, ?r) ~= ?l" : thm}
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121: When there are only a few constructors, we might prefer to prove the freeness
paulson@6121: theorems for each constructor. This is trivial, using the function provided
paulson@6121: for that purpose:
paulson@6121: \begin{ttbox}
paulson@8249: val Br_iff =
paulson@8249: bt.mk_free "Br(a,l,r)=Br(a',l',r') <-> a=a' & l=l' & r=r'";
paulson@6121: {\out val Br_iff =}
paulson@6121: {\out "Br(?a, ?l, ?r) = Br(?a', ?l', ?r') <->}
paulson@6121: {\out ?a = ?a' & ?l = ?l' & ?r = ?r'" : thm}
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6141: The purpose of \ttindex{mk_cases} is to generate instances of the elimination
paulson@6141: (case analysis) rule that have been simplified using freeness reasoning. For
paulson@6141: example, this instance of the elimination rule propagates type-checking
paulson@6141: information from the premise $\texttt{Br}(a,l,r)\in\texttt{bt}(A)$:
paulson@6121: \begin{ttbox}
paulson@6141: val BrE = bt.mk_cases "Br(a,l,r) : bt(A)";
paulson@6121: {\out val BrE =}
paulson@6121: {\out "[| Br(?a, ?l, ?r) : bt(?A);}
paulson@8249: {\out [| ?a : ?A; ?l : bt(?A); ?r : bt(?A) |] ==> ?Q |]}
paulson@8249: {\out ==> ?Q" : thm}
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121:
paulson@6121: \subsubsection{Mixfix syntax in datatypes}
paulson@6121:
paulson@6121: Mixfix syntax is sometimes convenient. The theory \texttt{ex/PropLog} makes a
paulson@6121: deep embedding of propositional logic:
paulson@6121: \begin{ttbox}
paulson@6121: consts prop :: i
paulson@6121: datatype "prop" = Fls
paulson@6121: | Var ("n: nat") ("#_" [100] 100)
paulson@6121: | "=>" ("p: prop", "q: prop") (infixr 90)
paulson@6121: \end{ttbox}
paulson@6121: The second constructor has a special $\#n$ syntax, while the third constructor
paulson@6121: is an infixed arrow.
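
With this syntax, type-checking a small formula can be done using the
generated introduction rules. The following is a sketch; \texttt{prop.intrs}
denotes the list of introduction rules produced by the package:
\begin{ttbox}
Goal "#0 => Fls : prop";
by (blast_tac (claset() addIs prop.intrs) 1);
\end{ttbox}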
paulson@6121:
paulson@6121:
paulson@6121: \subsubsection{A giant enumeration type}
paulson@6121:
paulson@6121: This example shows a datatype that consists of 60 constructors:
paulson@6121: \begin{ttbox}
paulson@6121: consts enum :: i
paulson@6121: datatype
paulson@6121: "enum" = C00 | C01 | C02 | C03 | C04 | C05 | C06 | C07 | C08 | C09
paulson@6121: | C10 | C11 | C12 | C13 | C14 | C15 | C16 | C17 | C18 | C19
paulson@6121: | C20 | C21 | C22 | C23 | C24 | C25 | C26 | C27 | C28 | C29
paulson@6121: | C30 | C31 | C32 | C33 | C34 | C35 | C36 | C37 | C38 | C39
paulson@6121: | C40 | C41 | C42 | C43 | C44 | C45 | C46 | C47 | C48 | C49
paulson@6121: | C50 | C51 | C52 | C53 | C54 | C55 | C56 | C57 | C58 | C59
paulson@6121: end
paulson@6121: \end{ttbox}
paulson@6121: The datatype package scales well. Even though all properties are proved
paulson@6121: rather than assumed, full processing of this definition takes under 15 seconds
paulson@6121: (on a 300 MHz Pentium). The constructors have a balanced representation,
paulson@6121: essentially binary notation, so freeness properties can be proved fast.
paulson@6121: \begin{ttbox}
paulson@6121: Goal "C00 ~= C01";
paulson@6143: by (Simp_tac 1);
paulson@6121: \end{ttbox}
paulson@6121: You need not derive such inequalities explicitly. The simplifier will dispose
paulson@6143: of them automatically.
paulson@6121:
paulson@6121: \index{*datatype|)}
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Recursive function definitions}\label{sec:ZF:recursive}
paulson@6121: \index{recursive functions|see{recursion}}
paulson@6121: \index{*primrec|(}
paulson@6173: \index{recursion!primitive|(}
paulson@6121:
paulson@6121: Datatypes come with a uniform way of defining functions, {\bf primitive
paulson@6121: recursion}. Such definitions rely on the recursion operator defined by the
paulson@6121: datatype package. Isabelle proves the desired recursion equations as
paulson@6121: theorems.
paulson@6121:
paulson@6121: In principle, one could introduce primitive recursive functions by asserting
paulson@6121: their reduction rules as new axioms. Here is a dangerous way of defining the
paulson@6121: append function for lists:
paulson@6121: \begin{ttbox}\slshape
paulson@6121: consts "\at" :: [i,i]=>i (infixr 60)
paulson@6121: rules
paulson@6121: app_Nil "[] \at ys = ys"
paulson@6121: app_Cons "(Cons(a,l)) \at ys = Cons(a, l \at ys)"
paulson@6121: \end{ttbox}
paulson@6121: Asserting axioms brings the danger of accidentally asserting nonsense. It
paulson@6121: should be avoided at all costs!
paulson@6121:
paulson@6121: The \ttindex{primrec} declaration is a safe means of defining primitive
paulson@6121: recursive functions on datatypes:
paulson@6121: \begin{ttbox}
paulson@6121: consts "\at" :: [i,i]=>i (infixr 60)
paulson@6121: primrec
paulson@6121: "[] \at ys = ys"
paulson@6121: "(Cons(a,l)) \at ys = Cons(a, l \at ys)"
paulson@6121: \end{ttbox}
paulson@6121: Isabelle will now check that the two rules do indeed form a primitive
paulson@6121: recursive definition. For example, the declaration
paulson@6121: \begin{ttbox}
paulson@6121: primrec
paulson@6121: "[] \at ys = us"
paulson@6121: \end{ttbox}
paulson@6121: is rejected with an error message ``\texttt{Extra variables on rhs}''.
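
For example, the length of a list is computed by a primitive recursion with
one reduction rule per constructor, essentially as in the standard
\texttt{List} theory:
\begin{ttbox}
consts length :: i=>i
primrec
  "length([]) = 0"
  "length(Cons(a,l)) = succ(length(l))"
\end{ttbox}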
paulson@6121:
paulson@6121:
paulson@6121: \subsubsection{Syntax of recursive definitions}
paulson@6121:
paulson@6121: The general form of a primitive recursive definition is
paulson@6121: \begin{ttbox}
paulson@6121: primrec
paulson@6121: {\it reduction rules}
paulson@6121: \end{ttbox}
paulson@6121: where \textit{reduction rules} specify one or more equations of the form
paulson@6121: \[ f \, x@1 \, \dots \, x@m \, (C \, y@1 \, \dots \, y@k) \, z@1 \,
paulson@6121: \dots \, z@n = r \] such that $C$ is a constructor of the datatype, $r$
paulson@6121: contains only the free variables on the left-hand side, and all recursive
paulson@6121: calls in $r$ are of the form $f \, \dots \, y@i \, \dots$ for some $i$.
paulson@6121: There must be at most one reduction rule for each constructor. The order is
paulson@6121: immaterial. For missing constructors, the function is defined to return zero.
paulson@6121:
paulson@6121: All reduction rules are added to the default simpset.
paulson@6121: If you would like to refer to some rule by name, then you must prefix
paulson@6121: the rule with an identifier. These identifiers, like those in the
paulson@6121: \texttt{rules} section of a theory, will be visible at the \ML\ level.
paulson@6121:
paulson@6121: The reduction rules for {\tt\at} become part of the default simpset, which
paulson@6121: leads to short proof scripts:
paulson@6121: \begin{ttbox}\underscoreon
paulson@6121: Goal "xs: list(A) ==> (xs @ ys) @ zs = xs @ (ys @ zs)";
paulson@6121: by (induct\_tac "xs" 1);
paulson@6121: by (ALLGOALS Asm\_simp\_tac);
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121: You can even use the \texttt{primrec} form with non-recursive datatypes and
paulson@6121: with codatatypes. Recursion is not allowed, but the form provides a
paulson@6121: convenient syntax for defining functions by cases.
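
For instance, a discriminator for lists can be declared in the same style.
This hypothetical example performs no recursion at all:
\begin{ttbox}
consts is_Nil :: i=>i
primrec
  "is_Nil([]) = 1"
  "is_Nil(Cons(a,l)) = 0"
\end{ttbox}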
paulson@6121:
paulson@6121:
paulson@6121: \subsubsection{Example: varying arguments}
paulson@6121:
paulson@6121: All arguments, other than the recursive one, must be the same in each equation
paulson@6121: and in each recursive call. To get around this restriction, use explicit
paulson@6121: $\lambda$-abstraction and function application. Here is an example, drawn
paulson@6121: from the theory \texttt{Resid/Substitution}. The type of redexes is declared
paulson@6121: as follows:
paulson@6121: \begin{ttbox}
paulson@6121: consts redexes :: i
paulson@6121: datatype
paulson@6121: "redexes" = Var ("n: nat")
paulson@6121: | Fun ("t: redexes")
paulson@6121: | App ("b:bool" ,"f:redexes" , "a:redexes")
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121: The function \texttt{lift} takes a second argument, $k$, which varies in
paulson@6121: recursive calls.
paulson@6121: \begin{ttbox}
paulson@6121: primrec
paulson@6121:   "lift(Var(i)) = (lam k:nat. if i<k then Var(i) else Var(succ(i)))"
paulson@6121:   "lift(Fun(t)) = (lam k:nat. Fun(lift(t) ` succ(k)))"
paulson@6121:   "lift(App(b,f,a)) = (lam k:nat. App(b, lift(f)`k, lift(a)`k))"
paulson@6121: \end{ttbox}
paulson@6121: In each recursive call, the function \texttt{lift(r)} is applied to the
paulson@6121: appropriate new value of~$k$.
paulson@6121: 
paulson@6121: \index{recursion!primitive|)}
paulson@6121: \index{*primrec|)}
paulson@6121: 
paulson@6121: 
paulson@6121: \section{Inductive and coinductive definitions}
paulson@6121: \index{*inductive|(}
paulson@6121: \index{*coinductive|(}
paulson@6121: 
paulson@6121: An {\bf inductive definition} specifies the least set closed under given
paulson@6121: rules, while a {\bf coinductive definition} specifies the greatest set
paulson@6121: consistent with them. A simple example is $\texttt{Fin}(A)$, the set of all
paulson@6121: finite subsets of~$A$:
paulson@6121: \begin{ttbox}
paulson@6121: consts Fin :: i=>i
paulson@6121: 
paulson@6121: inductive
paulson@6121: domains "Fin(A)" <= "Pow(A)"
paulson@6121: intrs
paulson@6121: emptyI "0 : Fin(A)"
paulson@6121: consI "[| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)"
paulson@6121: type_intrs empty_subsetI, cons_subsetI, PowI
paulson@6121: type_elims "[make_elim PowD]"
paulson@6121: \end{ttbox}
paulson@6121: The resulting theory structure contains a substructure, called~\texttt{Fin}.
paulson@6121: It contains the \texttt{Fin}$~A$ introduction rules as the list
paulson@6121: \texttt{Fin.intrs}, and also individually as \texttt{Fin.emptyI} and
paulson@6121: \texttt{Fin.consI}. The induction rule is \texttt{Fin.induct}.
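
For example, \texttt{Fin.induct} can show that every finite subset of~$A$ is
a subset of~$A$. The following is a sketch of the proof:
\begin{ttbox}
Goal "b : Fin(A) ==> b <= A";
by (eresolve_tac [Fin.induct] 1);
by (ALLGOALS Blast_tac);
\end{ttbox}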
paulson@6121:
paulson@6121: The chief problem with making (co)inductive definitions involves type-checking
paulson@6121: the rules. Sometimes, additional theorems need to be supplied under
paulson@6121: \texttt{type_intrs} or \texttt{type_elims}. If the package fails when trying
paulson@6121: to prove your introduction rules, then set the flag \ttindexbold{trace_induct}
paulson@6121: to \texttt{true} and try again. (See the manual \emph{A Fixedpoint Approach
paulson@6121: \ldots} for more discussion of type-checking.)
paulson@6121:
paulson@6121: In the example above, $\texttt{Pow}(A)$ is given as the domain of
paulson@6121: $\texttt{Fin}(A)$, for obviously every finite subset of~$A$ is a subset
paulson@6121: of~$A$. However, the inductive definition package can only prove that given a
paulson@6121: few hints.
paulson@6121: Here is the output that results (with the flag set) when the
paulson@6121: \texttt{type_intrs} and \texttt{type_elims} are omitted from the inductive
paulson@6121: definition above:
paulson@6121: \begin{ttbox}
paulson@6121: Inductive definition Finite.Fin
paulson@6121: Fin(A) ==
paulson@6121: lfp(Pow(A),
paulson@6121: \%X. {z: Pow(A) . z = 0 | (EX a b. z = cons(a, b) & a : A & b : X)})
paulson@6121: Proving monotonicity...
paulson@6121: \ttbreak
paulson@6121: Proving the introduction rules...
paulson@6173: The type-checking subgoal:
paulson@6121: 0 : Fin(A)
paulson@6121: 1. 0 : Pow(A)
paulson@6121: \ttbreak
paulson@6121: The subgoal after monos, type_elims:
paulson@6121: 0 : Fin(A)
paulson@6121: 1. 0 : Pow(A)
paulson@6121: *** prove_goal: tactic failed
paulson@6121: \end{ttbox}
paulson@6121: We see the need to supply theorems to let the package prove
paulson@6121: $\emptyset\in\texttt{Pow}(A)$. Restoring the \texttt{type_intrs} but not the
paulson@6121: \texttt{type_elims}, we again get an error message:
paulson@6121: \begin{ttbox}
paulson@6173: The type-checking subgoal:
paulson@6121: 0 : Fin(A)
paulson@6121: 1. 0 : Pow(A)
paulson@6121: \ttbreak
paulson@6121: The subgoal after monos, type_elims:
paulson@6121: 0 : Fin(A)
paulson@6121: 1. 0 : Pow(A)
paulson@6121: \ttbreak
paulson@6173: The type-checking subgoal:
paulson@6121: cons(a, b) : Fin(A)
paulson@6121: 1. [| a : A; b : Fin(A) |] ==> cons(a, b) : Pow(A)
paulson@6121: \ttbreak
paulson@6121: The subgoal after monos, type_elims:
paulson@6121: cons(a, b) : Fin(A)
paulson@6121: 1. [| a : A; b : Pow(A) |] ==> cons(a, b) : Pow(A)
paulson@6121: *** prove_goal: tactic failed
paulson@6121: \end{ttbox}
paulson@6121: The first rule has been type-checked, but the second one has failed. The
paulson@6121: simplest solution to such problems is to prove the failed subgoal separately
paulson@6121: and to supply it under \texttt{type_intrs}. The solution actually used is
paulson@6121: to supply, under \texttt{type_elims}, a rule that changes
paulson@6121: $b\in\texttt{Pow}(A)$ to $b\subseteq A$; together with \texttt{cons_subsetI}
paulson@6121: and \texttt{PowI}, it is enough to complete the type-checking.
paulson@6121:
paulson@6121:
paulson@6121:
paulson@6121: \subsection{Further examples}
paulson@6121:
paulson@6121: An inductive definition may involve arbitrary monotonic operators. Here is a
paulson@6121: standard example: the accessible part of a relation. Note the use
paulson@6121: of~\texttt{Pow} in the introduction rule and the corresponding mention of the
paulson@6121: rule \verb|Pow_mono| in the \texttt{monos} list. If the desired rule has a
paulson@6121: universally quantified premise, the effect can usually be obtained using
paulson@6121: \texttt{Pow}.
paulson@6121: \begin{ttbox}
paulson@6121: consts acc :: i=>i
paulson@6121: inductive
paulson@6121: domains "acc(r)" <= "field(r)"
paulson@6121: intrs
paulson@6121: vimage "[| r-``{a}: Pow(acc(r)); a: field(r) |] ==> a: acc(r)"
paulson@6121: monos Pow_mono
paulson@6121: \end{ttbox}
paulson@6121:
paulson@6121: Finally, here is a coinductive definition. It captures (as a bisimulation)
paulson@6121: the notion of equality on lazy lists, which are first defined as a codatatype:
paulson@6121: \begin{ttbox}
paulson@6121: consts llist :: i=>i
paulson@6121: codatatype "llist(A)" = LNil | LCons ("a: A", "l: llist(A)")
paulson@6121: \ttbreak
paulson@6121:
paulson@6121: consts lleq :: i=>i
paulson@6121: coinductive
paulson@6121: domains "lleq(A)" <= "llist(A) * llist(A)"
paulson@6121: intrs
paulson@6121:     LNil  "<LNil,LNil> : lleq(A)"
paulson@6121:     LCons "[| a:A; <l,l'>: lleq(A) |]
paulson@6121:            ==> <LCons(a,l),LCons(a,l')>: lleq(A)"
paulson@6121: type_intrs "llist.intrs"
paulson@6121: \end{ttbox}
paulson@6121: This use of \texttt{type_intrs} is typical: the relation concerns the
paulson@6121: codatatype \texttt{llist}, so naturally the introduction rules for that
paulson@6121: codatatype will be required for type-checking the rules.
paulson@6121:
paulson@6121: The Isabelle distribution contains many other inductive definitions. Simple
paulson@6121: examples are collected on subdirectory \texttt{ZF/ex}. The directory
paulson@6121: \texttt{Coind} and the theory \texttt{ZF/ex/LList} contain coinductive
paulson@6121: definitions. Larger examples may be found in other subdirectories of
paulson@6121: \texttt{ZF}, such as \texttt{IMP} and \texttt{Resid}.
paulson@6121:
paulson@6121:
paulson@6121: \subsection{The result structure}
paulson@6121:
paulson@6121: Each (co)inductive set defined in a theory file generates an \ML\ substructure
paulson@6121: having the same name. The substructure contains the following elements:
paulson@6121:
paulson@6121: \begin{ttbox}
paulson@6121: val intrs : thm list \textrm{the introduction rules}
paulson@6121: val elim : thm \textrm{the elimination (case analysis) rule}
paulson@6141: val mk_cases : string -> thm \textrm{case analysis, see below}
paulson@6121: val induct : thm \textrm{the standard induction rule}
paulson@6121: val mutual_induct : thm \textrm{the mutual induction rule, or \texttt{True}}
paulson@6121: val defs : thm list \textrm{definitions of operators}
paulson@6121: val bnd_mono : thm list \textrm{monotonicity property}
paulson@6121: val dom_subset : thm list \textrm{inclusion in `bounding set'}
paulson@6121: \end{ttbox}
paulson@6121: Furthermore there is the theorem $C$\texttt{_I} for every constructor~$C$; for
paulson@6121: example, the \texttt{list} datatype's introduction rules are bound to the
paulson@6121: identifiers \texttt{Nil_I} and \texttt{Cons_I}.
paulson@6121:
paulson@6121: For a codatatype, the component \texttt{coinduct} is the coinduction rule,
paulson@6121: replacing the \texttt{induct} component.
paulson@6121:
paulson@6121: Recall that \ttindex{mk_cases} generates simplified instances of the
paulson@6121: elimination (case analysis) rule. It is as useful for inductive definitions
paulson@6121: as it is for datatypes. There are many examples in the theory
paulson@6121: \texttt{ex/Comb}, which is discussed at length
paulson@6121: elsewhere~\cite{paulson-generic}. The theory first defines the datatype
paulson@6121: \texttt{comb} of combinators:
paulson@6121: \begin{ttbox}
paulson@6121: consts comb :: i
paulson@6121: datatype "comb" = K
paulson@6121: | S
paulson@6121: | "#" ("p: comb", "q: comb") (infixl 90)
paulson@6121: \end{ttbox}
paulson@6121: The theory goes on to define contraction and parallel contraction
paulson@6121: inductively. Then the file \texttt{ex/Comb.ML} defines special cases of
paulson@6121: contraction using \texttt{mk_cases}:
paulson@6121: \begin{ttbox}
paulson@6141: val K_contractE = contract.mk_cases "K -1-> r";
paulson@6121: {\out val K_contractE = "K -1-> ?r ==> ?Q" : thm}
paulson@6121: \end{ttbox}
paulson@6121: We can read this as saying that the combinator \texttt{K} cannot reduce to
paulson@6121: anything. Similar elimination rules for \texttt{S} and application are also
paulson@6121: generated and are supplied to the classical reasoner. Note that
paulson@6121: \texttt{comb.con_defs} is given to \texttt{mk_cases} to allow freeness
paulson@6121: reasoning on datatype \texttt{comb}.
paulson@6121:
paulson@6121: \index{*coinductive|)} \index{*inductive|)}
paulson@6121:
paulson@6121:
paulson@6121:
paulson@6121:
paulson@6121: \section{The outer reaches of set theory}
paulson@6121:
paulson@6121: The constructions of the natural numbers and lists use a suite of
paulson@6121: operators for handling recursive function definitions. I have described
paulson@6121: the developments in detail elsewhere~\cite{paulson-set-II}. Here is a brief
paulson@6121: summary:
paulson@6121: \begin{itemize}
paulson@6121: \item Theory \texttt{Trancl} defines the transitive closure of a relation
paulson@6121: (as a least fixedpoint).
paulson@6121:
paulson@6121: \item Theory \texttt{WF} proves the Well-Founded Recursion Theorem, using an
paulson@6121: elegant approach of Tobias Nipkow. This theorem permits general
paulson@6121: recursive definitions within set theory.
paulson@6121:
paulson@6121: \item Theory \texttt{Ord} defines the notions of transitive set and ordinal
paulson@6121: number. It derives transfinite induction. A key definition is {\bf
paulson@6121:   less than}: $i<j$ if and only if $i\in j$ and $j$ is an ordinal.
paulson@6121: \end{itemize}
paulson@6121: 
paulson@6121: 
paulson@6121: \section{A proof about powersets}
paulson@6121: To demonstrate high-level reasoning about subsets, let us prove the equation
paulson@6121: ${\tt Pow}(A\cap B) = {\tt Pow}(A)\cap {\tt Pow}(B)$. The proof below
paulson@6121: exploits the lattice properties of intersection and the monotonicity of the
paulson@6121: powerset operation:
paulson@6121: \begin{ttbox}
paulson@6121: \tdx{Pow_mono}      A<=B ==> Pow(A) <= Pow(B)
paulson@6121: \end{ttbox}
paulson@6121: We enter the goal and make the first step, which breaks the equation into
paulson@6121: two inclusions by extensionality:\index{*equalityI theorem}
paulson@6121: \begin{ttbox}
paulson@6121: Goal "Pow(A Int B) = Pow(A) Int Pow(B)";
paulson@6121: {\out Level 0}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: \ttbreak
paulson@6121: by (resolve_tac [equalityI] 1);
paulson@6121: {\out Level 1}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A Int B) <= Pow(A) Int Pow(B)}
paulson@6121: {\out 2. Pow(A) Int Pow(B) <= Pow(A Int B)}
paulson@6121: \end{ttbox}
paulson@6121: Both inclusions could be tackled straightforwardly using \texttt{subsetI}.
paulson@6121: A shorter proof results from noting that intersection forms the greatest
paulson@6121: lower bound:\index{*Int_greatest theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [Int_greatest] 1);
paulson@6121: {\out Level 2}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A Int B) <= Pow(A)}
paulson@6121: {\out 2. Pow(A Int B) <= Pow(B)}
paulson@6121: {\out 3. Pow(A) Int Pow(B) <= Pow(A Int B)}
paulson@6121: \end{ttbox}
paulson@6121: Subgoal~1 follows by applying the monotonicity of \texttt{Pow} to $A\int
paulson@6121: B\subseteq A$; subgoal~2 follows similarly:
paulson@6121: \index{*Int_lower1 theorem}\index{*Int_lower2 theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [Int_lower1 RS Pow_mono] 1);
paulson@6121: {\out Level 3}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A Int B) <= Pow(B)}
paulson@6121: {\out 2. Pow(A) Int Pow(B) <= Pow(A Int B)}
paulson@6121: \ttbreak
paulson@6121: by (resolve_tac [Int_lower2 RS Pow_mono] 1);
paulson@6121: {\out Level 4}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A) Int Pow(B) <= Pow(A Int B)}
paulson@6121: \end{ttbox}
paulson@6121: We are left with the opposite inclusion, which we tackle in the
paulson@6121: straightforward way:\index{*subsetI theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [subsetI] 1);
paulson@6121: {\out Level 5}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. !!x. x : Pow(A) Int Pow(B) ==> x : Pow(A Int B)}
paulson@6121: \end{ttbox}
paulson@6121: The subgoal is to show $x\in {\tt Pow}(A\cap B)$ assuming $x\in{\tt
paulson@6121: Pow}(A)\cap {\tt Pow}(B)$; eliminating this assumption produces two
paulson@6121: subgoals. The rule \tdx{IntE} treats the intersection like a conjunction
paulson@6121: instead of unfolding its definition.
paulson@6121: \begin{ttbox}
paulson@6121: by (eresolve_tac [IntE] 1);
paulson@6121: {\out Level 6}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. !!x. [| x : Pow(A); x : Pow(B) |] ==> x : Pow(A Int B)}
paulson@6121: \end{ttbox}
paulson@6121: The next step replaces the \texttt{Pow} by the subset
paulson@6121: relation~($\subseteq$).\index{*PowI theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [PowI] 1);
paulson@6121: {\out Level 7}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. !!x. [| x : Pow(A); x : Pow(B) |] ==> x <= A Int B}
paulson@6121: \end{ttbox}
paulson@6121: We perform the same replacement in the assumptions. This is a good
paulson@6121: demonstration of the tactic \ttindex{dresolve_tac}:\index{*PowD theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (REPEAT (dresolve_tac [PowD] 1));
paulson@6121: {\out Level 8}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. !!x. [| x <= A; x <= B |] ==> x <= A Int B}
paulson@6121: \end{ttbox}
paulson@6121: The assumptions are that $x$ is a lower bound of both $A$ and~$B$, but
paulson@6121: $A\int B$ is the greatest lower bound:\index{*Int_greatest theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [Int_greatest] 1);
paulson@6121: {\out Level 9}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. !!x. [| x <= A; x <= B |] ==> x <= A}
paulson@6121: {\out 2. !!x. [| x <= A; x <= B |] ==> x <= B}
paulson@6121: \end{ttbox}
paulson@6121: To conclude the proof, we clear up the trivial subgoals:
paulson@6121: \begin{ttbox}
paulson@6121: by (REPEAT (assume_tac 1));
paulson@6121: {\out Level 10}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out No subgoals!}
paulson@6121: \end{ttbox}
paulson@6121: \medskip
paulson@6121: We could have performed this proof in one step by applying
paulson@6121: \ttindex{Blast_tac}. Let us
paulson@6121: go back to the start:
paulson@6121: \begin{ttbox}
paulson@6121: choplev 0;
paulson@6121: {\out Level 0}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out 1. Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: by (Blast_tac 1);
paulson@6121: {\out Depth = 0}
paulson@6121: {\out Depth = 1}
paulson@6121: {\out Depth = 2}
paulson@6121: {\out Depth = 3}
paulson@6121: {\out Level 1}
paulson@6121: {\out Pow(A Int B) = Pow(A) Int Pow(B)}
paulson@6121: {\out No subgoals!}
paulson@6121: \end{ttbox}
paulson@6121: Past researchers regarded this as a difficult proof, as indeed it is if all
paulson@6121: the symbols are replaced by their definitions.
paulson@6121: \goodbreak
paulson@6121:
paulson@6121: \section{Monotonicity of the union operator}
paulson@6121: For another example, we prove that general union is monotonic:
paulson@6121: ${C\subseteq D}$ implies $\bigcup(C)\subseteq \bigcup(D)$. To begin, we
paulson@6121: tackle the inclusion using \tdx{subsetI}:
paulson@6121: \begin{ttbox}
paulson@6121: Goal "C<=D ==> Union(C) <= Union(D)";
paulson@6121: {\out Level 0}
paulson@6121: {\out C <= D ==> Union(C) <= Union(D)}
paulson@6121: {\out 1. C <= D ==> Union(C) <= Union(D)}
paulson@6121: \ttbreak
paulson@6121: by (resolve_tac [subsetI] 1);
paulson@6121: {\out Level 1}
paulson@6121: {\out C <= D ==> Union(C) <= Union(D)}
paulson@6121: {\out 1. !!x. [| C <= D; x : Union(C) |] ==> x : Union(D)}
paulson@6121: \end{ttbox}
paulson@6121: Big union is like an existential quantifier --- the occurrence in the
paulson@6121: assumptions must be eliminated early, since it creates parameters.
paulson@6121: \index{*UnionE theorem}
paulson@6121: \begin{ttbox}
paulson@6121: by (eresolve_tac [UnionE] 1);
paulson@6121: {\out Level 2}
paulson@6121: {\out C <= D ==> Union(C) <= Union(D)}
paulson@6121: {\out 1. !!x B. [| C <= D; x : B; B : C |] ==> x : Union(D)}
paulson@6121: \end{ttbox}
paulson@6121: Now we may apply \tdx{UnionI}, which creates an unknown involving the
paulson@6121: parameters. To show $x\in \bigcup(D)$ it suffices to show that $x$ belongs
paulson@6121: to some element, say~$\Var{B2}(x,B)$, of~$D$.
paulson@6121: \begin{ttbox}
paulson@6121: by (resolve_tac [UnionI] 1);
paulson@6121: {\out Level 3}
paulson@6121: {\out C <= D ==> Union(C) <= Union(D)}
paulson@6121: {\out 1. !!x B. [| C <= D; x : B; B : C |] ==> ?B2(x,B) : D}
paulson@6121: {\out 2. !!x B. [| C <= D; x : B; B : C |] ==> x : ?B2(x,B)}
paulson@6121: \end{ttbox}
Combining \tdx{subsetD} with the assumption $C\subseteq D$ yields
$\Var{a}\in C \Imp \Var{a}\in D$, which reduces subgoal~1. Note that
\texttt{eresolve_tac} has removed that assumption.
\begin{ttbox}
by (eresolve_tac [subsetD] 1);
{\out Level 4}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| x : B; B : C |] ==> ?B2(x,B) : C}
{\out 2. !!x B. [| C <= D; x : B; B : C |] ==> x : ?B2(x,B)}
\end{ttbox}
The rest is routine. Observe how~$\Var{B2}(x,B)$ is instantiated.
\begin{ttbox}
by (assume_tac 1);
{\out Level 5}
{\out C <= D ==> Union(C) <= Union(D)}
{\out 1. !!x B. [| C <= D; x : B; B : C |] ==> x : B}
by (assume_tac 1);
{\out Level 6}
{\out C <= D ==> Union(C) <= Union(D)}
{\out No subgoals!}
\end{ttbox}
Again, \ttindex{Blast_tac} can prove the theorem in one step.
\begin{ttbox}
by (Blast_tac 1);
{\out Depth = 0}
{\out Depth = 1}
{\out Depth = 2}
{\out Level 1}
{\out C <= D ==> Union(C) <= Union(D)}
{\out No subgoals!}
\end{ttbox}
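The monotonicity property just proved, $C\subseteq D \Imp \bigcup(C)
\subseteq \bigcup(D)$, can be spot-checked on finite families of sets.
The following Python fragment is purely illustrative and no part of the
Isabelle development; the particular sets are arbitrary.

```python
# Finite-family sanity check of the theorem just proved:
#   C <= D  ==>  Union(C) <= Union(D)

def big_union(family):
    """The union of a family of sets, as in ZF's Union(C)."""
    result = set()
    for member in family:
        result |= member
    return result

C = {frozenset({1}), frozenset({2, 3})}
D = C | {frozenset({4})}           # so C <= D by construction

assert C <= D
assert big_union(C) <= big_union(D)
```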

The file \texttt{ZF/equalities.ML} has many similar proofs. Reasoning about
general intersection can be difficult because of its anomalous behaviour on
the empty set. However, \ttindex{Blast_tac} copes well with such proofs.
Here is a typical example, borrowed from Devlin~\cite[page 12]{devlin79}:
\begin{ttbox}
a:C ==> (INT x:C. A(x) Int B(x)) = (INT x:C. A(x)) Int (INT x:C. B(x))
\end{ttbox}
In traditional notation this is
\[ a\in C \,\Imp\, \inter@{x\in C} \Bigl(A(x) \int B(x)\Bigr) =
       \Bigl(\inter@{x\in C} A(x)\Bigr) \int
       \Bigl(\inter@{x\in C} B(x)\Bigr) \]
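A finite instance of Devlin's identity can be checked directly. The premise
$a\in C$ guarantees that $C$ is nonempty, so the big intersections below range
over a nonempty index set and the anomalous empty-family case never arises.
The Python sketch and its data are illustrative only, not part of the theory.

```python
# Finite check of:
#   a:C ==> (INT x:C. A(x) Int B(x)) = (INT x:C. A(x)) Int (INT x:C. B(x))
from functools import reduce

def big_inter(sets):
    """Intersection of a nonempty finite family of sets."""
    return reduce(lambda s, t: s & t, sets)

C = {0, 1, 2}                                  # a = 0 witnesses a:C
A = {0: {1, 2, 3}, 1: {2, 3, 4}, 2: {2, 5}}
B = {0: {2, 6},    1: {2, 3, 6}, 2: {2, 3}}

lhs = big_inter([A[x] & B[x] for x in C])
rhs = big_inter([A[x] for x in C]) & big_inter([B[x] for x in C])
assert lhs == rhs
```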

\section{Low-level reasoning about functions}
The derived rules \texttt{lamI}, \texttt{lamE}, \texttt{lam_type}, \texttt{beta}
and \texttt{eta} support reasoning about functions in a
$\lambda$-calculus style. This is generally easier than regarding
functions as sets of ordered pairs. But sometimes we must look at the
underlying representation, as in the following proof
of~\tdx{fun_disjoint_apply1}. This states that if $f$ and~$g$ are
functions with disjoint domains~$A$ and~$C$, and if $a\in A$, then
$(f\un g)`a = f`a$:
\begin{ttbox}
Goal "[| a:A; f: A->B; g: C->D; A Int C = 0 |] ==> \ttback
\ttback (f Un g)`a = f`a";
{\out Level 0}
{\out [| a : A; f : A -> B; g : C -> D; A Int C = 0 |]}
{\out ==> (f Un g) ` a = f ` a}
{\out 1. [| a : A; f : A -> B; g : C -> D; A Int C = 0 |]}
{\out ==> (f Un g) ` a = f ` a}
\end{ttbox}
Using \tdx{apply_equality}, we reduce the equality to reasoning about
ordered pairs. The second subgoal is to verify that $f\un g$ is a function.
To save space, the assumptions will be abbreviated below.
\begin{ttbox}
by (resolve_tac [apply_equality] 1);
{\out Level 1}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> <a,f ` a> : f Un g}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
We must show that the pair belongs to~$f$ or~$g$; by~\tdx{UnI1} we
choose~$f$:
\begin{ttbox}
by (resolve_tac [UnI1] 1);
{\out Level 2}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> <a,f ` a> : f}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
To show $\pair{a,f`a}\in f$ we use \tdx{apply_Pair}, which is
essentially the converse of \tdx{apply_equality}:
\begin{ttbox}
by (resolve_tac [apply_Pair] 1);
{\out Level 3}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f : (PROD x:?A2. ?B2(x))}
{\out 2. [| \ldots |] ==> a : ?A2}
{\out 3. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
Using the assumptions $f\in A\to B$ and $a\in A$, we solve the two subgoals
from \tdx{apply_Pair}. Recall that a $\Pi$-set is merely a generalized
function space, and observe that~{\tt?A2} is instantiated to~\texttt{A}.
\begin{ttbox}
by (assume_tac 1);
{\out Level 4}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> a : A}
{\out 2. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
by (assume_tac 1);
{\out Level 5}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f Un g : (PROD x:?A. ?B(x))}
\end{ttbox}
To construct functions of the form $f\un g$, we apply
\tdx{fun_disjoint_Un}:
\begin{ttbox}
by (resolve_tac [fun_disjoint_Un] 1);
{\out Level 6}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> f : ?A3 -> ?B3}
{\out 2. [| \ldots |] ==> g : ?C3 -> ?D3}
{\out 3. [| \ldots |] ==> ?A3 Int ?C3 = 0}
\end{ttbox}
The remaining subgoals are instances of the assumptions. Again, observe how
unknowns are instantiated:
\begin{ttbox}
by (assume_tac 1);
{\out Level 7}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> g : ?C3 -> ?D3}
{\out 2. [| \ldots |] ==> A Int ?C3 = 0}
by (assume_tac 1);
{\out Level 8}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out 1. [| \ldots |] ==> A Int C = 0}
by (assume_tac 1);
{\out Level 9}
{\out [| \ldots |] ==> (f Un g) ` a = f ` a}
{\out No subgoals!}
\end{ttbox}
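The theorem just proved can likewise be replayed on a finite instance, with
functions modelled as sets of ordered pairs exactly as in ZF. The Python
sketch below is illustrative only; the particular pairs and names are
arbitrary, not part of the Isabelle development.

```python
# fun_disjoint_apply1 on finite functions: if f and g have disjoint
# domains and a lies in f's domain, then (f Un g)`a = f`a.

f = {('a1', 1), ('a2', 2)}          # a function with domain {a1, a2}
g = {('c1', 3)}                     # a function with domain {c1}

def domain(fn):
    return {x for (x, _) in fn}

def apply_fn(fn, a):
    """ZF's fn`a: the unique y with <a,y> in fn, when fn is a function."""
    images = [y for (x, y) in fn if x == a]
    assert len(images) == 1, "fn is not single-valued at a"
    return images[0]

assert domain(f) & domain(g) == set()          # A Int C = 0
assert apply_fn(f | g, 'a1') == apply_fn(f, 'a1')
```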
See the files \texttt{ZF/func.ML} and \texttt{ZF/WF.ML} for more
examples of reasoning about functions.

\index{set theory|)}