1.1 --- a/doc-src/ZF/ZF.tex Sat Aug 12 21:42:12 2000 +0200
1.2 +++ b/doc-src/ZF/ZF.tex Sat Aug 12 21:42:51 2000 +0200
1.3 @@ -1086,6 +1086,8 @@
1.4 \end{figure}
1.5
1.6
1.7 +\subsection{Disjoint unions}
1.8 +
1.9 Theory \thydx{Sum} defines the disjoint union of two sets, with
1.10 injections and a case analysis operator (Fig.\ts\ref{zf-sum}). Disjoint
1.11 unions play a role in datatype definitions, particularly when there is
1.12 @@ -1107,6 +1109,9 @@
1.13 \caption{Non-standard pairs, products and sums} \label{zf-qpair}
1.14 \end{figure}
1.15
1.16 +
1.17 +\subsection{Non-standard ordered pairs}
1.18 +
1.19 Theory \thydx{QPair} defines a notion of ordered pair that admits
1.20 non-well-founded tupling (Fig.\ts\ref{zf-qpair}). Such pairs are written
1.21 {\tt<$a$;$b$>}. It also defines the eliminator \cdx{qsplit}, the
1.22 @@ -1165,6 +1170,9 @@
1.23 \caption{Least and greatest fixedpoints} \label{zf-fixedpt}
1.24 \end{figure}
1.25
1.26 +
1.27 +\subsection{Least and greatest fixedpoints}
1.28 +
1.29 The Knaster-Tarski Theorem states that every monotone function over a
1.30 complete lattice has a fixedpoint. Theory \thydx{Fixedpt} proves the
1.31 Theorem only for a particular lattice, namely the lattice of subsets of a
1.32 @@ -1184,6 +1192,83 @@
1.33 file \texttt{ZF/mono.ML}.
1.34
1.35
1.36 +\subsection{Finite sets and lists}
1.37 +
1.38 +Theory \texttt{Finite} (Figure~\ref{zf-fin}) defines the finite set operator;
1.39 +${\tt Fin}(A)$ is the set of all finite sets over~$A$. The theory employs
1.40 +Isabelle's inductive definition package, which proves various rules
1.41 +automatically. The induction rule shown is stronger than the one proved by
1.42 +the package. The theory also defines the set of all finite functions
1.43 +between two given sets.
1.44 +
1.45 +\begin{figure}
1.46 +\begin{ttbox}
1.47 +\tdx{Fin.emptyI} 0 : Fin(A)
1.48 +\tdx{Fin.consI} [| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)
1.49 +
1.50 +\tdx{Fin_induct}
1.51 + [| b: Fin(A);
1.52 + P(0);
1.53 + !!x y. [| x: A; y: Fin(A); x~:y; P(y) |] ==> P(cons(x,y))
1.54 + |] ==> P(b)
1.55 +
1.56 +\tdx{Fin_mono} A<=B ==> Fin(A) <= Fin(B)
1.57 +\tdx{Fin_UnI} [| b: Fin(A); c: Fin(A) |] ==> b Un c : Fin(A)
1.58 +\tdx{Fin_UnionI} C : Fin(Fin(A)) ==> Union(C) : Fin(A)
1.59 +\tdx{Fin_subset} [| c<=b; b: Fin(A) |] ==> c: Fin(A)
1.60 +\end{ttbox}
1.61 +\caption{The finite set operator} \label{zf-fin}
1.62 +\end{figure}
1.63 +
1.64 +\begin{figure}
1.65 +\begin{constants}
1.66 + \it symbol & \it meta-type & \it priority & \it description \\
1.67 + \cdx{list} & $i\To i$ && lists over some set\\
1.68 + \cdx{list_case} & $[i, [i,i]\To i, i] \To i$ && conditional for $list(A)$ \\
1.69 + \cdx{map} & $[i\To i, i] \To i$ & & mapping functional\\
1.70 + \cdx{length} & $i\To i$ & & length of a list\\
1.71 + \cdx{rev} & $i\To i$ & & reverse of a list\\
1.72 + \tt \at & $[i,i]\To i$ & Right 60 & append for lists\\
1.73 + \cdx{flat} & $i\To i$ & & append of list of lists
1.74 +\end{constants}
1.75 +
1.76 +\underscoreon %%because @ is used here
1.77 +\begin{ttbox}
1.78 +\tdx{NilI} Nil : list(A)
1.79 +\tdx{ConsI} [| a: A; l: list(A) |] ==> Cons(a,l) : list(A)
1.80 +
1.81 +\tdx{List.induct}
1.82 + [| l: list(A);
1.83 + P(Nil);
1.84 + !!x y. [| x: A; y: list(A); P(y) |] ==> P(Cons(x,y))
1.85 + |] ==> P(l)
1.86 +
1.87 +\tdx{Cons_iff} Cons(a,l)=Cons(a',l') <-> a=a' & l=l'
1.88 +\tdx{Nil_Cons_iff} ~ Nil=Cons(a,l)
1.89 +
1.90 +\tdx{list_mono} A<=B ==> list(A) <= list(B)
1.91 +
1.92 +\tdx{map_ident} l: list(A) ==> map(\%u. u, l) = l
1.93 +\tdx{map_compose} l: list(A) ==> map(h, map(j,l)) = map(\%u. h(j(u)), l)
1.94 +\tdx{map_app_distrib} xs: list(A) ==> map(h, xs@ys) = map(h,xs) @ map(h,ys)
1.95 +\tdx{map_type}
1.96 + [| l: list(A); !!x. x: A ==> h(x): B |] ==> map(h,l) : list(B)
1.97 +\tdx{map_flat}
1.98 + ls: list(list(A)) ==> map(h, flat(ls)) = flat(map(map(h),ls))
1.99 +\end{ttbox}
1.100 +\caption{Lists} \label{zf-list}
1.101 +\end{figure}
1.102 +
1.103 +
1.104 +Figure~\ref{zf-list} presents the set of lists over~$A$, ${\tt list}(A)$. The
1.105 +definition employs Isabelle's datatype package, which defines the introduction
1.106 +and induction rules automatically, as well as the constructors, case operator
1.107 +(\verb|list_case|) and recursion operator. The theory then defines the usual
1.108 +list functions by primitive recursion. See theory \texttt{List}.
1.109 +
1.110 +
1.111 +\subsection{Miscellaneous}
1.112 +
1.113 \begin{figure}
1.114 \begin{constants}
1.115 \it symbol & \it meta-type & \it priority & \it description \\
1.116 @@ -1242,6 +1327,146 @@
1.117 have been proved. These results are fundamental to a treatment of
1.118 equipollence and cardinality.
1.119
1.120 +Theory \thydx{Univ} defines a `universe' $\texttt{univ}(A)$, which is used by
1.121 +the datatype package. This set contains $A$ and the
1.122 +natural numbers. Vitally, it is closed under finite products: ${\tt
1.123 + univ}(A)\times{\tt univ}(A)\subseteq{\tt univ}(A)$. This theory also
1.124 +defines the cumulative hierarchy of axiomatic set theory, which
1.125 +traditionally is written $V@\alpha$ for an ordinal~$\alpha$. The
1.126 +`universe' is a simple generalization of~$V@\omega$.
1.127 +
1.128 +Theory \thydx{QUniv} defines a `universe' ${\tt quniv}(A)$, which is used by
1.129 +the datatype package to construct codatatypes such as streams. It is
1.130 +analogous to ${\tt univ}(A)$ (and is defined in terms of it) but is closed
1.131 +under the non-standard product and sum.
1.132 +
1.133 +
1.134 +\section{Automatic Tools}
1.135 +
1.136 +{\ZF} provides the simplifier and the classical reasoner. Moreover it
1.137 +supplies a specialized tool to infer `types' of terms.
1.138 +
1.139 +\subsection{Simplification}
1.140 +
1.141 +{\ZF} inherits simplification from {\FOL} but adapts it for set theory.  The
1.142 +extraction of rewrite rules takes the {\ZF} primitives into account. It can
1.143 +strip bounded universal quantifiers from a formula; for example, ${\forall
1.144 + x\in A. f(x)=g(x)}$ yields the conditional rewrite rule $x\in A \Imp
1.145 +f(x)=g(x)$. Given $a\in\{x\in A. P(x)\}$ it extracts rewrite rules from $a\in
1.146 +A$ and~$P(a)$. It can also break down $a\in A\int B$ and $a\in A-B$.
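
Schematically, the extraction behaviour just described may be summarized as
follows (an informal sketch of the rewrite-rule extraction, not theorems
of~{\ZF}):
\begin{ttbox}
ALL x:A. f(x)=g(x)      yields    x:A ==> f(x)=g(x)
a : {\ttlbrace}x:A. P(x){\ttrbrace}      yields    a:A  and  P(a)
a : A Int B             yields    a:A  and  a:B
\end{ttbox}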
1.147 +
1.148 +Simplification tactics such as \texttt{Asm_simp_tac} and
1.149 +\texttt{Full_simp_tac} use the default simpset (\texttt{simpset()}), which
1.150 +works for most purposes. A small simplification set for set theory is
1.151 +called~\ttindexbold{ZF_ss}, and you can even use \ttindex{FOL_ss} as a minimal
1.152 +starting point. \texttt{ZF_ss} contains congruence rules for all the binding
1.153 +operators of {\ZF}\@.  It contains all the conversion rules, such as those
1.154 +for \texttt{fst} and \texttt{snd}, as well as the rewrites shown in
1.155 +Fig.\ts\ref{zf-simpdata}. See the file \texttt{ZF/simpdata.ML} for a fuller
1.156 +list.
1.157 +
1.158 +
1.159 +\subsection{Classical Reasoning}
1.160 +
1.161 +As for the classical reasoner, tactics such as \texttt{Blast_tac} and {\tt
1.162 + Best_tac} refer to the default claset (\texttt{claset()}). This works for
1.163 +most purposes. Named clasets include \ttindexbold{ZF_cs} (basic set theory)
1.164 +and \ttindexbold{le_cs} (useful for reasoning about the relations $<$ and
1.165 +$\le$). You can use \ttindex{FOL_cs} as a minimal basis for building your own
1.166 +clasets. See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
1.167 +{Chap.\ts\ref{chap:classical}} for more discussion of classical proof methods.
1.168 +
1.169 +
1.170 +\begin{figure}
1.171 +\begin{eqnarray*}
1.172 + a\in \emptyset & \bimp & \bot\\
1.173 + a \in A \un B & \bimp & a\in A \disj a\in B\\
1.174 + a \in A \int B & \bimp & a\in A \conj a\in B\\
1.175 + a \in A-B & \bimp & a\in A \conj \neg (a\in B)\\
1.176 + \pair{a,b}\in {\tt Sigma}(A,B)
1.177 + & \bimp & a\in A \conj b\in B(a)\\
1.178 + a \in {\tt Collect}(A,P) & \bimp & a\in A \conj P(a)\\
1.179 + (\forall x \in \emptyset. P(x)) & \bimp & \top\\
1.180 + (\forall x \in A. \top) & \bimp & \top
1.181 +\end{eqnarray*}
1.182 +\caption{Some rewrite rules for set theory} \label{zf-simpdata}
1.183 +\end{figure}
1.184 +
1.185 +
1.186 +\subsection{Type-Checking Tactics}
1.187 +\index{type-checking tactics}
1.188 +
1.189 +Isabelle/{\ZF} provides simple tactics to help automate those proofs that are
1.190 +essentially type-checking. Such proofs are built by applying rules such as
1.191 +these:
1.192 +\begin{ttbox}
1.193 +[| ?P ==> ?a: ?A; ~?P ==> ?b: ?A |] ==> (if ?P then ?a else ?b): ?A
1.194 +
1.195 +[| ?m : nat; ?n : nat |] ==> ?m #+ ?n : nat
1.196 +
1.197 +?a : ?A ==> Inl(?a) : ?A + ?B
1.198 +\end{ttbox}
1.199 +In typical applications, the goal has the form $t\in\Var{A}$: in other words,
1.200 +we have a specific term~$t$ and need to infer its `type' by instantiating the
1.201 +set variable~$\Var{A}$. Neither the simplifier nor the classical reasoner
1.202 +does this job well. The if-then-else rule, and many similar ones, can make
1.203 +the classical reasoner loop. The simplifier refuses (on principle) to
1.204 +instantiate variables during rewriting, so goals such as \texttt{i\#+j :\ ?A}
1.205 +are left unsolved.
1.206 +
1.207 +The simplifier calls the type-checker to solve rewritten subgoals: this stage
1.208 +can indeed instantiate variables. If you have defined new constants and
1.209 +proved type-checking rules for them, then insert the rules using
1.210 +\texttt{AddTCs} and the rest should be automatic. In particular, the
1.211 +simplifier will use type-checking to help satisfy conditional rewrite rules.
1.212 +Call the tactic \ttindex{Typecheck_tac} to break down all subgoals using
1.213 +type-checking rules.
1.214 +
1.215 +Though the easiest way to invoke the type-checker is via the simplifier,
1.216 +specialized applications may require more detailed knowledge of
1.217 +the type-checking primitives. They are modelled on the simplifier's:
1.218 +\begin{ttdescription}
1.219 +\item[\ttindexbold{tcset}] is the type of tcsets: sets of type-checking rules.
1.220 +
1.221 +\item[\ttindexbold{addTCs}] is an infix operator to add type-checking rules to
1.222 + a tcset.
1.223 +
1.224 +\item[\ttindexbold{delTCs}] is an infix operator to remove type-checking rules
1.225 + from a tcset.
1.226 +
1.227 +\item[\ttindexbold{typecheck_tac}] is a tactic for attempting to prove all
1.228 + subgoals using the rules given in its argument, a tcset.
1.229 +\end{ttdescription}
1.230 +
1.231 +Tcsets, like simpsets, are associated with theories and are merged when
1.232 +theories are merged. There are further primitives that use the default tcset.
1.233 +\begin{ttdescription}
1.234 +\item[\ttindexbold{tcset}] is a function to return the default tcset; use the
1.235 + expression \texttt{tcset()}.
1.236 +
1.237 +\item[\ttindexbold{AddTCs}] adds type-checking rules to the default tcset.
1.238 +
1.239 +\item[\ttindexbold{DelTCs}] removes type-checking rules from the default
1.240 + tcset.
1.241 +
1.242 +\item[\ttindexbold{Typecheck_tac}] calls \texttt{typecheck_tac} using the
1.243 + default tcset.
1.244 +\end{ttdescription}
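
For example, to attempt all subgoals using a temporarily extended tcset (a
hypothetical invocation assembled from the primitives above;
\texttt{my_typing_rule} stands for some previously proved type-checking rule):
\begin{ttbox}
by (typecheck_tac (tcset() addTCs [my_typing_rule]));
\end{ttbox}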
1.245 +
1.246 +The simplest way to supply some type-checking rules temporarily is to add
1.247 +them using \texttt{Addrules} and later remove them using \texttt{Delrules}.
1.248 +A more sophisticated approach is to call the simplifier with a new solver,
1.249 +expressed using \ttindexbold{type_solver_tac} and your temporary type-checking rules.
1.250 +\begin{ttbox}
1.251 +by (asm_simp_tac
1.252 + (simpset() setSolver type_solver_tac (tcset() addTCs prems)) 2);
1.253 +\end{ttbox}
1.254 +
1.255 +
1.256 +\section{Natural number and integer arithmetic}
1.257 +
1.258 +\index{arithmetic|(}
1.259 +
1.260 \begin{figure}\small
1.261 \index{#*@{\tt\#*} symbol}
1.262 \index{*div symbol}
1.263 @@ -1262,40 +1487,35 @@
1.264 \begin{ttbox}
1.265 \tdx{nat_def} nat == lfp(lam r: Pow(Inf). {\ttlbrace}0{\ttrbrace} Un {\ttlbrace}succ(x). x:r{\ttrbrace}
1.266
1.267 -\tdx{mod_def} m mod n == transrec(m, \%j f. if j:n then j else f`(j#-n))
1.268 -\tdx{div_def} m div n == transrec(m, \%j f. if j:n then 0
1.269 - else succ(f`(j#-n)))
1.270 -
1.271 \tdx{nat_case_def} nat_case(a,b,k) ==
1.272 THE y. k=0 & y=a | (EX x. k=succ(x) & y=b(x))
1.273
1.274 -\tdx{nat_0I} 0 : nat
1.275 -\tdx{nat_succI} n : nat ==> succ(n) : nat
1.276 +\tdx{nat_0I} 0 : nat
1.277 +\tdx{nat_succI} n : nat ==> succ(n) : nat
1.278
1.279 \tdx{nat_induct}
1.280 [| n: nat; P(0); !!x. [| x: nat; P(x) |] ==> P(succ(x))
1.281 |] ==> P(n)
1.282
1.283 -\tdx{nat_case_0} nat_case(a,b,0) = a
1.284 -\tdx{nat_case_succ} nat_case(a,b,succ(m)) = b(m)
1.285 -
1.286 -\tdx{add_0} 0 #+ n = n
1.287 -\tdx{add_succ} succ(m) #+ n = succ(m #+ n)
1.288 -
1.289 -\tdx{mult_type} [| m:nat; n:nat |] ==> m #* n : nat
1.290 -\tdx{mult_0} 0 #* n = 0
1.291 -\tdx{mult_succ} succ(m) #* n = n #+ (m #* n)
1.292 -\tdx{mult_commute} [| m:nat; n:nat |] ==> m #* n = n #* m
1.293 -\tdx{add_mult_dist} [| m:nat; k:nat |] ==>
1.294 - (m #+ n) #* k = (m #* k) #+ (n #* k)
1.295 -\tdx{mult_assoc}
1.296 - [| m:nat; n:nat; k:nat |] ==> (m #* n) #* k = m #* (n #* k)
1.297 -\tdx{mod_quo_equality}
1.298 - [| 0:n; m:nat; n:nat |] ==> (m div n)#*n #+ m mod n = m
1.299 +\tdx{nat_case_0} nat_case(a,b,0) = a
1.300 +\tdx{nat_case_succ} nat_case(a,b,succ(m)) = b(m)
1.301 +
1.302 +\tdx{add_0_natify} 0 #+ n = natify(n)
1.303 +\tdx{add_succ} succ(m) #+ n = succ(m #+ n)
1.304 +
1.305 +\tdx{mult_type} m #* n : nat
1.306 +\tdx{mult_0} 0 #* n = 0
1.307 +\tdx{mult_succ} succ(m) #* n = n #+ (m #* n)
1.308 +\tdx{mult_commute} m #* n = n #* m
1.309 +\tdx{add_mult_dist} (m #+ n) #* k = (m #* k) #+ (n #* k)
1.310 +\tdx{mult_assoc} (m #* n) #* k = m #* (n #* k)
1.311 +\tdx{mod_div_equality} m: nat ==> (m div n)#*n #+ m mod n = m
1.312 \end{ttbox}
1.313 \caption{The natural numbers} \label{zf-nat}
1.314 \end{figure}
1.315
1.316 +\index{natural numbers}
1.317 +
1.318 Theory \thydx{Nat} defines the natural numbers and mathematical
1.319 induction, along with a case analysis operator. The set of natural
1.320 numbers, here called \texttt{nat}, is known in set theory as the ordinal~$\omega$.
1.321 @@ -1309,213 +1529,85 @@
1.322 laws, etc. The most interesting result is perhaps the theorem $a \bmod b +
1.323 (a/b)\times b = a$.
1.324
1.325 -Theory \thydx{Univ} defines a `universe' $\texttt{univ}(A)$, which is used by
1.326 -the datatype package. This set contains $A$ and the
1.327 -natural numbers. Vitally, it is closed under finite products: ${\tt
1.328 - univ}(A)\times{\tt univ}(A)\subseteq{\tt univ}(A)$. This theory also
1.329 -defines the cumulative hierarchy of axiomatic set theory, which
1.330 -traditionally is written $V@\alpha$ for an ordinal~$\alpha$. The
1.331 -`universe' is a simple generalization of~$V@\omega$.
1.332 -
1.333 -Theory \thydx{QUniv} defines a `universe' ${\tt quniv}(A)$, which is used by
1.334 -the datatype package to construct codatatypes such as streams. It is
1.335 -analogous to ${\tt univ}(A)$ (and is defined in terms of it) but is closed
1.336 -under the non-standard product and sum.
1.337 -
1.338 -Theory \texttt{Finite} (Figure~\ref{zf-fin}) defines the finite set operator;
1.339 -${\tt Fin}(A)$ is the set of all finite sets over~$A$. The theory employs
1.340 -Isabelle's inductive definition package, which proves various rules
1.341 -automatically. The induction rule shown is stronger than the one proved by
1.342 -the package. The theory also defines the set of all finite functions
1.343 -between two given sets.
1.344 -
1.345 -\begin{figure}
1.346 +To minimize the need for tedious proofs of $t\in\texttt{nat}$, the arithmetic
1.347 +operators coerce their arguments to be natural numbers. The function
1.348 +\cdx{natify} is defined such that $\texttt{natify}(n) = n$ if $n$ is a natural
1.349 +number, $\texttt{natify}(\texttt{succ}(x)) =
1.350 +\texttt{succ}(\texttt{natify}(x))$ for all $x$, and finally
1.351 +$\texttt{natify}(x)=0$ in all other cases. The benefit is that the addition,
1.352 +subtraction, multiplication, division and remainder operators always return
1.353 +natural numbers, regardless of their arguments. Algebraic laws (commutative,
1.354 +associative, distributive) are unconditional. Occurrences of \texttt{natify}
1.355 +as operands of those operators are simplified away. Any remaining occurrences
1.356 +can either be tolerated or else eliminated by proving that the argument is a
1.357 +natural number.
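
As a small worked example of this coercion (an illustrative calculation from
the rules above, not a theorem of the theory; \texttt{Nil} serves merely as a
non-numeric argument):
\begin{ttbox}
succ(0) #+ Nil = succ(0 #+ Nil)       by add_succ
               = succ(natify(Nil))    by add_0_natify
               = succ(0)              as Nil is not a natural number
\end{ttbox}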
1.358 +
1.359 +The simplifier automatically cancels common terms on the opposite sides of
1.360 +subtraction and of relations ($=$, $<$ and $\le$). Here is an example:
1.361 \begin{ttbox}
1.362 -\tdx{Fin.emptyI} 0 : Fin(A)
1.363 -\tdx{Fin.consI} [| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)
1.364 -
1.365 -\tdx{Fin_induct}
1.366 - [| b: Fin(A);
1.367 - P(0);
1.368 - !!x y. [| x: A; y: Fin(A); x~:y; P(y) |] ==> P(cons(x,y))
1.369 - |] ==> P(b)
1.370 -
1.371 -\tdx{Fin_mono} A<=B ==> Fin(A) <= Fin(B)
1.372 -\tdx{Fin_UnI} [| b: Fin(A); c: Fin(A) |] ==> b Un c : Fin(A)
1.373 -\tdx{Fin_UnionI} C : Fin(Fin(A)) ==> Union(C) : Fin(A)
1.374 -\tdx{Fin_subset} [| c<=b; b: Fin(A) |] ==> c: Fin(A)
1.375 + 1. i #+ j #+ k #- j < k #+ l
1.376 +> by (Simp_tac 1);
1.377 + 1. natify(i) < natify(l)
1.378 \end{ttbox}
1.379 -\caption{The finite set operator} \label{zf-fin}
1.380 -\end{figure}
1.381 -
1.382 -\begin{figure}
1.383 +Given the assumptions \texttt{i:nat} and \texttt{l:nat}, both occurrences of
1.384 +\cdx{natify} would be simplified away.
1.385 +
1.386 +
1.387 +\begin{figure}\small
1.388 +\index{$*@{\tt\$*} symbol}
1.389 +\index{$+@{\tt\$+} symbol}
1.390 +\index{$-@{\tt\$-} symbol}
1.391 \begin{constants}
1.392 \it symbol & \it meta-type & \it priority & \it description \\
1.393 - \cdx{list} & $i\To i$ && lists over some set\\
1.394 - \cdx{list_case} & $[i, [i,i]\To i, i] \To i$ && conditional for $list(A)$ \\
1.395 - \cdx{map} & $[i\To i, i] \To i$ & & mapping functional\\
1.396 - \cdx{length} & $i\To i$ & & length of a list\\
1.397 - \cdx{rev} & $i\To i$ & & reverse of a list\\
1.398 - \tt \at & $[i,i]\To i$ & Right 60 & append for lists\\
1.399 - \cdx{flat} & $i\To i$ & & append of list of lists
1.400 + \cdx{int} & $i$ & & set of integers \\
1.401 + \tt \$* & $[i,i]\To i$ & Left 70 & multiplication \\
1.402 + \tt \$+ & $[i,i]\To i$ & Left 65 & addition\\
1.403 + \tt \$- & $[i,i]\To i$ & Left 65 & subtraction\\
1.404 + \tt \$< & $[i,i]\To o$ & Left 50 & $<$ on integers\\
1.405 + \tt \$<= & $[i,i]\To o$ & Left 50 & $\le$ on integers
1.406 \end{constants}
1.407
1.408 -\underscoreon %%because @ is used here
1.409 \begin{ttbox}
1.410 -\tdx{NilI} Nil : list(A)
1.411 -\tdx{ConsI} [| a: A; l: list(A) |] ==> Cons(a,l) : list(A)
1.412 -
1.413 -\tdx{List.induct}
1.414 - [| l: list(A);
1.415 - P(Nil);
1.416 - !!x y. [| x: A; y: list(A); P(y) |] ==> P(Cons(x,y))
1.417 - |] ==> P(l)
1.418 -
1.419 -\tdx{Cons_iff} Cons(a,l)=Cons(a',l') <-> a=a' & l=l'
1.420 -\tdx{Nil_Cons_iff} ~ Nil=Cons(a,l)
1.421 -
1.422 -\tdx{list_mono} A<=B ==> list(A) <= list(B)
1.423 -
1.424 -\tdx{map_ident} l: list(A) ==> map(\%u. u, l) = l
1.425 -\tdx{map_compose} l: list(A) ==> map(h, map(j,l)) = map(\%u. h(j(u)), l)
1.426 -\tdx{map_app_distrib} xs: list(A) ==> map(h, xs@ys) = map(h,xs) @ map(h,ys)
1.427 -\tdx{map_type}
1.428 - [| l: list(A); !!x. x: A ==> h(x): B |] ==> map(h,l) : list(B)
1.429 -\tdx{map_flat}
1.430 - ls: list(list(A)) ==> map(h, flat(ls)) = flat(map(map(h),ls))
1.431 +\tdx{zadd_0_intify} 0 $+ n = intify(n)
1.432 +
1.433 +\tdx{zmult_type} m $* n : int
1.434 +\tdx{zmult_0} 0 $* n = 0
1.435 +\tdx{zmult_commute} m $* n = n $* m
1.436 +\tdx{zadd_zmult_dist} (m $+ n) $* k = (m $* k) $+ (n $* k)
1.437 +\tdx{zmult_assoc} (m $* n) $* k = m $* (n $* k)
1.438 \end{ttbox}
1.439 -\caption{Lists} \label{zf-list}
1.440 +\caption{The integers} \label{zf-int}
1.441 \end{figure}
1.442
1.443
1.444 -Figure~\ref{zf-list} presents the set of lists over~$A$, ${\tt list}(A)$. The
1.445 -definition employs Isabelle's datatype package, which defines the introduction
1.446 -and induction rules automatically, as well as the constructors, case operator
1.447 -(\verb|list_case|) and recursion operator. The theory then defines the usual
1.448 -list functions by primitive recursion. See theory \texttt{List}.
1.449 -
1.450 -
1.451 -\section{Automatic Tools}
1.452 -
1.453 -{\ZF} provides the simplifier and the classical reasoner. Moreover it
1.454 -supplies a specialized tool to infer `types' of terms.
1.455 -
1.456 -\subsection{Simplification}
1.457 -
1.458 -{\ZF} inherits simplification from {\FOL} but adopts it for set theory. The
1.459 -extraction of rewrite rules takes the {\ZF} primitives into account. It can
1.460 -strip bounded universal quantifiers from a formula; for example, ${\forall
1.461 - x\in A. f(x)=g(x)}$ yields the conditional rewrite rule $x\in A \Imp
1.462 -f(x)=g(x)$. Given $a\in\{x\in A. P(x)\}$ it extracts rewrite rules from $a\in
1.463 -A$ and~$P(a)$. It can also break down $a\in A\int B$ and $a\in A-B$.
1.464 -
1.465 -Simplification tactics tactics such as \texttt{Asm_simp_tac} and
1.466 -\texttt{Full_simp_tac} use the default simpset (\texttt{simpset()}), which
1.467 -works for most purposes. A small simplification set for set theory is
1.468 -called~\ttindexbold{ZF_ss}, and you can even use \ttindex{FOL_ss} as a minimal
1.469 -starting point. \texttt{ZF_ss} contains congruence rules for all the binding
1.470 -operators of {\ZF}\@. It contains all the conversion rules, such as
1.471 -\texttt{fst} and \texttt{snd}, as well as the rewrites shown in
1.472 -Fig.\ts\ref{zf-simpdata}. See the file \texttt{ZF/simpdata.ML} for a fuller
1.473 -list.
1.474 -
1.475 -
1.476 -\subsection{Classical Reasoning}
1.477 -
1.478 -As for the classical reasoner, tactics such as \texttt{Blast_tac} and {\tt
1.479 - Best_tac} refer to the default claset (\texttt{claset()}). This works for
1.480 -most purposes. Named clasets include \ttindexbold{ZF_cs} (basic set theory)
1.481 -and \ttindexbold{le_cs} (useful for reasoning about the relations $<$ and
1.482 -$\le$). You can use \ttindex{FOL_cs} as a minimal basis for building your own
1.483 -clasets. See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
1.484 -{Chap.\ts\ref{chap:classical}} for more discussion of classical proof methods.
1.485 -
1.486 -
1.487 -\begin{figure}
1.488 -\begin{eqnarray*}
1.489 - a\in \emptyset & \bimp & \bot\\
1.490 - a \in A \un B & \bimp & a\in A \disj a\in B\\
1.491 - a \in A \int B & \bimp & a\in A \conj a\in B\\
1.492 - a \in A-B & \bimp & a\in A \conj \neg (a\in B)\\
1.493 - \pair{a,b}\in {\tt Sigma}(A,B)
1.494 - & \bimp & a\in A \conj b\in B(a)\\
1.495 - a \in {\tt Collect}(A,P) & \bimp & a\in A \conj P(a)\\
1.496 - (\forall x \in \emptyset. P(x)) & \bimp & \top\\
1.497 - (\forall x \in A. \top) & \bimp & \top
1.498 -\end{eqnarray*}
1.499 -\caption{Some rewrite rules for set theory} \label{zf-simpdata}
1.500 -\end{figure}
1.501 -
1.502 -
1.503 -\subsection{Type-Checking Tactics}
1.504 -\index{type-checking tactics}
1.505 -
1.506 -Isabelle/{\ZF} provides simple tactics to help automate those proofs that are
1.507 -essentially type-checking. Such proofs are built by applying rules such as
1.508 -these:
1.509 +\index{integers}
1.510 +
1.511 +Theory \thydx{Int} defines the integers as equivalence classes of pairs of
1.512 +natural numbers.  Figure~\ref{zf-int} presents a tidy collection of laws.  In
1.513 +fact, a large library of facts is proved, including monotonicity laws for
1.514 +addition and multiplication, covering both positive and negative operands.
1.515 +
1.516 +As with the natural numbers, the need for typing proofs is minimized. All the
1.517 +operators defined in Fig.\ts\ref{zf-int} coerce their operands to integers by
1.518 +applying the function \cdx{intify}. This function is the identity on integers
1.519 +and maps other operands to zero.
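
For instance (an illustrative calculation, not a theorem from the theory,
combining \texttt{zadd_0_intify} with the behaviour of \cdx{intify} just
described; \texttt{Nil} is merely some non-integer argument):
\begin{ttbox}
0 $+ Nil = intify(Nil) = 0
\end{ttbox}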
1.520 +
1.521 +Decimal notation is provided for the integers. Numbers, written as
1.522 +\texttt{\#$nnn$} or \texttt{\#-$nnn$}, are represented internally in
1.523 +two's-complement binary. Expressions involving addition, subtraction and
1.524 +multiplication of numeral constants are evaluated (with acceptable efficiency)
1.525 +by simplification. The simplifier also collects similar terms, multiplying
1.526 +them by a numerical coefficient.  It also cancels occurrences of the same
1.527 +terms on opposite sides of the relational operators.  For example:
1.528 \begin{ttbox}
1.529 -[| ?P ==> ?a: ?A; ~?P ==> ?b: ?A |] ==> (if ?P then ?a else ?b): ?A
1.530 -
1.531 -[| ?m : nat; ?n : nat |] ==> ?m #+ ?n : nat
1.532 -
1.533 -?a : ?A ==> Inl(?a) : ?A + ?B
1.534 + 1. y $+ z $+ #-3 $* x $+ y $<= x $* #2 $+ z
1.535 +> by (Simp_tac 1);
1.536 + 1. #2 $* y $<= #5 $* x
1.537 \end{ttbox}
1.538 -In typical applications, the goal has the form $t\in\Var{A}$: in other words,
1.539 -we have a specific term~$t$ and need to infer its `type' by instantiating the
1.540 -set variable~$\Var{A}$. Neither the simplifier nor the classical reasoner
1.541 -does this job well. The if-then-else rule, and many similar ones, can make
1.542 -the classical reasoner loop. The simplifier refuses (on principle) to
1.543 -instantiate variables during rewriting, so goals such as \texttt{i\#+j :\ ?A}
1.544 -are left unsolved.
1.545 -
1.546 -The simplifier calls the type-checker to solve rewritten subgoals: this stage
1.547 -can indeed instantiate variables. If you have defined new constants and
1.548 -proved type-checking rules for them, then insert the rules using
1.549 -\texttt{AddTCs} and the rest should be automatic. In particular, the
1.550 -simplifier will use type-checking to help satisfy conditional rewrite rules.
1.551 -Call the tactic \ttindex{Typecheck_tac} to break down all subgoals using
1.552 -type-checking rules.
1.553 -
1.554 -Though the easiest way to invoke the type-checker is via the simplifier,
1.555 -specialized applications may require more detailed knowledge of
1.556 -the type-checking primitives. They are modelled on the simplifier's:
1.557 -\begin{ttdescription}
1.558 -\item[\ttindexbold{tcset}] is the type of tcsets: sets of type-checking rules.
1.559 -
1.560 -\item[\ttindexbold{addTCs}] is an infix operator to add type-checking rules to
1.561 - a tcset.
1.562 -
1.563 -\item[\ttindexbold{delTCs}] is an infix operator to remove type-checking rules
1.564 - from a tcset.
1.565 -
1.566 -\item[\ttindexbold{typecheck_tac}] is a tactic for attempting to prove all
1.567 - subgoals using the rules given in its argument, a tcset.
1.568 -\end{ttdescription}
1.569 -
1.570 -Tcsets, like simpsets, are associated with theories and are merged when
1.571 -theories are merged. There are further primitives that use the default tcset.
1.572 -\begin{ttdescription}
1.573 -\item[\ttindexbold{tcset}] is a function to return the default tcset; use the
1.574 - expression \texttt{tcset()}.
1.575 -
1.576 -\item[\ttindexbold{AddTCs}] adds type-checking rules to the default tcset.
1.577 -
1.578 -\item[\ttindexbold{DelTCs}] removes type-checking rules from the default
1.579 - tcset.
1.580 -
1.581 -\item[\ttindexbold{Typecheck_tac}] calls \texttt{typecheck_tac} using the
1.582 - default tcset.
1.583 -\end{ttdescription}
1.584 -
1.585 -To supply some type-checking rules temporarily, using \texttt{Addrules} and
1.586 -later \texttt{Delrules} is the simplest way. There is also a high-tech
1.587 -approach. Call the simplifier with a new solver expressed using
1.588 -\ttindexbold{type_solver_tac} and your temporary type-checking rules.
1.589 -\begin{ttbox}
1.590 -by (asm_simp_tac
1.591 - (simpset() setSolver type_solver_tac (tcset() addTCs prems)) 2);
1.592 -\end{ttbox}
1.593 -
1.594 +For more information on the integers, please see the theories in directory
1.595 +\texttt{ZF/Integ}.
1.596 +
1.597 +\index{arithmetic|)}
1.598
1.599
1.600 \section{Datatype definitions}