First draft of Springer book
author lcp
Sat, 19 Mar 1994 03:01:25 +0100
changeset 284 1072b18b2caa
parent 283 76caebd18756
child 285 fd4a6585e5bf
First draft of Springer book
doc-src/Intro/advanced.tex
doc-src/Logics/CTT.tex
     1.1 --- a/doc-src/Intro/advanced.tex	Thu Mar 17 17:48:37 1994 +0100
     1.2 +++ b/doc-src/Intro/advanced.tex	Sat Mar 19 03:01:25 1994 +0100
     1.3 @@ -1,17 +1,8 @@
     1.4  %% $Id$
     1.5 -\part{Advanced methods}
     1.6 +\part{Advanced Methods}
     1.7  Before continuing, it might be wise to try some of your own examples in
     1.8  Isabelle, reinforcing your knowledge of the basic functions.
     1.9 -This paper is merely an introduction to Isabelle.  Two other documents
    1.10 -exist:
    1.11 -\begin{itemize}
    1.12 -  \item {\em The Isabelle Reference Manual\/} contains information about
    1.13 -most of the facilities of Isabelle, apart from particular object-logics.
    1.14  
    1.15 -  \item {\em Isabelle's Object-Logics\/} describes the various logics
    1.16 -distributed with Isabelle.  It also explains how to define new logics in
    1.17 -Isabelle.
    1.18 -\end{itemize}
    1.19  Look through {\em Isabelle's Object-Logics\/} and try proving some simple
    1.20  theorems.  You probably should begin with first-order logic ({\tt FOL}
    1.21  or~{\tt LK}).  Try working some of the examples provided, and others from
    1.22 @@ -225,11 +216,13 @@
    1.23  
    1.24  
    1.25  \subsubsection{Deriving the elimination rule}
    1.26 -Let us derive $(\neg E)$.  The proof follows that of~{\tt conjE}
    1.27 +Let us derive the rule $(\neg E)$.  The proof follows that of~{\tt conjE}
    1.28  (\S\ref{deriving-example}), with an additional step to unfold negation in
    1.29  the major premise.  Although the {\tt goalw} command is best for this, let
    1.30 -us try~\ttindex{goal}.  As usual, we bind the premises to \ML\ identifiers.
    1.31 -We then apply \ttindex{FalseE}, which stands for~$(\bot E)$:
    1.32 +us try~\ttindex{goal} and examine another way of unfolding definitions.
    1.33 +
    1.34 +As usual, we bind the premises to \ML\ identifiers.  We then apply
    1.35 +\ttindex{FalseE}, which stands for~$(\bot E)$:
    1.36  \begin{ttbox}
    1.37  val [major,minor] = goal FOL.thy "[| ~P;  P |] ==> R";
    1.38  {\out Level 0}
    1.39 @@ -242,7 +235,10 @@
    1.40  {\out Level 1}
    1.41  {\out R}
    1.42  {\out  1. False}
    1.43 -\ttbreak
    1.44 +\end{ttbox}
    1.45 +Everything follows from falsity.  And we can prove falsity using the
    1.46 +premises and Modus Ponens:
    1.47 +\begin{ttbox}
    1.48  by (resolve_tac [mp] 1);
    1.49  {\out Level 2}
    1.50  {\out R}
    1.51 @@ -261,7 +257,8 @@
    1.52  {\out R}
    1.53  {\out  1. P}
    1.54  \end{ttbox}
    1.55 -Now {\tt?P1} has changed to~{\tt P}; we need only use the minor premise:
     1.56 +The subgoal {\tt?P1} has been instantiated to~{\tt P}, which we can prove
    1.57 +using the minor premise:
    1.58  \begin{ttbox}
    1.59  by (resolve_tac [minor] 1);
    1.60  {\out Level 4}
    1.61 @@ -273,9 +270,9 @@
    1.62  \indexbold{*notE}
    1.63  
    1.64  \medskip
    1.65 -Again, there is a simpler way of conducting this proof.  The
    1.66 -\ttindex{goalw} command unfolds definitions in the premises as well
    1.67 -as the conclusion:
    1.68 +Again, there is a simpler way of conducting this proof.  Recall that
     1.69 +the \ttindex{goalw} command unfolds definitions in the conclusion; it also
    1.70 +unfolds definitions in the premises:
    1.71  \begin{ttbox}
    1.72  val [major,minor] = goalw FOL.thy [not_def]
    1.73      "[| ~P;  P |] ==> R";
    1.74 @@ -285,7 +282,8 @@
    1.75  Observe the difference in {\tt major}; the premises are now {\bf unfolded}
    1.76  and we need not call~\ttindex{rewrite_rule}.  Incidentally, the four calls
    1.77  to \ttindex{resolve_tac} above can be collapsed to one, with the help
    1.78 -of~\ttindex{RS}\@:
    1.79 +of~\ttindex{RS}; this is a typical example of forward reasoning from a
    1.80 +complex premise.
    1.81  \begin{ttbox}
    1.82  minor RS (major RS mp RS FalseE);
    1.83  {\out val it = "?P  [P, ~P]" : thm}
    1.84 @@ -295,11 +293,12 @@
    1.85  {\out No subgoals!}
    1.86  \end{ttbox}
    1.87  
    1.88 -
    1.89 -\medskip Finally, here is a trick that is sometimes useful.  If the goal
    1.90 +\goodbreak\medskip
    1.91 +Finally, here is a trick that is sometimes useful.  If the goal
    1.92  has an outermost meta-quantifier, then \ttindex{goal} and \ttindex{goalw}
    1.93 -do not return the rule's premises in the list of theorems.  Instead, the
    1.94 -premises become assumptions in subgoal~1:
    1.95 +do not return the rule's premises in the list of theorems;  instead, the
    1.96 +premises become assumptions in subgoal~1.  
    1.97 +%%%It does not matter which variables are quantified over.
    1.98  \begin{ttbox}
    1.99  goalw FOL.thy [not_def] "!!P R. [| ~P;  P |] ==> R";
   1.100  {\out Level 0}
   1.101 @@ -338,7 +337,7 @@
   1.102  for deriving the introduction rule~$(\neg I)$.
   1.103  
   1.104  
   1.105 -\section{Defining theories}
   1.106 +\section{Defining theories}\label{sec:defining-theories}
   1.107  \index{theories!defining|(}
   1.108  Isabelle makes no distinction between simple extensions of a logic --- like
   1.109  defining a type~$bool$ with constants~$true$ and~$false$ --- and defining
   1.110 @@ -362,8 +361,7 @@
   1.111  associated concrete syntax.  The translations section specifies rewrite
   1.112  rules on abstract syntax trees, for defining notations and abbreviations.
   1.113  The {\ML} section contains code to perform arbitrary syntactic
   1.114 -transformations.  The main declaration forms are discussed below; see {\em
   1.115 -  Isabelle's Object-Logics} for full details and examples.
   1.116 +transformations.  The main declaration forms are discussed below.
   1.117  
   1.118  All the declaration parts can be omitted.  In the simplest case, $T$ is
   1.119  just the union of $S@1$,~\ldots,~$S@n$.  New theories always extend one
   1.120 @@ -396,7 +394,7 @@
   1.121  bindings to the old rule may persist.  Isabelle ensures that the old and
   1.122  new versions of~$T$ are not involved in the same proof.  Attempting to
   1.123  combine different versions of~$T$ yields the fatal error
   1.124 -\begin{ttbox} 
   1.125 +\begin{ttbox}
   1.126  Attempt to merge different versions of theory: \(T\)
   1.127  \end{ttbox}
   1.128  
   1.129 @@ -424,13 +422,20 @@
   1.130          \(id@n\) "\(rule@n\)"
   1.131  \end{ttbox}
   1.132  where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
   1.133 -$rule@n$ are expressions of type~$prop$.  {\bf Definitions} are rules of
   1.134 -the form $t\equiv u$.  Each rule {\em must\/} be enclosed in quotation marks.
   1.135 +$rule@n$ are expressions of type~$prop$.  Each rule {\em must\/} be
   1.136 +enclosed in quotation marks.
   1.137 +
   1.138 +{\bf Definitions} are rules of the form $t\equiv u$.  Normally definitions
   1.139 +should be conservative, serving only as abbreviations.  As of this writing,
   1.140 +Isabelle does not provide a separate declaration part for definitions; it
   1.141 +is your responsibility to ensure that your definitions are conservative.
   1.142 +However, Isabelle's rewriting primitives will reject $t\equiv u$ unless all
   1.143 +variables free in~$u$ are also free in~$t$.
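For illustration, here is a hypothetical definition (not from any distributed
theory) that the rewriting primitives would reject, because the variable
{\tt Q} is free in the right-hand side but not in the left:
\begin{ttbox}
rules   bad_def  "bad(P) == ~(P & Q)"
\end{ttbox}
Making {\tt Q} an argument of {\tt bad}, as {\tt nand} does in the theory
{\tt Gate} below, yields an acceptable definition.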
   1.144  
   1.145  \index{examples!of theories}
   1.146  This theory extends first-order logic with two constants {\em nand} and
   1.147 -{\em xor}, and two rules defining them:
   1.148 -\begin{ttbox} 
   1.149 +{\em xor}, and declares rules to define them:
   1.150 +\begin{ttbox}
   1.151  Gate = FOL +
   1.152  consts  nand,xor :: "[o,o] => o"
   1.153  rules   nand_def "nand(P,Q) == ~(P & Q)"
   1.154 @@ -440,21 +445,28 @@
   1.155  
   1.156  
   1.157  \subsection{Declaring type constructors}
   1.158 -\indexbold{type constructors!declaring}\indexbold{arities!declaring}
   1.159 +\indexbold{types!declaring}\indexbold{arities!declaring} 
   1.160 +%
   1.161  Types are composed of type variables and {\bf type constructors}.  Each
   1.162 -type constructor has a fixed number of argument places.  For example,
   1.163 -$list$ is a 1-place type constructor and $nat$ is a 0-place type
   1.164 -constructor.
   1.165 +type constructor takes a fixed number of arguments.  They are declared
   1.166 +with an \ML-like syntax.  If $list$ takes one type argument, $tree$ takes
    1.167 +two arguments, and $nat$ takes no arguments, then these type constructors
   1.168 +can be declared by
   1.169 +\begin{ttbox}
   1.170 +types 'a list
   1.171 +      ('a,'b) tree
   1.172 +      nat
   1.173 +\end{ttbox}
   1.174  
   1.175 -The {\bf type declaration part} has the form
   1.176 +The {\bf type declaration part} has the general form
   1.177  \begin{ttbox}
   1.178 -types   \(id@1\) \(k@1\)
   1.179 +types   \(tids@1\) \(id@1\)
   1.180          \vdots
   1.181 -        \(id@n\) \(k@n\)
    1.182 +        \(tids@n\) \(id@n\)
   1.183  \end{ttbox}
   1.184 -where $id@1$, \ldots, $id@n$ are identifiers and $k@1$, \ldots, $k@n$ are
   1.185 -natural numbers.  It declares each $id@i$ as a type constructor with $k@i$
   1.186 -argument places.
   1.187 +where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
   1.188 +are type argument lists as shown in the example above.  It declares each
   1.189 +$id@i$ as a type constructor with the specified number of argument places.
   1.190  
   1.191  The {\bf arity declaration part} has the form
   1.192  \begin{ttbox}
   1.193 @@ -465,28 +477,26 @@
   1.194  where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
   1.195  $arity@n$ are arities.  Arity declarations add arities to existing
   1.196  types; they complement type declarations.
   1.197 -
    1.198  In the simplest case, for a 0-place type constructor, an arity is simply
   1.199  the type's class.  Let us declare a type~$bool$ of class $term$, with
   1.200 -constants $tt$ and~$ff$:\footnote{In first-order logic, booleans are
   1.201 -distinct from formulae, which have type $o::logic$.}
   1.202 +constants $tt$ and~$ff$.  (In first-order logic, booleans are
   1.203 +distinct from formulae, which have type $o::logic$.)
   1.204  \index{examples!of theories}
   1.205 -\begin{ttbox} 
   1.206 +\begin{ttbox}
   1.207  Bool = FOL +
   1.208 -types   bool 0
   1.209 +types   bool
   1.210  arities bool    :: term
   1.211  consts  tt,ff   :: "bool"
   1.212  end
   1.213  \end{ttbox}
   1.214 -In the general case, type constructors take arguments.  Each type
   1.215 -constructor has an {\bf arity} with respect to
   1.216 -classes~(\S\ref{polymorphic}).  A $k$-place type constructor may have
   1.217 -arities of the form $(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@n$ are sorts
   1.218 -and $c$ is a class.  Each sort specifies a type argument; it has the form
   1.219 -$\{c@1,\ldots,c@m\}$, where $c@1$, \dots,~$c@m$ are classes.  Mostly we
   1.220 -deal with singleton sorts, and may abbreviate them by dropping the braces.
   1.221 -The arity declaration $list{::}(term)term$ is short for
   1.222 -$list{::}(\{term\})term$.
   1.223 +Type constructors can take arguments.  Each type constructor has an {\bf
   1.224 +  arity} with respect to classes~(\S\ref{polymorphic}).  A $k$-place type
   1.225 +constructor may have arities of the form $(s@1,\ldots,s@k)c$, where
    1.226 +$s@1,\ldots,s@k$ are sorts and $c$ is a class.  Each sort specifies a type
   1.227 +argument; it has the form $\{c@1,\ldots,c@m\}$, where $c@1$, \dots,~$c@m$
   1.228 +are classes.  Mostly we deal with singleton sorts, and may abbreviate them
   1.229 +by dropping the braces.  The arity $(term)term$ is short for
   1.230 +$(\{term\})term$.
   1.231  
   1.232  A type constructor may be overloaded (subject to certain conditions) by
   1.233  appearing in several arity declarations.  For instance, the built-in type
   1.234 @@ -494,20 +504,19 @@
   1.235  logic, it is declared also to have arity $(term,term)term$.
   1.236  
   1.237  Theory {\tt List} declares the 1-place type constructor $list$, gives
   1.238 -it arity $list{::}(term)term$, and declares constants $Nil$ and $Cons$ with
   1.239 +it arity $(term)term$, and declares constants $Nil$ and $Cons$ with
   1.240  polymorphic types:
   1.241  \index{examples!of theories}
   1.242 -\begin{ttbox} 
   1.243 +\begin{ttbox}
   1.244  List = FOL +
   1.245 -types   list 1
   1.246 +types   'a list
   1.247  arities list    :: (term)term
   1.248  consts  Nil     :: "'a list"
   1.249          Cons    :: "['a, 'a list] => 'a list" 
   1.250  end
   1.251  \end{ttbox}
   1.252 -Multiple type and arity declarations may be abbreviated to a single line:
   1.253 +Multiple arity declarations may be abbreviated to a single line:
   1.254  \begin{ttbox}
   1.255 -types   \(id@1\), \ldots, \(id@n\) \(k\)
   1.256  arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
   1.257  \end{ttbox}
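For instance, assuming that 0-place type constructors {\tt bool} and {\tt nat}
have already been declared, the single line
\begin{ttbox}
arities bool,nat    :: term
\end{ttbox}
abbreviates two separate arity declarations.  The theory {\tt BoolNat} below
uses the same abbreviation with class {\tt arith}.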
   1.258  
   1.259 @@ -520,7 +529,7 @@
   1.260  \subsection{Infixes and Mixfixes}
   1.261  \indexbold{infix operators}\index{examples!of theories}
   1.262  The constant declaration part of the theory
   1.263 -\begin{ttbox} 
   1.264 +\begin{ttbox}
   1.265  Gate2 = FOL +
   1.266  consts  "~&"     :: "[o,o] => o"         (infixl 35)
   1.267          "#"      :: "[o,o] => o"         (infixl 30)
   1.268 @@ -540,19 +549,19 @@
   1.269  
   1.270  \indexbold{mixfix operators}
   1.271  {\bf Mixfix} operators may have arbitrary context-free syntaxes.  For example
   1.272 -\begin{ttbox} 
   1.273 -    If :: "[o,o,o] => o"       ("if _ then _ else _")
   1.274 +\begin{ttbox}
   1.275 +        If :: "[o,o,o] => o"       ("if _ then _ else _")
   1.276  \end{ttbox}
   1.277 -declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax
   1.278 -$if~P~then~Q~else~R$ instead of $If(P,Q,R)$.  Underscores denote argument
   1.279 -positions.  Pretty-printing information can be specified in order to
   1.280 -improve the layout of formulae with mixfix operations.  For details, see
   1.281 -{\em Isabelle's Object-Logics}.
   1.282 +declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
   1.283 +  if~$P$ then~$Q$ else~$R$} instead of {\tt If($P$,$Q$,$R$)}.  Underscores
   1.284 +denote argument positions.  Pretty-printing information can be specified in
   1.285 +order to improve the layout of formulae with mixfix operations.  For
   1.286 +details, see {\em Isabelle's Object-Logics}.
   1.287  
   1.288  Mixfix declarations can be annotated with precedences, just like
   1.289  infixes.  The example above is just a shorthand for
   1.290 -\begin{ttbox} 
   1.291 -    If :: "[o,o,o] => o"       ("if _ then _ else _" [0,0,0] 1000)
   1.292 +\begin{ttbox}
   1.293 +        If :: "[o,o,o] => o"       ("if _ then _ else _" [0,0,0] 1000)
   1.294  \end{ttbox}
   1.295  The numeric components determine precedences.  The list of integers
   1.296  defines, for each argument position, the minimal precedence an expression
   1.297 @@ -561,15 +570,15 @@
   1.298  acceptable because precedences are non-negative, and conditionals may
   1.299  appear everywhere because 1000 is the highest precedence.  On the other
   1.300  hand,
   1.301 -\begin{ttbox} 
   1.302 -    If :: "[o,o,o] => o"       ("if _ then _ else _" [100,0,0] 99)
   1.303 +\begin{ttbox}
   1.304 +        If :: "[o,o,o] => o"       ("if _ then _ else _" [100,0,0] 99)
   1.305  \end{ttbox}
   1.306 -defines concrete syntax for a
   1.307 -conditional whose first argument cannot have the form $if~P~then~Q~else~R$
   1.308 -because it must have a precedence of at least~100.  Since expressions put in
   1.309 -parentheses have maximal precedence, we may of course write 
   1.310 -\begin{quote}
   1.311 -\it  if (if P then Q else R) then S else T
   1.312 +defines concrete syntax for a conditional whose first argument cannot have
   1.313 +the form {\tt if~$P$ then~$Q$ else~$R$} because it must have a precedence
   1.314 +of at least~100.  Since expressions put in parentheses have maximal
   1.315 +precedence, we may of course write
   1.316 +\begin{quote}\tt
   1.317 +if (if $P$ then $Q$ else $R$) then $S$ else $T$
   1.318  \end{quote}
   1.319  Conditional expressions can also be written using the constant {\tt If}.
   1.320  
   1.321 @@ -580,7 +589,7 @@
   1.322  \index{examples!of theories}
   1.323  \begin{ttbox}
   1.324  Prod = FOL +
   1.325 -types   "*" 2                                 (infixl 20)
   1.326 +types   ('a,'b) "*"                           (infixl 20)
   1.327  arities "*"     :: (term,term)term
   1.328  consts  fst     :: "'a * 'b => 'a"
   1.329          snd     :: "'a * 'b => 'b"
   1.330 @@ -637,9 +646,10 @@
   1.331  \index{examples!of theories}
   1.332  \begin{ttbox}
   1.333  BoolNat = Arith +
   1.334 -types   bool,nat    0
   1.335 +types   bool,nat
   1.336  arities bool,nat    :: arith
   1.337  consts  Suc         :: "nat=>nat"
   1.338 +\ttbreak
   1.339  rules   add0        "0 + n = n::nat"
   1.340          addS        "Suc(m)+n = Suc(m+n)"
   1.341          nat1        "1 = Suc(0)"
   1.342 @@ -653,7 +663,7 @@
   1.343  either type.  The type constraints in the axioms are vital.  Without
   1.344  constraints, the $x$ in $1+x = x$ would have type $\alpha{::}arith$
   1.345  and the axiom would hold for any type of class $arith$.  This would
   1.346 -collapse $nat$:
   1.347 +collapse $nat$ to a trivial type:
   1.348  \[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
   1.349  The class $arith$ as defined above is more specific than necessary.  Many
   1.350  types come with a binary operation and identity~(0).  On lists,
   1.351 @@ -670,10 +680,10 @@
   1.352  \subsection{Extending first-order logic with the natural numbers}
   1.353  \index{examples!of theories}
   1.354  
   1.355 -The early part of this paper defines a first-order logic, including a
   1.356 -type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.  Let us
   1.357 -introduce the Peano axioms for mathematical induction and the freeness of
   1.358 -$0$ and~$Suc$:
   1.359 +Section\ts\ref{sec:logical-syntax} has formalized a first-order logic,
   1.360 +including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
   1.361 +Let us introduce the Peano axioms for mathematical induction and the
   1.362 +freeness of $0$ and~$Suc$:
   1.363  \[ \vcenter{\infer[(induct)*]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
   1.364   \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
   1.365  \]
   1.366 @@ -730,7 +740,7 @@
   1.367  the 0-place type constructor $nat$ and the constants $rec$ and~$Suc$:
   1.368  \begin{ttbox}
   1.369  Nat = FOL +
   1.370 -types   nat 0
   1.371 +types   nat
   1.372  arities nat         :: term
   1.373  consts  "0"         :: "nat"    ("0")
   1.374          Suc         :: "nat=>nat"
   1.375 @@ -753,31 +763,31 @@
   1.376  File {\tt FOL/ex/nat.ML} contains proofs involving this theory of the
   1.377  natural numbers.  As a trivial example, let us derive recursion equations
   1.378  for \verb$+$.  Here is the zero case:
   1.379 -\begin{ttbox} 
   1.380 +\begin{ttbox}
   1.381  goalw Nat.thy [add_def] "0+n = n";
   1.382  {\out Level 0}
   1.383  {\out 0 + n = n}
   1.384 -{\out  1. rec(0,n,%x y. Suc(y)) = n}
   1.385 +{\out  1. rec(0,n,\%x y. Suc(y)) = n}
   1.386  \ttbreak
   1.387  by (resolve_tac [rec_0] 1);
   1.388  {\out Level 1}
   1.389  {\out 0 + n = n}
   1.390  {\out No subgoals!}
   1.391  val add_0 = result();
   1.392 -\end{ttbox} 
   1.393 +\end{ttbox}
   1.394  And here is the successor case:
   1.395 -\begin{ttbox} 
   1.396 +\begin{ttbox}
   1.397  goalw Nat.thy [add_def] "Suc(m)+n = Suc(m+n)";
   1.398  {\out Level 0}
   1.399  {\out Suc(m) + n = Suc(m + n)}
   1.400 -{\out  1. rec(Suc(m),n,%x y. Suc(y)) = Suc(rec(m,n,%x y. Suc(y)))}
   1.401 +{\out  1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
   1.402  \ttbreak
   1.403  by (resolve_tac [rec_Suc] 1);
   1.404  {\out Level 1}
   1.405  {\out Suc(m) + n = Suc(m + n)}
   1.406  {\out No subgoals!}
   1.407  val add_Suc = result();
   1.408 -\end{ttbox} 
   1.409 +\end{ttbox}
   1.410  The induction rule raises some complications, which are discussed next.
   1.411  \index{theories!defining|)}
   1.412  
   1.413 @@ -820,7 +830,7 @@
   1.414  Let us prove that no natural number~$k$ equals its own successor.  To
   1.415  use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
   1.416  instantiation for~$\Var{P}$.
   1.417 -\begin{ttbox} 
   1.418 +\begin{ttbox}
   1.419  goal Nat.thy "~ (Suc(k) = k)";
   1.420  {\out Level 0}
   1.421  {\out ~Suc(k) = k}
   1.422 @@ -831,13 +841,13 @@
   1.423  {\out ~Suc(k) = k}
   1.424  {\out  1. ~Suc(0) = 0}
   1.425  {\out  2. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
   1.426 -\end{ttbox} 
   1.427 +\end{ttbox}
   1.428  We should check that Isabelle has correctly applied induction.  Subgoal~1
   1.429  is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
   1.430  with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
   1.431  The rest of the proof demonstrates~\ttindex{notI}, \ttindex{notE} and the
   1.432  other rules of~\ttindex{Nat.thy}.  The base case holds by~\ttindex{Suc_neq_0}:
   1.433 -\begin{ttbox} 
   1.434 +\begin{ttbox}
   1.435  by (resolve_tac [notI] 1);
   1.436  {\out Level 2}
   1.437  {\out ~Suc(k) = k}
   1.438 @@ -848,10 +858,11 @@
   1.439  {\out Level 3}
   1.440  {\out ~Suc(k) = k}
   1.441  {\out  1. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
   1.442 -\end{ttbox} 
   1.443 +\end{ttbox}
   1.444  The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
   1.445 -Using the negation rule, we assume $Suc(Suc(x)) = Suc(x)$ and prove $Suc(x)=x$:
   1.446 -\begin{ttbox} 
   1.447 +Negation rules transform the subgoal into that of proving $Suc(x)=x$ from
   1.448 +$Suc(Suc(x)) = Suc(x)$:
   1.449 +\begin{ttbox}
   1.450  by (resolve_tac [notI] 1);
   1.451  {\out Level 4}
   1.452  {\out ~Suc(k) = k}
   1.453 @@ -866,7 +877,7 @@
   1.454  {\out Level 6}
   1.455  {\out ~Suc(k) = k}
   1.456  {\out No subgoals!}
   1.457 -\end{ttbox} 
   1.458 +\end{ttbox}
   1.459  
   1.460  
   1.461  \subsection{An example of ambiguity in {\tt resolve_tac}}
   1.462 @@ -875,7 +886,7 @@
   1.463  not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
   1.464  instantiation for~$(induct)$ to yield the desired next state.  With more
   1.465  complex formulae, our luck fails.  
   1.466 -\begin{ttbox} 
   1.467 +\begin{ttbox}
   1.468  goal Nat.thy "(k+m)+n = k+(m+n)";
   1.469  {\out Level 0}
   1.470  {\out k + m + n = k + (m + n)}
   1.471 @@ -886,59 +897,63 @@
   1.472  {\out k + m + n = k + (m + n)}
   1.473  {\out  1. k + m + n = 0}
   1.474  {\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
   1.475 -\end{ttbox} 
   1.476 -This proof requires induction on~$k$.  But the 0 in subgoal~1 indicates
   1.477 -that induction has been applied to the term~$k+(m+n)$.  The
   1.478 -\ttindex{back} command causes backtracking to an alternative
   1.479 -outcome of the tactic.  
   1.480 -\begin{ttbox} 
   1.481 +\end{ttbox}
   1.482 +This proof requires induction on~$k$.  The occurrence of~0 in subgoal~1
   1.483 +indicates that induction has been applied to the term~$k+(m+n)$; this
   1.484 +application is sound but will not lead to a proof here.  Fortunately,
   1.485 +Isabelle can (lazily!) generate all the valid applications of induction.
   1.486 +The \ttindex{back} command causes backtracking to an alternative outcome of
   1.487 +the tactic.
   1.488 +\begin{ttbox}
   1.489  back();
   1.490  {\out Level 1}
   1.491  {\out k + m + n = k + (m + n)}
   1.492  {\out  1. k + m + n = k + 0}
   1.493  {\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
   1.494 -\end{ttbox} 
   1.495 -Now induction has been applied to~$m+n$.  Let us call \ttindex{back}
   1.496 -again.
   1.497 -\begin{ttbox} 
   1.498 +\end{ttbox}
   1.499 +Now induction has been applied to~$m+n$.  This is equally useless.  Let us
   1.500 +call \ttindex{back} again.
   1.501 +\begin{ttbox}
   1.502  back();
   1.503  {\out Level 1}
   1.504  {\out k + m + n = k + (m + n)}
   1.505  {\out  1. k + m + 0 = k + (m + 0)}
   1.506 -{\out  2. !!x. k + m + x = k + (m + x) ==> k + m + Suc(x) = k + (m + Suc(x))}
   1.507 -\end{ttbox} 
   1.508 +{\out  2. !!x. k + m + x = k + (m + x) ==>}
   1.509 +{\out          k + m + Suc(x) = k + (m + Suc(x))}
   1.510 +\end{ttbox}
   1.511  Now induction has been applied to~$n$.  What is the next alternative?
   1.512 -\begin{ttbox} 
   1.513 +\begin{ttbox}
   1.514  back();
   1.515  {\out Level 1}
   1.516  {\out k + m + n = k + (m + n)}
   1.517  {\out  1. k + m + n = k + (m + 0)}
   1.518  {\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
   1.519 -\end{ttbox} 
   1.520 +\end{ttbox}
   1.521  Inspecting subgoal~1 reveals that induction has been applied to just the
   1.522  second occurrence of~$n$.  This perfectly legitimate induction is useless
   1.523  here.  The main goal admits fourteen different applications of induction.
   1.524  The number is exponential in the size of the formula.
   1.525  
   1.526  \subsection{Proving that addition is associative}
   1.527 -\index{associativity of addition}
   1.528 -\index{examples!of rewriting}
   1.529 -Let us do the proof properly, using~\ttindex{res_inst_tac}.  At the same
   1.530 -time, we shall have a glimpse at Isabelle's rewriting tactics, which are
   1.531 -described in the {\em Reference Manual}.
    1.532 +Let us invoke the induction rule properly,
   1.533 +using~\ttindex{res_inst_tac}.  At the same time, we shall have a glimpse at
   1.534 +Isabelle's rewriting tactics, which are described in the {\em Reference
   1.535 +  Manual}.
   1.536  
   1.537 -\index{rewriting!object-level} 
   1.538 +\index{rewriting!object-level}\index{examples!of rewriting} 
   1.539 +
    1.540  Isabelle's rewriting tactics repeatedly apply equations to a subgoal,
   1.541  simplifying or proving it.  For efficiency, the rewriting rules must be
   1.542 -packaged into a \bfindex{simplification set}.  Let us include the equations
   1.543 +packaged into a \bfindex{simplification set}, or {\bf simpset}.  We take
   1.544 +the standard simpset for first-order logic and insert the equations
   1.545  for~{\tt add} proved in the previous section, namely $0+n=n$ and ${\tt
   1.546 -  Suc}(m)+n={\tt Suc}(m+n)$: 
   1.547 -\begin{ttbox} 
   1.548 +  Suc}(m)+n={\tt Suc}(m+n)$:
   1.549 +\begin{ttbox}
   1.550  val add_ss = FOL_ss addrews [add_0, add_Suc];
   1.551 -\end{ttbox} 
   1.552 +\end{ttbox}
   1.553  We state the goal for associativity of addition, and
   1.554  use \ttindex{res_inst_tac} to invoke induction on~$k$:
   1.555 -\begin{ttbox} 
   1.556 +\begin{ttbox}
   1.557  goal Nat.thy "(k+m)+n = k+(m+n)";
   1.558  {\out Level 0}
   1.559  {\out k + m + n = k + (m + n)}
   1.560 @@ -948,34 +963,36 @@
   1.561  {\out Level 1}
   1.562  {\out k + m + n = k + (m + n)}
   1.563  {\out  1. 0 + m + n = 0 + (m + n)}
   1.564 -{\out  2. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
   1.565 -\end{ttbox} 
   1.566 +{\out  2. !!x. x + m + n = x + (m + n) ==>}
   1.567 +{\out          Suc(x) + m + n = Suc(x) + (m + n)}
   1.568 +\end{ttbox}
   1.569  The base case holds easily; both sides reduce to $m+n$.  The
   1.570  tactic~\ttindex{simp_tac} rewrites with respect to the given simplification
   1.571  set, applying the rewrite rules for~{\tt +}:
   1.572 -\begin{ttbox} 
   1.573 +\begin{ttbox}
   1.574  by (simp_tac add_ss 1);
   1.575  {\out Level 2}
   1.576  {\out k + m + n = k + (m + n)}
   1.577 -{\out  1. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
   1.578 -\end{ttbox} 
   1.579 +{\out  1. !!x. x + m + n = x + (m + n) ==>}
   1.580 +{\out          Suc(x) + m + n = Suc(x) + (m + n)}
   1.581 +\end{ttbox}
   1.582  The inductive step requires rewriting by the equations for~{\tt add}
    1.583  together with the induction hypothesis, which is also an equation.  The
   1.584  tactic~\ttindex{asm_simp_tac} rewrites using a simplification set and any
   1.585  useful assumptions:
   1.586 -\begin{ttbox} 
   1.587 +\begin{ttbox}
   1.588  by (asm_simp_tac add_ss 1);
   1.589  {\out Level 3}
   1.590  {\out k + m + n = k + (m + n)}
   1.591  {\out No subgoals!}
   1.592 -\end{ttbox} 
   1.593 +\end{ttbox}
   1.594  
   1.595  
   1.596 -\section{A {\sc Prolog} interpreter}
   1.597 +\section{A Prolog interpreter}
   1.598  \index{Prolog interpreter|bold}
   1.599 -To demonstrate the power of tacticals, let us construct a {\sc Prolog}
   1.600 +To demonstrate the power of tacticals, let us construct a Prolog
   1.601  interpreter and execute programs involving lists.\footnote{To run these
   1.602 -examples, see the file {\tt FOL/ex/prolog.ML}.} The {\sc Prolog} program
   1.603 +examples, see the file {\tt FOL/ex/prolog.ML}.} The Prolog program
   1.604  consists of a theory.  We declare a type constructor for lists, with an
   1.605  arity declaration to say that $(\tau)list$ is of class~$term$
   1.606  provided~$\tau$ is:
   1.607 @@ -984,7 +1001,7 @@
   1.608  \end{eqnarray*}
   1.609  We declare four constants: the empty list~$Nil$; the infix list
   1.610  constructor~{:}; the list concatenation predicate~$app$; the list reverse
   1.611 -predicate~$rev$.  (In {\sc Prolog}, functions on lists are expressed as
   1.612 +predicate~$rev$.  (In Prolog, functions on lists are expressed as
   1.613  predicates.)
   1.614  \begin{eqnarray*}
   1.615      Nil         & :: & \alpha list \\
   1.616 @@ -992,7 +1009,7 @@
   1.617      app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
   1.618      rev & :: & [\alpha list,\alpha list] \To o 
   1.619  \end{eqnarray*}
   1.620 -The predicate $app$ should satisfy the {\sc Prolog}-style rules
   1.621 +The predicate $app$ should satisfy the Prolog-style rules
   1.622  \[ {app(Nil,ys,ys)} \qquad
   1.623     {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
   1.624  We define the naive version of $rev$, which calls~$app$:
   1.625 @@ -1020,7 +1037,7 @@
   1.626  end
   1.627  \end{ttbox}
   1.628  \subsection{Simple executions}
   1.629 -Repeated application of the rules solves {\sc Prolog} goals.  Let us
   1.630 +Repeated application of the rules solves Prolog goals.  Let us
   1.631  append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
   1.632  answer builds up in~{\tt ?x}.
   1.633  \begin{ttbox}
   1.634 @@ -1052,7 +1069,7 @@
   1.635  {\out No subgoals!}
   1.636  \end{ttbox}
   1.637  
   1.638 -{\sc Prolog} can run functions backwards.  Which list can be appended
   1.639 +Prolog can run functions backwards.  Which list can be appended
   1.640  with $[c,d]$ to produce $[a,b,c,d]$?
   1.641  Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
   1.642  \begin{ttbox}
   1.643 @@ -1068,11 +1085,11 @@
   1.644  \end{ttbox}
   1.645  
   1.646  
   1.647 -\subsection{Backtracking}
   1.648 -\index{backtracking}
   1.649 +\subsection{Backtracking}\index{backtracking}
   1.650 +Prolog backtracking can handle questions that have multiple solutions.
   1.651  Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?
   1.652 -Using \ttindex{REPEAT} to apply the rules, we quickly find the solution
   1.653 -$x=[]$ and $y=[a,b,c,d]$:
   1.654 +Using \ttindex{REPEAT} to apply the rules, we quickly find the first
   1.655 +solution, namely $x=[]$ and $y=[a,b,c,d]$:
   1.656  \begin{ttbox}
   1.657  goal Prolog.thy "app(?x, ?y, a:b:c:d:Nil)";
   1.658  {\out Level 0}
   1.659 @@ -1084,8 +1101,8 @@
   1.660  {\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
   1.661  {\out No subgoals!}
   1.662  \end{ttbox}
   1.663 -The \ttindex{back} command returns the tactic's next outcome,
   1.664 -$x=[a]$ and $y=[b,c,d]$:
   1.665 +Isabelle can lazily generate all the possibilities.  The \ttindex{back}
   1.666 +command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
   1.667  \begin{ttbox}
   1.668  back();
   1.669  {\out Level 1}
   1.670 @@ -1121,7 +1138,9 @@
   1.671  goal Prolog.thy "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
   1.672  {\out Level 0}
   1.673  {\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
   1.674 -{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
   1.675 +{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
   1.676 +{\out         ?w)}
   1.677 +\ttbreak
   1.678  val rules = [appNil,appCons,revNil,revCons];
   1.679  \ttbreak
   1.680  by (REPEAT (resolve_tac rules 1));
   1.681 @@ -1168,8 +1187,8 @@
   1.682  val prolog_tac = DEPTH_FIRST (has_fewer_prems 1) 
   1.683                               (resolve_tac rules 1);
   1.684  \end{ttbox}
   1.685 -Since {\sc Prolog} uses depth-first search, this tactic is a (slow!) {\sc
   1.686 -Prolog} interpreter.  We return to the start of the proof (using
   1.687 +Since Prolog uses depth-first search, this tactic is a (slow!) 
   1.688 +Prolog interpreter.  We return to the start of the proof (using
   1.689  \ttindex{choplev}), and apply {\tt prolog_tac}:
   1.690  \begin{ttbox}
   1.691  choplev 0;
   1.692 @@ -1194,6 +1213,6 @@
   1.693  {\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
   1.694  {\out No subgoals!}
   1.695  \end{ttbox}
   1.696 -Although Isabelle is much slower than a {\sc Prolog} system, Isabelle
   1.697 +Although Isabelle is much slower than a Prolog system, Isabelle
   1.698  tactics can exploit logic programming techniques.  
   1.699  
     2.1 --- a/doc-src/Logics/CTT.tex	Thu Mar 17 17:48:37 1994 +0100
     2.2 +++ b/doc-src/Logics/CTT.tex	Sat Mar 19 03:01:25 1994 +0100
     2.3 @@ -159,7 +159,7 @@
     2.4  
     2.5  
     2.6  \section{Syntax}
     2.7 -The constants are shown in Figure~\ref{ctt-constants}.  The infixes include
     2.8 +The constants are shown in Fig.\ts\ref{ctt-constants}.  The infixes include
     2.9  the function application operator (sometimes called `apply'), and the
    2.10  2-place type operators.  Note that meta-level abstraction and application,
    2.11  $\lambda x.b$ and $f(a)$, differ from object-level abstraction and
    2.12 @@ -170,10 +170,10 @@
    2.13  \indexbold{*F}\indexbold{*T}\indexbold{*SUM}\indexbold{*PROD}
    2.14  The empty type is called $F$ and the one-element type is $T$; other finite
    2.15  sets are built as $T+T+T$, etc.  The notation for~{\CTT}
    2.16 -(Figure~\ref{ctt-syntax}) is based on that of Nordstr\"om et
    2.17 +(Fig.\ts\ref{ctt-syntax}) is based on that of Nordstr\"om et
    2.18  al.~\cite{nordstrom90}.  We can write
    2.19  \begin{ttbox}
    2.20 -SUM y:B. PROD x:A. C(x,y)   {\rm for}   Sum(B, %y. Prod(A, %x. C(x,y)))
    2.21 +SUM y:B. PROD x:A. C(x,y)   {\rm for}   Sum(B, \%y. Prod(A, \%x. C(x,y)))
    2.22  \end{ttbox}
    2.23  The special cases as \hbox{\tt$A$*$B$} and \hbox{\tt$A$-->$B$} abbreviate
    2.24  general sums and products over a constant family.\footnote{Unlike normal
    2.25 @@ -222,20 +222,20 @@
    2.26  
    2.27  \idx{NE}        [| p: N;  a: C(0);  
    2.28               !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
    2.29 -          |] ==> rec(p, a, %u v.b(u,v)) : C(p)
    2.30 +          |] ==> rec(p, a, \%u v.b(u,v)) : C(p)
    2.31  
    2.32  \idx{NEL}       [| p = q : N;  a = c : C(0);  
    2.33               !!u v. [| u: N; v: C(u) |] ==> b(u,v)=d(u,v): C(succ(u))
    2.34 -          |] ==> rec(p, a, %u v.b(u,v)) = rec(q,c,d) : C(p)
    2.35 +          |] ==> rec(p, a, \%u v.b(u,v)) = rec(q,c,d) : C(p)
    2.36  
    2.37  \idx{NC0}       [| a: C(0);  
    2.38               !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u))
    2.39 -          |] ==> rec(0, a, %u v.b(u,v)) = a : C(0)
    2.40 +          |] ==> rec(0, a, \%u v.b(u,v)) = a : C(0)
    2.41  
    2.42  \idx{NC_succ}   [| p: N;  a: C(0);  
    2.43               !!u v. [| u: N; v: C(u) |] ==> b(u,v): C(succ(u)) 
    2.44 -          |] ==> rec(succ(p), a, %u v.b(u,v)) =
    2.45 -                 b(p, rec(p, a, %u v.b(u,v))) : C(succ(p))
    2.46 +          |] ==> rec(succ(p), a, \%u v.b(u,v)) =
    2.47 +                 b(p, rec(p, a, \%u v.b(u,v))) : C(succ(p))
    2.48  
    2.49  \idx{zero_ne_succ}      [| a: N;  0 = succ(a) : N |] ==> 0: F
    2.50  \end{ttbox}
    2.51 @@ -277,18 +277,18 @@
    2.52  
    2.53  \idx{SumE}      [| p: SUM x:A.B(x);  
    2.54               !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>) 
    2.55 -          |] ==> split(p, %x y.c(x,y)) : C(p)
    2.56 +          |] ==> split(p, \%x y.c(x,y)) : C(p)
    2.57  
    2.58  \idx{SumEL}     [| p=q : SUM x:A.B(x); 
    2.59               !!x y. [| x:A; y:B(x) |] ==> c(x,y)=d(x,y): C(<x,y>)
    2.60 -          |] ==> split(p, %x y.c(x,y)) = split(q, %x y.d(x,y)) : C(p)
    2.61 +          |] ==> split(p, \%x y.c(x,y)) = split(q, \%x y.d(x,y)) : C(p)
    2.62  
    2.63  \idx{SumC}      [| a: A;  b: B(a);
    2.64               !!x y. [| x:A; y:B(x) |] ==> c(x,y): C(<x,y>)
    2.65 -          |] ==> split(<a,b>, %x y.c(x,y)) = c(a,b) : C(<a,b>)
    2.66 +          |] ==> split(<a,b>, \%x y.c(x,y)) = c(a,b) : C(<a,b>)
    2.67  
    2.68 -\idx{fst_def}   fst(a) == split(a, %x y.x)
    2.69 -\idx{snd_def}   snd(a) == split(a, %x y.y)
    2.70 +\idx{fst_def}   fst(a) == split(a, \%x y.x)
    2.71 +\idx{snd_def}   snd(a) == split(a, \%x y.y)
    2.72  \end{ttbox}
    2.73  \caption{Rules for the sum type $\sum@{x\in A}B[x]$} \label{ctt-sum}
    2.74  \end{figure}
    2.75 @@ -308,23 +308,23 @@
    2.76  \idx{PlusE}     [| p: A+B;
    2.77               !!x. x:A ==> c(x): C(inl(x));  
    2.78               !!y. y:B ==> d(y): C(inr(y))
    2.79 -          |] ==> when(p, %x.c(x), %y.d(y)) : C(p)
    2.80 +          |] ==> when(p, \%x.c(x), \%y.d(y)) : C(p)
    2.81  
    2.82  \idx{PlusEL}    [| p = q : A+B;
    2.83               !!x. x: A ==> c(x) = e(x) : C(inl(x));   
    2.84               !!y. y: B ==> d(y) = f(y) : C(inr(y))
    2.85 -          |] ==> when(p, %x.c(x), %y.d(y)) = 
    2.86 -                 when(q, %x.e(x), %y.f(y)) : C(p)
    2.87 +          |] ==> when(p, \%x.c(x), \%y.d(y)) = 
    2.88 +                 when(q, \%x.e(x), \%y.f(y)) : C(p)
    2.89  
    2.90  \idx{PlusC_inl} [| a: A;
    2.91               !!x. x:A ==> c(x): C(inl(x));  
    2.92               !!y. y:B ==> d(y): C(inr(y))
    2.93 -          |] ==> when(inl(a), %x.c(x), %y.d(y)) = c(a) : C(inl(a))
    2.94 +          |] ==> when(inl(a), \%x.c(x), \%y.d(y)) = c(a) : C(inl(a))
    2.95  
    2.96  \idx{PlusC_inr} [| b: B;
    2.97               !!x. x:A ==> c(x): C(inl(x));  
    2.98               !!y. y:B ==> d(y): C(inr(y))
    2.99 -          |] ==> when(inr(b), %x.c(x), %y.d(y)) = d(b) : C(inr(b))
   2.100 +          |] ==> when(inr(b), \%x.c(x), \%y.d(y)) = d(b) : C(inr(b))
   2.101  \end{ttbox}
   2.102  \caption{Rules for the binary sum type $A+B$} \label{ctt-plus}
   2.103  \end{figure}
   2.104 @@ -406,7 +406,7 @@
   2.105  note that the only rule that makes use of {\tt Reduce} is \ttindex{trans_red},
   2.106  whose first premise ensures that $a$ and $b$ (and thus $c$) are well-typed.
   2.107  
   2.108 -Derived rules are shown in Figure~\ref{ctt-derived}.  The rule
   2.109 +Derived rules are shown in Fig.\ts\ref{ctt-derived}.  The rule
   2.110  \ttindex{subst_prodE} is derived from \ttindex{prodE}, and is easier to
   2.111  use in backwards proof.  The rules \ttindex{SumE_fst} and
   2.112  \ttindex{SumE_snd} express the typing of~\ttindex{fst} and~\ttindex{snd};
   2.113 @@ -589,16 +589,16 @@
   2.114  
   2.115  \begin{figure} 
   2.116  \begin{ttbox}
   2.117 -\idx{add_def}           a#+b  == rec(a, b, %u v.succ(v))  
   2.118 -\idx{diff_def}          a-b   == rec(b, a, %u v.rec(v, 0, %x y.x))  
   2.119 +\idx{add_def}           a#+b  == rec(a, b, \%u v.succ(v))  
   2.120 +\idx{diff_def}          a-b   == rec(b, a, \%u v.rec(v, 0, \%x y.x))  
   2.121  \idx{absdiff_def}       a|-|b == (a-b) #+ (b-a)  
   2.122 -\idx{mult_def}          a#*b  == rec(a, 0, %u v. b #+ v)  
   2.123 +\idx{mult_def}          a#*b  == rec(a, 0, \%u v. b #+ v)  
   2.124  
   2.125  \idx{mod_def}   a mod b == rec(a, 0,
   2.126 -                      %u v. rec(succ(v) |-| b, 0, %x y.succ(v)))  
   2.127 +                      \%u v. rec(succ(v) |-| b, 0, \%x y.succ(v)))  
   2.128  
   2.129  \idx{div_def}   a div b == rec(a, 0,
   2.130 -                      %u v. rec(succ(u) mod b, succ(v), %x y.v))
   2.131 +                      \%u v. rec(succ(u) mod b, succ(v), \%x y.v))
   2.132  \end{ttbox}
   2.133  \subcaption{Definitions of the operators}
   2.134  
   2.135 @@ -643,7 +643,7 @@
   2.136  Figure~\ref{ctt-arith} presents the definitions and some of the key
   2.137  theorems, including commutative, distributive, and associative laws.  The
   2.138  theory has the {\ML} identifier \ttindexbold{arith.thy}.  All proofs are on
   2.139 -the file \ttindexbold{CTT/arith.ML}.
   2.140 +the file {\tt CTT/arith.ML}.
   2.141  
   2.142  The operators~\verb'#+', \verb'-', \verb'|-|', \verb'#*', \verb'mod'
   2.143  and~\verb'div' stand for sum, difference, absolute difference, product,
   2.144 @@ -665,17 +665,17 @@
   2.145  \section{The examples directory}
   2.146  This directory contains examples and experimental proofs in {\CTT}.
   2.147  \begin{description}
   2.148 -\item[\ttindexbold{CTT/ex/typechk.ML}]
   2.149 +\item[{\tt CTT/ex/typechk.ML}]
   2.150  contains simple examples of type checking and type deduction.
   2.151  
   2.152 -\item[\ttindexbold{CTT/ex/elim.ML}]
   2.153 +\item[{\tt CTT/ex/elim.ML}]
   2.154  contains some examples from Martin-L\"of~\cite{martinlof84}, proved using 
   2.155  {\tt pc_tac}.
   2.156  
   2.157 -\item[\ttindexbold{CTT/ex/equal.ML}]
   2.158 +\item[{\tt CTT/ex/equal.ML}]
   2.159  contains simple examples of rewriting.
   2.160  
   2.161 -\item[\ttindexbold{CTT/ex/synth.ML}]
   2.162 +\item[{\tt CTT/ex/synth.ML}]
   2.163  demonstrates the use of unknowns with some trivial examples of program
   2.164  synthesis. 
   2.165  \end{description}
   2.166 @@ -688,10 +688,10 @@
   2.167  unknown, takes shape in the course of the proof.  Our example is the
   2.168  predecessor function on the natural numbers.
   2.169  \begin{ttbox}
   2.170 -goal CTT.thy "lam n. rec(n, 0, %x y.x) : ?A";
   2.171 +goal CTT.thy "lam n. rec(n, 0, \%x y.x) : ?A";
   2.172  {\out Level 0}
   2.173 -{\out lam n. rec(n,0,%x y. x) : ?A}
   2.174 -{\out  1. lam n. rec(n,0,%x y. x) : ?A}
   2.175 +{\out lam n. rec(n,0,\%x y. x) : ?A}
   2.176 +{\out  1. lam n. rec(n,0,\%x y. x) : ?A}
   2.177  \end{ttbox}
   2.178  Since the term is a Constructive Type Theory $\lambda$-abstraction (not to
   2.179  be confused with a meta-level abstraction), we apply the rule
   2.180 @@ -700,47 +700,58 @@
   2.181  \begin{ttbox}
   2.182  by (resolve_tac [ProdI] 1);
   2.183  {\out Level 1}
   2.184 -{\out lam n. rec(n,0,%x y. x) : PROD x:?A1. ?B1(x)}
   2.185 +{\out lam n. rec(n,0,\%x y. x) : PROD x:?A1. ?B1(x)}
   2.186  {\out  1. ?A1 type}
   2.187 -{\out  2. !!n. n : ?A1 ==> rec(n,0,%x y. x) : ?B1(n)}
   2.188 +{\out  2. !!n. n : ?A1 ==> rec(n,0,\%x y. x) : ?B1(n)}
   2.189  \end{ttbox}
   2.190 -Subgoal~1 can be solved by instantiating~$\Var{A@1}$ to any type, but this
   2.191 -could invalidate subgoal~2.  We therefore tackle the latter subgoal.  It
   2.192 -asks the type of a term beginning with {\tt rec}, which can be found by
   2.193 -$N$-elimination.\index{*NE}
   2.194 +Subgoal~1 is too flexible.  It can be solved by instantiating~$\Var{A@1}$
   2.195 +to any type, but most instantiations will invalidate subgoal~2.  We
   2.196 +therefore tackle the latter subgoal.  It asks the type of a term beginning
   2.197 +with {\tt rec}, which can be found by $N$-elimination.\index{*NE}
   2.198  \begin{ttbox}
   2.199  by (eresolve_tac [NE] 2);
   2.200  {\out Level 2}
   2.201 -{\out lam n. rec(n,0,%x y. x) : PROD x:N. ?C2(x,x)}
   2.202 +{\out lam n. rec(n,0,\%x y. x) : PROD x:N. ?C2(x,x)}
   2.203  {\out  1. N type}
   2.204  {\out  2. !!n. 0 : ?C2(n,0)}
   2.205  {\out  3. !!n x y. [| x : N; y : ?C2(n,x) |] ==> x : ?C2(n,succ(x))}
   2.206  \end{ttbox}
   2.207 -We now know~$\Var{A@1}$ is the type of natural numbers.  However, let us
   2.208 -continue with subgoal~2.  What is the type of~0?\index{*NIO}
   2.209 +Subgoal~1 is no longer flexible: we now know~$\Var{A@1}$ is the type of
   2.210 +natural numbers.  However, let us continue proving nontrivial subgoals.
   2.211 +Subgoal~2 asks, what is the type of~0?\index{*NIO}
   2.212  \begin{ttbox}
   2.213  by (resolve_tac [NI0] 2);
   2.214  {\out Level 3}
   2.215 -{\out lam n. rec(n,0,%x y. x) : N --> N}
   2.216 +{\out lam n. rec(n,0,\%x y. x) : N --> N}
   2.217  {\out  1. N type}
   2.218  {\out  2. !!n x y. [| x : N; y : N |] ==> x : N}
   2.219  \end{ttbox}
   2.220 -The type~$\Var{A}$ is now determined.  It is $\prod@{n\in N}N$, which is
   2.221 -equivalent to $N\to N$.  But we must prove all the subgoals to show that
   2.222 -the original term is validly typed.  Subgoal~2 is provable by assumption
   2.223 -and the remaining subgoal falls by $N$-formation.\index{*NF}
   2.224 +The type~$\Var{A}$ is now fully determined.  It is the product type
    2.225 +$\prod@{x\in N}N$, which is equivalent to the function type $N\to N$ because
   2.226 +there is no dependence on~$x$.  But we must prove all the subgoals to show
   2.227 +that the original term is validly typed.  Subgoal~2 is provable by
   2.228 +assumption and the remaining subgoal falls by $N$-formation.\index{*NF}
   2.229  \begin{ttbox}
   2.230  by (assume_tac 2);
   2.231  {\out Level 4}
   2.232 -{\out lam n. rec(n,0,%x y. x) : N --> N}
   2.233 +{\out lam n. rec(n,0,\%x y. x) : N --> N}
   2.234  {\out  1. N type}
   2.235 +\ttbreak
   2.236  by (resolve_tac [NF] 1);
   2.237  {\out Level 5}
   2.238 -{\out lam n. rec(n,0,%x y. x) : N --> N}
   2.239 +{\out lam n. rec(n,0,\%x y. x) : N --> N}
   2.240  {\out No subgoals!}
   2.241  \end{ttbox}
   2.242  Calling \ttindex{typechk_tac} can prove this theorem in one step.
   2.243  
   2.244 +Even if the original term is ill-typed, one can infer a type for it, but
   2.245 +unprovable subgoals will be left.  As an exercise, try to prove the
   2.246 +following invalid goal:
   2.247 +\begin{ttbox}
   2.248 +goal CTT.thy "lam n. rec(n, 0, \%x y.tt) : ?A";
   2.249 +\end{ttbox}
   2.250 +
   2.251 +
   2.252  
   2.253  \section{An example of logical reasoning}
   2.254  Logical reasoning in Type Theory involves proving a goal of the form
   2.255 @@ -786,8 +797,9 @@
   2.256  rather than the assumption of a goal, it cannot be found by
   2.257  \ttindex{eresolve_tac}.  We could insert it by calling
   2.258  \hbox{\tt \ttindex{cut_facts_tac} prems 1}.   Instead, let us resolve the
   2.259 -$\Sigma$-elimination rule with the premises; this yields one result, which
   2.260 -we supply to \ttindex{resolve_tac}.\index{*SumE}\index{*RL}
   2.261 +$\Sigma$-elimination rule with the premises using~{\tt RL}; this forward
   2.262 +inference yields one result, which we supply to
   2.263 +\ttindex{resolve_tac}.\index{*SumE}\index{*RL} 
   2.264  \begin{ttbox}
   2.265  by (resolve_tac (prems RL [SumE]) 1);
   2.266  {\out Level 1}
   2.267 @@ -796,15 +808,15 @@
   2.268  {\out        [| x : A; y : B(x) + C(x) |] ==>}
   2.269  {\out        ?c1(x,y) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.270  \end{ttbox}
   2.271 -The subgoal has two new parameters.  In the main goal, $\Var{a}$ has been
   2.272 -instantiated with a \ttindex{split} term.  The assumption $y\in B(x) + C(x)$ is
   2.273 -eliminated next, causing a case split and a new parameter.  The main goal
   2.274 -now contains~\ttindex{when}.
   2.275 +The subgoal has two new parameters, $x$ and~$y$.  In the main goal,
   2.276 +$\Var{a}$ has been instantiated with a \ttindex{split} term.  The
   2.277 +assumption $y\in B(x) + C(x)$ is eliminated next, causing a case split and
   2.278 +a further parameter,~$xa$.  It also inserts~\ttindex{when} into the main goal.
   2.279  \index{*PlusE}
   2.280  \begin{ttbox}
   2.281  by (eresolve_tac [PlusE] 1);
   2.282  {\out Level 2}
   2.283 -{\out split(p,%x y. when(y,?c2(x,y),?d2(x,y)))}
   2.284 +{\out split(p,\%x y. when(y,?c2(x,y),?d2(x,y)))}
   2.285  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.286  {\out  1. !!x y xa.}
   2.287  {\out        [| x : A; xa : B(x) |] ==>}
   2.288 @@ -822,7 +834,7 @@
   2.289  \begin{ttbox}
   2.290  by (resolve_tac [PlusI_inl] 1);
   2.291  {\out Level 3}
   2.292 -{\out split(p,%x y. when(y,%xa. inl(?a3(x,y,xa)),?d2(x,y)))}
   2.293 +{\out split(p,\%x y. when(y,\%xa. inl(?a3(x,y,xa)),?d2(x,y)))}
   2.294  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.295  {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a3(x,y,xa) : SUM x:A. B(x)}
   2.296  {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
   2.297 @@ -838,7 +850,7 @@
   2.298  \begin{ttbox}
   2.299  by (resolve_tac [SumI] 1);
   2.300  {\out Level 4}
   2.301 -{\out split(p,%x y. when(y,%xa. inl(<?a4(x,y,xa),?b4(x,y,xa)>),?d2(x,y)))}
   2.302 +{\out split(p,\%x y. when(y,\%xa. inl(<?a4(x,y,xa),?b4(x,y,xa)>),?d2(x,y)))}
   2.303  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.304  {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?a4(x,y,xa) : A}
   2.305  {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(?a4(x,y,xa))}
   2.306 @@ -852,16 +864,18 @@
   2.307  \begin{ttbox}
   2.308  by (assume_tac 1);
   2.309  {\out Level 5}
   2.310 -{\out split(p,%x y. when(y,%xa. inl(<x,?b4(x,y,xa)>),?d2(x,y)))}
   2.311 +{\out split(p,\%x y. when(y,\%xa. inl(<x,?b4(x,y,xa)>),?d2(x,y)))}
   2.312  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.313 +\ttbreak
   2.314  {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> ?b4(x,y,xa) : B(x)}
   2.315  {\out  2. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
   2.316  {\out  3. !!x y ya.}
   2.317  {\out        [| x : A; ya : C(x) |] ==>}
   2.318  {\out        ?d2(x,y,ya) : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.319 +\ttbreak
   2.320  by (assume_tac 1);
   2.321  {\out Level 6}
   2.322 -{\out split(p,%x y. when(y,%xa. inl(<x,xa>),?d2(x,y)))}
   2.323 +{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
   2.324  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.325  {\out  1. !!x y xa. [| x : A; xa : B(x) |] ==> SUM x:A. C(x) type}
   2.326  {\out  2. !!x y ya.}
   2.327 @@ -873,7 +887,7 @@
   2.328  \begin{ttbox}
   2.329  by (typechk_tac prems);
   2.330  {\out Level 7}
   2.331 -{\out split(p,%x y. when(y,%xa. inl(<x,xa>),?d2(x,y)))}
   2.332 +{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),?d2(x,y)))}
   2.333  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.334  {\out  1. !!x y ya.}
   2.335  {\out        [| x : A; ya : C(x) |] ==>}
   2.336 @@ -884,7 +898,7 @@
   2.337  \begin{ttbox}
   2.338  by (pc_tac prems 1);
   2.339  {\out Level 8}
   2.340 -{\out split(p,%x y. when(y,%xa. inl(<x,xa>),%y. inr(<x,y>)))}
   2.341 +{\out split(p,\%x y. when(y,\%xa. inl(<x,xa>),\%y. inr(<x,y>)))}
   2.342  {\out : (SUM x:A. B(x)) + (SUM x:A. C(x))}
   2.343  {\out No subgoals!}
   2.344  \end{ttbox}
   2.345 @@ -895,18 +909,27 @@
   2.346  \section{Example: deriving a currying functional}
   2.347  In simply-typed languages such as {\ML}, a currying functional has the type 
   2.348  \[ (A\times B \to C) \to (A\to (B\to C)). \]
   2.349 -Let us generalize this to~$\Sigma$ and~$\Pi$.  The argument of the
   2.350 -functional is a function that maps $z:\Sigma(A,B)$ to~$C(z)$; the resulting
   2.351 -function maps $x\in A$ and $y\in B(x)$ to $C(\langle x,y\rangle)$.  Here
   2.352 -$B$ is a family over~$A$, while $C$ is a family over $\Sigma(A,B)$.
   2.353 +Let us generalize this to~$\Sigma$ and~$\Pi$.  
   2.354 +The functional takes a function~$f$ that maps $z:\Sigma(A,B)$
   2.355 +to~$C(z)$; the resulting function maps $x\in A$ and $y\in B(x)$ to
   2.356 +$C(\langle x,y\rangle)$.
   2.357 +
   2.358 +Formally, there are three typing premises.  $A$ is a type; $B$ is an
   2.359 +$A$-indexed family of types; $C$ is a family of types indexed by
   2.360 +$\Sigma(A,B)$.  The goal is expressed using \hbox{\tt PROD f} to ensure
   2.361 +that the parameter corresponding to the functional's argument is really
   2.362 +called~$f$; Isabelle echoes the type using \verb|-->| because there is no
   2.363 +explicit dependence upon~$f$.
   2.364  \begin{ttbox}
   2.365  val prems = goal CTT.thy
   2.366 -    "[| A type; !!x. x:A ==> B(x) type;                    \ttback
   2.367 -\ttback               !!z. z: (SUM x:A. B(x)) ==> C(z) type |]   \ttback
   2.368 -\ttback    ==> ?a : (PROD z : (SUM x:A . B(x)) . C(z))           \ttback
   2.369 -\ttback         --> (PROD x:A . PROD y:B(x) . C(<x,y>))";
   2.370 +    "[| A type; !!x. x:A ==> B(x) type;                           \ttback
   2.371 +\ttback               !!z. z: (SUM x:A. B(x)) ==> C(z) type             \ttback
   2.372 +\ttback    |] ==> ?a : PROD f: (PROD z : (SUM x:A . B(x)) . C(z)).      \ttback
   2.373 +\ttback                     (PROD x:A . PROD y:B(x) . C(<x,y>))";
   2.374 +\ttbreak
   2.375  {\out Level 0}
   2.376 -{\out ?a : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
   2.377 +{\out ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
   2.378 +{\out      (PROD x:A. PROD y:B(x). C(<x,y>))}
   2.379  {\out  1. ?a : (PROD z:SUM x:A. B(x). C(z)) -->}
   2.380  {\out          (PROD x:A. PROD y:B(x). C(<x,y>))}
   2.381  \ttbreak
   2.382 @@ -915,27 +938,32 @@
   2.383  {\out              "?z : SUM x:A. B(x) ==> C(?z) type}
   2.384  {\out               [!!z. z : SUM x:A. B(x) ==> C(z) type]"] : thm list}
   2.385  \end{ttbox}
   2.386 -This is an opportunity to demonstrate \ttindex{intr_tac}.  Here, the tactic
   2.387 -repeatedly applies $\Pi$-introduction, automatically proving the rather
   2.388 -tiresome typing conditions.  Note that $\Var{a}$ becomes instantiated to
   2.389 -three nested $\lambda$-abstractions.
   2.390 +This is a chance to demonstrate \ttindex{intr_tac}.  Here, the tactic
   2.391 +repeatedly applies $\Pi$-introduction, proving the rather
   2.392 +tiresome typing conditions.  
   2.393 +
   2.394 +Note that $\Var{a}$ becomes instantiated to three nested
   2.395 +$\lambda$-abstractions.  It would be easier to read if the bound variable
   2.396 +names agreed with the parameters in the subgoal.  Isabelle attempts to give
   2.397 +parameters the same names as corresponding bound variables in the goal, but
   2.398 +this does not always work.  In any event, the goal is logically correct.
   2.399  \begin{ttbox}
   2.400  by (intr_tac prems);
   2.401  {\out Level 1}
   2.402  {\out lam x xa xb. ?b7(x,xa,xb)}
   2.403  {\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
   2.404 -{\out  1. !!uu x y.}
   2.405 -{\out        [| uu : PROD z:SUM x:A. B(x). C(z); x : A; y : B(x) |] ==>}
   2.406 -{\out        ?b7(uu,x,y) : C(<x,y>)}
   2.407 +{\out  1. !!f x y.}
   2.408 +{\out        [| f : PROD z:SUM x:A. B(x). C(z); x : A; y : B(x) |] ==>}
   2.409 +{\out        ?b7(f,x,y) : C(<x,y>)}
   2.410  \end{ttbox}
   2.411 -Using $\Pi$-elimination, we solve subgoal~1 by applying the function~$uu$.
   2.412 +Using $\Pi$-elimination, we solve subgoal~1 by applying the function~$f$.
   2.413  \index{*ProdE}
   2.414  \begin{ttbox}
   2.415  by (eresolve_tac [ProdE] 1);
   2.416  {\out Level 2}
   2.417  {\out lam x xa xb. x ` <xa,xb>}
   2.418  {\out : (PROD z:SUM x:A. B(x). C(z)) --> (PROD x:A. PROD y:B(x). C(<x,y>))}
   2.419 -{\out  1. !!uu x y. [| x : A; y : B(x) |] ==> <x,y> : SUM x:A. B(x)}
   2.420 +{\out  1. !!f x y. [| x : A; y : B(x) |] ==> <x,y> : SUM x:A. B(x)}
   2.421  \end{ttbox}
   2.422  Finally, we exhibit a suitable argument for the function application.  This
   2.423  is straightforward using introduction rules.
   2.424 @@ -970,14 +998,14 @@
   2.425  But a completely formal proof is hard to find.  Many of the rules can be
   2.426  applied in a multiplicity of ways, yielding a large number of higher-order
   2.427  unifiers.  The proof can get bogged down in the details.  But with a
   2.428 -careful selection of derived rules (recall Figure~\ref{ctt-derived}) and
   2.429 +careful selection of derived rules (recall Fig.\ts\ref{ctt-derived}) and
   2.430  the type checking tactics, we can prove the theorem in nine steps.
   2.431  \begin{ttbox}
   2.432  val prems = goal CTT.thy
   2.433 -    "[| A type;  !!x. x:A ==> B(x) type;              \ttback
   2.434 -\ttback       !!x y.[| x:A;  y:B(x) |] ==> C(x,y) type      \ttback
   2.435 -\ttback    |] ==> ?a :    (PROD x:A. SUM y:B(x). C(x,y))    \ttback
   2.436 -\ttback               --> (SUM f: (PROD x:A. B(x)). PROD x:A. C(x, f`x))";
   2.437 +    "[| A type;  !!x. x:A ==> B(x) type;                    \ttback
   2.438 +\ttback       !!x y.[| x:A;  y:B(x) |] ==> C(x,y) type            \ttback
   2.439 +\ttback    |] ==> ?a : PROD h: (PROD x:A. SUM y:B(x). C(x,y)).    \ttback
   2.440 +\ttback                     (SUM f: (PROD x:A. B(x)). PROD x:A. C(x, f`x))";
   2.441  {\out Level 0}
   2.442  {\out ?a : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.443  {\out      (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.444 @@ -1000,18 +1028,19 @@
   2.445  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.446  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.447  \ttbreak
   2.448 -{\out  1. !!uu x.}
   2.449 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.450 -{\out        ?b7(uu,x) : B(x)}
   2.451 +{\out  1. !!h x.}
   2.452 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.453 +{\out        ?b7(h,x) : B(x)}
   2.454  \ttbreak
   2.455 -{\out  2. !!uu x.}
   2.456 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.457 -{\out        ?b8(uu,x) : C(x,(lam x. ?b7(uu,x)) ` x)}
   2.458 +{\out  2. !!h x.}
   2.459 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.460 +{\out        ?b8(h,x) : C(x,(lam x. ?b7(h,x)) ` x)}
   2.461  \end{ttbox}
   2.462  Subgoal~1 asks to find the choice function itself, taking $x\in A$ to some
   2.463 -$\Var{b@7}(uu,x)\in B(x)$.  Subgoal~2 asks, given $x\in A$, for a proof
   2.464 -object $\Var{b@8}(uu,x)$ to witness that the choice function's argument
   2.465 -and result lie in the relation~$C$.  
   2.466 +$\Var{b@7}(h,x)\in B(x)$.  Subgoal~2 asks, given $x\in A$, for a proof
   2.467 +object $\Var{b@8}(h,x)$ to witness that the choice function's argument and
   2.468 +result lie in the relation~$C$.  This latter task will take up most of the
   2.469 +proof.
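The first step below attacks subgoal~1 by combining two rules with {\tt RS}.
Stated approximately (the exact forms appear earlier in the chapter and in
Fig.\ts\ref{ctt-derived}), they are
\begin{ttbox}
ProdE      [| p : PROD x:A. B(x);  a : A |] ==> p`a : B(a)
SumE_fst   p : SUM x:A. B(x) ==> fst(p) : A
\end{ttbox}
Resolving {\tt ProdE} with {\tt SumE_fst} gives a rule for taking the first
component of a function's value, which is exactly the shape of the choice
function we must exhibit.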
   2.470  \index{*ProdE}\index{*SumE_fst}\index{*RS}
   2.471  \begin{ttbox}
   2.472  by (eresolve_tac [ProdE RS SumE_fst] 1);
   2.473 @@ -1020,27 +1049,26 @@
   2.474  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.475  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.476  \ttbreak
   2.477 -{\out  1. !!uu x. x : A ==> x : A}
   2.478 -{\out  2. !!uu x.}
   2.479 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.480 -{\out        ?b8(uu,x) : C(x,(lam x. fst(uu ` x)) ` x)}
   2.481 +{\out  1. !!h x. x : A ==> x : A}
   2.482 +{\out  2. !!h x.}
   2.483 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.484 +{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
   2.485  \end{ttbox}
   2.486 -Above, we have composed \ttindex{fst} with the function~$h$ (named~$uu$ in
   2.487 -the assumptions).  Unification has deduced that the function must be
   2.488 -applied to $x\in A$.
   2.489 +Above, we have composed \ttindex{fst} with the function~$h$.  Unification
   2.490 +has deduced that the function must be applied to $x\in A$.
   2.491  \begin{ttbox}
   2.492  by (assume_tac 1);
   2.493  {\out Level 3}
   2.494  {\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
   2.495  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.496  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.497 -{\out  1. !!uu x.}
   2.498 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.499 -{\out        ?b8(uu,x) : C(x,(lam x. fst(uu ` x)) ` x)}
   2.500 +{\out  1. !!h x.}
   2.501 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.502 +{\out        ?b8(h,x) : C(x,(lam x. fst(h ` x)) ` x)}
   2.503  \end{ttbox}
   2.504  Before we can compose \ttindex{snd} with~$h$, the arguments of $C$ must be
   2.505  simplified.  The derived rule \ttindex{replace_type} lets us replace a type
   2.506 -by any equivalent type:
   2.507 +by any equivalent type, shown below as the schematic term $\Var{A@{13}}(h,x)$:
   2.508  \begin{ttbox}
   2.509  by (resolve_tac [replace_type] 1);
   2.510  {\out Level 4}
   2.511 @@ -1048,13 +1076,13 @@
   2.512  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.513  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.514  \ttbreak
   2.515 -{\out  1. !!uu x.}
   2.516 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.517 -{\out        C(x,(lam x. fst(uu ` x)) ` x) = ?A13(uu,x)}
   2.518 +{\out  1. !!h x.}
   2.519 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.520 +{\out        C(x,(lam x. fst(h ` x)) ` x) = ?A13(h,x)}
   2.521  \ttbreak
   2.522 -{\out  2. !!uu x.}
   2.523 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.524 -{\out        ?b8(uu,x) : ?A13(uu,x)}
   2.525 +{\out  2. !!h x.}
   2.526 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.527 +{\out        ?b8(h,x) : ?A13(h,x)}
   2.528  \end{ttbox}
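For reference, {\tt replace_type} is roughly the derived rule
\begin{ttbox}
replace_type    [| B = A;  a : A |] ==> a : B
\end{ttbox}
Resolving with it has produced the two subgoals above: one equating the
original type with the schematic $\Var{A@{13}}(h,x)$, and one placing
$\Var{b@8}(h,x)$ in that schematic type.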
   2.529  The derived rule \ttindex{subst_eqtyparg} lets us simplify a type's
   2.530  argument (by currying, $C(x)$ is a unary type operator):
   2.531 @@ -1065,21 +1093,22 @@
   2.532  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.533  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.534  \ttbreak
   2.535 -{\out  1. !!uu x.}
   2.536 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.537 -{\out        (lam x. fst(uu ` x)) ` x = ?c14(uu,x) : ?A14(uu,x)}
   2.538 +{\out  1. !!h x.}
   2.539 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.540 +{\out        (lam x. fst(h ` x)) ` x = ?c14(h,x) : ?A14(h,x)}
   2.541  \ttbreak
   2.542 -{\out  2. !!uu x z.}
   2.543 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.544 -{\out           z : ?A14(uu,x) |] ==>}
   2.545 +{\out  2. !!h x z.}
   2.546 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.547 +{\out           z : ?A14(h,x) |] ==>}
   2.548  {\out        C(x,z) type}
   2.549  \ttbreak
   2.550 -{\out  3. !!uu x.}
   2.551 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.552 -{\out        ?b8(uu,x) : C(x,?c14(uu,x))}
   2.553 +{\out  3. !!h x.}
   2.554 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.555 +{\out        ?b8(h,x) : C(x,?c14(h,x))}
   2.556  \end{ttbox}
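The statement of {\tt subst_eqtyparg} is approximately
\begin{ttbox}
subst_eqtyparg    [| a = c : A;  !!z. z:A ==> B(z) type |] ==> B(a) = B(c)
\end{ttbox}
Here the unary operator $B$ becomes $C(x)$, while $a$ and $c$ become the
redex and the schematic term $\Var{c@{14}}(h,x)$; the three subgoals above
arise from its premises and conclusion.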
   2.557 -The rule \ttindex{ProdC} is simply $\beta$-reduction.  The term
   2.558 -$\Var{c@{14}}(uu,x)$ receives the simplified form, $f{\tt`}x$.
   2.559 +Subgoal~1 requires nothing more than $\beta$-contraction, which is the rule
   2.560 +\ttindex{ProdC}.  The term $\Var{c@{14}}(h,x)$ in the last subgoal
   2.561 +receives the contracted result.
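In Isabelle's syntax, {\tt ProdC} reads roughly
\begin{ttbox}
ProdC    [| a : A;  !!x. x:A ==> b(x) : B(x) |] ==>
         (lam x. b(x)) ` a = b(a) : B(a)
\end{ttbox}
Its conclusion equates redex and contractum, so resolving with it forces
$\Var{c@{14}}(h,x)$ to become ${\tt fst}(h{\tt`}x)$, as the last subgoal
below confirms.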
   2.562  \begin{ttbox}
   2.563  by (resolve_tac [ProdC] 1);
   2.564  {\out Level 6}
   2.565 @@ -1087,35 +1116,37 @@
   2.566  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.567  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.568  \ttbreak
   2.569 -{\out  1. !!uu x.}
   2.570 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==> x : ?A15(uu,x)}
   2.571 +{\out  1. !!h x.}
   2.572 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.573 +{\out        x : ?A15(h,x)}
   2.574  \ttbreak
   2.575 -{\out  2. !!uu x xa.}
   2.576 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.577 -{\out           xa : ?A15(uu,x) |] ==>}
   2.578 -{\out        fst(uu ` xa) : ?B15(uu,x,xa)}
   2.579 +{\out  2. !!h x xa.}
   2.580 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.581 +{\out           xa : ?A15(h,x) |] ==>}
   2.582 +{\out        fst(h ` xa) : ?B15(h,x,xa)}
   2.583  \ttbreak
   2.584 -{\out  3. !!uu x z.}
   2.585 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.586 -{\out           z : ?B15(uu,x,x) |] ==>}
   2.587 +{\out  3. !!h x z.}
   2.588 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A;}
   2.589 +{\out           z : ?B15(h,x,x) |] ==>}
   2.590  {\out        C(x,z) type}
   2.591  \ttbreak
   2.592 -{\out  4. !!uu x.}
   2.593 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.594 -{\out        ?b8(uu,x) : C(x,fst(uu ` x))}
   2.595 +{\out  4. !!h x.}
   2.596 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.597 +{\out        ?b8(h,x) : C(x,fst(h ` x))}
   2.598  \end{ttbox}
   2.599  Routine type checking goals proliferate in Constructive Type Theory, but
   2.600  \ttindex{typechk_tac} quickly solves them.  Note the inclusion of
   2.601 -\ttindex{SumE_fst}.
   2.602 +\ttindex{SumE_fst} along with the premises.
   2.603  \begin{ttbox}
   2.604  by (typechk_tac (SumE_fst::prems));
   2.605  {\out Level 7}
   2.606  {\out lam x. <lam xa. fst(x ` xa),lam xa. ?b8(x,xa)>}
   2.607  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.608  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.609 -{\out  1. !!uu x.}
   2.610 -{\out        [| uu : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.611 -{\out        ?b8(uu,x) : C(x,fst(uu ` x))}
   2.612 +\ttbreak
   2.613 +{\out  1. !!h x.}
   2.614 +{\out        [| h : PROD x:A. SUM y:B(x). C(x,y); x : A |] ==>}
   2.615 +{\out        ?b8(h,x) : C(x,fst(h ` x))}
   2.616  \end{ttbox}
   2.617  We are finally ready to compose \ttindex{snd} with~$h$.
   2.618  \index{*ProdE}\index{*SumE_snd}\index{*RS}
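The derived rule {\tt SumE_snd} (recall Fig.\ts\ref{ctt-derived}) is roughly
\begin{ttbox}
SumE_snd   [| p : SUM x:A. B(x);  A type;  !!x. x:A ==> B(x) type |] ==>
           snd(p) : B(fst(p))
\end{ttbox}
Composed with {\tt ProdE} via {\tt RS}, it applies~$h$ and takes the second
component of the result; the subgoals that remain are just its typing
premises.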
   2.619 @@ -1125,9 +1156,10 @@
   2.620  {\out lam x. <lam xa. fst(x ` xa),lam xa. snd(x ` xa)>}
   2.621  {\out : (PROD x:A. SUM y:B(x). C(x,y)) -->}
   2.622  {\out   (SUM f:PROD x:A. B(x). PROD x:A. C(x,f ` x))}
   2.623 -{\out  1. !!uu x. x : A ==> x : A}
   2.624 -{\out  2. !!uu x. x : A ==> B(x) type}
   2.625 -{\out  3. !!uu x xa. [| x : A; xa : B(x) |] ==> C(x,xa) type}
   2.626 +\ttbreak
   2.627 +{\out  1. !!h x. x : A ==> x : A}
   2.628 +{\out  2. !!h x. x : A ==> B(x) type}
   2.629 +{\out  3. !!h x xa. [| x : A; xa : B(x) |] ==> C(x,xa) type}
   2.630  \end{ttbox}
   2.631  The proof object has reached its final form.  We call \ttindex{typechk_tac}
   2.632  to finish the type checking.