     1 %% $Id$
     2 
     3 \part{Foundations} 
     4 The following sections discuss Isabelle's logical foundations in detail:
     5 representing logical syntax in the typed $\lambda$-calculus; expressing
     6 inference rules in Isabelle's meta-logic; combining rules by resolution.
     7 
     8 If you wish to use Isabelle immediately, please turn to
     9 page~\pageref{chap:getting}.  You can always read about foundations later,
    10 either by returning to this point or by looking up particular items in the
    11 index.
    12 
    13 \begin{figure} 
    14 \begin{eqnarray*}
    15   \neg P   & \hbox{abbreviates} & P\imp\bot \\
    16   P\bimp Q & \hbox{abbreviates} & (P\imp Q) \conj (Q\imp P)
    17 \end{eqnarray*}
    18 \vskip 4ex
    19 
    20 \(\begin{array}{c@{\qquad\qquad}c}
    21   \infer[({\conj}I)]{P\conj Q}{P & Q}  &
    22   \infer[({\conj}E1)]{P}{P\conj Q} \qquad 
    23   \infer[({\conj}E2)]{Q}{P\conj Q} \\[4ex]
    24 
    25   \infer[({\disj}I1)]{P\disj Q}{P} \qquad 
    26   \infer[({\disj}I2)]{P\disj Q}{Q} &
    27   \infer[({\disj}E)]{R}{P\disj Q & \infer*{R}{[P]} & \infer*{R}{[Q]}}\\[4ex]
    28 
    29   \infer[({\imp}I)]{P\imp Q}{\infer*{Q}{[P]}} &
    30   \infer[({\imp}E)]{Q}{P\imp Q & P}  \\[4ex]
    31 
    32   &
    33   \infer[({\bot}E)]{P}{\bot}\\[4ex]
    34 
    35   \infer[({\forall}I)*]{\forall x.P}{P} &
    36   \infer[({\forall}E)]{P[t/x]}{\forall x.P} \\[3ex]
    37 
    38   \infer[({\exists}I)]{\exists x.P}{P[t/x]} &
    39   \infer[({\exists}E)*]{Q}{{\exists x.P} & \infer*{Q}{[P]} } \\[3ex]
    40 
    41   {t=t} \,(refl)   &  \vcenter{\infer[(subst)]{P[u/x]}{t=u & P[t/x]}} 
    42 \end{array} \)
    43 
    44 \bigskip\bigskip
    45 *{\em Eigenvariable conditions\/}:
    46 
    47 $\forall I$: provided $x$ is not free in the assumptions
    48 
    49 $\exists E$: provided $x$ is not free in $Q$ or any assumption except $P$
    50 \caption{Intuitionistic first-order logic} \label{fol-fig}
    51 \end{figure}
    52 
    53 \section{Formalizing logical syntax in Isabelle}\label{sec:logical-syntax}
    54 \index{first-order logic}
    55 
    56 Figure~\ref{fol-fig} presents intuitionistic first-order logic,
    57 including equality.  Let us see how to formalize
    58 this logic in Isabelle, illustrating the main features of Isabelle's
    59 polymorphic meta-logic.
    60 
    61 \index{lambda calc@$\lambda$-calculus} 
    62 Isabelle represents syntax using the simply typed $\lambda$-calculus.  We
    63 declare a type for each syntactic category of the logic.  We declare a
    64 constant for each symbol of the logic, giving each $n$-place operation an
    65 $n$-argument curried function type.  Most importantly,
    66 $\lambda$-abstraction represents variable binding in quantifiers.
    67 
    68 \index{types!syntax of}\index{types!function}\index{*fun type} 
    69 \index{type constructors}
    70 Isabelle has \ML-style polymorphic types such as~$(\alpha)list$, where
    71 $list$ is a type constructor and $\alpha$ is a type variable; for example,
    72 $(bool)list$ is the type of lists of booleans.  Function types have the
    73 form $(\sigma,\tau)fun$ or $\sigma\To\tau$, where $\sigma$ and $\tau$ are
    74 types.  Curried function types may be abbreviated:
    75 \[  \sigma@1\To (\cdots \sigma@n\To \tau\cdots)  \quad \hbox{as} \quad
    76 [\sigma@1, \ldots, \sigma@n] \To \tau \]
    77  
    78 \index{terms!syntax of} The syntax for terms is summarised below.
    79 Note that there are two versions of function application syntax
    80 available in Isabelle: either $t\,u$, which is the usual form for
higher-order languages, or $t(u)$, which is closer to first-order
notation.  The latter syntax is used throughout the manual.
    83 \[ 
    84 \index{lambda abs@$\lambda$-abstractions}\index{function applications}
    85 \begin{array}{ll}
    86   t :: \tau   & \hbox{type constraint, on a term or bound variable} \\
    87   \lambda x.t   & \hbox{abstraction} \\
    88   \lambda x@1\ldots x@n.t
    89         & \hbox{curried abstraction, $\lambda x@1. \ldots \lambda x@n.t$} \\
    90   t(u)          & \hbox{application} \\
    91   t (u@1, \ldots, u@n) & \hbox{curried application, $t(u@1)\ldots(u@n)$} 
    92 \end{array}
    93 \]
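
For example, under these conventions
\[ (\lambda x\,y.\,x)(t,u) \quad\hbox{abbreviates}\quad
   ((\lambda x.\lambda y.\,x)(t))(u), \]
which reduces to~$t$ by $\beta$-conversion.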
    94 
    95 
    96 \subsection{Simple types and constants}\index{types!simple|bold} 
    97 
    98 The syntactic categories of our logic (Fig.\ts\ref{fol-fig}) are {\bf
    99   formulae} and {\bf terms}.  Formulae denote truth values, so (following
   100 tradition) let us call their type~$o$.  To allow~0 and~$Suc(t)$ as terms,
   101 let us declare a type~$nat$ of natural numbers.  Later, we shall see
   102 how to admit terms of other types.
   103 
   104 \index{constants}\index{*nat type}\index{*o type}
   105 After declaring the types~$o$ and~$nat$, we may declare constants for the
   106 symbols of our logic.  Since $\bot$ denotes a truth value (falsity) and 0
   107 denotes a number, we put \begin{eqnarray*}
   108   \bot  & :: & o \\
   109   0     & :: & nat.
   110 \end{eqnarray*}
   111 If a symbol requires operands, the corresponding constant must have a
   112 function type.  In our logic, the successor function
   113 ($Suc$) is from natural numbers to natural numbers, negation ($\neg$) is a
   114 function from truth values to truth values, and the binary connectives are
   115 curried functions taking two truth values as arguments: 
   116 \begin{eqnarray*}
   117   Suc    & :: & nat\To nat  \\
   118   {\neg} & :: & o\To o      \\
   119   \conj,\disj,\imp,\bimp  & :: & [o,o]\To o 
   120 \end{eqnarray*}
   121 The binary connectives can be declared as infixes, with appropriate
   122 precedences, so that we write $P\conj Q\disj R$ instead of
   123 $\disj(\conj(P,Q), R)$.
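
Purely as an illustration --- Isabelle does {\em not\/} represent object-syntax
by datatypes of its implementation language --- these declarations can be
transcribed into Standard ML, with one ML type for each syntactic category and
one constructor for each symbol (the ML names below are ours; the quantifiers
are omitted because they require $\lambda$-abstraction, treated shortly):
\begin{verbatim}
(* Illustrative sketch only: the types o and nat and the constants
   declared above, rendered as ML declarations.  Each n-place symbol
   becomes a constructor taking n arguments.                         *)
datatype nat  = Zero                  (* 0   :: nat            *)
              | Suc of nat            (* Suc :: nat => nat     *)
datatype form = Falsum                (* bot :: o              *)
              | Not  of form          (* neg :: o => o         *)
              | Conj of form * form   (* &   :: [o,o] => o     *)
              | Disj of form * form   (* |   :: [o,o] => o     *)
              | Imp  of form * form   (* --> :: [o,o] => o     *)
              | Iff  of form * form   (* <-> :: [o,o] => o     *)
\end{verbatim}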
   124 
   125 Section~\ref{sec:defining-theories} below describes the syntax of Isabelle
   126 theory files and illustrates it by extending our logic with mathematical
   127 induction.
   128 
   129 
   130 \subsection{Polymorphic types and constants} \label{polymorphic}
   131 \index{types!polymorphic|bold}
   132 \index{equality!polymorphic}
   133 \index{constants!polymorphic}
   134 
   135 Which type should we assign to the equality symbol?  If we tried
   136 $[nat,nat]\To o$, then equality would be restricted to the natural
   137 numbers; we should have to declare different equality symbols for each
   138 type.  Isabelle's type system is polymorphic, so we could declare
   139 \begin{eqnarray*}
   140   {=}  & :: & [\alpha,\alpha]\To o,
   141 \end{eqnarray*}
   142 where the type variable~$\alpha$ ranges over all types.
   143 But this is also wrong.  The declaration is too polymorphic; $\alpha$
   144 includes types like~$o$ and $nat\To nat$.  Thus, it admits
   145 $\bot=\neg(\bot)$ and $Suc=Suc$ as formulae, which is acceptable in
   146 higher-order logic but not in first-order logic.
   147 
   148 Isabelle's {\bf type classes}\index{classes} control
   149 polymorphism~\cite{nipkow-prehofer}.  Each type variable belongs to a
   150 class, which denotes a set of types.  Classes are partially ordered by the
   151 subclass relation, which is essentially the subset relation on the sets of
   152 types.  They closely resemble the classes of the functional language
   153 Haskell~\cite{haskell-tutorial,haskell-report}.
   154 
   155 \index{*logic class}\index{*term class}
   156 Isabelle provides the built-in class $logic$, which consists of the logical
   157 types: the ones we want to reason about.  Let us declare a class $term$, to
   158 consist of all legal types of terms in our logic.  The subclass structure
   159 is now $term\le logic$.
   160 
   161 \index{*nat type}
   162 We put $nat$ in class $term$ by declaring $nat{::}term$.  We declare the
   163 equality constant by
   164 \begin{eqnarray*}
   165   {=}  & :: & [\alpha{::}term,\alpha]\To o 
   166 \end{eqnarray*}
   167 where $\alpha{::}term$ constrains the type variable~$\alpha$ to class
   168 $term$.  Such type variables resemble Standard~\ML's equality type
   169 variables.
   170 
   171 We give~$o$ and function types the class $logic$ rather than~$term$, since
   172 they are not legal types for terms.  We may introduce new types of class
   173 $term$ --- for instance, type $string$ or $real$ --- at any time.  We can
   174 even declare type constructors such as~$list$, and state that type
   175 $(\tau)list$ belongs to class~$term$ provided $\tau$ does; equality
   176 applies to lists of natural numbers but not to lists of formulae.  We may
   177 summarize this paragraph by a set of {\bf arity declarations} for type
   178 constructors:\index{arities!declaring}
   179 \begin{eqnarray*}\index{*o type}\index{*fun type}
   180   o             & :: & logic \\
   181   fun           & :: & (logic,logic)logic \\
   182   nat, string, real     & :: & term \\
   183   list          & :: & (term)term
   184 \end{eqnarray*}
   185 (Recall that $fun$ is the type constructor for function types.)
   186 In \rmindex{higher-order logic}, equality does apply to truth values and
   187 functions;  this requires the arity declarations ${o::term}$
   188 and ${fun::(term,term)term}$.  The class system can also handle
   189 overloading.\index{overloading|bold} We could declare $arith$ to be the
   190 subclass of $term$ consisting of the `arithmetic' types, such as~$nat$.
   191 Then we could declare the operators
   192 \begin{eqnarray*}
   193   {+},{-},{\times},{/}  & :: & [\alpha{::}arith,\alpha]\To \alpha
   194 \end{eqnarray*}
   195 If we declare new types $real$ and $complex$ of class $arith$, then we
   196 in effect have three sets of operators:
   197 \begin{eqnarray*}
   198   {+},{-},{\times},{/}  & :: & [nat,nat]\To nat \\
   199   {+},{-},{\times},{/}  & :: & [real,real]\To real \\
   200   {+},{-},{\times},{/}  & :: & [complex,complex]\To complex 
   201 \end{eqnarray*}
   202 Isabelle will regard these as distinct constants, each of which can be defined
   203 separately.  We could even introduce the type $(\alpha)vector$ and declare
   204 its arity as $(arith)arith$.  Then we could declare the constant
   205 \begin{eqnarray*}
   206   {+}  & :: & [(\alpha)vector,(\alpha)vector]\To (\alpha)vector 
   207 \end{eqnarray*}
   208 and specify it in terms of ${+} :: [\alpha,\alpha]\To \alpha$.
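
For instance, if the elements of $(\alpha)vector$ were built by pairing --- an
assumption made here purely for illustration --- the specification could take
the componentwise form
\[ (x@1,x@2)+(y@1,y@2) \equiv (x@1+y@1,\; x@2+y@2), \]
where the $+$ on the left is the vector operation and those on the right have
type $[\alpha,\alpha]\To\alpha$.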
   209 
   210 A type variable may belong to any finite number of classes.  Suppose that
   211 we had declared yet another class $ord \le term$, the class of all
   212 `ordered' types, and a constant
   213 \begin{eqnarray*}
   214   {\le}  & :: & [\alpha{::}ord,\alpha]\To o.
   215 \end{eqnarray*}
   216 In this context the variable $x$ in $x \le (x+x)$ will be assigned type
   217 $\alpha{::}\{arith,ord\}$, which means $\alpha$ belongs to both $arith$ and
   218 $ord$.  Semantically the set $\{arith,ord\}$ should be understood as the
   219 intersection of the sets of types represented by $arith$ and $ord$.  Such
   220 intersections of classes are called \bfindex{sorts}.  The empty
   221 intersection of classes, $\{\}$, contains all types and is thus the {\bf
   222   universal sort}.
   223 
   224 Even with overloading, each term has a unique, most general type.  For this
   225 to be possible, the class and type declarations must satisfy certain
   226 technical constraints; see 
   227 \iflabelundefined{sec:ref-defining-theories}%
   228            {Sect.\ Defining Theories in the {\em Reference Manual}}%
   229            {\S\ref{sec:ref-defining-theories}}.
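
For example, the most general type of $\lambda x.\,x=x$ is
$\alpha{::}term\To o$; particular instances, such as $nat\To o$, arise by
instantiating~$\alpha$.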
   230 
   231 
   232 \subsection{Higher types and quantifiers}
   233 \index{types!higher|bold}\index{quantifiers}
   234 Quantifiers are regarded as operations upon functions.  Ignoring polymorphism
   235 for the moment, consider the formula $\forall x. P(x)$, where $x$ ranges
   236 over type~$nat$.  This is true if $P(x)$ is true for all~$x$.  Abstracting
   237 $P(x)$ into a function, this is the same as saying that $\lambda x.P(x)$
   238 returns true for all arguments.  Thus, the universal quantifier can be
   239 represented by a constant
   240 \begin{eqnarray*}
   241   \forall  & :: & (nat\To o) \To o,
   242 \end{eqnarray*}
   243 which is essentially an infinitary truth table.  The representation of $\forall
   244 x. P(x)$ is $\forall(\lambda x. P(x))$.  
   245 
   246 The existential quantifier is treated
   247 in the same way.  Other binding operators are also easily handled; for
   248 instance, the summation operator $\Sigma@{k=i}^j f(k)$ can be represented as
   249 $\Sigma(i,j,\lambda k.f(k))$, where
   250 \begin{eqnarray*}
   251   \Sigma  & :: & [nat,nat, nat\To nat] \To nat.
   252 \end{eqnarray*}
   253 Quantifiers may be polymorphic.  We may define $\forall$ and~$\exists$ over
   254 all legal types of terms, not just the natural numbers, and
   255 allow summations over all arithmetic types:
   256 \begin{eqnarray*}
   257    \forall,\exists      & :: & (\alpha{::}term\To o) \To o \\
   258    \Sigma               & :: & [nat,nat, nat\To \alpha{::}arith] \To \alpha
   259 \end{eqnarray*}
   260 Observe that the index variables still have type $nat$, while the values
   261 being summed may belong to any arithmetic type.
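
For example, the nested formula $\forall x.\exists y.\,x=y$ is represented by
the term
\[ \forall(\lambda x.\,\exists(\lambda y.\,x=y)), \]
in which each quantifier is applied to a $\lambda$-abstraction.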
   262 
   263 
   264 \section{Formalizing logical rules in Isabelle}
   265 \index{meta-implication|bold}
   266 \index{meta-quantifiers|bold}
   267 \index{meta-equality|bold}
   268 
   269 Object-logics are formalized by extending Isabelle's
   270 meta-logic~\cite{paulson-found}, which is intuitionistic higher-order logic.
   271 The meta-level connectives are {\bf implication}, the {\bf universal
   272   quantifier}, and {\bf equality}.
   273 \begin{itemize}
   274   \item The implication \(\phi\Imp \psi\) means `\(\phi\) implies
   275 \(\psi\)', and expresses logical {\bf entailment}.  
   276 
   277   \item The quantification \(\Forall x.\phi\) means `\(\phi\) is true for
   278 all $x$', and expresses {\bf generality} in rules and axiom schemes. 
   279 
   280 \item The equality \(a\equiv b\) means `$a$ equals $b$', for expressing
   281   {\bf definitions} (see~\S\ref{definitions}).\index{definitions}
  Equalities left over from the unification process, so-called {\bf
   283     flex-flex constraints},\index{flex-flex constraints} are written $a\qeq
   284   b$.  The two equality symbols have the same logical meaning.
   285 
   286 \end{itemize}
   287 The syntax of the meta-logic is formalized in the same manner
   288 as object-logics, using the simply typed $\lambda$-calculus.  Analogous to
   289 type~$o$ above, there is a built-in type $prop$ of meta-level truth values.
   290 Meta-level formulae will have this type.  Type $prop$ belongs to
   291 class~$logic$; also, $\sigma\To\tau$ belongs to $logic$ provided $\sigma$
   292 and $\tau$ do.  Here are the types of the built-in connectives:
   293 \begin{eqnarray*}\index{*prop type}\index{*logic class}
   294   \Imp     & :: & [prop,prop]\To prop \\
   295   \Forall  & :: & (\alpha{::}logic\To prop) \To prop \\
   296   {\equiv} & :: & [\alpha{::}\{\},\alpha]\To prop \\
   297   \qeq & :: & [\alpha{::}\{\},\alpha]\To prop
   298 \end{eqnarray*}
   299 The polymorphism in $\Forall$ is restricted to class~$logic$ to exclude
   300 certain types, those used just for parsing.  The type variable
   301 $\alpha{::}\{\}$ ranges over the universal sort.
   302 
   303 In our formalization of first-order logic, we declared a type~$o$ of
   304 object-level truth values, rather than using~$prop$ for this purpose.  If
   305 we declared the object-level connectives to have types such as
   306 ${\neg}::prop\To prop$, then these connectives would be applicable to
   307 meta-level formulae.  Keeping $prop$ and $o$ as separate types maintains
   308 the distinction between the meta-level and the object-level.  To formalize
   309 the inference rules, we shall need to relate the two levels; accordingly,
   310 we declare the constant
   311 \index{*Trueprop constant}
   312 \begin{eqnarray*}
   313   Trueprop & :: & o\To prop.
   314 \end{eqnarray*}
   315 We may regard $Trueprop$ as a meta-level predicate, reading $Trueprop(P)$ as
   316 `$P$ is true at the object-level.'  Put another way, $Trueprop$ is a coercion
   317 from $o$ to $prop$.
   318 
   319 
   320 \subsection{Expressing propositional rules}
   321 \index{rules!propositional}
   322 We shall illustrate the use of the meta-logic by formalizing the rules of
   323 Fig.\ts\ref{fol-fig}.  Each object-level rule is expressed as a meta-level
   324 axiom. 
   325 
   326 One of the simplest rules is $(\conj E1)$.  Making
   327 everything explicit, its formalization in the meta-logic is
   328 $$
   329 \Forall P\;Q. Trueprop(P\conj Q) \Imp Trueprop(P).   \eqno(\conj E1)
   330 $$
   331 This may look formidable, but it has an obvious reading: for all object-level
   332 truth values $P$ and~$Q$, if $P\conj Q$ is true then so is~$P$.  The
   333 reading is correct because the meta-logic has simple models, where
   334 types denote sets and $\Forall$ really means `for all.'
   335 
   336 \index{*Trueprop constant}
   337 Isabelle adopts notational conventions to ease the writing of rules.  We may
   338 hide the occurrences of $Trueprop$ by making it an implicit coercion.
   339 Outer universal quantifiers may be dropped.  Finally, the nested implication
   340 \index{meta-implication}
   341 \[  \phi@1\Imp(\cdots \phi@n\Imp\psi\cdots) \]
   342 may be abbreviated as $\List{\phi@1; \ldots; \phi@n} \Imp \psi$, which
   343 formalizes a rule of $n$~premises.
   344 
   345 Using these conventions, the conjunction rules become the following axioms.
   346 These fully specify the properties of~$\conj$:
   347 $$ \List{P; Q} \Imp P\conj Q                 \eqno(\conj I) $$
   348 $$ P\conj Q \Imp P  \qquad  P\conj Q \Imp Q  \eqno(\conj E1,2) $$
   349 
   350 \noindent
Next, consider the disjunction rules.  The discharge of assumptions in
   352 $(\disj E)$ is expressed  using $\Imp$:
   353 \index{assumptions!discharge of}%
   354 $$ P \Imp P\disj Q  \qquad  Q \Imp P\disj Q  \eqno(\disj I1,2) $$
   355 $$ \List{P\disj Q; P\Imp R; Q\Imp R} \Imp R  \eqno(\disj E) $$
   356 %
   357 To understand this treatment of assumptions in natural
   358 deduction, look at implication.  The rule $({\imp}I)$ is the classic
   359 example of natural deduction: to prove that $P\imp Q$ is true, assume $P$
   360 is true and show that $Q$ must then be true.  More concisely, if $P$
   361 implies $Q$ (at the meta-level), then $P\imp Q$ is true (at the
   362 object-level).  Showing the coercion explicitly, this is formalized as
   363 \[ (Trueprop(P)\Imp Trueprop(Q)) \Imp Trueprop(P\imp Q). \]
   364 The rule $({\imp}E)$ is straightforward; hiding $Trueprop$, the axioms to
   365 specify $\imp$ are 
   366 $$ (P \Imp Q)  \Imp  P\imp Q   \eqno({\imp}I) $$
   367 $$ \List{P\imp Q; P}  \Imp Q.  \eqno({\imp}E) $$
   368 
   369 \noindent
   370 Finally, the intuitionistic contradiction rule is formalized as the axiom
   371 $$ \bot \Imp P.   \eqno(\bot E) $$
   372 
   373 \begin{warn}
   374 Earlier versions of Isabelle, and certain
   375 papers~\cite{paulson-found,paulson700}, use $\List{P}$ to mean $Trueprop(P)$.
   376 \end{warn}
   377 
   378 \subsection{Quantifier rules and substitution}
   379 \index{quantifiers}\index{rules!quantifier}\index{substitution|bold}
   380 \index{variables!bound}\index{lambda abs@$\lambda$-abstractions}
   381 \index{function applications}
   382 
   383 Isabelle expresses variable binding using $\lambda$-abstraction; for instance,
   384 $\forall x.P$ is formalized as $\forall(\lambda x.P)$.  Recall that $F(t)$
   385 is Isabelle's syntax for application of the function~$F$ to the argument~$t$;
   386 it is not a meta-notation for substitution.  On the other hand, a substitution
   387 will take place if $F$ has the form $\lambda x.P$;  Isabelle transforms
   388 $(\lambda x.P)(t)$ to~$P[t/x]$ by $\beta$-conversion.  Thus, we can express
   389 inference rules that involve substitution for bound variables.
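
For example, applying the abstraction $\lambda x.\,x=x$ to the argument
$Suc(0)$ gives
\[ (\lambda x.\,x=x)(Suc(0)), \quad\hbox{which $\beta$-converts to}\quad
   Suc(0)=Suc(0). \]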
   390 
   391 \index{parameters|bold}\index{eigenvariables|see{parameters}}
   392 A logic may attach provisos to certain of its rules, especially quantifier
   393 rules.  We cannot hope to formalize arbitrary provisos.  Fortunately, those
   394 typical of quantifier rules always have the same form, namely `$x$ not free in
   395 \ldots {\it (some set of formulae)},' where $x$ is a variable (called a {\bf
   396 parameter} or {\bf eigenvariable}) in some premise.  Isabelle treats
   397 provisos using~$\Forall$, its inbuilt notion of `for all'.
   398 \index{meta-quantifiers}
   399 
   400 The purpose of the proviso `$x$ not free in \ldots' is
   401 to ensure that the premise may not make assumptions about the value of~$x$,
   402 and therefore holds for all~$x$.  We formalize $(\forall I)$ by
   403 \[ \left(\Forall x. Trueprop(P(x))\right) \Imp Trueprop(\forall x.P(x)). \]
   404 This means, `if $P(x)$ is true for all~$x$, then $\forall x.P(x)$ is true.'
   405 The $\forall E$ rule exploits $\beta$-conversion.  Hiding $Trueprop$, the
   406 $\forall$ axioms are
   407 $$ \left(\Forall x. P(x)\right)  \Imp  \forall x.P(x)   \eqno(\forall I) $$
   408 $$ (\forall x.P(x))  \Imp P(t).  \eqno(\forall E) $$
   409 
   410 \noindent
   411 We have defined the object-level universal quantifier~($\forall$)
   412 using~$\Forall$.  But we do not require meta-level counterparts of all the
   413 connectives of the object-logic!  Consider the existential quantifier: 
   414 $$ P(t)  \Imp  \exists x.P(x)  \eqno(\exists I) $$
   415 $$ \List{\exists x.P(x);\; \Forall x. P(x)\Imp Q} \Imp Q  \eqno(\exists E) $$
   416 Let us verify $(\exists E)$ semantically.  Suppose that the premises
   417 hold; since $\exists x.P(x)$ is true, we may choose an~$a$ such that $P(a)$ is
   418 true.  Instantiating $\Forall x. P(x)\Imp Q$ with $a$ yields $P(a)\Imp Q$, and
   419 we obtain the desired conclusion, $Q$.
   420 
   421 The treatment of substitution deserves mention.  The rule
   422 \[ \infer{P[u/t]}{t=u & P} \]
   423 would be hard to formalize in Isabelle.  It calls for replacing~$t$ by $u$
   424 throughout~$P$, which cannot be expressed using $\beta$-conversion.  Our
   425 rule~$(subst)$ uses~$P$ as a template for substitution, inferring $P[u/x]$
   426 from~$P[t/x]$.  When we formalize this as an axiom, the template becomes a
   427 function variable:
   428 $$ \List{t=u; P(t)} \Imp P(u).  \eqno(subst) $$
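
For example, taking the template to be $\lambda x.\,x=t$ instantiates
$(subst)$ to
\[ \List{t=u;\; t=t} \Imp u=t; \]
since the second premise is an instance of $(refl)$, this yields the symmetry
of equality, $t=u\Imp u=t$.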
   429 
   430 
   431 \subsection{Signatures and theories}
   432 \index{signatures|bold}
   433 
   434 A {\bf signature} contains the information necessary for type-checking,
   435 parsing and pretty printing a term.  It specifies type classes and their
   436 relationships, types and their arities, constants and their types, etc.  It
   437 also contains grammar rules, specified using mixfix declarations.
   438 
   439 Two signatures can be merged provided their specifications are compatible ---
   440 they must not, for example, assign different types to the same constant.
   441 Under similar conditions, a signature can be extended.  Signatures are
   442 managed internally by Isabelle; users seldom encounter them.
   443 
   444 \index{theories|bold} A {\bf theory} consists of a signature plus a collection
   445 of axioms.  The Pure theory contains only the meta-logic.  Theories can be
   446 combined provided their signatures are compatible.  A theory definition
   447 extends an existing theory with further signature specifications --- classes,
   448 types, constants and mixfix declarations --- plus lists of axioms and
   449 definitions etc., expressed as strings to be parsed.  A theory can formalize a
   450 small piece of mathematics, such as lists and their operations, or an entire
   451 logic.  A mathematical development typically involves many theories in a
   452 hierarchy.  For example, the Pure theory could be extended to form a theory
   453 for Fig.\ts\ref{fol-fig}; this could be extended in two separate ways to form
   454 a theory for natural numbers and a theory for lists; the union of these two
   455 could be extended into a theory defining the length of a list:
   456 \begin{tt}
   457 \[
   458 \begin{array}{c@{}c@{}c@{}c@{}c}
   459      {}   &     {}   &\hbox{Pure}&     {}  &     {}  \\
   460      {}   &     {}   &  \downarrow &     {}   &     {}   \\
   461      {}   &     {}   &\hbox{FOL} &     {}   &     {}   \\
   462      {}   & \swarrow &     {}    & \searrow &     {}   \\
   463  \hbox{Nat} &   {}   &     {}    &     {}   & \hbox{List} \\
   464      {}   & \searrow &     {}    & \swarrow &     {}   \\
   465      {}   &     {} &\hbox{Nat}+\hbox{List}&  {}   &     {}   \\
   466      {}   &     {}   &  \downarrow &     {}   &     {}   \\
   467      {}   &     {} & \hbox{Length} &  {}   &     {}
   468 \end{array}
   469 \]
   470 \end{tt}%
   471 Each Isabelle proof typically works within a single theory, which is
   472 associated with the proof state.  However, many different theories may
   473 coexist at the same time, and you may work in each of these during a single
   474 session.  
   475 
   476 \begin{warn}\index{constants!clashes with variables}%
   477   Confusing problems arise if you work in the wrong theory.  Each theory
   478   defines its own syntax.  An identifier may be regarded in one theory as a
   479   constant and in another as a variable, for example.
   480 \end{warn}
   481 
   482 \section{Proof construction in Isabelle}
   483 I have elsewhere described the meta-logic and demonstrated it by
   484 formalizing first-order logic~\cite{paulson-found}.  There is a one-to-one
   485 correspondence between meta-level proofs and object-level proofs.  To each
   486 use of a meta-level axiom, such as $(\forall I)$, there is a use of the
   487 corresponding object-level rule.  Object-level assumptions and parameters
   488 have meta-level counterparts.  The meta-level formalization is {\bf
   489   faithful}, admitting no incorrect object-level inferences, and {\bf
   490   adequate}, admitting all correct object-level inferences.  These
   491 properties must be demonstrated separately for each object-logic.
   492 
   493 The meta-logic is defined by a collection of inference rules, including
   494 equational rules for the $\lambda$-calculus and logical rules.  The rules
   495 for~$\Imp$ and~$\Forall$ resemble those for~$\imp$ and~$\forall$ in
   496 Fig.\ts\ref{fol-fig}.  Proofs performed using the primitive meta-rules
   497 would be lengthy; Isabelle proofs normally use certain derived rules.
   498 {\bf Resolution}, in particular, is convenient for backward proof.
   499 
   500 Unification is central to theorem proving.  It supports quantifier
   501 reasoning by allowing certain `unknown' terms to be instantiated later,
   502 possibly in stages.  When proving that the time required to sort $n$
   503 integers is proportional to~$n^2$, we need not state the constant of
   504 proportionality; when proving that a hardware adder will deliver the sum of
   505 its inputs, we need not state how many clock ticks will be required.  Such
   506 quantities often emerge from the proof.
   507 
   508 Isabelle provides {\bf schematic variables}, or {\bf
   509   unknowns},\index{unknowns} for unification.  Logically, unknowns are free
   510 variables.  But while ordinary variables remain fixed, unification may
   511 instantiate unknowns.  Unknowns are written with a ?\ prefix and are
   512 frequently subscripted: $\Var{a}$, $\Var{a@1}$, $\Var{a@2}$, \ldots,
   513 $\Var{P}$, $\Var{P@1}$, \ldots.
   514 
   515 Recall that an inference rule of the form
   516 \[ \infer{\phi}{\phi@1 & \ldots & \phi@n} \]
   517 is formalized in Isabelle's meta-logic as the axiom
   518 $\List{\phi@1; \ldots; \phi@n} \Imp \phi$.\index{resolution}
   519 Such axioms resemble Prolog's Horn clauses, and can be combined by
   520 resolution --- Isabelle's principal proof method.  Resolution yields both
   521 forward and backward proof.  Backward proof works by unifying a goal with
   522 the conclusion of a rule, whose premises become new subgoals.  Forward proof
   523 works by unifying theorems with the premises of a rule, deriving a new theorem.
   524 
   525 Isabelle formulae require an extended notion of resolution.
   526 They differ from Horn clauses in two major respects:
   527 \begin{itemize}
   528   \item They are written in the typed $\lambda$-calculus, and therefore must be
   529 resolved using higher-order unification.
   530 
   531 \item The constituents of a clause need not be atomic formulae.  Any
   532   formula of the form $Trueprop(\cdots)$ is atomic, but axioms such as
   533   ${\imp}I$ and $\forall I$ contain non-atomic formulae.
   534 \end{itemize}
   535 Isabelle has little in common with classical resolution theorem provers
   536 such as Otter~\cite{wos-bledsoe}.  At the meta-level, Isabelle proves
   537 theorems in their positive form, not by refutation.  However, an
   538 object-logic that includes a contradiction rule may employ a refutation
   539 proof procedure.
   540 
   541 
   542 \subsection{Higher-order unification}
   543 \index{unification!higher-order|bold}
   544 Unification is equation solving.  The solution of $f(\Var{x},c) \qeq
   545 f(d,\Var{y})$ is $\Var{x}\equiv d$ and $\Var{y}\equiv c$.  {\bf
   546 Higher-order unification} is equation solving for typed $\lambda$-terms.
   547 To handle $\beta$-conversion, it must reduce $(\lambda x.t)u$ to $t[u/x]$.
   548 That is easy --- in the typed $\lambda$-calculus, all reduction sequences
   549 terminate at a normal form.  But it must guess the unknown
   550 function~$\Var{f}$ in order to solve the equation
   551 \begin{equation} \label{hou-eqn}
   552  \Var{f}(t) \qeq g(u@1,\ldots,u@k).
   553 \end{equation}
   554 Huet's~\cite{huet75} search procedure solves equations by imitation and
   555 projection.  {\bf Imitation} makes~$\Var{f}$ apply the leading symbol (if a
   556 constant) of the right-hand side.  To solve equation~(\ref{hou-eqn}), it
   557 guesses
   558 \[ \Var{f} \equiv \lambda x. g(\Var{h@1}(x),\ldots,\Var{h@k}(x)), \]
   559 where $\Var{h@1}$, \ldots, $\Var{h@k}$ are new unknowns.  Assuming there are no
   560 other occurrences of~$\Var{f}$, equation~(\ref{hou-eqn}) simplifies to the
   561 set of equations
   562 \[ \Var{h@1}(t)\qeq u@1 \quad\ldots\quad \Var{h@k}(t)\qeq u@k. \]
   563 If the procedure solves these equations, instantiating $\Var{h@1}$, \ldots,
   564 $\Var{h@k}$, then it yields an instantiation for~$\Var{f}$.
   565 
   566 {\bf Projection} makes $\Var{f}$ apply one of its arguments.  To solve
   567 equation~(\ref{hou-eqn}), if $t$ expects~$m$ arguments and delivers a
   568 result of suitable type, it guesses
   569 \[ \Var{f} \equiv \lambda x. x(\Var{h@1}(x),\ldots,\Var{h@m}(x)), \]
   570 where $\Var{h@1}$, \ldots, $\Var{h@m}$ are new unknowns.  Assuming there are no
   571 other occurrences of~$\Var{f}$, equation~(\ref{hou-eqn}) simplifies to the 
   572 equation 
   573 \[ t(\Var{h@1}(t),\ldots,\Var{h@m}(t)) \qeq g(u@1,\ldots,u@k). \]
   574 
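As a small worked example, consider the equation $\Var{f}(0) \qeq Suc(0)$.
Projection guesses $\Var{f}\equiv\lambda x.\,x$, leaving the unsolvable
equation $0\qeq Suc(0)$.  Imitation guesses
$\Var{f}\equiv \lambda x.\,Suc(\Var{h@1}(x))$, leaving $\Var{h@1}(0)\qeq 0$;
solving this by imitation or by projection yields the two unifiers
\[ \Var{f}\equiv \lambda x.\,Suc(0) \qquad\qquad
   \Var{f}\equiv \lambda x.\,Suc(x). \]
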
   575 \begin{warn}\index{unification!incompleteness of}%
   576 Huet's unification procedure is complete.  Isabelle's polymorphic version,
   577 which solves for type unknowns as well as for term unknowns, is incomplete.
   578 The problem is that projection requires type information.  In
   579 equation~(\ref{hou-eqn}), if the type of~$t$ is unknown, then projections
   580 are possible for all~$m\geq0$, and the types of the $\Var{h@i}$ will be
   581 similarly unconstrained.  Therefore, Isabelle never attempts such
   582 projections, and may fail to find unifiers where a type unknown turns out
   583 to be a function type.
   584 \end{warn}
   585 
   586 \index{unknowns!function|bold}
   587 Given $\Var{f}(t@1,\ldots,t@n)\qeq u$, Huet's procedure could make up to
   588 $n+1$ guesses.  The search tree and set of unifiers may be infinite.  But
   589 higher-order unification can work effectively, provided you are careful
   590 with {\bf function unknowns}:
   591 \begin{itemize}
   592   \item Equations with no function unknowns are solved using first-order
   593 unification, extended to treat bound variables.  For example, $\lambda x.x
   594 \qeq \lambda x.\Var{y}$ has no solution because $\Var{y}\equiv x$ would
   595 capture the free variable~$x$.
   596 
   597   \item An occurrence of the term $\Var{f}(x,y,z)$, where the arguments are
   598 distinct bound variables, causes no difficulties.  Its projections can only
   599 match the corresponding variables.
   600 
   601   \item Even an equation such as $\Var{f}(a)\qeq a+a$ is all right.  It has
   602 four solutions, but Isabelle evaluates them lazily, trying projection before
   603 imitation.  The first solution is usually the one desired:
   604 \[ \Var{f}\equiv \lambda x. x+x \quad
   605    \Var{f}\equiv \lambda x. a+x \quad
   606    \Var{f}\equiv \lambda x. x+a \quad
   607    \Var{f}\equiv \lambda x. a+a \]
   608   \item  Equations such as $\Var{f}(\Var{x},\Var{y})\qeq t$ and
   609 $\Var{f}(\Var{g}(x))\qeq t$ admit vast numbers of unifiers, and must be
   610 avoided. 
   611 \end{itemize}
   612 In problematic cases, you may have to instantiate some unknowns before
   613 invoking unification. 
   614 
   615 
   616 \subsection{Joining rules by resolution} \label{joining}
   617 \index{resolution|bold}
   618 Let $\List{\psi@1; \ldots; \psi@m} \Imp \psi$ and $\List{\phi@1; \ldots;
   619 \phi@n} \Imp \phi$ be two Isabelle theorems, representing object-level rules. 
   620 Choosing some~$i$ from~1 to~$n$, suppose that $\psi$ and $\phi@i$ have a
   621 higher-order unifier.  Writing $Xs$ for the application of substitution~$s$ to
   622 expression~$X$, this means there is some~$s$ such that $\psi s\equiv \phi@i s$.
   623 By resolution, we may conclude
   624 \[ (\List{\phi@1; \ldots; \phi@{i-1}; \psi@1; \ldots; \psi@m;
   625           \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s.
   626 \]
   627 The substitution~$s$ may instantiate unknowns in both rules.  In short,
   628 resolution is the following rule:
   629 \[ \infer[(\psi s\equiv \phi@i s)]
   630          {(\List{\phi@1; \ldots; \phi@{i-1}; \psi@1; \ldots; \psi@m;
   631           \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s}
   632          {\List{\psi@1; \ldots; \psi@m} \Imp \psi & &
   633           \List{\phi@1; \ldots; \phi@n} \Imp \phi}
   634 \]
   635 It operates at the meta-level, on Isabelle theorems, and is justified by
   636 the properties of $\Imp$ and~$\Forall$.  It takes the number~$i$ (for
   637 $1\leq i\leq n$) as a parameter and may yield infinitely many conclusions,
   638 one for each unifier of $\psi$ with $\phi@i$.  Isabelle returns these
   639 conclusions as a sequence (lazy list).
   640 
   641 Resolution expects the rules to have no outer quantifiers~($\Forall$).
   642 It may rename or instantiate any schematic variables, but leaves free
   643 variables unchanged.  When constructing a theory, Isabelle puts the
   644 rules into a standard form with all free variables converted into
   645 schematic ones; for instance, $({\imp}E)$ becomes
   646 \[ \List{\Var{P}\imp \Var{Q}; \Var{P}}  \Imp \Var{Q}. 
   647 \]
   648 When resolving two rules, the unknowns in the first rule are renamed, by
   649 subscripting, to make them distinct from the unknowns in the second rule.  To
   650 resolve $({\imp}E)$ with itself, the first copy of the rule becomes
   651 \[ \List{\Var{P@1}\imp \Var{Q@1}; \Var{P@1}}  \Imp \Var{Q@1}. \]
   652 Resolving this with $({\imp}E)$ in the first premise, unifying $\Var{Q@1}$ with
   653 $\Var{P}\imp \Var{Q}$, is the meta-level inference
   654 \[ \infer{\List{\Var{P@1}\imp (\Var{P}\imp \Var{Q}); \Var{P@1}; \Var{P}} 
   655            \Imp\Var{Q}.}
   656          {\List{\Var{P@1}\imp \Var{Q@1}; \Var{P@1}}  \Imp \Var{Q@1} & &
   657           \List{\Var{P}\imp \Var{Q}; \Var{P}}  \Imp \Var{Q}}
   658 \]
   659 Renaming the unknowns in the resolvent, we have derived the
   660 object-level rule\index{rules!derived}
   661 \[ \infer{Q.}{R\imp (P\imp Q)  &  R  &  P}  \]
   662 Joining rules in this fashion is a simple way of proving theorems.  The
   663 derived rules are conservative extensions of the object-logic, and may permit
   664 simpler proofs.  Let us consider another example.  Suppose we have the axiom
   665 $$ \forall x\,y. Suc(x)=Suc(y)\imp x=y. \eqno (inject) $$
   666 
   667 \noindent 
   668 The standard form of $(\forall E)$ is
$(\forall x.\Var{P}(x))  \Imp \Var{P}(\Var{t})$.
   670 Resolving $(inject)$ with $(\forall E)$ replaces $\Var{P}$ by
   671 $\lambda x. \forall y. Suc(x)=Suc(y)\imp x=y$ and leaves $\Var{t}$
   672 unchanged, yielding  
   673 \[ \forall y. Suc(\Var{t})=Suc(y)\imp \Var{t}=y. \]
   674 Resolving this with $(\forall E)$ puts a subscript on~$\Var{t}$
   675 and yields
   676 \[ Suc(\Var{t@1})=Suc(\Var{t})\imp \Var{t@1}=\Var{t}. \]
   677 Resolving this with $({\imp}E)$ increases the subscripts and yields
   678 \[ Suc(\Var{t@2})=Suc(\Var{t@1})\Imp \Var{t@2}=\Var{t@1}. 
   679 \]
   680 We have derived the rule
   681 \[ \infer{m=n,}{Suc(m)=Suc(n)} \]
   682 which goes directly from $Suc(m)=Suc(n)$ to $m=n$.  It is handy for simplifying
   683 an equation like $Suc(Suc(Suc(m)))=Suc(Suc(Suc(0)))$.  
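
For example, three applications of this rule reduce that equation successively
to $Suc(Suc(m))=Suc(Suc(0))$, then to $Suc(m)=Suc(0)$, and finally to $m=0$.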
   684 
   685 
   686 \section{Lifting a rule into a context}
   687 The rules $({\imp}I)$ and $(\forall I)$ may seem unsuitable for
   688 resolution.  They have non-atomic premises, namely $P\Imp Q$ and $\Forall
   689 x.P(x)$, while the conclusions of all the rules are atomic (they have the form
   690 $Trueprop(\cdots)$).  Isabelle gets round the problem through a meta-inference
   691 called \bfindex{lifting}.  Let us consider how to construct proofs such as
   692 \[ \infer[({\imp}I)]{P\imp(Q\imp R)}
   693          {\infer[({\imp}I)]{Q\imp R}
   694                         {\infer*{R}{[P,Q]}}}
   695    \qquad
   696    \infer[(\forall I)]{\forall x\,y.P(x,y)}
   697          {\infer[(\forall I)]{\forall y.P(x,y)}{P(x,y)}}
   698 \]
   699 
   700 \subsection{Lifting over assumptions}
   701 \index{assumptions!lifting over}
   702 Lifting over $\theta\Imp{}$ is the following meta-inference rule:
   703 \[ \infer{\List{\theta\Imp\phi@1; \ldots; \theta\Imp\phi@n} \Imp
   704           (\theta \Imp \phi)}
   705          {\List{\phi@1; \ldots; \phi@n} \Imp \phi} \]
   706 This is clearly sound: if $\List{\phi@1; \ldots; \phi@n} \Imp \phi$ is true and
   707 $\theta\Imp\phi@1$, \ldots, $\theta\Imp\phi@n$ and $\theta$ are all true then
   708 $\phi$ must be true.  Iterated lifting over a series of meta-formulae
   709 $\theta@k$, \ldots, $\theta@1$ yields an object-rule whose conclusion is
   710 $\List{\theta@1; \ldots; \theta@k} \Imp \phi$.  Typically the $\theta@i$ are
   711 the assumptions in a natural deduction proof; lifting copies them into a rule's
   712 premises and conclusion.
   713 
   714 When resolving two rules, Isabelle lifts the first one if necessary.  The
   715 standard form of $({\imp}I)$ is
   716 \[ (\Var{P} \Imp \Var{Q})  \Imp  \Var{P}\imp \Var{Q}.   \]
   717 To resolve this rule with itself, Isabelle modifies one copy as follows: it
   718 renames the unknowns to $\Var{P@1}$ and $\Var{Q@1}$, then lifts the rule over
   719 $\Var{P}\Imp{}$ to obtain
   720 \[ (\Var{P}\Imp (\Var{P@1} \Imp \Var{Q@1})) \Imp (\Var{P} \Imp 
   721    (\Var{P@1}\imp \Var{Q@1})).   \]
   722 Using the $\List{\cdots}$ abbreviation, this can be written as
   723 \[ \List{\List{\Var{P}; \Var{P@1}} \Imp \Var{Q@1}; \Var{P}} 
   724    \Imp  \Var{P@1}\imp \Var{Q@1}.   \]
   725 Unifying $\Var{P}\Imp \Var{P@1}\imp\Var{Q@1}$ with $\Var{P} \Imp
   726 \Var{Q}$ instantiates $\Var{Q}$ to ${\Var{P@1}\imp\Var{Q@1}}$.
   727 Resolution yields
   728 \[ (\List{\Var{P}; \Var{P@1}} \Imp \Var{Q@1}) \Imp
   729 \Var{P}\imp(\Var{P@1}\imp\Var{Q@1}).   \]
   730 This represents the derived rule
   731 \[ \infer{P\imp(Q\imp R).}{\infer*{R}{[P,Q]}} \]
   732 
   733 \subsection{Lifting over parameters}
   734 \index{parameters!lifting over}
   735 An analogous form of lifting handles premises of the form $\Forall x\ldots\,$. 
   736 Here, lifting prefixes an object-rule's premises and conclusion with $\Forall
   737 x$.  At the same time, lifting introduces a dependence upon~$x$.  It replaces
   738 each unknown $\Var{a}$ in the rule by $\Var{a'}(x)$, where $\Var{a'}$ is a new
   739 unknown (by subscripting) of suitable type --- necessarily a function type.  In
   740 short, lifting is the meta-inference
   741 \[ \infer{\List{\Forall x.\phi@1^x; \ldots; \Forall x.\phi@n^x} 
   742            \Imp \Forall x.\phi^x,}
   743          {\List{\phi@1; \ldots; \phi@n} \Imp \phi} \]
   744 %
   745 where $\phi^x$ stands for the result of lifting unknowns over~$x$ in
   746 $\phi$.  It is not hard to verify that this meta-inference is sound.  If
   747 $\phi\Imp\psi$ then $\phi^x\Imp\psi^x$ for all~$x$; so if $\phi^x$ is true
   748 for all~$x$ then so is $\psi^x$.  Thus, from $\phi\Imp\psi$ we conclude
   749 $(\Forall x.\phi^x) \Imp (\Forall x.\psi^x)$.
   750 
   751 For example, $(\disj I)$ might be lifted to
   752 \[ (\Forall x.\Var{P@1}(x)) \Imp (\Forall x. \Var{P@1}(x)\disj \Var{Q@1}(x))\]
   753 and $(\forall I)$ to
   754 \[ (\Forall x\,y.\Var{P@1}(x,y)) \Imp (\Forall x. \forall y.\Var{P@1}(x,y)). \]
   755 Isabelle has renamed a bound variable in $(\forall I)$ from $x$ to~$y$,
   756 avoiding a clash.  Resolving the above with $(\forall I)$ is the meta-inference
\[ \infer{(\Forall x\,y.\Var{P@1}(x,y)) \Imp \forall x\,y.\Var{P@1}(x,y) }
   758          {(\Forall x\,y.\Var{P@1}(x,y)) \Imp 
   759                (\Forall x. \forall y.\Var{P@1}(x,y))  &
   760           (\Forall x.\Var{P}(x)) \Imp (\forall x.\Var{P}(x))} \]
   761 Here, $\Var{P}$ is replaced by $\lambda x.\forall y.\Var{P@1}(x,y)$; the
   762 resolvent expresses the derived rule
   763 \[ \vcenter{ \infer{\forall x\,y.Q(x,y)}{Q(x,y)} }
   764    \quad\hbox{provided $x$, $y$ not free in the assumptions} 
   765 \] 
   766 I discuss lifting and parameters at length elsewhere~\cite{paulson-found}.
   767 Miller goes into even greater detail~\cite{miller-mixed}.
   768 
   769 
   770 \section{Backward proof by resolution}
   771 \index{resolution!in backward proof}
   772 
   773 Resolution is convenient for deriving simple rules and for reasoning
   774 forward from facts.  It can also support backward proof, where we start
   775 with a goal and refine it to progressively simpler subgoals until all have
   776 been solved.  {\sc lcf} and its descendants {\sc hol} and Nuprl provide
   777 tactics and tacticals, which constitute a sophisticated language for
   778 expressing proof searches.  {\bf Tactics} refine subgoals while {\bf
   779   tacticals} combine tactics.
   780 
   781 \index{LCF system}
   782 Isabelle's tactics and tacticals work differently from {\sc lcf}'s.  An
   783 Isabelle rule is bidirectional: there is no distinction between
   784 inputs and outputs.  {\sc lcf} has a separate tactic for each rule;
   785 Isabelle performs refinement by any rule in a uniform fashion, using
   786 resolution.
   787 
   788 Isabelle works with meta-level theorems of the form
   789 \( \List{\phi@1; \ldots; \phi@n} \Imp \phi \).
   790 We have viewed this as the {\bf rule} with premises
   791 $\phi@1$,~\ldots,~$\phi@n$ and conclusion~$\phi$.  It can also be viewed as
   792 the {\bf proof state}\index{proof state}
   793 with subgoals $\phi@1$,~\ldots,~$\phi@n$ and main
   794 goal~$\phi$.
   795 
   796 To prove the formula~$\phi$, take $\phi\Imp \phi$ as the initial proof
   797 state.  This assertion is, trivially, a theorem.  At a later stage in the
   798 backward proof, a typical proof state is $\List{\phi@1; \ldots; \phi@n}
   799 \Imp \phi$.  This proof state is a theorem, ensuring that the subgoals
   800 $\phi@1$,~\ldots,~$\phi@n$ imply~$\phi$.  If $n=0$ then we have
   801 proved~$\phi$ outright.  If $\phi$ contains unknowns, they may become
   802 instantiated during the proof; a proof state may be $\List{\phi@1; \ldots;
   803 \phi@n} \Imp \phi'$, where $\phi'$ is an instance of~$\phi$.
   804 
   805 \subsection{Refinement by resolution}
   806 To refine subgoal~$i$ of a proof state by a rule, perform the following
   807 resolution: 
   808 \[ \infer{\hbox{new proof state}}{\hbox{rule} & & \hbox{proof state}} \]
   809 Suppose the rule is $\List{\psi'@1; \ldots; \psi'@m} \Imp \psi'$ after
   810 lifting over subgoal~$i$'s assumptions and parameters.  If the proof state
   811 is $\List{\phi@1; \ldots; \phi@n} \Imp \phi$, then the new proof state is
   812 (for~$1\leq i\leq n$)
   813 \[ (\List{\phi@1; \ldots; \phi@{i-1}; \psi'@1;
   814           \ldots; \psi'@m; \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s.  \]
   815 Substitution~$s$ unifies $\psi'$ with~$\phi@i$.  In the proof state,
   816 subgoal~$i$ is replaced by $m$ new subgoals, the rule's instantiated premises.
   817 If some of the rule's unknowns are left un-instantiated, they become new
   818 unknowns in the proof state.  Refinement by~$(\exists I)$, namely
   819 \[ \Var{P}(\Var{t}) \Imp \exists x. \Var{P}(x), \]
   820 inserts a new unknown derived from~$\Var{t}$ by subscripting and lifting.
   821 We do not have to specify an `existential witness' when
   822 applying~$(\exists I)$.  Further resolutions may instantiate unknowns in
   823 the proof state.
   824 
   825 \subsection{Proof by assumption}
   826 \index{assumptions!use of}
   827 In the course of a natural deduction proof, parameters $x@1$, \ldots,~$x@l$ and
   828 assumptions $\theta@1$, \ldots, $\theta@k$ accumulate, forming a context for
   829 each subgoal.  Repeated lifting steps can lift a rule into any context.  To
   830 aid readability, Isabelle puts contexts into a normal form, gathering the
   831 parameters at the front:
   832 \begin{equation} \label{context-eqn}
   833 \Forall x@1 \ldots x@l. \List{\theta@1; \ldots; \theta@k}\Imp\theta. 
   834 \end{equation}
   835 Under the usual reading of the connectives, this expresses that $\theta$
follows from $\theta@1$,~\ldots,~$\theta@k$ for arbitrary
   837 $x@1$,~\ldots,~$x@l$.  It is trivially true if $\theta$ equals any of
$\theta@1$,~\ldots,~$\theta@k$, or is unifiable with any of them.  This
   839 models proof by assumption in natural deduction.
   840 
   841 Isabelle automates the meta-inference for proof by assumption.  Its arguments
   842 are the meta-theorem $\List{\phi@1; \ldots; \phi@n} \Imp \phi$, and some~$i$
   843 from~1 to~$n$, where $\phi@i$ has the form~(\ref{context-eqn}).  Its results
   844 are meta-theorems of the form
\[ (\List{\phi@1; \ldots; \phi@{i-1}; \phi@{i+1}; \ldots; \phi@n} \Imp \phi)s \]
   846 for each $s$ and~$j$ such that $s$ unifies $\lambda x@1 \ldots x@l. \theta@j$
   847 with $\lambda x@1 \ldots x@l. \theta$.  Isabelle supplies the parameters
   848 $x@1$,~\ldots,~$x@l$ to higher-order unification as bound variables, which
   849 regards them as unique constants with a limited scope --- this enforces
   850 parameter provisos~\cite{paulson-found}.
   851 
   852 The premise represents a proof state with~$n$ subgoals, of which the~$i$th
   853 is to be solved by assumption.  Isabelle searches the subgoal's context for
   854 an assumption~$\theta@j$ that can solve it.  For each unifier, the
   855 meta-inference returns an instantiated proof state from which the $i$th
   856 subgoal has been removed.  Isabelle searches for a unifying assumption; for
   857 readability and robustness, proofs do not refer to assumptions by number.
   858 
   859 Consider the proof state 
   860 \[ (\List{P(a); P(b)} \Imp P(\Var{x})) \Imp Q(\Var{x}). \]
   861 Proof by assumption (with $i=1$, the only possibility) yields two results:
   862 \begin{itemize}
   863   \item $Q(a)$, instantiating $\Var{x}\equiv a$
   864   \item $Q(b)$, instantiating $\Var{x}\equiv b$
   865 \end{itemize}
   866 Here, proof by assumption affects the main goal.  It could also affect
   867 other subgoals; if we also had the subgoal ${\List{P(b); P(c)} \Imp
   868   P(\Var{x})}$, then $\Var{x}\equiv a$ would transform it to ${\List{P(b);
   869     P(c)} \Imp P(a)}$, which might be unprovable.
   870 
   871 
   872 \subsection{A propositional proof} \label{prop-proof}
   873 \index{examples!propositional}
   874 Our first example avoids quantifiers.  Given the main goal $P\disj P\imp
   875 P$, Isabelle creates the initial state
   876 \[ (P\disj P\imp P)\Imp (P\disj P\imp P). \] 
   877 %
   878 Bear in mind that every proof state we derive will be a meta-theorem,
   879 expressing that the subgoals imply the main goal.  Our aim is to reach the
   880 state $P\disj P\imp P$; this meta-theorem is the desired result.
   881 
   882 The first step is to refine subgoal~1 by (${\imp}I)$, creating a new state
   883 where $P\disj P$ is an assumption:
   884 \[ (P\disj P\Imp P)\Imp (P\disj P\imp P) \]
   885 The next step is $(\disj E)$, which replaces subgoal~1 by three new subgoals. 
   886 Because of lifting, each subgoal contains a copy of the context --- the
   887 assumption $P\disj P$.  (In fact, this assumption is now redundant; we shall
   888 shortly see how to get rid of it!)  The new proof state is the following
   889 meta-theorem, laid out for clarity:
   890 \[ \begin{array}{l@{}l@{\qquad\qquad}l} 
   891   \lbrakk\;& P\disj P\Imp \Var{P@1}\disj\Var{Q@1}; & \hbox{(subgoal 1)} \\
   892            & \List{P\disj P; \Var{P@1}} \Imp P;    & \hbox{(subgoal 2)} \\
   893            & \List{P\disj P; \Var{Q@1}} \Imp P     & \hbox{(subgoal 3)} \\
   894   \rbrakk\;& \Imp (P\disj P\imp P)                 & \hbox{(main goal)}
   895    \end{array} 
   896 \]
   897 Notice the unknowns in the proof state.  Because we have applied $(\disj E)$,
   898 we must prove some disjunction, $\Var{P@1}\disj\Var{Q@1}$.  Of course,
   899 subgoal~1 is provable by assumption.  This instantiates both $\Var{P@1}$ and
   900 $\Var{Q@1}$ to~$P$ throughout the proof state:
   901 \[ \begin{array}{l@{}l@{\qquad\qquad}l} 
   902     \lbrakk\;& \List{P\disj P; P} \Imp P; & \hbox{(subgoal 1)} \\
   903              & \List{P\disj P; P} \Imp P  & \hbox{(subgoal 2)} \\
   904     \rbrakk\;& \Imp (P\disj P\imp P)      & \hbox{(main goal)}
   905    \end{array} \]
   906 Both of the remaining subgoals can be proved by assumption.  After two such
   907 steps, the proof state is $P\disj P\imp P$.
   908 
   909 
   910 \subsection{A quantifier proof}
   911 \index{examples!with quantifiers}
   912 To illustrate quantifiers and $\Forall$-lifting, let us prove
   913 $(\exists x.P(f(x)))\imp(\exists x.P(x))$.  The initial proof
   914 state is the trivial meta-theorem 
   915 \[ (\exists x.P(f(x)))\imp(\exists x.P(x)) \Imp 
   916    (\exists x.P(f(x)))\imp(\exists x.P(x)). \]
   917 As above, the first step is refinement by (${\imp}I)$: 
   918 \[ (\exists x.P(f(x))\Imp \exists x.P(x)) \Imp 
   919    (\exists x.P(f(x)))\imp(\exists x.P(x)) 
   920 \]
   921 The next step is $(\exists E)$, which replaces subgoal~1 by two new subgoals.
   922 Both have the assumption $\exists x.P(f(x))$.  The new proof
   923 state is the meta-theorem  
   924 \[ \begin{array}{l@{}l@{\qquad\qquad}l} 
   925    \lbrakk\;& \exists x.P(f(x)) \Imp \exists x.\Var{P@1}(x); & \hbox{(subgoal 1)} \\
   926             & \Forall x.\List{\exists x.P(f(x)); \Var{P@1}(x)} \Imp 
   927                        \exists x.P(x)  & \hbox{(subgoal 2)} \\
   928     \rbrakk\;& \Imp (\exists x.P(f(x)))\imp(\exists x.P(x))  & \hbox{(main goal)}
   929    \end{array} 
   930 \]
   931 The unknown $\Var{P@1}$ appears in both subgoals.  Because we have applied
   932 $(\exists E)$, we must prove $\exists x.\Var{P@1}(x)$, where $\Var{P@1}(x)$ may
   933 become any formula possibly containing~$x$.  Proving subgoal~1 by assumption
   934 instantiates $\Var{P@1}$ to~$\lambda x.P(f(x))$:  
   935 \[ \left(\Forall x.\List{\exists x.P(f(x)); P(f(x))} \Imp 
   936          \exists x.P(x)\right) 
   937    \Imp (\exists x.P(f(x)))\imp(\exists x.P(x)) 
   938 \]
   939 The next step is refinement by $(\exists I)$.  The rule is lifted into the
   940 context of the parameter~$x$ and the assumption $P(f(x))$.  This copies
   941 the context to the subgoal and allows the existential witness to
   942 depend upon~$x$: 
   943 \[ \left(\Forall x.\List{\exists x.P(f(x)); P(f(x))} \Imp 
   944          P(\Var{x@2}(x))\right) 
   945    \Imp (\exists x.P(f(x)))\imp(\exists x.P(x)) 
   946 \]
   947 The existential witness, $\Var{x@2}(x)$, consists of an unknown
   948 applied to a parameter.  Proof by assumption unifies $\lambda x.P(f(x))$ 
   949 with $\lambda x.P(\Var{x@2}(x))$, instantiating $\Var{x@2}$ to $f$.  The final
   950 proof state contains no subgoals: $(\exists x.P(f(x)))\imp(\exists x.P(x))$.
   951 
   952 
   953 \subsection{Tactics and tacticals}
   954 \index{tactics|bold}\index{tacticals|bold}
   955 {\bf Tactics} perform backward proof.  Isabelle tactics differ from those
   956 of {\sc lcf}, {\sc hol} and Nuprl by operating on entire proof states,
   957 rather than on individual subgoals.  An Isabelle tactic is a function that
   958 takes a proof state and returns a sequence (lazy list) of possible
   959 successor states.  Lazy lists are coded in ML as functions, a standard
   960 technique~\cite{paulson-ml2}.  Isabelle represents proof states by theorems.
   961 
   962 Basic tactics execute the meta-rules described above, operating on a
   963 given subgoal.  The {\bf resolution tactics} take a list of rules and
   964 return next states for each combination of rule and unifier.  The {\bf
   965 assumption tactic} examines the subgoal's assumptions and returns next
   966 states for each combination of assumption and unifier.  Lazy lists are
   967 essential because higher-order resolution may return infinitely many
   968 unifiers.  If there are no matching rules or assumptions then no next
   969 states are generated; a tactic application that returns an empty list is
   970 said to {\bf fail}.
   971 
   972 Sequences realize their full potential with {\bf tacticals} --- operators
   973 for combining tactics.  Depth-first search, breadth-first search and
   974 best-first search (where a heuristic function selects the best state to
   975 explore) return their outcomes as a sequence.  Isabelle provides such
   976 procedures in the form of tacticals.  Simpler procedures can be expressed
   977 directly using the basic tacticals {\tt THEN}, {\tt ORELSE} and {\tt REPEAT}:
   978 \begin{ttdescription}
   979 \item[$tac1$ THEN $tac2$] is a tactic for sequential composition.  Applied
   980 to a proof state, it returns all states reachable in two steps by applying
   981 $tac1$ followed by~$tac2$.
   982 
   983 \item[$tac1$ ORELSE $tac2$] is a choice tactic.  Applied to a state, it
   984 tries~$tac1$ and returns the result if non-empty; otherwise, it uses~$tac2$.
   985 
   986 \item[REPEAT $tac$] is a repetition tactic.  Applied to a state, it
   987 returns all states reachable by applying~$tac$ as long as possible --- until
   988 it would fail.  
   989 \end{ttdescription}
   990 For instance, this tactic repeatedly applies $tac1$ and~$tac2$, giving
   991 $tac1$ priority:
   992 \begin{center} \tt
   993 REPEAT($tac1$ ORELSE $tac2$)
   994 \end{center}
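
The following Standard ML fragment is a toy model of these ideas --- it is
{\em not\/} Isabelle's implementation, and the proof-state type, the sequence
operations and their names are ours --- but it shows how tactics that return
lazy sequences compose under {\tt THEN}, {\tt ORELSE} and {\tt REPEAT}:
\begin{verbatim}
(* A toy model of tactics and tacticals, not Isabelle's implementation.
   Proof states are left abstract; a tactic maps a state to a lazy
   sequence of possible successor states.                              *)
datatype 'a seq = Nil | Cons of 'a * (unit -> 'a seq)

(*concatenate two lazy sequences; the second is demanded only if needed*)
fun append (Nil,          sq) = sq ()
  | append (Cons (x, xf), sq) = Cons (x, fn () => append (xf (), sq))

(*apply f to every element and concatenate the resulting sequences*)
fun flatmap f Nil            = Nil
  | flatmap f (Cons (x, xf)) = append (f x, fn () => flatmap f (xf ()))

type 'st tactic = 'st -> 'st seq

infix THEN ORELSE

(*sequential composition: all states reachable by tac1 followed by tac2*)
fun (tac1 THEN tac2) st = flatmap tac2 (tac1 st)

(*choice: try tac1, falling back on tac2 only if tac1 fails (no states)*)
fun (tac1 ORELSE tac2) st = (case tac1 st of Nil => tac2 st | res => res)

(*repetition: apply tac as long as possible, collecting the final states*)
fun REPEAT tac st =
  (case tac st of
      Nil => Cons (st, fn () => Nil)
    | res => flatmap (REPEAT tac) res)
\end{verbatim}
Because the sequences are lazy, a compound tactic such as
{\tt REPEAT($tac1$ ORELSE $tac2$)} explores its search space only as far as
its results are demanded.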
   995 
   996 
   997 \section{Variations on resolution}
   998 In principle, resolution and proof by assumption suffice to prove all
   999 theorems.  However, specialized forms of resolution are helpful for working
  1000 with elimination rules.  Elim-resolution applies an elimination rule to an
  1001 assumption; destruct-resolution is similar, but applies a rule in a forward
  1002 style.
  1003 
  1004 The last part of the section shows how the techniques for proving theorems
  1005 can also serve to derive rules.
  1006 
  1007 \subsection{Elim-resolution}
  1008 \index{elim-resolution|bold}\index{assumptions!deleting}
  1009 
  1010 Consider proving the theorem $((R\disj R)\disj R)\disj R\imp R$.  By
  1011 $({\imp}I)$, we prove~$R$ from the assumption $((R\disj R)\disj R)\disj R$.
  1012 Applying $(\disj E)$ to this assumption yields two subgoals, one that
  1013 assumes~$R$ (and is therefore trivial) and one that assumes $(R\disj
  1014 R)\disj R$.  This subgoal admits another application of $(\disj E)$.  Since
  1015 natural deduction never discards assumptions, we eventually generate a
  1016 subgoal containing much that is redundant:
  1017 \[ \List{((R\disj R)\disj R)\disj R; (R\disj R)\disj R; R\disj R; R} \Imp R. \]
  1018 In general, using $(\disj E)$ on the assumption $P\disj Q$ creates two new
  1019 subgoals with the additional assumption $P$ or~$Q$.  In these subgoals,
  1020 $P\disj Q$ is redundant.  Other elimination rules behave
  1021 similarly.  In first-order logic, only universally quantified
  1022 assumptions are sometimes needed more than once --- say, to prove
  1023 $P(f(f(a)))$ from the assumptions $\forall x.P(x)\imp P(f(x))$ and~$P(a)$.
  1024 
  1025 Many logics can be formulated as sequent calculi that delete redundant
  1026 assumptions after use.  The rule $(\disj E)$ might become
  1027 \[ \infer[\disj\hbox{-left}]
  1028          {\Gamma,P\disj Q,\Delta \turn \Theta}
  1029          {\Gamma,P,\Delta \turn \Theta && \Gamma,Q,\Delta \turn \Theta}  \] 
  1030 In backward proof, a goal containing $P\disj Q$ on the left of the~$\turn$
  1031 (that is, as an assumption) splits into two subgoals, replacing $P\disj Q$
  1032 by $P$ or~$Q$.  But the sequent calculus, with its explicit handling of
  1033 assumptions, can be tiresome to use.
  1034 
  1035 Elim-resolution is Isabelle's way of getting sequent calculus behaviour
  1036 from natural deduction rules.  It lets an elimination rule consume an
  1037 assumption.  Elim-resolution combines two meta-theorems:
  1038 \begin{itemize}
  1039   \item a rule $\List{\psi@1; \ldots; \psi@m} \Imp \psi$
  1040   \item a proof state $\List{\phi@1; \ldots; \phi@n} \Imp \phi$
  1041 \end{itemize}
  1042 The rule must have at least one premise, thus $m>0$.  Write the rule's
  1043 lifted form as $\List{\psi'@1; \ldots; \psi'@m} \Imp \psi'$.  Suppose we
  1044 wish to change subgoal number~$i$.
  1045 
  1046 Ordinary resolution would attempt to reduce~$\phi@i$,
  1047 replacing subgoal~$i$ by $m$ new ones.  Elim-resolution tries
  1048 simultaneously to reduce~$\phi@i$ and to solve~$\psi'@1$ by assumption; it
  1049 returns a sequence of next states.  Each of these replaces subgoal~$i$ by
  1050 instances of $\psi'@2$, \ldots, $\psi'@m$ from which the selected
  1051 assumption has been deleted.  Suppose $\phi@i$ has the parameter~$x$ and
  1052 assumptions $\theta@1$,~\ldots,~$\theta@k$.  Then $\psi'@1$, the rule's first
  1053 premise after lifting, will be
  1054 \( \Forall x. \List{\theta@1; \ldots; \theta@k}\Imp \psi^{x}@1 \).
  1055 Elim-resolution tries to unify $\psi'\qeq\phi@i$ and
  1056 $\lambda x. \theta@j \qeq \lambda x. \psi^{x}@1$ simultaneously, for
  1057 $j=1$,~\ldots,~$k$. 
  1058 
  1059 Let us redo the example from~\S\ref{prop-proof}.  The elimination rule
  1060 is~$(\disj E)$,
  1061 \[ \List{\Var{P}\disj \Var{Q};\; \Var{P}\Imp \Var{R};\; \Var{Q}\Imp \Var{R}}
  1062       \Imp \Var{R}  \]
  1063 and the proof state is $(P\disj P\Imp P)\Imp (P\disj P\imp P)$.  The
  1064 lifted rule is
  1065 \[ \begin{array}{l@{}l}
  1066   \lbrakk\;& P\disj P \Imp \Var{P@1}\disj\Var{Q@1}; \\
  1067            & \List{P\disj P ;\; \Var{P@1}} \Imp \Var{R@1};    \\
  1068            & \List{P\disj P ;\; \Var{Q@1}} \Imp \Var{R@1}     \\
  1069   \rbrakk\;& \Imp (P\disj P \Imp \Var{R@1})
  1070   \end{array} 
  1071 \]
  1072 Unification takes the simultaneous equations
  1073 $P\disj P \qeq \Var{P@1}\disj\Var{Q@1}$ and $\Var{R@1} \qeq P$, yielding
  1074 $\Var{P@1}\equiv\Var{Q@1}\equiv\Var{R@1} \equiv P$.  The new proof state
  1075 is simply
  1076 \[ \List{P \Imp P;\; P \Imp P} \Imp (P\disj P\imp P). 
  1077 \]
  1078 Elim-resolution's simultaneous unification gives better control
  1079 than ordinary resolution.  Recall the substitution rule:
  1080 $$ \List{\Var{t}=\Var{u}; \Var{P}(\Var{t})} \Imp \Var{P}(\Var{u}) 
  1081 \eqno(subst) $$
   1082 Although $(subst)$ is unsuitable for ordinary resolution, since $\Var{P}(\Var{u})$
   1083 admits many unifiers, it works well with elim-resolution.  It deletes some
   1084 assumption of the form $x=y$ and replaces every~$y$ by~$x$ in the subgoal
  1085 formula.  The simultaneous unification instantiates $\Var{u}$ to~$y$; if
  1086 $y$ is not an unknown, then $\Var{P}(y)$ can easily be unified with another
  1087 formula.  
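
For instance, suppose a subgoal has the assumption $a=b$ and the
conclusion $P(b)$:
\[ \List{a=b;\; P(a)} \Imp P(b) \]
Elim-resolution with $(subst)$ unifies $a=b \qeq \Var{t}=\Var{u}$ and
$P(b) \qeq \Var{P}(\Var{u})$ simultaneously; with the obvious unifier
$\Var{P}\equiv\lambda z.P(z)$, the new subgoal demands an instance of the
rule's other premise, $\Var{P}(\Var{t})$, and the equality assumption is
deleted:
\[ P(a) \Imp P(a), \]
which is proved at once by assumption.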
  1088 
  1089 In logical parlance, the premise containing the connective to be eliminated
  1090 is called the \bfindex{major premise}.  Elim-resolution expects the major
  1091 premise to come first.  The order of the premises is significant in
  1092 Isabelle.
  1093 
  1094 \subsection{Destruction rules} \label{destruct}
  1095 \index{rules!destruction}\index{rules!elimination}
  1096 \index{forward proof}
  1097 
  1098 Looking back to Fig.\ts\ref{fol-fig}, notice that there are two kinds of
  1099 elimination rule.  The rules $({\conj}E1)$, $({\conj}E2)$, $({\imp}E)$ and
  1100 $({\forall}E)$ extract the conclusion from the major premise.  In Isabelle
  1101 parlance, such rules are called {\bf destruction rules}; they are readable
  1102 and easy to use in forward proof.  The rules $({\disj}E)$, $({\bot}E)$ and
  1103 $({\exists}E)$ work by discharging assumptions; they support backward proof
  1104 in a style reminiscent of the sequent calculus.
  1105 
  1106 The latter style is the most general form of elimination rule.  In natural
  1107 deduction, there is no way to recast $({\disj}E)$, $({\bot}E)$ or
  1108 $({\exists}E)$ as destruction rules.  But we can write general elimination
  1109 rules for $\conj$, $\imp$ and~$\forall$:
  1110 \[
  1111 \infer{R}{P\conj Q & \infer*{R}{[P,Q]}} \qquad
  1112 \infer{R}{P\imp Q & P & \infer*{R}{[Q]}}  \qquad
  1113 \infer{Q}{\forall x.P & \infer*{Q}{[P[t/x]]}} 
  1114 \]
  1115 Because they are concise, destruction rules are simpler to derive than the
  1116 corresponding elimination rules.  To facilitate their use in backward
  1117 proof, Isabelle provides a means of transforming a destruction rule such as
  1118 \[ \infer[\quad\hbox{to the elimination rule}\quad]{Q}{P@1 & \ldots & P@m} 
  1119    \infer{R.}{P@1 & \ldots & P@m & \infer*{R}{[Q]}} 
  1120 \]
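Applying this transformation to $({\imp}E)$, for example, yields
\[ \List{P\imp Q;\; P;\; Q\Imp R} \Imp R, \]
which is just the general elimination rule for $\imp$ displayed above,
expressed in the meta-logic.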
  1121 {\bf Destruct-resolution}\index{destruct-resolution} combines this
  1122 transformation with elim-resolution.  It applies a destruction rule to some
  1123 assumption of a subgoal.  Given the rule above, it replaces the
  1124 assumption~$P@1$ by~$Q$, with new subgoals of showing instances of $P@2$,
  1125 \ldots,~$P@m$.  Destruct-resolution works forward from a subgoal's
  1126 assumptions.  Ordinary resolution performs forward reasoning from theorems,
  1127 as illustrated in~\S\ref{joining}.
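
As a concrete case, destruct-resolution with $({\conj}E1)$ applied to an
assumption $P\conj Q$ replaces that assumption by~$P$ and, since the rule
has no further premises, creates no additional subgoals.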
  1128 
  1129 
  1130 \subsection{Deriving rules by resolution}  \label{deriving}
  1131 \index{rules!derived|bold}\index{meta-assumptions!syntax of}
  1132 The meta-logic, itself a form of the predicate calculus, is defined by a
  1133 system of natural deduction rules.  Each theorem may depend upon
  1134 meta-assumptions.  The theorem that~$\phi$ follows from the assumptions
  1135 $\phi@1$, \ldots, $\phi@n$ is written
  1136 \[ \phi \quad [\phi@1,\ldots,\phi@n]. \]
  1137 A more conventional notation might be $\phi@1,\ldots,\phi@n \turn \phi$,
  1138 but Isabelle's notation is more readable with large formulae.
  1139 
  1140 Meta-level natural deduction provides a convenient mechanism for deriving
  1141 new object-level rules.  To derive the rule
  1142 \[ \infer{\phi,}{\theta@1 & \ldots & \theta@k} \]
  1143 assume the premises $\theta@1$,~\ldots,~$\theta@k$ at the
  1144 meta-level.  Then prove $\phi$, possibly using these assumptions.
  1145 Starting with a proof state $\phi\Imp\phi$, assumptions may accumulate,
  1146 reaching a final state such as
  1147 \[ \phi \quad [\theta@1,\ldots,\theta@k]. \]
  1148 The meta-rule for $\Imp$ introduction discharges an assumption.
  1149 Discharging them in the order $\theta@k,\ldots,\theta@1$ yields the
  1150 meta-theorem $\List{\theta@1; \ldots; \theta@k} \Imp \phi$, with no
  1151 assumptions.  This represents the desired rule.
  1152 Let us derive the general $\conj$ elimination rule:
  1153 $$ \infer{R}{P\conj Q & \infer*{R}{[P,Q]}}  \eqno(\conj E) $$
  1154 We assume $P\conj Q$ and $\List{P;Q}\Imp R$, and commence backward proof in
  1155 the state $R\Imp R$.  Resolving this with the second assumption yields the
  1156 state 
  1157 \[ \phantom{\List{P\conj Q;\; P\conj Q}}
  1158    \llap{$\List{P;Q}$}\Imp R \quad [\,\List{P;Q}\Imp R\,]. \]
  1159 Resolving subgoals~1 and~2 with~$({\conj}E1)$ and~$({\conj}E2)$,
  1160 respectively, yields the state
  1161 \[ \List{P\conj \Var{Q@1};\; \Var{P@2}\conj Q}\Imp R 
  1162    \quad [\,\List{P;Q}\Imp R\,]. 
  1163 \]
  1164 The unknowns $\Var{Q@1}$ and~$\Var{P@2}$ arise from unconstrained
  1165 subformulae in the premises of~$({\conj}E1)$ and~$({\conj}E2)$.  Resolving
  1166 both subgoals with the assumption $P\conj Q$ instantiates the unknowns to yield
  1167 \[ R \quad [\, \List{P;Q}\Imp R, P\conj Q \,]. \]
  1168 The proof may use the meta-assumptions in any order, and as often as
  1169 necessary; when finished, we discharge them in the correct order to
  1170 obtain the desired form:
   1171 \[ \List{P\conj Q;\; \List{P;Q}\Imp R} \Imp R. \]
  1172 We have derived the rule using free variables, which prevents their
  1173 premature instantiation during the proof; we may now replace them by
  1174 schematic variables.
  1175 
  1176 \begin{warn}
   1177   Schematic variables are not allowed in meta-assumptions: meta-assumptions must
   1178   remain fixed throughout a proof, while schematic variables exist to be instantiated.
  1179 \end{warn}
  1180