%% $Id$
\part{Advanced methods}
Before continuing, it might be wise to try some of your own examples in
Isabelle, reinforcing your knowledge of the basic functions.
This paper is merely an introduction to Isabelle.  Two other documents
exist:
\begin{itemize}
  \item {\em The Isabelle Reference Manual\/} contains information about
most of the facilities of Isabelle, apart from particular object-logics.

  \item {\em Isabelle's Object-Logics\/} describes the various logics
distributed with Isabelle.  It also explains how to define new logics in
Isabelle.
\end{itemize}
Look through {\em Isabelle's Object-Logics\/} and try proving some simple
theorems.  You probably should begin with first-order logic ({\tt FOL}
or~{\tt LK}).  Try working some of the examples provided, and others from
the literature.  Set theory~({\tt ZF}) and Constructive Type Theory~({\tt
  CTT}) form a richer world for mathematical reasoning and, again, many
examples are in the literature.  Higher-order logic~({\tt HOL}) is
Isabelle's most sophisticated logic, because its types and functions are
identified with those of the meta-logic; this may cause difficulties for
beginners.

Choose a logic that you already understand.  Isabelle is a proof
tool, not a teaching tool; if you do not know how to do a particular proof
on paper, then you certainly will not be able to do it on the machine.
Even experienced users plan large proofs on paper.

We have covered only the bare essentials of Isabelle, but enough to perform
substantial proofs.  By occasionally dipping into the {\em Reference
Manual}, you can learn additional tactics, subgoal commands and tacticals.
Isabelle's simplifier and classical theorem prover are
difficult to learn, and can be ignored at first.


\section{Deriving rules in Isabelle}
\index{rules!derived}
A mathematical development goes through a progression of stages.  Each
stage defines some concepts and derives rules about them.  We shall see how
to derive rules, perhaps involving definitions, using Isabelle.  The
following section will explain how to declare types, constants, axioms and
definitions.


\subsection{Deriving a rule using tactics} \label{deriving-example}
\index{examples!of deriving rules}
The subgoal module supports the derivation of rules.  The \ttindex{goal}
command, when supplied a goal of the form $\List{\theta@1; \ldots;
\theta@k} \Imp \phi$, creates $\phi\Imp\phi$ as the initial proof state and
returns a list consisting of the theorems
${\theta@i\;[\theta@i]}$, for $i=1$, \ldots,~$k$.  These assumptions are
also recorded internally, allowing \ttindex{result} to discharge them in the
original order.

Let us derive $\conj$ elimination~(\S\ref{deriving}) using Isabelle.
Until now, calling \ttindex{goal} has returned an empty list, which we have
thrown away.  In this example, the list contains the two premises of the
rule.  We bind them to the \ML\ identifiers {\tt major} and {\tt
minor}:\footnote{Some ML compilers will print a message such as {\em
binding not exhaustive}.  This warns that {\tt goal} must return a
2-element list.  Otherwise, the pattern-match will fail; ML will
raise exception \ttindex{Match}.}
\begin{ttbox}
val [major,minor] = goal FOL.thy
    "[| P&Q;  [| P; Q |] ==> R |] ==> R";
{\out Level 0}
{\out R}
{\out  1. R}
{\out val major = "P & Q  [P & Q]" : thm}
{\out val minor = "[| P; Q |] ==> R  [[| P; Q |] ==> R]" : thm}
\end{ttbox}
Look at the minor premise, recalling that meta-level assumptions are
shown in brackets.  Using {\tt minor}, we reduce $R$ to the subgoals
$P$ and~$Q$:
\begin{ttbox}
by (resolve_tac [minor] 1);
{\out Level 1}
{\out R}
{\out  1. P}
{\out  2. Q}
\end{ttbox}
Deviating from~\S\ref{deriving}, we apply $({\conj}E1)$ forwards from the
assumption $P\conj Q$ to obtain the theorem~$P\;[P\conj Q]$.
\begin{ttbox}
major RS conjunct1;
{\out val it = "P  [P & Q]" : thm}
\ttbreak
by (resolve_tac [major RS conjunct1] 1);
{\out Level 2}
{\out R}
{\out  1. Q}
\end{ttbox}
Similarly, we solve the subgoal involving~$Q$.
\begin{ttbox}
major RS conjunct2;
{\out val it = "Q  [P & Q]" : thm}
by (resolve_tac [major RS conjunct2] 1);
{\out Level 3}
{\out R}
{\out No subgoals!}
\end{ttbox}
Calling \ttindex{topthm} returns the current proof state as a theorem.
Note that it contains assumptions.  Calling \ttindex{result} discharges the
assumptions --- both occurrences of $P\conj Q$ are discharged as one ---
and makes the variables schematic.
\begin{ttbox}
topthm();
{\out val it = "R  [P & Q, P & Q, [| P; Q |] ==> R]" : thm}
val conjE = result();
{\out val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm}
\end{ttbox}


\subsection{Definitions and derived rules} \label{definitions}
\index{rules!derived}
\index{Isabelle!definitions in}
\index{definitions!reasoning about|bold}
Definitions are expressed as meta-level equalities.  Let us define negation
and the if-and-only-if connective:
\begin{eqnarray*}
  \neg \Var{P}          & \equiv & \Var{P}\imp\bot \\
  \Var{P}\bimp \Var{Q}  & \equiv & 
                (\Var{P}\imp \Var{Q}) \conj (\Var{Q}\imp \Var{P})
\end{eqnarray*}
\index{rewriting!meta-level|bold}
\index{unfolding|bold}\index{folding|bold}
Isabelle permits {\bf meta-level rewriting} using definitions such as
these.  {\bf Unfolding} replaces every instance
of $\neg \Var{P}$ by the corresponding instance of $\Var{P}\imp\bot$.  For
example, $\forall x.\neg (P(x)\conj \neg R(x,0))$ unfolds to
\[ \forall x.(P(x)\conj (R(x,0)\imp\bot))\imp\bot.  \]
{\bf Folding} a definition replaces occurrences of the right-hand side by
the left.  The occurrences need not be free in the entire formula.

\begin{warn}
Isabelle does not distinguish sensible definitions, like $1\equiv Suc(0)$, from
equations like $1\equiv Suc(1)$.  However, meta-rewriting fails for
equations like ${f(\Var{x})\equiv g(\Var{x},\Var{y})}$: all variables on
the right-hand side must also be present on the left.
\index{rewriting!meta-level}
\end{warn}
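
\medskip
In a theory file (see the next section), such definitions are written in
the {\tt rules} section as meta-level equalities.  The following lines are
only a sketch of how the two definitions above might be declared; the name
{\tt iff_def} and its concrete syntax are chosen for illustration and need
not agree exactly with the distributed version of {\tt FOL}:
\begin{ttbox}
rules   not_def  "~P == P --> False"
        iff_def  "P<->Q == (P-->Q) & (Q-->P)"
\end{ttbox}
The definition of negation is indeed part of {\tt FOL}; it is bound to the
\ML\ identifier {\tt not_def} and used below.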


When you define new concepts, you should derive rules asserting their
abstract properties, and then forget their definitions.  This supports
modularity: if you later change the definitions, without affecting their
abstract properties, then most of your proofs will carry through without
change.  Indiscriminate unfolding makes a subgoal grow exponentially,
becoming unreadable.

Taking this point of view, Isabelle does not unfold definitions
automatically during proofs.  Rewriting must be explicit and selective.
Isabelle provides tactics and meta-rules for rewriting, and a version of
the {\tt goal} command that unfolds the conclusion and premises of the rule
being derived.

For example, the intuitionistic definition of negation given above may seem
peculiar.  Using Isabelle, we shall derive pleasanter negation rules:
\[  \infer[({\neg}I)]{\neg P}{\infer*{\bot}{[P]}}   \qquad
    \infer[({\neg}E)]{Q}{\neg P & P}  \]
This requires proving the following formulae:
$$ (P\Imp\bot)    \Imp \neg P   \eqno(\neg I)$$
$$ \List{\neg P; P} \Imp Q.       \eqno(\neg E)$$


\subsubsection{Deriving the introduction rule}
To derive $(\neg I)$, we may call \ttindex{goal} with the appropriate
formula.  Again, {\tt goal} returns a list consisting of the rule's
premises.  We bind this list, which contains the one element $P\Imp\bot$,
to the \ML\ identifier {\tt prems}.
\begin{ttbox}
val prems = goal FOL.thy "(P ==> False) ==> ~P";
{\out Level 0}
{\out ~P}
{\out  1. ~P}
{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
\end{ttbox}
Calling \ttindex{rewrite_goals_tac} with \ttindex{not_def}, which is the
definition of negation, unfolds that definition in the subgoals.  It leaves
the main goal alone.
\begin{ttbox}
not_def;
{\out val it = "~?P == ?P --> False" : thm}
by (rewrite_goals_tac [not_def]);
{\out Level 1}
{\out ~P}
{\out  1. P --> False}
\end{ttbox}
Using \ttindex{impI} and the premise, we reduce subgoal~1 to a triviality:
\begin{ttbox}
by (resolve_tac [impI] 1);
{\out Level 2}
{\out ~P}
{\out  1. P ==> False}
\ttbreak
by (resolve_tac prems 1);
{\out Level 3}
{\out ~P}
{\out  1. P ==> P}
\end{ttbox}
The rest of the proof is routine.
\begin{ttbox}
by (assume_tac 1);
{\out Level 4}
{\out ~P}
{\out No subgoals!}
val notI = result();
{\out val notI = "(?P ==> False) ==> ~?P" : thm}
\end{ttbox}
\indexbold{*notI}

\medskip
There is a simpler way of conducting this proof.  The \ttindex{goalw}
command starts a backward proof, as does \ttindex{goal}, but it also
unfolds definitions:
\begin{ttbox}
val prems = goalw FOL.thy [not_def]
    "(P ==> False) ==> ~P";
{\out Level 0}
{\out ~P}
{\out  1. P --> False}
{\out val prems = ["P ==> False  [P ==> False]"] : thm list}
\end{ttbox}
The proof continues as above, but without calling \ttindex{rewrite_goals_tac}.
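For completeness, here are the remaining commands; they are the same as
before, and the output, which differs only in the level numbers, is
omitted:
\begin{ttbox}
by (resolve_tac [impI] 1);
by (resolve_tac prems 1);
by (assume_tac 1);
val notI = result();
\end{ttbox}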


\subsubsection{Deriving the elimination rule}
Let us derive $(\neg E)$.  The proof follows that of~{\tt conjE}
(\S\ref{deriving-example}), with an additional step to unfold negation in
the major premise.  Although the {\tt goalw} command is best for this, let
us try~\ttindex{goal}.  As usual, we bind the premises to \ML\ identifiers.
We then apply \ttindex{FalseE}, which stands for~$(\bot E)$:
\begin{ttbox}
val [major,minor] = goal FOL.thy "[| ~P;  P |] ==> R";
{\out Level 0}
{\out R}
{\out  1. R}
{\out val major = "~ P  [~ P]" : thm}
{\out val minor = "P  [P]" : thm}
\ttbreak
by (resolve_tac [FalseE] 1);
{\out Level 1}
{\out R}
{\out  1. False}
\ttbreak
by (resolve_tac [mp] 1);
{\out Level 2}
{\out R}
{\out  1. ?P1 --> False}
{\out  2. ?P1}
\end{ttbox}
For subgoal~1, we transform the major premise from~$\neg P$
to~${P\imp\bot}$.  The function \ttindex{rewrite_rule}, given a list of
definitions, unfolds them in a theorem.  Rewriting does {\bf not}
affect the theorem's hypothesis, which remains~$\neg P$:
\begin{ttbox}
rewrite_rule [not_def] major;
{\out val it = "P --> False  [~P]" : thm}
by (resolve_tac [it] 1);
{\out Level 3}
{\out R}
{\out  1. P}
\end{ttbox}
Now {\tt?P1} has changed to~{\tt P}; we need only use the minor premise:
\begin{ttbox}
by (resolve_tac [minor] 1);
{\out Level 4}
{\out R}
{\out No subgoals!}
val notE = result();
{\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
\end{ttbox}
\indexbold{*notE}

\medskip
Again, there is a simpler way of conducting this proof.  The
\ttindex{goalw} command unfolds definitions in the premises as well
as the conclusion:
\begin{ttbox}
val [major,minor] = goalw FOL.thy [not_def]
    "[| ~P;  P |] ==> R";
{\out val major = "P --> False  [~ P]" : thm}
{\out val minor = "P  [P]" : thm}
\end{ttbox}
Observe the difference in {\tt major}; the premises are now {\bf unfolded}
and we need not call~\ttindex{rewrite_rule}.  Incidentally, the four calls
to \ttindex{resolve_tac} above can be collapsed to one, with the help
of~\ttindex{RS}\@:
\begin{ttbox}
minor RS (major RS mp RS FalseE);
{\out val it = "?P  [P, ~P]" : thm}
by (resolve_tac [it] 1);
{\out Level 1}
{\out R}
{\out No subgoals!}
\end{ttbox}


\medskip Finally, here is a trick that is sometimes useful.  If the goal
has an outermost meta-quantifier, then \ttindex{goal} and \ttindex{goalw}
do not return the rule's premises in the list of theorems.  Instead, the
premises become assumptions in subgoal~1:
\begin{ttbox}
goalw FOL.thy [not_def] "!!P R. [| ~P;  P |] ==> R";
{\out Level 0}
{\out !!P R. [| ~ P; P |] ==> R}
{\out  1. !!P R. [| P --> False; P |] ==> R}
val it = [] : thm list
\end{ttbox}
The proof continues as before.  But instead of referring to \ML\
identifiers, we refer to assumptions using \ttindex{eresolve_tac} or
\ttindex{assume_tac}:
\begin{ttbox}
by (resolve_tac [FalseE] 1);
{\out Level 1}
{\out !!P R. [| ~ P; P |] ==> R}
{\out  1. !!P R. [| P --> False; P |] ==> False}
\ttbreak
by (eresolve_tac [mp] 1);
{\out Level 2}
{\out !!P R. [| ~ P; P |] ==> R}
{\out  1. !!P R. P ==> P}
\ttbreak
by (assume_tac 1);
{\out Level 3}
{\out !!P R. [| ~ P; P |] ==> R}
{\out No subgoals!}
\end{ttbox}
Calling \ttindex{result} strips the meta-quantifiers, so the resulting
theorem is the same as before.
\begin{ttbox}
val notE = result();
{\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
\end{ttbox}
Do not use the {\tt!!}\ trick if the premises contain meta-level
connectives, because \ttindex{eresolve_tac} and \ttindex{assume_tac} would
not be able to handle the resulting assumptions.  The trick is not suitable
for deriving the introduction rule~$(\neg I)$.


\section{Defining theories}
\index{theories!defining|(}
Isabelle makes no distinction between simple extensions of a logic --- like
defining a type~$bool$ with constants~$true$ and~$false$ --- and defining
an entire logic.  A theory definition has the form
\begin{ttbox}
\(T\) = \(S@1\) + \(\cdots\) + \(S@n\) +
classes      {\it class declarations}
default      {\it sort}
types        {\it type declarations}
arities      {\it arity declarations}
consts       {\it constant declarations}
rules        {\it rule declarations}
translations {\it translation declarations}
end
ML           {\it ML code}
\end{ttbox}
This declares the theory $T$ to extend the existing theories
$S@1$,~\ldots,~$S@n$.  It may declare new classes, types, arities
(overloadings of existing types), constants and rules; it can specify the
default sort for type variables.  A constant declaration can specify an
associated concrete syntax.  The translations section specifies rewrite
rules on abstract syntax trees, for defining notations and abbreviations.
The {\ML} section contains code to perform arbitrary syntactic
transformations.  The main declaration forms are discussed below; see {\em
  Isabelle's Object-Logics} for full details and examples.

All the declaration parts can be omitted.  In the simplest case, $T$ is
just the union of $S@1$,~\ldots,~$S@n$.  New theories always extend one
or more other theories, inheriting their types, constants, syntax, etc.
The theory \ttindexbold{Pure} contains nothing but Isabelle's meta-logic.

Each theory definition must reside in a separate file, whose name is
determined as follows: the theory name, say {\tt ListFn}, is converted to
lower case and {\tt.thy} is appended, yielding the filename {\tt
  listfn.thy}.  Isabelle uses this convention to locate the file containing
a given theory; \ttindexbold{use_thy} automatically loads a theory's
parents before loading the theory itself.

Calling \ttindexbold{use_thy}~{\tt"}{\it T\/}{\tt"} reads a theory from the
file {\it t}{\tt.thy}, writes the corresponding {\ML} code to the file
{\tt.}{\it t}{\tt.thy.ML}, reads the latter file, and deletes it if no errors
occurred.  This declares the {\ML} structure~$T$, which contains a component
{\tt thy} denoting the new theory, a component for each rule, and everything
declared in {\it ML code}.
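For instance, if the file {\tt listfn.thy} contains the hypothetical
theory {\tt ListFn} mentioned above, then typing
\begin{ttbox}
use_thy "ListFn";
\end{ttbox}
loads {\tt ListFn}, together with any of its parents that are not already
loaded, and declares the {\ML} structure {\tt ListFn}.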

Errors may arise during the translation to {\ML} (say, a misspelled keyword)
or during creation of the new theory (say, a type error in a rule).  But if
all goes well, {\tt use_thy} will finally read the file {\it t}{\tt.ML}, if
it exists.  This file typically begins with the {\ML} declaration {\tt
open}~$T$ and contains proofs that refer to the components of~$T$.
Theories can be defined directly by issuing {\ML} declarations to Isabelle,
but the calling sequences are extremely cumbersome.

If theory~$T$ is later redeclared in order to delete an incorrect rule,
bindings to the old rule may persist.  Isabelle ensures that the old and
new versions of~$T$ are not involved in the same proof.  Attempting to
combine different versions of~$T$ yields the fatal error
\begin{ttbox} 
Attempt to merge different versions of theory: \(T\)
\end{ttbox}

\subsection{Declaring constants and rules}
\indexbold{constants!declaring}\indexbold{rules!declaring}
Most theories simply declare constants and some rules.  The {\bf constant
declaration part} has the form
\begin{ttbox}
consts  \(c@1\) :: "\(\tau@1\)"
        \vdots
        \(c@n\) :: "\(\tau@n\)"
\end{ttbox}
where $c@1$, \ldots, $c@n$ are constants and $\tau@1$, \ldots, $\tau@n$ are
types.  Each type {\em must\/} be enclosed in quotation marks.  Each
constant must be enclosed in quotation marks unless it is a valid
identifier.  To declare $c@1$, \ldots, $c@n$ as constants of type $\tau$,
the $n$ declarations may be abbreviated to a single line:
\begin{ttbox}
        \(c@1\), \ldots, \(c@n\) :: "\(\tau\)"
\end{ttbox}
The {\bf rule declaration part} has the form
\begin{ttbox}
rules   \(id@1\) "\(rule@1\)"
        \vdots
        \(id@n\) "\(rule@n\)"
\end{ttbox}
where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
$rule@n$ are expressions of type~$prop$.  {\bf Definitions} are rules of
the form $t\equiv u$.  Each rule {\em must\/} be enclosed in quotation marks.

\index{examples!of theories}
This theory extends first-order logic with two constants {\em nand} and
{\em xor}, and two rules defining them:
\begin{ttbox} 
Gate = FOL +
consts  nand,xor :: "[o,o] => o"
rules   nand_def "nand(P,Q) == ~(P & Q)"
        xor_def  "xor(P,Q)  == P & ~Q | ~P & Q"
end
\end{ttbox}
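Once {\tt Gate} is loaded, rules about {\em nand\/} and {\em xor\/} can be
derived by unfolding their definitions, just as in \S\ref{definitions}.
The following commands are only a sketch of such a derivation (the output
is omitted and the proof has not been checked here): {\tt goalw} unfolds
{\tt nand_def} in the premise, and the premise then closes the only
subgoal.  The name {\tt nandD} is arbitrary.
\begin{ttbox}
val [prem] = goalw Gate.thy [nand_def] "nand(P,Q) ==> ~(P & Q)";
by (resolve_tac [prem] 1);
val nandD = result();
\end{ttbox}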


\subsection{Declaring type constructors}
\indexbold{type constructors!declaring}\indexbold{arities!declaring}
Types are composed of type variables and {\bf type constructors}.  Each
type constructor has a fixed number of argument places.  For example,
$list$ is a 1-place type constructor and $nat$ is a 0-place type
constructor.

The {\bf type declaration part} has the form
\begin{ttbox}
types   \(id@1\) \(k@1\)
        \vdots
        \(id@n\) \(k@n\)
\end{ttbox}
where $id@1$, \ldots, $id@n$ are identifiers and $k@1$, \ldots, $k@n$ are
natural numbers.  It declares each $id@i$ as a type constructor with $k@i$
argument places.

The {\bf arity declaration part} has the form
\begin{ttbox}
arities \(tycon@1\) :: \(arity@1\)
        \vdots
        \(tycon@n\) :: \(arity@n\)
\end{ttbox}
where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
$arity@n$ are arities.  Arity declarations add arities to existing
types; they complement type declarations.

In the simplest case, for a 0-place type constructor, an arity is simply
the type's class.  Let us declare a type~$bool$ of class $term$, with
constants $tt$ and~$ff$:\footnote{In first-order logic, booleans are
distinct from formulae, which have type $o::logic$.}
\index{examples!of theories}
\begin{ttbox} 
Bool = FOL +
types   bool 0
arities bool    :: term
consts  tt,ff   :: "bool"
end
\end{ttbox}
In the general case, type constructors take arguments.  Each type
constructor has an {\bf arity} with respect to
classes~(\S\ref{polymorphic}).  A $k$-place type constructor may have
arities of the form $(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@k$ are sorts
and $c$ is a class.  Each sort specifies a type argument; it has the form
$\{c@1,\ldots,c@m\}$, where $c@1$, \dots,~$c@m$ are classes.  Mostly we
deal with singleton sorts, and may abbreviate them by dropping the braces.
The arity declaration $list{::}(term)term$ is short for
$list{::}(\{term\})term$.

A type constructor may be overloaded (subject to certain conditions) by
appearing in several arity declarations.  For instance, the built-in type
constructor~$\To$ has the arity $(logic,logic)logic$; in higher-order
logic, it is declared also to have arity $(term,term)term$.

Theory {\tt List} declares the 1-place type constructor $list$, gives
it arity $list{::}(term)term$, and declares constants $Nil$ and $Cons$ with
polymorphic types:
\index{examples!of theories}
\begin{ttbox} 
List = FOL +
types   list 1
arities list    :: (term)term
consts  Nil     :: "'a list"
        Cons    :: "['a, 'a list] => 'a list" 
end
\end{ttbox}
Multiple type and arity declarations may be abbreviated to a single line:
\begin{ttbox}
types   \(id@1\), \ldots, \(id@n\) \(k\)
arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
\end{ttbox}

\begin{warn}
Arity declarations resemble constant declarations, but there are {\it no\/}
quotation marks!  Types and rules must be quoted because the theory
translator passes them verbatim to the {\ML} output file.
\end{warn}

\subsection{Infixes and Mixfixes}
\indexbold{infix operators}\index{examples!of theories}
The constant declaration part of the theory
\begin{ttbox} 
Gate2 = FOL +
consts  "~&"     :: "[o,o] => o"         (infixl 35)
        "#"      :: "[o,o] => o"         (infixl 30)
rules   nand_def "P ~& Q == ~(P & Q)"    
        xor_def  "P # Q  == P & ~Q | ~P & Q"
end
\end{ttbox}
declares two left-associating infix operators: $\nand$ of precedence~35 and
$\xor$ of precedence~30.  Hence $P \xor Q \xor R$ is parsed as $(P\xor
Q) \xor R$ and $P \xor Q \nand R$ as $P \xor (Q \nand R)$.  Note the
quotation marks in \verb|"~&"| and \verb|"#"|.

The constants \hbox{\verb|op ~&|} and \hbox{\verb|op #|} are declared
automatically, just as in \ML.  Hence you may write propositions like
\verb|op #(True) == op ~&(True)|, which asserts that the functions $\lambda
Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.

\indexbold{mixfix operators}
{\bf Mixfix} operators may have arbitrary context-free syntaxes.  For example
\begin{ttbox} 
    If :: "[o,o,o] => o"       ("if _ then _ else _")
\end{ttbox}
declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax
$if~P~then~Q~else~R$ instead of $If(P,Q,R)$.  Underscores denote argument
positions.  Pretty-printing information can be specified in order to
improve the layout of formulae with mixfix operations.  For details, see
{\em Isabelle's Object-Logics}.

Mixfix declarations can be annotated with precedences, just like
infixes.  The example above is just a shorthand for
\begin{ttbox} 
    If :: "[o,o,o] => o"       ("if _ then _ else _" [0,0,0] 1000)
\end{ttbox}
The numeric components determine precedences.  The list of integers
defines, for each argument position, the minimal precedence an expression
at that position must have.  The final integer is the precedence of the
construct itself.  In the example above, any argument expression is
acceptable because precedences are non-negative, and conditionals may
appear everywhere because 1000 is the highest precedence.  On the other
hand,
\begin{ttbox} 
    If :: "[o,o,o] => o"       ("if _ then _ else _" [100,0,0] 99)
\end{ttbox}
defines concrete syntax for a
conditional whose first argument cannot have the form $if~P~then~Q~else~R$
because it must have a precedence of at least~100.  Since expressions put in
parentheses have maximal precedence, we may of course write 
\begin{quote}
\it  if (if P then Q else R) then S else T
\end{quote}
Conditional expressions can also be written using the constant {\tt If}.

Binary type constructors, like products and sums, may also be declared as
infixes.  The type declaration below introduces a type constructor~$*$ with
infix notation $\alpha*\beta$, together with the mixfix notation
${<}\_,\_{>}$ for pairs.  
\index{examples!of theories}
\begin{ttbox}
Prod = FOL +
types   "*" 2                                 (infixl 20)
arities "*"     :: (term,term)term
consts  fst     :: "'a * 'b => 'a"
        snd     :: "'a * 'b => 'b"
        Pair    :: "['a,'b] => 'a * 'b"       ("(1<_,/_>)")
rules   fst     "fst(<a,b>) = a"
        snd     "snd(<a,b>) = b"
end
\end{ttbox}

\begin{warn}
The name of the type constructor is~{\tt *} and not {\tt op~*}, as it would
be in the case of an infix constant.  Only infix type constructors can have
symbolic names like~{\tt *}.  There is no general mixfix syntax for types.
\end{warn}


\subsection{Overloading}
\index{overloading}\index{examples!of theories}
The {\bf class declaration part} has the form
\begin{ttbox}
classes \(id@1\) < \(c@1\)
        \vdots
        \(id@n\) < \(c@n\)
\end{ttbox}
where $id@1$, \ldots, $id@n$ are identifiers and $c@1$, \ldots, $c@n$ are
existing classes.  It declares each $id@i$ as a new class, a subclass
of~$c@i$.  In the general case, an identifier may be declared to be a
subclass of $k$ existing classes:
\begin{ttbox}
        \(id\) < \(c@1\), \ldots, \(c@k\)
\end{ttbox}
Type classes allow constants to be overloaded~(\S\ref{polymorphic}).  As an
example, we define the class $arith$ of ``arithmetic'' types with the
constants ${+} :: [\alpha,\alpha]\To \alpha$ and $0,1 :: \alpha$, for
$\alpha{::}arith$.  We introduce $arith$ as a subclass of $term$ and add
the three polymorphic constants of this class.
\index{examples!of theories}
\begin{ttbox}
Arith = FOL +
classes arith < term
consts  "0"     :: "'a::arith"                  ("0")
        "1"     :: "'a::arith"                  ("1")
        "+"     :: "['a::arith,'a] => 'a"       (infixl 60)
end
\end{ttbox}
No rules are declared for these constants: we merely introduce their
names without specifying properties.  On the other hand, classes
with rules make it possible to prove {\bf generic} theorems.  Such
theorems hold for all instances, that is, for all types in that class.

We can now obtain distinct versions of the constants of $arith$ by
declaring certain types to be of class $arith$.  For example, let us
declare the 0-place type constructors $bool$ and $nat$:
\index{examples!of theories}
\begin{ttbox}
BoolNat = Arith +
types   bool,nat    0
arities bool,nat    :: arith
consts  Suc         :: "nat=>nat"
rules   add0        "0 + n = n::nat"
        addS        "Suc(m)+n = Suc(m+n)"
        nat1        "1 = Suc(0)"
        or0l        "0 + x = x::bool"
        or0r        "x + 0 = x::bool"
        or1l        "1 + x = 1::bool"
        or1r        "x + 1 = 1::bool"
end
\end{ttbox}
Because $nat$ and $bool$ have class $arith$, we can use $0$, $1$ and $+$ at
either type.  The type constraints in the axioms are vital.  Without
constraints, the $x$ in $1+x = 1$ would have type $\alpha{::}arith$
and the axiom would hold for any type of class $arith$.  This would
collapse $nat$:
\[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
The class $arith$ as defined above is more specific than necessary.  Many
types come with a binary operation and identity~(0).  On lists,
$+$ could be concatenation and 0 the empty list --- but what is 1?  Hence it
may be better to define $+$ and 0 on $arith$ and introduce a separate
class, say $k$, containing~1.  Should $k$ be a subclass of $term$ or of
$arith$?  This depends on the structure of your theories; the design of an
appropriate class hierarchy may require some experimentation.
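For instance, one possible arrangement keeps $0$ and $+$ in $arith$ and
puts 1 into the separate class~$k$.  The following theory is only a sketch
of this design: the theory name is invented, $k$ is arbitrarily made a
subclass of $term$ rather than of $arith$, and no rules are stated.
\begin{ttbox}
Arith0 = FOL +
classes arith < term
        k     < term
consts  "0"     :: "'a::arith"                  ("0")
        "+"     :: "['a::arith,'a] => 'a"       (infixl 60)
        "1"     :: "'a::k"                      ("1")
end
\end{ttbox}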

We will now work through a small example of formalized mathematics
demonstrating many of the theory extension features.


\subsection{Extending first-order logic with the natural numbers}
\index{examples!of theories}

The early part of this paper defines a first-order logic, including a
type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.  Let us
introduce the Peano axioms for mathematical induction and the freeness of
$0$ and~$Suc$:
\[ \vcenter{\infer[(induct)*]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
 \qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
\]
\[ \infer[(Suc\_inject)]{m=n}{Suc(m)=Suc(n)} \qquad
   \infer[(Suc\_neq\_0)]{R}{Suc(m)=0}
\]
Mathematical induction asserts that $P(n)$ is true, for any $n::nat$,
provided that $P(0)$ holds and that $P(x)$ implies $P(Suc(x))$ for all~$x$.
Some authors express the induction step as $\forall x. P(x)\imp P(Suc(x))$.
To avoid making induction require the presence of other connectives, we
formalize mathematical induction as
$$ \List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n). \eqno(induct) $$

\noindent
Similarly, to avoid expressing the other rules using~$\forall$, $\imp$
and~$\neg$, we take advantage of the meta-logic;\footnote
{On the other hand, the axioms $Suc(m)=Suc(n) \bimp m=n$
and $\neg(Suc(m)=0)$ are logically equivalent to those given, and work
better with Isabelle's simplifier.} 
$(Suc\_neq\_0)$ is
an elimination rule for $Suc(m)=0$:
$$ Suc(m)=Suc(n) \Imp m=n  \eqno(Suc\_inject) $$
$$ Suc(m)=0      \Imp R    \eqno(Suc\_neq\_0) $$

\noindent
We shall also define a primitive recursion operator, $rec$.  Traditionally,
primitive recursion takes a natural number~$a$ and a 2-place function~$f$,
and obeys the equations
\begin{eqnarray*}
  rec(0,a,f)            & = & a \\
  rec(Suc(m),a,f)       & = & f(m, rec(m,a,f))
\end{eqnarray*}
Addition, defined by $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$,
should satisfy
\begin{eqnarray*}
  0+n      & = & n \\
  Suc(m)+n & = & Suc(m+n)
\end{eqnarray*}
This appears to pose difficulties: first-order logic has no functions.
Following the previous examples, we take advantage of the meta-logic, which
does have functions.  We also generalise primitive recursion to be
polymorphic over any type of class~$term$, and declare the addition
function:
\begin{eqnarray*}
  rec   & :: & [nat, \alpha{::}term, [nat,\alpha]\To\alpha] \To\alpha \\
  +     & :: & [nat,nat]\To nat 
\end{eqnarray*}


\subsection{Declaring the theory to Isabelle}
\index{examples!of theories}
Let us create the theory \ttindexbold{Nat} starting from theory~\verb$FOL$,
which contains only classical logic with no natural numbers.  We declare
the 0-place type constructor $nat$ and the constants $rec$ and~$Suc$:
\begin{ttbox}
Nat = FOL +
types   nat 0
arities nat         :: term
consts  "0"         :: "nat"    ("0")
        Suc         :: "nat=>nat"
        rec         :: "[nat, 'a, [nat,'a]=>'a] => 'a"
        "+"         :: "[nat, nat] => nat"              (infixl 60)
rules   induct      "[| P(0);  !!x. P(x) ==> P(Suc(x)) |]  ==> P(n)"
        Suc_inject  "Suc(m)=Suc(n) ==> m=n"
        Suc_neq_0   "Suc(m)=0      ==> R"
        rec_0       "rec(0,a,f) = a"
        rec_Suc     "rec(Suc(m), a, f) = f(m, rec(m,a,f))"
        add_def     "m+n == rec(m, n, %x y. Suc(y))"
end
\end{ttbox}
In axiom {\tt add_def}, recall that \verb|%| stands for~$\lambda$.
Opening the \ML\ structure {\tt Nat} permits reference to the axioms by \ML\
identifiers; we may write {\tt induct} instead of {\tt Nat.induct}.
\begin{ttbox}
open Nat;
\end{ttbox}
File {\tt FOL/ex/nat.ML} contains proofs involving this theory of the
natural numbers.  As a trivial example, let us derive recursion equations
for \verb$+$.  Here is the zero case:
\begin{ttbox} 
goalw Nat.thy [add_def] "0+n = n";
{\out Level 0}
{\out 0 + n = n}
{\out  1. rec(0,n,%x y. Suc(y)) = n}
\ttbreak
by (resolve_tac [rec_0] 1);
{\out Level 1}
{\out 0 + n = n}
{\out No subgoals!}
val add_0 = result();
\end{ttbox} 
And here is the successor case:
\begin{ttbox} 
goalw Nat.thy [add_def] "Suc(m)+n = Suc(m+n)";
{\out Level 0}
{\out Suc(m) + n = Suc(m + n)}
{\out  1. rec(Suc(m),n,%x y. Suc(y)) = Suc(rec(m,n,%x y. Suc(y)))}
\ttbreak
by (resolve_tac [rec_Suc] 1);
{\out Level 1}
{\out Suc(m) + n = Suc(m + n)}
{\out No subgoals!}
val add_Suc = result();
\end{ttbox} 
The induction rule raises some complications, which are discussed next.
\index{theories!defining|)}


\section{Refinement with explicit instantiation}
\index{refinement!with instantiation|bold}
\index{instantiation!explicit|bold}
In order to employ mathematical induction, we need to refine a subgoal by
the rule~$(induct)$.  The conclusion of this rule is $\Var{P}(\Var{n})$,
which is highly ambiguous in higher-order unification.  It matches every
way that a formula can be regarded as depending on a subterm of type~$nat$.
To get round this problem, we could make the induction rule conclude
$\forall n.\Var{P}(n)$ --- but putting a subgoal into this form requires
refinement by~$(\forall E)$, which is equally hard!

The tactic {\tt res_inst_tac}, like {\tt resolve_tac}, refines a subgoal by
a rule.  But it also accepts explicit instantiations for the rule's
schematic variables.  
\begin{description}
\item[\ttindexbold{res_inst_tac} {\it insts} {\it thm} {\it i}]
instantiates the rule {\it thm} with the instantiations {\it insts}, and
then performs resolution on subgoal~$i$.

\item[\ttindexbold{eres_inst_tac}] 
and \ttindexbold{dres_inst_tac} are similar, but perform elim-resolution
and destruct-resolution, respectively.
\end{description}
The list {\it insts} consists of pairs $[(v@1,e@1), \ldots, (v@n,e@n)]$,
where $v@1$, \ldots, $v@n$ are names of schematic variables in the rule ---
with {\bf no} leading question marks!! --- and $e@1$, \ldots, $e@n$ are
expressions giving their instantiations.  The expressions are type-checked
in the context of a particular subgoal: free variables receive the same
types as they have in the subgoal, and parameters may appear.  Type
variable instantiations may appear in~{\it insts}, but they are seldom
required: {\tt res_inst_tac} instantiates type variables automatically
whenever the type of~$e@i$ is an instance of the type of~$\Var{v@i}$.
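For example, the call
\begin{ttbox}
by (res_inst_tac [("n","k")] induct 1);
\end{ttbox}
used in the next subsection instantiates the variable $\Var{n}$ of the
rule {\tt induct} to the free variable~{\tt k} of the subgoal, and then
refines subgoal~1 by the instantiated rule.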

\subsection{A simple proof by induction}
\index{proof!by induction}\index{examples!of induction}
Let us prove that no natural number~$k$ equals its own successor.  To
use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
instantiation for~$\Var{P}$.
\begin{ttbox} 
goal Nat.thy "~ (Suc(k) = k)";
{\out Level 0}
{\out ~Suc(k) = k}
{\out  1. ~Suc(k) = k}
\ttbreak
by (res_inst_tac [("n","k")] induct 1);
{\out Level 1}
{\out ~Suc(k) = k}
{\out  1. ~Suc(0) = 0}
{\out  2. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
\end{ttbox} 
We should check that Isabelle has correctly applied induction.  Subgoal~1
is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
The rest of the proof demonstrates~\ttindex{notI}, \ttindex{notE} and the
other rules of~\ttindex{Nat.thy}.  The base case holds by~\ttindex{Suc_neq_0}:
\begin{ttbox} 
by (resolve_tac [notI] 1);
{\out Level 2}
{\out ~Suc(k) = k}
{\out  1. Suc(0) = 0 ==> False}
{\out  2. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
\ttbreak
by (eresolve_tac [Suc_neq_0] 1);
{\out Level 3}
{\out ~Suc(k) = k}
{\out  1. !!x. ~Suc(x) = x ==> ~Suc(Suc(x)) = Suc(x)}
\end{ttbox} 
The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
Using the negation rule, we assume $Suc(Suc(x)) = Suc(x)$ and prove $Suc(x)=x$:
\begin{ttbox} 
by (resolve_tac [notI] 1);
{\out Level 4}
{\out ~Suc(k) = k}
{\out  1. !!x. [| ~Suc(x) = x; Suc(Suc(x)) = Suc(x) |] ==> False}
\ttbreak
by (eresolve_tac [notE] 1);
{\out Level 5}
{\out ~Suc(k) = k}
{\out  1. !!x. Suc(Suc(x)) = Suc(x) ==> Suc(x) = x}
\ttbreak
by (eresolve_tac [Suc_inject] 1);
{\out Level 6}
{\out ~Suc(k) = k}
{\out No subgoals!}
\end{ttbox} 


\subsection{An example of ambiguity in {\tt resolve_tac}}
\index{examples!of induction}\index{unification!higher-order}
If you try the example above, you may observe that {\tt res_inst_tac} is
not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
instantiation for~$(induct)$ to yield the desired next state.  With more
complex formulae, our luck fails.  
\begin{ttbox} 
goal Nat.thy "(k+m)+n = k+(m+n)";
{\out Level 0}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + n)}
\ttbreak
by (resolve_tac [induct] 1);
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = 0}
{\out  2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
\end{ttbox} 
This proof requires induction on~$k$.  But the 0 in subgoal~1 indicates
that induction has been applied to the term~$k+(m+n)$.  The
\ttindex{back} command causes backtracking to an alternative
outcome of the tactic.  
\begin{ttbox} 
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + 0}
{\out  2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
\end{ttbox} 
Now induction has been applied to~$m+n$.  Let us call \ttindex{back}
again.
\begin{ttbox} 
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + 0 = k + (m + 0)}
{\out  2. !!x. k + m + x = k + (m + x) ==> k + m + Suc(x) = k + (m + Suc(x))}
\end{ttbox} 
Now induction has been applied to~$n$.  What is the next alternative?
\begin{ttbox} 
back();
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + 0)}
{\out  2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
\end{ttbox} 
Inspecting subgoal~1 reveals that induction has been applied to just the
second occurrence of~$n$.  This perfectly legitimate induction is useless
here.  The main goal admits fourteen different applications of induction.
The number is exponential in the size of the formula.

\subsection{Proving that addition is associative}
\index{associativity of addition}
\index{examples!of rewriting}
Let us do the proof properly, using~\ttindex{res_inst_tac}.  At the same
time, we shall have a glimpse at Isabelle's rewriting tactics, which are
described in the {\em Reference Manual}.

\index{rewriting!object-level} 
Isabelle's rewriting tactics repeatedly apply equations to a subgoal,
simplifying or proving it.  For efficiency, the rewriting rules must be
packaged into a \bfindex{simplification set}.  Let us include the equations
for~{\tt add} proved in the previous section, namely $0+n=n$ and ${\tt
  Suc}(m)+n={\tt Suc}(m+n)$: 
\begin{ttbox} 
val add_ss = FOL_ss addrews [add_0, add_Suc];
\end{ttbox} 
We state the goal for associativity of addition, and
use \ttindex{res_inst_tac} to invoke induction on~$k$:
\begin{ttbox} 
goal Nat.thy "(k+m)+n = k+(m+n)";
{\out Level 0}
{\out k + m + n = k + (m + n)}
{\out  1. k + m + n = k + (m + n)}
\ttbreak
by (res_inst_tac [("n","k")] induct 1);
{\out Level 1}
{\out k + m + n = k + (m + n)}
{\out  1. 0 + m + n = 0 + (m + n)}
{\out  2. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox} 
The base case holds easily; both sides reduce to $m+n$.  The
tactic~\ttindex{simp_tac} rewrites with respect to the given simplification
set, applying the rewrite rules for~{\tt +}:
\begin{ttbox} 
by (simp_tac add_ss 1);
{\out Level 2}
{\out k + m + n = k + (m + n)}
{\out  1. !!x. x + m + n = x + (m + n) ==> Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox} 
The inductive step requires rewriting by the equations for~{\tt add}
together with the induction hypothesis, which is also an equation.  The
tactic~\ttindex{asm_simp_tac} rewrites using a simplification set and any
useful assumptions:
\begin{ttbox} 
by (asm_simp_tac add_ss 1);
{\out Level 3}
{\out k + m + n = k + (m + n)}
{\out No subgoals!}
\end{ttbox} 
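Incidentally, the whole proof can be packaged as a single compound tactic.
The following sketch (not run here) starts the proof afresh and assumes
the sequencing tactical {\tt THEN}, which is described in the {\em
Reference Manual}:
\begin{ttbox} 
goal Nat.thy "(k+m)+n = k+(m+n)";
by (res_inst_tac [("n","k")] induct 1
    THEN simp_tac add_ss 1
    THEN asm_simp_tac add_ss 1);
\end{ttbox} 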


\section{A {\sc Prolog} interpreter}
\index{Prolog interpreter|bold}
To demonstrate the power of tacticals, let us construct a {\sc Prolog}
interpreter and execute programs involving lists.\footnote{To run these
examples, see the file {\tt FOL/ex/prolog.ML}.} The {\sc Prolog} program
consists of a theory.  We declare a type constructor for lists, with an
arity declaration to say that $(\tau)list$ is of class~$term$
provided~$\tau$ is:
\begin{eqnarray*}
  list  & :: & (term)term
\end{eqnarray*}
We declare four constants: the empty list~$Nil$; the infix list
constructor~{:}; the list concatenation predicate~$app$; the list reverse
predicate~$rev$.  (In {\sc Prolog}, functions on lists are expressed as
predicates.)
\begin{eqnarray*}
    Nil         & :: & \alpha list \\
    {:}         & :: & [\alpha,\alpha list] \To \alpha list \\
    app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
    rev & :: & [\alpha list,\alpha list] \To o 
\end{eqnarray*}
The predicate $app$ should satisfy the {\sc Prolog}-style rules
\[ {app(Nil,ys,ys)} \qquad
   {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
We define the naive version of $rev$, which calls~$app$:
\[ {rev(Nil,Nil)} \qquad
   {rev(xs,ys)\quad  app(ys, x:Nil, zs) \over
    rev(x:xs, zs)} 
\]

\index{examples!of theories}
Theory \ttindex{Prolog} extends first-order logic in order to make use
of the class~$term$ and the type~$o$.  The interpreter does not use the
rules of~\ttindex{FOL}.
\begin{ttbox}
Prolog = FOL +
types   list 1
arities list    :: (term)term
consts  Nil     :: "'a list"
        ":"     :: "['a, 'a list]=> 'a list"            (infixr 60)
        app     :: "['a list, 'a list, 'a list] => o"
        rev     :: "['a list, 'a list] => o"
rules   appNil  "app(Nil,ys,ys)"
        appCons "app(xs,ys,zs) ==> app(x:xs, ys, x:zs)"
        revNil  "rev(Nil,Nil)"
        revCons "[| rev(xs,ys); app(ys,x:Nil,zs) |] ==> rev(x:xs,zs)"
end
\end{ttbox}
\subsection{Simple executions}
Repeated application of the rules solves {\sc Prolog} goals.  Let us
append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
answer builds up in~{\tt ?x}.
\begin{ttbox}
goal Prolog.thy "app(a:b:c:Nil, d:e:Nil, ?x)";
{\out Level 0}
{\out app(a : b : c : Nil, d : e : Nil, ?x)}
{\out  1. app(a : b : c : Nil, d : e : Nil, ?x)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 1}
{\out app(a : b : c : Nil, d : e : Nil, a : ?zs1)}
{\out  1. app(b : c : Nil, d : e : Nil, ?zs1)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 2}
{\out app(a : b : c : Nil, d : e : Nil, a : b : ?zs2)}
{\out  1. app(c : Nil, d : e : Nil, ?zs2)}
\end{ttbox}
At this point, the first two elements of the result are~$a$ and~$b$.
\begin{ttbox}
by (resolve_tac [appNil,appCons] 1);
{\out Level 3}
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : ?zs3)}
{\out  1. app(Nil, d : e : Nil, ?zs3)}
\ttbreak
by (resolve_tac [appNil,appCons] 1);
{\out Level 4}
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : d : e : Nil)}
{\out No subgoals!}
\end{ttbox}

{\sc Prolog} can run functions backwards.  Which list can be appended
with $[c,d]$ to produce $[a,b,c,d]$?
Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
\begin{ttbox}
goal Prolog.thy "app(?x, c:d:Nil, a:b:c:d:Nil)";
{\out Level 0}
{\out app(?x, c : d : Nil, a : b : c : d : Nil)}
{\out  1. app(?x, c : d : Nil, a : b : c : d : Nil)}
\ttbreak
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out Level 1}
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}


\subsection{Backtracking}
\index{backtracking}
Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?
Using \ttindex{REPEAT} to apply the rules, we quickly find the solution
$x=[]$ and $y=[a,b,c,d]$:
\begin{ttbox}
goal Prolog.thy "app(?x, ?y, a:b:c:d:Nil)";
{\out Level 0}
{\out app(?x, ?y, a : b : c : d : Nil)}
{\out  1. app(?x, ?y, a : b : c : d : Nil)}
\ttbreak
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out Level 1}
{\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}
The \ttindex{back} command returns the tactic's next outcome,
$x=[a]$ and $y=[b,c,d]$:
\begin{ttbox}
back();
{\out Level 1}
{\out app(a : Nil, b : c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}
The other solutions are generated similarly.
\begin{ttbox}
back();
{\out Level 1}
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\ttbreak
back();
{\out Level 1}
{\out app(a : b : c : Nil, d : Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\ttbreak
back();
{\out Level 1}
{\out app(a : b : c : d : Nil, Nil, a : b : c : d : Nil)}
{\out No subgoals!}
\end{ttbox}


\subsection{Depth-first search}
\index{search!depth-first}
Now let us try $rev$, reversing a list.
Bundle the rules together as the \ML{} identifier {\tt rules}.  Naive
reverse requires 120 inferences for this 14-element list, but the tactic
terminates in a few seconds.
\begin{ttbox}
goal Prolog.thy "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
{\out Level 0}
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
{\out  1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
val rules = [appNil,appCons,revNil,revCons];
\ttbreak
by (REPEAT (resolve_tac rules 1));
{\out Level 1}
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
{\out     n : m : l : k : j : i : h : g : f : e : d : c : b : a : Nil)}
{\out No subgoals!}
\end{ttbox}
We may execute $rev$ backwards.  This, too, should reverse a list.  What
is the reverse of $[a,b,c]$?
\begin{ttbox}
goal Prolog.thy "rev(?x, a:b:c:Nil)";
{\out Level 0}
{\out rev(?x, a : b : c : Nil)}
{\out  1. rev(?x, a : b : c : Nil)}
\ttbreak
by (REPEAT (resolve_tac rules 1));
{\out Level 1}
{\out rev(?x1 : Nil, a : b : c : Nil)}
{\out  1. app(Nil, ?x1 : Nil, a : b : c : Nil)}
\end{ttbox}
The tactic has failed to find a solution!  It reached a dead end at
subgoal~1: there is no~$\Var{x1}$ such that [] appended with~$[\Var{x1}]$
equals~$[a,b,c]$.  Backtracking explores other outcomes.
\begin{ttbox}
back();
{\out Level 1}
{\out rev(?x1 : a : Nil, a : b : c : Nil)}
{\out  1. app(Nil, ?x1 : Nil, b : c : Nil)}
\end{ttbox}
This too is a dead end, but the next outcome is successful.
\begin{ttbox}
back();
{\out Level 1}
{\out rev(c : b : a : Nil, a : b : c : Nil)}
{\out No subgoals!}
\end{ttbox}
\ttindex{REPEAT} stops when it cannot continue, regardless of which state
is reached.  The tactical \ttindex{DEPTH_FIRST} searches for a satisfactory
state, as specified by an \ML{} predicate.  Below,
\ttindex{has_fewer_prems} specifies that the proof state should have no
subgoals.  
\begin{ttbox}
val prolog_tac = DEPTH_FIRST (has_fewer_prems 1) 
                             (resolve_tac rules 1);
\end{ttbox}
Since {\sc Prolog} uses depth-first search, this tactic is a (slow!) {\sc
Prolog} interpreter.  We return to the start of the proof (using
\ttindex{choplev}), and apply {\tt prolog_tac}:
\begin{ttbox}
choplev 0;
{\out Level 0}
{\out rev(?x, a : b : c : Nil)}
{\out  1. rev(?x, a : b : c : Nil)}
\ttbreak
by (DEPTH_FIRST (has_fewer_prems 1) (resolve_tac rules 1));
{\out Level 1}
{\out rev(c : b : a : Nil, a : b : c : Nil)}
{\out No subgoals!}
\end{ttbox}
Let us try {\tt prolog_tac} on one more example, containing four unknowns:
\begin{ttbox}
goal Prolog.thy "rev(a:?x:c:?y:Nil, d:?z:b:?u)";
{\out Level 0}
{\out rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
{\out  1. rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
\ttbreak
by prolog_tac;
{\out Level 1}
{\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
{\out No subgoals!}
\end{ttbox}
Although Isabelle is much slower than a {\sc Prolog} system, Isabelle
tactics can exploit logic programming techniques.  